title string | paper_decision string | review_1 string | rebuttals_1 string | review_2 string | rebuttals_2 string | review_3 string | rebuttals_3 string | review_4 string | rebuttals_4 string | global_rebuttals string | dataset_source string | conference_year int64 | review_5 string | rebuttals_5 string | review_6 string | rebuttals_6 string | review_7 string | rebuttals_7 string | review_8 string | rebuttals_8 string |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
DECRL: A Deep Evolutionary Clustering Jointed Temporal Knowledge Graph Representation Learning Approach | Accept (poster) | Summary: This paper proposes an interesting model to integrate high-order clusters into TKG representation learning. A cluster-aware unsupervised alignment mechanism is introduced to ensure the alignment of soft overlapping clusters across timestamps. An implicit correlation encoder is also proposed to capture latent correlations between clusters. Experimental results show the effectiveness of the proposed model.
Strengths: 1. Integrating high-order structure in TKG representation learning is interesting.
2. The proposed model seems technically reasonable.
Weaknesses: 1. Entity graph and cluster graph are two important concepts for understanding the proposed method. However, they are never mentioned in the introduction and their definitions in Section 3 are very brief, making the intuition behind the proposed method unclear to me.
2. The authors use a relatively simple task, future relation prediction, to evaluate the quality of the learned TKG representations. However, future entity prediction (i.e., [s, r, ?, t]) is more challenging due to the large size of the entity set and the evolution of entity semantics. Intuitively, modeling high-order correlations among entities can also be beneficial for entity prediction. It would be better to also evaluate the performance of the learned representations on this task.
3. The authors cluster entities only based on their representations. However, in knowledge graphs, entities are connected via various relations and different relations may indicate different correlations. For example, the relation "leave from" means an athlete will not interact with a club, but "transfer to" means they will have more interactions. How can the proposed method handle such correlations brought by relation semantics?
4. Only two benchmark datasets are used for evaluation, making the experimental results unconvincing. ICEWS14C and ICEWS18C datasets are actually subsets of ICEWS14 and ICEWS18, and they are both derived from the same resource (i.e., ICEWS). More TKGs especially from different resources such as GDELT, YAGO, and Wikidata should also be used for evaluation.
Technical Quality: 3
Clarity: 2
Questions for Authors: See weakness
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer’s meaningful comments and insightful questions.
>Q1: Lack of clarity in introducing and defining entity graph and cluster graph concepts
Thank you for raising this concern. We acknowledge that these crucial concepts were not sufficiently introduced or defined, potentially obscuring the intuition behind our approach. **Therefore, in the camera-ready version, we will incorporate a concise yet informative discussion of entity graphs and cluster graphs in the introduction**, providing readers with an early understanding of these key concepts and their role in our approach. **In Section 3 (Preliminaries), we will expand our definitions of entity graphs and cluster graphs, including more detailed explanations of their structures, properties, and significance in our approach.**
>Q2: Lack of evaluation on future entity prediction task
In response to this valuable feedback, **we have conducted additional experiments to evaluate our model’s performance on the future entity prediction task, as shown in Table 2 of the rebuttal PDF (attached in the global rebuttal at the beginning).** Although our approach does not achieve SOTA performance in terms of MRR and Hits@1, it achieves the best results for Hits@3 and Hits@10. These results demonstrate the effectiveness and robustness of our approach, particularly in capturing a broader range of relevant entities.
>Q3: Handling of different relation semantics in entity clustering
Firstly, it is important to note that TKG datasets do not provide explicit semantic descriptions of relations like “leave from” or “transfer to”. The training process typically uses only entity and relation IDs. However, we do model different relation types using a Relation-Aware Graph Convolutional Network, capturing distinct characteristics of various relation types even without explicit semantic descriptions.
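To make the ID-only setting above concrete, the following is a minimal sketch of relation-aware message passing in the R-GCN style. All sizes, weights, and the toy triples are hypothetical, not the paper's actual architecture; the point is only that each message is transformed by a weight matrix selected by the relation's ID, so two relations with opposite semantics propagate different messages even without textual descriptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: 4 entities, 2 relation types, dim-8 embeddings.
num_entities, num_relations, dim = 4, 2, 8
H = rng.normal(size=(num_entities, dim))                       # entity embeddings
W_rel = rng.normal(size=(num_relations, dim, dim), scale=0.5)  # one transform per relation ID
W_self = rng.normal(size=(dim, dim), scale=0.5)                # self-loop transform

# Facts as (subject, relation_id, object) triples -- IDs only, no text semantics.
triples = [(0, 0, 1), (1, 1, 2), (2, 0, 3), (3, 1, 0)]

def relation_aware_layer(H, triples):
    """One relation-aware GCN layer: each incoming message is transformed
    by the weight matrix of its relation ID before aggregation."""
    out = H @ W_self                      # self-loop contribution
    agg = np.zeros_like(H)
    deg = np.zeros(num_entities)
    for s, r, o in triples:
        agg[o] += H[s] @ W_rel[r]         # relation-specific message s -> o
        deg[o] += 1
    deg[deg == 0] = 1                     # guard isolated nodes against div-by-zero
    return np.tanh(out + agg / deg[:, None])

H_next = relation_aware_layer(H, triples)  # updated entity embeddings, shape (4, 8)
```

Swapping a triple's relation ID changes the message it carries, which is how relation types like "leave from" versus "transfer to" can be distinguished from structure alone.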
**Moreover, our DECRL approach is designed to capture the temporal evolution of high-order correlations, which indirectly addresses the issue of different relation semantics.** For example, if entities consistently interact over a continuous period, they have a higher probability of being clustered together at each timestamp. By capturing the temporal evolution of high-order correlations, our approach reinforces the closeness of their relationship over time. Conversely, if entities do not consistently interact over time, they have a lower probability of being clustered together. The temporal evolution component of our approach allows for the gradual distancing of these entities in the representation space. Therefore, DECRL can effectively handle scenarios where different relations may indicate varying levels of future interaction, without relying on explicit semantic information.
**To illustrate this capability, we would like to draw attention to the comparison between Figure 2d (Final DECRL) and Figure 2f (Final DECRL-w/o-fusion, which only models high-order correlations without capturing their temporal evolution) in the manuscript.** This comparison clearly illustrates that capturing the temporal evolution of high-order correlations leads to superior entity representations, as evidenced by the larger inter-cluster distances and tighter intra-cluster entity groupings. Furthermore, by comparing the first and third columns of Figure 2 in the manuscript, we can observe the progression of training. This comparison demonstrates that capturing the temporal evolution of high-order correlations gradually increases the separation between clusters while simultaneously tightening the grouping of entities within clusters.
>Q4: Omission of Wikidata, YAGO and GDELT datasets in the experiments.
Firstly, there is a fundamental difference in timestamp types between ICEWS and the Wikidata and YAGO datasets. ICEWS uses single-point timestamps, while Wikidata and YAGO use time intervals for events. Our research focuses on modeling the temporal evolution of high-order correlations between entities. Datasets with single-point timestamps, e.g., ICEWS, show more frequent temporal changes and higher temporal complexity, making them more suitable for capturing the temporal evolution of high-order correlations. This is why we initially focused on the ICEWS dataset. Moreover, the SOTA models have already achieved very high performance on Wikidata and YAGO datasets, with MRR scores exceeding 99% for relation prediction. Given the limited room for improvement, we initially excluded these datasets from our preliminary manuscript.
In addition, we initially excluded the GDELT dataset due to its known issues with false positives [1] and a high proportion of abstract conceptual entities (e.g., POLICE and GOVERNMENT) [2], since we cannot predict a government’s activities without knowing which country it belongs to.
However, we acknowledge that including a wider range of datasets would provide a more comprehensive evaluation of our approach’s performance and robustness. **In light of your valuable feedback, we have conducted additional experiments on the GDELT dataset, as well as on the WIKI and YAGO datasets.** The results of these experiments are presented in Tables 1 and 3 of the rebuttal PDF (attached in the global rebuttal at the beginning). We are pleased to report that our approach has achieved the SOTA relation prediction performance across all these datasets, demonstrating the effectiveness and robustness of our approach.
[1] Ward, M. D., Beger, A., Cutler, J., Dickenson, M., Dorff, C., & Radford, B. Comparing GDELT and ICEWS event data. *Analysis*, 21(1): 267-297, 2013.
[2] Li, Z., Jin, X., Li, W., Guan, S., Guo, J., Shen, H. et al. Temporal knowledge graph reasoning based on evolutional representation learning. In *Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval*, pages 408-417, 2021.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for their efforts in addressing my concerns. My main concerns have been addressed and I trust that the authors can address others in the final version of the paper.
---
Reply to Comment 1.1.1:
Comment: We would like to thank Reviewer fmj8 for providing a valuable and constructive review, which has inspired us to improve our paper substantially. We will diligently update our manuscript as suggested.
Thanks again for your response and raising the score! | Summary: The paper addresses Temporal Knowledge Graph (TKG) representation learning, which aims to embed temporally evolving entities and relations into a continuous low-dimensional vector space. Existing methods struggle to capture the temporal evolution of high-order correlations in TKGs. The authors propose a novel approach called Deep Evolutionary Clustering jointed temporal knowledge graph Representation Learning (DECRL). DECRL is the first to integrate deep evolutionary clustering with TKG representation learning to capture the temporal evolution of high-order correlations.
Strengths: 1 The author clearly describes the motivation for the paper and the methods used.
2 The experimental results demonstrate that DECRL achieves state-of-the-art (SOTA) performance.
Weaknesses: 1 Event prediction models are often tested on the ICEWS05-15 and GDELT datasets. The authors do not give experimental results on these two datasets.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1 When clustering nodes using fuzzy clustering, have the authors considered incorporating domain knowledge for node classification? Additionally, have they compared their approach with other node classification methods?
2 Some nodes may not have clear relationships with other nodes. Will clustering them together affect the behavior prediction of these nodes?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Please refer to the weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer’s meaningful comments and insightful questions.
>Q1: Absence of experimental results on ICEWS05-15 and GDELT datasets
The ICEWS05-15 dataset shares the same source as ICEWS and has a similar scale to ICEWS18, which we included in our initial experiments, so we did not test our approach on ICEWS05-15. In addition, we initially excluded the GDELT dataset due to its known issues with false positives [1] and a high proportion of abstract conceptual entities (e.g., POLICE and GOVERNMENT) [2], as we cannot predict a government’s activities without knowing which country it belongs to.
However, we acknowledge that including a wider range of datasets would provide a more comprehensive evaluation of our approach’s performance and robustness. **In light of your valuable feedback, we have conducted additional experiments on the GDELT dataset, as well as on the WIKI and YAGO datasets.** The results of these experiments are presented in Tables 1 and 3 of the rebuttal PDF (attached in the global rebuttal at the beginning). We are pleased to report that our approach has achieved the SOTA relation prediction performance across all these datasets, demonstrating the effectiveness and robustness of our approach.
[1] Ward, M. D., Beger, A., Cutler, J., Dickenson, M., Dorff, C., & Radford, B. Comparing GDELT and ICEWS event data. *Analysis*, 21(1): 267-297, 2013.
[2] Li, Z., Jin, X., Li, W., Guan, S., Guo, J., Shen, H., et al. Temporal knowledge graph reasoning based on evolutional representation learning. In *Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval*, pages 408-417, 2021.
>Q2: Consideration of domain knowledge in node classification and comparison with other node classification methods
We did not incorporate domain knowledge into our fuzzy clustering process for two primary reasons: First, the publicly available datasets we used do not provide corresponding domain knowledge information. Second, since additional domain knowledge is typically not employed in other temporal knowledge graph research, doing so could lead to unfair comparisons with existing methods.
**In temporal knowledge graphs, there are no explicit ground truth entity categories.** Given this lack of predefined classes, we chose an unsupervised clustering algorithm to model high-order correlations among entities, allowing us to discover latent structures without relying on pre-existing labels.
However, we acknowledge the importance of comparing our approach with alternative clustering techniques. To address this, **we have conducted additional experiments using different clustering algorithms as variants of our approach.** The results of these experiments are shown in Table 4 of the attached rebuttal PDF, providing a more comprehensive evaluation of our approach’s effectiveness compared to other potential clustering strategies.
>Q3: Potential impact of clustering nodes with unclear relationships on behavior prediction
Firstly, in our temporal knowledge graph datasets, nodes cannot exist in complete isolation as the data is structured around events, ensuring each node participates in at least one event and thus has a relationship with at least one other node.
We acknowledge that some nodes may have fewer interactions than others. Our approach addresses this variation effectively through a fuzzy clustering algorithm, which allows nodes to belong to multiple clusters with varying degrees of membership. **The fuzzy smoothing hyperparameter controls node membership distribution across clusters, preventing scenarios where clusters contain very few nodes.** This method enables effective cluster construction even for less frequently interacting nodes, allowing them to have partial memberships in multiple clusters and reflecting their potentially ambiguous relationships.
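The soft-membership behavior described above can be sketched with a fuzzy c-means style membership update. This is an illustration, not the paper's exact algorithm; here the fuzziness exponent `m` (name assumed) plays the role of the smoothing hyperparameter: values near 1 give near-hard assignments, larger values spread each node's membership over more clusters.

```python
import numpy as np

def fuzzy_memberships(X, centers, m=2.0):
    """Soft cluster memberships in the style of fuzzy c-means.
    m close to 1 -> near-hard assignment; larger m -> each node keeps
    partial membership in several clusters."""
    # squared distance from every node to every cluster center
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    d2 = np.maximum(d2, 1e-12)                   # guard against zero distance
    inv = d2 ** (-1.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)  # each row sums to 1

# Two clear clusters plus one ambiguous node sitting between them.
X = np.array([[0.0, 0.0], [0.1, 0.1], [5.0, 5.0], [2.5, 2.4]])
centers = np.array([[0.0, 0.0], [5.0, 5.0]])

U_sharp = fuzzy_memberships(X, centers, m=1.1)  # near-hard assignment
U_soft = fuzzy_memberships(X, centers, m=3.0)   # ambiguous node splits its membership
```

With the larger exponent, the in-between node retains substantial membership in both clusters, which is how less frequently interacting nodes avoid being forced into a single tiny cluster.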
To further address this concern, in our camera-ready version, we will group nodes based on their interaction frequency in the training set and analyze relation prediction performance across these different node groups. | Summary: This paper studies the temporal knowledge graph representation learning task and proposes a temporal evolution-aware framework DECRL.
By assigning different entities to distinct clusters at each timestamp and modeling the evolution and shifts of these clusters, cluster-aware information is explicitly incorporated into both entity and relation embeddings. This enables better temporal intelligence for making precise predictions.
Extensive experiments demonstrate promising results on several benchmarks.
Strengths: * This paper is well-written and easy to follow.
* Technical details are well presented and, to some extent, clearly explained.
* The experiments are extensive, covering various benchmarks, comparing diverse baselines, and demonstrating effectiveness through both qualitative and quantitative analyses.
Weaknesses: * The motivation for introducing a cluster graph at each timestamp to capture temporal shift information at the clustering scale remains unclear. The authors should further elaborate on this overall motivation. Additionally, the authors mentioned that “some researchers have leveraged derived structures, e.g., communities, entity groups, and hypergraphs, to model high-order correlations among …” (Lines 29-30). In my understanding, the modeling of communities and groups is similar to this paper’s clustering, and hypergraphs can be regarded as another approach for group modeling since a hyperedge connects different nodes. The differences in motivation and technique between DECRL and these methods should be briefly discussed.
* The methodology section of DECRL incorporates many minor techniques without corresponding ablation studies to demonstrate their effectiveness or detailed explanations of the rationale. For example, the effectiveness of temporal attentive pooling (Section 4.5) lacks ablation study evidence. Besides, detailed operations for each variant in the ablation study should be provided.
* For Figure 2 in the case study, additional textual explanations are needed, such as clarifying what the red dots represent. Additionally, the authors should consider including baseline methods in the figure, rather than only showing DECRL and its variants.
Technical Quality: 2
Clarity: 3
Questions for Authors: Please see weaknesses
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer’s meaningful comments and insightful questions.
>Q1: Unclear motivation for introducing a cluster graph at each timestamp to capture temporal shift information at the clustering scale
We consider the complex dynamics of international alliances and conflicts, which exhibit high-order correlations that evolve over time. **Countries rarely interact in isolation.** For instance, the relationship between the USA and Russia affects not only these two countries but also influences their respective allies and trade partners. We use clustering to capture these complex high-order correlations.
**These high-order correlations evolve smoothly over time.** The Cold War, for example, did not end abruptly but gradually thawed through a series of events. When constructing our cluster graph, we consider how past high-order correlations (previous clusters) influence current ones, modeling this smooth temporal evolution.
**Furthermore, within a single timestamp, different clusters of entities can affect each other.** For example, tensions between NATO countries and Russia might influence relations between OPEC members and Western nations. We model these intra-timestamp influences through an implicit correlation encoder in our proposed approach.
By capturing these aspects, our approach can represent the nuanced, temporal evolving nature of high-order correlations more accurately.
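The smooth-evolution intuition above (clusters drifting rather than jumping between timestamps) can be sketched as a temporal smoothing of cluster centers in the evolutionary-clustering tradition. This is illustrative only; `alpha` is an assumed trade-off weight, not a parameter from the paper.

```python
import numpy as np

def evolve_centers(prev_centers, snapshot_centers, alpha=0.7):
    """Evolutionary-clustering step (sketch): the clusters at timestamp t
    blend a fresh clustering of the current snapshot with the previous
    timestamp's clusters, so cluster structure changes gradually --
    the 'gradual thaw' intuition rather than an abrupt break."""
    return alpha * snapshot_centers + (1.0 - alpha) * prev_centers

prev = np.array([[0.0, 0.0], [4.0, 4.0]])  # cluster centers at t-1
snap = np.array([[1.0, 0.0], [4.0, 6.0]])  # centers fitted on the t snapshot alone

smoothed = evolve_centers(prev, snap, alpha=0.7)
# Each smoothed center lies between its t-1 position and the snapshot fit.
```

A larger `alpha` trusts the current snapshot more; a smaller one carries more history forward, which is the knob that trades off snapshot fit against temporal continuity.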
>Q2: Lack of discussion on the differences in motivation and technique between DECRL and other methods using derived structures
While entity groups, hypergraphs, and clustering can all model high-order correlations, our approach offers unique advantages:
Firstly, methods using entity groups and hypergraphs require learning entity assignment mappers at each timestamp, a process that is complex to update and maintain, resulting in significant computational overhead. **Our clustering-based approach, however, adapts more flexibly to dynamic data changes and is lightweight, facilitating easier integration with other techniques.**
Secondly, we use a fuzzy clustering algorithm with a fuzzy smoothing hyperparameter that controls node membership distribution, preventing clusters with very few nodes. **This approach allows for effective cluster construction even for nodes with limited interactions.** Using entity graphs or hypergraphs to achieve a similar advantage would be much more computationally expensive and resource-intensive.
**In addition, our experiments demonstrate the effectiveness of our approach compared to methods using entity groups and hypergraphs.** For example, the DECRL-w/o-fusion variant, which uses only clustering for representation learning, achieves MRR, Hits@1, Hits@3, and Hits@10 scores of 57.98, 41.90, 66.97, and 92.00, respectively. These results outperform the hypergraph-based method, i.e., DHyper (56.15, 43.76, 65.46, and 85.89 on the same metrics), on all metrics except Hits@1. This superior performance can be partly attributed to fuzzy clustering’s ability to prevent the formation of extremely small clusters.
>Q3: Lack of ablation studies for minor techniques and insufficient explanation of variant operations in existing ablation studies
Thank you for raising this concern. We recognize the oversight in not including an ablation study for the temporal attentive pooling component. To address this, **we have conducted an additional experiment to demonstrate its effectiveness, as shown in Table 4 of the rebuttal PDF (attached in the global rebuttal at the beginning).** These results clearly illustrate the impact of temporal attentive pooling on our approach’s performance.
In addition, in light of your valuable feedback, we acknowledge that our initial description of the ablation study variants may have been insufficient. To address this, **we will prepare more detailed explanations for each variant in our camera-ready version.**
>Q4: Need for additional explanations in Figure 2 and inclusion of baseline methods in the case study
**We will enhance the explanations of Figure 2 in the manuscript**, clarifying that red dots represent individual entities in the temporal knowledge graph, with their groupings indicating entity clusters.
**Furthermore, we would like to highlight key observations from Figure 2.** The comparison between Figure 2d (Final DECRL) and Figure 2f (Final DECRL-w/o-fusion, which only models high-order correlations without capturing their temporal evolution) in the manuscript clearly illustrates that capturing the temporal evolution of high-order correlations leads to superior entity representations, as evidenced by the larger inter-cluster distances and tighter intra-cluster entity groupings. In addition, by comparing the first and third columns of Figure 2 in the manuscript, we can observe the progression of training. This comparison demonstrates that capturing the temporal evolution of high-order correlations gradually increases the separation between clusters while simultaneously tightening the grouping of entities within clusters. This observation reveals that the capability of DECRL to model the temporal evolution of high-order correlation significantly enhances its ability to capture more nuanced cluster representations.
We also acknowledge the importance of comparing our approach with baselines in the case study. **We have conducted additional visualizations for DHyper, the second-best baseline model, in Figure 2 of the attached rebuttal PDF.**
In light of your valuable feedback, we will refine all the content above and incorporate it into the camera-ready version.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for their rebuttal in response to the weaknesses I pointed out in the review, including the motivation, extra ablation study, etc.
The authors sufficiently address these weaknesses in their rebuttal, and I trust that authors are able to address them in the final version of the paper.
---
Reply to Comment 1.1.1:
Comment: We would like to thank Reviewer S3KV for providing a detailed and valuable review, which has greatly assisted us in the paper revision. We will address all the weaknesses you pointed out and incorporate them in the final version of the paper.
We are profoundly grateful for the generous score increases from reviewers C7a5 and fmj8. In light of this, we humbly and respectfully ask if you might consider increasing your score. Any potential increase in your score would be received with the utmost gratitude and appreciation. We fully understand the time and effort involved in the review process and are sincerely thankful for your valuable suggestions. | Summary: The paper proposed a deep evolutionary clustering method for TKGE to capture the temporal evolution of high-order correlation in TKGs. A cluster-aware unsupervised alignment mechanism is introduced to ensure the precise one-to-one alignment of soft overlapping clusters
across timestamps. Extensive experiments on four real-world datasets demonstrate the remarkable improvement of the proposed method compared to other baselines.
Strengths: 1. The experimental results are remarkable for the improvement of the ICEWS datasets.
2. The paper is well-organized and clearly presented.
Weaknesses: 1. The paper proposed to capture the temporal evolution of high-order correlation, while no intuitional case study is provided.
2. Though extensive experiments are conducted, two other main TKGE datasets are omitted, namely Wikidata and YAGO.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. Will the proposed method still outperform other baselines by large margins on Wikidata and YAGO datasets? The type of time information in these two datasets is different from ICEWS datasets.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The proposed method mainly focuses on improving the accuracy of link prediction for TKGs, while other aspects such as efficiency and transparency are not discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer’s meaningful comments and insightful questions.
>Q1: Lack of intuitive case study demonstrating temporal evolution of high-order correlations
**We would like to draw attention to the comparison between Figure 2d (Final DECRL) and Figure 2f (Final DECRL-w/o-fusion, which only models high-order correlations without capturing their temporal evolution) in the manuscript.** This comparison clearly illustrates that capturing the temporal evolution of high-order correlations leads to superior entity representations, as evidenced by the larger inter-cluster distances and tighter intra-cluster entity groupings. Furthermore, by comparing the first and third columns of Figure 2 in the manuscript, we can observe the progression of training. This comparison demonstrates that capturing the temporal evolution of high-order correlations gradually increases the separation between clusters while simultaneously tightening the grouping of entities within clusters. This observation reveals that the capability of DECRL to model the temporal evolution of high-order correlation significantly enhances its ability to capture more nuanced cluster representations.
**In addition, we also incorporate the case study results from DHyper to further substantiate the effectiveness of our approach, as shown in Figure 2 of the rebuttal PDF (attached in the global rebuttal at the beginning).**
>Q2: Omission of Wikidata and YAGO datasets in the experiments
Firstly, there is a fundamental difference in timestamp types between ICEWS and the Wikidata and YAGO datasets. ICEWS uses single-point timestamps, while Wikidata and YAGO use time intervals for events. Our research focuses on modeling the temporal evolution of high-order correlations between entities. Datasets with single-point timestamps, e.g., ICEWS, show more frequent temporal changes and higher temporal complexity, making them more suitable for capturing the temporal evolution of high-order correlations. This is why we initially focused on the ICEWS dataset.
Moreover, the SOTA models have already achieved very high performance on Wikidata and YAGO datasets, with MRR scores exceeding 99% for relation prediction. Given the limited room for improvement, we initially excluded these datasets.
However, we acknowledge the importance of comprehensive evaluation. **In light of your valuable feedback, we have conducted additional experiments on the Wikidata, YAGO, and GDELT datasets, as shown in Tables 1 and 3 of the attached rebuttal PDF.** We are pleased to report that our approach has achieved the SOTA relation prediction performance across all these datasets, demonstrating the effectiveness and robustness of our approach.
>Q3: Lack of discussion on efficiency and transparency aspects of the proposed method
**We have indeed considered the efficiency of our approach and have calculated its time complexity, which is detailed in Appendix A.1 of our manuscript.** Our approach employs evolutionary clustering to capture the temporal evolution of high-order correlations, requiring fewer parameters and lower memory resources compared to approaches using learnable structures like hypergraphs. This design choice significantly contributes to the overall efficiency of our approach.
**To further demonstrate the efficiency of our approach, we have conducted additional experiments comparing the training time of our approach with DHyper.** The results of these experiments are presented in Figure 1 of the attached rebuttal PDF, clearly illustrating the computational advantages of our approach.
Regarding transparency, we acknowledge that this aspect is not thoroughly addressed. We appreciate the reviewer bringing this to our attention. **We will include a comprehensive discussion on the transparency of our approach in the limitations section of the camera-ready version.** This addition will highlight areas for future improvement. | Rebuttal 1:
Rebuttal: Summary of Revision:
We sincerely thank all the reviewers for their insightful reviews and valuable comments, which are instructive for us to improve our paper further.
The reviewers generally held positive opinions of our paper, in that the proposed approach is **“technically reasonable”, “well presented”**, and **“clearly explained”**; the paper **“clearly describes the motivation for the paper and the methods used”**, is **“well-organized”, “clearly presented”, “well-written”**, and **“easy to follow”**; we **“demonstrate effectiveness through both qualitative and quantitative analyses”**, the experimental results are **“remarkable”**; and the proposed approach **“achieves state-of-the-art (SOTA) performance”**.
The reviewers also raised insightful and constructive concerns. We made every effort to address all the concerns by clarifying DECRL’s distinctions from related methods and the ability to address diverse relation semantics. We supplement with new experiments on Wikidata, YAGO, and GDELT datasets, along with additional ablation and case studies.
>Q1: DECRL’s distinctions from related methods
While entity groups, hypergraphs, and clustering can all model high-order correlations, our approach offers unique advantages:
Firstly, methods using entity groups and hypergraphs require learning entity assignment mappers at each timestamp, a process that is complex to update and maintain, resulting in significant computational overhead. **Our clustering-based approach, however, adapts more flexibly to dynamic data changes and is lightweight, facilitating easier integration with other techniques.**
Secondly, we use a fuzzy clustering algorithm with a fuzzy smoothing hyperparameter that controls node membership distribution, preventing clusters with very few nodes. **This approach allows for effective cluster construction even for nodes with limited interactions.** Using entity graphs or hypergraphs to achieve a similar advantage would be much more computationally expensive and resource-intensive.
**In addition, our experiments demonstrate the effectiveness of our approach compared to methods using entity groups and hypergraphs.** For example, the DECRL-w/o-fusion variant, which uses only clustering for representation learning, achieves MRR, Hits@1, Hits@3, and Hits@10 scores of 57.98, 41.90, 66.97, and 92.00, respectively. These results outperform the hypergraph-based method, i.e., DHyper (56.15, 43.76, 65.46, and 85.89 on the same metrics), on all metrics except Hits@1. This superior performance can be partly attributed to fuzzy clustering’s ability to prevent the formation of extremely small clusters.
>Q2: The ability to address diverse relation semantics
Firstly, it is important to note that TKG datasets do not provide explicit semantic descriptions of relations like “leave from” or “transfer to”. The training process typically uses only entity and relation IDs. However, we do model different relation types using a Relation-Aware Graph Convolutional Network, capturing distinct characteristics of various relation types even without explicit semantic descriptions.
**Moreover, DECRL is designed to capture the temporal evolution of high-order correlations, which indirectly addresses the issue of diverse relation semantics.** For example, if entities consistently interact over a continuous period, they have a higher probability of being clustered together at each timestamp. By capturing the temporal evolution of high-order correlations, our approach reinforces the closeness of their relationship over time. Conversely, if entities do not consistently interact over time, they have a lower probability of being clustered together. The temporal evolution component allows for the gradual distancing of these entities in the representation space. Therefore, DECRL can effectively handle scenarios where different relations may indicate varying levels of future interaction, without relying on explicit semantic information.
**To illustrate this capability, we would like to draw attention to the comparison between Figure 2d (Final DECRL) and Figure 2f (Final DECRL-w/o-fusion, which only models high-order correlations without capturing their temporal evolution).** This comparison clearly illustrates that capturing the temporal evolution of high-order correlations leads to superior entity representations, as evidenced by the larger inter-cluster distances and tighter intra-cluster entity groupings. Furthermore, by comparing the first and third columns of Figure 2 in the manuscript, we can observe the progression of training. This comparison demonstrates that capturing the temporal evolution of high-order correlations gradually increases the separation between clusters while simultaneously tightening the grouping of entities within clusters.
>Q3: Performance on Wikidata, YAGO, and GDELT datasets.
**We have conducted additional experiments on WIKI, YAGO, and GDELT datasets.** The results of these experiments are presented in Tables 1 and 3 of the attached rebuttal PDF. We are pleased to report that our approach has achieved the SOTA relation prediction performance across all these datasets, demonstrating the effectiveness and robustness of our approach.
**New experimental results (see the rebuttal PDF)**:
1. Performance on different datasets: Tables 1 and 3 in the rebuttal PDF show the relation prediction performance of DECRL on WIKI, YAGO, and GDELT.
2. Performance of entity prediction task: Table 2 in the rebuttal PDF shows the entity prediction performance of DECRL on GDELT.
3. The contributions of attentive temporal encoder and the fuzzy c-means clustering method: Table 4 in the rebuttal PDF shows the performance comparison of DECRL and its variants on ICEWS14.
4. Model efficiency: Figure 1 in the rebuttal PDF illustrates the training time comparison with DHyper on ICEWS14 (in seconds).
5. Case study: Figure 2 in the rebuttal PDF illustrates the entity representations of DHyper on ICEWS14C.
Pdf: /pdf/af94042c067bd2c9b8c6ef66fc5a78993ea36ed7.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
REBORN: Reinforcement-Learned Boundary Segmentation with Iterative Training for Unsupervised ASR | Accept (poster) | Summary: This paper proposes a training framework for unsupervised speech recognition models, building on top of wav2vec-u. The authors note that the performance of wav2vec-u is hindered by the quality of segment boundaries, as they show that the phoneme error rate can be improved significantly with ground-truth segmentation boundaries. The proposed method introduces a segment boundary prediction model and a phoneme prediction model. First, the segment boundary prediction model is trained with a policy gradient method. The reward function is based on the perplexity of the phoneme sequence. This sequence is predicted from the computed segmentation boundaries and is compared to earlier versions of the boundary prediction model, i.e., the boundary segmentation model from a previous iteration, or wav2vec-u at iteration 0. The perplexity should be as low as possible and can be computed by using a language model from unpaired text data. Secondly, the phoneme prediction model is trained with an adversarial loss, where the phoneme prediction model acts as a generator of a phoneme sequence, and the discriminator has to distinguish between these and real phoneme sequences from the unpaired text data. These steps are then repeated until no performance improvements are observed.
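The perplexity-difference reward described in this summary can be sketched minimally as follows (function names are hypothetical; the full method also adds length and edit-distance penalty terms):

```python
import math

def perplexity(logprobs):
    """Per-token perplexity of a phoneme sequence from LM log-probabilities."""
    return math.exp(-sum(logprobs) / len(logprobs))

def ppl_diff_reward(lm_logprobs_new, lm_logprobs_old):
    """Positive when the new segmentation's transcription has lower LM
    perplexity than the previous iteration's (wav2vec-u at iteration 0)."""
    return perplexity(lm_logprobs_old) - perplexity(lm_logprobs_new)
```

Maximizing this reward pushes the boundary model toward segmentations whose transcriptions the language model finds more plausible.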
Strengths: The authors clearly show that the wav2vec-u model suffers from the quality of the segmentation boundaries, and propose a well-described methodology to improve on it. The authors report results on the expected datasets, and perform a small ablation study to analyze the behavior of the proposed method.
Weaknesses: The authors propose a *very* complex methodology, with a lot of moving and finicky components, as reinforcement learning and GANs are both known to be fickle and hard to reproduce. I worry therefore that it will be hard to reproduce these results, but the authors promised in the checklist to release the code.
The authors use many tricks to make the method more stable and performant:
1. Initialize the phoneme prediction model with the pre-trained wav2vec-u model
2. Introduce 2 auxiliary reward functions, which heavily punish predicted sequences which are longer or shorter than the previous iteration
3. Use behavior cloning (supervised loss on boundary pseudolabels) before the RL loss
4. boundary merging (segments with the same phoneme prediction are merged)
5. self-training of the phoneme prediction model (supervised loss on phoneme pseudolabels)
These are not a weakness per se, but point towards the overall complex picture of the proposed method. I hope future work can find a simpler and more elegant formulation of the UASR problem :)
The authors claim their method is iterative, but in paragraph 4.3 state they only use 1 to 2 iterations. I then wonder why the authors try to sell this as an iterative method, instead of follow-up stages to (significantly) improve on wav2vec-u.
Minor comments on writing:
* line 21: supervisedly trained with a ... -> trained with self-supervision using a ...
* line 33: translate and -> translate, and
* line 50: model: -> model.
* line 74: remove "the"
* line 77: GAN -> The GAN
* line 95: The wav2vec-U network
* line 98: to -> for
* line 99: I cannot really parse this sentence, origanated to be -> originally (?)
* line 102: We discuss these two related topics further in Appendix F.
* line 149: The REINFORCE algorithm
Technical Quality: 3
Clarity: 2
Questions for Authors: Why do you state in 4.3 to only use 1 to 2 iterations, while figure 2 shows results with improvements until 5 iterations?
Why do you think that your model performs better on 5 languages but worse on Portuguese compared to wav2vec-u 2.0?
Why did you decide to mean-pool the segment before predicting the phoneme? I would venture that this throws away information and is not necessary?
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The ultimate goal of UASR is to enable speech technology for digitally rare languages without good labeled data. Although this work studies UASR on 6 languages, they are all from rich European countries. It remains to be seen whether this method works for languages from a different continent, especially because this method relies heavily on high quality textual data, to compute the perplexity in the RL loss, and for the discriminator.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the thorough and constructive comments. Please find the response to your questions below.
> The authors propose a very complex methodology. **These are not a weakness per se**, but point towards the overall complex picture of the proposed method.
To ensure the reproducibility of our work, **we will release the code, training and evaluation scripts, and model weights**.
> Why the authors try to sell this as an iterative method, instead of follow-up stages to (significantly) improve on wav2vec-u.
> Why do you state in 4.3 to only use 1 to 2 iterations, while figure 2 shows results with improvements until 5 iterations?
While the performance gain is significant in the early iterations, Figure 2 shows that further improvements can still be achieved with more iterations. **In Table 1, we use only two iterations to emphasize the effective integration of REBORN and self-training,** as well as to demonstrate the efficiency of REBORN, i.e., achieving improved performance only requires few iterations.
> Minor comments on writing:
We thank the reviewer for the detailed suggestions. We will revise the paper to fix them.
> Why do you think that your model performs better on 5 languages but worse on Portuguese?
We hypothesize that our method performs slightly worse in Portuguese because the cross-lingual feature extractor (XLSR-53) uses the least amount of Portuguese data compared to other languages during self-supervised pre-training.
> Why did you decide to mean-pool the segment before predicting the phoneme?
Our phoneme prediction model is initialized from wav2vec-U, which uses mean-pooled features. To ensure consistency with wav2vec-U, we also use mean-pooling in REBORN. Exploring other pooling methods is interesting and potentially beneficial, which is left for future work.
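The mean-pooling step described here — one pooled feature vector per predicted segment — can be sketched as follows (a simplified illustration with hypothetical names; `1` marks the start of a segment):

```python
import numpy as np

def mean_pool_segments(frames, boundaries):
    """frames: (T, D) frame features; boundaries: (T,) 0/1 array where
    1 marks the start of a new segment. Returns one mean vector per segment."""
    seg_ids = np.cumsum(boundaries)              # constant id within a segment
    return np.stack([frames[seg_ids == s].mean(axis=0)
                     for s in np.unique(seg_ids)])
```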
> The ultimate goal of UASR is to enable speech technology for digitally rare languages without good labeled data. Although this work studies UASR on 6 languages, they are all from rich European countries.
We thank the reviewer for pointing this out. Indeed, we have not yet tested REBORN on a digitally rare language. Referencing prior works [1-3], we tested REBORN on the most commonly used datasets in UASR. We will revise the paper to include this limitation in Appendix A.
### References
[1] Chen et al. “Completely Unsupervised Phoneme Recognition by a Generative Adversarial Network Harmonized with Iteratively Refined Hidden Markov Models.” *Interspeech, 2019*.
[2] Baevski et al. "Unsupervised speech recognition." *NeurIPS, 2021*.
[3] Gao et al. "Euro: Espnet unsupervised asr open-source toolkit." *ICASSP, 2023*.
---
Rebuttal Comment 1.1:
Comment: I acknowledge the rebuttal and will not be changing my (favorable) score. | Summary: This paper addresses the challenging problem of learning speech recognition without parallel recordings and transcriptions. They build upon state-of-the-art approaches of Wav2Vec-U(2) in this area and propose to refine a pretrained model using a reinforcement learning approach. The main idea is to split the problem into two parts: segmentation and segment classification into “phonemes”. Since neither segmentation nor labels are known, they use policy gradients and leverage “phoneme” classifier rewards to learn the segmentation model. The training is stabilized by splitting it into iterations where only one of the two models is being trained while the other is frozen, and additional rewards (penalties?) that act as regularizers for the main perplexity difference reward. In essence, the model is overall encouraged to improve the perplexity vs the previous model checkpoint, but not change either the segment length or phone edit distance too much. The evaluation is performed on 100h subset of LibriSpeech and full TIMIT for English, and 100h subsets of MLS for German, Dutch, French, Spanish, Italian and Portuguese. The authors demonstrate significant improvements in WER and PER vs state-of-the-art baseline models.
Strengths: * Their main idea of framing UASR as an RL problem is original and interesting. Overall I think it is a solid contribution to UASR literature.
* The proposed RL training is able to significantly improve upon Wav2Vec-U ASR results.
* The evaluation setup is comprehensive enough to believe the method may be generally applicable for other languages.
* The method, architecture, and experimental setup are described comprehensively.
* The ablations related to reward function designs are interesting and demonstrate that the proposed edit-distance and length-difference rewards are effective regularizers for the main perplexity difference reward.
Weaknesses: * The author’s (mis)use of the word “phoneme” is problematic. E.g., p.2 l.64 mentions “segmental structures (that) are acoustic units smaller than phonemes”. Phoneme is a perceptual construct that represents all possible (sequences of) sounds (phones) for which there is no lexical distinction. Such sequence may very well be empty. Therefore it is problematic to talk about phonemic segmentation; what the authors may have had in mind is phonetic segmentation instead. This problem is discussed more thoroughly in: Moore, R. K., and L. Skidmore. "On the use/misuse of the term 'phoneme'." Proceedings of Interspeech 2019.
* The authors claim that “the effect of feature segmentation on phoneme prediction is non-differentiable”, however, it can be. The authors may consult e.g. the segmental CPC approach with its differentiable boundary detection in: Bhati, S., Villalba, J., Żelasko, P., Moro-Velázquez, L., Dehak, N. (2021) Segmental Contrastive Predictive Coding for Unsupervised Word Segmentation. Proc. Interspeech 2021, 366-370, doi: 10.21437/Interspeech.2021-1874
Technical Quality: 4
Clarity: 4
Questions for Authors: * Could REBORN be leveraged in a semi-supervised setup, e.g., starting from an ASR pretrained on a low amount of supervised data, to leverage a larger amount of non-parallel data for fine-tuning?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: * The proposed method depends strongly on the availability of an initial pretrained ASR model (presumably unsupervised). My understanding is that it is more of a “refinement” stage for existing models rather than stand-alone method. I think it should be presented as such - the title and the abstract don’t mention that.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the thorough and constructive comments. Please find the response to your questions below.
> The author’s (mis)use of the word “phoneme” is problematic. What the authors may have had in mind is phonetic segmentation instead.
We thank the reviewer for pointing this out. **We will thoroughly revise the paper to ensure the correct use of these terms**.
> The authors claim that “the effect of feature segmentation on phoneme prediction is non-differentiable”, however, it can be.
We would like to clarify that in our paper, we meant that the process of predicting phonemes based on predicted segments is naturally non-differentiable. As the reviewer pointed out, some techniques could allow for approximating the backpropagation gradients. **We will revise our claim in the paper to avoid confusion and discuss the work [Bhati et al., 2021] referred to by the reviewer**.
> Could REBORN be leveraged in a semi-supervised setup?
We thank the reviewer for this interesting idea. We believe that REBORN could benefit especially when the labels of the limited supervised data are noisy. In this way, the phoneme predictor might be non-ideal and could possibly get improved through REBORN’s iterative paradigm. We would love to investigate this idea in the future thoroughly.
> It should be presented as a “refinement” stage for existing models, but the title and the abstract don’t mention that.
We will rethink the paper's title and abstract to clarify this. We appreciate the reviewer’s suggestion.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I uphold my recommendation to accept. | Summary: The paper proposes an improvement over wav2vec-U for unsupervised speech recognition, specifically for the task of predicting phonemes. For the final WER performance, a given lexicon is used to get from the phoneme sequence to words.
So the paper focuses on improving the segmentation/boundaries of the phonemes.
Two model parts:
The segmenter, trained via RL and various different reward functions.
GAN-style training for the phoneme prediction model based on the segments from the segmenter.
Stage 1 trains only the segmenter, and assumes a given frozen phoneme prediction model. In the first iteration, the phoneme prediction model is initialized from wav2vec-U.
Stage 2 trains only the phoneme prediction model, taking the segments from the segmenter.
In GAN-style training of the phoneme prediction model: Phoneme prediction model is the generator, and discriminator tries to distinguish a generated/predicted phoneme sequence vs a real one, using the unpaired text data.
Strengths: Good results.
Code will be published.
Weaknesses: Some parts could be made more clear. (See below.)
Needs Wav2vec-U for initialization.
Technical Quality: 3
Clarity: 3
Questions for Authors: So you train a phoneme generator model here. But it's a bit unclear, how do you get from phonemes to words? It refers to the appendix, and WFST decoding is mentioned. It mentions that it needs a lexicon. So to make this clear: the lexicon provides a given mapping from phonemes to words? If this is given, is it fair to call this unsupervised ASR then? It seems like you cheat here by giving it a crucial part for the unsupervised ASR task. While this is still an interesting problem to study then, I think this should be made much more clear. This is not really unsupervised ASR to me. Or if you really want to stick to the term, at least make it very clear (not just in the appendix, and only for readers who know what a "lexicon" is) that this is what you take as a given here. This also has further implications: The phoneme inventory is then also given.
The exact definition of the phoneme predictor/generator is a bit unclear. Is this an auto-regressive model? Or non-autoregressive?
How do you sample from the phoneme predictor for policy gradient? Does this involve beam search?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: -
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the thorough and constructive comments. Please find the response to your questions below.
> How do you get from phonemes to words? ... it needs a lexicon. If lexicon is used, is it fair to call this unsupervised ASR? I think this should be made much more clear.
We thank the reviewer for raising this concern. **Our work does utilize an additional phonemizer (L668-L670) to obtain the lexicon, following the standard setup in UASR [1-5]**. We acknowledge that using a lexicon is a limitation of recent UASR works, including ours, which is discussed in Appendix A (L540). As suggested by the reviewer, **we will revise the paper to clearly discuss this in Section 2 and Section 4.3**.
> The definition of the phoneme predictor is unclear. Is it autoregressive or non-autoregressive?
It is non-autoregressive. Precisely, it is a one-layer CNN following wav2vec-U. We will revise the paper to include this information in Section 4.3 to make it clear.
> How do you sample from the phoneme predictor for policy gradient? Does this involve beam search?
When training the segmentation model using policy gradient, we obtain the phoneme sequence by taking the argmax prediction from the phoneme predictor. Since our phoneme prediction model is non-autoregressive, we do not perform a beam search. We will revise the paper to include this detail in Section 3.
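The policy-gradient update for the per-frame Bernoulli boundary policy can be sketched in a few lines (a simplified score-function/REINFORCE sketch with hypothetical names, not the paper's implementation):

```python
import numpy as np

def reinforce_boundary_grad(frame_logits, reward, rng):
    """One REINFORCE step for a per-frame Bernoulli boundary policy.
    frame_logits: (T,) logits from the segmentation model; reward: scalar
    score of the sampled segmentation, e.g. a perplexity-difference reward."""
    p = 1.0 / (1.0 + np.exp(-frame_logits))               # boundary probabilities
    boundaries = (rng.random(p.shape) < p).astype(float)  # sample 0/1 boundaries
    # grad of log Bernoulli(prob=p) w.r.t. logits at sample b is (b - p);
    # scale by the reward and ascend: logits += lr * grad
    grad = reward * (boundaries - p)
    return boundaries, grad
```

A higher-reward sample increases the probability of the boundaries that produced it; a zero reward leaves the policy unchanged.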
> Needs wav2vec-U for initialization
We acknowledge that relying on a reasonably effective phoneme prediction model for initialization may present a limitation, as noted in L539. However, we would like to clarify that our method is not limited to the wav2vec-U for initialization; instead, we have experimented with using different feature extractors, such as HuBERT [6] and WavLM [7] following EURO [4], and starting from different phoneme predictors on LibriSpeech (see Appendix C.5). The results show that **REBORN achieves comparable performance improvements even with a completely different backbone model and a less effective phoneme predictor**.
### References
[1] Liu et al. “Completely Unsupervised Phoneme Recognition by Adversarially Learning Mapping Relationships from Audio Embeddings.” *Interspeech, 2018*.
[2] Yeh et al. "Unsupervised speech recognition via segmental empirical output distribution matching.” *ICLR, 2019*.
[3] Baevski et al. "Unsupervised speech recognition." *NeurIPS, 2021*.
[4] Gao et al. "EURO: Espnet unsupervised asr open-source toolkit." *ICASSP, 2023*.
[5] Liu et al. “Towards end-to-end unsupervised speech recognition.” *SLT, 2022*.
[6] Hsu et al. "Hubert: Self-supervised speech representation learning by masked prediction of hidden units." *TASLP, 2021*.
[7] Chen et al. "Wavlm: Large-scale self-supervised pre-training for full stack speech processing." *JSTSP, 2022*.
---
Rebuttal Comment 1.1:
Comment: Thank you for the answer. With those points being made more clear in the paper, I will increase my rating by one to 7 (Accept). | Summary: This paper proposes a new UASR approach which explicitly learns both segmentation (phoneme boundary) prediction and phoneme class prediction. To do segmentation, the paper proposes some reinforcement-learning objectives. The phoneme class prediction follows an existing approach. It turns out the proposed approach is effective, achieving the new SOTA for UASR on public datasets.
Strengths: - The paper's presentation is clear and is easy to follow.
- The proposed technique (RL for learning segmentation model) is novel and non-trivial
- The proposed technique is effective. It achieves SOTA by a big margin across several datasets and languages.
- There is analysis why the proposed technique works so effectively
- See my concern later
Weaknesses: - In fact, ASR doesn't need accurate segmentation, e.g. a CTC model is peaky -- the peak of a phoneme can appear at any frame for this phoneme. For this reason, it is not so clear to me why improving segmentation/phoneme granularity can help improve UASR so much.
- Please justify.
- For this reason, I put the rating to "border line" instead of "weak accept"
- Unfortunately, the predicted segments are not more accurate than other approaches and probably are not good to be used to obtain accurate timestamps for ASR (or not? -- you may use TIMIT or Buckeye's ground-truth timestamp to check this)
- L324: "some segmental structures smaller than the phonemes": does it probably mean that -- if we use sub-phoneme structures in wav2vec-U 2.0, it may be as performant as the proposed approach, as we are able to learn a smaller granularity of signals. In this case, we won't need the RL learning.
- Would be better to give a real example
Technical Quality: 3
Clarity: 4
Questions for Authors: - Besides Table 4, it would be better to add a table for TIMIT which comes with word/phone-level timestamps. Another dataset Buckeye can be considered.
- Typo or presentation suggestions:
- Figure 1: probably add "1 means the start of a segment" to the caption to make it self-contained
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the thorough and constructive comments. Please find the response to your questions below.
> ASR doesn't need accurate segmentation, e.g. a CTC model is peaky -- the peak of a phoneme can appear at any frame for this phoneme.
While recent **supervised ASR** does not need explicit segmentation, under the **unsupervised scenario**, learning the mapping from very long speech features to short text transcriptions has been found to be very challenging. To mitigate this difficulty, **segmentation is critical to ensure good unsupervised ASR performance and stability**, which is evident from early works [1-3] to recent studies [4-5].
To investigate the importance of segmentation in UASR, we compare the training stability of UASR algorithms: wav2vec-U, wav2vec-U 2.0, and REBORN. wav2vec-U 2.0 is the only UASR algorithm that does not segment the speech features. wav2vec-U uses k-means segmentation (see Appendix D), while REBORN uses a segmentation model learned using RL. Following the EURO setup described in Section 4.1, we use each algorithm to train UASR models on LibriSpeech. Since each algorithm has its own configuration and search space, including the random seeds, we iterate through these configurations, with each resulting in one model. For each algorithm, we report the *"percentage of models yielding PER < 40%"* among all training configurations and random seeds. The results are shown in Table 1 below.
*Table 1: The importance of segmentation in current UASR. Results are obtained from LibriSpeech following the EURO setup. As the table indicates, **wav2vec-U and REBORN always converge to PER < 40% when proper segmentation is used**.*
| Method | Segmentation | Number of models we trained | Percentage of models with PER < 40% ↑ |
|-|:-:|:-:|:-:|
| wav2vec-U (k-means-based) | ✓ | 40 | 100% |
| REBORN | ✓ | 50 | 100% |
| wav2vec-U (no-segmentation) | x | 40 | 0% |
| wav2vec-U 2.0 | x | 64 | 19% |
From Table 1, we observe that wav2vec-U 2.0, which does not have segmentation, is highly unstable. Conversely, REBORN and wav2vec-U (with k-means segmentation) always result in a UASR model with PER lower than 40%. Additionally, removing the segmentation from wav2vec-U makes the algorithm unstable and never yields a model with PER lower than 40%.
> For this reason, it is not so clear to me why improving segmentation/phoneme granularity can help improve UASR so much.
The results in Table 1 have justified the importance of segmentation in UASR. Moreover, we found that the quality of the segmentation strongly affects the performance of UASR (Table 4 in the paper): **using hand-crafted rules or separately learned segmentation boundaries yields suboptimal UASR performance while using the oracle phoneme boundary greatly improves the UASR performance**. Motivated by the above observation, we thus propose to use RL to learn segmentation that is **tailored for the phoneme prediction model**. The segmentation boundary in REBORN is learned with feedback from the phoneme prediction model, which ensures the boundary is useful for improving the UASR performance and **attains state-of-the-art results on many widely used UASR datasets**.
> The predicted segments are not more accurate than other approaches and probably are not good to be used to obtain accurate timestamps.
As suggested by the reviewer, we use TIMIT’s human-annotated phone-level timestamps for evaluation and report the results in Table 2 below.
*Table 2: Boundary evaluation results on TIMIT. REBORN with boundary merging (Figure 1-(c)) is close to the existing SoTA method.*
|Method|Precision|Recall|F1 Score|R-Value|
|-|-|-|-|-|
|k-means-based|0.62|0.75|0.68|0.68|
|Strgar and Harwath [6]|0.85|0.79|0.82|0.84|
|REBORN|0.61|0.83|0.71|0.62|
|REBORN (w/ boundary merging)|0.80|0.78|0.79|0.82|
The results show that the initial boundaries learned by REBORN (before boundary merging in Figure 1-(c)), which are specifically optimized for the phoneme prediction model, already achieve a high recall. With boundary merging, i.e., consecutive segments with the same phoneme prediction are merged as illustrated in Figure 1-(c), **REBORN achieves better alignment, bringing our results close to existing SoTA unsupervised segmentation methods**.
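The boundary-merging step — collapsing consecutive segments that predict the same phoneme — reduces to a run-length merge; a minimal sketch with hypothetical names:

```python
def merge_segments(segment_phonemes):
    """Collapse consecutive segments with identical phoneme predictions.
    Returns the merged phoneme sequence and how many original segments
    each merged segment spans (i.e., where boundaries were removed)."""
    merged, spans = [], []
    for p in segment_phonemes:
        if merged and merged[-1] == p:
            spans[-1] += 1                       # extend the current run
        else:
            merged.append(p)
            spans.append(1)                      # start a new merged segment
    return merged, spans
```

Merging in this way removes the over-segmentation boundaries, which is what raises precision while keeping recall high in Table 2.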
> It would be better to add a table for TIMIT which comes with word/phone-level timestamps.
We thank the reviewer for suggesting this experiment, which strengthens the contribution of our method. We will revise the paper to include this result.
> if we use sub-phoneme structures in wav2vec-U 2.0, it may be as performant as the proposed approach
Since wav2vec-U 2.0 uses a fixed stride 1D-CNN to downsample the speech features and does not segment the speech features, we are unsure how to incorporate sub-phoneme structures in wav2vec-U 2.0. We would appreciate it if the reviewer could provide more details. We are willing to implement, analyze, and discuss it if time allows.
> Figure 1 presentation suggestions
We thank the reviewer for the suggestion. We will revise Figure 1 based on the suggestion.
### References
[1] Liu et al. “Completely Unsupervised Phoneme Recognition by Adversarially Learning Mapping Relationships from Audio Embeddings.” *Interspeech, 2018*.
[2] Yeh et al. "Unsupervised speech recognition via segmental empirical output distribution matching.” *ICLR, 2019*.
[3] Chen et al. “Completely Unsupervised Phoneme Recognition by a Generative Adversarial Network Harmonized with Iteratively Refined Hidden Markov Models.” *Interspeech, 2019*.
[4] Baevski et al. "Unsupervised speech recognition." *NeurIPS, 2021*.
[5] Gao et al. "Euro: Espnet unsupervised asr open-source toolkit." *ICASSP, 2023*.
[6] Strgar et al. “Phoneme segmentation using self-supervised speech models.” *SLT, 2022*.
---
Rebuttal Comment 1.1:
Title: Acknowledgement of rebuttal
Comment: Thank you for the explanation and details in the rebuttal. With the explanation of that segmentation is critical for UASR, I'll increase my rating. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
QT-ViT: Improving Linear Attention in ViT with Quadratic Taylor Expansion | Accept (poster) | Summary: In this paper, the authors propose QT-ViT models, which improve the traditional linear self-attention methods by using a second-order (quadratic) Taylor expansion to approximate the original softmax attention and then accelerate this process using a fast approximation algorithm reducing computational complexity from $O(n^2d)$ to $O(nd^3)$ and further to $O(nd^2)$. This method leverages the properties of quadratic expansion for better performance while maintaining the speed of linear approximation. Extensive experiments on image classification, as well as object detection and semantic segmentation tasks, demonstrate that QT-ViT models achieve state-of-the-art accuracy-speed trade-offs, surpassing previous methods across various model sizes.
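The identity that makes the quadratic Taylor term linearizable — $(q\cdot k)^2 = \langle \mathrm{vec}(qq^\top), \mathrm{vec}(kk^\top)\rangle$, i.e., a Kronecker/outer-product feature map — can be illustrated with a toy sketch (an unoptimized illustration of the idea, not the authors' accelerated algorithm):

```python
import numpy as np

def taylor2_attention(Q, K, V):
    """Attention with the kernel exp(q.k) ~ 1 + q.k + (q.k)^2 / 2.
    Since (q.k)^2 = <vec(q q^T), vec(k k^T)>, the quadratic term factorizes
    into query-side and key-side features, so the n-by-n attention matrix is
    never formed and the cost scales linearly in sequence length n."""
    def phi(X):                                   # feature map [1, x, vec(x x^T)/sqrt(2)]
        n, d = X.shape
        quad = np.einsum('ni,nj->nij', X, X).reshape(n, d * d) / np.sqrt(2.0)
        return np.concatenate([np.ones((n, 1)), X, quad], axis=1)
    Qf, Kf = phi(Q), phi(K)
    num = Qf @ (Kf.T @ V)                         # never materializes n-by-n scores
    den = (Qf @ Kf.sum(axis=0))[:, None]          # row-wise normalizer
    return num / den
```

Note that $1 + x + x^2/2 > 0$ for all real $x$, so the normalizer stays positive, unlike first-order linear-attention kernels that need extra tricks to guarantee this.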
Strengths: + The paper introduces a novel method as it combines the benefits of quadratic expansion with a fast approximation algorithm, offering a fresh perspective on improving attention mechanisms without relying on knowledge distillation or high-order attention residuals.
+ The paper provides a clear theoretical foundation for their method, explaining how the quadratic Taylor expansion and Kronecker product are utilized to reduce computational complexity. The theoretical analysis is solid and sound.
+ The experiments on image classification, detection, and segmentation both show the effectiveness of the proposed method.
+ The paper is well-written and structured.
Weaknesses: + In Table 1, the top-1 accuracy gain of the proposed QT-ViT models over EfficientViT appears to diminish as the models are scaled up. This trend raises concerns about the scalability and robustness of QT-ViT models. The paper should include a detailed analysis of this trend.
+ While Figure 1 effectively visualizes the latency of the models, the paper does not include this crucial metric in Table 1. The paper should include latency metrics in Table 1 to provide a comprehensive comparison of the models.
+ Table 2 presents the results of using different kernels, but it is unclear whether all these kernels have the same time complexity. The paper should clearly state whether the time complexities of each kernel in Table 2 are the same. If not, the time complexity of each method should be added in Table 2.
+ In Eq. 11, the authors state that using the self-multiplication terms can effectively represent all quadratic terms. Could the authors provide more details on this finding?
+ It seems that the improvement in segmentation is larger than in image classification and object detection. It is better to provide some analysis.
Technical Quality: 4
Clarity: 3
Questions for Authors: Please see the weaknesses. The instability of accuracy is my major concern about this method.
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: In Table 1, the top-1 accuracy advantage of the proposed QT-ViT models over EfficientViT appears to diminish. This trend raises concerns about the scalability and robustness of QT-ViT models as they are scaled up. The paper should include a detailed analysis of this trend.**
A1: First of all, increasing the classification accuracy from a baseline of 79% is not as difficult as from 85%: the stronger the baseline method, the harder it is to further improve its performance.
Secondly, similar to the EfficientViT B and L series, QTViT 1$\sim$3 and 4$\sim$6 use different architectures as their baseline models, so the proportion of the attention operation differs between the two settings. Specifically, the ratios of FLOPs of the attention blocks in QTViTs are shown below; the ratios in QTViT 4$\sim$6 are much smaller than those in 1$\sim$3. Since we only modify the attention operation in the model, the proposed method has a smaller impact on QTViT 4$\sim$6. This is the main reason QTViT 1$\sim$3 show a larger improvement over their baselines than QTViT 4$\sim$6 do.
|model |ratio of FLOPs of attention blocks|
|-|-|
|QTViT-1|59.2%|
|QTViT-2|62.1%|
|QTViT-3|65.6%|
|QTViT-4|25.4%|
|QTViT-5|25.7%|
|QTViT-6|25.6%|
**Q2: While Figure 1 effectively visualizes the latency of the models, the paper does not include this crucial metric in Table 1. The paper should include latency metrics in Table 1 to provide a comprehensive comparison of the models.**
A2: Thank you for your advice. We will include the latency metrics in Table 1.
**Q3: Table 2 presents the results of using different kernels, but it is unclear whether all these kernels have the same time complexity. The paper should clearly state whether the time complexities of each kernel in Table 2 are the same. If not, the time complexity of each method should be added in Table 2.**
A3: Sorry for the lack of clarity. As mentioned in Line 212, we compare our method with the kernels used in various linear attention vision transformers; these are the kernel functions introduced in Section 2.3. Since they are all used for linear attention, the time complexity of every kernel in Table 2 is the same, $O(Nd^2)$. We will further clarify this in the final version of the paper.
**Q4: In Eq. 11, the authors state that using the self-multiplication terms can effectively represent all quadratic terms. Could the authors provide more details on this finding?**
A4: In fact, as Table 3 shows, we have tried several different methods to reduce the number of quadratic terms, and using the self-multiplication terms yields the best result among all of the methods. Generally speaking, the quadratic terms can be viewed as a square matrix $\bf M$ in which each element ${\bf M}_{ij}$ represents the multiplication result of $x_i$ and $x_j$. Thus, the self-multiplication terms can be viewed as the elements on the diagonal of the matrix which can represent the whole matrix to some extent. Besides, we expand the self-multiplication results with respect to the number of quadratic terms and use a learnable parameter $\alpha$ to further adjust the ratio. Thus, the overall method can represent all quadratic terms.
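The matrix picture in A4 can be sketched numerically (an illustrative reconstruction, not the authors' code): the full set of quadratic terms forms the outer product of a feature vector with itself, and the self-multiplication terms are exactly its diagonal.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(8)  # a feature vector of dimension d = 8

# Full quadratic terms: M[i, j] = x_i * x_j, i.e. the outer product
M = np.outer(x, x)

# Self-multiplication terms: the diagonal entries x_i^2
self_mult = x * x
assert np.allclose(self_mult, np.diag(M))

# d^2 quadratic terms are summarized by only d diagonal terms
assert M.size == x.size ** 2
```

The sketch shows why this reduction is cheap: keeping only the diagonal shrinks the number of quadratic features from $d^2$ to $d$.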
**Q5: It seems that the improvement in segmentation is larger than in image classification and object detection. It is better to provide some analysis.**
A5: In our research, we introduce a novel linear self-attention based on quadratic Taylor expansion. Our experimental results on image classification with the ImageNet-1K dataset show the efficacy of our method.
While image classification assigns a single label to an entire image, segmentation is a classification task at the pixel level. Due to the similarity between these tasks, our method also performs well on segmentation.
For the object detection task, taking QT-ViT-1 as an example, the performance improvement is more modest than on semantic segmentation, but the relative parameter increase is also smaller (57.6M -> 57.9M, +0.5%, versus 32.5M -> 32.8M, +0.9% for segmentation), which is reasonable.
The explanation above will be added in the final version of the paper.
---
Rebuttal Comment 1.1:
Comment: I appreciate your response. I noticed that some model series, such as Poolformer, are listed in Table 1 but are not shown in Figure 1. Could you please explain this?
---
Reply to Comment 1.1.1:
Title: Respond to Reviewer LZeB
Comment: Thanks for your question. In fact, we have tested the latencies of Poolformer series and the results are shown below.
|model|latency (ms)|Top-1 Acc (%)|
|-|-|-|
|s12|3.78|77.2|
|s24|7.12|80.3|
|s36|9.91|81.4|
|m36|12.67|82.1|
|m48|16.56|82.5|
The Poolformer series is not as strong as the other series, and including it in Figure 1 would crowd the other models together, making it difficult for readers to distinguish their strengths and weaknesses. Thus, we decided not to include the Poolformer series in Figure 1. Other model series listed in Table 1 but not shown in Figure 1 are omitted for similar reasons.
We will add a 'Latency' column in Table 1 in the final version of our paper. | Summary: This paper proposes a novel method to compute the kernel function in linear attention. The authors decompose the softmax attention with a Taylor expansion and use the first two terms to approximate the exponential function. The Kronecker product is used to decompose the quadratic Taylor expansion into two kernel functions, and the self-multiplication terms in the output of the kernel replace the quadratic terms for fast inference. The experimental results on multiple CV tasks show the effectiveness and efficiency of the proposed method.
Strengths: - The use of Kronecker product to realize the second-order Taylor expansion is smart and enlightening.
- The fast approximation algorithm is effective in reducing the time complexity.
- The QTViTs achieve a new Pareto front in the accuracy-speed trade-off.
- The paper is well-written and easy to understand.
Weaknesses: - What is the dimension of $\alpha$, $\beta$ and $\gamma$ used in Eq.11? Are there any ablation studies using different parameters?
- I notice that you replace $K_r(\phi(x))$ in Eq.8 with Eq.11, and you use $\gamma$ to represent the constant term. Then, could the constant term $1/\sqrt{2}$ in Eq.11 be merged with $\gamma$? What about the scaling factor $1/\sqrt{2}$ in Eq.8?
- There are 6 models with different model sizes in Tab.1. Why do you plot only 3 models in Fig.1?
Technical Quality: 3
Clarity: 4
Questions for Authors: - You have mentioned that you do not necessitate the masked output of the original softmax attention during training as the previous methods do. I wonder if this can be added back to get further performance gain on QTViT?
- You mentioned that QTViT can exhibit a more focused and sharper response around Fig.2. Is there any insight about this phenomenon?
- In Tab.1, the performance gains of QT-ViT 4$\sim$6 to the SOTA are more marginal compared to QT-ViT 1$\sim$3 to the SOTA. Are there any explanations for this phenomenon?
Overall, I would like to see the accuracy-speed trade-off of the other 3 models to make sure that the proposed method can surpass EfficientViT across all model sizes.
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: What is the dimension of $\alpha$, $\beta$ and $\gamma$ used in Eq.11? Are there any ablation studies using different parameters?**
A1: $\alpha$, $\beta$, and $\gamma$ are all scalars, as shown in Line 164 of the original paper. All of them are learnable parameters initialized to 1 in the experiments. We further conduct ablation studies by initializing one of them to 0 while learning all three parameters normally. The results are shown below (conducted with QTViT-1 on the ImageNet dataset).
|$\alpha$|$\beta$|$\gamma$|Top1 Acc (%)|
|-|-|-|-|
|1|1|1|79.6|
|0|1|1|79.3|
|1|0|1|79.4|
|1|1|0|79.0|
We can see that zeroing any one of them at initialization hurts performance, and the constant term is the most important of the three. This is reasonable, since the constant term is the most important basis of the Fourier series, and a proper value guarantees that the approximation stays greater than 0, which is one of the properties of the softmax function.
**Q2: I notice that you replace $K_r(\phi(x))$ in Eq.8 with Eq.11, and you use $\gamma$ to represent the constant term. Then, could the constant term $1/\sqrt{2}$ in Eq.11 be merged with $\gamma$? What about the scaling factor $1/\sqrt{2}$ in Eq.8?**
A2: This is a really good question. Yes, we merge the scaling factor and constant term in Eq.8 with $\alpha$ and $\gamma$ in Eq.11. Thus, the real initialization of $\alpha$, $\beta$, and $\gamma$ after merging are $1/\sqrt 2$, $1/\sqrt 2$ and $\sqrt 2$.
**Q3: There are 6 models with different model sizes in Tab.1. Why do you plot only 3 models in Fig.1?**
A3: Note that the baseline of our method is EfficientViT, and its authors plot only EfficientViT L1$\sim$L3 in their paper. We follow the same rule.
Specifically, QTViT 1$\sim$3 correspond to EfficientViT B1$\sim$B3 and QTViT 4$\sim$6 correspond to EfficientViT L1$\sim$L3. The B and L series in EfficientViT use different backbones and thus achieve different Pareto fronts; the L series has a better Pareto front than the B series. We observe a similar result: our QTViT 4$\sim$6 has a better Pareto front than QTViT 1$\sim$3. Thus, we only plot QTViT 4$\sim$6 in our original paper.
To better illustrate this, we show all 6 different models in **Figure 2 in the rebuttal PDF**. Note that although QTViT 4$\sim$6 is better than 1$\sim$3, all six models have better accuracy-speed trade-offs than the EfficientViT series.
If you feel it is necessary, we can add QTViT 1$\sim$3 back to Fig.1 in the final version of the paper.
**Q4: You have mentioned that you do not necessitate the masked output of the original softmax attention during training as the previous methods do. I wonder if this can be added back to get further performance gain on QTViT?**
A4: This is a good question. The masked output of the original softmax attention has been shown to be useful in previous studies such as [1] and [2], and it is also useful in our method; experiments using QTViT-1 on the ImageNet dataset are shown below. However, it requires more GPU memory during training, which makes it unsuitable for training large models, so we do not use this strategy in our paper.
|Method|GPU memory required per GPU during training|Top1 Acc (%)|
|-|-|-|
|w/o original softmax|13.9 GB|79.6|
|w/ original softmax| 15.8 GB (+13.7%)| 79.8|
[1] Vitality: Unifying low-rank and sparse approximation for vision transformer acceleration with a linear Taylor attention. HPCA, 2023.
[2] Castling-vit: Compressing self-attention via switching towards linear-angular attention at vision transformer inference. CVPR, 2023.
**Q5: You mentioned that QTViT can exhibit a more focused and sharper response around Fig.2. Is there any insight about this phenomenon?**
A5: Compared to linear Taylor attention and ReLU attention, our proposed method has a quadratic term and thus a sharper response to input features with larger values, meaning that QTViT concentrates more on important input features.
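As a toy numerical illustration of this sharpening effect (hypothetical scores, not taken from the paper): normalizing squared non-negative scores concentrates more weight on the largest entries than normalizing the scores themselves.

```python
import numpy as np

scores = np.array([0.5, 1.0, 2.0])  # hypothetical non-negative attention scores

linear_w = scores / scores.sum()            # linear (first-order) weighting
quad_w = scores ** 2 / (scores ** 2).sum()  # weighting with a quadratic term

# The quadratic term shifts mass toward the largest score
assert quad_w[-1] > linear_w[-1]
```

Here the largest score receives about 0.76 of the mass under the quadratic weighting versus about 0.57 under the linear one, matching the "more focused" behavior described above.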
**Q6: In Tab.1, the performance gains of QT-ViT 4$\sim$6 to the SOTA are more marginal compared to QT-ViT 1$\sim$3 to the SOTA. Are there any explanations for this phenomenon?**
A6: First of all, increasing the classification accuracy from a baseline of 79% is not as difficult as from 85%: the stronger the baseline method, the harder it is to further improve its performance.
Secondly, similar to the EfficientViT B and L series, QTViT 1$\sim$3 and 4$\sim$6 use different architectures as their baseline models, so the proportion of the attention operation differs between the two settings. Specifically, the ratios of FLOPs of the attention blocks in QTViTs are shown below; the ratios in QTViT 4$\sim$6 are much smaller than those in 1$\sim$3. Since we only modify the attention operation in the model, the proposed method has a smaller impact on QTViT 4$\sim$6.
|model |ratio of FLOPs of attention blocks|
|-|-|
|QTViT-1|59.2%|
|QTViT-2|62.1%|
|QTViT-3|65.6%|
|QTViT-4|25.4%|
|QTViT-5|25.7%|
|QTViT-6|25.6%|
---
Rebuttal Comment 1.1:
Title: Thanks
Comment: Thanks for the author’s reply. I have some additional questions. Do you use KD method in the table in A4? How many GPUs do you use during training?
---
Reply to Comment 1.1.1:
Title: Respond to Reviewer dQG7
Comment: Thanks for your question.
We do not use the KD method in the table in A4. In fact, adding the KD method would greatly increase GPU memory usage (at least doubling it). We use 8 GPUs during training; each AMD MI250 GPU has a maximum of 64 GB of memory.
That is still enough for training QTViT-1 with the original softmax attention and the KD method, but it causes OOM issues when training larger models such as QTViT-3. This shows the advantage of our method, which achieves good performance without the masked softmax attention output or the KD method. | Summary: This paper introduces QT-ViT models, which enhance linear self-attention using quadratic Taylor expansion. The similarity function is decomposed into the product of two kernel embeddings via the Kronecker product. By employing a fast approximation algorithm, the computational cost is reduced while maintaining overall performance. Experiments on ImageNet classification with various model sizes show consistent improvement over the baseline EfficientViT method. Additionally, visualizations demonstrate that the QT-ViT model has a more focused attention feature map.
Strengths: - S1. The paper is well-organized and easy to follow.
- S2. The overall idea of improving linear attention with quadratic Taylor expansion and Kronecker product is sound.
- S3. An extensive ablation study is conducted to show the performance of different variants of the Kronecker product.
Weaknesses: - W1. The overall improvement is incremental. Compared to the EfficientViT baseline, the performance is almost the same under the same computation budget across different model sizes and tasks. For ImageNet classification, it would be helpful to include results on ImageNet-v2 and ImageNet-real to demonstrate the robustness and consistency of the improvement.
- W2. The image resolution is not listed in the experiment section. Performance on different image resolutions should also be reported and compared with other methods. When using a much larger image resolution (and more patch tokens), how are the computation cost and latency affected?
- W3. It would be better to include the detection and segmentation results in the main text to support the effectiveness of the proposed QT-ViT across different tasks.
- W4. The network architecture should be detailed, and additional components like absolute positional embedding should be ablated.
- W5. The mIoU performance of EfficientViT-B3 is reported as 49.0 on the ADE20k dataset in the original paper, which is inconsistent with the 38.0 reported in this paper.
Technical Quality: 2
Clarity: 3
Questions for Authors: - The author should provide more implementation details about the network, experimental setting, and comparison with other methods.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: - The latency is tested on an AMD GPU, and I'm not sure if there is any difference on the implementation side. Will the performance gain be consistent on other types of devices?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Overall improvement is incremental. Add results on ImageNetv2 and real.**
A1: EfficientViT is currently the state-of-the-art method in terms of the accuracy-efficiency trade-off. Also, note that increasing the classification accuracy from a baseline of 79% is not as difficult as from 85%: the stronger the baseline method, the harder it is to further improve its performance. We achieve a new SOTA accuracy-efficiency trade-off across different model sizes.
Note that our QTViTs 1$\sim$3 and 4$\sim$6 follow the architectures of the EfficientViT B and L series, which use different baseline architectures, and QTViTs 1$\sim$3 show more obvious improvements than QTViTs 4$\sim$6. One reason is mentioned above; another is that the ratios of FLOPs of the attention blocks in QTViTs 4$\sim$6 are much smaller than those in 1$\sim$3. Thus, the proposed method has a smaller impact on QTViTs 4$\sim$6, since we only modify the attention operation in the model. The proportions of the attention operation are shown in the following table.
|model |ratio of FLOPs of attention blocks|
|-|-|
|QTViT-1|59.2%|
|QTViT-2|62.1%|
|QTViT-3|65.6%|
|QTViT-4|25.4%|
|QTViT-5|25.7%|
|QTViT-6|25.6%|
For the robustness and consistency of the improvement, we directly use the pre-trained QTViTs and EfficientViTs checkpoints and conduct inference on ImageNet-V2 and ImageNet-ReaL. The results are shown below, and we can see that the proposed QTViTs can consistently outperform EfficientViTs, which shows the robustness and consistency of our method.
|models|ImageNet|ImageNet-V2 |ImageNet-ReaL|
|-|-|-|-|
|QTViT-1|**79.57**|**75.37**|**85.61**|
|EfficientViT-B1|79.38|75.04|85.32|
|QTViT-2|**82.46**|**78.21**|**87.32**|
|EfficientViT-B2|82.09|77.92|86.98|
|QTViT-3|**83.93**|**80.04**|**88.31**|
|EfficientViT-B3|83.47|79.01|88.11|
**Q2: Larger image resolution, and the computation cost and latency.**
A2: We use image resolution of 224 by default. The experimental results of using other image resolutions such as 256 and 228 are shown below, we can see that our QTViT can achieve a consistent improvement over the state-of-the-art method EfficientViT, and the FLOPs and latencies are roughly the same. Experiments on other models and the corresponding discussions will be added in the final version of the paper.
|resolution|model|FLOPs (G)|latency (ms)|Top-1 Acc (%)|
|-|-|-|-|-|
|224|QTViT-1|0.52|1.74|**79.6**|
|224|EfficientViT-B1|0.52|1.76|79.4|
|256|QTViT-1|0.68|2.16|**80.1**|
|256|EfficientViT-B1|0.68|2.13|79.9|
|288|QTViT-1|0.86|2.28|**80.6**|
|288|EfficientViT-B1|0.86|2.26|80.4|
**Q3: Move det and seg results in the main text.**
A3: Thanks for your suggestion, the experiments are currently in the supplemental section due to the limited pages of the main text. We will move the detection and segmentation results to the main text in the final version.
**Q4: The network architecture should be detailed, and absolute positional embedding should be ablated.**
A4: As shown in lines 176~180, we use exactly the same architecture as EfficientViT, except for changing the kernel function to our quadratic Taylor expansion kernel and adding absolute positional embedding. In fact, absolute positional embedding has little impact on latency, FLOPs, and top-1 accuracy for image classification (almost the same latency and FLOPs, and a <0.05 top-1 accuracy gap across models). We do see a difference in object detection; the results with and without absolute positional embedding are shown in the following table (APE stands for absolute positional embedding).
|Backbone|AP|AP$_{50}$|AP$_{75}$|
|-|-|-|-|
|QTViT-1 w/ APE|39.3|58.2|42.1|
|QTViT-1 w/o APE|39.2|58.2|42.0|
|QTViT-2 w/ APE|41.1|59.7|44.7|
|QTViT-2 w/o APE|41.0|59.7|44.6|
|QTViT-3 w/ APE|42.6|60.9|45.9|
|QTViT-3 w/o APE|42.5|60.8|45.8|
The above results will be added to the final version of the paper.
**Q5: Inconsistent mIoU results.**
A5: For the semantic segmentation results: since EfficientViT did not provide their training details either in their GitHub code or in their paper by the time we submitted our NeurIPS paper, we conducted segmentation experiments based on mmsegmentation, using the same training strategy as UperNet-ResNet50. Note that we use exactly the same training strategy to derive the results of both the proposed QTViT and EfficientViT, and we surpass EfficientViT by a margin, which shows the superiority of the proposed method. The absolute mIoU of both EfficientViT and QTViT can easily be improved by adjusting the training strategy, such as increasing the number of training steps.
As shown in the following table, by merely increasing the training steps, the mIoU results can be greatly increased and approach the result of the official EfficientViT-B1. In all settings, our QTViT outperforms EfficientViT.
|model |training steps|mIoU|
|-|-|-|
|QTViT-1|160k|**33.2**|
|EfficientViT-B1|-|32.8|
|QTViT-1|320k|**37.6**|
|EfficientViT-B1|-|37.2|
|QTViT-1|640k|**41.2**|
|EfficientViT-B1|-|40.9|
**Q6: Provide more implementation details, experimental setting, and comparison with other methods.**
A6: Thanks for your suggestions. The implementation details are already provided in lines 176-180, the experimental settings are described in the experimental section, and we will add the default image resolution to the main text. The experimental results mentioned above will be added to the final version of our paper.
**Q7: The latency on other type of devices.**
A7: We follow the standard procedure to test the latency on the AMD GPU: we first convert the PyTorch .pth checkpoint into an .onnx file, then conduct the latency test on our device.
We also test the latencies on NVIDIA V100 GPU, and the latency results are shown in **Figure 1 in the rebuttal PDF**. The conclusion is consistent with that of AMD GPU.
---
Rebuttal Comment 1.1:
Title: Thanks for the detailed feedback.
Comment: Thank you for the detailed rebuttal and the additional results provided. I appreciate the effort in addressing the issues raised. The consistent improvement over the ImageNet dataset and its variants, along with the consistent latency improvements across different device types, demonstrates the robustness of your approach. Additionally, the extra experiments and ablation studies on detection and segmentation effectively highlight the method's potential.
However, I do have one minor concern. If I understand correctly, the resolution tested in A2 is 288 rather than 228. While the results are promising, 288 is still not considered a very large resolution where differences in latency might be more pronounced. Furthermore, the performance gain, though present, remains marginal compared to EfficientViT across various model sizes and computation costs.
The overall results suggest that the paper meets the acceptance threshold. Based on the improvements and additional evidence provided, I am willing to increase my score to 5.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you very much for your positive decision.
Yes, the resolution tested in A2 is 288 rather than 228. Since the largest resolution used in EfficientViT-B1 is 288, we used the same setting in A2. We further extend the resolution to 320 and 384. The results are shown below; sorry, with the limited discussion time left, we can only provide the FLOPs and latency results.
|resolution|model|FLOPs (G)|Latency (ms)|
|-|-|-|-|
|320|QTViT-1|1.06|2.44|
|320|EfficientViT-B1|1.06| 2.42|
|384|QTViT-1|1.53|2.63|
|384|EfficientViT-B1|1.53| 2.59|
Best,
Authors of paper 275
---
Rebuttal 2:
Title: Are there any further questions?
Comment: Dear reviewer QZCU,
We appreciate your comments and suggestions in the review. All of your concerns in your original review have been addressed, including the overall improvement and results on ImageNetv2 and real, experiments on different image resolutions, ablations on positional embedding, experiments on semantic segmentation, and inference speed comparison on different devices.
We hope you can read the rebuttal and let us know if you have further questions. Thanks for your time.
Best,
Authors of paper 275 | Summary: This paper proposed a new linear complexity sequence modeling strategy for image modeling. To achieve this, the authors first replace the softmax attention with second-order Taylor expansion, then accelerate its computation with a fast approximation algorithm. The effectiveness of the proposed method is validated on the image classification task.
Strengths: Its performance on ImageNet-1K is good.
Weaknesses: 1. It is unclear why directly using the first-order Taylor expansion is worse than using the second-order Taylor expansion with linear approximation. The author should provide a section to discuss this.
2. Experiment details are missing. For example, different image resolutions can largely impact image classification performance. The author fails to provide the image resolution in the experiment section.
3. Some mathematical derivation can be moved to the supplemental section. For example, there are some redundant equations in Eq. 4, 6, 7, 9.
4. There is no actual training/inference speed comparison. Counting flops can be misleading in some cases. I would like to know the actual speed instead of flops.
5. Missing linear attention vision backbone. For example, VVT (TPAMI 23).
6. For image modeling, object detection and semantic segmentation are also important. I do not know why the authors decided to move these two tasks to the supplemental section. Moreover, the semantic segmentation results are extremely low when compared with other linear image backbones, and the authors do not provide any comments on this.
Technical Quality: 1
Clarity: 2
Questions for Authors: As above.
Confidence: 4
Soundness: 1
Presentation: 2
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: It is unclear why directly using the first-order Taylor expansion is worse than using the second-order Taylor expansion with linear approximation. The author should provide a section to discuss this.**
A1: In fact, the disadvantages of first-order Taylor attention are discussed in the introduction (lines 42-44), preliminaries (lines 102-105), and experiments (lines 198-200) sections, and our main idea is to resolve these disadvantages.
Generally speaking, the first-order Taylor expansion discards too much information from the original attention and therefore requires KD or high-order attention residuals during training to close the performance gap. However, this severely increases GPU memory consumption, which makes these methods unsuitable for training large transformer models. The second-order Taylor expansion retains more information from the original attention and can alleviate this problem. This explanation appears in lines 108-112.
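The gap between the two expansions is easy to quantify (a minimal sketch, independent of the paper's models): on a bounded score range, the second-order Taylor polynomial of $\exp(\cdot)$ has a strictly smaller approximation error than the first-order one.

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 201)  # a hypothetical range of scaled attention scores

first_order = 1 + x                   # exp(x) ~ 1 + x
second_order = 1 + x + 0.5 * x ** 2   # exp(x) ~ 1 + x + x^2 / 2

err1 = np.max(np.abs(np.exp(x) - first_order))
err2 = np.max(np.abs(np.exp(x) - second_order))
assert err2 < err1  # the quadratic term recovers more of exp
```

On this interval the worst-case error drops from about 0.72 (first order) to about 0.22 (second order), which is the information the first-order methods must recover via KD or residuals.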
**Q2: Experiment details are missing. For example, different image resolutions can largely impact image classification performance. The author fails to provide the image resolution in the experiment section.**
A2: We use an image resolution of 224x224 by default in our experiments. The results for other image resolutions such as 256 and 288 are shown below; our QTViT achieves a consistent improvement over the state-of-the-art method EfficientViT. Experiments on other models and the corresponding discussion will be added in the final version of the paper.
|resolution|model|Top-1 Acc (%)|
|-|-|-|
|224|QTViT-1|**79.6**|
|224|EfficientViT-B1|79.4|
|256|QTViT-1|**80.1**|
|256|EfficientViT-B1|79.9|
|288|QTViT-1|**80.6**|
|288|EfficientViT-B1|80.4|
**Q3: Some mathematical derivation can be moved to the supplemental section. For example, there are some redundant equations in Eq. 4, 6, 7, 9.**
A3: In fact, Eqs. 4, 6, 7, and 9 are not redundant. Eq. 4 shows an initial way of decomposing the $exp(\cdot)$ function with a second-order Taylor expansion. Eq. 6 gives the theoretical proof that the square of a dot product can be converted into two Kronecker products followed by a dot product, which is the core analysis of our method. Eq. 7 combines the results of Eqs. 4 and 6 to prove that we can decompose the similarity function into separate kernel embeddings. Finally, Eq. 9 changes the order of the elements in the Kronecker product, which does not affect the result of Eq. 7 and enables the approximation algorithm that reduces the computational cost.
These equations illustrate the main idea of our method and should not be moved to the supplemental section; otherwise, it would be hard for readers to understand our method.
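The identity at the heart of Eq. 6 can be checked numerically (a sketch assuming the standard Kronecker product, not the paper's implementation): the square of a dot product equals the dot product of the two Kronecker self-products.

```python
import numpy as np

rng = np.random.default_rng(0)
q, k = rng.standard_normal(5), rng.standard_normal(5)

# (q . k)^2 == kron(q, q) . kron(k, k): the square of a dot product
# decomposes into two Kronecker products followed by a dot product
lhs = np.dot(q, k) ** 2
rhs = np.dot(np.kron(q, q), np.kron(k, k))
assert np.isclose(lhs, rhs)
```

This is why the quadratic similarity can be written as separate kernel embeddings of the query and the key, at the cost of squaring the feature dimension (here 5 -> 25).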
**Q4: There is no actual training/inference speed comparison. Counting flops can be misleading in some cases. I would like to know the actual speed instead of flops.**
A4: Figure 1 already shows the inference speed comparison among the different methods. We report the inference speed of the models on an AMD Instinct MI250 GPU, as stated in Line 192 of the original paper.
**Q5: Missing linear attention vision backbone. For example, VVT (TPAMI 23).**
A5: Thanks for your suggestion. We will add VVT in the final version. The comparison between our method and VVT is shown in the following table. We achieve results similar to VVT-small and VVT-medium with much fewer FLOPs, and we outperform VVT-tiny and VVT-large by a large margin with much fewer FLOPs and parameters.
|models|parameters|FLOPs|Top-1 Acc (%)|
|-|-|-|-|
|QTViT-1|**9.4**|**0.52**|**79.6**|
|VVT-tiny|12.9|3.0|79.2|
|-|-|-|-|
|QTViT-2|24.9|**1.60**|82.5|
|VVT-small|25.5|5.6|82.6|
|-|-|-|-|
|QTViT-3|49.7|**3.97**|83.9|
|VVT-medium|47.9|9.4|83.8|
|-|-|-|-|
|QTViT-4|**53.0**|**5.26**|**84.7**|
|VVT-large|61.8|10.8|84.1|
**Q6: For image modeling, object detection and semantic segmentation are also important. I do not know why the authors decided to move these two tasks to the supplemental section. Moreover, the semantic segmentation results are extremely low when compared with other linear image backbones, and the authors do not provide any comments on this.**
A6: This is a good question. Due to space limitations, we move the results of object detection and semantic segmentation into the supplemental section. We will put them into the experimental section in the final version of the paper.
For the semantic segmentation results: since EfficientViT did not provide their training details either in their GitHub code or in their paper by the time we submitted our NeurIPS paper, we conducted segmentation experiments based on mmsegmentation, using the same training strategy as UperNet-ResNet50. Note that we use exactly the same training strategy to derive the results of both the proposed QTViT and EfficientViT, and we surpass EfficientViT by a margin, which shows the superiority of the proposed method. The absolute mIoU of both EfficientViT and QTViT can easily be improved by adjusting the training strategy, such as increasing the number of training steps.
As shown in the following table, by merely increasing the training steps, the mIoU results can be greatly increased and approach the result of the official EfficientViT-B1. In all settings, our QTViT outperforms EfficientViT.
|model |training steps|mIoU|
|-|-|-|
|QTViT-1|160k|**33.2**|
|EfficientViT-B1|-|32.8|
|QTViT-1|320k|**37.6**|
|EfficientViT-B1|-|37.2|
|QTViT-1|640k|**41.2**|
|EfficientViT-B1|-|40.9|
---
Rebuttal Comment 1.1:
Comment: 1. The author addressed why using second-order Taylor expansion is better than the first-order Taylor expansion, but did not directly explain why using the first-order Taylor expansion is worse than using the second-order Taylor expansion with linear approximation. This does not resolve my concern in this case.
2. The actual training speed is still missing. Also, the inference speed is measured on an AMD Instinct MI250 GPU. I am not sure if all other methods have been fully optimized on AMD GPUs since other methods were originally implemented for NV GPUs.
---
Rebuttal 2:
Title: Are there any further questions?
Comment: Dear reviewer 5zgq,
We appreciate your comments and suggestions in the review. All of your concerns in your original review have been addressed, including the clarity of second-order Taylor expansion, experiments on different image resolutions, mathematical derivation, inference speed, comparison with VVT and experiments on semantic segmentation.
We hope you can read the rebuttal and let us know if you have further questions. Thanks for your time.
Best,
Authors of paper 275
---
Rebuttal Comment 2.1:
Title: Let's engage in the reviewer-author discussion
Comment: Dear Rev. 5zgq,
We look forward to seeing your comments on the authors' rebuttal, as well as any further clarifications as you may need.
Thanks
---
Rebuttal 3:
Title: Rebuttal to reviewer 5zgq
Comment: Thanks for your comments.
1. If I understand correctly, you are asking why our fast approximation algorithm can represent the second-order Taylor expansion well. In fact, as Table 3 shows, we have tried several different methods to reduce the number of quadratic terms, and using the self-multiplication terms yields the best result among all of them. Generally speaking, the quadratic terms can be viewed as a square matrix ${\bf M}$ in which each element ${\bf M}_{ij}$ represents the product of ${\bf x}_i$ and ${\bf x}_j$. Thus, the self-multiplication terms can be viewed as the elements on the diagonal of the matrix, which can represent the whole matrix to some extent. Besides, we expand the self-multiplication results with respect to the number of quadratic terms and use a learnable parameter to further adjust the ratio. Thus, the overall method can represent all quadratic terms.
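To make the idea concrete, here is a toy numerical sketch of the diagonal approximation described above. This is NOT the paper's implementation; the function names and the scalar `alpha` (standing in for the learnable ratio parameter) are invented for illustration.

```python
# Toy sketch (NOT the paper's code) of the fast approximation described above.
# The quadratic terms of the 2nd-order Taylor expansion of exp(q.k) involve
# every product (q_i k_i)(q_j k_j) -- a d x d matrix M. Keeping only the
# diagonal self-multiplication terms M_ii = (q_i k_i)^2, rescaled by a
# learnable ratio `alpha` (an assumed name), stands in for the full sum.
def full_quadratic_term(q, k):
    s = sum(qi * ki for qi, ki in zip(q, k))
    return 0.5 * s * s  # exact quadratic term of exp's Taylor series

def diagonal_approximation(q, k, alpha):
    diag = sum((qi * ki) ** 2 for qi, ki in zip(q, k))  # d terms, not d^2
    return 0.5 * alpha * diag  # alpha compensates for the dropped terms
```

For instance, with all-ones vectors of dimension 4, the full quadratic term and the diagonal approximation with `alpha = 4` coincide, illustrating how the ratio parameter can recover the dropped off-diagonal mass in simple cases.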
2. The training speeds of EfficientViT and QTViT are shown below, in which we use a batch size of 1024. The inference speeds of different models on NVIDIA V100 GPU are shown in **Figure 1 in the rebuttal pdf** which is also used to answer Q7 from reviewer QZCU.
|model|training speed (ms/batch)|
|-|-|
|EfficientViT-B1|534.5|
|QTViT-1|538.0|
|EfficientViT-B2|586.6|
|QTViT-2|591.5|
|EfficientViT-B3|756.8|
|QTViT-3|763.4|
Hope these answers can address your concerns.
---
Rebuttal Comment 3.1:
Comment: Thanks for your prompt reply.
I am digging into the approximation and found a paper [1] that is very similar to the proposed method, i.e., doing a second-order Taylor expansion of exp(x) and then writing it as inner products. I noticed that [1] was first published on arxiv on 28 February 2024 and accepted in ICLR 24. Here is an implementation as well: https://github.com/lucidrains/taylor-series-linear-attention.
Would you please provide a detailed discussion of the differences between these two methods? Also, the paper is missing a citation of [1].
[1] Arora, S., Eyuboglu, S., Zhang, M., Timalsina, A., Alberti, S., Zinsley, D., ... & Ré, C. (2024). Simple linear attention language models balance the recall-throughput tradeoff. arXiv preprint arXiv:2402.18668.
---
Rebuttal 4:
Title: Rebuttal to reviewer 5zgq
Comment: First of all, this paper directly used the conclusion of the second-order Taylor expansion from paper [a1] at the start of page 7. However, the conclusion in paper [a1] has some problems. For example, on page 6 of paper [a1], they have $d'=1+d+d^2$. However, the dimension of $\varphi(x)$ should be $(d+1)^2+1=2+2d+d^2$, which can be rigorously derived from Eq. 8 in our paper. Besides, paper [a1] does not provide any theoretical analysis or derivation to explain the dimension $d'$. Thus, we believe that our conclusion is correct.
Secondly, the acceleration in that paper relies on a hardware-efficient algorithm based on CUDA, which limits the applicability of the method: it can only be accelerated on NVIDIA GPUs and not on other devices. The time complexity of their method is still $O(Nd^3)$, and mapping the input to lower dimensions as they do yields bad results (see methods 1 and 2 in Table 3 in our paper). Our method provides a fast approximation algorithm that reduces the time complexity to $O(Nd^2)$ and can be used without such limitations.
Thirdly, this paper was published on arXiv only two months before the NeurIPS 2024 deadline, so we did not cite or discuss it. We will add these discussions in our final version.
Thank you very much.
[a1] The Hedgehog & the Porcupine: Expressive Linear Attentions with Softmax Mimicry. ArXiv, 2024.
---
Rebuttal Comment 4.1:
Comment: The issue in [a1] has been discussed in the paper "Linear Transformers with Learnable Kernel Functions are Better In-Context Models". However, the distinction between this paper and your approach is that one uses "concat" while the other uses "add".
The contribution of the hardware implementation of different platforms is not enough for a NeurIPS paper.
Can you rephrase the main insight or contribution of your paper based on this information?
---
Rebuttal 5:
Title: Are there any further questions?
Comment: Dear reviewer 5zgq,
We appreciate you taking the time to review this paper. Since it is less than 10 hours before the deadline for the discussion phase, we hope that we have addressed all your concerns.
If you still have any questions, we are happy to discuss them with you. Thank you very much.
Best,
Authors of paper 275
---
Rebuttal 6:
Title: Rebuttal to reviewer 5zgq
Comment: The paper [a2] you mentioned does not solve any of the problems in [a1] that we mentioned above. In fact, it does not even cite paper [a1]. This paper simplifies the second-order Taylor expansion to $\phi(x)=x^2$, as shown on page 5, and uses a linear transformation before applying the kernel function. Thus, their implementation is much simpler than the second-order Taylor expansion, since they discard all terms except the quadratic one. There is no theoretical analysis showing the relationship between this method and the second-order Taylor expansion.
Thus, compared to paper [a2], our contributions are summarized as follows:
1. We improve linear self-attention using the **correct** quadratic Taylor expansion through rigorous theoretical analysis and derivations, while there is no such analysis in [1, a1, a2] and the conclusions they derive are either incorrect [1, a1] (see the dimension $d'$ we mentioned above) or have little connection to the second-order Taylor expansion [a2].
2. We propose a fast approximation algorithm to reduce the time complexity of our method from $O(Nd^3)$ to $O(Nd^2)$, while they use methods such as reducing the dimension of the input through a linear layer or relying on a hardware-efficient algorithm based on CUDA, both of which have their own drawbacks, as analyzed previously.
3. These three papers [a1, a2, 1] were all published within 2-3 months before the submission deadline of NeurIPS 2024. Thus it is unfair to judge our novelty based on them according to the policy of the NeurIPS conference.
Besides, the comment about the hardware implementation on different platforms pertains to paper [1] you mentioned above, not to ours.
Hope the above analysis can address your concern.
[a2] Linear Transformers with Learnable Kernel Functions are Better In-Context Models. ArXiv, 2024.
---
Rebuttal Comment 6.1:
Comment: Well, all linear methods share the same complexity of O(nd^2).
The paper begins with a second-order Taylor expansion to reduce the complexity to O(nd^3), and then utilizes a feature map of concat(q^2, q) to further reduce the complexity to O(nd^2).
However, the transition from the Taylor expansion to concat(q^2, q) is not smooth.
Using concat(q^2, q) is somewhat new. However, I would like to see stronger evidence for its efficacy. The results of other vision tasks are not compelling.
Thanks for the quick response and for discussing the related work. Considering the others' ratings, I would like to keep my initial rating as the avg 5.5 reflects the quality of this work.
---
Reply to Comment 6.1.1:
Title: Regret to hear that
Comment: We regret to hear that you wouldn't change your score. In fact, simply using concat(q^2, q) yields bad results, since the coefficients before each term and the constant term are important for deriving our SOTA results.
We believe that the simplicity of our implementation is an advantage rather than a drawback. The analysis and derivation leading to the final equation form is the most important part of our method, and it has not been discussed in previous papers.
In any case, we appreciate the time you spent reviewing this paper. | Rebuttal 1:
Rebuttal: The PDF contains the answer to Q7 of Reviewer QZCU that adds the latencies of different models on NVIDIA-V100 GPU, and the answer to Q3 of Reviewer dQG7 that adds the results of QTViT 1-3 and EfficientViT-L series into the figure.
Pdf: /pdf/2ba18fdecae8c979ef39cbbc6db3f63af08dcf87.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Text2NKG: Fine-Grained N-ary Relation Extraction for N-ary relational Knowledge Graph Construction | Accept (poster) | Summary: This work introduces Text2NKG, a proclaimed first method for extracting n-ary facts from text to construct a KG. The focus on n-ary facts makes this a more complex task than standard RE, as an n-ary fact can hold more entities than the standard RDF {subject, relation, object} triple.
Candidate 3-ary span tuples are formed from entity combinations in a sentence. These are embedded using BERT and put through linear classifiers (feed-forward networks). The resulting 3-ary facts are then combined into n-ary facts using "hetero-ordered merging". The fine-tuning process makes use of a data augmentation technique, a balancing parameter to compensate for the large number of "no relation" labels, and is performed without merging.
The method is verified on HyperRED (the only publicly available hyper-relational dataset) and beats several SOTA methods.
Strengths: - The motivation and contributions of the paper are clear and concrete. The previous limitations of research (Diversity, determination of order, and variability) are clearly mentioned and addressed.
- The authors compare to a variety of different baselines, including SOTA LLM (GPT).
- For this, a number of prompt templates have been considered
- Despite the sometimes overwhelming amount of notations, the work is generally pleasant to read.
Weaknesses: - The provided code is complex with low readability (>5 nested for/while statements with little/no comments and no documentation).
- Code makes mention of the ACE dataset, but this does not seem to have made it into the final work, adding to the confusion.
- Other than the case study in Figure 5, the error analysis is fairly weak and generally performance based. There is no inspection provided as to what explains (for example) the discrepancy between "Text2NKG" and "Text2NKG w/o HM".
- Prompt templates for GPT that were considered (or eventually used for best result) are not included
- BERT was trained on a copy of Wikipedia in 2018. There seems to be no consideration that the data in HyperRED may overlap with the pre-training data of BERT.
- The work hyperfixates on a single benchmark: HyperRED, despite others being mentioned.
Comments:
Figure 2: Typo "hyper-relaional"
Line 159: typo
Line 214: Broken sentence.
Line 219: Why was it decided to name these Generative and Pipeline and not specific to the previous works? (BART and UniRE)
Table 2: The GPT results don't need 4 decimal places
Table 2: typo (pipelinge)
Equation 6: It is unclear to me how this is derived, and it is not explained why (the how is there) this specific formulation works.
Figure 5a: Figure is hard to read.
Technical Quality: 2
Clarity: 3
Questions for Authors: Questions:
- Line 40: Several more benchmarks for NKG are mentioned (JF17K, Wikipeople, WD50K, etc.), but not considered (or no argumentation is provided). Why were these not considered?
- From inspection, what are cases where the model fails (e.g. seemingly difficult cases where it struggles) or cases where it consistently succeeds?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for a careful review of our work. We appreciate that you find our work concrete in motivation and pleasant to read. We respond to your specific concerns and questions below.
---
**For Weakness 1 (Code makes mention of ACE dataset, but this does not seem to have made it into the end-work.):**
We are pleased by your interest in our code. We will optimize it to better contribute to the open-source community. Previously, we converted the ACE dataset into a benchmark for n-ary relation extraction, but reviewers found it contributed little as a benchmark, so we removed it from the paper. Any remaining mentions in the code will be corrected in the final release. Thank you for your feedback.
---
**For Weakness 2 (There is no inspection provided as to what explains the discrepancy between "Text2NKG" and "Text2NKG w/o HM".):**
Thanks for your suggestions. As described in **Section 3**, n-ary RE involves more entity orders than binary RE. Text2NKG uses a hetero-ordered merging method to consider probabilities of different entity orders.
In the **Text2NKG w/o HM** setting, we didn't combine probabilities of different arrangements during decoding, leading to a decrease in performance.
---
**For Weakness 3 (Prompt templates for GPT that were considered are not included):**
Thanks for your suggestions. Our ChatGPT and GPT-4 prompts both adopted a 1-shot unsupervised design. The specific input prompts (which we will add to the Appendix of the final paper) can be referenced from our response to the **Weakness 1(i)** mentioned by **Reviewer rUhA**.
---
**For Weakness 4 (There seems to be no consideration that the data in HyperRED may overlap with the pre-training data of BERT.):**
BERT is widely used for downstream tasks, including n-ary relation extraction. Baseline models like CubeRE and those from the HyperRED dataset paper are also fine-tuned on BERT-based pre-trained methods for fair comparison. Plus, since Wikipedia is not originally structured with n-ary relational schemas, HyperRED focuses on extracting these structured relations from unstructured text, making it a more challenging task that involves entity count, order, and evaluation criteria.
---
**For Weakness 5 (The work hyperfixates on a single benchmark: HyperRED, despite others being mentioned.):**
There is a scarcity of work on automatically constructing NKGs from natural texts. HyperRED is currently the only hyper-relational extraction dataset. Text2NKG is the first method to unify n-ary relation extraction across four types of NKG schemas and can support more NKG schemas.
---
**For Comment (1) (Typos in Figure 2, Line 159, Line 214, and Table 2.):**
Thanks for your careful reading. We will correct "hyper-relaional" in **Figure 2** to "hyper-relational", "imput" in **Line 159** to "input", 4 decimal places in **Table 2** to 2 decimal places, and "Pipelinge" in **Table 2** to "Pipeline". These corrections greatly enhance the presentation of the paper and will all be implemented.
---
**For Comment (2) (Why was it decided to name these Generative and Pipeline?):**
As stated in **Table 2**, the results of the supervised baseline models are primarily taken from the original paper of CubeRE. For consistency, we retained the baseline names from the CubeRE paper, using the same terms: Generative and Pipeline instead of BART and UniRE.
---
**For Comment (3) (Equation 6: It is unclear to me how this is derived, and it is not explained why this specific formulation works.):**
In the data augmentation section in **Section 4.3**, we train on all 6 arrangements of A, B, C. Therefore, in hetero-ordered merging (the prediction phase), we also need to consider all 6 arrangements to accurately evaluate (A, B, C). However, we must convert the 6 different orders back to $(A, r_1, B, r_2, C)$ to obtain the corresponding probability $P_i$ for each $r_i$ and sum them up. Take the calculation of $\mathbf{P}_{1}$ as an example, which is derived from the sum of six terms. The first term $\mathbf{P}_1^{(ABC)}$ represents the probability of $r_1$ in $(A, r_1, B, r_2, C)$ without any operation. The second term $I(\mathbf{P}_1^{(BAC)})$ represents the probability of $r_1'=r_1^{-1}$ in $(B, r_1^{-1}, A, r_2, C)$, where $r_1'$ needs to be inverted to transform back to the (A, B, C) order. The remaining 4 terms follow similar logic. Finally, we compute the combined probabilities of $r_1$ and $r_2$ in (A, B, C) to obtain Equation 6.
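A minimal sketch of this merging logic is shown below. This is illustrative only, not the authors' code: the label names, the probability values, and which arrangements require the inversion operator are all hypothetical.

```python
# Illustrative sketch of hetero-ordered merging (not the authors' code):
# predictions from the different arrangements of (A, B, C) are mapped back to
# the canonical order -- applying the inversion operator I(.) where the
# relation direction is reversed -- and their probabilities are summed.
def merge_probabilities(prob_by_arrangement, needs_inversion, invert):
    total = {}
    for arrangement, probs in prob_by_arrangement.items():
        for label, p in probs.items():
            key = invert(label) if arrangement in needs_inversion else label
            total[key] = total.get(key, 0.0) + p
    return total

# Hypothetical example: r1 is predicted directly from the (A,B,C) order, and
# as its inverse r1^-1 from the (B,A,C) order; merging recombines both.
invert = lambda lbl: lbl[:-3] if lbl.endswith("^-1") else lbl + "^-1"
merged = merge_probabilities(
    {"ABC": {"r1": 0.6}, "BAC": {"r1^-1": 0.3}},
    needs_inversion={"BAC"},
    invert=invert,
)
```

Here the two arrangement-specific predictions contribute to a single combined score for `r1`, mirroring the six-term sum described for $\mathbf{P}_1$.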
---
**For Comment (4) (Figure 5a is hard to read.):**
In **Figure 5a**, "pred_n" represents the number of extracted n-ary facts with different arities $n$ by Text2NKG, and "ans_n" represents the ground truth. As training progresses, Text2NKG's output converges to the correct number of n-ary facts for each arity, demonstrating its ability to handle n-ary RE with arbitrary arity and good scalability.
---
**For Question 1 (Several more benchmarks for NKG are mentioned (JF17K, Wikipeople, WD50K, etc.). Why were these not considered?):**
Datasets like JF17K, Wikipeople, and WD50K, used for link prediction in n-ary relational knowledge graphs (NKGs), lack original text. Practical fields like medicine, finance, and law require extracting n-ary relational facts from natural language. HyperRED is the only high-quality dataset for this. Mentioning other NKGs helps design Text2NKG's method to cover all NKG schemas.
---
**For Question 2 (What are cases where the model fails or cases where it consistently succeeds?):**
From **Table 2** and **Table 3**, Text2NKG shows higher Precision than Recall, indicating it accurately extracts n-ary facts but often misses some.
To balance this, we introduced a null-label weight hyperparameter ($\alpha$). As shown in Figure 4(c), increasing $\alpha$ raises Precision but lowers Recall by predicting more null-labels. Decreasing $\alpha$ does the opposite. This adjustability allows Text2NKG to be optimized for different scenarios.
---
Rebuttal Comment 1.1:
Comment: The answer to question 2 still amounts to a quantitative-only evaluation with little to no qualitative inspection (i.e., looking at examples where the model does well or does not).
I decide to keep my previous scores. | Summary: Text2NKG is a novel framework for fine-grained n-ary relation extraction aimed at constructing n-ary relational knowledge graphs (NKGs). Unlike traditional binary relational knowledge graphs, NKGs encompass relations involving more than two entities, making them more reflective of real-world scenarios. Text2NKG employs a span-tuple classification approach along with hetero-ordered merging and output merging techniques. It supports multiple NKG schemas—hyper-relational, event-based, role-based, and hypergraph-based—and achieves state-of-the-art performance in fine-grained n-ary relation extraction benchmarks.
Strengths: 1) Traditional triplet representations of knowledge cannot fully express complex relationships. However, there is limited research on the extraction of multi-ary and hyper-relations. This study on fine-grained n-ary relations has significant potential applications.
2) The method proposed in the article improved extraction effectiveness by 20% in n-ary extraction.
3) The author considered many additional technical issues, such as comparisons of LLMs and handling long texts, and conducted supplementary analyses.
4) A clever method was used to solve the entity order problem in n-ary relation extraction, and the experiments demonstrated that it is possible to extract facts of varying arity in an unsupervised manner.
5) The code and datasets used in the experiments are made publicly available, promoting transparency, reproducibility, and further research in this area.
Weaknesses: 1) The comparison settings with the large language models need to be clearer.
2) Line 159: imput -> input.
Technical Quality: 4
Clarity: 4
Questions for Authors: What is the connection and difference between n-ary relation extraction and event extraction?
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The authors have listed the limitations in the Checklist.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for a detailed review. We are delighted that you think our work has significant potential applications and technical effectiveness. We answer your questions below.
---
**For Weakness 1 (The comparison settings with the large language models need to be clearer.):**
Thanks for your suggestions. Our ChatGPT and GPT-4 prompts both adopted a 1-shot unsupervised design. The specific input prompts (which we will add to the Appendix of the final paper) can be referenced from our response to the **Weakness 1(i)** mentioned by **Reviewer rUhA**.
---
**For Weakness 2 (Line 159: imput -> input.):**
Thanks for your careful reading. We will correct this typo in the final paper.
---
**For Question (What is the connection and difference between n-ary relation extraction and event extraction?):**
N-ary relation extraction and event extraction are closely related tasks in natural language processing that aim to extract structured information from text. N-ary relation extraction focuses on identifying relationships involving more than two entities. Event extraction, which can be viewed as one specific schema within n-ary relation extraction, identifies events described in the text and extracts the participants and attributes associated with them. An event consists of a trigger word indicating the occurrence and arguments that represent the entities involved.
The primary difference between the two lies in their focus and output. N-ary relation extraction captures static relationships among multiple entities, resulting in tuples that represent complex interactions. Event extraction, however, focuses on dynamic occurrences, identifying events, triggers, and arguments to produce structured event representations.
---
Rebuttal Comment 1.1:
Comment: Thank you for your clear explanations. Your responses have addressed my concerns, and I appreciate the additional insights provided. I maintain my positive decision. | Summary: The paper introduces a novel framework for fine-grained n-ary relation extraction aimed at constructing n-ary relational knowledge graphs (NKGs). Traditional KGs primarily focus on binary relations, but this work targets n-ary relations which involve more than two entities, aligning more closely with real-world facts.
Strengths: 1. The writing of the paper is clear, well-structured. The logical flow from the problem definition to the proposed methodology and experimental results makes it easy to follow.
2. The span-tuple classification and hetero-ordered merging approach are novel and effective, enabling the extraction of fine-grained n-ary relations which are more representative of real-world facts compared to traditional binary relations.
3. Proposed method achieves state-of-the-art performance on the HyperRED benchmark, with significant improvements in F1 scores over existing methods.
4. The paper compares Text2NKG with ChatGPT (gpt-3.5-turbo) and GPT-4 for N-ary extraction performance across the four schemas, and analyzes the advantages and disadvantages between large and small models for NKG construction.
Weaknesses: 1. Although the framework shows potential for scalability, the paper does not thoroughly address the computational efficiency and scalability when applied to real-time applications.
2. While the paper includes ablation studies, more detailed analysis and discussion on the impact of each component (e.g., data augmentation, null-label weight hyperparameter) on different types of NKG schemas would provide deeper insights into their contributions.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. Can you provide an example of extracting N-ary relations in real-world information extraction?
2. How should NKG be stored and utilized, and can n-ary extraction be applied in RAG or Agent in the era of LLMs?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: See the Weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful evaluation of our paper. We are happy that you find our paper clear to follow and the results impressive. We respond to your concerns and questions below.
---
**For Weakness 1 (The paper does not thoroughly address the computational efficiency and scalability.):**
As described in **Section 3** and **Section 4.2**, given an input sentence with $l$ words $s=\{w_1,w_2,...,w_l\}$, an entity $e$ is a consecutive span of words: $e=\{w_p,w_{p+1},...,w_q\}\in \mathcal{E}_s$, where $p,q\in\{1,...,l\}$, and $\mathcal{E}_s=\{e_1,e_2,...,e_m\}$ is the set of all $m$ entities in the sentence. To perform fine-grained n-ary RE, we first need to encode a span-tuple ($e_1,e_2,e_3$) for every arrangement of three ordered entities, where $e_1,e_2,e_3\in \mathcal{E}_s$. The main computational cost of Text2NKG lies in selecting every span-tuple of three ordered entities, encoding them, and obtaining the classified labels in the multi-label classification part. If we adopted a traversal approach with each span-tuple as one training item, the time complexity would be O($m^3$).
To avoid the high time complexity of training every span-tuple as a separate training item, Text2NKG uses packed levitated markers that pack one training item for each entity in $\mathcal{E}_s$. Every input sentence thus yields $m$ training items covering the span-tuples of every ordered arrangement of three entities for multi-label classification. Therefore, the time complexity decreases from O($m^3$) to O($m$).
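The difference in training-item counts can be illustrated with a simple enumeration. This is a sketch of the counting argument only, not the actual model or training code.

```python
# Sketch of the item counts only (not the actual model): enumerating every
# ordered 3-entity span-tuple as its own training item is O(m^3), while
# packing all tuples that share a sentence into one per-entity item (as
# packed levitated markers allow) leaves only m items per sentence.
from itertools import permutations

def naive_item_count(m):
    return sum(1 for _ in permutations(range(m), 3))  # m*(m-1)*(m-2)

def packed_item_count(m):
    return m  # one packed training item per entity in the sentence
```

For a sentence with 10 entities, the naive enumeration yields 720 ordered span-tuples, whereas the packed scheme needs only 10 training items.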
---
**For Weakness 2 (More detailed analysis and discussion on the impact of each component on different types of NKG schemas would provide deeper insights into their contributions.):**
Thanks for your suggestions. In the setup of the ablation experiments, we separately removed the three main components of Text2NKG: data augmentation (DA), the null-label weight hyperparameter (α), and hetero-ordered merging (HM). DA and α are explicitly discussed in **Section 4.3 (Training Strategy)** and are part of the Multi-label Classification in **Figure 3**, representing two main training strategies. HM is part of the Hetero-ordered Merging in **Figure 3**. The ablation experiment results show that all three components contribute to the final outcome, validating the rational design of these modules.
Specifically, in the **Text2NKG w/o DA** setting, we did not perform augmented training with all 6 permutations of span-tuples, only training with the original order. This led to uneven training samples, causing some relational representations to be under-trained and reducing effectiveness. In the **Text2NKG w/o α** setting, we set α to 1.0, giving the null-label the same weight as other labels. Since the null-label appears far more frequently than other labels in training, this caused the model to favor classifying as null-label, resulting in fewer extracted n-ary relational facts and a drop in recall, thus decreasing performance. In the **Text2NKG w/o HM** setting, we didn't combine the probability values of different permutations during the decoding phase, preventing the model from fully considering different ordering, leading to a decrease in performance.
---
**For Question 1 (Can you provide an example of extracting N-ary relations in real-world information extraction?):**
Take medical diagnostic decision-making applications as an example. Medical knowledge is more complex than general domain facts, with a higher proportion of n-ary relational facts. The medical fact **"A male hypertensive patient is diagnosed with mild creatinine elevation when serum creatinine is between 115-133μmol/L"** is represented as the main triplet with gender and two creatinine value indicators as auxiliary keys: **(Hypertensive patient, diagnosis, mild increase in creatinine | gender: male, numerical indicator: blood creatinine (μmol/L) ≥ 115, numerical indicator: blood creatinine (μmol/L) ≤ 133)**, forming a hyper-relational fact with five entities, more completely representing complex hypertension medical knowledge.
With the Text2NKG framework, we can more easily extract NKG in hyper-relational schema from medical guidelines to model medication and treatment rules, a process previously often entirely manual. With Text2NKG's help, this process can be simplified to automated extraction and manual review. The extracted n-ary relational facts comprising the NKG can be used for subsequent tasks through link prediction, hierarchical graph, multi-hop logical queries, etc., realizing the practical application of NKG.
---
**For Question 2 (How should NKG be stored and utilized, and can n-ary extraction be applied in RAG or Agent in the era of LLMs?):**
NKGs should be stored in structured and efficient formats to facilitate easy retrieval and manipulation. Graph databases like Neo4j and TuGraph are well suited to managing n-ary relations because they are optimized for graph data. An n-ary relational fact can be represented using a special node to denote the n-ary relation and then stored as traditional binary relational RDF. Efficient indexing and querying mechanisms, including SPARQL for RDF-based graphs, are crucial for quick data access.
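As a concrete sketch of the special-node idea, a hyper-relational fact can be flattened into binary triples hanging off a dedicated statement node. The entity and predicate names below are invented for illustration and are not tied to any particular database or dataset.

```python
# Sketch of n-ary fact reification (invented entity/predicate names): a
# hyper-relational fact (s, r, o | key: value, ...) is stored as binary
# triples attached to a dedicated statement node, so any RDF-style triple
# store can hold it.
def reify(statement_node, subj, rel, obj, qualifiers):
    triples = [
        (statement_node, "subject", subj),
        (statement_node, "relation", rel),
        (statement_node, "object", obj),
    ]
    triples += [(statement_node, k, v) for k, v in qualifiers.items()]
    return triples

triples = reify(
    "stmt1", "PersonX", "educated_at", "UniversityY",
    {"start_year": "2010", "degree": "PhD"},
)
```

The resulting flat triples can then be queried with standard mechanisms such as SPARQL, with the statement node grouping the main triple and its qualifiers.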
N-ary extraction can significantly benefit RAG models and intelligent agents in the era of LLMs. For RAG models, n-ary extraction enhances contextual understanding and enables dynamic content generation by providing richer contexts and retrieving relevant information from knowledge bases, improving GraphRAG. In LLM agents, n-ary extraction improves decision-making and interaction quality by enabling better comprehension of complex relationships and contextual reasoning. The scalability of n-ary extraction allows agents to adapt to various domains, offering versatility through custom n-ary relation schemas.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed rebuttal and clarifications. Your explanation of the time complexity reduction using packed levitated markers and the additional details on the ablation studies provide a clearer understanding of the computational efficiency and the contributions of each component within the framework.
The real-world example of extracting n-ary relations in the medical domain helps illustrate the practical application of your approach. Additionally, your discussion on the storage and utilization of NKGs, particularly in relation to RAG models and intelligent agents, offers valuable insights into the potential use cases of your work.
Overall, your rebuttal effectively addresses the concerns raised, and I appreciate the additional context provided. I remain positive about the contributions of your paper. | Summary: The paper presents Text2NKG, a new framework designed for fine-grained n-ary relation extraction aimed at constructing n-ary relational knowledge graphs (NKGs). By introducing a span-tuple classification method combined with hetero-ordered merging and output merging strategies, it achieves extraction across varying entity arities while preserving their order, enhancing the granularity of NKG construction. Moreover, Text2NKG is adaptable to four prevalent NKG schemas.
Strengths: **Validated Effectiveness**: Experimental results demonstrate the capability of Text2NKG in n-ary relation extraction.
**Relevance and Significance**: This work addresses a relevant challenge in knowledge graph construction. Its alignment with practical application requirements enhances the significance of the research.
Weaknesses: **Lack of Implementation Detail**: Vital specifics regarding input prompts to language models like GPT-4, and whether the Generative Baseline and Pipeline Baseline underwent supervised fine-tuning, are omitted. These details are crucial for reproducibility and assessing the thoroughness of comparative analysis.
**Inherent Limitations of BERT-style Architectures**: The reliance on BERT-based architectures may restrict the handling of long texts. This could limit the scalability and applicability of Text2NKG in domains with extensive contextual requirements.
**Limited application scope**: Although Text2NKG supports 4 typical NKG schemas, the framework seems to be specifically designed for these specific structures, which may limit its generality in practical applications.
Technical Quality: 2
Clarity: 2
Questions for Authors: Please refer to the Weaknesses
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 1
Limitations: The authors have addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the detailed review. We appreciate that you find our work effective for n-ary relation extraction and consistent with practical applications. We answer your concerns below.
---
**For Weakness 1(i) (Vital specifics regarding input prompts to language models like GPT-4.):**
Our ChatGPT and GPT-4 prompts both adopted a 1-shot unsupervised design. The specific input prompts (which we will add to the Appendix of the final paper) are as follows:
```
Task: Based on the relation_list and qualifier_list and the given input sentence and entity_list, output n-ary relational facts in hyper-relational schema.
relation_list=['adjacent station', 'award received', 'candidacy in election', 'capital of', 'cast member', 'chairperson', 'child', 'coach of sports team', 'connecting line', 'country', 'country of citizenship', 'director / manager', 'educated at', 'employer', 'followed by', 'head of government', 'head of state', 'headquarters location', 'home venue', 'incarnation of', 'instance of', 'league', 'legislative body', 'located in the administrative territorial entity', 'located on street', 'location', 'manufacturer', 'member of', 'member of political party', 'member of sports team', 'military branch', 'narrative role', 'noble title', 'nominated for', 'notable work', 'occupant', 'occupation', 'operator', 'original broadcaster', 'owned by', 'parent organization', 'part of', 'part of the series', 'participant', 'participating team', 'partner in business or sport', 'performer', 'place of birth', 'position held', 'present in work', 'replaces', 'residence', 'shares border with', 'significant event', 'sport', 'sports season of league or competition', 'spouse', 'stock exchange', 'subclass of', 'used by', 'voice actor', 'winner']
qualifier_list=['academic degree', 'academic major', 'adjacent station', 'affiliation', 'applies to part', 'character role', 'connecting line', 'country', 'diocese', 'electoral district', 'end time', 'follows', 'for work', 'has part', 'instance of', 'located in the administrative territorial entity', 'location', 'member of political party', 'mother', 'national team appearances', 'nominee', 'number of matches played/races/starts', 'number of points/goals/set scored', 'object has role', 'of', 'performer', 'point in time', 'position held', 'position played on team / speciality', 'publication date', 'quantity', 'ranking', 'replaces', 'series ordinal', 'sports league level', 'start time', 'statement disputed by', 'statement is subject of', 'street number', 'subject has role', 'ticker symbol', 'together with', 'towards', 'winner']
Example:
Input:
sentence='The current Leader of the National Party in the Parliament of Australia is Barnaby Joyce , and Deputy Leader is Fiona Nash , both elected on 11 February 2016 following the retirement of Warren Truss as Leader .'
entity_list=['National Party', 'Barnaby Joyce', '11 February 2016', 'Warren Truss']
Output:
[(entity1='National Party', relation='chairperson', entity2='Warren Truss', qualifier1='end time', entity3='11 February 2016'), (entity1='National Party', relation='chairperson', entity2='Barnaby Joyce', qualifier1='start time', entity3='11 February 2016')]
Then:
Input:
sentence='Apple was founded by Steve Jobs , Steve Wozniak , and Ronald Wayne on April 1 , 1976 , to develop and sell personal computers .' entity_list=['Apple', 'Steve Jobs', 'April 1 , 1976']
Output:
```
**For Weakness 1(ii) (Whether the Generative Baseline and Pipeline Baseline underwent supervised fine-tuning, are omitted.):**
As **Section 5.1** and **Appendix D** show, we adopted the same baselines as in the CubeRE paper to compare with Text2NKG. The settings and results of the Generative Baseline and Pipeline Baseline are both from that paper. Similar to CubeRE, Text2Event, UIE, and LasUIE, the Generative Baseline and Pipeline Baseline used supervised training methods. Additionally, as shown in **Table 2**, unsupervised and supervised methods are obviously distinguished.
---
**For Weakness 2 (The reliance on BERT-based architectures may restrict the handling of long texts.):**
In **Appendix G.3**, we have analyzed how Text2NKG can address long contexts with relations spread across various sentences. If the text to be extracted is lengthy, it can undergo long-form n-ary relation extraction. The maximum text segment size for our proposed method depends on the maximum text length that a transformer-based encoder can accept, such as BERT-base and BERT-large, which have a maximum limit of 512 tokens. To extract from larger documents, we simply need to switch to encoders with longer context lengths, which all serve as the encoder portion of Text2NKG and are entirely decoupled from the n-ary relation extraction technique we propose. This is one of the advantages of Text2NKG. Its primary focus is to address the order and combination issues of n-ary relationships. We can seamlessly combine a transformer encoder that supports long texts with Span-tuple Multi-label Classification to process n-ary relation extraction in long documents.
---
**For Weakness 3 (Although Text2NKG supports 4 typical NKG schemas, the framework seems to be specifically designed for these specific structures.):**
Text2NKG's multi-label classification strategy and hetero-ordered merging strategy allow users to freely replace custom n-ary relation schemas. When users define an n-ary relational fact composed of n entities and m relations, they only need to set the corresponding number of FNNs for the output of m categories. Then, by using user-defined rules for hetero-ordered merging, custom fine-grained n-ary relation extraction can be quickly and flexibly implemented.
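As a rough illustration of the "m classification heads per ordered span-tuple, then merge" idea described above (a hypothetical sketch of ours, not the authors' implementation; the toy heads below stand in for trained FNN classifiers):

```python
from itertools import permutations

# Hypothetical sketch: every ordered 3-ary span-tuple is scored by m
# label "heads"; tuples where all heads fire become candidate n-ary
# facts, which user-defined merging rules could then combine.
def extract_facts(entities, heads):
    facts = []
    for tup in permutations(entities, 3):  # order-sensitive tuples
        labels = [head(tup) for head in heads]
        if all(label is not None for label in labels):
            facts.append((tup, tuple(labels)))
    return facts

# Toy heads for m = 2 (one main relation label, one qualifier label).
rel_head = lambda t: "chairperson" if t == (
    "National Party", "Barnaby Joyce", "11 February 2016") else None
qual_head = lambda t: "start time" if t[2] == "11 February 2016" else None

facts = extract_facts(
    ["National Party", "Barnaby Joyce", "11 February 2016"],
    [rel_head, qual_head],
)
assert len(facts) == 1
```

Swapping in a different schema would, per the rebuttal, amount to changing the number of heads and the merging rules.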
Plus, the hyper-relational schema, event-based schema, role-based schema, and hypergraph-based schema cover nearly all schemas of n-ary relational knowledge graphs (NKGs). In **Section 2.1**, we provide a detailed summary of existing instances of NKGs. | Rebuttal 1:
Rebuttal: We thank all four reviewers for carefully reading our paper and providing constructive feedback. We appreciate the recognition of our work's strengths, which are summarized as follows.
**1. Novelty in Motivation.**
To address challenges such as the diversity of NKG schemas, determination of the order of entities, and variability of the arity of n-ary RE, we propose a novel fine-grained n-ary relation extraction framework, Text2NKG, for NKG construction. (Supported by Reviewers: ED7F: Strength 1, GMbk: Strength 1, XsBb: Strength 1)
**2. Practicality and Flexibility.**
Over 30% of real-world facts contain more than two entities, with multiple n-ary schemas. Text2NKG supports any schema of n-ary relation extraction flexibly. (Supported by Reviewers: rUhA: Strength 2, ED7F: Strength 2, GMbk: Strength 4, XsBb: Strength 3)
**3. SOTA Performance and Effectiveness.** Experimental results show that Text2NKG achieves state-of-the-art performance in fine-grained n-ary relation extraction tasks, showing its effectiveness. (Supported by Reviewers: rUhA: Strength 1, ED7F: Strength 3, GMbk: Strength 2)
**4. Extensive Verification and Analysis.**
We conducted extensive experiments to demonstrate the effectiveness of Text2NKG and each of its modules, and compare it further with advanced LLMs like GPT. We also analyzed the advantages and disadvantages of large and small models for this task, and considerations for long texts. (Supported by Reviewers: ED7F: Strength 4, GMbk: Strength 3, XsBb: Strength 2)
We believe we have addressed all points mentioned and substantially improved the paper. We have responded to each of the four reviewers individually. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Maximum Entropy Inverse Reinforcement Learning of Diffusion Models with Energy-Based Models | Accept (oral) | Summary: This work proposes a maximum entropy inverse reinforcement learning (IRL) approach for improving the sample quality of diffusion generative models, especially when using a small number of generation steps.
Strengths: 1. The paper presents a novel and principled approach for improving diffusion models by formulating the problem as maximum entropy IRL. This elegant formulation allows jointly optimizing a diffusion model and EBM to enhance sample quality, especially with fewer generation steps.
2. The proposed DxDP algorithm is a key technical innovation that makes optimizing the IRL objective tractable. By leveraging ideas from optimal control and dynamic programming, DxDP enables efficient diffusion model updates without back-propagation through time, which is a significant practical benefit.
3. Strong empirical results on both image generation and anomaly detection tasks demonstrate the approach's effectiveness. DxMI can generate high-quality samples with very few (4-10) steps and enables training EBMs without MCMC; such a diffusion step reduction is impressive to me.
Weaknesses: 1. While the acclaimed acceleration of the diffusion model looks effective (and is also stated as a major advantage of this algorithm), the comparison to prior diffusion model acceleration methods is somewhat limited. A more comprehensive evaluation across different speed-quality tradeoffs and more discussion of DxMI's relative strengths and weaknesses compared to other approaches would be essential.
2. While the empirical results look good, the paper lacks theoretical analysis of the proposed methods, such as convergence rates for DxDP or any approximation guarantees relating the IRL objective to the original objective. Adding such analysis would help characterize this method's ups and downs.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Upon reviewing this work, I found that the energy-based objective (Eq. (2)) and training objectives (Eqs. (3), (4), (5)) share very strong similarities/motivations to the concepts employed in existing KL-regularized RL-based fine-tuning papers (e.g., [1-4]). In particular, Eq. (6) and Eq. (7) in [2] serve precisely as the KL-based training objectives and the energy-based model, which involves sampling from unnormalized distributions. While I think the similarities might be just in terms of high-level methodologies and principles, it still becomes crucial for the authors to provide a thorough discussion highlighting the technical and methodological distinctions between this approach and the prior works in order to well-position the proposed method. For discussing the methodological relations/differences, I also recommend the authors carefully check [1] as this paper is written in a principled way, such that it becomes a good reference for understanding the methodologies.
2. As mentioned, do you have any theoretical insight into the convergence properties of DxDP or approximation guarantees relating to the IRL and original objectives? Many IRL theories might be worth checking (references [1-4] also might be worth checking for this goal, probably). Even without providing rates/bounds, an intuitive discussion/remark on these points could further validate the approach.
3. Can the DxMI approach be extended to other families of generative models, e.g., bridge-matching diffusion models, which are very similar to standard denoising diffusion models? What are the key challenges or requirements for the generative model?
4. How do you expect this approach to scale to more complex datasets or higher-dimensional spaces? Will the sample efficiency gains be more or less pronounced?
5. In experiments, how sensitive are the results to hyperparameters of DxMI/DxDP, such as the coefficient on the entropy term? What strategies did you use to tune these?
[1] https://arxiv.org/abs/2403.06279
[2] https://arxiv.org/abs/2402.16359
[3] https://arxiv.org/abs/2305.16381
[4] https://arxiv.org/abs/2405.19673
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer YBX1,
We deeply appreciate your thorough and constructive comments. We will do our best to answer your questions.
**Q. What is the connection to KL-regularized RL fine-tuning, particularly in papers such as [1-4]?**
Thanks for pointing out an interesting connection. The sampler update step of DxMI is indeed closely related to KL-regularized RL.
* We will augment our manuscript to discuss this connection in detail.
* We will add paragraphs dedicated to KL-regularized fine-tuning. In the paragraphs, we will cite the papers [1-4] and discuss them in detail.
* Please note that the current manuscript already mentions some works using KL-regularized RL, including [3], in the second paragraph of the Related Work section.
* The sampler update objective of DxMI is a special case of KL-regularized RL. However, there are three key differences that makes DxMI distinct from existing KL-regularized RL fine-tuning methods.
* First, DxMI employs an uninformative reference policy, such as a Gaussian distribution (Eq. (7)). Due to this choice, DxMI can deviate from the pretraining model to find a better generation trajectory more suitable for small $T$.
* Second, DxMI employs a novel value-based algorithm for updating a diffusion model.
* Third, while most KL-regularized RL works assume that the reward function is known, DxMI simultaneously learns the reward from data.
* BRAID [4] is particularly related to DxMI, as it also considers the problem of learning a reward from offline data. However, the reward is a separate random variable in [4], while in DxMI the reward is the log data density.
[1] https://arxiv.org/abs/2403.06279 Tang. Fine-tuning of diffusion models via stochastic control: entropy regularization and beyond. 2024.
[2] https://arxiv.org/abs/2402.16359 Uehara et al. Feedback Efficient Online Fine-Tuning of Diffusion Models. 2024.
[3] https://arxiv.org/abs/2305.16381 Fan et al. DPOK: Reinforcement Learning for Fine-tuning Text-to-Image Diffusion Models. 2023.
[4] https://arxiv.org/abs/2405.19673 Uehara et al. Bridging Model-Based Optimization and Generative Modeling via Conservative Fine-Tuning of Diffusion Models. 2024.
**Q. The comparison to prior diffusion model acceleration methods is somewhat limited.**
* Our comparative discussion had to be limited due to page constraints. Currently, the first paragraph of Section 6 ("Faster Diffusion Models") is dedicated to reviewing and comparing prior diffusion acceleration methods.
* In the updated manuscript, we will enhance the "Faster Diffusion Models" section to highlight the key differences between DxMI and previous methods. Additionally, we will incorporate this comparison into the introduction and the experiments sections. The enhancements will include the following points:
* The key distinction between DxMI and existing diffusion acceleration methods is that DxMI does not use the intermediate states in the trajectory of a diffusion model. Most diffusion distillation methods focus on preserving or learning from these intermediate states. In contrast, DxMI directly aims to match the final state of a sampler to the data distribution. The promising performance of DxMI indicates that deviating from the original diffusion trajectory may be beneficial for sample quality when the generation has to be performed in a very few steps. Among existing methods, only SFT-PG employs a similar approach; however, DxMI outperforms SFT-PG by using dynamic programming instead of a policy gradient.
**Q. A more comprehensive evaluation across different speed-quality tradeoffs would be essential.**
* In the updated manuscript, we will include a figure demonstrating the speed-quality trade-off with more data points, such as $T$=2, 20, and 40.
* Qualitatively, the best trade-off for DxMI is achieved in the mid-range of $T$, from 4 to 10.
* If $T$ is too small, the sampler is less capable, and the data processing inequality becomes less tight, making our MaxEnt regularization less effective.
* If $T$ is too large, the sampler's capability increases, but the value function learning becomes more challenging.
**Q. More discussion of DxMI's relative strengths and weaknesses compared to other approaches would be essential.**
We will augment our discussion of the strengths and weaknesses of DxMI in the updated manuscript. Currently, some of the weaknesses are mentioned in our Limitations section. Focusing on the diffusion acceleration application, our strengths and weaknesses can be summarized as follows.
* Strengths
* Unlike other diffusion distillation methods where the performance is bounded by the teacher model, in principle DxMI may achieve better sample quality than the pretrained model (see our CIFAR-10 case).
* The dynamic programming-based algorithm in DxMI is more effective than the policy gradient-based algorithm (e.g., SFT-PG).
* DxMI produces an EBM as a byproduct, which can be used in other applications such as anomaly detection or transfer learning.
* Weaknesses
* When $T=1$, DxMI reduces to a GAN, offering no additional advantage.
* DxMI does not offer the flexibility of using a different value of $T$ during the test time.
----
Due to the character limit, we will continue answering your question in the comment.
---
Rebuttal 2:
Title: Continued Response for Reviewer YBX1
Comment: **Q. Theoretical analysis of DxMI**
* We agree that theoretical analysis of our MaxEnt IRL problem would be very interesting. However, direct analysis of DxMI may be highly challenging because our EBM and diffusion model comprise deep neural networks.
* Conducting theoretical analysis in a more restricted setting might be more feasible while still providing valuable insights.
* To the best of our knowledge, there are few theoretical results for MaxEnt IRL. Previous works [5,6] offered convergence guarantees for MaxEnt IRL in a discrete state-action space. While their results do not directly apply to DxMI due to algorithmic and other differences, they suggest that similar analysis could be conducted on DxMI under suitable assumptions.
* Please consider that the main focus of this paper is to present a practical algorithm that is empirically scalable and effective. We believe providing theoretical analysis that rationalizes the empirical results in the paper is an important future work.
* We will add a paragraph describing the theoretical results from [5,6] in our Related Work section. We will also mention the difficulty of theoretical analysis in our limitations section.
[5] Renard et al., Convergence of a model-free entropy-regularized inverse reinforcement learning algorithm, arxiv, 2024.
[6] Zeng et al., Maximum-Likelihood Inverse Reinforcement Learning with Finite-Time Guarantees, NeurIPS 2022.
**Q. Can the DxMI approach be extended to other families of generative models, e.g., bridge-matching diffusion models?**
* Thanks for the interesting suggestion. We do believe the value-based learning presented in DxMI can be extended to other types of generative modeling problems, such as finding a bridge between distributions.
**Q. How do you expect this approach to scale to more complex datasets or higher-dimensional spaces? Will the sample efficiency gains be more or less pronounced?**
* Currently, we do not see a particular obstacle that prevents DxMI from scaling to larger datasets. We believe DxMI is at least as scalable as GANs, which have already demonstrated their feasibility on very high-dimensional datasets. As empirical evidence, we will add LSUN Bedroom 256x256 experiments in the updated manuscript.
**Q. How sensitive are the results to hyperparameters of DxMI/DxDP, such as the coefficient on the entropy term? What strategies did you use to tune these?**
* DxMI is not sensitive to the entropy regularization coefficient $\tau$, which can be safely set to 0.01 or 0.1. This insensitivity arises because the energy, which competes with the entropy, is regularized by the coefficient $\gamma$ to maintain a narrow range of values close to zero.
* We provide the guide for hyperparameter tuning in Appendix B.
* Probably the most critical hyperparameter is learning rates, for which we assign a larger value for the energy and a smaller value for the diffusion model. This is included in Appendix B.
Thank you once more for your insightful feedback. We hope our responses have addressed your concerns. Please feel free to reach out if you have any further questions or need additional information.
Best regards,
Authors.
---
Rebuttal Comment 2.1:
Comment: Thanks for the detailed and informative response. The strengths of this work and connections with relevant topics are made much clearer.
---
Reply to Comment 2.1.1:
Title: Thanks for your reply
Comment: Thanks for acknowledging the strength of this work. Also, thanks again for bringing up the interesting connection to existing work. We will make sure the connection is described well in the updated version.
Best regards,
Authors. | Summary: The authors propose learning a denoising diffusion model without using the denoising loss. Instead, they propose first training an energy-based model and then treating the diffusion denoising sampler as an RL trajectory with the energy-based model as the reward. To learn the energy-based model, however, they propose a generalized version of contrastive divergence that uses the current diffusion model as part of its objective function. Experiments are designed to validate this idea, starting from pretrained diffusion checkpoints.
Strengths: - The paper explores a creative combination of a variety of ideas all seeking to replace the simple denoising diffusion training objective.
- The exposition is well written
- The proposed approach involves learning both an energy based model and a diffusion model using DP where the number of steps can be small. The latter has value for fine-tuning diffusion models against other types of rewards.
Weaknesses: - The authors introduce an elaborate scheme to do away with the score function denoising objective, but their experiments rely on pre-trained checkpoints that use the score function training to get these results so the final results are based on stacking the new methods on top of the original diffusion training.
- The objective in Eq (5) seems somewhat hard to be confident of, in the sense that given any $\pi$ and $p$, the optimal $q$ could diverge away from $p$ such that $KL(p||q) < KL(\pi||q) \gg 0$ (while still being closer than $q$ since the optimum value of the objective has to be negative). This could theoretically make it unstable: as an iteration progresses, $\pi$ will track $q$ and then $q$ can take another step further away from both $p$ and $\pi$ while being optimal for Eq (5), which only requires that the relative distance to $p$ be less than that of $q$. As another point, when initializing with a $\pi$ that is close to $p$ (e.g. in the pretrained model case), the optimal $q$ does not look like it has to be close to either given the cancellation involved.
- The key proposal in Algorithm 1 has a multiple highly unstable procedures mixed together within each iteration -- e.g. the energy update in Line 5 and the TD bootstrap objective in line 7.
Technical Quality: 3
Clarity: 3
Questions for Authors: In Algorithm 1, Line 5, are the $x$ and $x_T$ completely independent samples with no relation to each other? From the way its written, $x_T$ is sampled starting from independent noise $x_0$, and $x$ is a particular data sample for that iteration. Moreover, since for the pre-trained checkpoints, $\pi$ is likely close to $p$, the energy objective looks like it has minimal training signal.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer yjrf,
We appreciate your time and effort in reviewing our work. Here, we are happy to address your concerns and questions.
**Q. Is the objective function stable?**
> The objective in Eq (5) seems somewhat hard to be confident of, in the sense that given any $\pi$ and $p$, the optimal $q$ could diverge away from p such that $KL(p||q) = KL(\pi||q) \gg 0$. This could make it unstable as an iteration progresses, $\pi$ will track q and then q can take another step further away from both p and $\pi$ while being optimal for Eq (5).
We would like to clarify the critical misunderstandings regarding the objective function.
Let us write our objective function as $L(q,\pi)=KL(p \parallel q) - KL(\pi \parallel q)$, where we aim to solve $\min_q \max_\pi L(q,\pi)$.
* When $p$ and $\pi$ are fixed and $p\neq \pi$, the optimal $q$ does not satisfy $KL(p \parallel q) = KL(\pi \parallel q) \gg 0$.
* It is a misunderstanding that our objective function $L(q,\pi)$ has a minimum at 0, where $KL(p||q) = KL(\pi||q)$.
* When $p\neq \pi$, the objective function can have a negative value. For example, setting $q=p$ makes the objective negative: $L(p,\pi)=-KL(\pi \parallel p)<0$. Therefore, $KL(p \parallel q) = KL(\pi \parallel q)$ is not optimal for $q$, as there are other values of $q$ that achieve a lower objective function value.
* Thus, our objective function always keeps $q$ closer to $p$ than to $\pi$, not invoking the instability questioned in the review comment.
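This argument can be checked numerically on toy discrete distributions (our illustrative sketch, not the paper's implementation; the distributions below are arbitrary):

```python
import math

# Numerical check of the claim above on toy discrete distributions.
def kl(a, b):
    """KL divergence KL(a || b) for discrete distributions given as lists."""
    return sum(x * math.log(x / y) for x, y in zip(a, b) if x > 0)

p_data = [0.7, 0.2, 0.1]  # plays the role of p
p_pi   = [0.2, 0.5, 0.3]  # plays the role of pi, different from p

def objective(q):
    # L(q, pi) = KL(p || q) - KL(pi || q)
    return kl(p_data, q) - kl(p_pi, q)

# q = p gives L(p, pi) = -KL(pi || p) < 0, strictly better than any q
# with KL(p||q) = KL(pi||q), whose objective value is 0.
assert objective(p_data) < 0
```

So a $q$ "equidistant" from $p$ and $\pi$ is never the minimizer when $p \neq \pi$.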
**Q. Is learning possible when $\pi$ is close to $p$?**
> As another point, when initializing with a $\pi$ that is close to p (e.g. in the pretrained model case), the optimal q does not look like it has to be close to either given the cancellation involved.
> Moreover, since for the pre-trained checkpoints, π is likely close to p, the energy objective looks like it has minimal training signal.
* When $p=\pi\neq q$, it is true that there is no learning signal for $q$. However, $p=\pi\neq q$ is not a Nash equilibrium, and learning is not terminated at this point.
* When $p=\pi\neq q$, our objective function drives $\pi$ away from $p$ to be close to $q$. After $\pi$ becomes different from $p$, the learning signal for $q$ is generated.
* This behavior may be undesirable in practice. However, we are most interested in the case where the number of function evaluations is small ($T=4$ or $10$). With small $T$, the initial sample quality from $\pi$ is very bad (Figure 1 (Right) of the manuscript), indicating that $p \neq \pi$.
* As $p=\pi\neq q$ is not a Nash equilibrium of our objective function, an optimization algorithm, if done correctly, will eventually lead to the Nash equilibrium $p=q=\pi$.
**Q. DxMI still uses a diffusion model as a starting point.**
> The authors introduce an elaborate scheme to do away with the score function denoising objective, but their experiments rely on pre-trained checkpoints that use the score function training to get these results so the final results are based on stacking the new methods on top of the original diffusion training.
* DxMI is not meant to be a complete replacement for the denoising objective. We will update our introduction to clarify that DxMI is a complementary training algorithm for diffusion models.
* Please note that diffusion model pre-training is not always necessary for DxMI. In our 2D experiment and anomaly detection experiment, DxMI demonstrated its ability to train a sampler without pre-training.
**Q. The proposed algorithm has multiple unstable procedures.**
> The key proposal in Algorithm 1 has a multiple highly unstable procedures mixed together within each iteration -- e.g. the energy update in Line 5 and the TD bootstrap objective in line 7.
* We understand the proposed algorithm can be seemingly complicated.
* To deal with the complexity, we will make our code public and provide model checkpoints. Also, we disclose our hyperparameters and suggest a hyperparameter selection strategy in Appendix B.
* However, to the best of our knowledge, there is no empirical or theoretical evidence of instability.
* The TD update and the energy update do not interfere with each other, as they operate on different inputs. Both updates are stable, as the energy update equation is regularized (coefficient $\gamma$), and the TD update is simply mean squared error minimization.
* Empirically, we found the algorithm much more stable than MCMC-based EBM training algorithms, which occasionally diverge for no reason.
* If you have a particular concern regarding the algorithm's stability, we are happy to discuss it.
**Q. In Algorithm 1, Line 5, are the x and x_T completely independent samples with no relation to each other?**
* You are correct that, in Algorithm 1, $\mathbf{x}$ denotes a real data and $\mathbf{x}_T$ indicates a sample from the diffusion model. We will make this point clear in the updated manuscript by adding a comment in Algorithm 1.
Thank you again for your constructive review. We believe there was a misunderstanding regarding our objective function, and we hope our response clarifies this issue. We kindly request that you reconsider the decision in light of our responses and update the score accordingly. We are also eager to address any additional concerns you may have.
Best regards,
Authors.
---
Rebuttal Comment 1.1:
Title: Reply
Comment: > Thus, our objective function always keeps $q$ closer to $p$ than to $\pi$, not invoking the instability questioned in the review comment.
Thanks for the clarification, I understand the proposal better after re-reading the paper and have updated the initial review accordingly.
While my comment related to the objective was not really trying to claim a formal counter example for demonstrating pathologies of the proposed objective, it would be quite impressive if you could convert your intuition in the above response into a formal argument backing the proposal.
---
Rebuttal 2:
Title: Thank you for your reply.
Comment: Thank you very much for taking the time to revisit the manuscript. We truly appreciate your reconsideration of the score. We will make sure your comments are reflected in the manuscript well and try to come up with some formal statements that we can make to ensure the stability of the objective function.
Best regards,
Authors. | Summary: This paper seeks to improve diffusion models by employing inverse reinforcement learning methods of imitation rather than (more myopic) behavioral cloning methods, which prevalent existing diffusion models can be viewed as using. It trains an energy-based model using maximum entropy inverse reinforcement learning and proposes an optimal control problem for diffusion based on minimizing an upper bound of the contrastive KL divergence. The benefits of the approach are demonstrated with a focus on generating outputs with few diffusion iterations.
Strengths: The paper is motivated by the key connection between imitation and diffusion models, the characterization of existing methods corresponding to behavioral cloning approaches for imitations, and the potential for improved diffusion using more sophisticated inverse reinforcement learning imitation methods. This motivation is nicely described.
The technical content of the paper is quite dense, but the authors present it clearly.
Other work exists exploring this perspective of diffusion as imitation/optimal control, but the paper’s approach is nicely constructed to avoid MCMC and policy gradient optimization, which are often bottlenecks in existing methods.
Experimental results show the benefits of this approach, including comparisons with a similar approach that is reliant on policy gradient (SFT-PG) and other recent diffusion model learning methods for generation and anomaly detection.
Weaknesses: Closely related work isn’t described in the introduction to better frame the contributions of this paper.
Technical Quality: 3
Clarity: 4
Questions for Authors: Is there potential for theoretical analysis or guarantees using this approach?
Are there visual differences in the generated outputs of different models that could be highlighted in 5.2?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: Potential abuses using deep fakes are described, along with limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer Nqik,
Thank you for taking the time to review our work. We deeply value your feedback and are happy to address your questions.
**Q. Is theoretical analysis possible?**
> Is there potential for theoretical analysis or guarantees using this approach?
Thanks for bringing up an important point.
* Yes, there is significant potential for theoretical analysis in our MaxEnt IRL problem. However, directly analyzing the DxMI implementation may be highly challenging because our EBM and diffusion model comprise deep neural networks.
* Conducting theoretical analysis in a more restricted setting might be more feasible while still providing valuable insights. Previous works [1,2] offered convergence guarantees for MaxEnt IRL in a discrete state-action space. While their results do not directly apply to DxMI due to algorithmic and other differences, their results suggest that a similar analysis could be conducted on DxMI under suitable assumptions.
* Please consider that the main focus of this paper is to present a practical algorithm that is empirically scalable and effective. We believe providing a theoretical analysis that rationalizes the empirical results in the paper is an important future work.
* We will add a paragraph describing the theoretical results from [1,2] in our Related Work section. We will also mention the difficulty of theoretical analysis in our limitations section.
[1] Renard et al., Convergence of a model-free entropy-regularized inverse reinforcement learning algorithm, arxiv, 2024.
[2] Zeng et al., Maximum-Likelihood Inverse Reinforcement Learning with Finite-Time Guarantees, NeurIPS 2022.
**Q. Related works are not described in the introduction.**
> Closely related work isn’t described in the introduction to better frame the contributions of this paper.
* We had to defer our discussion on prior works to the Related Work section (Section 6) due to the page limitation.
* In the revised manuscript, we will include a paragraph in the introduction which describes related work. If there is any specific work that you want us to additionally cite, we are happy to incorporate them into the updated manuscript.
**Q. Are there visual differences in the generated outputs of different models?**
* Please find examples of randomly generated images from DxMI and other models in the attached PDF, highlighting the visual differences in their outputs. For example, the Consistency Model samples distort human faces, while the DxMI samples depict them in correct proportions.
Again, we thank the reviewer for acknowledging the value of our work. If you have any further questions, we are happy to address them.
Best regards,
Authors.
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my concerns. | Summary: The authors introduce a maximum entropy inverse reinforcement learning (IRL) approach for enhancing the sample quality of diffusion generative models, especially with limited generation time steps. Named Diffusion by Maximum Entropy IRL (DxMI), the approach involves joint training of a diffusion model and an energy-based model (EBM). The EBM provides the estimated log density as a reward signal for the diffusion model, which is trained to maximize both the reward from EBM and the entropy of generated samples. Additionally, the authors propose Diffusion by Dynamic Programming (DxDP), a novel reinforcement learning algorithm that optimizes diffusion model updates efficiently by transforming the problem into an optimal control formulation. Empirical studies demonstrate that diffusion models fine-tuned with DxMI can generate high-quality samples in as few as 4 to 10 steps and improve the stability of EBM training dynamics, enhancing anomaly detection performance.
Strengths: 1. Innovation: The proposed DxMI methodology introduces the concept of maximum entropy IRL to the training of diffusion models, which is novel and potentially impactful in improving sample quality and inference time of diffusion model.
2. Clarity: The manuscript is well-written and logically structured, with clear explanations of the theoretical foundations and algorithms.
Weaknesses: 1. Complexity of Implementation: The implementation of DxMI, particularly the joint training of diffusion models and EBMs, might be complex and require significant computational resources. The practical feasibility in various settings could be further elaborated.
2. Limited Scope of Experiments: The experiments, while promising, are somewhat limited in scope. More diverse and complex tasks could further validate the robustness and versatility of the proposed approach.
3. Comparative Analysis: While the proposed methods show improvements, a more detailed comparative analysis with state-of-the-art techniques, including training time and computational cost comparisons, would strengthen the paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Generalization to Complex Tasks: How do the authors envision the performance of DxMI in more complex generative tasks, such as high-resolution image generation or text-to-image synthesis?
2. Training Time Comparison: What is the average training time for models using DxMI compared to other generative model training techniques such as GANs or VAE-based approaches?
3. Scalability: How scalable is the proposed DxMI approach when applied to larger datasets or models with significantly more parameters? Are there any anticipated bottlenecks?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have clearly presented the limitations in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer EhWr,
We appreciate the comprehensive feedback on our manuscript. All the comments and questions raised have been considered below.
**Q. Complexity of Implementation**
> The implementation of DxMI, particularly the joint training of diffusion models and EBMs, might be complex and require significant computational resources.
* DxMI may seem complicated, but in fact its complexity is no greater than that of GANs and actor-critic RL methods.
* Particularly in image experiments, EBM is the only additional component over a diffusion model. EBM functions similarly to the discriminator of GAN. Furthermore, EBM used in DxMI is typically much smaller than the diffusion model, introducing minimal burden. For example, in our CIFAR-10 experiment (T=10), the EBM has 5M parameters while the diffusion model has 36M parameters.
* DxMI is significantly simpler than standard actor-critic RL, such as Soft Actor-Critic (SAC) [1], which trains a value function and two Q functions simultaneously. In contrast, DxMI does not require a Q function and trains only a single value function.
[1] Haarnoja, Tuomas, et al. "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor." ICML 2018. https://proceedings.mlr.press/v80/haarnoja18b
**Q. More Experiments**
>The practical feasibility in various settings could be further elaborated.
> The experiments, while promising, are somewhat limited in scope. More diverse and complex tasks could further validate the robustness and versatility of the proposed approach.
> How do the authors envision the performance of DxMI in more complex generative tasks, such as high-resolution image generation or text-to-image synthesis?
* The manuscript already includes experiments on a variety of tasks and data types, such as 2D density estimation, unconditional and conditional image generation, and anomaly detection using latent vectors. If you find any particular aspect of the experimental portfolio to be limited, please let us know.
* To further demonstrate the scalability of DxMI, we will provide additional experimental results on LSUN Bedroom 256x256. DxMI (T=4) achieves competitive results (FID: 5.93, Recall: 0.477), whereas DDPM (T=1000) achieves FID: 4.89, Recall 0.45. We will augment our experiment section with the LSUN Bedroom experiment. This observation shows that DxMI is indeed scalable to high-dimensional data.
* We believe that DxMI can be effectively applied to text-to-image synthesis. However, text-to-image synthesis typically requires a significant amount of computing resources (at least 25 A100 days) and involves numerous experimental conditions to explore (e.g., selecting text prompts). Therefore, we suggest this as an intriguing direction for future research.
**Q. Comparative Analysis**
> While the proposed methods show improvements, a more detailed comparative analysis with state-of-the-art techniques.
* Please note that the current manuscript already provides a comparative analysis in Section 6 (Related Work).
* We will augment our comparative analysis by adding additional paragraphs in the introduction and sections that describe the proposed method.
**Q. Average Training Time**
> What is the average training time for models using DxMI compared to other generative model training techniques such as GANs or VAE-based approaches?
* Our CIFAR-10 experiment takes less than 24 hours on two A100 GPUs, while our ImageNet 64 experiment takes approximately 48 hours on four A100 GPUs. The computational resources required for DxMI training are significantly lower compared to state-of-the-art GANs, which can take up to 48 hours on a Google TPU V3 Pod [2]. A TPU Pod consists of 1024 TPU chips, which are considerably more powerful than a few A100 GPUs. This difference is partly because DxMI can leverage a pre-trained diffusion model.
[2] Brock et al. Large scale GAN training for high fidelity natural image synthesis. ICLR 2019. https://arxiv.org/abs/1809.11096
**Q. Scalability**
> Scalability: How scalable is the proposed DxMI approach when applied to larger datasets or models with significantly more parameters? Are there any anticipated bottlenecks?
* For now, we do not observe any sign of bottlenecks or scalability issues. We believe DxMI is as scalable as GANs, since DxMI also leverages an EBM as a discriminator. GANs have already been shown to be scalable to very high-dimensional data [2].
Again, we thank you for providing valuable comments. Do not hesitate to let us know if you have further questions.
Best wishes,
Authors.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed responses. All of my concerns have been addressed properly. | Rebuttal 1:
Rebuttal: ## General Comments to AC and All Reviewers
We thank all reviewers for their thoughtful, insightful, and constructive feedback, and for providing suggestions that would improve our paper.
First of all, we are encouraged that the reviewers noted the following points:
(i) Our approach and formulation are novel, principled, and elegant in improving sample quality and inference time of a diffusion model. (EhWr, YBX1)
(ii) The motivation is nicely described, the algorithm is well constructed to avoid bottlenecks in existing methods (Nqik), and it uses a creative combination of ideas (yjrf).
(iii) The experimental results show the benefits of the proposed approach (Nqik), which are strong and impressive (YBX1).
(iv) The manuscript is well-structured, and the presentation is clear. (EhWr, Nqik, yjrf)
Based on the feedback from all reviewers, the most significant shared concerns or major points to address are as follows.
**On theoretical analysis (Nqik, YBX1):** We agree with the reviewers that a theoretical analysis of our MaxEnt IRL problem would be valuable. To the best of our knowledge, there are few theoretical results available for MaxEnt IRL. Previous works [1,2] have provided convergence guarantees for MaxEnt IRL in a discrete state-action space with an infinite horizon. In contrast, our approach considers a continuous state-action space with a finite horizon. Additionally, there are algorithmic differences regarding the reward functions. Consequently, their results do not directly apply to DxMI. However, we believe a similar analysis could be performed on DxMI under appropriate assumptions and a simplified setting, particularly with linear reward functions and a tabular policy.
[1] Renard et al., Convergence of a model-free entropy-regularized inverse reinforcement learning algorithm, arxiv, 2024.
[2] Zeng et al., Maximum-Likelihood Inverse Reinforcement Learning with Finite-Time Guarantees, NeurIPS 2022.
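For concreteness, the entropy-regularized objective underlying the MaxEnt IRL formulation discussed above can be summarized as follows (our paraphrase of the setup, not a formula quoted from the paper; $E_\phi$ denotes the EBM energy and $\pi_\theta$ the diffusion sampler):

```latex
\max_{\theta}\; \mathbb{E}_{x_{0:T}\sim \pi_\theta}\big[-E_\phi(x_T)\big] \;+\; \mathcal{H}(\pi_\theta)
```

Here $-E_\phi$ plays the role of the estimated log density (the reward signal from the EBM) and $\mathcal{H}$ is the entropy of the generated samples, matching the description that the diffusion model maximizes both the reward and the entropy.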
**On scalability and complexity of the algorithm (EhWr, YBX1):** Because we formulate the problem using MaxEnt IRL, DxMI might initially seem complex. However, its complexity is comparable to Generative Adversarial Networks (GANs) and actor-critic reinforcement learning methods. We believe that DxMI is as scalable as conventional diffusion models and GANs, which have already demonstrated scalability to very high-dimensional data. To demonstrate the scalability of DxMI, we run DxMI (T=4) on LSUN Bedroom 256x256 and achieve competitive results (FID: 5.93, Recall: 0.477), where DDPM (T=1000) achieves FID: 4.89, Recall 0.45. Additionally, we have not observed any signs of bottlenecks or scalability issues. For instance, DxMI is not particularly sensitive to the entropy regularization coefficient. We provide guidance for hyperparameter tuning in Appendix B. More specific answers can be found in individual comments.
The attached PDF provides examples of generation from DxMI on CIFAR-10 and ImageNet 64x64, along with samples from competitive baselines.
We hope the above answers some common questions from the reviewers. We also respond to individual comments from each reviewer below.
Pdf: /pdf/566521ef1815a43cc9d5b570f5ed394d5d6f1b9f.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
FairWire: Fair Graph Generation | Accept (poster) | Summary: In this submission a graph diffusion model with fairness correction is introduced for graph generation. The model is also applied for link prediction. The authors introduce a graph regularizer for fairness based on a theoretical bound for subgroup distance and representation distance. This analysis is newly introduced by the authors. The authors apply the regularizer to link prediction, examining the accuracy-fairness tradeoff. The authors then test fair graph generation by examining link prediction and node classification tasks, but replace the training graphs with fairness-corrected generated graphs. Empirically it is shown that this greatly improves fairness while marginally reducing accuracy.
Strengths: This paper has key straightforward strengths that contribute to my "weak accept" rating. First, the method seems to perform well, within the scope of the intended contributions. The paper is generally well-written and the presentation is clear. Furthermore the theoretical results are satisfactory. They appear correct and they properly motivate the method.
Weaknesses: There is some missing/glossing-over existing work. The authors claim that the theoretical analysis is novel, however ref [24] in the paper also includes a theoretical analysis with >2 binary categories. It would be good for the authors to discuss this work and compare with their own. Also, the authors should consider citing "Debiasing Graph Representations via Metadata-Orthogonal Training" (doi/10.1109/ASONAM49781.2020.9381348), which seems relevant to the problem (also includes a fairness correction over potentially >2 attributes).
There are a non-trivial amount of clarity and presentation issues, which I discuss in my list of questions.
Technical Quality: 3
Clarity: 3
Questions for Authors: (1) In the second experiment type (S6.3), which link prediction model was trained on the generated graphs? And was the FairWire regularization also applied to the link prediction model during training?
(2) From Fig 1 we can see that FairWire effectively removes intra-edges from generated graphs. However, the structure of the generated graphs is not quite evident. Can the graphs be visualized in a more productive way? Maybe the adjacencies can be row-sorted into their communities.
(3) The authors derive two fairness criteria $\alpha_1$ and $\alpha_2$, though $\alpha_2$ is ignored at the mini-batch level for scalability. Can the authors give more intuition about $\alpha_2$ and why it should ultimately not matter when applied in practice?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to sincerely thank the Reviewer for the raised points. We have addressed all points raised by the Reviewer, and presented our responses below.
**Weakness:** We thank the Reviewer for this comment under Weaknesses.
In [24], it is stated that:
“For clarity and simplicity, we consider the case of a single binary sensitive attribute, with the theoretical intuitions naturally generalizing to the multiattribute and multi-class settings.” Hence, in their theoretical analysis, [24] considers a single, binary sensitive attribute, while we take multi-valued sensitive attributes into account in our analysis.
In addition, we want to clarify that our analysis is the first theoretical investigation for the relation between $\Delta _ {\mathrm{SP}}$ and the graph topology considering multi-valued sensitive attributes. The analysis in [24] concludes that if the weight of the adversarial regularizer is set to infinity, the equilibrium of their objective is achieved when there is zero mutual information between the sensitive attribute and the node embeddings. However, setting the weight of the adversarial regularizer to infinity is impractical, which limits the applicability of their analysis. Thus, there are significant differences in the theoretical findings of [24] and our work, where we specifically focus on structural bias and its analytic relation to a well-accepted bias metric $\Delta _ {\mathrm{SP}}$ for non-binary sensitive attributes, which was not explored before to the best of our knowledge.
We thank the Reviewer for pointing out this related work (doi/10.1109/ASONAM49781.2020.9381348), which we will cite in the final submission.
**Q1:** Thank you for this insightful question.
The same link prediction model is used for link prediction (S6.2) and graph generation (S6.3) experiments, whose details are provided in Appendix G. Furthermore, we did not apply any extra fairness intervention on FairWire during link prediction model training (including FairWire regularization). FairWire only modifies the training for graph generation, as we wanted to evaluate the effectiveness of structural debiasing that is executed in a task-agnostic way.
**Q2:** Thank you for this suggestion.
Based on this comment, we created the visualizations for row-sorted adjacencies (based on sensitive attributes) of the real Cora network and its synthetic version created by FairWire. The created figures are presented in the attached PDF document to the General Rebuttal. The figures confirm our previous finding that, with the employment of FairWire, the links between nodes become less correlated with the sameness of sensitive attributes of the nodes. Specifically, for the real graph, most links are built within the same sensitive group (diagonal entries), while the link formation becomes more uniform across different sensitive groups after applying FairWire.
In addition, in order to provide structural information, we present the 1-Wasserstein distance between the node degree distribution and clustering coefficient distribution of the original graph and the synthetic graphs created by FairWire in Appendix F.
We hope this reply addresses your concerns.
**Q3:** Thank you for raising this point. We would like to clarify that our submission does not conclude or claim that $\alpha _ {2}$ does not matter for fairness. In fact, the consideration of $\alpha _ {2}$ together with $\alpha _ {1}$ might further benefit the fairness. The corresponding solutions for $( p _ {k} ^ { \omega } ) ^ { * }$ and $(p _ {k} ^ {\chi}) ^ {*}$ for this case are presented in Subsection 4.2. However, as the reviewer mentioned, a regularizer that is designed based on both $\alpha _ {1}$ and $\alpha _ {2}$ cannot be employed in a mini-batch setting, which limits its use, especially for generative models. Since the main focus of FairWire is mitigating structural bias in graph generation, which can only be trained in a mini-batch setting for medium to large-scale graphs due to a complexity growing exponentially with the number of nodes, we focused on a design that can be used in a broader range of learning settings. Our experimental results confirm the effectiveness of the proposed fairness regularizer, which typically provides the best fairness/utility trade-off compared to state-of-the-art fairness-aware baselines.
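To make the mini-batch-friendly design concrete, here is a minimal sketch (ours, for illustration only; the paper's actual regularizer in Eq. 3 may differ) of an $\alpha_1$-style fairness term that balances mean predicted link probabilities for intra- and inter-edges within a mini-batch; all names are illustrative:

```python
import numpy as np

def fairness_regularizer(link_probs, sens, edges):
    """|mean p(intra) - mean p(inter)| over a mini-batch of candidate edges.

    link_probs: length-E predicted link probabilities
    sens:       length-N sensitive-attribute labels (possibly multi-valued)
    edges:      (2, E) array of node index pairs
    """
    link_probs = np.asarray(link_probs, dtype=float)
    sens = np.asarray(sens)
    src, dst = np.asarray(edges)
    intra = sens[src] == sens[dst]   # endpoints share the sensitive attribute
    inter = ~intra
    if not intra.any() or not inter.any():
        return 0.0                   # guard for degenerate mini-batches
    return abs(link_probs[intra].mean() - link_probs[inter].mean())
```

Because the term depends only on the edges present in the current mini-batch, it can be added to the generative model's loss without materializing the full adjacency, which is what makes an $\alpha_1$-based design scalable.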
---
Rebuttal Comment 1.1:
Title: Thanks
Comment: The authors have satisfactorily addressed my concerns. I have raised the presentation score and the overall score. | Summary: This work considers the problem of fairness in learning over graphs. Namely, it adopts a criterion for fairness that quantifies the probability of a relationship existing between nodes whose sensitive attribute value matches (intra-edges) versus not (inter-edges). It first derives theoretical results that bound the discrepancy between the two, and uses these insights to design a "fairness regularizer" that is compatible with both link prediction and synthetic graph generation tasks. The authors perform an evaluation on these two tasks, outperforming existing baselines in fairness metrics while preserving the core utility metric (e.g., accuracy).
Strengths: The paper considers a problem that has been studied previously but is original in the way it approaches it. The quality of the work is high and contains both theoretical insights and thorough experiments. The organisation of the paper is clear and the writing quality is also high. The topic is significant and relevant to the NeurIPS community. Reproducibility is also good as code is provided and experiments are described clearly.
Weaknesses: The paper is very well executed. In my opinion, its main weakness is that the considered fairness definition is adopted without a convincing justification. It is rather well-known in the network science literature [1,2] that individuals tend to form connections with those that are similar to them. Therefore, I do not understand why this characteristic is framed as inherently "bad" and something that should be corrected when training a model for link prediction or graph generation. I think this deserves a proper justification and discussion. However, this may be a point that is broader than the paper itself.
[1] McPherson, M., Smith-Lovin, L., & Cook, J. M. (2001). Birds of a feather: Homophily in social networks. Annual review of sociology, 27(1), 415-444.
[2] Newman, M. E. (2003). Mixing patterns in networks. Physical review E, 67(2), 026126.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please address the potential weakness discussed above. Additional questions and comments:
C1. Could you discuss what happens if there is >1 sensitive attribute? Do your results already apply via some transformation, or what would be required to generalise to this case?
C2. Line 131-32: I think this claim needs to be scoped to e.g. *social* networks, as this is definitely not always the case (see e.g. ref [2] above).
C3. Line 153: maximization is over $k$ presumably?
C4. Line 167; Eq 3: $\mathbf{Se}_{k}$ not defined as far as I can tell.
C5. Experiments in 6.2: I think it should be discussed what "base GNN" is being used. Appendix G mentions it is a GCN -- but what is the justification for using a single layer?
C6. I think it's worth summarizing the choice and justification for the considered sensitive attributes in the main text (currently in Appendix E).
C7. Table 5 is overflowing the margins, consider wrapping in `\resizebox`
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations and potential negative social impacts are discussed, although the latter only superficially, and I think it deserves more discussion given the primary area of the paper is fairness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the Reviewer for the supportive and constructive remarks, as well as valuable comments. We have presented our responses to the Reviewer’s questions below.
**Weakness:** Regarding the comment under Weaknesses, we agree with the reviewer that “individuals tend to form connections with those that are similar to them,” and we would like to clarify that such links between similar nodes in graphs are not necessarily bad; instead, such connectedness generally provides additional information and is leveraged in graph ML. However, while admittedly entailing useful information for learning, the homophilic relations built based on sensitive attributes may amplify the bias in predictions [R1]. For example, it has been shown that ad recommenders display racial biases between users with similar preferences [R2], where the denser connectivity between the users from the same ethnicity in the corresponding social networks might amplify this issue. Our proposed framework aims to mitigate the structural bias leading to such discriminatory predictions in the decision systems requiring fairness considerations.
**C1:** We thank the Reviewer for this insightful question.
In case we have multiple sensitive attributes, the proposed scheme can be extended in two possible ways:
*Approach 1-* Applying multiple regularizers: Multiple fairness regularizers (see Eq. 3) corresponding to different sensitive attributes can be introduced and optimized jointly during the training. In fact, we do consider such an extension as a potential framework.
*Approach 2-* Redefining inter/intra-edges: We can change the definition of inter- and intra-edges: if the norm distance between the sensitive attribute vectors of two nodes is lower than a certain threshold, the corresponding edge is referred to as an intra-edge, while the remaining edges become inter-edges. Afterwards, we can balance the predicted probabilities for intra- and inter-edges with a regularizer for better fairness.
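A small sketch of Approach 2 (our illustration of the threshold rule described above, not code from the paper; the attribute matrix, edge layout, and threshold are all hypothetical):

```python
import numpy as np

def edge_types(sens, edges, threshold):
    """Classify edges as intra (True) or inter (False) by the norm distance
    between the endpoints' sensitive-attribute vectors.

    sens:      (N, d) matrix, one row of d sensitive attributes per node
    edges:     (2, E) array of node index pairs
    threshold: edges with distance below this value count as intra-edges
    """
    sens = np.asarray(sens, dtype=float)
    src, dst = np.asarray(edges)
    dist = np.linalg.norm(sens[src] - sens[dst], axis=1)
    return dist < threshold
```

The resulting Boolean mask can then feed the same intra/inter balancing regularizer used in the single-attribute case.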
**C2:** Thank you for this comment. We agree with the reviewer and in our final submission, we will specify that this claim generally holds for social networks.
**C3:** Thank you for this valuable comment. Indeed, the maximization is over k, which will be clarified better in the final submission and the corresponding equation will be corrected.
**C4:** In Preliminaries, we define $\mathbf{S}$ as the one-hot encoding of sensitive attributes, and $\mathbf{e} _ {k}$ is defined in Line 168 as the unit vector whose only non-zero index is k (thanks to your comment, we have realized that there is a typo in its dimension, which will be corrected to $\mathbb{R}^{k}$ in the final submission). The term $\mathbf{S}\mathbf{e}_{k}$ is the matrix multiplication of these two terms.
**C5:** Thank you for this question.
For link prediction, we had also obtained results with Common Neighbor (CN) [R3] and two-layer GCN models, where one-layer GCN led to the best AUC performance for our datasets. Thus, we chose our base GNN to be a one-layer GCN.
We will further clarify this in Appendix G in our final version.
**C6:** We would like to kindly mention that the datasets and sensitive attributes are also used in prior fair graph ML studies; including the baselines Adversarial, FairDrop, and FairAdj. Hence, we followed the same experimental setup in said studies without choosing the sensitive attributes by ourselves, for fair comparison. We will mention this in the final version.
That said, from a purely mathematical point of view, the fairness problem considered in our study can be regarded as the problem of decorrelating the system output from a particular attribute associated with the nodes, regardless of the attribute’s relevance to real life. In this sense, the selection of sensitive attributes is not expected to affect the evaluation of such decorrelation from an algorithm design point of view.
**C7:** Thank you for your keen observation, we will fix this issue in our final submission.
---
[R1] E. Dai and S. Wang, “Learning fair graph neural networks with limited and private sensitive attribute information,” IEEE Transactions on Knowledge and Data Engineering, 2022.
[R2] Latanya Sweeney. 2013. Discrimination in online ad delivery. Queue 11, 3 (2013), 10–29.
[R3] Liben-Nowell, David, and Jon Kleinberg. "The link prediction problem for social networks." Proceedings of the twelfth international conference on Information and knowledge management. 2003.
---
Rebuttal Comment 1.1:
Comment: Many thanks to the authors for responding to the points I raised, I think the additional clarifications make sense. The "sticking point" regarding the fairness definition is difficult to resolve in the absence of a practical study which shows that, indeed, graph ML systems do exhibit the type of bias the authors argue about, as has been shown for other types of ML techniques in a variety of deployment scenarios. Nevertheless, as I mentioned before, I do not see this as a reason to penalise this particular paper. I am retaining my original score as I think it accurately reflects my assessment of the work. | Summary: The impact of generative learning algorithms on structural bias is investigated in this paper. The authors provide a theoretical analysis on the sources of structural bias which result in disparity. Then a novel fairness regularizer is designed to alleviate the structural bias for link prediction tasks over graphs. Furthermore, a fair generation framework called FairWire is proposed by leveraging the fairness regularizer in a generative model. Finally, extensive experiments on synthetic and real datasets are conducted to show the effectiveness of mitigating structural bias in machine learning algorithms on graphs.
Strengths: 1. The theoretical analysis on the sources of structural bias is novel and intuitive, which provides an interesting perspective on what causes the bias problem in machine learning algorithms over graphs.
2. The design of the regularizer and the fair graph generation model addresses the identified challenges and the theory reasonably. Besides, sufficient experiments are provided to demonstrate the better performance in mitigating the structural bias problem.
3. The paper's writing is good, and the overall structure is clear.
Weaknesses: 1. This paper aims to address fairness in graph generation, but there are not sufficient experiments on different representative backbone generation models. The authors provide only one backbone generation model; I suggest adding more representative backbone generation models to solidify the effectiveness.
2. This paper provides the theory and algorithm based on sensitive groups, which may limit their use on real-world graphs.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The authors introduce GraphMaker as a backbone generative model and claim that this method considers bias mitigation when generating new graphs. Does this mean that FairWire cannot be used with other generative models?
2. In the experiments, the authors report link prediction results for the supervised tasks, while for the generation tasks they report node classification results. What are the node classification results for the supervised tasks? It would be better if the same tasks were used to evaluate both the supervised and generative settings.
3. The authors claim a contribution to fairness in generative graph algorithms, but also provide many experimental results on supervised tasks. What is the difference in the structural bias problem between generative and supervised tasks?
4. From the experimental results, FairWire worsens the AUC performance. What is the influence of FairWire on the accuracy of the GNN model? Does it affect model training, or is it because GNN models exhibit bias on different samples?
5. The theoretical analysis of the paper requires knowing the sensitive groups first. How would you define and select sensitive groups? What kind of influence do sensitive groups have on the GNN and the proposed FairWire method?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors adequately addressed the limitations of their methods. It is hard to foresee any potential negative societal impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the Reviewer for their insightful comments regarding our work. We have addressed the Reviewer’s comments, and placed our responses to each comment below. Please note that references written in [R#] format are provided at the end of this rebuttal, whereas the ones in [#] format correspond to the respective references in the paper.
**Q1:** Thank you for this question.
The proposed fairness regularizer (Eq. 3) in FairWire can be utilized in any generative model outputting probabilistic graph topologies, including but not limited to graph autoencoder-based or random walk-based graph generation models (see Remark [*Applicability to general generative models*] in the submission). In other words, although our submission focuses on its use for graph diffusion models, the proposed framework provides a versatile use with different graph generative models.
That said, to address the reviewer's comment, we also obtained results for the use of $\mathcal{L}_{\text{FairWire}}$ in a variational graph autoencoder-based (VGAE-based) graph generation [R1]. Note that VGAE does not inherently possess the ability to generate synthetic node features and sensitive attributes. Thus, we used the real nodal features and sensitive attributes with the synthetic graph structures for evaluation. Tables below report the link prediction results, with models trained over real ($\mathcal{G}$) and synthetic graphs for Cora and Citeseer networks (see Experimental Setup in Subsection 6.1). These additional results also confirm the effectiveness of our proposed tool for fair graph generation.
| Cora | AUC (%) | $\Delta _ {SP} $(%) | $\Delta _ {EO} $(%) |
| :--- | :----: | :----: | ---: |
| $\mathcal{G}$ |**94.92** | 27.71 | 11.53 |
| VGAE | 92.51| 34.99 | 9.40|
| FairWire | 92.00| **5.08** | **2.07**|
| Citeseer | AUC (%) | $\Delta _ {SP} $(%) | $\Delta _ {EO} $(%) |
| :--- | :----: | :----: | ---: |
| $\mathcal{G}$ |**95.76** | 29.05 | 9.53 |
| VGAE | 92.30| 33.79 | 9.62|
| FairWire | 92.79| **4.05** | **1.88**|
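As a side note on the metrics tabulated above: $\Delta_{SP}$ and $\Delta_{EO}$ are the standard statistical parity and equal opportunity differences. A minimal illustrative sketch of how they are typically computed for binary predictions follows; the function names and per-sample group labels are our own illustrative choices, not the authors' implementation.

```python
import numpy as np

def delta_sp(y_pred, s):
    # Statistical parity difference: |P(yhat=1 | s=0) - P(yhat=1 | s=1)|
    return abs(y_pred[s == 0].mean() - y_pred[s == 1].mean())

def delta_eo(y_pred, y_true, s):
    # Equal opportunity difference: |P(yhat=1 | y=1, s=0) - P(yhat=1 | y=1, s=1)|
    pos = y_true == 1
    return abs(y_pred[pos & (s == 0)].mean() - y_pred[pos & (s == 1)].mean())
```

For link prediction, one common convention is to let the group label `s` of a candidate edge indicate whether its two endpoint nodes share the sensitive attribute (intra- vs. inter-group edges).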
**Q2:** Thank you for this question.
We would like to clarify that for graph generation, we present our results for both link prediction (see Table 3) and node classification (see Table 4). In addition to graph generation, we also present supervised link prediction results in Table 2 to exemplify another use of the proposed regularizer. In fact, our regularizer can be employed for any training scheme outputting edge probabilities (e.g., supervised link prediction, graph generation).
**Q3:** First, we would like to clarify that our link prediction and node classification results in Tables 3 and 4 are obtained over synthetic graphs created via generative models. In addition to them, we also wanted to demonstrate the effectiveness of the proposed regularizer for a different learning framework (i.e., supervised link prediction, Table 2) to show its versatile use.
Apart from this clarification, we want to thank you for your insightful question. While it has been demonstrated that learning over real graphs for supervised tasks leads to algorithmic bias due to the structural bias in graphs [11], use of synthetic graphs makes the fairness issue more complex and challenging to resolve. As generative models learn to mimic the frequent patterns in real data (i.e., well-represented groups), there is a high probability that they will overlook the patterns exhibited within under-represented groups. Hence, they are prone to amplify the already existing structural bias [48]. We also confirmed this structural bias amplification empirically via our results in Table 1, which shows that the use of generative models worsens all fairness metrics over four different real-world datasets and motivates our proposed framework.
**Q4:** In general, for all fairness-aware interventions, we expect to observe a fairness/utility trade-off [R2, R3], as bias mitigation algorithms introduce a fairness-related metric to consider in addition to utility, which typically leads to a solution that is not optimal for utility-only considerations. Thus, the observed utility/fairness trade-off is a natural outcome of introducing the proposed fairness regularizer (Eq. 3) in training to make it fairness-aware. Note that a similar trade-off can also be observed for our fairness-aware baselines (Adversarial, FairAdj, FairDrop) as well. Our results in Tables 2, 3, and 4 demonstrate that FairWire typically leads to the best utility/fairness trade-off compared to these works.
**Q5:** In fairness literature, sensitive attributes are defined as the ones on which we do not want our algorithm’s predictions to depend for fair decision making. For example, recidivism predictions made by a ML model should not be related to the ethnicity of the convicts, making ethnicity the sensitive attribute and different ethnic groups the sensitive groups. Overall, the selection of sensitive groups is mainly governed by the application in which the learning model is used.
In most of the fair ML literature, it is a common practice to assume that the sensitive attributes are known and given, see [7, 24, 27]. Furthermore, the real-world datasets we employed in our experiments and the sensitive groups within them have been used in existing fairness-aware graph ML works, see e.g., [11, 36, 27, 19].
We hope this explanation addresses your concern.
---
[R1] T. N. Kipf et al., “Variational graph auto-encoders,” NeurIPS Workshop on Bayesian Deep Learning, 2016.
[R2] S. Dehdashtian et al., "Utility-Fairness Trade-Offs and How to Find Them," CVPR, 2024.
[R3] M. Pannekoek et al., "Investigating trade-offs in utility, fairness and differential privacy in neural networks," arXiv preprint arXiv:2102.05975 (2021).
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors satisfactorily addressing my concerns. The clarification makes sense and I have raised the presentation score and the overall score. | null | null | Rebuttal 1:
Rebuttal: We would like to thank the Reviewers for their detailed reviews and constructive suggestions. We have addressed the questions raised by the Reviewers, and presented the detailed responses in a point-to-point manner.
Pdf: /pdf/5154905b7d07d3a03ad5d5a8bc02cf7ec9fd2279.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Dissecting the Interplay of Attention Paths in a Statistical Mechanics Theory of Transformers | Accept (poster) | Summary: This paper proposes a way to analyze the interplay of attention paths. Statistical analysis is provided, and experiments are performed to verify the theory.
Strengths: The idea considered in this paper is interesting, and the experiments seem to align with the theoretical justifications.
Weaknesses: [1] This paper modifies the transformer model compared to the standard transformer. While changing the activation to linear seems to be acceptable for the ease of the derivation, the critical change in what is fed in the later layers seems strong: It significantly simplifies the format of equation (7) and the behavior of the considered transformer can be quite different from the standard transformer. The experiments are using two attention layers, which is significantly different from the real practice. It remains doubtful whether the analysis is still correct when using a deeper standard transformer.
[2] The writing of this paper can be improved in its theory parts: there is no formal theory statement, and readers have to trust the authors that equations (9)-(13) are what is used in the analysis. Without a formal theory statement, it is even more difficult to track the derivations in the appendix: readers have to go through every detail to understand what the authors intend in (9)-(13).
[3] The writing of the paper can be improved in the experiment part: for example, it is not clear whether the authors implement a transformer using the considered architecture, or just a standard transformer. For image data, it is not clear whether an additional embedding is used or not. In addition, the training details of the transformer are missing.
Technical Quality: 3
Clarity: 1
Questions for Authors: Please use a formal theoretical claim to express the key quantity of interest.
Confidence: 3
Soundness: 3
Presentation: 1
Contribution: 2
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the precious feedback, which helped us to improve the communication of our findings.
At the moment, however, we believe that our contributions have not been evaluated in the proper context.
In particular, the reviewer seems to interpret our work as providing a method of analysis and its application. Instead, our work belongs to a more theory-focused line of research aimed at providing an analytical understanding of deep learning in the form of *exact analytical results*—in our case, an exact expression for the network's predictor on test examples.
Before reading this response, we kindly ask the reviewer to read our global response, which should help in better contextualizing our work.
2. *[...] the critical change in what is fed in the later layers seems strong: It significantly simplifies the format of equation (7) and the behavior of the considered transformer can be quite different from the standard transformer. The experiments are using two attention layers, which is significantly different from the real practice. It remains doubtful whether the analysis is still correct when using a deeper standard transformer.*
The reviewer is correct in identifying the simplifications present in our model with respect to the standard transformer, which we openly acknowledge in our manuscript. However, we believe these simplifications should be considered in the context of our theory-focused line of research. At the current state of the art of deep learning theory, simplifications need to be made, if exact analytical results are to be derived. In this context, our work actually takes important steps forward with respect to the simplified models considered in other comparable works (see global response for details). In particular, although clearly a simplification, our assumption on what is fed to the attention heads allows us to analytically tackle a multi-head multi-layer architecture, which remained inaccessible to previous works. This allows us to characterize a previously uncovered learning mechanism: attention paths interplay.
A very relevant question is whether this mechanism would still be present, when the attention is fed with the layer’s preactivation, rather than the bare input. It is reasonable to expect that the mechanism would persist, but that it would be harder to disentangle from the learning of the attention itself, which would now also depend on the value weights. Future work could build upon our results to tackle this exciting mathematical challenge. This consideration will be included in the revised version of the manuscript, as per request of reviewer Y9xT.
Regarding the experiments, again the rigorous theoretical framework limits the scale of testable networks. In the Bayesian framework, one needs to sample from the high-dimensional posterior distribution of network weights, which is much more computationally costly than gradient descent, and technically challenging for a larger number of layers/heads. Large-scale experiments are anyway out of scope in this context. Since, under the specified assumptions, our analytical results are exact for any depth, experiments are typically of small scale and serve two purposes: 1) to convince the reader, who may not want to check the rigorous derivations, that the analytical predictions are exact; 2) to provide minimal, easily interpretable examples illustrating the theoretical insights. We also invite the reviewer to check the scale of experiments considered in other comparable works, which we cite throughout the introduction.
3. *There is no formal theory statement [...].*
We thank the reviewer for their suggestion to improve the readability of our results. The revised manuscript will provide a formal theory statement. Currently, our theoretical results and related definitions are presented alongside their interpretation and implications for generalization. The revised manuscript will first present the results (eqs. 9-12) and assumptions in a compact and more formal statement, and only later discuss their interpretation.
4. *[...] it is not clear whether the authors implement a transformer using the considered architecture, or just a standard transformer. For image data, it is not clear whether additional embedding is used or not. In addition, the training details of the transformer is missing.*
We thank the reviewer for their important suggestions to improve the presentation of our experiments.
We always consider our transformer-like architecture, and never the standard transformer. This is because our work focuses on providing exact analytical results, rather than a method of analysis, so our experiments are focused on validating the theory and illustrating the predicted mechanisms on the considered architecture. The revised manuscript will make this clearer by explicitly referencing equations (1-4) defining the network, whenever it is mentioned in the experimental section.
When nothing is said explicitly, the network is the one considered by the theory, i.e. “trained” by sampling its weights from the posterior distribution Eq. 8. The revised manuscript will state this explicitly. Details on the sampling method are given in “Experiments, Section F” in the Appendix. When the network is trained with gradient descent, we already state this explicitly and refer to “Experiments, Section H.1” for training details.
For images, the input tokens consist of just the normalized pixel values. No additional embedding is used. This will be stated explicitly in the revised text.
We hope our response clarifies the scope of this work and highlights our contributions to advancing both the frontier of theoretical research on transformer-like models and deep learning theory in general. In our view, the current rating is unfair in light of these contributions. If you find our response convincing/useful, please consider amending the score. Thank you very much.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' effort in preparing the rebuttal. Could you provide a more detailed description of **"The revised manuscript will provide a formal theory statement"**? It would be great to have the statement here.
Thanks.
---
Rebuttal 2:
Title: Introductory comment to the theory statement
Comment: We thank the reviewer for giving us the opportunity to provide the formal theory statement, which we could not include in our original response due to length constraints. The theory statement will appear in the next comment. As we stated in the rebuttal, all parts of the manuscript about the interpretation and implications of our results will be put after this theory statement. Please refer to the submitted manuscript for any referenced equations which do not appear in the statement itself (i.e., Eqs. 1-8, which will appear in the manuscript before the theory statement).
Also note that we made the following changes in notation:
1. We renamed the query and key weights from $\{Q ^{\left(\ell\right)h},K ^{\left(\ell\right)h}\} _{\ell,h=1} ^{L,H}$ to $\{W _{Q} ^{\left(\ell\right)h},W _{K} ^{\left(\ell\right)h}\} _{\ell,h=1} ^{L,H}$, in order to avoid confusion with the kernel $K$.
2. We renamed the network predictor on a test example from $f ^{P+1}$ to $f ^{*}$.
3. We renamed the network inputs from $x ^{\left(0\right)}$ to $x$.
This new naming scheme will be consistently implemented throughout
---
Rebuttal 3:
Title: Theory statement
Comment: ### Theory statement ###
**Definitions.** Consider a training dataset consisting of $P$
inputs $x ^{\mu}\in\mathbb{R} ^{N _0\times T}$ and associated labels
$y ^{\mu}\in\mathbb{R}$, where $\mu=1,\ldots,P$. Call $X\coloneqq\{x ^{\mu}\} _{\mu=1} ^{P}$
the set of training inputs and $Y\in\mathbb{R} ^{P}$ the vector of
training labels with $\mu$-th component $y ^{\mu}$. Consider a network
defined by Eqs. (1-4) and in particular call $f ^*$ the network
output (Eq. 4) corresponding to a test input $x ^* \in\mathbb{R} ^{N _0\times T}$.
We remind the reader of the following network hyperparameters: the
input's embedding dimension $N _0$, the hidden layer's width $N$,
the number of tokens $T$, the number of attention heads per layer
$H$, and the number of attention layers $L$.
**Assumptions.** Assume the network weights $\Theta\coloneqq\left(V ^{\left(0\right)},\{V ^{\left(\ell\right)h}\} _{\ell,h=1} ^{L,H},a\right)$
are distributed according to the Bayesian posterior distribution defined
in Eq. 8, with temperature $\mathcal{T}>0$, while the query and key
weights $\{W _{Q} ^{\left(\ell\right)h},W _{K} ^{\left(\ell\right)h}\} _{\ell,h=1} ^{L,H}$
are fixed.
Assume $N,N _0,P\to\infty$, with $P/N\coloneqq\alpha\in\mathbb{R} ^{+}$
and $P/(N _0H ^{L})\coloneqq\alpha _0\in\mathbb{R} ^{+}$, where $\alpha$,
$\alpha _0$ as well as other size parameters $T,H,L\in\mathbb{N}$
are finite.
**Claim.** Under the above assumptions,
(1) the mean predictor under the posterior distribution (Eq. 8) is
given by
$$\mathbb{E}\left[f ^{*}\right]=k ^{\top}\cdot\left(K+\mathcal{T}\mathbb{I}\right) ^{-1}Y, \qquad (9) $$
where the average is w.r.t. the posterior distribution (Eq. 8).
The vector $k\in\mathbb{R} ^{P\times1}$ and the matrix $K\in\mathbb{R} ^{P\times P}$
are defined in terms of a kernel function $\mathcal{K}:\mathbb{R} ^{N _0\times T}\times\mathbb{R} ^{N _0\times T}\to\mathbb{R}$
as $k ^{\mu}\coloneqq\mathcal{K}\left(x ^{*},x ^{\mu}|U\right)$ and
$K ^{\mu\nu}\coloneqq\mathcal{K}\left(x ^{\mu},x ^{\nu}|U\right)$, for
$\mu,\nu=1,\dots,P$. The kernel function is given by
\begin{equation}
\mathcal{K}\left(x,x'|U\right)=\frac{1}{H ^{L}}\sum _{\pi,\pi'\in\Pi}U ^{\pi\pi'}C _{\pi\pi'}\qquad\mathrm{with}\qquad C _{\pi\pi'}\coloneqq\frac{1}{N _0}\xi ^{\pi}\left(x\right) ^{\top}\cdot\xi ^{\pi'}\left(x'\right)\, \qquad (10)
\end{equation}
where $\xi ^{\pi}\left(x\right)$ is the "attentioned input" corresponding
to an input $x\in\mathbb{R} ^{N _0\times T}$, along path $\pi\in\Pi$
(Eq. 7) and $\Pi$ is the set of all attention paths for a given architecture
with $|\Pi|=H ^{L}$.
The matrix $U\in\mathbb{R} ^{H ^{L}\times H ^{L}}$, called *order
parameter*, is a positive semi-definite matrix given by
\begin{equation}
U=\underset{\tilde{U}}{\mathrm{argmin}} \ S(\tilde{U};X,Y)\, \qquad (11)
\end{equation}
where the scalar function $S$ called the *action* is defined
as
\begin{equation}
S(U;X,Y)=\mathcal{L}(U)+\alpha\mathcal{E}(U;X,Y)\, \qquad (12)
\end{equation}
The scalar function $\mathcal{E}$, which we call the *energy*,
is given by
\begin{equation}
\mathcal{E}(U;X,Y)=\frac{1}{P}\ln\det\left(K(X,X|U)+\mathcal{T}\mathbb{I}\right)+\frac{1}{P}Y ^{\top}\cdot\left(K(X,X|U)+\mathcal{T}\mathbb{I}\right) ^{-1}\cdot Y, \qquad (13)
\end{equation}
where $K\coloneqq K(X,X|U)$ is the $P\times P$ training kernel matrix,
defined according to Eq. (10). The expression for the scalar function
$\mathcal{L}$, which we call *entropy*, is lengthy and is given
in Appendix B.1.
(2) In the particular case of a single head per layer $H=1$, $U$ is a scalar, and the entropy assumes the simple form $\mathcal{L}\left(U\right)=\sigma ^{-2\left(L+1\right)}U-\ln\left(U\right)$,
where $\sigma ^{2}$ is the variance of the Gaussian prior on the network
weights $\Theta$ (see Eq. 8).
(3) For general $H$, $\mathcal{L}\left(U\right)$ is minimized by
$U ^{\pi\pi'}=\sigma ^{2\left(L+1\right)}\delta _{\pi,\pi'}$,
which therefore is always the solution of Eq. (11) in
the GP limit defined by $\alpha\to0 ^{+}$.
(4) The matrix $U$ obeys the following relation
\begin{equation}
U ^{\pi\pi'}=\frac{1}{N}\mathbb{E}[V _{\text{eff}} ^{\pi}\cdot V _{\text{eff}} ^{\pi'\top}], \qquad (14)
\end{equation}
where $V _{\text{eff}} ^{\pi}\in\mathbb{R} ^{1\times N}$ are the effective weights along path $\pi$ (Eq. 6).
**Derivation:** See Appendix C.
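As an informal illustration (our addition, not part of the authors' formal statement): once the order parameter $U$, and hence the kernel, is known, the mean predictor of Eq. (9) is computed exactly as in kernel ridge regression. A minimal NumPy sketch, with hypothetical names:

```python
import numpy as np

def mean_predictor(k, K, Y, temperature):
    # E[f*] = k^T (K + T I)^{-1} Y, cf. Eq. (9); solve the linear
    # system rather than forming the explicit matrix inverse.
    P = K.shape[0]
    return k @ np.linalg.solve(K + temperature * np.eye(P), Y)
```

Here `k` is the test-train kernel vector, `K` the $P\times P$ training kernel matrix, `Y` the training labels, and `temperature` the posterior temperature $\mathcal{T}>0$.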
---
Rebuttal Comment 3.1:
Comment: I appreciate the authors for providing the detailed revised material for the theory. I still feel that it is not something we would usually refer to as a "theoretical statement". However, if treating the results as a new analysis tool, it is an interesting contribution. I have raised my score to 5.
---
Rebuttal 4:
Comment: Thank you very much for taking the time to reassess the value of our contribution and for raising the score. While we hoped for a higher score, we respect your perspective and are willing to further improve our theoretical statement based on your feedback.
We understand that the reviewer may be referring to theoretical statements in the field of mathematics. In this case, we would like to clarify that our work adopts the methods and presentation style of theoretical physics, according to which our results have the validity of a predictive theory, rather than a mere analysis tool.
Theoretical physics has a long history in developing theories of artificial neural networks [1-5], whose contribution has always been of high relevance to conferences like NeurIPS (see, e.g., [6-11]). The field of theoretical machine learning is growing quickly, with different methods from mathematics, physics, and computer science, and each discipline has its own distinct goals and ways of communicating results.
We recognize the importance of making our work accessible to researchers across these disciplines and are committed to refining our presentation in this direction. We welcome any further feedback to help us improve our work.
### References ###
[1] Hopfield, J. J. Proc. Natl Acad. Sci. USA 79, 2554–2558 (1982)
[2] Amit, D. J., Gutfreund, H. & Sompolinsky, H. Phys. Rev. Lett. 55, 1530–1533 (1985)
[3] Gardner, E. J. Phys. A 21, 257–270 (1988)
[4] Gardner, E. & Derrida, B. J. Phys. A 21, 271–284 (1988).
[5] For a review on recent works, see, e.g.: Bahri, Y. et al. Ann. Rev. Cond. Matt. Phys. 11, 501–528 (2019)
[6] Krogh, Anders, and John Hertz. Advances in neural information processing systems 4 (1991).
[7] Cortes, Corinna, et al. Advances in neural information processing systems 6 (1993).
[8] Saxe, A., McClelland, J., & Ganguli, S. (2014). Proceedings of the International Conference on Learning Represenatations 2014.
[9] Bordelon, Blake, Abdulkadir Canatar, and Cengiz Pehlevan. International Conference on Machine Learning. PMLR, 2020.
[10] Gerace, Federica, et al. International Conference on Machine Learning. PMLR, 2020.
[11] Lee, Jaehoon, et al. Advances in neural information processing systems 32 (2019). | Summary: The paper investigates Bayesian learning of the value weight matrices of a deep multi-head attention network without MLPs employing the back-propagating kernel renormalization (BPKR) [1] technique in the linear regime, where the training set size $P$ scales with the width $N$, i.e., $P/N = O(1)$. It finds that the network’s kernel is a sum of constituent kernels operating on different pairs of attention paths, rescaled based on their alignment with the target task. This renormalization enhances generalization compared to previously studied infinite-width (GP) limits. The paper validates its theoretical predictions through experiments on a synthetic task and in-context classification with simple image datasets, finding qualitative agreement with gradient-based training. Additionally, it shows that the theory’s predictions can be used to prune less relevant attention heads without significantly impacting performance.
[1] Li, Q. and Sompolinsky, H., 2021. Statistical mechanics of deep linear neural networks: The backpropagating kernel renormalization. Physical Review X, 11(3), p.031059.
Strengths: - **Originality:** The application of BPKR to attention models and the characterization of the network’s kernel as a task-relevant weighted sum of path-path kernels is novel.
- **Quality:** The statistical mechanics analysis is technically sound and validated with comprehensive numerical experiments on both synthetic and real data, demonstrating the robustness of the findings. The paper provides the code to reproduce its empirical results.
- **Clarity:** The paper is generally clear, although there are areas for improvement (see Weaknesses section).
- **Significance:** The paper considers simplified transformer architectures, which remain challenging for theoretical approaches. The developed theory extends beyond the GP limit, predicting task-adaptivity properties that are necessary to explain the success of modern learning algorithms and are not captured by infinite-width limits. While the insights are intuitive and may not significantly enhance the understanding of transformers or their success, the theory is quantitative and offers non-trivial generalization predictions.
Weaknesses: 1. **Strong assumptions:** The analysis relies on several very strong assumptions: (i) linearity of the network output in the value weights, (ii) applying the attention at any depth on the network input, (iii) considering frozen (and already learned) query and key matrices, and (iv) a system at equilibrium (i.e., a Gibbs distribution over the parameters of the model). These assumptions clearly limit the relevance of the results. Although some of the empirical results seem to suggest that the last two assumptions can be relaxed, the impact of (i) and (ii) on the conclusions remains unclear. It would be beneficial to discuss the implications of these assumptions further.
2. **Background:** The paper is dense and lacks sufficient background material to aid the reader in following it and understanding where the results come from. The BPKR technique and its assumptions and rationale are not adequately introduced. Additionally, the paper mentions the "network’s kernel" early on without really specifying which object it is considering (as there are various kernels in deep learning, e.g., NTK etc.). Providing a more detailed introduction and some intuition on BPKR, along with its assumptions and high-level steps in the main manuscript, would significantly improve clarity.
3. **Clarity and completeness:** Sec. 4.1.1 is not very clear and easy to follow. It would be helpful to explain the structure of the different heads within the main text, rather than repeatedly directing the reader to long appendices.
4. **Typos:** The manuscript contains several typos (“thermodinamic” in line 106, “generalizaton” in line 147, “task-specifc” in line 161, “cathegorized” in line 241, “taks” in line 262, “on this regard” in line 330).
Technical Quality: 3
Clarity: 2
Questions for Authors: 5. If I understand correctly, unlike the dense case, thanks to the transformer architecture, the kernel is not rescaled by a scalar renormalization variable, resulting in more interesting outcomes compared to [1]. Can you elaborate more on this?
6. In Fig. 3 (d), is the GD map obtained for the same network from which the trained query and key weights were obtained for the theory?
7. In general, in your experiments, do you always apply attention to the input at all depths of the network?
8. What do you mean by “efficiently” (line 57)?
[1] Li, Q. and Sompolinsky, H., 2021. Statistical mechanics of deep linear neural networks: The backpropagating kernel renormalization. Physical Review X, 11(3), p.031059.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The paper adequately lists its limitations in Section 5. I do not foresee any potential negative societal impacts arising from this study.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable time reviewing our work and for many positive comments. We’d like to address the reviewer’s remaining concerns as follows.
1. *The analysis relies on several very strong assumptions: (i) linearity of the network output in the value weights, (ii) applying the attention at any depth on the network input, [...] the impact of (i) and (ii) on the conclusions remains unclear. It would be beneficial to discuss the implications of these assumptions further.*
We agree with the reviewer that further discussion on assumptions (i) and (ii) would be beneficial.
Regarding assumption (i), we discuss in the paper a potential way forward, following the heuristic arguments in [Li, Sompolinsky, PRX, 2021]. There, the analytical results for a deep linear network are heuristically extended to a network with ReLU activations, by replacing the linear GP kernels with the ReLU GP kernels. Although the renormalization is more complex in our case, the renormalized kernel can likewise be seen as a linear combination, weighted by the order parameter, of many "GP" kernels, i.e., the path-path kernels. Therefore, we foresee how a similar argument could potentially be applied in our case. This suggestion, which was briefly mentioned in lines 330-332, will be discussed in more detail in the revised manuscript.
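For concreteness (a standard result we add here, not part of the original rebuttal): the ReLU GP kernel referenced above is, up to normalization, the degree-one arc-cosine kernel. For weights $w\sim\mathcal{N}(0,\mathbb{I})$ it has the closed form

$$\mathbb{E} _{w}\left[\mathrm{ReLU}(w ^{\top}x)\,\mathrm{ReLU}(w ^{\top}x')\right]=\frac{\lVert x\rVert\,\lVert x'\rVert}{2\pi}\left(\sin\theta+(\pi-\theta)\cos\theta\right),\qquad \theta=\arccos\frac{x ^{\top}x'}{\lVert x\rVert\,\lVert x'\rVert},$$

which, in the heuristic extension described above, would take the place of the linear inner-product kernels.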
Assumption (ii) is indeed a limitation of our model, when compared to the standard transformer. However, we would like to emphasize how this assumption allows us to consider an architecture featuring both multiple heads and multiple layers, enabling the characterization of attention paths interplay. The question is whether such a mechanism would still be present when relaxing assumption (ii), which seems reasonable to believe. The theoretical challenge, however, would be to disentangle the learning of attention paths interplay from the learning of the attention paths themselves, because now also the attention matrix would depend on the value weights. This discussion will be included in the revised manuscript. If the reviewer is interested, we provide further details in our response to reviewer 9bWs.
2. *If I understand correctly, unlike the dense case, thanks to the transformer architecture, the kernel is not rescaled by a scalar renormalization variable, resulting in more interesting outcomes compared to [1]. Can you elaborate more on this?*
The reviewer is perfectly right: the multi-head transformer architecture induces a non-trivial renormalization encoded in the matrix order parameter. This results in a crucial improvement in performance. Indeed, for deep linear fully connected architectures, where the order parameter is purely scalar, the renormalization only affects the variance of the predictor, while the mean predictor remains the same as in the GP limit. Conversely, a matrix kernel renormalization produces a change in the mean predictor, thereby improving its generalization performance with respect to the GP limit. A matrix order parameter cleverly combines specific architectural components – the attention paths in our case – offering deeper insight into the critical role of architecture in finite width networks.
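To make the contrast concrete, here is a small numerical illustration (our toy sketch, not taken from the paper): in the ridgeless limit $\mathcal{T}\to 0$, a scalar rescaling of the kernel leaves the GP mean predictor $k^{\top}K^{-1}Y$ unchanged, whereas reweighting the component kernels unequally, as a matrix order parameter can, changes it. The component kernels below are arbitrary stand-ins for the path-path kernels.

```python
import numpy as np

rng = np.random.default_rng(0)
P, P_test, N0 = 8, 3, 20

# Toy data; two component kernels play the role of the path-path kernels.
X = rng.standard_normal((P, N0))
X_star = rng.standard_normal((P_test, N0))
Y = rng.standard_normal(P)

def components(A, B):
    # Two simple PSD kernels: linear kernels on the two halves of the features.
    return [A[:, :10] @ B[:, :10].T, A[:, 10:] @ B[:, 10:].T]

def mean_predictor(weights):
    # Ridgeless GP mean predictor E[f*] = k^T K^{-1} Y, with the kernel
    # built as a weighted sum of the component kernels.
    K = sum(w * C for w, C in zip(weights, components(X, X)))
    k = sum(w * C for w, C in zip(weights, components(X_star, X)))
    return k @ np.linalg.solve(K, Y)

f_base = mean_predictor([1.0, 1.0])
f_scalar = mean_predictor([3.0, 3.0])  # scalar renormalization: K -> 3K
f_matrix = mean_predictor([3.0, 0.5])  # unequal reweighting of components

print(np.allclose(f_base, f_scalar))   # scalar rescaling: mean unchanged
print(np.allclose(f_base, f_matrix))   # unequal reweighting: mean changes
```

The scalar case cancels exactly, $3k^{\top}(3K)^{-1}Y = k^{\top}K^{-1}Y$, while the unequal weights realign the kernel and hence the predictions.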
We kindly invite the reviewer to also evaluate the importance of the above contribution for the theory of deep learning in general, independently of the transformer’s context. We elaborate on this point in our global response.
3. *[...] The BPKR technique and its assumptions and rationale are not adequately introduced [...] the paper mentions the "network’s kernel" early on without really specifying which object it is considering [...].*
We thank the reviewer for this precious feedback, which helps us to improve the communication of our findings. The new version of the manuscript will integrate the reviewer’s suggestions by:
- a. Specifying more precisely which kernel we refer to.
- b. Outlining the rationale behind the BPKR method: explaining how the expectation under the Gibbs distribution of network weights can be reduced to an expectation over a lower-dimensional distribution of macroscopic order parameters, obtained by gradual integration of the weights, and how the reduced expectation can then be solved via the saddle-point method in the thermodynamic limit.
- c. Clarifying the significance of the thermodynamic-limit assumptions.
4. *It would be helpful to explain the structure of the different heads within the main text, rather than repeatedly directing the reader to long appendices.*
We welcome the reviewer’s suggestion to improve the clarity of Section 4.1.1. The revised manuscript will contain an explanation of the key features of the different heads, while keeping the mathematical definitions in the appendix. In particular, we will explain how the first good head makes use of the Markov nature of the task by attending only to nearby tokens and checking whether they match, while the second good head performs uniform attention.
5. *In Fig. 3 (d), is the GD map obtained for the same network from which the trained query and key weights were obtained for the theory?*
Yes. The revised text will explain this more clearly.
6. *In general, in your experiments, do you always apply attention to the input at all depths of the network?*
The attention matrix $\Omega^{(\ell)}$ is always a function of the bare input $x^{(0)}$, regardless of the layer $\ell$ (see Eq. 3). The attention matrix is, however, applied to that layer's preactivation $x^{(\ell)}$ (see Eq. 2).
7. *What do you mean by “efficiently” (line 57)?*
We mean that we prune the less relevant heads so that the loss in performance is minimal. We have rewritten the sentence as follows: “we show that a trained network can be reduced in size with minimal performance loss.”
We hope our response provides clarifications to all the concerns the reviewer has raised. If you find our response convincing/useful, please consider increasing the score. Thank you very much.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed answers to the reviews, which addressed my initial comments. I personally appreciate the theoretical physics approach of the work, particularly the formulation of the transformer kernel as a task-dependent combination of path-path kernels. Furthermore, I recognize that making (sometimes strong) assumptions is necessary to advance our theoretical understanding of deep learning systems. With the proposed changes, I recommend acceptance and have adjusted the score accordingly. I encourage the authors to include all the new discussions in the revised version.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your response and the updated score. We are glad to hear that the reviewer found our response useful. We will make sure to include these discussions in the final version. | Summary: This paper provides an interpretation of wide (large embedding dimension, $N$) multi-head attention-only transformers solving in-context learning tasks as performing high-dimensional kernel combining. The paper derives the exact statistics of the predictor when the number of training examples $P$ and the width $N$ tend to infinity with $P/N$ converging to a finite constant. The main result is the revelation that the predictor statistics can be expressed as the weighted sum of independent kernels, each pairing two attention paths through the multiple heads across the layers, with weights that depend on the relevance of the path to the task to be solved. Capitalising on this result, the paper proposes a network size-reduction method by efficient pruning of certain attention paths with marginal performance loss.
Strengths: (1) The paper introduces a novel way of looking at generalization capabilities of a transformer through the lens of the statistical mechanics theory in finite thermodynamic limit.
Specifically, the result shows that transformer-like architectures perform a task-relevant kernel combining, where each kernel is based on a path through the multi-head, multi-layer network. This is a crucial direction that offers valuable insights into the nature of the attention mechanisms.
(2) The proofs are clearly written with the necessary steps (barring some steps like saddle-point methods) to follow.
(3) The experimental results (Figure 3) greatly help in understanding the theoretical results (Section 3) better.
(4) The contents and organisation of the paper are easy to understand.
Weaknesses: (1) One minor weakness is that the experiments are limited to binary classification.
Technical Quality: 4
Clarity: 4
Questions for Authors: (1) I am curious about the authors' thoughts on what can happen when the attention is computed from the outputs of the previous layer and not always from the bare inputs?
(2) In order to prune the attention paths, the order parameters $U_{\pi, \pi'}$ (Eq. (13)) must be computed, which involves averaging over the Gibbs distribution (Eq. (8)). How can this be computed in real architectures?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: (1) The limitations are already stated towards the end of the paper: (i) it does not consider the learning of query-key matrices, which is a crucial step in the functioning of the attention mechanism; (ii) the attention is always computed from the bare inputs, which limits the applicability of the theory to practical transformers. However, these do not reduce the value of the contribution of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable time reviewing our work, for the many positive comments, and for the intriguing questions.
Below are our replies to the reviewer’s questions, as well as minor concerns.
1. *One minor weakness is that the experiments are limited to binary classification.*
We agree that considering a network with multiple outputs would be interesting, and readily implementable within our theory. We would like to share with the reviewer why we deliberately decided not to include such experiments in this first work exploring our theory. The renormalization effect coming from having M multiple outputs is the same for any architecture (e.g. the same as in deep linear networks [Li, Sompolinsky, PRX 2021]), and results in the order parameter gaining an additional pair of "output indices" ranging from 1 to M. In this work, we wanted to focus exclusively on illustrating the renormalization effects that are specific to the transformer architecture, namely the combination of attention paths. Exploring the interplay between the two renormalization effects (multiple outputs and multiple paths) could be the topic of future work, with potentially exciting applications to language and next-token prediction tasks. One caveat in this context is that one would need to be careful with a too-large number of outputs M, which may spoil the thermodynamic limit, since the latter assumes M finite.
2. *I am curious about the authors' thoughts on what can happen when the attention is computed from the outputs of the previous layer and not always from the bare inputs?*
It seems reasonable to believe that an attention paths combination mechanism would still persist in this case. The theoretical challenge, however, would be to disentangle the learning of attention paths interplay from the learning of the attention paths themselves. Indeed, when the attention is a function of the bare input, our results for the renormalized kernel
$$K=\sum_{\pi,\pi'\in\mathrm{\Pi}}U^{\pi\pi'}C_{\pi\pi'}$$
show that the value weights' statistics enter only in determining the order parameter $U$, that is, the interplay between attention paths. If the attention were instead a function of the previous layer's output, the attention path kernels $C$ would also depend on the value weights, which would then affect the learning of the attention paths themselves. In other words, the value weights would influence performance in two key ways: (i) by optimizing the representation of inputs to the attention heads, and (ii) by improving the recombination of attention paths.
Obtaining analytical progress in this scenario is challenging, primarily due to the introduction of non-linearities in the trained weights through the attention’s softmax. A first step to advance in this computation could be to consider linear attention heads, which may simplify the analysis.
This discussion will be included in the revised manuscript, as also prompted by a question from reviewer Y9xT.
3. *In order to prune the attention paths, the order parameters 𝑈𝜋,𝜋′ (Eq. (13)) must be computed, which involves averaging over the Gibbs distribution (Eq. (8)). How can this be computed in real architectures?*
We would like to make sure we address what the reviewer means by "real architectures", so let us consider a few scenarios below:
- a. For our model in the Bayesian framework, the order parameter can be equivalently computed by either solving the self-consistent equation (as in figure 3d “theory”), or sampling the posterior (as in figure 3d “sampled”).
- b. For our model trained with gradient descent, one can simply take the learned query and key weights and plug them into procedure (a) to obtain the order parameter for the value weights. Another, more empirical approach that we did not try would be to obtain multiple "samples" of the value weights by retraining the network with gradient descent, with the query and key weights fixed to those from the first training run (this is essentially what we did to compute the order parameter in figure 3d "gradient descent", except that there we used a single "sample").
- c. For the more standard transformer with attention as a function of the bare input and trained with gradient descent, one could still do something similar to (b). The difference would be that now also the attention matrix $\Omega$ would depend on the value weights. Instead of taking the learned query and key weights, and plugging them into procedure (a), one could just take the entire attention matrix $\Omega$ (as computed on each example from the network's activations) and plug it into procedure (a).
- d. If we also add nonlinear MLP blocks in between each head, one could still apply the same procedure as in (c), but, upon arriving at step (a), one could only proceed by sampling the posterior; the option of solving the self-consistent equation would not be available, since that equation is not defined for the nonlinear case. A possible way forward could be to heuristically extend the self-consistent equation to the nonlinear case, using the same heuristic argument as in [Li, Sompolinsky, PRX, 2021]. There, the analytical results for a deep linear network are heuristically extended to a network with ReLU activations by replacing the linear GP kernels with the ReLU GP kernels. Although the renormalization in our case is more complex, the renormalized kernel can likewise be seen as a linear combination, weighted by the order parameter, of many "GP" kernels, i.e., the path-path kernels. We therefore expect that a similar argument could be applied in our case.
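To make the sampling idea in (a)-(b) more concrete, here is a minimal Monte Carlo sketch of estimating the order parameter from the relation $U^{\pi\pi'}=\frac{1}{N}\mathbb{E}[V_{\text{eff}}^{\pi}\cdot V_{\text{eff}}^{\pi'\top}]$. The random "posterior samples" of the value weights and the product convention for the effective path weights are toy stand-ins we introduce for illustration, not the paper's exact definitions:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
L, H, N = 2, 2, 30           # layers, heads per layer, hidden width
n_samples = 200              # "posterior samples" of the value weights

paths = list(itertools.product(range(H), repeat=L))  # the H**L attention paths
U = np.zeros((len(paths), len(paths)))

for _ in range(n_samples):
    # Stand-in for one posterior sample of value weights and readout.
    V = rng.standard_normal((L, H, N, N)) / np.sqrt(N)
    a = rng.standard_normal((1, N)) / np.sqrt(N)
    # Effective weights along each path: readout times the product of the
    # value matrices of the heads visited by the path (toy convention).
    V_eff = []
    for path in paths:
        W = np.eye(N)
        for ell, h in enumerate(path):
            W = V[ell, h] @ W
        V_eff.append((a @ W).ravel())
    V_eff = np.stack(V_eff)              # shape (H**L, N)
    U += V_eff @ V_eff.T / N             # accumulate (1/N) V_eff V_eff^T
U /= n_samples

print(U.shape)  # (4, 4): one entry per pair of attention paths
```

Since each sample contributes a Gram matrix, the estimate is symmetric and positive semi-definite by construction, as the order parameter must be.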
4. *...barring some steps like saddle-point methods*
The revised manuscript will have an improved explanation of the saddle point technique. We ask the reviewer, if possible, to help us by further elaborating on the unclear points.
Once again, we thank the reviewer for this thorough review and the very positive rating.
---
Rebuttal 2:
Comment: We noticed a typo in our reply, point 3.c :
"For the more standard transformer with attention as a function of *the bare input*" -> "For the more standard transformer with attention as a function of *its layer's preactivations*". | Summary: This work studies the mechanism of attention in deep Transformers. The theory shows that the prediction process can be expressed as a combination of kernels of different attention paths. Experiments are conducted to verify the theory, which also motivates a pruning method on different attention heads.
Strengths: 1. The mechanism of the attention path is interesting and reasonable.
2. The theory and experiments support each other and make sense.
Weaknesses: 1. The experiments are simple. It will be better to verify the mechanism on larger datasets, larger models, and more complicated tasks. Experiments use Transformers of 2 or 3 layers, which are not deep. However, the abstract claims a "deep multi-head self-attention network".
2. The presentation is somehow weird. The order of figures in Figures 2 and 3 is strange. Also, Figures 2 and 3 can be divided into multiple figures, since the experiments in these subfigures are about different things.
3. Some references on the theoretical mechanism of Transformers in learning are missing.
Olsson et al., 2022. In-context Learning and Induction Heads.
Li et al., ICML 2024. How Do Nonlinear Transformers Learn and Generalize in In-Context Learning? (This work also includes model pruning based on the mechanism of Transformers)
Nichani et al., ICML 2024. How Transformers Learn Causal Structure with Gradient Descent.
Reddy et al. ICLR 2024. The mechanistic basis of data dependence and abrupt learning in an in-context classification task.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Can you have some discussion about the following related work about model pruning? This work also finds that some heads can be pruned during the inference. However, their experiments are implemented by real-world state-of-the-art LLMs. Their results are more complete in terms of performance, efficiency, and on more challenging tasks.
Liu et al., ICML 2023. Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: There is no potential negative societal impact of their work. The extension of the proposed method and analysis to larger models is the biggest concern.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the precious feedback, which helped us to improve the communication of our findings.
At the moment, however, we believe that our contributions have not been evaluated in the proper context. Before reading this response, we kindly ask the reviewer to read our global response, which should help in better contextualizing our work.
1. *The experiments are simple. It will be better to verify the mechanism on larger datasets, larger models, and more complicated tasks. Experiments use Transformers of 2 or 3 layers, which are not deep. However, the abstract claims a "deep multi-head self-attention network".*
As explained in the global response, our work is theory-focused and provides exact analytical expressions for the network predictor. In this context, experiments are typically designed to serve two purposes: 1) To convince the reader, who may not want to check the details of the theoretical derivations, that the analytical predictions are exact. 2) To provide minimal, easily interpretable examples illustrating the theoretical insights. Specifically, we focused on experiments with only 2 or 3 layers and 2 to 4 heads because the resulting order parameter remained visually interpretable in that case.
In the Bayesian framework adopted here, numerically validating our derivations on larger models and datasets would require an exponentially larger computational cost (much larger than with gradient descent), besides being technically challenging, because it requires sampling the network weights from a high-dimensional posterior distribution (Eq. 8). We believe this to be out of the scope of our work: our theoretical results are rigorous and apply at any depth, under the thermodynamic limit specified by the theory.
We acknowledge that the term “deep” is used in the literature with varying meanings. In the context of theory, where analytical results have predominantly been obtained for single-layer attention, we followed the convention of using “deep” to emphasize that our derivations apply to models with an arbitrary number of layers. However, if the reviewer believes it is more appropriate, we are willing to change the term “deep” in the abstract to “multi-layer.”
2. *The presentation is somehow weird. The order of figures in Figures 2 and 3 is strange. Also, Figures 2 and 3 can be divided into multiple figures, since the experiments in these subfigures are about different things.*
We thank the reviewer for spotting this problem. We have changed the order in both figures and made them more readable. The updated figures are available in the joint PDF. We have decided not to split the figures, because all the subplots are closely related and together show how the path combination mechanism affects the performance. For instance, in order to understand both the accuracy trend and the pruning mechanism in Fig. 3, it is useful to look at the structure of the order parameter.
3. *Some references on the theoretical mechanism of Transformers in learning are missing.*
We thank the reviewer for pointing us towards these missing references. These will appear in the revised version as follows. Refs. [Li et al., ICML 2024; Nichani et al., ICML 2024] will be included in the Introduction, in the discussion of other theory-focused, closely related works. Refs. [Olsson et al., 2022; Reddy et al. ICLR 2024] are also highly relevant, but more suitable for the Discussion section, since they focus on empirical methods [Olsson et al., 2022] and phenomenological models [Reddy et al. ICLR 2024] that are more distant from the focus of our work on exact derivations of the network’s predictor.
4. *Can you have some discussion about the following related work about model pruning? This work also finds that some heads can be pruned during the inference. However, their experiments are implemented by real-world state-of-the-art LLMs. Their results are more complete in terms of performance, efficiency, and on more challenging tasks.*
We thank the reviewer for pointing us towards this interesting work [1], which will be discussed in the Discussion section of the revised manuscript.
However, we would like to stress that paper [1] is very different in scope and methodology from our work. Our work is theory-focused and aims at providing exact analytical expressions that describe the network's learning mechanisms. The cited work [1] is application-focused and aims at providing a competitive pruning algorithm. In our work, the application to head pruning is neither the main result nor the focus, and we make no claim about it being competitive: we only use it to elucidate the kernel combination mechanism and its interpretation.
We believe it is unfair to compare our experiments to those appearing in [1]. As we explain in the global response, our work is part of a line of research aimed at achieving a better analytical understanding of deep learning. Given the current state of the theory, simplifying assumptions are typically necessary to obtain exact analytical results. Under these constraints, it is not fair to compare the standard transformer used in practice to our simplified model. Real-world applications would anyway be out of scope for our work. We direct the reviewer to our global rebuttal, in which we explain how, when seen in the right context, our work presents novel theoretical insights, as well as several advancements over the simplifying assumptions of other closely related theoretical works.
We hope our response clarifies the scope of this work, and highlights our true contributions on both advancing the frontiers of the theoretical research on transformer-like models, and deep learning theory in general. In our view, the current rating is unfair in light of these contributions. If you find our response convincing/useful, please consider amending the score. Thank you very much.
[1] Liu et al., ICML 2023. Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. It is acceptable that this work is theory-focused. Then, the contribution should be Section 3. I have the following questions since I hope the theoretical part to be novel.
1. I roughly checked Appendix C. I do not think the analysis of Transformer architecture is special here. The references [23, 26] mentioned in line 131 are also for linear networks rather than Transformers. My concern is that the analysis here oversimplifies Transformer models into linear models, which weakens the theoretical novelty.
2. What are the theoretical challenges you overcome in Section 3? In other words, why existing methods cannot solve such a problem?
3. The presentation of Section 3 can be improved. It is better to present equations of Section 3 into Theorems/Propositions/Lemmas. Otherwise, it is difficult to find the point in this section.
4. What is the practical significance of this work in real-world applications?
---
Rebuttal 2:
Comment: Thank you very much for taking the time to reassess the value of our work based on our theoretical results. We would like to address your specific questions below. For additional details, please also refer to our general response, where we provide an overview of our contributions, compared to previous comparable works.
2. *What are the theoretical challenges you overcome in Section 3? In other words, why existing methods cannot solve such a problem?*
Currently, works aiming to provide analytical results for learning in transformers need to consider simplified architectures and/or training regimes (cf. Introduction). The methods adopted to study such simplified models are typically well established (e.g. the NN-Gaussian Process equivalence, the teacher-student setting, dynamical mean-field theory, BPKR, etc.). The challenge, however, is to identify simplified models that are analytically tractable, yet rich enough to offer insights related to specific features of transformers.
In this landscape, the challenges we overcome are:
- a. To consider a model beyond a single-head and/or single-layer architecture, allowing for the existence of attention paths—present in large number in actual transformers—and the characterization of their interplay.
- b. To characterize such a model beyond the Gaussian Process (GP) limit, in which the hidden weights remain random after learning, and therefore cannot learn any data-dependent structure, in particular any clever combination of attention paths.
By including the above features, our analysis reveals a previously undescribed mechanism of attention-path interplay, implemented by learned structures in the value weights, which our theory predicts and describes through the order parameter. As noted by both reviewers 9bWs and Y9xT, these data-dependent structures remained inaccessible to previous studies, which, even when considering multi-layer multi-head transformers, only did so in the GP limit.
1. *I roughly checked Appendix C. I do not think the analysis of Transformer architecture is special here. The references [23, 26] mentioned in line 131 are also for linear networks rather than Transformers. My concern is that the analysis here oversimplifies Transformer models into linear models, which weakens the theoretical novelty.*
The model we consider is not linear. It is linear in the value weights, but highly nonlinear in the query-key weights, through the attention operation. As we explain in the last part of Section 2, the model can be seen as a deep linear network in the value weights, applied, however, to a highly nonlinearly expanded input of dimension $N_0 H^L$: it reads from the $H^L$ attentioned inputs (i.e., the input transformed by each specific attention path), which are nonlinear functions of the original input of size $N_0$.
A more precise statement is that we exclusively characterize the learning of those weights in which the network is linear, i.e. the value weights (in this sense applying the BPKR technique found in [23, 26]). The other query-key weights, however, are still learned through gradient descent; simply, we do not attempt to describe their learning mechanism in our theory.
The value weights learn to combine architectural features specific to the transformer—the nonlinear attention paths—showcasing a mechanism of attention paths combination that is specific to this work. Also note that the learning of the linear value weights is itself highly nonlinear: this is what gives rise to the nontrivial combination mechanism, which better aligns the network kernel with the task and improves generalization, as shown by our theory and experiments.
4. *What is the practical significance of this work in real-world applications?*
As theory-focused research, we admit that the immediate implications of our results for practical applications are limited. Nevertheless, we believe that our work represents progress toward a more practically useful theory. Our order parameter is a simple measure of the learned weights, which provides useful information about the role and interplay of different attention paths. While work is still needed to extend its application to state-of-the-art networks, we illustrated its efficacy in our head pruning experiment (Sec 4.2.1), connecting our theoretical results in the Bayesian setting to more practical models trained with gradient descent.
---
Rebuttal Comment 2.1:
Comment: 3. *The presentation of Section 3 can be improved. It is better to present equations of Section 3 into Theorems/Propositions/Lemmas. Otherwise, it is difficult to find the point in this section.*
We thank you for your feedback, which helps us improve the communication of our findings. Currently the manuscript alternates the presentation of our results with their interpretation and implications for generalization performance. The revised manuscript will be reorganized to contain a paragraph with a formal theory statement, containing definitions, assumptions and results. The interpretation and discussion of such results will be given after such a statement. The statement is provided in the next comment.
Please refer to the submitted manuscript for any referenced equations which do not appear in the statement itself (i.e. Eqs 1-8, which will appear in the manuscript before the theory statement). Also note that we made the following changes in notation:
- 1. We renamed the query and key weights from $\{Q^{(\ell)h},K^{(\ell)h}\}_{\ell,h=1}^{L,H}$ to $\{W_Q^{(\ell)h},W_K^{(\ell)h}\}_{\ell,h=1}^{L,H}$, in order to avoid confusion with the kernel $K$.
- 2. We renamed the network predictor on a test example from $f ^{P+1}$ to $f ^{*}$.
- 3. We renamed the network inputs from $x ^{\left(0\right)}$ to $x$.
This new naming scheme will be consistently implemented throughout the revised manuscript.
As a minor note on terminology, please note that this work uses the approach of theoretical physics, which has a long-standing tradition in developing theories of neural networks. Following this approach, we provide our analytical results in the form of Results and Derivations, rather than Theorems and Proofs.
---
Reply to Comment 2.1.1:
Comment: ### Theory statement ###
**Definitions.** Consider a training dataset consisting of $P$
inputs $x ^{\mu}\in\mathbb{R} ^{N _0\times T}$ and associated labels
$y^{\mu}\in\mathbb{R}$, where $\mu=1,\ldots,P$. Call $X\coloneqq\{x^{\mu}\}_{\mu=1}^{P}$
the set of training inputs and $Y\in\mathbb{R} ^{P}$ the vector of
training labels with $\mu$-th component $y ^{\mu}$. Consider a network
defined by Eqs. (1-4) and in particular call $f ^*$ the network
output (Eq. 4) corresponding to a test input $x ^* \in\mathbb{R} ^{N _0\times T}$.
We remind the reader of the following network hyperparameters: the
input's embedding dimension $N _0$, the hidden layer's width $N$,
the number of tokens $T$, the number of attention heads per layer
$H$, and the number of attention layers $L$.
**Assumptions.** Assume the network weights $\Theta\coloneqq\left(V^{(0)},\{V^{(\ell)h}\}_{\ell,h=1}^{L,H},a\right)$
are distributed according to the Bayesian posterior distribution defined
in Eq. 8, with temperature $\mathcal{T}>0$, while the query and key
weights $\{W_Q^{(\ell)h},W_K^{(\ell)h}\}_{\ell,h=1}^{L,H}$
are fixed.
Assume $N,N _0,P\to\infty$, with $P/N\coloneqq\alpha\in\mathbb{R} ^{+}$
and $P/(N _0H ^{L})\coloneqq\alpha _0\in\mathbb{R} ^{+}$, where $\alpha$,
$\alpha_0$, as well as the other size parameters $T,H,L\in\mathbb{N}$,
are finite.
**Results.** Under the above assumptions,
(1) the mean predictor under the posterior distribution (Eq. 8) is
given by
$$\mathbb{E}\left[f ^{*}\right]=k ^{\top}\cdot\left(K+\mathcal{T}\mathbb{I}\right) ^{-1}Y, \qquad (9) $$
where the average is w.r.t. the posterior distribution (Eq. 8).
The vector $k\in\mathbb{R} ^{P\times1}$ and the matrix $K\in\mathbb{R} ^{P\times P}$
are defined in terms of a kernel function $\mathcal{K}:\mathbb{R} ^{N _0\times T}\times\mathbb{R} ^{N _0\times T}\to\mathbb{R}$
as $k ^{\mu}\coloneqq\mathcal{K}\left(x ^{*},x ^{\mu}|U\right)$ and
$K ^{\mu\nu}\coloneqq\mathcal{K}\left(x ^{\mu},x ^{\nu}|U\right)$, for
$\mu,\nu=1,\dots,P$. The kernel function is given by
\begin{equation}
\mathcal{K}\left(x,x'|U\right)=\frac{1}{H ^{L}}\sum _{\pi,\pi'\in\Pi}U ^{\pi\pi'}C _{\pi\pi'}\qquad\mathrm{with}\qquad C _{\pi\pi'}\coloneqq\frac{1}{N _0}\xi ^{\pi}\left(x\right) ^{\top}\cdot\xi ^{\pi'}\left(x'\right)\, \qquad (10)
\end{equation}
where $\xi ^{\pi}\left(x\right)$ is the ``attentioned input'' corresponding
to an input $x\in\mathbb{R} ^{N _0\times T}$, along path $\pi\in\Pi$
(Eq. 7) and $\Pi$ is the set of all attention paths for a given architecture
with $|\Pi|=H ^{L}$.
The matrix $U\in\mathbb{R} ^{H ^{L}\times H ^{L}}$, called *order
parameter*, is a positive semi-definite matrix given by
\begin{equation}
U=\underset{\tilde{U}}{\mathrm{argmin}} \ S(\tilde{U};X,Y)\, \qquad (11)
\end{equation}
where the scalar function $S$ called the *action* is defined
as
\begin{equation}
S(U;X,Y)=\mathcal{L}(U)+\alpha\mathcal{E}(U;X,Y)\, \qquad (12)
\end{equation}
The scalar function $\mathcal{E}$, which we call the *energy*,
is given by
\begin{equation}
\mathcal{E}(U;X,Y)=\frac{1}{P}\ln\det\left(K(X,X|U)+\mathcal{T}\mathbb{I}\right)+\frac{1}{P}Y ^{\top}\cdot\left(K(X,X|U)+\mathcal{T}\mathbb{I}\right) ^{-1}\cdot Y, \qquad (13)
\end{equation}
where $K\coloneqq K(X,X|U)$ is the $P\times P$ training kernel matrix,
defined according to Eq. (10). The expression for the scalar function
$\mathcal{L}$, which we call *entropy*, is lengthy and is given
in Appendix B.1.
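Up to additive constants and an overall factor, the energy of Eq. (13) is the negative log marginal likelihood of a Gaussian process with kernel $K(X,X|U)$ and noise variance $\mathcal{T}$, which is why minimizing it can be read as maximizing kernel-task alignment. A minimal numpy sketch (an illustration, not the authors' code):

```python
import numpy as np

def energy(K, Y, temp):
    """Energy of Eq. (13): (1/P) [ln det(K + T*I) + Y^T (K + T*I)^{-1} Y]."""
    P = len(Y)
    A = K + temp * np.eye(P)
    sign, logdet = np.linalg.slogdet(A)    # stable log-determinant
    assert sign > 0, "K + T*I must be positive definite"
    return (logdet + Y @ np.linalg.solve(A, Y)) / P
```

The two terms trade off: the log-determinant penalizes large kernels, while the quadratic form rewards kernels aligned with the targets $Y$.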
(2) In the particular case of a single head per layer $H=1$, $U$ is a scalar, and the entropy assumes the simple form $\mathcal{L}\left(U\right)=\sigma ^{-2\left(L+1\right)}U-\ln\left(U\right)$,
where $\sigma ^{2}$ is the variance of the Gaussian prior on the network
weights $\Theta$ (see Eq. 8).
(3) For general $H$, $\mathcal{L}\left(U\right)$ is minimized by
$U ^{\pi\pi'}=\sigma ^{2\left(L+1\right)}\delta _{\pi,\pi'}$,
which therefore is always the solution of Eq. (11) in
the GP limit defined by $\alpha\to0 ^{+}$.
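For $H=1$, the minimizer in (3) follows from the entropy in (2) by a one-line calculation:

```latex
\frac{\mathrm{d}\mathcal{L}}{\mathrm{d}U}
  = \sigma^{-2(L+1)} - \frac{1}{U} = 0
  \quad\Longrightarrow\quad
  U = \sigma^{2(L+1)},
```

and since the energy enters the action (12) with prefactor $\alpha$, this stationary point of $\mathcal{L}$ alone solves Eq. (11) in the GP limit $\alpha\to0^{+}$.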
(4) The matrix $U$ obeys the following relation
\begin{equation}
U ^{\pi\pi'}=\frac{1}{N}\mathbb{E}[V _{\text{eff}} ^{\pi}\cdot V _{\text{eff}} ^{\pi'\top}], \qquad (14)
\end{equation}
where $V _{\text{eff}} ^{\pi}\in\mathbb{R} ^{1\times N}$ are the effective weights along path $\pi$ (Eq. 6).
**Derivation:** See Appendix C.
---
Rebuttal 3:
Comment: Thank you for your reply. We would like to address your concerns and some misunderstandings.
As we explained in our previous comment, the model we consider is linear in the value weights, while it is nonlinear in the query-key weights (i.e. the attention paths). Since we apply the BPKR technique only to characterize the learning of the value weights, we are not making any approximation and our results are *exact* at any depth for the model under consideration. There is no “risk” of the theory breaking at larger depths since the network will always stay linear in the value weights, at any depth.
We also explained in our first rebuttal that simulating networks of larger depth is prohibitive in the Bayesian framework, but also out-of-scope for this theory-focused work since the theory is exact at any depth, and the experiments have the main purpose of illustrating its mechanisms in a minimal setting. We can however promise to attempt to simulate as-deep-as-possible networks for the appendix of the final version of the manuscript, though there is no reason to expect different results for the model under consideration.
A related question raised by Reviewer Y9xT that we find relevant here is whether our theory could be extended to a network that is nonlinear in the value weights, which we do not consider in this work. This is an interesting mathematical challenge, which future work could tackle by building upon our theoretical advancements (as discussed in our Discussion, and in our dialogue with reviewer Y9xT). We kindly ask the reviewer to evaluate our contributions in terms of these theoretical advancements, which have been highlighted in our global response, as well as by reviewers Y9xT and 9bWs. While we openly admit that deep learning theory is still far from analytically characterizing state-of-the-art transformers, we make several important steps forward compared to other related works.
Regarding the 3-layer network in Figure 6, we did not show the order parameter simply because the objective of the figure was different. We have the measurement of the order parameter for that network and, unsurprisingly given the exactness of the theory, we clearly see that for large alpha it contains strong off-diagonals, while for small alpha it is diagonal. This makes sense, as the order parameter needs to account for the boost in performance at larger alpha, which is shown in Figure 6. We will include these plots of the order parameter in Figure 6. | Rebuttal 1:
Rebuttal: We thank the reviewers for their valuable time and their constructive comments. All reviewers agree on two main points:
- Our theoretical contribution is “interesting” (15FG and 5PZg), “novel” (Y9xT and 9bWs) and “technically sound” (Y9xT).
- Experiments thoroughly validate our theoretical results.
However, some reviewers raised concerns about the scale of the experiments (5PZg, 15FG) and the simplified architecture compared to the standard transformer (Y9xT, 15FG).
While we acknowledge that our theoretical results involve several simplifications, we believe these are outweighed by the paper's novel contributions, especially when considered within the appropriate context and scope of this theory-focused work (as noted by 9bWs and Y9xT).
In this global response, we aim at better clarifying the context and scope of this work, and highlight important contributions currently overlooked in the reviews by 15FG and 5PZg.
We would first like to emphasize that our work is part of a line of research aimed at achieving a better analytical understanding of deep learning. It is important to underline that the focus is on obtaining *exact analytical results*—in our case, an exact expression for the network's predictions on test examples—rather than on applications. In particular, it is unfair to compare our results with those from heuristic and phenomenological approaches, which, even when inspired by theory, are less constrained by rigor and free to focus on any arbitrary architecture used in practice.
Given the current state of the art in exact analytical methods, simplifying assumptions are typically necessary to obtain analytical results. In this context, we believe our work provides considerable contributions in the two following directions:
1. **Advances in the analytical understanding of learning in transformer-like architectures.** While from an application-focused perspective, our model still appears simplified with respect to the standard transformer, from the perspective of theory it integrates several key features that were not considered in previous works. These are the combination of (cf. Sec. 1 “Introduction” for references):
- a. **A multi-head, multi-layer architecture.** Most previous works either considered single-head or single-layer models, which did not allow for a characterization of the interplay between attention paths.
- b. **A large number of training examples.** Previous works considering the standard transformer architecture typically do so in the Gaussian Process (GP) limit, in which the network width $N\to\infty$ while the number of examples $P$ stays finite (i.e. $P\ll N$ in practice). Here instead we consider the more realistic case of $P=\alpha N$ for finite $\alpha$. As shown in our work, this is fundamental to uncover interesting learning mechanisms, which are lost in the GP limit.
- c. **An exact derivation of the network predictor, independent of assumptions on the training data.** Even when including one of the features (a) or (b), previous works either provided results in the form of generalization bounds rather than exact expressions for the network predictor, and/or required specifying a model for the training data statistics. In contrast, our results apply to any realistic training dataset.
While we openly acknowledge that deep learning theory still has a long way to go toward analytically understanding transformers, we believe we have made important steps forward in these still underdeveloped directions. Importantly, we believe that what we lose by making certain simplifications in our model is outweighed by what we gain in terms of the novel features described above. For example, note that the simplification of feeding the attention heads with the bare input (which is of concern for reviewers Y9xT and 15FG) is what allows us to analytically tackle a multi-head multi-layer architecture, thereby uncovering the mechanism of attention paths interplay. Therefore, while surely a limitation, this assumption allows us to make a significant step forward on which future work can build, for example by relaxing it.
2. **Advances in the theory of deep learning in general.** While we focused on a transformer-like architecture due to its relevance to recent advancements in AI, our results have broader implications for the theory of deep learning. As mentioned above, exact derivations of the network predictor without assumptions on the dataset structure are typically limited to the network’s Gaussian Process (GP) limit. However, as shown in [Li, Sompolinsky, PRX 2021], it is only by going beyond this limit that the network’s hidden weights learn data dependencies. In the original deep linear network considered in [Li, Sompolinsky, PRX 2021], this phenomenon did not affect the mean predictor and only appeared as a scalar renormalization of the predictor’s variance. As correctly noted by Reviewer Y9xT, our work provides an analytically tractable yet reasonably powerful model in which the hidden weights learn a data-dependent structure that significantly enhances the performance of the mean predictor, which is now controlled by a matrix order parameter. There are currently only a few examples of such models in the literature [Li, Sompolinsky, NeurIPS 2022; Aiudi, et al., arXiv:2307.11807]. Furthermore our work provides a novel understanding of the order parameter as aimed at maximizing kernel-task alignment (the negative log-likelihood argument), allowing for a more direct interpretation of the renormalization phenomenon. In short, we believe our work makes an important contribution toward uncovering interesting learning mechanisms that can only emerge with a large number of training examples.
In summary, our work provides significant contributions to a theory-focused line of research that has always been of high relevance to NeurIPS, and which we hope to have better contextualized in our reply.
Pdf: /pdf/55a58c740acac29db8014f94d477c17513135870.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Uncovering Safety Risks of Large Language Models through Concept Activation Vector | Accept (poster) | Summary: The paper introduces a Safety Concept Activation Vector framework designed to identify and exploit vulnerabilities in LLMs by accurately interpreting their safety mechanisms. The authors develop an SCAV-guided attack method that enhances the success rate of generating harmful prompts and embedding-level attacks with optimized perturbations, requiring less training data.
Strengths: 1. The paper is well-written and easy to follow.
2. The evaluation is comprehensive and the results look promising.
3. The discussion on section 4.3 is insightful.
Weaknesses: The idea of inspecting the hidden representation of the model about the safety concept is not novel, which is already proposed in RePE. The difference is that the authors propose to use the SCAV to find the subspaces for benign and harmful prompts, while RePE uses the PCA. The results in Figure 3 look confusing. When only one pair of benign-harmful instruction is presented in the training data, the success rate of the SCAV is already over 80%. It is not clear why it can learn the subspaces so well with only one pair of data. With only one pair of data, the subspace boundary should be very uncertain. The authors should provide more explanation on this.
The second concern is that directly modifying the intermediate representation of the model may degrade the model's performance. Modifications to any internal layer will influence all subsequent layers' representations [1,2]. Without drift control, it is hard to guarantee that the model's performance will not be affected. The authors should provide more discussion on this. Also, the language-flaw rate judged by GPT-4 is not a good metric for this. I would suggest more principled metrics, such as a weighted average of bi- and tri-gram entropies.
[1] Locating and Editing Factual Associations in GPT
[2] MASS-EDITING MEMORY IN A TRANSFORMER
Other issues:
Although the paper claims that steps were taken to ensure annotator comfort, I think IRB approval or an equivalent is still needed, since the study involves human subjects and may cause harm to the annotators.
Technical Quality: 3
Clarity: 3
Questions for Authors: The transferability to GPT-4 is questionable, since the embedding editing only works for the local model, while GPT-4 certainly has a different embedding space. Could you give a reasonable explanation for that?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['Ethics review needed: Research involving human subjects']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your insightful and detailed comments, which enable us to better clarify our contributions and ethical considerations.
> **Weakness 1: Interpretability used in attacks by [1]**
>
We acknowledge that previous works like [1] have explored the use of interpretability to assist attacks. In contrast, our work distinctively focuses on developing a **more accurate and principled interpretation method**. Unlike existing approaches, our method:
1. Eliminates misleading heuristics (L128-140) and thus performs significantly better than existing techniques [1,2] (Tables 1 and 2), particularly under conditions of limited training data (Figure 3).
2. Is the first to support both embedding-level and prompt-level attacks, enhancing its applicability.
3. Removes the necessity for time-consuming hyperparameter tuning.
We will update our abstract and introduction to better highlight these distinctions.
> **Weakness 2: Confusion in Figure 3**
>
We have double-checked the result in Figure 3 and confirmed it is correct. One pair of data can lead to high performance because embeddings of malicious questions are well separated from those of safe questions (Rebuttal Table 1). In this case, we can learn a linear classifier **$P_m$** that accurately distinguishes embeddings of malicious questions from safe ones with one pair of data (Rebuttal Table 2) and use **$P_m$** to effectively guide the attacks.
**[Rebuttal Table 1] Distances between embeddings of malicious and safe questions ($d_{ms}$) exceed those within malicious ($d_{mm}$) or safe questions ($d_{ss}$) in LLaMA-2-7B-Chat 31th layer.**
| Type | Min | Max | Mean | Median |
| --- | --- | --- | --- | --- |
| $d_{mm}$ | 11.3 | 93.3 | 56.2 | 57.7 |
| $d_{ss}$ | 30.4 | 128.8 | 84.9 | 84.2 |
| $d_{ms}$ | 83.0 | 132.6 | 113.9 | 114.2 |
**[Rebuttal Table 2] Test accuracy of linear classifier $P_m$ using one pair of training data (LLaMA-2-7B-Chat)**
| Runs | Layer 10 | Layer 15 | Layer 20 | Layer 25 | Layer 30 |
| --- | --- | --- | --- | --- | --- |
| 1 | 60.7 | 92.4 | 96.6 | 94.2 | 94.3 |
| 2 | 79.1 | 96.4 | 96.3 | 96.8 | 95.9 |
| 3 | 82.3 | 97.5 | 96.0 | 96.6 | 95.8 |
| 4 | 86.7 | 95.6 | 96.6 | 96.3 | 93.2 |
| 5 | 70.4 | 97.1 | 93.7 | 95.1 | 94.5 |
| Variance | 107.27 | 4.13 | 1.49 | 1.23 | 1.27 |
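The intuition behind these tables — that a single benign–harmful pair suffices when inter-group distances dominate intra-group spread — can be illustrated with a purely synthetic numpy sketch (Gaussian clusters stand in for the real embeddings, and the difference-vector classifier below is a simplification of the learned $P_m$):

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_test, sigma = 64, 500, 0.3

# Synthetic clusters: inter-group distance roughly twice the intra-group
# spread, mimicking the ratio reported in Rebuttal Table 1.
delta = np.zeros(dim)
delta[0] = 6.0                                  # mu_mal - mu_safe
x_safe = sigma * rng.standard_normal(dim)       # one safe embedding
x_mal = delta + sigma * rng.standard_normal(dim)  # one malicious embedding

# One-pair linear classifier: difference vector with midpoint threshold.
w = x_mal - x_safe
b = -w @ (x_mal + x_safe) / 2.0

# Fresh test embeddings from both clusters.
safe_test = sigma * rng.standard_normal((n_test, dim))
mal_test = delta + sigma * rng.standard_normal((n_test, dim))
correct = np.sum(safe_test @ w + b < 0) + np.sum(mal_test @ w + b > 0)
accuracy = correct / (2 * n_test)
```

With well-separated clusters, even this crudest one-pair classifier generalizes almost perfectly, consistent with Figure 3 of the paper.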
> **Weakness 2: Risks in modifying intermediate representations**
>
We carefully ensure such risks are under control by:
1. **Optimizing with drift control**. Our goal minimizes perturbation magnitude $|\epsilon|$ to avoid significantly modifying intermediate representations (Eq. (3)). We also modify only the necessary layers (Algorithm 1).
2. **Performance validation**. In addition to our **human evaluation** (Table 2), which confirms that LLMs modified by our method have good performance, we also implement **the weighted average of bi- and tri-gram entropies** per your suggestion. This quantity drops if the generated text is repetitive. As shown in Rebuttal Table 3, our method consistently performs the best in terms of this criterion.
**[Rebuttal Table 3] Comparing our embedding-level attacks with baselines in terms of the weighted average of bi- and tri-gram entropies [3, 7]**
| Model | Method | Entropy on Advbench | Entropy on StrongREJECT |
| --- | --- | --- | --- |
| LLaMA-2-7B-Chat | JRE | 14.65 | 13.34 |
| | RepE | 16.49 | 15.33 |
| | DP | 15.51 | 15.69 |
| | AutoDAN | 16.46 | 15.62 |
| | SCAV (Ours) | 16.97 | 16.24 |
| LLaMA-2-13B-Chat | JRE | 16.59 | 15.94 |
| | RepE | 16.54 | 15.81 |
| | DP | 14.58 | 15.25 |
| | AutoDAN | 13.03 | 12.41 |
| | SCAV (Ours) | 16.98 | 16.35 |
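For reference, a minimal stdlib implementation of this kind of n-gram entropy metric (the exact weighting used in [3, 7] may differ; the weights below are an illustrative choice):

```python
from collections import Counter
import math

def ngram_entropy(tokens, n):
    """Shannon entropy (bits) of the empirical n-gram distribution."""
    grams = Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    total = sum(grams.values())
    return -sum((c / total) * math.log2(c / total) for c in grams.values())

def weighted_bi_tri_entropy(text, weights=(2 / 3, 4 / 3)):
    """Weighted average of bi- and tri-gram entropies.
    Repetitive (degenerate) generations score low; the weights here are a
    hypothetical choice, not necessarily those of refs [3, 7]."""
    tokens = text.split()
    return sum(w * ngram_entropy(tokens, n)
               for w, n in zip(weights, (2, 3))) / sum(weights)
```

A degenerate generation such as a single repeated token has zero bi- and tri-gram entropy, while fluent varied text scores several bits.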
> **Weakness 3: Ethical considerations in human evaluation**
>
We take the following steps to adhere to ethical standards:
1. **Obtaining IRB-equivalent approval.** Our human annotation experiment was carried out through a vendor company that has a formal contract with our institution. We have carefully confirmed with our collaborator (level 3 program manager) in the vendor company, and she has investigated our annotation process and confirmed it has passed their ethical approval.
2. **Ensuring annotator comfort.** We reviewed the labeling content to ensure that there was no extremely harmful material, warned the annotators about the potential harm, and let them know that they could quit at any time if uncomfortable. We also conducted a follow-up with the annotators, and they all indicated that our experimental content had not caused them harm.
> **Question 1: Transferability to GPT-4**
>
The transferability of attacks observed in our experiments is also reported in multiple existing works [4,5,6], enhancing the credibility of our results. For example, it has been found that attacks learned in LLaMA-2 can be effectively applied to GPT-3.5, GPT-4, and Vicuna [4,5,6]. Currently, to the best of our knowledge, no existing works can clearly explain its reason. One hypothesis is that there is a form of cross-model generalizability potentially resulting from using a similar architecture (Transformer), loss function (e.g., next-token-prediction), and datasets (human-written text from the Internet), and we consider the verification of this hypothesis to be an interesting future work.
[1] Representation engineering: A top-down approach to ai transparency
[2] Open the Pandora's Box of LLMs: Jailbreaking LLMs through Representation Engineering
[3] A Unified Framework for Model Editing
[4] Universal and transferable adversarial attacks on aligned language models
[5] AutoDAN: Generating stealthy jailbreak prompts on aligned large language models
[6] AdvPrompter: Fast adaptive adversarial prompting for LLMs
[7] MASS-EDITING MEMORY IN A TRANSFORMER
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer xmcv,
Thank you for your thoughtful feedback on our submission. We kindly remind you to review our rebuttal and let us know if it adequately addresses your concerns. If you believe our explanations and revisions have satisfactorily resolved the issues, we would greatly appreciate it if you could reconsider your evaluation of our paper.
Thank you again for your time and for providing valuable guidance. If there are any further questions or suggestions, we are glad to discuss them with you.
Best regards,
Authors of paper 7016 | Summary: This paper introduces the Safety Concept Activation Vector (SCAV) framework, which guides attacks on LLMs by interpreting their embedding-level safety mechanisms. This work estimates the likelihood that an embedding is considered malicious by the LLM, utilizing a linear classifier based on Concept Activation Vector principles. The attack method generates both attack prompts and embedding-level attacks with automatically selected perturbation hyperparameters. The method improves attack success rates and response quality while requiring less training data. The experimental results show that it's possible to transfer attack prompts across different models, including black-box ones like GPT-4. The findings suggest that existing unlearn methods may not fully erase harmful knowledge, highlighting the persistent safety issues in LLMs.
Strengths: The paper's originality lies in its application of the SCAV framework to guide attacks on LLMs, offering a unique perspective on understanding and exploiting their safety mechanisms. The quality of the work is high, supported by comprehensive experiments that validate the effectiveness of the SCAV framework across multiple models and datasets.
Weaknesses: One significant weakness of the paper is its reliance on the assumption of linear separability, which may not consistently hold in the complex, high-dimensional spaces characteristic of LLM embeddings. The authors correctly cite many interpretability papers that also make the linear-interpretability assumption, but to me this assumption is generally unsound in the interpretability community. In Section 2.2 you empirically evaluate linear interpretability with LLaMA and Vicuna, but while the results show high test accuracy in later layers, the dynamics of how linear separability develops across layers could be explored in more depth. The intuition in Section 2.3.1 is an oversimplification of high-dimensional spaces. Each layer in a deep neural network applies a non-linear transformation to the data. While later layers might exhibit some linear separability due to the network's hierarchical feature learning, without a mathematical explanation it is not clear to me why non-linear transformations and complex concepts would be accurately captured linearly.
The regularization terms in the objective function ($\frac{\lambda_1}{2}\lVert w\rVert^2 + \frac{\lambda_2}{2}b^2$) help prevent overfitting, but their choice and tuning are not discussed in detail. Improper regularization can lead to either overfitting or underfitting, affecting the classifier's ability to generalize.
While the theoretical foundation for using Concept Activation Vectors is sound, the paper does not provide sufficient empirical evidence to validate this assumption across a diverse range of models and layers. This oversight raises questions about the general applicability and robustness of the SCAV framework. The lack of validation makes it difficult to ascertain whether the linear separability assumption can be reliably applied to different LLM architectures or whether it is specific to the models and datasets used in the experiments.
Furthermore, the decision to focus attacks on layers with high test accuracy appears somewhat arbitrary and lacks justification. The authors need to provide a deeper exploration of why these layers are chosen and how this choice impacts the effectiveness of the SCAV-guided attacks. A more detailed discussion on the selection criteria for hyperparameters, such as the thresholds $P_0$ and $P_1$, would significantly enhance the robustness of the methodology. The current approach leaves the impression that the layer selection and hyperparameter tuning are based on heuristics rather than principled optimization, which undermines the credibility of the proposed framework.
The experiments are predominantly focused on a narrow set of models (e.g., LLaMA-2-7B/13B-Chat) and specific datasets (e.g., Advenbench and StrongREJECT). This limited scope constrains the generalizability of the findings.
For example, the authors utilize GPT-4 for computing ASR-answer, ASR-usefulness, and language flaws in the experiments (L209), but not for Tables 1 and 2. They could also use Harmbench (Mazeika, Mantas, et al.) as another dataset. To convincingly demonstrate the efficacy and versatility of the SCAV framework, the authors should expand their evaluation to include a broader range of LLM architectures, including those with different training paradigms.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Could you provide more empirical evidence to support the linear separability assumption across different models and layers? This would help validate the applicability of the SCAV framework.
2. How do you determine the thresholds P0 and P1 for attacking specific layers?
3. Have you considered testing the SCAV framework on models with adversarial training or other defensive mechanisms?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Weakness 1 and Question 1: Linear interpretability assumption**
>
Thanks for explaining your concern in such a detailed and constructive way. Per your suggestion, we further justify the assumption by providing:
1. **Empirical evidence across models and layers**. As shown in Rebuttal Table 1, the test accuracy of linear classifier $P_m$ for 5 more LLMs consistently exceeds 0.9 in later layers, confirming the robustness of the assumption. Moreover, our attack method, built upon this assumption, has achieved an ASR-keyword of over 94% on 7 LLMs, demonstrating its general applicability. You can also see Figure 1 in Rebuttal PDF for details by layer.
2. **Further explanations**. A solid theoretical explanation for linear interpretability is a difficult open problem, not solved by the works that use the assumption (refs [20-23] in paper). Our hypothesis is that **two groups of embeddings tend to be linearly separable if they are learned to have sufficient inter-group distance and intra-group similarity.** For example, in LLaMA-2-7B-Chat, the median distance between embeddings of malicious and safe questions ($d_{ms}$) exceeds those within malicious ($d_{mm}$) or safe questions ($d_{ss}$) - 113.88 vs. 56.21 and 84.89, respectively. This disparity, learned so that malicious questions yield different outputs than safe ones, may form clear clusters that are easy to separate linearly.
**[Rebuttal Table 1] Minimal test accuracy of linear classifier** $P_m$ **across LLMs and layers**
| | Layers 0~10 | Layers 11~20 | Layers 21~31 | Layers 31~39 |
| --- | --- | --- | --- | --- |
| LLaMA-3-8B-Instruct | 0.43 | 0.95 | 0.97 | NA |
| Qwen-1.5-7B-Instruct | 0.56 | 0.71 | 0.98 | NA |
| Mistral-7B-Instruct | 0.50 | 0.82 | 0.90 | NA |
| Deepseek-v2-Lite-Chat | 0.55 | 0.94 | 0.94 | NA |
| ChatGLM-4-9B-Chat | 0.62 | 0.78 | 0.98 | 0.99 |
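Separation statistics of the kind reported in Rebuttal Table 1 can be computed from stored embeddings with a few lines of numpy (a sketch with my own helper names; synthetic clusters stand in for the real layer activations in the check below):

```python
import numpy as np

def group_distances(emb_a, emb_b):
    """Pairwise Euclidean distances between rows of emb_a and emb_b."""
    diff = emb_a[:, None, :] - emb_b[None, :, :]
    return np.linalg.norm(diff, axis=-1)

def separation_stats(emb_mal, emb_safe):
    """Min/max/mean/median of d_mm, d_ss, d_ms, as in Rebuttal Table 1."""
    stats = {}
    for name, d in [("d_mm", group_distances(emb_mal, emb_mal)),
                    ("d_ss", group_distances(emb_safe, emb_safe)),
                    ("d_ms", group_distances(emb_mal, emb_safe))]:
        if name != "d_ms":  # drop the zero self-distances on the diagonal
            d = d[~np.eye(len(d), dtype=bool)]
        stats[name] = (d.min(), d.max(), d.mean(), np.median(d))
    return stats
```

When the inter-group mean exceeds both intra-group means, as in the table, the linear-separability hypothesis above is plausible.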
> **Weakness 2**: **Choices of regularization terms**
>
The current regularization follows the default setting in sklearn and consistently performs well across models and datasets. We also tested different regularization terms. **Rebuttal Table 2 shows that using L2 regularization is critical, while the results are not sensitive to its coefficient.** We will include this analysis in the revised paper.
**[Rebuttal Table 2] ASR-keyword (%) w.r.t different regularization terms (Advbench, LLaMA-2-7B-Chat)**
| $\lambda (\lambda_1=\lambda_2)$ | L1 | L2 |
| --- | --- | --- |
| 0.5 | 0 | 100 |
| 1 | 0 | 100 |
| 2 | 0 | 98 |
| 3 | 0 | 100 |
> **Weakness 3: Layers to attack**
>
We attack only layers with an accurate $P_m$ because our attacks are guided by $P_m$ (Eq. (3)) and **inaccurate $P_m$ leads to ineffective attacks**. This is consistent with empirical results: perturbing a layer whose test accuracy is lower than 90% results in a low attack success rate (ASR-k from 0 to 6%), while perturbing other layers leads to a consistently better result (41.1$\pm$26.2%).
> **Weakness 4 and Question 2: Determining $P_0$ and $P_1$**
>
Our method works reasonably well for varying $P_0$ and $P_1$, as shown in Rebuttal Table 3. **Users can easily determine $P_0$ and $P_1$** by using the default values (0.01% and 90%) that work well for all 7 LLMs we tested, or slightly lower $P_0$ if they wish to increase ASR. An optimal $P_1$ can be found by plotting $P_1$ across layers and choosing the elbow point, but the default value already works well in practice.
**[Rebuttal Table 3] ASR-keyword (%) w.r.t. varying** $P_0$ **and** $P_1$ **(Advbench, LLaMA-2-7B-Chat).**
| $P_0 \backslash P_1$ | 0.85 | 0.90 | 0.95 |
| --- | --- | --- | --- |
| 1e-3 | 98 | 96 | 98 |
| 1e-4 | 100 | 100 | 100 |
| 1e-5 | 100 | 100 | 100 |
> **Weakness 5: More datasets and diverse LLMs**
>
Beyond the 2 datasets and 8 LLMs already tested (Table 5), we have extended our experiments to Harmbench and 3 more LLMs with different training paradigms. Rebuttal Tables 4 and 5 show the effectiveness of our method. We also wish to highlight that three reviewers describe our original experiments as "comprehensive" or "thorough."
**[Rebuttal Table 4] Results on Harmbench**
| | ASR-keyword (%) | ASR-answer (%) | ASR-useful (%) | Language Flaws (%) |
| --- | --- | --- | --- | --- |
| LLaMA-2-7b-Chat | 99.5 | 97.5 | 90 | 20 |
| LLaMA-2-13b-Chat | 98.75 | 95 | 87.5 | 13.75 |
**[Rebuttal Table 5] Results on 3 more LLMs on Advbench**
| | ASR-keyword (%) | ASR-answer (%) | ASR-useful (%) | Language Flaws (%) |
| --- | --- | --- | --- | --- |
| ChatGLM4-9b | 94 | 86 | 82 | 18 |
| Deepseek-v2-lite-chat | 100 | 96 | 86 | 6 |
| Gemma-1.1-7B-it | 100 | 90 | 86 | 14 |
> **Question 3: Applying defensive mechanisms like adversarial training**
>
Per your suggestion, we attack models with adversarial training (Rebuttal Table 6) and other defense mechanisms (Rebuttal Table 7). The unlearn method in Sec. 4.2 can also be regarded as a defense method. Results show that **all 6 tested defense methods cannot well mitigate our attacks**.
**[Rebuttal Table 6] Attacking LLMs with adversarial training [1] on Advbench**
| | Methods | ASR-keyword (%) | ASR-answer (%) | ASR-useful (%) | Language Flaws (%) |
| --- | --- | --- | --- | --- | --- |
| LLaMA-3-8B-Instruct-RR | W/o Attack | 0 | 0 | 0 | 96 |
| | SCAV-embed | 98 | 88 | 74 | 16 |
| Mistral-7B-Instruct-RR | W/o Attack | 2 | 0 | 0 | 98 |
| | SCAV-embed | 94 | 84 | 70 | 20 |
**[Rebuttal Table 7] Attacking LLMs with defense methods (ASR-keyword, %)**
| Defense Methods | Advbench | StrongREJECT |
| --- | --- | --- |
| Self-Reminder [2] | 92 | 100 |
| ICD [3] | 98 | 96 |
| Paraphrasing [4] | 98 | 98 |
| PPL [5] | 100 | 100 |
| W/o Defense | 100 | 100 |
[1] Improving Alignment and Robustness with Short Circuiting.
[2] Defending chatgpt against jailbreak attack via self-reminders.
[3] Jailbreak and guard aligned language models with only few in-context demonstrations.
[4] Baseline defenses for adversarial attacks against aligned language models.
[5] Detecting language model attacks with perplexity.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer mJhn,
Thank you for your thoughtful feedback on our submission. We kindly remind you to review our rebuttal and let us know if it adequately addresses your concerns. If you believe our explanations and revisions have satisfactorily resolved the issues, we would greatly appreciate it if you could reconsider your evaluation of our paper.
Thank you again for your time and for providing valuable guidance. If there are any further questions or suggestions, we are glad to discuss them with you.
Best regards,
Authors of paper 7016 | Summary: This paper proposes a jailbreak attack (SCAV) inspired by the Concept Activation Vector (CAV) on neural networks. For safety concept, this concept vector is essentially defined as a direction orthogonal to the decision boundary of a linear classifier trained to distinguish safe and harmful instruction on embeddings at a given layer. SCAV works by perturbing the embedding in the direction of this concept vector (towards malicious and away from safe instructions).
Under the assumption that the concept "score" is linear, perturbing in the concept-vector direction is optimal. However, for token-level SCAV the same technique does not apply, so the authors devise an optimization objective inspired by the embedding-level approach and use AutoDAN's genetic algorithm to optimize it. Empirical results are convincing at both levels and show some good transferability to GPT-4.
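Under this linearity assumption, the minimal-norm embedding perturbation that moves a linear probe's predicted malicious probability to a target value has a closed form along the concept vector: $\epsilon=\frac{\mathrm{logit}(p)-(w\cdot e+b)}{\lVert w\rVert^{2}}\,w$. A hedged numpy sketch (my own notation; a single-layer simplification of the paper's multi-layer Eq. (3) objective):

```python
import numpy as np

def min_perturbation(e, w, b, p_target):
    """Smallest-norm eps such that sigmoid(w @ (e + eps) + b) == p_target
    for a linear probe (w, b). The optimum lies along w, i.e. along the
    concept activation vector."""
    logit = np.log(p_target / (1.0 - p_target))
    return (logit - (w @ e + b)) / (w @ w) * w

# Toy check: the perturbed embedding hits the target probability.
w = np.array([1.0, -2.0, 0.5])
b = 0.3
e = np.array([0.2, 0.1, -0.4])
eps = min_perturbation(e, w, b, p_target=0.01)
p = 1.0 / (1.0 + np.exp(-(w @ (e + eps) + b)))
```

Because the constraint is one linear equation on $e+\epsilon$, any component of $\epsilon$ orthogonal to $w$ only adds norm without changing the probe output, which is why the optimal perturbation is parallel to the concept vector.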
Strengths: ### Originality
There are several works like JRE and RepE that attempt to mount a jailbreak attack at the embedding levels using an interpretability technique. However, this work fixes an obvious problem of these methods with a simple technique. It is also well-motivated by a classical interpretability method (albeit with a very strong assumption).
### Quality
All the formulations in Section 2 are technically sound given the linearity assumption. Algorithm 1 also makes a lot of sense. The experiments are relatively thorough (see the Weaknesses section for some suggestions) with a good number of baselines, different target models, transferability results, and some ablation studies. Among the other sections, I find Section 2.3 particularly clear and convincing.
### Significance
Strong and efficient jailbreak attacks help safety evaluation. This is a significant and timely research problem in my opinion. I also like the fact that this attack algorithm seems to also help us learn about the concept activations in LLMs (such as their linearity). The technique itself can also apply to other use cases beyond safety (concept erasure or unlearning is one other use case explored briefly in this paper).
### Clarity
The paper is well-written and easy to follow in all of the sections. I like the paper structure as well as how all of the ideas and results are presented.
Weaknesses: ### 1. Choice of layer to compute SCAV objective
I would like to see an ablation study on the choice of layer being optimized. Perhaps, Figure 4(a) does cover a bit of this, but I have a few more questions.
1. Why is only the last layer optimized in the token-level attack (Eq. (5)), unlike the embedding-level attack, where the objective covers the prior layers? From Eq. (5), it is possible to optimize over $e_S^l$ while the objective is computed on any other layer $e_S^{k}$ where $k > l$. There seem to be some design choices here that were not explored. Presumably, if the objective depends on the last layer anyway, including earlier layers in the objective should not increase the overall computation.
2. What would happen if only the last layer is perturbed in the embedding-level attack? Is there a benefit from optimizing many layers, from an early one to the last?
3. For both attacks, I would like to see how ASR changes with the choice of layer, given that only one layer is perturbed.
### 2. Token-level SCAV objective
I’m curious why the authors decide to use the objective in Eq. (5) instead of the Lagrangian relaxation of the optimization problem in Eq. (3) into something like the following:
$$
\arg\min_{e^L_S}~ \lVert e^L_S - e^L \rVert + \lambda P_m(e^L_S)
$$
Is there an advantage to the product form instead of a weighted sum?
### 3. Missing baseline attacks
1. **Comparison to soft prompt attack for embedding-level SCAV.** While the authors already compare SCAV to JRE and RepE, I believe it is also important to compare to other soft prompt optimization attacks (such as https://arxiv.org/abs/2402.09063), as the concept is very similar. These soft prompts (which can be prefixes or suffixes) are also known to be very successful against open-source LLMs.
2. **Comparison to GCG in Table 3 & 4 for token-level SCAV.** Unless there is a very good reason, I believe that it is important to have GCG as a baseline for any token-level attack, even if AutoDAN claims to be better than GCG in some scenarios. GCG is still the most popular attack at this point so it allows for easy comparison / verification across different works.
Technical Quality: 3
Clarity: 4
Questions for Authors: **Q1:** For the embedding-level SCAV, when an embedding $e^l$ is mentioned (e.g., L115), does this refer to (1) embedding of one of the prompt tokens or (2) a newly added prefix (like soft prompt tuning) or suffix (like GCG but soft) token? I just want to make sure I understand the detail here, but it doesn’t seem clearly defined in Section 2.1.
**Q2:** This might be a follow-up question to Q1. I’m not very convinced by the statement on L121-122:
> The first term that minimizes $|\epsilon|$ ensures a small performance loss of LLMs, avoiding flaws such as repetitive or irrelevant responses.
>
unless the optimized embedding $e^l$ is from one of the prompt tokens. So I assume that $e^l$ is the next-token embedding (i.e., from the last input token)?
**Q3:** Can the authors share details on how Figure 2 is created? Or is it more like a diagram?
**Q4:** What’s the difference between ASR-answer and ASR-useful?
**Q5:** In Table 7 (Section 4.2), is the SCAV attack here at the embedding or prompt level? Since the baselines are GCG and AutoDAN, I’m assuming that this is at the prompt level?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Limitations and negative societal impact have been adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your insightful comments and questions.
> **Weakness 1: Optimizing only the last layer in token-level attack**
>
We optimize only the last layer to ensure
1. **A fair comparison with baselines**: our baselines (e.g., [1]) use only information from the last layer so we follow their setting to ensure a fair comparison.
2. **Simplicity and effectiveness**: using only the last layer is 1) **simple** - it does not require balancing layers with varying embedding scales - and 2) **effective** - it is better than the baselines and achieves **comparable performance** to a method considering multiple layers, as shown in Rebuttal Table 1.
**[Rebuttal Table 1] Comparison with SCAV-m, which considers multiple layers by extending Eq. (5) to** $\arg\min_{S} \max_{l}\left[P_m(e_S^l)\,\lVert e_S^l - e^l\rVert\right]$ **s.t.** $\text{TestAcc}(P_m^l)>P_1$**, on Advbench**
| Model | Method | ASR-keyword (%) | ASR-answer (%) | ASR-useful (%) | Language Flaws (%) |
| --- | --- | --- | --- | --- | --- |
| LLaMA-2-7B-Chat | SCAV | 54 | 60 | **44** | **52** |
| | SCAV-m | **60** | 60 | 34 | 58 |
| LLaMA-2-13B-Chat | SCAV | **72** | 46 | 28 | 58 |
| | SCAV-m | 64 | **50** | **38** | **48** |
> **Weakness 2: What would happen when perturbing only the last layer**
>
Perturbing only the last layer **results in worse ASRs and more language flaws**, as shown in Rebuttal Table 2. We suspect this is because, at the last layer, the inference procedure has nearly finished, so forming a completely different answer is difficult.
**[Rebuttal Table 2] Comparison with SCAV-l that perturbs only the last layer**
| Dataset | Methods | ASR-keyword (%) | ASR-answer (%) | ASR-useful (%) | Language Flaws (%) |
| --- | --- | --- | --- | --- | --- |
| Advbench | SCAV | **100** | **96** | **92** | **2** |
| | SCAV-l | 18 | 4 | 2 | 82 |
| StrongREJECT | SCAV | **100** | **98** | **96** | **10** |
| | SCAV-l | 28 | 14 | 2 | 64 |
> **Weakness 3: Impact of each layer on ASR**
>
We provide the results in the rebuttal PDF attached to the global rebuttal. Our findings are:
1. For both levels, **ASR is poor on layers that are not linearly separable** (Figure 2 in the rebuttal PDF).
2. For **embedding-level** attacks, perturbing the last layer is less effective than perturbing some earlier layers (e.g., layers 13 to 23). Algorithm 1 tends to select these effective layers (Figure 2 in the PDF).
3. For **prompt-level** attacks, using middle or late layers has a comparable attack performance (Figure 3 in the PDF).
> **Weakness 4: Why product form**
>
We use the product form because it works sufficiently well (Table 3) **without introducing an additional hyperparameter $\lambda$** for balancing $||e^L_S-e^L||$ and $P_m$, which is required for the weighted sum expression. In the product form, the percentage of increase in $||e^L_S-e^L||$ is considered to be similarly important to the percentage of increase in $P_m$, without having to consider their difference in scales.
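The scale argument above can be illustrated with a toy sketch (illustrative only: the candidate values standing in for $\lVert e^L_S-e^L\rVert$ and $P_m$ are made up, not taken from the paper). Rescaling one term by a constant leaves the product form's minimizer unchanged, while the weighted sum's minimizer can flip unless $\lambda$ is re-tuned:

```python
# Toy demonstration: a product-form objective is invariant to rescaling
# either factor, whereas a weighted sum is sensitive to term scales.
# Candidate (dist, pm) values are hypothetical.

def product_obj(dist, pm):
    return dist * pm

def sum_obj(dist, pm, lam=1.0):
    return dist + lam * pm

candidates = {"A": (1.0, 4.0), "B": (2.0, 1.0)}

def argmin(obj, scale=1.0):
    # Rescale the distance term by `scale` to mimic a change in embedding scale.
    return min(candidates, key=lambda k: obj(candidates[k][0] * scale, candidates[k][1]))

# Product form: the minimizer is unchanged when one term is rescaled.
assert argmin(product_obj) == argmin(product_obj, scale=10.0) == "B"
# Weighted sum: the minimizer flips, so lambda would need re-tuning.
assert argmin(sum_obj) == "B" and argmin(sum_obj, scale=10.0) == "A"
```
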
> **Weakness 5: Comparison to soft prompt attack [2]**
>
As shown in Rebuttal Table 3, our embedding-level attack method outperforms the soft prompt attacks in terms of ASRs and language flaws.
**[Rebuttal Table 3] Comparing with soft prompt attacks**
| Evaluation Dataset | Models | Methods | ASR-keyword (%) | ASR-answer (%) | ASR-useful (%) | Language Flaws (%) |
| --- | --- | --- | --- | --- | --- | --- |
| Advbench | LLaMA-2 (7B-Chat) | SCAV | 100 | 96 | 92 | 2 |
| | | soft prompt | 56 | 50 | 40 | 62 |
| | LLaMA-2 (13B-Chat) | SCAV | 100 | 98 | 96 | 0 |
| | | soft prompt | 80 | 66 | 50 | 44 |
| StrongREJECT | LLaMA-2 (7B-Chat) | SCAV | 100 | 98 | 96 | 10 |
| | | soft prompt | 64 | 44 | 38 | 66 |
| | LLaMA-2 (13B-Chat) | SCAV | 100 | 100 | 98 | 2 |
| | | soft prompt | 74 | 28 | 28 | 68 |
> **Weakness 6: Comparison to GCG**
>
Results show that our prompt-level attack method consistently outperforms GCG on different datasets and tasks (direct prompt-level attack in Rebuttal Table 4 and transferability in Rebuttal Table 5 of the rebuttal PDF).
**[Rebuttal Table 4] Comparing with GCG**
| Evaluation Dataset | Models | Methods | ASR-keyword (%) | ASR-answer (%) | ASR-useful (%) | Language Flaws (%) |
| --- | --- | --- | --- | --- | --- | --- |
| Advbench | LLaMA-2 (7B-Chat) | SCAV-prompt | 54 | 60 | 44 | 52 |
| | | GCG | 28 | 32 | 10 | 76 |
| | LLaMA-2 (13B-Chat) | SCAV-prompt | 72 | 46 | 28 | 58 |
| | | GCG | 40 | 24 | 10 | 74 |
| StrongREJECT | LLaMA-2 (7B-Chat) | SCAV-prompt | 60 | 46 | 40 | 44 |
| | | GCG | 26 | 26 | 16 | 72 |
| | LLaMA-2 (13B-Chat) | SCAV-prompt | 54 | 48 | 46 | 42 |
| | | GCG | 34 | 18 | 16 | 80 |
> **Questions 1 and 2: What does $e^l$ refer to?**
>
$e^l$ is the embedding of the last input token. Specifically, given an input instruction with $N$ tokens, we first attack the embedding of the $N$-th token to generate the first token of the answer. We then iterate this process to generate a full answer.
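A minimal sketch of this iterative loop, where `embed`, `perturb`, and `decode_next` are hypothetical toy stand-ins for the real last-token embedding, SCAV-style perturbation, and decoding steps:

```python
# Hypothetical stubs; the real pipeline would call the LLM here.
def embed(tokens):
    # Stand-in for the last-token embedding at some layer.
    return float(len(tokens))

def perturb(e):
    # Stand-in for the SCAV perturbation of the embedding.
    return e + 0.5

def decode_next(e):
    # Stand-in for decoding the next token from the perturbed embedding.
    return f"tok{int(e)}"

def generate(prompt_tokens, n_new=3):
    tokens = list(prompt_tokens)
    answer = []
    for _ in range(n_new):
        e = embed(tokens)              # embedding of the current last token
        tok = decode_next(perturb(e))  # attack, then emit one answer token
        answer.append(tok)
        tokens.append(tok)             # iterate with the generated token appended
    return answer

out = generate(["a", "b"])  # -> ["tok2", "tok3", "tok4"] with these toy stubs
```
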
> **Question 3: How is Figure 2 created**
>
Figure 2 is generated by 1) generating two-dimensional pseudo-embedding data points and 2) running the attacking methods (ours, RepE, and JRE) to obtain the perturbation vector.
> **Question 4: Difference between ASR-answer and ASR-useful**
>
ASR-useful is a stricter criterion than ASR-answer. **ASR-answer** mainly evaluates whether the model answers or refuses to answer the question, while **ASR-useful** evaluates whether the model's answer is truly useful to attackers. For details, please refer to Appendices A.3.2 and F.
> **Question 5: Is SCAV in Table 7 embedding or prompt level?**
>
It is embedding-level SCAV. The prompt-level baselines are added following [3].
[1] AutoDAN: Interpretable Gradient-Based Adversarial Attacks on Large Language Models
[2] Soft Prompt Threats: Attacking Safety Alignment and Unlearning in Open-Source LLMs through the Embedding Space
[3] Eraser: Jailbreaking Defense in Large Language Models via Unlearning Harmful Knowledge
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer CRgs,
Thank you for your thoughtful feedback on our submission. We kindly remind you to review our rebuttal and let us know if it adequately addresses your concerns. If you believe our explanations and revisions have satisfactorily resolved the issues, we would greatly appreciate it if you could reconsider your evaluation of our paper.
Thank you again for your time and for providing valuable guidance. If there are any further questions or suggestions, we are glad to discuss them with you.
Best regards,
Authors of paper 7016 | Summary: This paper introduces a new framework named Safety Concept Activation Vector (SCAV) for jailbreaking LLMs. This framework is built upon LLM interpretability work. Specifically, the paper utilizes an interpretability approach called the Concept Activation Vector, which can linearly separate safe and unsafe instructions in the latent representation space. Given this linear separability, this paper proposes that a jailbreak attack can be achieved if one can perturb the latent feature of an unsafe prompt into the feature subspace of a safe prompt. Following this idea, the authors show the feasibility of such jailbreak attacks in both the feature space and the prompt space.
Strengths: 1. The paper is well-written and well-motivated. The presentation is also clean and comprehensive.
2. The success rate of the attack is good, showing good improvement over existing baselines. The improved transferability to attack GPT-4 is also an advantage of the attack.
3. The paper also shows that existing unlearn methods fail to defend against the proposed attack, suggesting that unlearning may not be an effective solution to stronger jailbreaking attacks.
Weaknesses: 1. The idea of using an interpretability approach to assist jailbreak attacks is not new and has already been explored by [1].
2. It would be better if the authors could also discuss why the proposed attack can be more effective than previous ones and what are possible ways to mitigate the attacks.
[1] Zou, Andy, et al. "Representation engineering: A top-down approach to ai transparency." arXiv preprint arXiv:2310.01405 (2023).
Technical Quality: 3
Clarity: 3
Questions for Authors: Can the authors clarify more on how the prompt space attack is implemented? In the paper, the authors only mentioned the use of AutoDAN’s hierarchical genetic algorithm to solve the optimization. It would be better if more details could be clarified.
For the representation space attack, the adversaries can directly interfere with the internal components of the model. Would this be closer to the threat model of fine-tuning attacks presented in [1]? In that case, it seems that fine-tuning attacks are easier and more effective. It would be good if the authors could discuss the connection.
[1] Qi, Xiangyu, et al. "Fine-tuning aligned language models compromises safety, even when users do not intend to!." arXiv preprint arXiv:2310.03693 (2023).
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No major limitations were found.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks a lot for your insightful and constructive comments. We really appreciate the efforts that you have made in helping improve our paper.
> **Weakness 1: Interpretability used in attacks by [1].**
>
We acknowledge that previous works like [1] have explored the use of interpretability to assist attacks. In contrast, our work distinctively focuses on developing a **more accurate and principled interpretation method**. Unlike existing approaches, our method:
1. Eliminates misleading heuristics (L128-140) and thus performs significantly better than existing techniques [1,2] (Tables 1 and 2), particularly under conditions of limited training data (Figure 3).
2. Is the first to support both embedding-level and prompt-level attacks, enhancing its applicability.
3. Removes the necessity for time-consuming hyperparameter tuning.
We will update our abstract and introduction to better highlight these distinctions.
> **Weakness 2: Need to discuss why the proposed attack is more effective**
>
The reasons our method is more effective are introduced in the method section (L128-140, L158-165), and we will add further discussion to the experiments.
1. For embedding-level attacks, our method is more effective because we **remove potentially misleading heuristics** of existing methods (L128-140). While existing works **assume** that some heuristically extracted direction (e.g., the main component of PCA) works well for perturbing the embedding without theoretical justification, we **learn** the malicious probability of each embedding from data to provide an accurate perturbation.
2. For prompt-level attacks, our method **can more accurately estimate the attack success rate** because, unlike the existing works, we do not rely on manually defined target responses, which are often different from the real model responses (more details in L158-165).
> **Weakness 3: How to mitigate the attacks**
>
Mitigating attacks is important and we will add the following discussions in the revised version:
1. **Applying existing defense methods cannot effectively mitigate our attacks**. As shown in Rebuttal Table 1, after applying existing defense methods [3,4,5,6], our attack method still achieves a high attack success rate (ASR) on two datasets according to the widely used criterion ASR-keyword (%). Here, the victim LLM is LLaMA-2 (7B-Chat). This indicates that our work identifies an inherent safety vulnerability of LLMs. Moreover, the unlearning method in Section 4.2 can also be regarded as a defense method, and it also fails to mitigate our attacks (Table 7). We also attack models with the adversarial training proposed by [8] (Rebuttal Table 2).
**[Rebuttal Table 1] The attack success rate** **(ASR-Keyword in %) after applying defense methods**
| Defense Method | On Advbench | On StrongREJECT |
| --- | --- | --- |
| ICD [3] | 98 | 96 |
| Self-Reminder [4] | 92 | 100 |
| Paraphrasing [5] | 98 | 98 |
| PPL [6] | 100 | 100 |
| W/o Defense | 100 | 100 |
**[Rebuttal Table 2] Attacking LLMs with adversarial training [8] on Advbench**
| Models | ASR-keyword (%) | ASR-answer (%) | ASR-useful (%) | Language Flaws (%) |
| --- | --- | --- | --- | --- |
| LLaMA-3-8B-Instruct-RR | 98 | 88 | 74 | 16 |
| Mistral-7B-Instruct-RR | 94 | 84 | 70 | 20 |
2. **Suggesting other ways for mitigating the attacks**. One potential way to mitigate the attacks is to distinguish perturbed embeddings from original ones, based on the assumption that some perturbed embeddings might be different from original embeddings. We are eager to explore whether such an assumption is valid in the future.
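As a toy illustration of this suggested mitigation (purely hypothetical: synthetic 2-D Gaussians stand in for real LLM embeddings, and the perturbation is modeled as a fixed shift), a simple linear probe can separate original from perturbed embeddings when the perturbation direction is consistent:

```python
import numpy as np

# Synthetic stand-ins for embeddings: "original" points vs. points
# perturbed by a fixed shift along one direction (a hypothetical model
# of a consistent attack perturbation).
rng = np.random.default_rng(0)
orig = rng.normal(0.0, 1.0, size=(200, 2))
pert = orig + np.array([3.0, 0.0])

X = np.vstack([orig, pert])
y = np.array([0] * 200 + [1] * 200)

# Plain logistic-regression probe trained by gradient descent
# (no external ML library needed for this sketch).
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    g = p - y
    w -= 0.1 * (X.T @ g) / len(y)
    b -= 0.1 * g.mean()

# With a consistent perturbation direction, the probe separates the
# two populations well above chance.
acc = (((X @ w + b) > 0) == y).mean()
```

Whether real perturbed embeddings are this separable from original ones is exactly the open assumption noted above.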
> **Question 1: Implementation detail of prompt space attack**
>
Per your suggestion, we will add the following details to ensure that our paper is self-contained:
“Specifically, the hierarchical genetic algorithm is tailored for structured prompt text. It views the jailbreak prompt as a combination of paragraph-level population and sentence-level population. At each search iteration, it first optimizes the sentence-level population by evaluating and updating word choices within sentences. Then, it integrates these optimized sentences into the paragraph-level population and performs genetic operations to refine sentence combinations, ensuring comprehensive search and improvement across both levels with high jailbreak performance and readability.”
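A heavily simplified sketch of such a two-level search is below. The word pool and the fitness function are toy placeholders, not the paper's actual jailbreak objective; only the hierarchical structure (sentence-level mutation, then paragraph-level recombination and selection) mirrors the description above.

```python
import random

WORD_POOL = ["please", "kindly", "now", "carefully", "briefly"]  # toy pool

def fitness(paragraph):
    # Toy objective: count a target word (placeholder for a real jailbreak score).
    return sum(s.count("carefully") for s in paragraph)

def mutate_sentence(sentence):
    # Sentence level: swap one word for a candidate from the pool.
    words = sentence.split()
    words[random.randrange(len(words))] = random.choice(WORD_POOL)
    return " ".join(words)

def crossover(p1, p2):
    # Paragraph level: recombine sentences from two parent paragraphs.
    return [random.choice(pair) for pair in zip(p1, p2)]

def search(population, generations=30, keep=4):
    for _ in range(generations):
        # 1) Optimize the sentence-level population (keep parents for elitism).
        population = [[mutate_sentence(s) for s in p] for p in population] + population
        # 2) Genetic operations on the paragraph-level population.
        population.append(crossover(random.choice(population), random.choice(population)))
        # 3) Select the fittest paragraphs.
        population = sorted(population, key=fitness, reverse=True)[:keep]
    return population[0]

random.seed(0)
seed = [["write a story now", "answer the question briefly"]] * 4
best = search(seed)
```

Because parents are retained at each generation, the best fitness is non-decreasing over the search.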
> **Question 2: Comparing with fine-tuning attacks [7]**
>
Per your suggestion, we compare with [7] and will add related results in the paper. As shown in the following tables, our SCAV embedding-level attack:
1. achieves a **higher attack success rate** (ASR-keyword) compared with [7] (Rebuttal Table 3)
2. requires **much less time and computational memory** when applied to LLaMA-2-7B-Chat, since it only requires inference (Rebuttal Table 4). The evaluation is conducted on Advbench.
**[Rebuttal Table 3] Comparison with fine-tuning attacks in terms of ASR-Keyword (%)**
| Victim LLM | Ours | fine-tuning attack |
| --- | --- | --- |
| LLaMA-2-7B-Chat | 99.2 | 95.6 |
| GPT-3.5-turbo | 95.7 | 85.0 |
**[Rebuttal Table 4] Comparison with fine-tuning attacks in terms of efficiency**
| | Ours | fine-tuning attacks |
| --- | --- | --- |
| Time cost | 30 s | 2 min |
| Computation memory | 15 G of one A100 GPU | 80 G of two A100 GPUs |
[1] Representation engineering: A top-down approach to ai transparency
[2] Open the Pandora's Box of LLMs: Jailbreaking LLMs through Representation Engineering
[3] Jailbreak and guard aligned language models with only few in-context demonstrations
[4] Defending chatgpt against jailbreak attack via self-reminders
[5] Baseline defenses for adversarial attacks against aligned language models
[6] Detecting language model attacks with perplexity
[7] Fine-tuning aligned language models compromises safety, even when users do not intend to!
[8] Improving Alignment and Robustness with Short Circuiting
---
Rebuttal Comment 1.1:
Title: Thanks Authors
Comment: I would like to thank the authors for their detailed responses. I believe this paper makes a good improvement to LLM jailbreak attacks, which could further encourage the future development of stronger defenses. Thus, I will keep my initial recommendation of weak accept.
Meanwhile, I noticed the authors repeatedly use the term "misleading heuristics" throughout the paper and the rebuttal. This term is very hand-wavy. I suggest the authors be more precise about what it means, which may make it easier for the audience to understand.
I also noticed that the authors reported higher ASR than fine-tuning attacks. Can the authors share their intuition on why an embedding-level attack would be even stronger than fine-tuning that directly modifies model weights?
---
Reply to Comment 1.1.1:
Title: Thanks for your comment!
Comment: Thanks for your comment!
**Clarifying the specific meaning of the term “misleading heuristic”**
Thanks for pointing out the potential confusion caused by using this term. We are glad to provide a clearer and more factual explanation of what this term means.
- We observed the linear separability in embeddings (Figure 1) and the relationship between jailbreak ASR and $P_m$ (Table 9), which guided the learning of the perturbation vector. In comparison, the extraction methods used by RepE and JRE are heuristic assumptions.
- We found that RepE and JRE sometimes cannot correctly extract the vector direction described by their heuristics (Figure 2). Moreover, in scenarios with few training samples, these two baselines achieve a lower ASR (Figure 3) for this reason.
Following your suggestion, we will provide more precise explanations in our paper, instead of just using this term to summarize.
**Intuition on why SCAV-embedding outperforms fine-tuning attack**
We are glad to share our intuition on why SCAV-embedding performs better than fine-tuning attacks with you.
- Fine-tuning directly modifies the weights of models, which cannot adaptively change their behavior towards specific malicious instructions. However, Algorithm 1 in our paper can ensure accurate and effective attacks by adaptively modifying the embeddings of each layer.
- LLMs obtain safety concepts through fine-tuning on a large corpus after pre-training. The small corpus used for fine-tuning attacks may not generalize well enough to bypass all the capabilities of the safety concepts. However, SCAV-embedding attacks directly modify the safety concepts in the embeddings, reversing the LLM's recognition of malicious instructions as safe ones and thus leading to a higher ASR.
Thank you for praising our work in the comments. We hope the above content can clarify your doubts. We are glad to continue to provide you with any details. | Rebuttal 1:
Rebuttal: Thanks to all the reviewers for the insightful comments and recognition of our work. We are encouraged that reviewers think our paper has the following strengths:
- Well-written and easy to follow (Reviewers 6xey, CRgs, xmcv)
- Comprehensive experiments (Reviewers 6xey, CRgs, xmcv)
- Superior performance (Reviewers 6xey, CRgs, mJhn, xmcv)
- Insightful (Reviewers 6xey, CRgs, xmcv)
- Fixes some problems of existing methods (Reviewers 6xey, CRgs)
Below we summarize the main concerns; afterwards, we address each reviewer's comments individually.
- **To Reviewer 6xey**: We greatly appreciate your suggestions, which have indeed made our paper more readable and accurate. We will clearly and precisely describe the advantages of the SCAV attack compared to baselines such as RepE and JRE. We will also provide a more comprehensive evaluation and experimental results to solidify our findings and highlight our significant advancements.
- **To Reviewer CRgs**: Thank you for your suggestions on several critical parts of our paper. We are pleased to discuss some of the key design decisions we made and will summarize these design ideas in the appendix to provide readers with more detailed information. Per your suggestions, we conducted additional comparative experiments, which confirmed the validity of our design choices. These experimental results have strengthened the solidity of our paper.
- **To Reviewer mJhn**: We appreciate your insightful comments. We fully understand your concerns about the key assumption of the linear separability of safety concepts. Following your advice, we repeated the experiments on a broader set of models and datasets, which still support our findings as stated in the paper. Additionally, we have elaborated on the benefits of using probabilities for perturbation and provided detailed explanations for choosing $P_0$ and $P_1$, which may address your concerns.
- **To Reviewer xmcv**: Thank you for your important suggestions, which have allowed us to enhance the focus on particularly interesting aspects of our work. We repeated the experiments you were skeptical about, expanded the evaluation metrics to include weighted bi- and tri-gram entropy, and provided more findings and explanations for the related results. We also appreciate your concern regarding the ethical compliance of the human involvement. We have obtained IRB-equivalent approval, which will be explicitly stated in the paper.
To provide you with some key experimental results more intuitively, we have included an additional PDF with this rebuttal. Once again, thank you for your time and your invaluable contribution to the community!
Pdf: /pdf/9e8d604b30145a9d9ade5e884f6a4610bc7f6d6c.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Inductive biases of multi-task learning and finetuning: multiple regimes of feature reuse | Accept (poster) | Summary: This work investigates the implicit bias of both multi-task learning (MTL) and fine tuning pre-trained models (PT+FT) in diagonal linear networks and shallow ReLU neural networks. The contributions are as follows:
1. The authors prove that both MTL and PT+FT bias the networks to *reuse features*. This holds for both diagonal linear networks and shallow ReLU networks (Corollary 1, Corollary 2 and Proposition 3).
2. The authors prove that during finetuning networks interpolate between a "lazy" and "rich" regime. (Proposition 4).
3. For PT+FT, it is observed empirically that the chosen features are a sparse subset of the features learned during pretraining.
4. The paper provides empirical evidence that, for ReLU networks, only PT+FT benefits from features that are correlated between the auxiliary and primary tasks.
5. Finally, they present a practical technique to improve PT+FT by weight rescaling after pretraining.
Strengths: I think the setting is interesting and very relevant to modern deep learning. This is also one of the few papers studying the inductive bias of multi-task networks. In particular:
- The paper offers a number of results with many experiments on toy models.
- The finding that MTL and PT+FT induce a regularizer that interpolates between $\ell_1$ and $\ell_2$ regularization is interesting and novel.
- The authors go beyond theory and show how their finding can be exploited in practical architectures through a simple weight rescaling technique.
Weaknesses: - Corollary 1 and its extension for ReLU neural networks in Appendix A.1 is already well known in the literature [2] (see discuss after Remark 2) and [3] (Equation 4). These and the references therein should at least be mentioned.
- The works of [1] and [2] are both closely related to the phenomenon uncovered here. In particular, the notion of "neuron sharing" [2] seems to correspond exactly to the "feature reuse" phenomenon described in this work.
- It's not clear to me why the paper focuses on diagonal linear networks at all. These are not used in practice and are only studied for theoretical reasons. Since the theoretical contributions seem to be corollaries of previous results, I think the paper would benefit from just focusing on shallow ReLU networks.
- The regularizer used during finetuning in Proposition 3 isn't well motivated. Minimizing the $\ell_2$ norm of the weight changes during fine-tuning is not weight decay and seems a bit ad hoc. It would be great to provide some discussion of why you chose to analyze this regularizer or whether this is done in practice.
- The notation is very loaded and makes the paper difficult to parse. For example $\vec{w}^{(1),aux}_{h}$.
- The figures are also very small and hard to read. It would be better to fill in all the whitespace between the different plots and include a legend for the different settings instead of a colorbar.
**Minor things**
- Lines 51 and 54 reuse the word "Finally".
- Line 82: Should the inputs to the model $x \in \mathbb{R}^{D}$?
- Line 113 paraeter --> parameter
- Line 135: Typo $\mathbb{R}^{d} \in \mathbb{R}$
- Line 251: yon --> on
- In Section 3 $D$ and $d$ are both used interchangeably for the input dimension of the networks.
[1] Collins, Liam, et al. "Provable multi-task representation learning by two-layer relu neural networks." arXiv preprint arXiv:2307.06887 (2023).
[2] Shenouda, Joseph, et al. "Variation Spaces for Multi-Output Neural Networks: Insights on Multi-Task Learning and Network Compression." arXiv preprint arXiv:2305.16534 (2023).
[3] Yang, Liu, et al. "A better way to decay: Proximal gradient training algorithms for neural nets." OPT 2022: Optimization for Machine Learning (NeurIPS 2022 Workshop). 2022.
Technical Quality: 3
Clarity: 1
Questions for Authors: - Where is the proof for Corollary 2?
- When finetuning are the weights of both layers trained or only the second layer weights?
- Where are the results from the experiments discussed in Section 3.3?
- In Fig 2a, what do the labels on the colorbars mean "# Active dims" and "Units"? Is this the number of features in the teacher model?
- Line 176 what do you mean by "overlapping dimensions"? You mean overlapping features?
- On line 178 what is LP in PT+FT(LP)?
Confidence: 3
Soundness: 3
Presentation: 1
Contribution: 2
Limitations: Limitations have been mentioned in the weaknesses section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your helpful review. Below we respond to your comments and questions. (Due to the character limit on the rebuttal, we focus here on the most important responses and have relegated additional responses to a comment.)
First, in response to the reviewer’s summary of the paper, we wanted to clarify a few of our key contributions. If you think we could highlight these contributions more clearly in the revised version of the paper, we would appreciate any suggestions.
> The authors prove that during finetuning networks interpolate between a "lazy" and "rich" regime. (Proposition 4).
Specifically, we show that this interpolation can be described in terms of a conserved quantity expressing a tradeoff between initialization dependence and sparsity, and that a network’s position on this tradeoff is tied to the scale of the weights following pretraining.
> For PT+FT it is observed empirically that the choosen features are a sparse subset of the features learned during pretraining.
The insight above allowed us to predict the existence of a "nested feature selection" regime, in which finetuning extracts a sparse subset of the features learned during pretraining (due to intermediate levels of both sparsity and initialization dependence). This regime is qualitatively different from the previously known lazy regime (which is initialization-dependent, but not sparse) and rich regime (which is sparse, but not initialization-dependent). Networks do not always exhibit this behavior, but they do in some cases, and can be induced to exhibit it via the weight rescaling technique we introduce.
**Responses to comments**
> - Corollary 1 and its extension for ReLU neural networks in Appendix A.1 is already well known in the literature [2] (see discuss after Remark 2) and [3] (Equation 4). These and the references therein should at least be mentioned.
> - The works of [1] and [2] are both very related to the phenomenon uncovered here. In particular the notion of "neuron sharing" [2] seems to exactly correspond to the "feature reuse" phenomemon described in this work.
Thank you for pointing us to these papers; they were interesting to read and are indeed highly relevant. We'll make sure to cite them when describing the MTL penalty and when describing the feature reuse bias (in particular, Theorem 9 in [2]). We will also discuss this line of work in the Related Work section. We note that in our original draft we did not claim that Corollary 1 was an original result (we cited reference [16], Dai et al.), but we appreciate the relevant references for the ReLU network case and agree they are important to cite.
> It's not clear to me why the paper focuses on linear diagonal networks at all. These are not used in practice and are only studied for theoretical reasons. Since the theoretical contributions seem to be corollaries of previous results I think the paper would benefit from just focusing on shallow ReLU networks.
We see our results on diagonal linear networks as complementary to our results on ReLU networks, as we are able to prove our theoretical claims with fewer assumptions in the diagonal linear case. In particular, the diagonal linear setting allows us to analytically derive the effects of implicit regularization due to gradient descent for PT+FT (in the ReLU case we had to assume an explicit regularizer, due to the difficulty of analyzing gradient descent dynamics in nonlinear networks). This helps clarify the mechanism underlying our key theoretical results -- the nested feature-selection regime and our “conservation law” -- and motivates our subsequent ReLU network analysis.
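As a rough illustration of why this setting is analytically convenient (our notation, not taken from the paper): in a diagonal linear network each input dimension passes through its own pair of scalar weights, so the network is equivalent to a linear predictor whose coefficients factor elementwise.

```python
import numpy as np

# Minimal sketch of a diagonal linear network: f(x) = sum_i w_i * v_i * x_i,
# i.e. the network computes a linear function with effective coefficients
# beta_i = w_i * v_i. Gradient descent on (w, v) induces an implicit
# regularization on beta that can be characterized in closed form.
d = 5
w = np.array([1.0, 0.5, 2.0, 0.0, 1.5])  # first-layer (diagonal) weights
v = np.array([2.0, 1.0, 0.5, 3.0, 0.0])  # second-layer weights

beta = w * v                 # effective linear predictor
x = np.arange(1.0, d + 1.0)  # an example input

# The network output equals the linear predictor applied to x.
assert np.isclose(beta @ x, np.sum(w * v * x))
```

Because the map from weights to predictor is this simple elementwise product, the effect of gradient descent on the effective predictor can be derived exactly, which is what makes the diagonal linear analysis a useful complement to the ReLU case.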
> The regularizer used during finetuning in Proposition 3 isn't well motivated. (...) It would be great to provide some discussion on why you choose to analyze this regularizer or whether this is done in practice.
We agree that this deserves discussion separately from the explicit regularization considered in the multi-task learning setup. We chose to consider this regularization penalty for two reasons. First, infinitesimal explicit regularization from initialization is equivalent to the implicit regularization induced by gradient descent in the case of shallow linear models [4 below]. While this is not true more generally, it is a useful heuristic to motivate theoretical analysis (which of course must then be checked against experiments). Assuming the explicit regularization heuristic allowed us to derive a number of nontrivial predictions that we would not otherwise have come up with, and which we confirmed hold true in empirical simulations of unregularized gradient descent. We also note that this explicit regularization penalty is sometimes studied in the context of continual learning [e.g. 5,6 below]. We will make sure to clarify the motivation for basing our ReLU theory on this regularization penalty in the revised manuscript.
> Where is the proof for Corollary 2?
This was an oversight on our part. We will add an explanation to the appendix detailing how this follows from Azulay et al. Specifically, we derive this result in two steps: first, we note that if after pretraining the network has the effective linear predictor $\beta^{aux}$, the first hidden layer has the weights $\sqrt{\beta^{aux}}$, where the square root is applied element-wise. Having set the readout initialization to $\gamma$, we then apply Theorem 4.1 in Azulay et al.
> Where are the results from the experiments discussed in Section 3.3?
Section 3.3 describes the setup for all the teacher-student experiments presented in section 4. We will clarify this in the revised manuscript.
4. Gunasekar et al. "Characterizing implicit bias in terms of optimization geometry." ICML (2018).
5. Lubana et al. "How do quadratic regularizers prevent catastrophic forgetting: The role of interpolation." CoLLAs (2022).
6. Evron et al. "Continual learning in linear classification on separable data." ICML (2023).
---
Rebuttal 2:
Title: Additional responses
Comment: Below are additional responses to your questions and suggestions that we did not have sufficient space for in the main rebuttal.
> - The notation is very loaded and makes the paper difficult to parse. For example $w^{(1),aux}_h$.
> - The figures are also very small and hard to read. It would be better to fill in all the whitespace between the different plots and include a legend for the different settings instead of a colorbar
Thank you for highlighting this. We’ll change the notation to denote hidden weights (i.e. $w^{(1)}$) by $w$ and readout weights (i.e. $w^{(2)}$) by $v$, in addition to a few other changes that aim to clarify notation a bit. We also appreciate the suggestion on the figures — we’ll make them bigger and agree that replacing some of the color bars by legends would be helpful.
> When finetuning are the weights of both layers trained or only the second layer weights?
> On line 178 what is LP in PT+FT(LP)?
Finetuning means that both layers are trained whereas LP stands for linear probing, meaning that only the readout weights are trained. Importantly, this means that finetuning is able to perform feature learning whereas linear probing cannot. We will add a sentence clarifying this to the manuscript.
> In Fig 2a, what do the labels on the colorbars mean "# Active dims" and "Units"? Is this the number of features in the teacher model?
Yes, exactly. Would “# Non-zero dims.” and “# Units” be clearer?
> Line 176 what do you mean by "overlapping dimensions"? You mean overlapping features?
Yes. We will change this to “overlapping non-zero dimensions (i.e. features)” (as we want to make clear that in this case, the different features are simply different dimensions of the input).
Again, thank you very much for your review!
---
Rebuttal 3:
Title: Thank you for the response
Comment: I thank the authors for their rebuttal, which has clarified many of my questions and addressed the concerns raised. While I still have some reservations about the presentation, I think the results are interesting and I trust the authors will make the necessary changes in the camera-ready, therefore I will raise my score to 5 (Borderline Accept).
Aside: I still think that while the results on diagonal linear networks are neat and clean they don't really seem necessary and they distract from some of the more interesting points in the paper. I would really suggest moving all of the diagonal linear net stuff like Corollary 1/2 and diagonal linear experiments to a section in the Appendix. I would use the extra space to make these figures bigger, provide more justification on some of the assumptions being used in Prop. 3 and move some of the experiments from the Appendix to the main body.
**Typos**
- Line 87 and 88: What is $\ell_1$ and $\ell_2$ norm of $f$ are these typos? It seems like you're taking an $\ell_2$ norm of a function which doesn't really make sense...
- Line 85 has a switch in the citation style.
---
Rebuttal 4:
Title: A Small Contrasting Opinion Re Linear Diagonal Networks
Comment: I want to note that I differ slightly from Reviewer 4YZg on the recommendation about moving linear diagonal network results to Appendix -- I think as the authors say, including the results gives an insight starting from strong theoretical backing with limited assumptions that they then show in increasingly realistic settings, albeit with less formalism. As a theorist, I value the range of models in which this insight is presented; it not only substantiates the authors' results from multiple epistemic perspectives but also is encouraging to theorists to see that the simple models we study corroborate insights derived via different tools (importantly, in different settings than in which the model was initially developed)! Including these results in the main body only strengthens the narrative, in my opinion.
I could imagine that for an empiricist reading this paper, the results in the extremely stylized model may not provide as much value. I encourage the authors to think about their target audience(s) in making the decision of whether or not to move the results to the appendix.
---
Rebuttal Comment 4.1:
Title: Seconding the opinion to keep diagonal network in main paper not Appendix
Comment: I will second the opinion of Reviewer 969G here. My initial impression was the same as Reviewer 4YZg's on the use of linear diagonal networks as the initial proof of concept. My main issue is that these are very contrived networks with specific properties that are not used at all in contemporary deep learning. However, the more I read about them the more I saw that they still act as a stepping stone in theoretical studies and are an excellent starting point for explainable deep learning re: feature reuse. After reading the entire paper several times I think the LDN section has a place in the main paper, not the appendix.
---
Rebuttal 5:
Comment: We are glad that our rebuttal addressed the reviewer's concerns and thank them for increasing their score.
We also appreciate their note on the diagonal linear network section (suggesting that it should be moved to the appendix), as well as the input on this question by reviewers 969G and FiYT (who suggest it is a useful part of the main article, for the reason that we provide in the rebuttal). We really appreciate everyone's input on this. We have decided to keep the diagonal linear network section in the main text. However, we will clarify in more detail the role of these networks in connecting the theory in our paper to existing lines of work in deep learning theory. Note that in response to a suggestion by reviewer Ry6c, we will move our definitions of the diagonal linear and ReLU networks to a dedicated section ("Theoretical setup") before describing our theoretical results. We will add to this section a detailed explanation of the role of diagonal linear networks in our paper.
We believe that this will make our paper more broadly accessible: for readers who are interested in these networks as a theoretical stepping stone, we present diagonal linear networks in an integrated manner with our other findings; for empirically minded readers, we clarify early on why we focus on these networks and make it as easy as possible for them to focus their attention on ReLU networks and large-scale neural networks (if they wish to do so). We will also make sure to provide more justification on the assumptions used in Prop. 3 and make our figures bigger (using the extra page allotted to the camera-ready version).
We want to thank all the reviewers for providing their input on this question, as this was a very helpful discussion.
**Response to typos**
> Line 87 and 88: What is $\ell_1$ and $\ell_2$ norm of $f$ are these typos? It seems like you're taking an $\ell_2$ norm of a function which doesn't really make sense...
Thank you for pointing this out. Our intention in these lines was to define the $\ell_1$/$\ell_2$-norm of these functions in terms of the $\ell_1$/$\ell_2$-norm of their linear coefficients (as we consider linear functions). However, we agree that this is confusing and will replace $\|f\|_{\ell_1}$/$\|f\|_{\ell_2}$ by $\|\beta\|_1$/$\|\beta\|_2$.
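Concretely, the intended definitions (for the linear functions considered here) amount to:

```latex
% For a linear function f(x) = \beta^\top x, the "function norms" in
% lines 87-88 are defined via the coefficient vector:
\|f\|_{\ell_1} := \|\beta\|_1,
\qquad
\|f\|_{\ell_2} := \|\beta\|_2 .
```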
> Line 85 has a switch in the citation style.
Thanks for pointing this out.
Once again, thank you very much for your response as well as your original review.
---
Summary: Abstract
The goals of the paper are very clear from the abstract, as are the results.
Introduction
Lines 23-42: The authors do a great job of summarizing the applications of MTL (using it here loosely to capture both MTL and PT+FT) while recognizing that very little work has been done to explore exactly why it works as well as it does in data-limited settings. We have strong intuitions about its regularization effects, with minimal work backing those intuitions.
Related Work
Lines 60-68: Noting the extent of prior work analyzing regularization effects in single-task modeling is helpful in showing the impact of the authors' work in adding similar analysis for MTL.
Lines 85-89: I would do a better job of defining “large initialization” vs “small initialization” for the reader without requiring a reference check by the reader. One could assume that large = overparameterized and small = underparameterized versus large vs small magnitudes for the same number of parameters at initialization.
Implicit and explicit regularization penalties for MTL and PT+FT
Lines 94-102: I think most readers would take the authors at their heuristic given what they note in Lines 101-102, that explicit weight decay use in practice yields a permissible heuristic. However, the references, especially to [12] enhance their argument.
Lines 103-111: The notation in Corollary 1 is a bit confusing. I am not sure why beta vector aux has the aux not in subscript when defining vector beta 2. I also do not know why we introduce vector beta 2 other than just showing that we are indexing the outputs since we continue to stick with vector beta aux. I see that in 3 we now subscript the dimension but I think the notation can be more clean.
Line 112: Typo: “A ReLU networks”
Lines 112-115: I have not seen a ReLU network defined this way and from the authors’ text the notation is again confusing. The authors state they are defining a ReLU network with O outputs but (4) that follows defines only a single final output from the network. What follows then makes us think that the “outputs” are the units in a single hidden layer? The notation is unnecessarily confusing for a basic ReLU network and something more conventional would help especially in the context of MTL. Also, the L1,2 norm is referenced several times already but not defined. There is another typo on line 113 “paraeter”.
Lines 117-124: Why are the DLN network weights initialized with a constant magnitude?
Lines 128-129: We are not told how we get to (6), and no appendix section is referenced.
Line 131: Typo
Line 134: Herein we find more confusion because now (4) is referenced as an equation for a ReLU network with a single output, whereas this was not how it was defined in the preceding text before Equation (4). Also, weights within the set $\mathbb{R}^d$ within the set $\mathbb{R}$?
Line 135: Is vector gamma simply a uniform vector (constant magnitude) of the same scalar similar to how the DLN was defined or is the vector gamma different values at re-initialization (not uniform)?
Lines 137-139: It is not until Line 139 that theta and m are well defined despite numerous equations/references to them earlier in the manuscript.
Figure 2: I would define STL as single task learning for the less familiar reader
Lines 144-151: The experiment is explained well and easy to follow.
Lines 152-155: More explanation for the ReLU would help. By “sparse” number of (hidden?) units we mean a low dimensional hidden layer?
Lines 158-165: Everything is clear save how we arrived at c. Appendix reference?
Lines 158-182: I like the elegance of the intuition to simplify the experiments by having no feature overlap between the aux and main tasks in the small pretrained-feature case and full feature overlap in the large pretrained-feature case.
Lines 183-194: Again, elegance in defining the feature sharing to explicitly study what was of interest (simultaneous sparsity and feature sharing).
Lines 206-215: The definitions are clear and the intermediate regime is set up well by the authors.
Lines 223-250: Again, the elegance of the experiment set-ups to reinforce the nested feature selection regime.
Lines 256-257: I would like to hear more about this from the authors in the Conclusion. It seems like the authors are saying that there is some “critical” (used loosely here) rescaling magnitude between aux and main tasks in MTL over which the network is unable to enter the nested feature selection regime. I would have liked to see the authors stress-test this to see what they found re: network tolerance to different rescaling magnitudes. Hopefully this can be a future work.
Lines 262-264: I am not sure this is the case. MTL can certainly learn correlated features in shallow layers where one task’s use of the common features is simply rescaled later in the network relative to the features learned for an auxiliary task in isolation. Unless the authors are speaking more strictly of actual soft parameter sharing here, in which case, yes, separate parameters are being learned but kept similar. I think this is confirmed in the low-sample MTL results (where we know MTL works best when it can). It does not work as well as PT+FT but still beats STL, if only marginally.
Lines 312-315: Very interesting finding on a large dataset and more complex network.
Summary: The authors explore the inductive biases behind the success of MTL and PT+FT, a very under-studied phenomenon. Many interesting discoveries were made, most interestingly their newly named "nested feature selection regime". This is important and timely work. I would like some of the notation cleaned up for better reach to a wide audience.
Strengths: The theoretical justifications for the set-up of the experiments are excellent. The experiments that follow are also well-designed to allow the authors to probe their implicit regularization penalties. The authors kept probing until arriving at their nested feature regime. I liked the trajectory from theory to synthetic data/systems experiments to findings to larger/classic datasets and networks.
Weaknesses: I think some of the notation needs to be cleaned up to have this important work reach as broad an audience as possible. Much is left to the appendix; when it is, references are usually provided, but I would make sure they are always there for the reader to follow from the main body of the paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: Why are the DLN network weights initialized with a constant magnitude?
The authors mention how entrance into the nested feature regime is sensitive to the correlation between aux and main task features. Some more thoughts in the Conclusions about further theoretical thresholding of this or future experiments would be of interest to me as the authors seemed to have good theoretical foundation for all other experiments in the paper.
Beyond me being pedantic, is this entire paper working on multi-output learning not multi-task learning? Traditionally when we talk about MTL we have two tasks with a different loss function, different goals, different processes, different dimensionality of input features (image classification as task 1 and image object segmentation as task 2). From what I can tell, the auxiliary and main tasks in the synthetic experiments are different instantiations of the same process, with carefully crafted differences in features/coefficients/etc in order to generate different outputs, then described as different tasks. I would describe this as multi-output learning. All of the findings still hold of course, but I would make the distinction more clear.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: Thank you very much for your positive feedback and your helpful comments. We will make sure to make the notation more accessible and really appreciate your detailed notes on this. Below we respond to the questions and comments you raised in your review. (Due to the character limit, we focus on the most important responses in the rebuttal and have relegated additional responses to a comment.)
> Why are the DLN network weights initialized with a constant magnitude?
We wanted to remove the random variation induced by having randomly sampled readout weights, and to be consistent with Woodworth et al. who also considered constant-magnitude initialization. However, we would expect qualitatively similar results for randomly sampled weights from a Gaussian. Unlike for ReLU networks, random weights are not really necessary for symmetry breaking in diagonal linear networks, as each unit already receives input from a different dimension.
> The authors mention how entrance into the nested feature regime is sensitive to the correlation between aux and main task features. Some more thoughts in the Conclusions about further theoretical thresholding of this or future experiments would be of interest to me as the authors seemed to have good theoretical foundation for all other experiments in the paper.
We agree that the task dependence of where the nested feature selection regime falls is an interesting phenomenon. As the reviewer points out, we show a theoretical basis for such sensitivity in Fig. 3d. However, some of our empirical results are not completely described by the theory – empirically, even when auxiliary and main task features are identical, ReLU networks exhibit behavior that our theory would expect to arise in the case where they are highly correlated but not identical. We conjecture that this discrepancy arises due to noise in stochastic gradient descent dynamics (i.e. the inferred features on the two tasks may be highly correlated but not identical). Furthermore, we observe differences between ResNets and Vision Transformers in where the nested feature selection regime occurs. Our shallow network theory is not able to speak to these architectural differences. We think that clarifying this picture is an important direction for future work, and will add a sentence on this to the conclusion.
> Beyond me being pedantic, is this entire paper working on multi-output learning not multi-task learning? (...) From what I can tell, the auxiliary and main tasks in the synthetic experiments are different instantiations of the same process, with carefully crafted differences in features/coefficients/etc in order to generate different outputs, then described as different tasks. I would describe this as multi-output learning. All of the findings still hold of course, but I would make the distinction more clear.
We agree that in practice multi-task learning is much more versatile than the setup we consider here. We opted for such similar main and auxiliary tasks to remove additional factors of variation and to keep the teacher-student setup parsimonious, but will update the manuscript to acknowledge the much broader range of multi-task training used in practice. We expect that our findings are relevant to other kinds of multi-task learning, but have not yet established this, and think it is an important direction for future work.
> Lines 256-257: I would like to hear more about this from the authors in the Conclusion. It seems like the authors are saying that there is some “critical” (used loosely here) rescaling magnitude between aux and main tasks in MTL over which the network is unable to enter the nested feature selection regime. I would have liked to see the authors stress-test this to see what they found re: network tolerance to different rescaling magnitudes. Hopefully this can be a future work.
We agree that this is a really fascinating phenomenon. So far, we have found a (not fully rigorous) theoretical basis for differences in the rescaling magnitude required to enter the nested feature selection regime between diagonal linear networks and ReLU networks (see lines 245-250 for our explanation). We have also empirically found that rescaling by a lower value is required for ResNets to enter the nested feature selection regime, whereas this is not required for Vision Transformers (and indeed such rescaling appears to cause them to leave this regime). We don’t yet understand the reason for this difference, and we agree that a more comprehensive investigation of this phenomenon (across networks and tasks) would be very interesting future work. We will add a sentence on this topic to the conclusion.
> Lines 262-264: I am not sure this is the case. MTL can certainly learn correlated features in shallow layers where one task’s use of the common features are simply rescaled later in the network relative to the features learned for an auxiliary task in isolation. Unless the authors are speaking more strictly of actual soft parameter sharing here, in which case, yes, separate parameters are being learned but kept similar. I think this is confirmed in the low sample MTL results (where we know MTL works best when it can). It does not work as well as PT+FT but still beats STL. It is a marginal beat.
To clarify, while MTL can certainly learn correlated features using different hidden units, it does not benefit from such correlated units in terms of its overall norm (at least in a neural network with one hidden layer). Note that in the low-sample regime, MTL simply uses the same unit for both tasks, which is not entirely accurate but works reasonably well (see Fig. 8e, which shows that the student units have a correlation of 0.9 with the main task teacher, suggesting that they are identical to the units the student uses for the auxiliary task). We'll change this sentence to make that point clearer.
---
Rebuttal Comment 1.1:
Title: First Response
Comment: I appreciate the authors' follow-up responses. Many of my questions asked for clarity on the theoretical implications of the authors' work. Although not all of it is addressed in the work here, the authors have many ideas that they are adding to the Discussion and Conclusions sections to drive further work by them and others.
I think my point re: " I am not sure this is the case. MTL can certainly learn correlated features in shallow layers where one task’s use of the common features are simply rescaled later in the network relative to the features learned for an auxiliary task in isolation" came from the authors and myself referencing two different things. I was certainly referring to a NN with more than one layer whereas the authors were making their statement in the context of a NN with a single hidden layer. I do not think we have any disagreement in the more restricted case being described by the authors.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response, and thank you again for your helpful review. We agree that MTL could encode correlated features in a network with more than one hidden layer, which could be an interesting direction for future work.
---
Rebuttal 2:
Title: Additional responses
Comment: Below, we are providing some additional responses that we did not have space for in our rebuttal:
> Lines 85-89: I would do a better job of defining “large initialization” vs “small initialization” for the reader without requiring a reference check by the reader.
Thank you for pointing this out! We will change "large/small initialization" to "large/small initial weight magnitude".
> Lines 103-111: The notation in Corollary 1 is a bit confusing. I am not sure why beta vector aux has the aux not in subscript when defining vector beta 2. I also do not know why we introduce vector beta 2 other than just showing that we are indexing the outputs since we continue to stick with vector beta aux. I see that in 3 we now subscript the dimension but I think the notation can be more clean.
We agree that the notation is overly complicated and will clean this up. Specifically, we will now introduce the two output dimensions by the superscripts “aux” and “main” from the beginning.
> Lines 112-115: I have not seen a ReLU network defined this way and from the authors’ text the notation is again confusing. The authors state they are defining a ReLU network with O outputs but (4) that follows defines only a single final output from the network. What follows then makes us think that the “outputs” are the units in a single hidden layer? The notation is unnecessarily confusing for a basic ReLU network and something more conventional would help especially in the context of MTL. Also, the L1,2 norm is referenced several times already but not defined. There is another typo on line 113 “paraeter”.
> Lines 137-139: It is not until Line 139 that theta and m are well defined despite numerous equations/references to them earlier in the manuscript.
Thank you for highlighting this! We’ll make the notation here more conventional and will add a separate remark early on explaining the alternative parameterization by the magnitude $m$ and unit direction $\theta$ (as this parameterization is important for characterizing the penalties we derive). We’ll also make sure to properly define the L1,2 norm.
> Lines 128-129: We are not told how we get to (6) and not appendix section is referenced
This was an oversight on our part. We will add an explanation to the appendix detailing how this follows from Azulay et al. Specifically, we derive this result in two steps: first, we note that if after pretraining the network has the effective linear predictor $\beta^{aux}$, the first hidden layer has the weights $\sqrt{\beta^{aux}}$, where the square root is applied element-wise. Having set the readout initialization to $\gamma$, we then apply Theorem 4.1 in Azulay et al. Note that we use a slightly different functional form of $q$ to enhance readability.
> Line 134: Herein we find more confusion because now (4) is referenced as an equation for a ReLU network with a single output, whereas this was not how it was defined in the preceding text before Equation (4). Also, weights within the set of R d within the set of R?
To clarify, the single-output ReLU network is obtained by setting $O=1$ in Eq. 4. However, as noted above, we will change our explanation of this setup to make it clearer. The "$\in\mathbb{R}$" is a typo; thank you for pointing it out.
> Line 135: Is vector gamma simply a uniform vector (constant magnitude) of the same scalar similar to how the DLN was defined or is the vector gamma different values at re-initialization (not uniform)?
Our theorem applies to arbitrary vectors $\gamma$. In practice we use a randomly sampled readout with a variance of $10^{-3}\sqrt{2/H}$. We will add a sentence clarifying this point; thank you for pointing this out.
> Figure 2: I would define STL as single task learning for the less familiar reader
We agree.
> Lines 152-155: More explanation for the ReLU would help. By “sparse” number of (hidden?) units we mean a low dimensional hidden layer?
Yes, i.e. a low number of hidden units. We will clarify this --- thank you for pointing this out!
> Lines 158-165: Everything is clear save how we arrived at c. Appendix reference?
We should have clarified that this is a direct application of an analysis in Woodworth et al., and will add such a clarification.
> Lines 158-182: I like the elegance of the intuition to simplify the experiments but just having no feature overlap in aux and main tasks in small pretrained feature case and full feature overlap in aux and main in large pretrained feature case.
> Lines 183-194: Again, elegance in defining the feature sharing to explicitly study what was of interest (simultaneous sparsity and feature sharing).
> Lines 223-250: Again, the elegance of the experiment set-ups to reinforce the nested feature selection regime.
Thank you very much for these (and the other) positive comments! It’s useful for us to get this kind of feedback, as it helps us know when our explanations/experiments are clear.
Again, thank you very much for your helpful review!
---
Summary: In this study, the authors explore the inductive biases associated with multi-task learning (MTL) and the sequential process of pretraining followed by finetuning (PT+FT) in neural networks. Specifically, they analyze the implicit regularization effects in diagonal linear networks and single-hidden-layer ReLU networks under these training paradigms. The findings reveal that both MTL and PT+FT promote feature reuse across tasks and favor sparsity in the feature set, establishing a conservation law that illustrates a tradeoff between these biases. Additionally, a unique "nested feature selection" pattern in PT+FT is identified, where a sparse subset of features learned during pretraining is selectively refined. This behavior contrasts with the broader feature learning strategies seen in MTL. Empirical validation using teacher-student models and experiments on deep networks trained for image classification confirms the theoretical insights. The authors also propose a practical enhancement involving weight rescaling post-pretraining, which their results suggest can optimize finetuning by encouraging the network to engage more effectively in the nested feature selection regime.
Strengths: 1. The article provides an in-depth characterization of the inductive biases associated with two common training strategies—Multi-Task Learning (MTL) and Pretraining followed by Fine-Tuning (PT+FT)—in diagonal linear and ReLU networks. This detailed analysis is crucial for understanding the impacts of different training strategies.
2. By pushing networks into the nested feature-selection regime, the article proposes simple techniques to improve PT+FT performance, which have shown promising empirical results, adding practical application value.
Weaknesses: 1. Although the findings are promising, the article notes that more work is needed to test these phenomena in more complex tasks and larger models. This suggests that the research's applicability and universality might be limited.
2. The article outlines promising avenues for extending the theoretical work, such as connecting derived penalties for ReLU networks to the dynamics of gradient descent, and extending the theory to the case of cross-entropy loss. This implies that the current theoretical foundation still requires further development.
3. The expression of the article needs further improvement. It is suggested that the author should introduce the theoretical settings of analyzing MTL and PT+FT in separate sections in detail, rather than the current form scattered across various sections.
Technical Quality: 3
Clarity: 2
Questions for Authors: See in Weaknesses.
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your helpful review. Below we respond in detail to your comments.
> Although the findings are promising, the article notes that more work is needed to test these phenomena in more complex tasks and larger models. This suggests that the research's applicability and universality might be limited.
We are indeed quite excited about investigating these phenomena in more complex settings — we expect that such an investigation would reveal an even more nuanced picture of when and how an inductive bias towards nested feature sparsity, correlated features, and shared features aids generalization. We believe that such an investigation is best conducted in a new manuscript, whereas the main focus of this paper was to carefully derive and explain how these inductive biases arise in simpler networks. We’d also like to emphasize that we already provide an investigation of these empirical phenomena across several deep neural network architectures (ResNet, VGG, Vision Transformers) and standard benchmark datasets (ImageNet and CIFAR-10). As such, we believe that we are already providing useful evidence for the applicability of our insights, but we acknowledge that there is much more work to be done in future papers.
> The article outlines promising avenues for extending the theoretical work, such as connecting derived penalties for ReLU networks to the dynamics of gradient descent, and extending the theory to the case of cross-entropy loss. This implies that the current theoretical foundation still requires further development.
We agree that these are exciting avenues. Understanding how the penalties arising from explicit regularization differ from and connect to the penalties arising from the implicit regularization of gradient descent is technically challenging but an important direction. (We note that characterizing the impact of explicit regularization is also useful and important in its own right, and is a common strategy for understanding the inductive biases of networks; see e.g. Savarese et al. (2019) and Evron et al. (2022), as well as l.97-102 in our manuscript.) Similarly, we would be interested in extending our theory to the cross-entropy loss (as well as other losses) and expect the same principles of feature sparsity, sharing, and correlations to be important in that setting as well. Doing so would require introducing new teacher-student setups and would introduce some additional practical considerations (e.g. weights of cross-entropy networks grow without bound rather than converge), which we believe are best left for another paper. We felt it was better to focus this paper on analyzing the regression setting in depth, to provide a solid foundation for future extensions to our work.
> The expression of the article needs further improvement. It is suggested that the author should introduce the theoretical settings of analyzing MTL and PT+FT in separate sections in detail, rather than the current form scattered across various sections.
Thank you for this suggestion. We agree that describing the theoretical setup in a separate section could be a helpful change. As a result, we will make the following change to the camera-ready version if accepted: we will add a new section 3.1 (“Theoretical setup”), which defines diagonal linear networks and ReLU networks and defines multi-task learning and pretraining+finetuning. We will then introduce our theoretical results in dedicated sections. Currently, our sections 3.1 and 3.2 both introduce the network setups and describe theoretical results – these changes will have the effect of refactoring these sections to separate our description of the setup from our theoretical results. Could you clarify if this is the kind of change you had in mind? We are open to other ways of organizing the content.
**References**
Savarese, Pedro, et al. "How do infinite width bounded norm networks look in function space?." Conference on Learning Theory. PMLR, 2019.
Evron, Itay, et al. "How catastrophic can catastrophic forgetting be in linear regression?." Conference on Learning Theory. PMLR, 2022. | Summary: This paper studies the implicit bias of gradient descent on the linear diagonal model and two-layer ReLU networks but instead of looking at the standard single output regression / classification setting, the authors study multiple outputs. In particular, the dataset X is associated with labels y and another dataset X_aux is associated with labels y_aux, which comprise an auxiliary task. With these, they study the multitask setting and the pretraining and fine-tuning setting. First, they apply several pre-existing results on implicit regularization to derive the implicit regularizer in each of these cases. They then analyze the qualitative implications of these results and run experiments to validate those qualitative analyses.
Strengths: * I really like the style of the paper! Take a theoretical result, extend it to a new setting, understand its implications, and verify them! The implications are not only verified on simplified models but also in more realistic models.
* I view the audience of your paper as a much broader community than just the theory of deep learning community, as there are concrete and possibly algorithmic implications for your findings (i.e., it may be possible to design algorithms that actively incentivize certain kinds of feature learning in different settings via your results).
Weaknesses: A good paper overall. See some questions below.
Technical Quality: 4
Clarity: 3
Questions for Authors: * This paper seems somewhat related to the PT + FT setting. Mainly out of curiosity, as I think the settings are a little different, do you see any connections?
Evron, I., Moroshko, E., Buzaglo, G., Khriesh, M., Marjieh, B., Srebro, N. & Soudry, D.. (2023). Continual Learning in Linear Classification on Separable Data. Proceedings of the 40th International Conference on Machine Learning.
* These papers seem related to your feature sparsity and sharing experiments. Can you please discuss how they relate? In particular, do you see the emergence of a similar phenomenon to these works in the intermediate regime?
Lee, S., Goldt, S. & Saxe, A.. (2021). Continual Learning in the Teacher-Student Setup: Impact of Task Similarity. Proceedings of the 38th International Conference on Machine Learning.
Lee, S., Mannelli, S.S., Clopath, C., Goldt, S. & Saxe, A.. (2022). Maslow’s Hammer in Catastrophic Forgetting: Node Re-Use vs. Node Activation. Proceedings of the 39th International Conference on Machine Learning.
* While for someone familiar with the literature it is not so necessary, for accessibility to a more general audience, it would be helpful to get a little more discussion on what the settings and results are in [Dai et al], [Woodworth et al], [Azulay et al] and in particular why you need different results for the different settings. As mentioned above, your paper is one for a broad audience, one that is certainly much broader than just the implicit regularization community, who would be the ones most familiar with the subtleties in those above results.
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: Work is self-contained and aims to explain; the limitations of the theory in explaining certain cases are raised and addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your positive feedback and your helpful comments. Below we respond to your questions.
> This paper seems somewhat related to the PT + FT setting. Mainly out of curiosity, as I think the settings are a little different, do you see any connections?
We agree that this paper is related. This paper (which focuses on classification, though see Evron et al. (2022), which focuses on regression) also analyzes sequential learning, focusing on how learning of subsequent tasks affects forgetting of previous tasks (i.e. a continual learning perspective). In contrast, we focus (in our PT+FT analysis) on how learning of a previous task impacts generalization on the subsequent task. Further, the authors analyze a shallow network using only a linear readout, whereas we analyze networks with a hidden layer, enabling us to characterize the impact of feature learning (e.g. giving rise to feature sparsity and nested sparsity biases). Thus, these papers use complementary methods to study a related, but different problem. Future work could either analyze generalization on sequential tasks using the methods established in Evron et al., or use our analysis to study the continual learning setup. We find both of these directions quite promising and will add the paper to the Related Work section.
> These papers seem related to your feature sparsity and sharing experiments. Can you please discuss how they relate? In particular, do you see the emergence of a similar phenomenon to these works in the intermediate regime?
There are indeed a number of similarities between these papers and our own investigation. In particular, they also investigate a teacher-student setup (albeit with a different nonlinearity), using two different teachers with various overlaps. Notably, their focus (just like that of Evron et al.) is on forgetting, but they also investigate forward transfer, studying how well the networks can learn novel tasks with various overlaps. In particular, they find that these networks do not always reach optimal training error, which may be due to the online learning setup, the challenging learning landscape caused by the initialization induced by pretraining, and the fact that the student networks have about as many parameters as the teacher. In contrast, we study overparameterized student networks which are able to reach arbitrarily low training error and then investigate their generalization error. Nevertheless, these phenomena may be affected by similar mechanisms: in particular, Lee et al. (2021; 2022) also manipulate the alignment between the two teachers by varying the correlations of their hidden features and find maximal forgetting for intermediate correlations. While we focus on the generalization error on the finetuning task (rather than measuring forgetting on the pretraining task), we expect that in our experiments, the error on the original pretraining dataset after finetuning (i.e. a measure of forgetting) might exhibit a similar non-monotonicity with respect to correlations.
Notably, Lee et al. (2022) consider interleaved replay (similar to MTL in our setup) and find that it performs worse than finetuning (or regularized finetuning) for intermediate task similarity. This is also the case in our studies, where correlated teacher features (corresponding to intermediate task similarity) generally yield worse performance on MTL compared to PT+FT. Indeed, the intuition of Maslow's hammer may transfer to our setting: MTL networks try to re-use the same features on both tasks, which harms their generalization. In contrast, finetuned networks change their pretrained features (which yields forgetting in the setup of Lee et al.), yielding better generalization on the finetuning task.
All in all, these prior works on continual learning provide useful contextualization of our own work. We will add a paragraph on this to the related work section and will further note the similarity discussed in the previous paragraph in section 4.4. Thank you for bringing these papers to our attention!
> While for someone familiar with the literature it is not so necessary, for accessibility to a more general audience, it would be helpful to get a little more discussion on what the settings and results are in [Dai et al], [Woodworth et al], [Azulay et al] and in particular why you need different results for the different settings. As mentioned above, your paper is one for a broad audience, one that is certainly much broader than just the implicit regularization community, who would be the ones most familiar with the subtleties in those above results.
We agree with this point and will add a paragraph to the related work section on the contrast between implicit and explicit regularization, explaining why we are using a mixture of both. Roughly, our summary is: Woodworth et al. and Azulay et al. are able to characterize the implicit regularization induced by gradient descent for diagonal linear networks trained from arbitrary initialization. However, it is technically much more challenging to derive a similar result for multi-output diagonal linear networks, or for ReLU networks of any kind. It is generally easier to characterize the impact of explicit weight regularization, which Dai et al. do for multi-output diagonal linear networks. Our own contributions to the theoretical landscape are (1) spelling out the implications of existing results on implicit regularization in diagonal linear networks, when applied to the PT+FT setting, and (2) characterization of the penalty conferred by explicit regularization on finetuning ReLU networks from arbitrary initialization. Again, we take an explicit regularization perspective in the ReLU case (and validate experimentally that our results are a good description of implicit regularization dynamics), because this is technically much more tractable than characterizing the implicit regularization of ReLU networks.
---
Rebuttal Comment 1.1:
Title: Thank you!
Comment: Thanks for engaging with my suggestions. I hope you found the literature suggestions valuable. I appreciate the discussion you have provided here. To reiterate, I really like your paper!
---
Reply to Comment 1.1.1:
Comment: Thank you again for your helpful review, your positive assessment of our work, and your literature suggestions! | Rebuttal 1:
Rebuttal: We thank the reviewers for their helpful feedback and comments. In particular, the reviewers have suggested a number of relevant papers that we will add to the related work section. In addition, they have provided valuable feedback on clarity, which we will take into account in revising the manuscript (see specific responses to reviewers). We respond to the individual reviews below. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Renovating Names in Open-Vocabulary Segmentation Benchmarks | Accept (poster) | Summary: Open-vocabulary models use class names as text prompts to generalize to unseen categories during training. This paper specifically studies the issue of imprecise or even wrong class names in existing datasets. A simple and general framework is proposed for automatic dataset renaming: first, the whole name search space is narrowed down to a curated list of candidate names by combining the original class names with contextually relevant nouns from image captioning; then, a specially trained renaming model selects the best-matching candidate name for every ground-truth segmentation mask. To demonstrate the effectiveness, two types of practical applications are explored, i.e. using renovated names to train open-vocabulary models, and applying renovated names to improve the evaluation of open-vocabulary segmentation.
Strengths: [+] Due to the fact that class names are often directly used as text prompts, they have a significant impact on the performance of open-vocabulary tasks. Therefore, focusing on the noisy issue of category names is on the right path.
[+] Leveraging foundation models to automate the renaming process indeed reduces the manual labor.
[+] The paper is easy to follow and understand, having clear logic.
[+] Some experiments are conducted to demonstrate that renovate names help to train models with stronger open-vocabulary capabilities and refine evaluation benchmarks.
Weaknesses: [-] Noise type & noise rate. The authors partially visualize the noise issue of some class names, such as being inaccurate, too general, or lacking context, which is worth encouraging. But are these the only three types of noise in real-world scenarios? What are the noise rates for different datasets? And is the effectiveness of RENOVATE the same for different types/ratios of noise?
Thorough statistics are needed to clarify the performance bounds.
[-] How many ground-truth masks are needed when sorting candidate names using visual context? How to ensure consistency in visual alignment, as visual masks typically have significant intra-class variances (such as humans having different hairstyles)?
[-] To obtain candidate names, GPT-4 and image captioning are used. Since GPT-4 is usually sensitive to prompts, how do you ensure the quality of the generated candidate names (quantitative comparisons are needed)? Do you need additional labor to perform a double-check after combining results from GPT-4 and captioning?
[-] Since renovating names for open-vocabulary understanding seems to be one general idea, why only focus on segmentation tasks? Will the conclusion of this paper be exactly the same for classification or detection? For example, when using visual samples to match the best candidate, the intra-class visual variances of classification are definitely greater than those of segmentation.
Technical Quality: 3
Clarity: 4
Questions for Authors: [-] In the related-work section, some citations are missing, which may mislead readers/reviewers. [1,2] have explored the text issue of class names in open-vocabulary segmentation, with some overlapping with this paper. The reviewer encourages including them and conducting detailed discussions and comparisons.
[1] AttrSeg: Open-Vocabulary Semantic Segmentation via Attribute Decomposition-Aggregation. NeurIPS 2023
[2] Multi-Modal Prototypes for Open-Set Semantic Segmentation. Springer IJCV
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: As the authors said, RENOVATE could inadvertently propagate biases from the foundational models into the new names. To mitigate potential negative societal impacts, they advocate for verification of names in critical applications.
Also, the authors acknowledge that the exploration remains incomplete, and that they will make further refinements to explore more model backbones and scale up to large-scale datasets.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for acknowledging our paper to be “on the right path” and that our presentation to be “easy to follow” and have “clear logic”. We address the rest of the comments below.
**1. Noise types and rates.**
Besides the noise types we visualized in Figure 1, there are indeed many other noise types in the real world, such as “being too specific”, “being too ambiguous”, “being culturally/socially inappropriate”, and many more. However, since no groundtruth names are known, it is infeasible to determine exactly how many types of name mistakes there are. It is thus also impossible to determine the “noise rates” for each noise type.
However, in the future, one could use human annotators or even VQA models to annotate a dataset for such name mistakes. That would certainly be very valuable for understanding how noise in names affects the models.
Nevertheless, we can still use human-verified RENOVATE names on the validation set to estimate the “noise rates” of different datasets. Specifically, we can define noise rate $= 1 - \mathrm{sim}(\text{original name}, \text{renovated name})$ to assess how different the original names are from the RENOVATE names, and use the averaged score to assess the “noise rate” of a dataset. Under this definition, the noise rates of ADE20K and Cityscapes are 0.59 and 0.39, respectively. We can see that ADE20K exhibits a higher noise rate than Cityscapes, potentially because ADE20K has more samples and classes and is thus likely to have more errors.
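For concreteness, this averaged noise-rate metric can be sketched in a few lines. This is only an illustration, not the authors' implementation: the similarity function here is a hypothetical token-overlap stand-in (the actual sim(·,·) would likely be a text-embedding cosine similarity), and the class-name pairs are made up.

```python
def name_similarity(a: str, b: str) -> float:
    """Jaccard overlap of lowercase word tokens (illustrative stand-in for sim)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def dataset_noise_rate(pairs) -> float:
    """Average of 1 - sim(original, renovated) over (original, renovated) pairs."""
    return sum(1.0 - name_similarity(o, r) for o, r in pairs) / len(pairs)

# Hypothetical (original name, renovated name) pairs:
pairs = [
    ("minibike", "moped"),      # fully renamed   -> per-class noise 1.0
    ("animal", "wild animal"),  # partial overlap -> per-class noise 0.5
    ("car", "car"),             # unchanged       -> per-class noise 0.0
]
rate = dataset_noise_rate(pairs)  # -> 0.5
```

With an embedding-based similarity substituted in, the same averaging would reproduce the dataset-level scores reported above.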
**2. On the name selection process.**
As written in L179-182, we perform name ranking and selection for each segment separately. Therefore, only one groundtruth mask is needed for each segment. Since we do per-segment renaming, there is no intra-class variance to deal with.
**3-(a). Sensitivity to GPT4 prompts.**
Our GPT prompts have the following components: (1) original name; (2) context names from image captioning; (3) suggestions on the two types of names (synonyms and short contexts) to focus, with examples; (4) instructions on how to make use of the original name, with examples.
In Table B.1, we performed an ablation study on the different choices of the context names, and we can see that better context names result in significant gains in the renaming performance. We further ablate components (3) and (4) by removing them from the GPT-4 prompt and report the results in the following table. From our empirical studies, we find that GPT-4 is typically robust to the specific wording of the four components, but that removing any key component (especially (2) and (3)) can significantly affect the results.
|Prompt|PQ|AP|mIoU|
|-|-|-|-|
|full prompt |**27.9**|**17.9**|**37.1**|
|- (2) |25.5|16.5|33.8|
|- (3)|25.6|16.8|34.0|
|- (4)|26.5|17.3|35.2|
**3-(b). Manual check for GPT-4 results**
Our pipeline does not require additional manual checks on the GPT-4 results, as they already incorporate knowledge from our “context names” and the original name.
**4. Why only focus on the segmentation task?**
We chose the segmentation task as this is one of the most representative tasks for 2D recognition. Please note our segmentation task covers semantic, instance, and panoptic segmentation. Our method is readily applicable to detection (by replacing the mask by the bounding box) and classification (by replacing the mask by the whole image).
Specifically, we conducted an additional study on an open-vocabulary object detection model, YOLO-World [1]: by replacing the original names with RENOVATE names for fine-tuning, we improved the performance on ADE20K from an AP of **18.1%** to **21.2%**, a 17.1% relative improvement. We will add the results to the paper.
**5. More related works**
We thank the reviewer for pointing out more related works. Both works mentioned by the reviewer propose to decompose the class names into a set of attributes or prototypes and are indeed also addressing the problem that class names are typically not descriptive enough for open-vocabulary segmentation tasks.
However, note that our work addresses the problem of class names in a different way – our RENOVATE framework is able to deal with various kinds of problems with the original names and directly improve the name quality for the segmentation task.
We will discuss these related works in our final camera-ready version.
[1] Cheng, et al. "Yolo-world: Real-time open-vocabulary object detection." CVPR 2024.
---
Rebuttal Comment 1.1:
Comment: The reviewer thanks the authors for the efforts made during the rebuttal; most of my issues have been resolved.
---
Rebuttal 2:
Title: Thanks for the response
Comment: We thank the reviewer for the response and we are pleased to see that our rebuttal has addressed the reviewer's concerns.
If any remaining concerns keep the reviewer at the current borderline recommendation, we would be happy to provide further clarification and discussion. Otherwise, we invite the reviewer to kindly raise the recommendation rating.
Update: We have noticed the recommendation score raising and would like to thank the reviewer again for recognizing the value of this work. | Summary: The paper explores modifying the class names associated with mask annotations in the segmentation datasets in the context of open-vocabulary segmentation. A method, RENOVATE, is described to perform such renaming in an automated pipeline. RENOVATE leverages an image captioning model to generate contextual words for each class. GPT-4 is prompted by class name and the contextual world to generate a pool of candidates. A segmentation model is trained to select among the candidate names using the mIoU as a proxy metric. The refined class names are then used to train open-vocabulary models, which show improvement in performance.
Strengths: 1) The paper explores an interesting angle of open-vocabulary segmentation: improving the names assigned to each segment. This is a novel study, highlighting the value of precise and varied captions for downstream training.
2) The writing of the paper is engaging. Particularly, the ample use of examples helps illustrate key points around the proper use of terms.
3) The paper conducts a very small-scale human study (n=20) showing a preference for the new names.
4) The conducted ablation study verifies some components of the pipeline.
Weaknesses: 1) The section on the training of the ranking model to select among the candidate names could be clearer and more streamlined. It is not particularly clear whether a model is trained for each category or if a single model is trained for all categories.
- If it is the former, then there are certainly some computational concerns and limitations that the method imposes. It would be useful to discuss the computation cost wrt to the number of classes in the data.
- If it is the latter, then the ability to use or re-use this trained model for the downstream task of OV segmentation could be considered. It seems that the effect of gt-mask attn bias is limited (B.2) as randomisation performs better. Also, reporting the performance of this model on the downstream task is of interest.
2) In the same section, the design and presentation of a specialised architecture for the renaming could be better motivated. Since a proxy of a segmentation task is employed, is the new architecture even necessary? Could existing model architecture perform this task? Why do existing OV segmentation model pipelines not meet the requirements of the proposal's renaming stage?
3) The evaluation of the study for the improved performance of the segmentation models is limited -- only a single architecture is explored. Other architectures would help rule out particular drawbacks of the FC-CLIP being addressed with renaming and would provide more evidence to the broader conclusion about renaming helping model training (L310).
4) While the new names might help identify particular problems in models (L297), the reasoning behind renovating names in evaluation is rather backwards. While some misclassifications might be due to the similarity in labels, such as those identified as "benign" by the authors, they are still misclassifications as defined by the task. This then creates a problem in downstream use. For example, two masses in an X-ray might appear very similar due to both being a collection of cells, but their placement and small characteristic outline patterns will determine the difference between benign and dangerous growths. Thus, one should seek to highlight key errors in the evaluation instead, even if the labels are close. A different phrasing of the conclusions (L296) seems warranted: the learned semantics are not sufficient to differentiate between semantically adjacent classes.
5) [Minor] Finally, the proposed pipeline itself might be less applicable outside of the academic setting, where established datasets with limited label sets are used. Would gathering a dataset with varied captions from the onset make the RENOVATE method redundant?
Technical Quality: 3
Clarity: 3
Questions for Authors: - It would be good if the issue surrounding the training of the renaming model (W1) could be clarified in the rebuttal.
- Additionally, providing some commentary on the weaknesses (W2) and (W4) would help improve the interpretation of the paper.
- Furthermore, it would be good to discuss future applicability of the pipeline, or whether the better path is to just collect more precise captions for training data.
Overall, the paper is well-written and presents some insights into the treatment of data for open-vocabulary segmentation. There are some limitations in the evaluation. I currently rate the paper as BA prior to rebuttal.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The paper touches on the potential to propagate existing biases in the foundational models it employs, which is a big issue, and it is important that it is mentioned.
I would recommend conducting an ethics review or obtaining IRB approval for the human study, even if it is not strictly necessary according to the respective institutional guidelines.
Flag For Ethics Review: ['Ethics review needed: Research involving human subjects']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for acknowledging the novelty of our paper and our efforts in making our writing “engaging”. We address additional comments as the following:
**1. On the renaming model and downstream segmentation task.**
Thanks for the suggestion, we will work on making this part more streamlined and clear. Now to answer the specific questions:
First, we train the renaming model for all categories of a dataset.
Second, it is an interesting idea to use the renaming model for downstream segmentation. To do so, we need to use it without gt-mask attn biases and use names from all the classes as the text queries. Note that the “rand gt mask” in Table B.2 still uses gt masks – it only randomizes in the intermediate layers of the transformer decoder, but always uses gt masks at the initial layer. Using a renaming model without gt masks would compromise the performance significantly and make the model unable to distinguish different instances from the same class, as reflected in its segmentation performance on ADE20K: an mIoU of **33.2%** (semantic segmentation) and AP of only **12.9%** (instance segmentation). Similarly, OVSeg models cannot perform as well as our renaming models in the renaming task as they do not make full use of the available knowledge for renaming (see discussion in **2.**).
While the renaming model may not be useful for downstream segmentation, it can be used for other tasks such as **generating mask-name annotations for image datasets** by matching caption nouns or tags with mask proposals, making it potentially useful to improve many large-scale datasets. See the PDF for an example.
**2. The necessity of an architecture for renaming.**
Compared to OVSeg, the renaming task has more knowledge of the inputs (gt masks and gt classes) and only focuses on finding optimal names within candidate names of each class for each visual segment. Thus, when designing renaming model architecture, it is a good idea to make use of these different task requirements to ensure high-quality renaming results.
In fact, our renaming architecture is nearly identical to Mask2Former, except that we use text embeddings as input queries and gt masks (with randomization in intermediate layers) as attention biases for cross-attention computation. As shown in Table B.2, both modifications are essential to improve the renaming performance (e.g., from an mIoU of 27.1% to 37.1%). We further report the renaming performance when using the OVSeg architecture FC-CLIP under the same setup as in Table B.2:
||PQ|AP|mIoU|
|-|-|-|-|
|FC-CLIP (for renaming)|19.6|14.3|25.3|
|Our renaming model |**27.9**|**17.9**|**37.1**|
We can see that our renaming architecture is indeed far better suited to the renaming task.
**3. Other architectures for training with the RENOVATE names.**
We thank the reviewer for the suggestion to extend the current training experiments to more architectures.
We further experiment with a very different architecture, YOLO-World[1], which does not rely on a CLIP visual backbone. To compare the training name quality on COCO, we fine-tune YOLO-World-M on the COCO object detection dataset and evaluate OV object detection on ADE20K. By replacing the original names with RENOVATE names, we improve the AP from **18.1%** to **21.2%**, which is equivalent to 17.1% relative improvement. This further demonstrates the general applicability of RENOVATE names to help model training.
**4. On the evaluation with renovated names.**
The reviewer raised an interesting point that visually close misclassifications are of high interest and suggested that “one should seek to highlight key errors in the evaluation”. We totally agree with this.
We want to first point out that the “key errors” are highly domain-dependent. For example, in autonomous driving, identifying a “car” as a “truck” is much less dangerous than identifying it as a “road”. In this case, the key error is not the “car/truck” but the “car/road” misclassification. In this domain, mistakes with large semantic differences (e.g., the “car/road” mistake) are often considered more “wrong”, which is consistent with our intuition of a classification mistake.
In the X-ray case, we can use mistakes in “placement and small characteristic outline patterns” as the key errors instead of “visual similarity” and design the evaluation metric accordingly.
Finally, regardless of how to design the evaluation metric, renovated names make it possible to conduct fine-grained analysis on mistakes with different semantic distances. Without our renovated names, we can only see the coarse misclassifications without a fine-grained understanding of the model.
We will add this discussion of the evaluation metric in our revision.
**5. Further applicability of the renaming pipeline with rich captions.**
We want to first point out that collecting a high-quality multi-caption dataset is very expensive — current caption data collection relies mainly on web scraping, which yields low-quality captions that often miss many objects in the scene and rarely include multiple/rich annotations per image. Curating these datasets already requires a lot of effort and cost[2,3].
Compared to this, it is much more cost-efficient to curate a dataset with relatively coarse annotations and use our RENOVATE to **improve the quality of annotations**. From this perspective, RENOVATE is indeed very useful and meaningful for improving current large-scale datasets with coarse web-scraped annotations. In the PDF, we showed an example where **the renaming model is applied to generated masks and tags and produces good mask-name matches**. This shows that RENOVATE has broad future applicability.
[1] Cheng, et al. "Yolo-world: Real-time open-vocabulary object detection." CVPR 2024.
[2] Gadre, et al. "Datacomp: In search of the next generation of multimodal datasets." NeurIPS 2024.
[3] Fang, et al. "Data filtering networks." arXiv preprint (2023).
---
Rebuttal Comment 1.1:
Title: Thank you for the rebuttal
Comment: I wanted to acknowledge and thank the authors for their detailed responses. I believe the majority of my concerns have been addressed, and I support the proposed changes. I will update my recommendation accordingly.
---
Reply to Comment 1.1.1:
Title: Thanks for the Response
Comment: Thank you for taking the time to review our response. We’re pleased that our rebuttal has addressed your concerns and we appreciate the recommendation score increase. We will make sure to incorporate the reviewer's suggestions into our revised paper. | Summary: The paper considers the problem of open-vocabulary segmentation. The task requires segmentation models to recognize categories outside of the training taxonomy. This usually means learning a joint image-text feature space and classification via similarity matching between class text embeddings and segmentation mask embeddings. The paper aims to improve the suboptimal text descriptions of the dataset classes, which are often too general, ambiguous, and context-dependent. The proposed method describes a strategy for automated renaming of the assigned class descriptions at the segment level. This means that each ground truth segment is assigned one or more text descriptions that describe that particular segment more specifically. Training with such enriched text descriptions improves the model performance on both the source and target datasets. The experimental study follows the usual setup where COCO is used as the source dataset, and ADE20k and Cityscapes as target datasets.
Strengths: The paper considers an important and underexplored problem in open-vocabulary segmentation. To the best of my knowledge, OpenSeg [5] is the only prior work that also considers the problem of suboptimal class descriptions and corresponding renaming solutions. The paper provides a proper comparison and discussion of the differences.
The proposed renaming method is fully automated, which represents a significant step forward w.r.t. the related work that requires manual inspection. The described approach is technically sound and properly explained.
The proposed method seems dataset and model agnostic. This means that it should easily generalize to other datasets and would be beneficial for training different open-vocabulary segmentation models. The authors promise to share the code and the renamings for popular datasets, which would have a positive effect on future research in open-vocabulary segmentation.
The results reveal that training with renovated vocabulary leads to increased segmentation performance across different metrics and datasets. Furthermore, including the renovated vocabulary only in the evaluation phase leads to increased performance of the open-vocabulary models pretrained with regular datasets.
Weaknesses: Some parts of the paper remain unclear.
For example, I am confused about the batch collation for training the renaming model. Figure 2 suggests that the transformer decoder in a single forward pass receives candidate names from a single ground truth class. Does this mean that for a single image the number of forward passes through the transformer decoder is equal to the number of ground truth segments? Would you say then that a single training example consists of an image, a single ground truth segment, and multiple candidate names? Is the batch size then equal to the total number of segments in the sampled images? Does the batch size of 16 from line 239 refer to the number of images or the number of segments? Does this type of training affect the memory requirements significantly? Is there any redundancy in terms of the tensor representations that have to be kept in memory in order to enable parallel training on multiple segments? I suggest the authors clarify these questions.
I also don't understand the details of using renovated names during the evaluation. Does this mean that all the candidate names are just appended to the other naming variants of that particular class? It seems to me that we do not need to train the renaming model if we would like to use the renovated names just for the evaluation. Is this correct? How many naming variants per class do you get on average when you concatenate the proposals from the original set, the OpenSeg names, and the renovated names? Does this affect the inference speed significantly? If you also use the usual templates (a picture of *class name*, an image of *class name*, etc.), it seems to me that there might be a lot of different textual representations for a single class. How does this inference really work?
The renaming model should be compared to some stronger baseline which also has the ability of candidate selection. For example, you could assign each segment with a candidate name that achieves highest similarity with the segment representation based on the CLIP embeddings. The segment representation could be average-pooled feature representation from the CLIP visual encoder. I question the necessity of training the mask prediction module just for renovating the class name associated with some segment, if we already know the corresponding ground truth mask. Could you please comment on this issue and maybe discuss some possible alternatives?
In Table 3, the results obtained with OpenSeg names should correspond to the original FC-CLIP results. However, in some cases the reported performance is lower than the FC-CLIP performance reported in the official repository. Why is this the case?
As shown in Table 3, training with renovated names leads to only a marginal gain in some cases (e.g., PQ on Cityscapes).
I am not sure whether the x-axis in Figure 5 shows the number of images available in the training dataset or the number of iterations trained on the full dataset. If these experiments consider subsampled training subsets, you should explain how the training images were selected.
I am a bit surprised about the effectiveness of the negative sampling. What would be the effect of including additional renovated names of the ground truth class as negatives? This is reminiscent of hard example mining, as it would force the model to differentiate between similar semantic concepts.
Technical Quality: 3
Clarity: 2
Questions for Authors: Please consider the questions from the weaknesses section that would clarify training of the renaming model and the evaluation procedure with the renovated names. Also, please discuss the possibility of evaluating the renaming model with another baseline with candidate name selection ability. Please find the suggestion in the weaknesses section, which considers a baseline based on pooling of CLIP features.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors discussed the limitations in the section 6. The main concern is regarding the bias transfer from the foundation models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for pointing out that we addressed an “important but underexplored” problem and that our approach is “technically sound”, “properly explained”, and “would have positive impact on future research”. We address the remaining comments below.
**1-(a). Batch collation to train the renaming model**
Our renaming model processes all images in a batch (16 images) in a single forward pass – all the text queries from the gt segments are processed in parallel. Specifically, each input query is to assess the alignment between one gt mask and one of its candidate names — the input query itself is initialized with the text embedding of the candidate name, and it is refined by the cross-attention layers that use the gt mask as attention biases.
Since the model can process multiple queries in a single pass (like Mask2Former), we can process all such mask-name pairs in one batch in a single pass. For images where the number of pairs is less than the number of queries, we simply pad with empty queries for collation.
We will make the batch collation more clear in the paper.
**1-(b). Memory usage**
Our model **consumes similar memory** to a standard Mask2Former model. The key factor for memory consumption is the number of queries. In Mask2Former, this number is 250. In our renaming model, it usually varies between 100 and 350, depending on the number of candidate names and segments in the images in a batch.
**2-(a). Name merging for evaluation in Table 3**
For results in Table 3, when evaluating, we append the **“renovated names”** instead of the **“candidate names”** to the other naming variants. After merging the names, the number of names per class is 6.44, 4.98, and 6.89 for COCO, ADE20K, and Cityscapes, respectively. This has barely any influence on inference time (0.3 sec/img) since the **inference cost is mostly in processing the image and query features** rather than in the final cosine similarity computation. One can even choose to use more test names (e.g., 10 names per class) depending on the use case without incurring any noticeable extra inference cost.
**2-(b). The usage of prompt templates in inference**
For each prompt name, we first use ViLD templates to create 63 different sentences and extract their CLIP text embeddings. We then average the text embeddings after normalization and use this **single embedding** as the text embedding for the corresponding name. This is also the common practice in other OVSeg works [1,2].
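To make this concrete, here is a minimal sketch of the ensembling step (illustrative only: the embedding dimension, the random stand-in embeddings, and the final renormalization are our assumptions, not details from the paper):

```python
import numpy as np

def ensemble_text_embedding(template_embeddings: np.ndarray) -> np.ndarray:
    """Average L2-normalized per-template embeddings into one class embedding.

    template_embeddings: (num_templates, dim) text embeddings, one per filled
    prompt template (e.g., 63 ViLD templates for the same name).
    """
    normed = template_embeddings / np.linalg.norm(template_embeddings, axis=1, keepdims=True)
    mean = normed.mean(axis=0)
    # Renormalizing is a common convention for cosine-similarity matching
    # (cosine similarity itself is scale-invariant, so this step is optional).
    return mean / np.linalg.norm(mean)

# Toy stand-in: 63 templates, 512-dim random embeddings
rng = np.random.default_rng(0)
class_embedding = ensemble_text_embedding(rng.normal(size=(63, 512)))
```

At inference, a single averaged embedding per name keeps the final cosine-similarity step cheap regardless of how many templates were used.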
**2-(c). The need of renaming model for the evaluation**
Since we need “renovated names” for evaluation, we need to train the renaming model to obtain them. Evaluating using candidate names would introduce unnecessary complications in the evaluation as they are quite noisy (as indicated by the inferior performance when training with candidate names in Table 3).
**3-(a). Baselines for the renaming model**
We thank the reviewer for the suggestion of using average-pooled CLIP features as a baseline. In prior work, this baseline is known to be inferior to tailored OVSeg models [1]. In [1], it is shown that with gt masks as region proposals, average-pooled CLIP features only reach an mIoU of **20.1%** on ADE20K, significantly lower than OVSeg models (which reach at least over 30%).
Our baseline for the renaming model is the FC-CLIP with gt attn biases (row 1 in Table B.2), which is much stronger than the average-pooled CLIP features. Our model significantly improves upon this baseline (from **27.0%** mIoU to **37.1%** mIoU) by incorporating our designs of “rand gt masks” and “text queries”.
**3-(b). Necessity of training the mask prediction module.**
Relatedly, since CLIP features are not good enough, we need to improve them. The pixel decoder in our renaming model is responsible for improving the CLIP features to a per-pixel level. The mask prediction module is irreplaceable since it’s the only source of supervision signals for these per-pixel features.
We also note that even if we use gt masks as attn biases, the mask prediction is still non-trivial. As we write in L160-164, the gt masks are only guiding the cross-attention to focus on the segment regions.
**3-(c). Alternatives of the mask prediction module**
As an alternative to our mask prediction module, one may design different proxy losses to train the renaming model. For example, a region-level contrastive loss that forces matched mask-name embedding pairs to be closer. As our focus of this paper is to demonstrate the importance of names and renaming, we leave such exploration to future work.
**4. Table 3 OpenSeg results are lower than the original FCCLIP results.**
This is because the performance of an OVSeg model depends on the test name set. In Table 3, our test names merge all three name sets. This is for a fair comparison of models trained with different names. We will update the paper accordingly to make this clear.
**5. Figure 5 setups.**
The x-axis in Figure 5 shows the number of images available in the training set. The training images are randomly selected from the whole training set.
**6. Negative sampling variants**
Negative sampling is known to be highly effective for training with a large vocabulary in NLP. Recently, some OVSeg models have also adopted an approach similar to ours [2].
While negative names from other classes are guaranteed “true negatives”, same-class renovated names can be complicated. For example, for one segment, “man” might be the most precise, but “person” is also not wrong. It is thus not ideal to penalize “person” in the same way as names from other classes (e.g., “car”) during training. To remedy this, one may add an extra step to first select “true negatives” among each class. This can indeed be an interesting future extension of our current sampling approach.
[1] Liang, et al. "Open-vocabulary semantic segmentation with mask-adapted clip." CVPR 2023.
[2] Cheng, et al. "Yolo-world: Real-time open-vocabulary object detection." CVPR 2024.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for the rebuttal. Most of my concerns were addressed. Thus, I will upgrade my decision to weak accept.
---
Reply to Comment 1.1.1:
Title: Thanks for your response and score upgrade
Comment: Thank you for taking the time to review our response. We’re pleased that our rebuttal has addressed your concerns and we sincerely appreciate the recommendation score upgrade. We will carefully consider and incorporate the key points you've raised as we work on the revised paper. | null | null | Rebuttal 1:
Rebuttal: We’d like to thank all reviewers again for their valuable feedback.
We are pleased to see that reviewers found that our work is **novel** (Reviewer aQgc) and **technically sound** (Reviewer fwux), **considers an important and underexplored problem** (Reviewer fwux, Reviewer CT7Q), and would **positively affect future research** (Reviewer fwux). We are equally glad that the reviewers found the paper writing **engaging** and **easy to follow and understand** (Reviewer aQgc, Reviewer CT7Q).
We’d like to highlight a few points in our responses to the reviewers:
1. We added an experiment using YOLO-World [1] to train with renovated names and reported performance gains in open-vocabulary object detection on ADE20K from an AP of **18.1%** to **21.2%**, a 17.1% relative improvement. This demonstrates the benefit of RENOVATE names in training different model architectures and different tasks (open-vocabulary object detection). See responses to **Reviewer aQgc** and **Reviewer CT7Q**.
2. We added a figure in the PDF that showcased how the renaming model can be used **without gt mask annotations** but **with SAM2-generated masks [2] and RAM-generated image tags [3]**. This demonstrates another possible application of the renaming model and shows that our renaming approach can be further generalized to datasets without mask annotations. See response to **Reviewer aQgc**.
3. We added ablation experiments on different components of the GPT-4 prompts for a better understanding of how GPT-4 prompts influence the renaming performance. See response to **Reviewer CT7Q**.
4. We added an experiment using FC-CLIP (without gt-mask attn bias during training) for renaming. Its significantly inferior performance relative to our renaming model suggests the need for a dedicated renaming model. See response to **Reviewer aQgc**.
5. We also added more implementation details on the training of the renaming model, the name selection process of the renaming model, and the evaluation of the models trained with renovated names. We hope our response clarifies the questions from the **Reviewer fwux** and **Reviewer CT7Q**.
We hope our replies to all the reviewers satisfactorily address their questions and comments and we warmly welcome further questions during the discussion phase.
[1] Cheng, et al. "Yolo-world: Real-time open-vocabulary object detection." CVPR 2024.
[2] Ravi, Nikhila, et al. "SAM 2: Segment Anything in Images and Videos." arXiv preprint arXiv:2408.00714 (2024).
[3] Zhang, Youcai, et al. "Recognize anything: A strong image tagging model." CVPR 2024.
Pdf: /pdf/4be21b89282dac22fee5dffca2c15ed9ed4c6796.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Self-Supervised Alignment with Mutual Information: Learning to Follow Principles without Preference Labels | Accept (poster) | Summary: The paper introduces SAMI, an iterative algorithm designed to align LLMs with behavioral principles (constitutions) without the need for preference labels or demonstrations. SAMI achieves this by optimizing the conditional mutual information between principles and self-generated responses given queries. The approach shows significant improvements in single-turn dialogue and summarization tasks compared to pretrained and instruction-finetuned models.
In summary, this paper presents a novel and effective method for aligning language models with behavioral principles without relying on preference labels or demonstrations. Despite demonstrating strong empirical results and scalability, the approach faces challenges related to its dependence on the initial models, regularization issues, domain limitations, and length bias. Overall, SAMI represents a significant advancement in the efficient and practical alignment of language models.
Strengths: 1. The introduction of SAMI represents a significant innovation by aligning LMs with principles without using preference labels or human demonstrations.
2. The method outperforms both the initial pretrained model and an instruction-finetuned baseline in single-turn dialogue and summarization tasks.
3. SAMI scales effectively to stronger models (e.g., llama3-70b) and generalizes to diverse principles not seen during training.
4. The ability to align LMs to follow principles without extensive human oversight has practical implications for reducing the resource intensity and complexity of current alignment techniques.
Weaknesses: 1. The approach faces potential over-optimization issues, producing non-coherent outputs (gibberish) if not regularized properly. The paper mentions that current regularization strategies add algorithmic complexity and are not always effective.
2. The experiments are restricted to single-turn dialogue and summarization tasks. It remains to be seen how SAMI performs on more complex multi-turn interactions and a broader range of tasks. Additionally, there is a noted length bias in responses, especially when using mutual information as a measure, which can affect the quality and coherence of the generated outputs.
3. The experimental results (such as Figure 2) suggest that multiple iterations are needed to achieve optimal performance, raising concerns about the potential resource burden. The authors should further elucidate this aspect. Additionally, it would be beneficial to include more ablation studies and comparisons with other alignment methods to provide a comprehensive evaluation of SAMI's effectiveness.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. Can the authors provide more details on the regularization strategies they have considered? What specific measures have they found effective in mitigating over-optimization and producing coherent outputs?
2. Have the authors considered extending their experiments to multi-turn interactions and more complex tasks? If so, what are the preliminary findings, if any?
3. The experimental results indicate that multiple iterations are needed to achieve optimal performance. Can the authors provide more insights into the resource requirements and computational costs associated with these iterations?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: 1. While the authors acknowledge the issue of over-optimization leading to gibberish outputs, a more detailed discussion on potential solutions and future directions for regularization would be helpful. This would provide readers with a clearer understanding of how to tackle this challenge.
2. The current experiments are limited to single-turn dialogue and summarization tasks. Expanding the discussion to include potential performance in multi-turn interactions and a wider range of tasks would address concerns about the generalizability of SAMI.
3. Including more ablation studies and comparisons with other alignment methods would strengthen the evaluation of SAMI. This would help readers understand its relative strengths and weaknesses better.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive evaluation of our work!
As mentioned in our responses to our first reviewer LYCL, SAMI, like other alignment methods such as DPO, suffers from over-optimization (e.g., non-coherent outputs if trained for too long). We regularize against this “forgetting” by always starting from the initial model during finetuning, using data generated from an intermediate model. While this is a limitation, we do not add additional complexity through our regularization. In fact, by not having a reference model + KL divergence, we reduce algorithmic complexity as we only need to load one model into memory during training. We will make sure to better clarify this relationship and flag the over-optimization issue more clearly in our limitations.
Regarding your other questions and concerns:
- We agree that evaluation on more complex domains is an important limitation and plan to address this in future work.
- Regarding length bias concerns: While HH-RLHF suffered from length bias, our results on TL;DR have shown that length bias can be regularized against simply by stating that responses should be concise. Fig. 3 shows that sequence lengths decreased over iterations as a result of including a conciseness principle in the constitution. As such, while the original objective suffers from a length bias, this can be regularized against by including conciseness as a part of the constitution. Moreover, we have now included updated results with a more principled length correction for our HH-RLHF experiments (see response to reviewer eqUp).
- As mentioned in our response to reviewer LYCL, we only need a small amount of data at each iteration, which is why the computational/resource requirements for training a model are rather low. We will include ablations with respect to principles and potential simplifications of our objective in our revision.
- Regarding regularization: Please see our response to LYCL.
- We have not yet extended to multi-turn interactions; however, related work has looked at multi-turn interactions using self-improvement techniques and shown promising results (Andukuri et al., 2024). In future extensions, we plan to combine ideas from this multi-turn setting with our mutual information objective.
References
- Zelikman, E., Wu, Y., Mu, J., & Goodman, N. (2022). Star: Bootstrapping reasoning with reasoning. Advances in Neural Information Processing Systems, 35, 15476-15488.
- Andukuri, C., Fränken, J. P., Gerstenberg, T., & Goodman, N. D. (2024). Star-gate: Teaching language models to ask clarifying questions. Conference on Language Modeling Research.
---
Rebuttal Comment 1.1:
Title: Official Comment by Reviewer 5Enh
Comment: Thank you for the rebuttal. The authors have addressed most of my concerns. I am pleased that the computational/resource requirements for training the model are relatively low. I look forward to seeing the algorithms extended to multi-turn interaction settings. I will maintain my current score. | Summary: This work presents a method to align the model towards a set of principles. The general idea is to sample responses from the model with different constitutions and optimize the matching between responses and constitutions via an infoNCE-type contrastive loss. The whole process is done iteratively to improve the alignment. This method is tested on single-turn dialog and summarization, and is shown to improve the performance of mixtral and llama3 models with constitutions sampled either from weaker or stronger models. Further experiments also demonstrate this method's capability to use diverse principles and to combine chain-of-thought reasoning.
Strengths: 1. This paper is well-written and easy to follow. All the detailed hyperparameters are listed in the appendix.
2. The improvements are good. It is nice to see the models can even benefit from principles generated by weaker models.
Weaknesses: 1. While the improvements are convincing, it would make this paper much stronger if the model could be compared with baselines. Some valuable baselines to compare with include: (1) simplified versions of the proposed method (e.g., simplifying the contrastive loss part); (2) baselines mentioned in the related work section (which I acknowledge may not use the same resources, but would still be good to have).
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. One of the motivations behind this work seems to be to remove the dependency on "carefully curated" examples. I'm wondering how robust this method is towards different principles. It would be great to show simple experiments on this or even just share impressions drawn from the existing experiments.
2. I don't understand "by using a small number of gradient updates during earlier iterations" in line 145. How does this regularize distribution shift?
3. While the MI lower bound is increasing smoothly during training, the win rates are not. How do you determine the total number of iterations and select the best checkpoint during training?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Sec. 5 includes a paragraph discussing the limitations of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive evaluation of our work. We completely agree that comparing to other baselines than instruct-finetuned models and base models directly is an important limitation of our work and we plan to address this in future extensions.
Regarding your specific questions:
- We agree that re-running experiments using simplified versions of our contrastive loss (e.g., one-sided vs. two-sided) are important ablations. We will include these in our revision.
- We agree that our first two experiments with HH-RLHF and TL;DR only involved a small number of carefully curated examples. As such, our third experiment with Llama3-70B involves a larger selection of diverse principles (e.g., talk like a pirate or use emojis; see Fig. 5 and Section A.13). We provide examples in Section A.9 (see e.g., at the bottom of p. 23)
- As mentioned in our responses to other reviewers, we are planning to evaluate additional domains requiring more diverse principles, such as roleplaying personas directly (e.g., in MT-bench) in future versions.
- Regarding your comment "I don't understand 'by using a small number of gradient updates during earlier iterations' in line 145. How does this regularize distribution shift?": Thank you for pointing this out. We will revise this statement to be more precise in our revision. Specifically, we regularize in two ways: First, we always train the base model (i.e., the initial model) using data generated from each intermediate model. As such, the same model is never trained twice, and an intermediate model is always using new data for training the next iteration of a model. Second, by only taking a small number of gradient steps, we stay as close as possible to the initial model to avoid forgetting (see also our third response to reviewer LYCL).
- Regarding checkpoint selection: Following previous work (Zelikman et al., 2022), we start with a small number of examples during the first iteration and linearly increase the number of training examples at each iteration. We fix both the number of iterations and the number of examples in advance (see also our third response to reviewer LYCL). We chose 3 iterations simply because each iteration is resource-intensive: after each iteration we have to generate new data and run GPT win-rate evaluations. Since related works (e.g., Andukuri et al., 2024) have observed a ceiling at or around three iterations, we followed this approach. We will make sure to point this limitation out more carefully in our revision.
References:
- Zelikman, E., Wu, Y., Mu, J., & Goodman, N. (2022). Star: Bootstrapping reasoning with reasoning. Advances in Neural Information Processing Systems, 35, 15476-15488.
- Andukuri, C., Fränken, J. P., Gerstenberg, T., & Goodman, N. D. (2024). Star-gate: Teaching language models to ask clarifying questions. Conference on Language Modeling Research.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response! The explanations and additional details resolved most of my concerns, so I increased my score. | Summary: The authors propose a technique to improve the ability of language models (LM) to abide by constitutions without using human labels. First, they ask a principle writer LM to construct detailed constitutions and inverse versions of them (called “antitheses” in this work). Then the main LM, which is the LM to be improved, is asked to generate a response to a prompt for each of the constitutions. Finally, a mutual information loss is calculated and backpropagated, which simultaneously encourages the LM to produce response y_1 under constitution c_1 and to produce y_2 under c_2, while also discouraging the model from producing y_1 under c_2 and y_2 under c_1 (where y_i was indeed produced when conditioned on c_i). The authors show strong improvements over baselines with this technique.
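For intuition, the contrastive structure described in this summary can be sketched as an InfoNCE-style loss over a matrix of sequence log-probabilities. This is a toy illustration under our own assumptions (the function name, matrix shape, and exact loss form are not taken from the paper):

```python
import math

def sami_style_loss(logp):
    """Toy symmetric contrastive loss over an n x n matrix of
    sequence log-probs, logp[i][j] = log p(y_j | c_i).
    Row-wise and column-wise cross-entropy terms reward the
    diagonal pairings (y_i generated under c_i) and penalize
    the crossed ones, as in an InfoNCE-style lower bound on
    mutual information. Illustrative only."""
    n = len(logp)
    loss = 0.0
    for i in range(n):
        # Row: among all responses, constitution c_i should best explain y_i.
        row_lse = math.log(sum(math.exp(logp[i][j]) for j in range(n)))
        # Column: among all constitutions, y_i should be likeliest under c_i.
        col_lse = math.log(sum(math.exp(logp[j][i]) for j in range(n)))
        loss += (row_lse - logp[i][i]) + (col_lse - logp[i][i])
    return loss / (2 * n)

# A model whose responses match their constitutions (diagonal-dominant
# log-probs) incurs a lower loss than an indifferent one.
aligned = [[-1.0, -5.0], [-5.0, -1.0]]
indifferent = [[-2.0, -2.0], [-2.0, -2.0]]
```

The row terms push c_i to prefer y_i over other responses; the column terms push y_i to be likeliest under c_i, which together discourage the crossed pairings.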
Strengths: * The authors propose a novel technique for improving the steerability of LMs
* The improvements over baselines are impressive
* The paper is comprehensive in its results and analysis, and the Appendix is detailed
* The paper is well-written and easy to follow for the most part
Weaknesses: * Soundness
* I believe the length correction method is insufficiently rigorous, as it doesn’t account for the magnitude of length variations. Imagine a hypothetical scenario where win rate is entirely correlated to length. If your less-than-or-equal bucket is on average 100% of the base length, then it will achieve a win rate of 50%. Then say your greater-than bucket is 400% the base length, leading it to achieve 100% win rate. Averaging out the two win rates will yield a result of 75%, when in fact an unbiased length-corrected result should yield 50%. I recommend a more principled approach, like the one mentioned in Stiennon et al 2020 Appendix F - https://arxiv.org/abs/2009.01325, which uses logistic regression to remove the effect of length.
* Please include statistical significance of win rates and other key results
* Please include details on checkpoint selection and the train/validation/test splits used. From reading the paper alone, I am under the impression that there were no validation splits used, which would be concerning
* Usefulness of principle writer
* Is the principle writer component necessary? It seems like it’s not a vital part of this technique, and a human could write a constitution given that they are already writing out specifications to the principle writer. It also takes up a lot of space in this paper and is rather distracting
* From eyeballing Figs 3-4, it also looks like principle writer size doesn’t matter. This should be stated clearly in the results.
* Ablation should be conducted on whether you need antitheses vs. just need more than 1 constitution. This fits nicely into the contrastive loss narrative, but it’s not clear if antitheses are really necessary.
* Clarity
* It should be made clearer that the idea is to improve the steerability of the LLM via constitutions. When reading the paper, I was originally under the impression that the goal was to align the model to a specific constitution (e.g. “be helpful and harmless”).
Technical Quality: 3
Clarity: 3
Questions for Authors: * Can you clarify whether the inputs to the SAMI and the baseline models are exactly the same? (both in the paper and in your response) This is quite important to making sure the authors are using a fair baseline
* Figures 2-4 are very busy and hard to interpret. Please consider trimming them down (e.g. removing the “principle writer size” dimension), or using different colors, symbols
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Adequately addressed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive evaluation of our work and providing the additional length correction reference. We will make sure to report both statistical significance in addition to confidence intervals in our revision. Moreover, we will revise figures for clarity as requested.
Please see our attached pdf for updated win rates from Experiment 1 (HH-RLHF Dialogue) based on fitting a logistic regression model to remove length effects. For fitting the logistic model, we followed the most recent evaluation standard from Length-Controlled AlpacaEval (Dubois et al., 2024), which, similar to Stiennon et al. (2020), inputs the length of each response as well as the result and trains a classifier to predict the counterfactual: “what would the preference be if the model’s output had the same length as the baseline?” Results for HH-RLHF after applying this length correction show the same pattern as our previous results, albeit more conservative. Thank you again for pointing this out; we will make sure to update our figures and text to reflect this change!
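As a rough sketch of this kind of correction (illustrative only; the actual LC-AlpacaEval procedure differs in its features and weighting), one can fit a logistic regression of the preference on the length difference and read off the counterfactual win probability at zero length difference:

```python
import math
import random

def length_corrected_win_rate(wins, len_diffs, lr=0.1, steps=1500):
    """Toy length debiasing: fit P(win) = sigmoid(a + b * len_diff)
    by full-batch gradient descent on the logistic loss, then report
    the counterfactual win rate with the length term zeroed out
    (i.e., sigmoid(a), the prediction at len_diff = 0)."""
    a, b = 0.0, 0.0
    n = len(wins)
    for _ in range(steps):
        grad_a = grad_b = 0.0
        for w, d in zip(wins, len_diffs):
            p = 1.0 / (1.0 + math.exp(-(a + b * d)))
            grad_a += (p - w) / n
            grad_b += (p - w) * d / n
        a -= lr * grad_a
        b -= lr * grad_b
    return 1.0 / (1.0 + math.exp(-a))

# If "wins" were driven purely by length, the corrected estimate
# should fall back toward chance (50%).
random.seed(0)
len_diffs = [random.uniform(-1.0, 1.0) for _ in range(200)]
wins = [1.0 if d > 0 else 0.0 for d in len_diffs]
corrected = length_corrected_win_rate(wins, len_diffs)
```

In this synthetic setup the length coefficient absorbs the entire effect, so the intercept-only prediction returns toward 50%, which is the behavior the hypothetical reviewer example (75% vs. 50%) motivates.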
Regarding your other concerns and specific questions:
- We take only one gradient step on each batch and never train twice on a given data point within a given iteration. Moreover, we alternate between two dataset splits (A, B) between iterations, ensuring that a model trained on split A never sees data points from split A when generating new training data for the next iteration, for which we use split B (and vice versa for a model trained on split B). As such, at every point in our pipeline (data generation, training, and win-rate evaluation), a data point encountered by a given model is seen for the first time and is never seen again. We then fix the number of iterations across experiments and compute win rates at the end of each iteration to evaluate performance (similar to Zelikman et al., 2022).
- While it is necessary to have principles, the principle writer could be either a human or another language model. Moreover, principles could be sampled from a pre-existing dataset. As such, the principle writer is not strictly necessary for our pipeline. However, we would like to emphasize that we were interested in exploring a setting in which a principle writer might be weaker than the student being finetuned. We believe that—similar to using a weak supervisor model to label data for training a strong student (i.e., weak-to-strong generalization; see e.g., Burns et al., 2023)—this is an important point to make on its own, as it is not unlikely that future models might surpass human users in capabilities while still having to follow instructions from human users. As you mention in your next comment, Figures 3-4 show that a small principle writer can indeed be used to align a stronger student, which we believe is a key finding suggesting that small aligned principle writers / models (which act as a stand-in for a human user) can be used to steer strong students. Thank you for pointing this out again; we will make sure to state this more clearly in our results!
- We agree that additional ablations for the usefulness of antitheses are important and will include these in future extensions of our work.
- We agree that we need to make our goal—steerability of a language model via constitutions—more explicit. This goal is distinct from aligning a model to a specific constitution (or a specific distribution of labels through RLHF). Instead, it aims to increase steerability more generally.
- Yes, the inputs to SAMI and baselines are the same. Both the original base model and SAMI-finetuned models use a base model template with no additional special tokens except BOS and EOS tokens. Instruct-model based templates use the exact same input, with the only difference being the additional special tokens required by the tokenizer.
References:
- Burns, C., Izmailov, P., Kirchner, J. H., Baker, B., Gao, L., Aschenbrenner, L., ... & Wu, J. (2023). Weak-to-strong generalization: Eliciting strong capabilities with weak supervision. arXiv preprint arXiv:2312.09390.
- Dubois, Y., Galambosi, B., Liang, P., & Hashimoto, T. B. (2024). Length-controlled alpacaeval: A simple way to debias automatic evaluators. arXiv preprint arXiv:2404.04475.
- Zelikman, E., Wu, Y., Mu, J., & Goodman, N. (2022). Star: Bootstrapping reasoning with reasoning. Advances in Neural Information Processing Systems, 35, 15476-15488.
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my concerns, especially regarding the length analysis. Please include significance testing in your next revision as promised. I will raise my score accordingly. | Summary: This paper introduces SAMI (Self-Supervised Alignment with Mutual Information), an iterative algorithm for aligning language models to follow behavioral principles without using preference labels or demonstrations. The key idea is to finetune a pretrained LM to increase the mutual information between constitutions (sets of principles) and self-generated responses. The authors demonstrate SAMI's effectiveness on dialogue and summarization tasks, showing it can outperform both the base model and instruction-tuned baselines. They also show SAMI can align strong models using principles written by weaker models, and that it generalizes to diverse principles and scales to larger models.
Strengths: 1. The paper presents a novel approach to language model alignment that does not require preference labels or demonstrations. This is a significant departure from existing methods like RLHF or supervised finetuning.
2. The method is well-developed and grounded in information theory. The authors provide a clear theoretical foundation for their approach, deriving a tractable lower bound on the conditional mutual information objective.
3. The paper is well-structured and clearly written. The SAMI algorithm is presented in detail (Algorithm 1), and the key ideas are illustrated effectively through Figure 1. If the results hold up to scrutiny, this could be an important contribution to the field of AI alignment. The ability to align language models without relying on expensive and potentially biased human preference data could significantly accelerate progress in this area.
4. Authors conduct a comprehensive set of experiments, including comparisons to strong baselines, investigations of generalization to diverse principles, and scaling to larger models (llama3-70b).
Weaknesses: 1. While the paper shows results on dialogue and summarization tasks, it would be beneficial to see performance on a wider range of tasks to better understand the method's generalizability.
2. The paper compares primarily to instruction-tuned models and base models. Comparisons to more recent alignment methods like constitutional AI or RLAIF would strengthen the results.
3. While the authors mention regularization to prevent divergence from the initial model, there's limited discussion of how this affects the model's ability to generalize to new tasks or domains not seen during training.
4. While the method is shown to work with llama3-70b, it's not clear how computationally intensive the approach is compared to other alignment methods, especially for very large models.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. How does the computational cost of SAMI compare to other alignment methods like RLHF or constitutional AI, especially for very large models?
2. Have you investigated how SAMI performs on tasks significantly different from those used in training? For example, if trained on summarization, how well does it generalize to tasks like code generation or mathematical reasoning?
3. The paper mentions using regularization to prevent divergence from the initial model. Can you provide more details on how this regularization affects the model's ability to learn new behaviors not present in the initial model?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive evaluation of our work!
We completely agree that future work should include a wider range of tasks. Widening both the range of tasks and principles is crucial for training a general constitution-following model, and we plan to address this limitation in future extensions of our work. We will ensure that this limitation is carefully addressed in our limitations section.
Regarding your questions:
- SAMI's computational cost is relatively low. We apply only one gradient step per batch, with a batch size of 128. At each iteration, we train on at most 12 batches (1,536 examples). Following Zelikman et al. (2022), we start with a small number of batches and increase the number of batches by a constant factor in later iterations. Consequently, fine-tuning small models like Mistral-7B or Mixtral-8x7B takes no more than 5-10 minutes at a given iteration (using reasonable hardware such as A100 GPUs). While fine-tuning LLaMA-70B is more expensive due to its size, the dataset still remains small (≤1,536 examples at each iteration; see section A.3 for further details). For comparison, the original DPO paper used 170,000 HH-RLHF examples (see p. 7 in Rafailov et al., 2023), an additional SFT stage prior to DPO fine-tuning, and trained for multiple epochs. Importantly, we view SAMI as a complementary approach to other alignment fine-tuning methods, not a replacement. SAMI's goal is to enhance a model's steerability by amplifying the connection between a set of guiding principles and the responses that realize them, and it can in principle be applied at later stages in post-training, e.g., after DPO or instruction finetuning.
- As mentioned above, we have not yet evaluated SAMI on other domains. This is an important limitation that we will carefully address in our limitations section. An important aspect to consider is that training on summarization and evaluating on mathematical reasoning is unlikely to benefit a model. Instead, we anticipate that training on a variety of domains should help the model generalize. For example, jointly training on code generation, mathematical reasoning, summarization, and other domains should increase generalization performance. We note that this limitation is not specific to SAMI but a data limitation that should apply to finetuning/alignment methods more generally.
- To clarify, we regularize by training the initial (i.e., original base) model at each iteration, using data generated from each intermediate model. This approach is standard (see Zelikman et al., 2022; Andukuri et al., 2024) and prevents overfitting. We only train on a small number of examples using a low learning rate to avoid substantial changes to the initial model. The reason for using regularization is that, as with other alignment approaches like RLHF and DPO which require a reference model and KL divergence, a model might exploit the objective to obtain more reward. In our case, this could mean pushing the log probabilities to an identity matrix to maximize mutual information. Thus, regularization does not limit the model's ability to learn new behaviors but instead prevents it from "forgetting" desirable behaviors due to reward overoptimization.
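The growing data schedule described in the first bullet above might look like the following sketch. The `start` and `step` constants are our illustrative assumptions; only the 12-batch cap and batch size of 128 are taken from the text:

```python
def batches_at_iteration(t, start=4, step=4, cap=12, batch_size=128):
    """Illustrative data schedule for iterated finetuning: begin with
    `start` batches at iteration t = 0 and add `step` batches per
    iteration, capped at `cap` batches (cap * batch_size = 1,536
    examples, matching the numbers quoted above). The growth constants
    start/step are assumptions for illustration only.
    Returns (number of batches, number of training examples)."""
    n_batches = min(start + step * t, cap)
    return n_batches, n_batches * batch_size
```

Under these assumed constants, iterations 0-2 would train on 4, 8, and 12 batches respectively, never exceeding 1,536 examples per iteration.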
References:
- Zelikman, E., Wu, Y., Mu, J., & Goodman, N. (2022). Star: Bootstrapping reasoning with reasoning. Advances in Neural Information Processing Systems, 35, 15476-15488.
- Andukuri, C., Fränken, J. P., Gerstenberg, T., & Goodman, N. D. (2024). Star-gate: Teaching language models to ask clarifying questions. Conference on Language Modeling Research.
- Rafailov, R., Sharma, A., Mitchell, E., Manning, C. D., Ermon, S., & Finn, C. (2023). Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36.
---
Rebuttal Comment 1.1:
Comment: Thanks the authors for their response. After reading the response, I think my current score is appropriate. | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for their positive evaluation of our work and their helpful suggestions for revising our paper.
We have included length-corrected dialogue win rates based on logistic regression (as requested by reviewer eqUp) in the attached .pdf.
Point-by-point responses to each reviewer are provided below.
Pdf: /pdf/3da17cf62287b5547bc0d16e403a62ead140b5c6.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Safe and Efficient: A Primal-Dual Method for Offline Convex CMDPs under Partial Data Coverage | Accept (poster) | Summary: This paper formulates offline convex MDPs under safety constraints and proposes a linear programming-based primal-dual algorithm for solving it. The authors make a partial data coverage assumption yet achieve a sample complexity of $\mathcal{O}(\frac{1}{(1-\gamma)\sqrt{n}})$ while the current SOTA is $\mathcal{O}(\frac{1}{(1-\gamma)^2\sqrt{n}})$. This paper also conducted the empirical evaluation and showed that the proposed algorithm achieves better performance in terms of reward and safety.
Strengths: - Offline convex CMDPs are promising formulation that covers wide-ranging problems including safe imitation learning or standard offline CMDPs.
- The theoretical results are strong. The sample complexity is improved by $1-\gamma$ compared to the SOTA, which is an important contribution to the offline safe RL community. The assumptions the authors make are 1) concentrability of the optimal policy, 2) realizability, 3) completeness, 4) Boundness of $\mathcal{W}$ and $\mathcal{X}$, and 5) Lipschitz continuity of the $f$ and $g$. Though the number of assumptions is large and each assumption is strong, all of them are standard assumptions in previous work and we understand that it is almost impossible to prove sample complexity without them.
- While this paper is highly theoretical, the authors also provide empirical results. I think it is nice to evaluate their algorithm in two settings: safe imitation learning and offline safe RL.
Weaknesses: - While I admit that the main contribution of this paper is theory, I consider that the empirical experiments are not fully conducted. Both environments are toy problems, and there is no baseline method in 5.1. In 5.2, the benchmark task is the frozen lake, which is a much easier task than previous work tried to solve. I know COptiDICE is a well-known baseline algorithm, but there are many more algorithms that perform better than COptiDICE.
- The authors say "We include the limitations in our assumptions", but I do not think the assumptions are the only limitations of this paper. I would recommend that the authors discuss limitations such as scalability, empirical performance in more complicated tasks, etc.
**Minor Comments and Typos**
- Line 45: The following sentence seems weird to me. Should this be "We formulate ..."?
- "We for the first study the offline."
- Figure 3: (a), (b), ..., (d) do not exist in Figure 3 while they are mentioned in the caption.
Technical Quality: 3
Clarity: 3
Questions for Authors: - In lines 229 - 238, the authors claim as follows. However, the empirical results are provided for grid-world environments. The size of the grid world is rather small (i.e., $8 \times 8$). Could you tell me why the authors used such small environments? I personally think that the following claim is not fully supported.
> Besides, our algorithm is appropriate in the scenario with large-scale state-action space
- I think I could follow the theorem and proof, but could you tell me what enables the authors to achieve the SOTA sample complexity $\mathcal{O}(\frac{1}{(1-\gamma)\sqrt{n}})$? What is the biggest reason why the author could improve the current SOTA by a factor of $1/(1-\gamma)$?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Though the authors discuss the limitations regarding assumptions, I think there are other limitations such as the applicability to more complicated tasks with continuous state-action spaces, computational complexity, etc.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive feedback on our paper! We appreciate your support and comments. We'd like to respond to the major comments in the following.
**The experiments are toy examples and have limitations when extending to complicated continuous state-action space.**: As our paper is primarily on the theory side, the experiments are used to justify the theoretical results, including the data coverage, function approximation assumption, and the sample complexity of our algorithm. According to the comprehensive experiments in [R1] and [R2], CoptiDICE is the most stable and effective algorithm, so we consider CoptiDICE as the baseline in our submission.
But, following the reviewer's suggestions, we conduct a set of experiments in the challenging continuous environment--SafetyGym--with comprehensive baselines (e.g., CPQ in [R3], PDCA in [R1], CoptiDICE in [R2], and BEAR-Lagrangian in [R4], [R5]). Note that only PDCA and our algorithm provide theoretical results. In these experiments, to deal with the continuous state-action space, we use a fully connected single-hidden-layer neural network of width 128 to represent $w$.
We summarize the evaluation results in the following table. All the rewards and costs are normalized. The cost threshold is 1. Each value is averaged over 20 evaluation episodes and 3 random seeds.
|**Task**|**COptiDICE**|**CPQ**|**BEAR-Lag**|**PDCA**|**Ours**|
|-|-|-|-|-|-|
|**AntRun**|**[0.6, 0.94]**|[0.03, 0.02]|[0.15, 0.73]|[0.28, 0.93]|**[0.6, 0.01]**|
|**BallRun**|[0.59, 3.52]|[0.22, 1.27]|[-0.47, 5.03]|[0.55, 3.38]|**[0.24, 0.0]**|
|**BallCircle**|[0.70, 2.61]|**[0.64, 0.76]**|[0.86, 3.09]|[0.63, 2.29]|[0.39, 0.93]|
|**CarPush1**|[0.23, 0.5]|[-0.03, 0.95]|**[0.21, 0.54]**|[0.17, 0.41]|[0.20, 0.4]|
|**CarRun**|[0.87, 0.0]|[0.95, 1.79]|[0.68, 7.78]|[0.85, 0.0]|**[0.90, 0.0]**|
Each cell reports **[Reward, Cost]**.
The results show our algorithm is the only one to guarantee safety across all environments. In AntRun, BallRun, and CarRun tasks, our algorithm can achieve the highest reward and small cost among all baseline algorithms.
Note our algorithm is very flexible to incorporate more sophisticated function approximations (e.g., deeper and more advanced neural networks) to potentially achieve better empirical performance.
**Question about SOTA sample complexity**: We think the most important reason we can achieve the SOTA sample complexity is that our algorithm is direct: it measures the distance between program (5)-(7) and the approximate program (8)-(10). In our proof, the discount factor $\gamma$ only appears in Lemma 2, which bounds the distance between the reward and the constraint violation. Most previous works are more involved than ours; for example, [R1] has to use two oracles in their algorithm, introducing an additional factor of $(1-\gamma)^{-1}$, which makes their result weaker than ours.
**Minor comments and Typos**: Thank you for your careful reading; we will fix these in our paper.
---
Rebuttal 2:
Title: References
Comment: [R1]: Kihyuk Hong, Yuhang Li, and Ambuj Tewari. "A primal-dual-critic algorithm for offline constrained
reinforcement learning." In International Conference on Artificial Intelligence and Statistics, pages
280–288. PMLR, 2024.
[R2]: Jongmin Lee, Cosmin Paduraru, Daniel J Mankowitz, Nicolas Heess, Doina Precup, Kee-Eung Kim,
and Arthur Guez. "Coptidice: Offline constrained reinforcement learning via stationary distribution
correction estimation." In International Conference on Learning Representations, 2021.
[R3]: Haoran Xu, Xianyuan Zhan, and Xiangyu Zhu. "Constraints penalized q-learning for safe offline
reinforcement learning." In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36,
pages 8753–8760, 2022.
[R4]: Aviral Kumar, Justin Fu, Matthew Soh, George Tucker, and Sergey Levine. "Stabilizing off-policy
q-learning via bootstrapping error reduction." Advances in neural information processing systems,
32, 2019.
[R5]: Adam Stooke, Joshua Achiam, and Pieter Abbeel. "Responsive safety in reinforcement learning
by pid lagrangian methods." In International Conference on Machine Learning, pages 9133–9143.
PMLR, 2020.
---
Rebuttal Comment 2.1:
Title: Responses
Comment: Thank you for the clarification and additional experiments. The concerns I had at the time of the initial review are resolved. After reading other reviews and authors' rebuttals, I still recommend acceptance of this paper. That said, I think that the impact of this paper would be moderate-to-high, so I keep the original score of 6.
> 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
---
Reply to Comment 2.1.1:
Comment: Thanks a lot for your precious time in helping us improve our paper. Much appreciated! Please let us know if you have any follow-up questions. We will be happy to answer them. | Summary: This paper investigates batch RL with safety constraints and function approximation, which is a question of both theoretical and practical importance.
Strengths: The assumptions considered in this paper are less restrictive compared to previous works. In particular, most previous works consider linear objective, and this paper relaxes linearity to convexity, which makes the results apply to a broader class of learning problems (e.g. batch RL with entropy regularization). It is also good to see the experimental results.
Weaknesses: (1) The presentation is a bit messy. The class X is not well-explained. The Lipschitz constant L that appears in Theorem 1 is not defined in the main text. Is L bounded by L_f, L_g and other parameters?
(2) There is an unstated assumption: the convergence of the proposed algorithm actually relies on the convexity of W (Lemma 8).
(3) The upper bounds are stated in terms of log|W|, the log-cardinality of the function class W. However, it may be problematic to assume W is finite: if W is both convex and finite, then W must be a singleton, and the results are vacuous. Therefore, the upper bounds should instead be stated in terms of the log covering number of W.
Technical Quality: 2
Clarity: 2
Questions for Authors: Q1. Is there a relationship between the proposed primal-dual update rules and the actor-critic algorithms?
Q2. In assumption 4, it is required that all functions in W have a uniform upper bound B_w, which implies $B_w\geq C_\pi^\star $. It would be better to explicitly state that it requires prior knowledge on the concentrability constant.
Q3. The class X does not appear in the algorithm. Is it introduced purely for the sake of analysis?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful comments. We would like to address your concerns point by point.
**The class $\mathcal X$ is not well-explained**: In Assumption 3, the operator $\phi(w)$ (i.e., $x$) computes the $l_1$ norm of the constraint function $Kw - (1-\gamma)\mu_0$. It assumes that for all $w \in \mathcal W$, the $l_1$ norm of the constraint function exists and can thus be used to calculate the validity constraint violation (also called the Bellman error). Only when the constraint violation is a computable quantity can we relax the equality constraint (6) and obtain the approximate program (8)-(10). So the class $\mathcal X$ is used to depict the validity constraint violation and simplify our analysis. We will add this explanation to our paper.
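As a toy illustration of this validity constraint violation (a tabular sketch with our own notation conventions, not the paper's implementation), the quantity $\|Kw - (1-\gamma)\mu_0\|_1$ for a candidate occupancy measure can be computed as:

```python
import numpy as np

def flow_violation(d, P, mu0, gamma):
    """Toy l1 validity (Bellman flow) constraint violation for a
    tabular MDP.  d: (S, A) candidate occupancy measure;
    P: (S, A, S) transition kernel P[s, a, s']; mu0: (S,) initial
    state distribution.  Returns
        || sum_a d(., a) - gamma * P^T d - (1 - gamma) * mu0 ||_1,
    which is zero iff d satisfies the Bellman flow constraints."""
    inflow = np.einsum('sap,sa->p', P, d)   # sum_{s,a} P(s'|s,a) d(s,a)
    outflow = d.sum(axis=1)                 # sum_a d(s, a)
    return np.abs(outflow - gamma * inflow - (1.0 - gamma) * mu0).sum()

# Sanity check: the exact discounted occupancy measure of a random
# policy has zero violation, while scaling it breaks the constraint.
rng = np.random.default_rng(0)
S, A, gamma = 4, 2, 0.9
P = rng.random((S, A, S)); P /= P.sum(axis=2, keepdims=True)
mu0 = rng.random(S); mu0 /= mu0.sum()
pi = rng.random((S, A)); pi /= pi.sum(axis=1, keepdims=True)
P_pi = np.einsum('sa,sap->sp', pi, P)       # state-to-state kernel under pi
d_state = (1 - gamma) * np.linalg.solve(np.eye(S) - gamma * P_pi.T, mu0)
d = d_state[:, None] * pi                   # occupancy measure of pi
```

Doubling a valid occupancy measure leaves a residual of exactly $(1-\gamma)\mu_0$, so its violation equals $1-\gamma$, which makes the quantity easy to sanity-check.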
**The convergence of the proposed algorithm relies on the convexity of $\mathcal W$**: Thank you for your detailed discussion; we will add this to our revision. In fact, for program (8)-(10) to be computationally tractable, the function class $\mathcal W$ is supposed to be convex. When $\mathcal W$ is not convex, convexifying the function class is a reasonable and common operation in offline safe RL [R1, R2]. Even if $w$ is parameterized by a neural network, as studied in [R3], an over-parameterized network has almost-convex properties that allow stochastic gradient descent (SGD) to find global minima of the training objective. Thus, if the neural network in our algorithm is over-parameterized, it will also enjoy this convergence property.
**Question about the log-cardinality of the function class $\mathcal W$**: In our paper, we do not assume the function class $\mathcal W$ is finite. When $\mathcal W$ is a continuous set, the cardinality represents the covering number or the number of extreme points of the function class [R4], and it does not affect our results.
**Relationship between the proposed primal-dual update rules and the actor-critic algorithm**: We think our algorithm can be viewed from the actor-critic perspective in some sense. For example, as studied in [R5, R6, R7], in the LP formulation of a CMDP, the Lagrange multipliers of the Bellman flow constraints are value functions, because the value function is the dual variable of the occupancy measure in the LP formulation. So the Lagrange multiplier $\lambda$ in our algorithm corresponds to the critic in the actor-critic algorithm, while $w$ corresponds to the actor, since $w$ equals the occupancy measure in some sense and represents the policy in our algorithm.
**The Lipschitz constant $L$ and the prior knowledge on the concentrability constant**: The Lipschitz constant $L$ is defined in the appendix; it is the upper bound of $L_f$ and $L_g$. And $B_w$ indeed requires prior knowledge of the concentrability constant. Thank you for your detailed advice; we will add these to our paper.
---
Rebuttal 2:
Title: References
Comment: [R1]: Hoang Le, Cameron Voloshin, and Yisong Yue. "Batch policy learning under constraints." In International Conference on Machine Learning, pages 3703–3712. PMLR, 2019.
[R2]: Kihyuk Hong, Yuhang Li, and Ambuj Tewari. "A primal-dual-critic algorithm for offline constrained reinforcement learning." In International Conference on Artificial Intelligence and Statistics, pages 280–288. PMLR, 2024.
[R3]: Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. "A convergence theory for deep learning via over-parameterization. In International conference on machine learning." pages 242–252. PMLR, 2019.
[R4]: Asuman E Ozdaglar, Sarath Pattathil, Jiawei Zhang, and Kaiqing Zhang. "Revisiting the linear-programming framework for offline rl with general function approximation." In International Conference on Machine Learning, pages 26769–26791. PMLR, 2023.
[R5]: Ofir Nachum, Yinlam Chow, Bo Dai, and Lihong Li. "Dualdice: Behavior-agnostic estimation of discounted stationary distribution corrections." Advances in neural information processing systems, 32, 2019.
[R6]: Jongmin Lee, Wonseok Jeon, Byungjun Lee, Joelle Pineau, and Kee-Eung Kim. "Optidice: Offline policy optimization via stationary distribution correction estimation." In International Conference on Machine Learning, pages 6120–6130. PMLR, 2021.
[R7]: Jongmin Lee, Cosmin Paduraru, Daniel J Mankowitz, Nicolas Heess, Doina Precup, Kee-Eung Kim, and Arthur Guez. "Coptidice: Offline constrained reinforcement learning via stationary distribution correction estimation." In International Conference on Learning Representations, 2021.
---
Rebuttal 3:
Comment: Thank you for your response. Some comments:
> However, when W is not convex, the operation of convexification on function class W is reasonable and common in offline safe RL
While the algorithm can operate on the convexified function class, the log covering number of the convexified function class can be much larger than the log covering number of W. Therefore, when you compare your results with existing works, it is necessary to highlight the necessity of convexity.
> In our paper, we do not assume the function class W is finite
I don't think this is an honest claim, given that your current proof of Lemma 5 directly uses the union bound over W. You don't even mention how cardinality is defined for continuous W: The word "covering" does not appear in the paper. Further, if it is indeed defined as the covering number, it has to depend on the covering radius, which is also not mentioned.
Of course, this is a relatively minor issue. However, you shall admit that it is a mistake, instead of denying it.
---
Rebuttal Comment 3.1:
Comment: We greatly appreciate your comments on the function class $\mathcal W$ and would like to respond to them as follows.
**When comparing our results with existing works, it is necessary to highlight the necessity of convexity:** We will highlight our results based on the convex or convexified function class $\mathcal W$ and clarify that this is consistent with our most related work [7, 14, 17] (in Table 1) as they also require a similar convex property (either a tabular setting in [7] or convexified policy class in [14, 17]).
**Elaboration on the function class $\mathcal W$:** We apologize for any misunderstandings and for any unintentional impression of dishonesty regarding our claim on the function class $\mathcal W$ due to the missing definition and statement. We would certainly clarify the definition of $\mathcal W$ and $|\mathcal W|$ (e.g., emphasize the continuous property and the covering radius and number).
We thank the reviewer once again for the detailed comments, which have definitely helped improve the quality of our paper. We sincerely hope our response addresses your major concerns and that you will consider reevaluating our work. Please let us know if you have any further comments, and we will do our best to address them. | Summary: This paper proposes a novel linear programming based primal-dual algorithm for convex MDPs which incorporates “uncertainty” parameters to improve data efficiency, while requiring only partial data coverage assumption. The authors provide theoretical results achieve a sample complexity of $O(1/((1-\gamma)\sqrt{n}))$ under general function approximation, improving the current state-of-the-art by a factor of $1/(1 − \gamma)$, where $n$ is the number of data samples in an offline dataset, and $\gamma$ is the discount factor. The authors also run experiments to validate their theoretical findings, which demonstrate the practical efficacy of their approach in achieving improved safety and learning efficiency in safe offline settings.
Strengths: 1. The studied problem, i.e., safe offline RL in convex MDPs, is well-motivated and can be applied to autonomous driving, robotics, etc.
2. The authors design a novel linear programming based primal-dual algorithm for convex MDPs under only the partial coverage assumption, instead of the full coverage assumption. The authors also provide sample complexity for this algorithm, which improves the current result by a factor of $1/(1 − \gamma)$.
3. Empirical evaluations are also presented to validate the practical efficacy of the proposed algorithm.
Weaknesses: 1. Can the authors give more explanations on Assumptions 2-4. Why do you introduce the set $\mathcal{W}$ and $\mathcal{X}$. Does the algorithm need to know these two sets in advance? If yes, what does it mean in practice?
2. Does the algorithm need to know the behavior policy $\mu(a|s)$ in advance (in Eq. (12))? This is not a practical assumption.
3. The authors should give more comparison to the existing results for offline (linear) constrained RL, since (linear) constrained MDPs is an important example of convex MDPs.
4. What are the technical challenges and novelty in the offline convex MDP (RL) problem, compared to the existing online convex MDP (RL) works?
Technical Quality: 3
Clarity: 2
Questions for Authors: Please see the weaknesses above.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Please see the weaknesses above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive feedback and detailed comments on our paper. We will respond to the major concern in the following.
**Need more explanations on Assumptions 2-4 and the reason for introducing the sets $\mathcal W$ and $\mathcal X$.**: Assumption 2 assumes the optimal policy $\pi^*$ $(w^*)$ is included in the function class $\mathcal W$. It means (5)--(7) (with $w \in \mathcal W$) is a proper baseline, as it includes the optimal policy. Assumption 3 is a completeness-type assumption. The operator $\phi(w)$ or $x$ aims to calculate the $l_1$ norm of the constraint function $Kw - (1-\gamma)\mu_0$. It assumes that for all $w \in \mathcal W$, the $l_1$ norm of the constraint function exists and thus can be used to calculate the constraint violation (also called the Bellman error). Only when the constraint violation is a computable quantity can we relax the equality constraint (6) and obtain the approximate program (8)-(10). Assumption 4 is a standard boundedness assumption in offline RL on the function class $\mathcal W$, since we do not want the variable $w$ to be unbounded.
The introduction of the sets $\mathcal W$ and $\mathcal X$ helps our analysis. In our algorithm, we want to measure the distance between program (5)-(7) and the approximate program (8)-(10). If we assume the optimal solutions to programs (5)-(7) and (8)-(10) lie in the same function class $\mathcal W$, the distance between the two programs can be measured directly within $\mathcal W$. Moreover, $x \in \mathcal X$ is used to calculate the $l_1$ norm of the constraint function and further measure the constraint violation, as we state above. In fact, we do not need to know the sets $\mathcal W$ and $\mathcal X$ in advance, since they are only used for the sake of analysis.
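For intuition, the constraint-violation quantity described above, the $l_1$ norm of $Kw - (1-\gamma)\mu_0$, can be computed as in the toy tabular sketch below. The operator `K`, variable `w`, and initial distribution `mu0` are illustrative placeholders, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 0.9
n_states, n_state_actions = 3, 6

K = rng.random((n_states, n_state_actions))   # placeholder linear operator
w = rng.random(n_state_actions)               # candidate occupancy-measure variable
mu0 = np.ones(n_states) / n_states            # uniform initial state distribution

# l1 norm of the constraint function K w - (1 - gamma) mu0,
# i.e., the constraint violation ("Bellman error") used to relax constraint (6)
violation = np.abs(K @ w - (1.0 - gamma) * mu0).sum()
```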
Thank you for your advice, we will further explain this in our paper.
**Does the algorithm need to know the behavior policy in advance?**: The algorithm does not need to know the behavior policy $\mu(a | s)$ in advance. For ease of exposition, we assume the behavior policy is known. When it is unknown in practice, behavior cloning is an effective approach to extract the behavior policy from the dataset. Specifically, we can estimate the behavior policy $\hat{\pi}$ by $\hat{\pi} (a | s) = \frac{n(s,a)}{n(s)}$, where $n(s,a)$ is the number of $(s,a)$ state-action pairs in the offline dataset. It can be shown that the gap between the learned policy $\hat{\pi}$ and the true behavior policy $\pi_\mu$ is upper bounded by $\min ( 1, |\mathcal S| / n )$ [R1], which does not affect our sample complexity. Furthermore, the experiments in our paper do not assume knowledge of the behavior policy and use behavior cloning to estimate it, where our algorithm still achieves strong results.
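As a minimal sketch of the count-based estimator $\hat{\pi}(a|s) = n(s,a)/n(s)$ described above (the dataset and function name here are a toy illustration, not the paper's implementation):

```python
from collections import Counter

def estimate_behavior_policy(dataset):
    """Count-based estimate pi_hat(a|s) = n(s, a) / n(s) from (state, action) pairs."""
    sa_counts = Counter(dataset)                  # n(s, a)
    s_counts = Counter(s for s, _ in dataset)     # n(s)
    return {(s, a): c / s_counts[s] for (s, a), c in sa_counts.items()}

# Toy offline dataset of (state, action) pairs
data = [(0, "left"), (0, "left"), (0, "right"), (1, "left")]
pi_hat = estimate_behavior_policy(data)
# pi_hat[(0, "left")] -> 2/3, pi_hat[(0, "right")] -> 1/3, pi_hat[(1, "left")] -> 1.0
```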
**More comparison to the existing results for offline (linear) constrained RL**: In theory, our algorithm achieves state-of-the-art results with general function approximation under partial data coverage. Among previous works, [R2] proposes a meta-algorithm compatible with general function approximation but achieves $\mathcal O \left( \frac{1}{(1-\gamma)^5 \sqrt{n}}\right)$ sample complexity under the strong Bellman completeness assumption. [R3] also focuses on occupancy measures, but their algorithm applies only to discrete state-action spaces. The most related work [R4] approaches the problem from the perspective of an actor-critic algorithm, with a sample complexity of $\mathcal O \left( \frac{1}{(1-\gamma)^2 \sqrt{n}}\right)$ under a stronger full data coverage assumption.
In experiments, following the reviewer's suggestions, we conducted a set of experiments in the challenging continuous environment SafetyGym with comprehensive baselines (e.g., CPQ in [R5], PDCA in [R4], CoptiDICE in [R6], and BEAR-Lagrangian in [R7], [R8]). Note that only PDCA and our algorithm provide theoretical results. In these experiments, to deal with the continuous state-action space, we use a fully connected single-hidden-layer neural network of width 128 to represent $w$.
We summarize the evaluation results in the following table. All the rewards and costs are normalized. The cost threshold is 1. Each value is averaged over 20 evaluation episodes and 3 random seeds.
|**Task**|**COptiDICE**|**CPQ**|**BEAR-Lag**|**PDCA**|**Ours**|
|-|-|-|-|-|-|
||**[Reward, Cost]**|**[Reward, Cost]**|**[Reward, Cost]**|**[Reward, Cost]**|**[Reward, Cost]**|
|**AntRun**|**[0.6, 0.94]**|[0.03, 0.02]|[0.15, 0.73]|[0.28, 0.93]|**[0.6, 0.01]**|
|**BallRun**|[0.59, 3.52]|[0.22, 1.27]|[-0.47, 5.03]|[0.55, 3.38]|**[0.24, 0.0]**|
|**BallCircle**|[0.70, 2.61]|**[0.64, 0.76]**|[0.86, 3.09]|[0.63, 2.29]|[0.39, 0.93]|
|**CarPush1**|[0.23, 0.5]|[-0.03, 0.95]|**[0.21, 0.54]**|[0.17, 0.41]|[0.20, 0.4]|
|**CarRun**|[0.87, 0.0]|[0.95, 1.79]|[0.68, 7.78]|[0.85, 0.0]|**[0.90, 0.0]**|
The results show our algorithm is the only one that guarantees safety across all environments. In the AntRun, BallRun, and CarRun tasks, our algorithm achieves the highest reward with a small cost among all baseline algorithms.
Note that our algorithm is flexible and can incorporate more sophisticated function approximators (e.g., deeper and more advanced neural networks) to potentially achieve better empirical performance.
**The challenge between online convex MDPs and offline convex MDPs.**: We think the main challenge between online and offline convex MDPs lies in the data, including data quality and data assumptions. For example, [R9] proposes a method that uses standard RL algorithms to solve convex MDPs. However, in the offline setting, each RL algorithm has its own data assumption. It is challenging to satisfy all the data assumptions of these algorithms, so many online convex MDP algorithms cannot be applied to the offline setting directly.
---
Rebuttal 2:
Title: References
Comment: [R1]: Aviral Kumar, Joey Hong, Anikait Singh, and Sergey Levine. "Should i run offline reinforcement learning or behavioral cloning?" In International Conference on Learning Representations, 2021.
[R2]: Hoang Le, Cameron Voloshin, and Yisong Yue. "Batch policy learning under constraints." In International Conference on Machine Learning, pages 3703–3712. PMLR, 2019.
[R3]: Fan Chen, Junyu Zhang, and Zaiwen Wen. "A near-optimal primal-dual method for off-policy learning in cmdp." Advances in Neural Information Processing Systems, 35:10521–10532, 2022.
[R4]: Kihyuk Hong, Yuhang Li, and Ambuj Tewari. "A primal-dual-critic algorithm for offline constrained reinforcement learning." In International Conference on Artificial Intelligence and Statistics, pages 280–288. PMLR, 2024.
[R5]: Haoran Xu, Xianyuan Zhan, and Xiangyu Zhu. "Constraints penalized q-learning for safe offline reinforcement learning." In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 8753–8760, 2022.
[R6]: Jongmin Lee, Cosmin Paduraru, Daniel J Mankowitz, Nicolas Heess, Doina Precup, Kee-Eung Kim, and Arthur Guez. "Coptidice: Offline constrained reinforcement learning via stationary distribution correction estimation." In International Conference on Learning Representations, 2021.
[R7]: Aviral Kumar, Justin Fu, Matthew Soh, George Tucker, and Sergey Levine. "Stabilizing off-policy q-learning via bootstrapping error reduction." Advances in neural information processing systems, 32, 2019.
[R8]: Adam Stooke, Joshua Achiam, and Pieter Abbeel. "Responsive safety in reinforcement learning by pid lagrangian methods." In International Conference on Machine Learning, pages 9133–9143. PMLR, 2020.
[R9]: Tom Zahavy, Brendan O’Donoghue, Guillaume Desjardins, and Satinder Singh. "Reward is enough for convex mdps." Advances in Neural Information Processing Systems, 34:25746–25759, 2021.
---
Rebuttal Comment 2.1:
Title: I increased my score to 6
Comment: I thank the authors for their response and the added empirical results. I increased my score to 6.
---
Reply to Comment 2.1.1:
Comment: We sincerely thank the reviewer for considering our response and revising the score. Much appreciated! Please let us know if you have any follow-up questions. We will be happy to answer them.
Strengths: This paper studies a very interesting and important topic in the RL community. The writing is clear and the results are well presented.
Weaknesses: My main concern with this paper is the limited technical novelty of the contributions. The proposed formulation (8)-(10) appears to be a direct extension of the setup in prior work [26], where the linear reward and cost functions are generalized to convex functions. The algorithm itself is a standard primal-dual method for solving convex optimization problems, rather than a new technical innovation.
Furthermore, the experimental evaluation is quite limited in scope. The authors only compare their approach against a small number of baseline methods, and the experiments are run on a relatively small set of problem instances. This raises questions about the generalizability and practical significance of the empirical results.
While the paper tackles an important and relevant problem in offline safe reinforcement learning, the technical contributions seem incremental given the prior work in this area.
Technical Quality: 3
Clarity: 3
Questions for Authors: -- line 181, how large the dataset has to be in order for (8)-(10) to be close to (5)-(7)?
-- When w is parametrized by a neural network, does the proposed formulation satisfy all the assumptions made?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: see weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate the reviewers' detailed comments. Please find our point-by-point response to your questions below.
**Concern of the novelty of the contributions**: We would like to emphasize that we are the first to study offline convex CMDPs and provide state-of-the-art theoretical results under mild assumptions.
There are only a few theoretical studies on offline (linear) CMDPs [R1, R2, R3]. However, these papers either need strong assumptions (full data coverage, Bellman completeness) or cannot be combined with general function approximation settings. For this purpose, we develop an efficient algorithm that needs only partial data coverage while achieving state-of-the-art theoretical results with general function approximation. The weaker data coverage assumption, the ability to handle large state-action spaces, and the best-known theoretical results are important aspects that were not addressed in previous works. We would also like to emphasize that the partial data coverage assumption is extremely important in the CMDP setting. A full coverage assumption would mean that every state and action, including the most dangerous ones, should be covered by the dataset, which is unrealistic and undesirable.
**The proposed formulation appears to be a direct extension of the setup in prior work [26]**: The prior work [26] does not consider a cost function in its formulation. Note that adding the cost function makes the problem more challenging and significantly different from the unconstrained setting. For example, the optimal policy is no longer a greedy policy, as a trade-off needs to be made between rewards and costs; this is why CMDP problems are usually much more difficult. In addition, when safety constraints are considered, the gap between program (5)-(7) and the approximate program (8)-(10) changes and was unknown in previous work. Moreover, extending program (5)-(7) to program (8)-(10) directly leads to suboptimal results. Finally, the optimal rate of sample complexity was still unknown when safety constraints are considered. We focus on answering these questions in our paper.
**The experimental evaluation is limited in the paper**: As our paper is primarily on the theory side, the experiments are used to justify the theoretical results, including the data coverage, function approximation assumption, and the result of sample complexity of our algorithm. According to the comprehensive experiments in [R3] and [R4], CoptiDICE is the most stable and effective algorithm, so we consider CoptiDICE as the baseline in our submission.
Following the reviewer's suggestions, however, we conducted a set of experiments in the popular, challenging, continuous environment SafetyGym with comprehensive baselines (e.g., CPQ in [R5], PDCA in [R3], CoptiDICE in [R4], and BEAR-Lagrangian in [R6], [R7]). Note that only PDCA and our algorithm provide theoretical results. In these experiments, to deal with the continuous state-action space, we use a fully connected single-hidden-layer neural network of width 128 to represent $w$.
We summarize the evaluation results in the following table. All the rewards and costs are normalized. The cost threshold is 1. Each value is averaged over 20 evaluation episodes and 3 random seeds.
| **Task** | **COptiDICE** | **CPQ** | **BEAR-Lag** | **PDCA** | **Ours** |
|-|-|-|-|-|-|
| | **[Reward, Cost]** | **[Reward, Cost]** | **[Reward, Cost]** | **[Reward, Cost]** | **[Reward, Cost]** |
| **AntRun** | **[0.6, 0.94]** | [0.03, 0.02] | [0.15, 0.73] | [0.28, 0.93] | **[0.6, 0.01]** |
| **BallRun** | [0.59, 3.52] | [0.22, 1.27] | [-0.47, 5.03] | [0.55, 3.38] | **[0.24, 0.0]** |
| **BallCircle** | [0.70, 2.61] | **[0.64, 0.76]** | [0.86, 3.09] | [0.63, 2.29] | [0.39, 0.93] |
| **CarPush1** | [0.23, 0.5] | [-0.03, 0.95] | **[0.21, 0.54]** | [0.17, 0.41] | [0.20, 0.4] |
| **CarRun** | [0.87, 0.0] | [0.95, 1.79] | [0.68, 7.78] | [0.85, 0.0] | **[0.90, 0.0]** |
The results show our algorithm is the only one that guarantees safety across all environments. In the AntRun, BallRun, and CarRun tasks, our algorithm achieves the highest reward with a small cost among all baseline algorithms.
Note that our algorithm is flexible and can incorporate more sophisticated function approximators (e.g., deeper and more advanced neural networks) to potentially achieve better empirical performance.
**How large the dataset has to be in order for (8)--(10) to be close to (5)--(7)?**: Our theoretical results in Theorem 1 show that (8)--(10) "converges to" (5)--(7) at the rate $\mathcal O(1/\sqrt{n})$, where $n$ is the size of the dataset.
**When w is parametrized by a neural network, does the proposed formulation satisfy all the assumptions made?**: Yes. When $w$ is parameterized by a neural network, Assumption 2 (Realizability Assumption) can be satisfied by choosing proper network parameters. In Assumption 3 (Completeness Assumption), the operator $\phi(w)$ or $x$ aims to calculate the $l_1$ norm of the constraint function $Kw - (1-\gamma)\mu_0$; the completeness assumption is satisfied if the $l_1$ norm of the constraint function exists, which holds when $w$ is a neural network. In Assumption 4 (Boundedness Assumption), if the output of the neural network $w$ is bounded, the assumption is also satisfied.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their rebuttal, and I have read it. I keep my score and remain positive about this paper.
---
Reply to Comment 1.1.1:
Comment: Thanks a lot for your precious time in helping us improve our paper. Much appreciated! Please let us know if you have any follow-up questions. We will be happy to answer them.
---
Rebuttal 2:
Title: References
Comment: [R1]: Hoang Le, Cameron Voloshin, and Yisong Yue. "Batch policy learning under constraints." In International Conference on Machine Learning, pages 3703–3712. PMLR, 2019.
[R2]: Fan Chen, Junyu Zhang, and Zaiwen Wen. "A near-optimal primal-dual method for off-policy learning in cmdp." Advances in Neural Information Processing Systems, 35:10521–10532, 2022.
[R3]: Kihyuk Hong, Yuhang Li, and Ambuj Tewari. "A primal-dual-critic algorithm for offline constrained reinforcement learning." In International Conference on Artificial Intelligence and Statistics, pages 280–288. PMLR, 2024.
[R4]: Jongmin Lee, Cosmin Paduraru, Daniel J Mankowitz, Nicolas Heess, Doina Precup, Kee-Eung Kim, and Arthur Guez. "Coptidice: Offline constrained reinforcement learning via stationary distribution correction estimation." In International Conference on Learning Representations, 2021.
[R5]: Haoran Xu, Xianyuan Zhan, and Xiangyu Zhu. "Constraints penalized q-learning for safe offline reinforcement learning." In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 8753–8760, 2022.
[R6]: Aviral Kumar, Justin Fu, Matthew Soh, George Tucker, and Sergey Levine. "Stabilizing off-policy q-learning via bootstrapping error reduction." Advances in neural information processing systems, 32, 2019.
[R7]: Adam Stooke, Joshua Achiam, and Pieter Abbeel. "Responsive safety in reinforcement learning by pid lagrangian methods." In International Conference on Machine Learning, pages 9133–9143. PMLR, 2020. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Regression under demographic parity constraints via unlabeled post-processing | Accept (poster) | Summary: This paper considers post-processing regressors to satisfy the group fairness criterion of demographic parity, in particular, under the attribute unaware setting.
1. The authors begin by analyzing the constrained optimization problem (\*) for fair post-processing, and showing in lemma 3.1 that the solution, i.e., the post-processed regressor, can be represented and parameterized by the dual variables of (\*); and provided that the base models are accurate and (\*) is solved to optimality, the resulting regressor will be optimally fair.
2. Then the authors discussed optimization and statistical aspects of (\*). Specifically, the convergence rate is analyzed, as well as the conditions for non-perfect solutions to satisfy fairness, namely that the "gradient mapping" of (\*) needs to be Lipschitz. For this reason, the authors recommended using the SGD3 algorithm for optimizing (\*).
3. The paper closes with empirical evaluations of the proposed algorithm.
Strengths: 1. To my knowledge, this is the first paper that studies fair post-processing regressors in the attribute unaware setting.
2. The primal-dual analysis that leads to the representation result in lemma 3.1 is interesting. Because the proposed procedure involves discretization, the formulation of (\*) is based on the support of the regressor's output space. This is different from somewhat similar works for the classification setting [1, 2] where the optimization problem is based on the scores of the training examples (without discretization).
3. The theoretical analysis is thorough, hence the proposed procedure is well-supported, including the choice of SGD3.
[1] https://arxiv.org/pdf/2310.05725
[2] https://arxiv.org/pdf/2405.04025
Weaknesses: 1. Regarding Lipschitzness of $F$. The reviewer feels that some discussions are curt. The authors introduced the "gradient mapping" $G_\alpha$ in eq. (8), with a hyperparameter $\alpha$. It is not mentioned: how $\alpha$ should be chosen, in theory or in practice; how exactly SGD3 controls the norm of $G_\alpha$ (lines 237-239) in the main body. At least a brief discussion should be included in the main body for the latter, since theorem 5.1 would depend on whether SGD3 can provably reduce this quantity.
2. Regarding the use of SGD3, an ablation study would have been appreciated at illustrating the importance of using SGD3 over other optimizers; is it absolutely necessary? Also, could the authors discuss potential limitations with SGD3?
3. The authors mentioned that there are several hyperparameters associated with SGD3 (line 236), and with discretization (i.e., the number of bins). But it does not seem to be discussed how these are chosen for the experiments in section 6, and whether mis-specification of these hyperparameters would impact performance.
[3] https://arxiv.org/pdf/2006.07286
Technical Quality: 3
Clarity: 3
Questions for Authors: - Does the analysis rely on properties unique to the demographic parity fairness criterion? If not, is there a path to extend to other criteria (potentially taking inspirations from [1, 2])?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: - See weaknesses.
- Minor, but the proposed procedure requires discretizing the support, which is a limitation; even though the reviewer is aware that, in practice, discretization helps with generalization hence gives better performing models compared to non-parametric methods [3].
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's careful reading and fully agree with the evaluation regarding the strengths and potential directions for future investigation. We will address the questions raised below.
*Weaknesses*
**W1:**
* Regarding lipschitzness of $F$. The reviewer feels that some discussions are curt.
**A:** We will enhance our discussion to better emphasize that the smoothness properties of $F$ are controlled by the regularization parameter $\beta$ *(Lemma 3.4)*. Our final theoretical guarantees also provide practical guidance on selecting this parameter, balancing the regularization error *(Lemma 3.3)* and ensuring the algorithm's fast convergence.
* The authors introduced the "gradient mapping" $G_\alpha$ in eq. (8), with a hyperparameter $\alpha$. It is not mentioned how $\alpha$ should be chosen, in theory or in practice.
**A:** In theory, we set $\alpha = 1/M$ and any value of $\alpha \leq 1/M$ would yield exactly the same result. However, the choice of $\alpha$ is not critical for our specific problem, as the bounds on unfairness and risk are valid for any $\alpha > 0$ (see *Lemma 3.5*, lines 203-204). In practice, there is no $\alpha$ hyperparameter in the final implementation of SGD3 with a black-box optimizer.
* It is not mentioned how exactly SGD3 controls the norm of $G_\alpha$ in the main body.
**A:** Due to space limitations, we have provided the details of this part in the *Appendix C*. We would also like to remind that any algorithm designed to minimize the expected squared norm of the gradient mapping can be used. SGD3 is just an example that we included because it is supported by convergence rates.
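Although eq. (8) is not reproduced here, the gradient mapping in question is typically the textbook projected-gradient mapping; the sketch below assumes that standard definition (the box projection, step size, and test point are illustrative, not the paper's construction):

```python
import numpy as np

def gradient_mapping(x, grad, alpha, project):
    # Standard gradient mapping: G_alpha(x) = (x - P(x - alpha * grad)) / alpha,
    # where P projects onto the feasible set. When the constraint is inactive,
    # G_alpha(x) reduces to the plain gradient.
    return (x - project(x - alpha * grad)) / alpha

# Illustrative box projection onto [-10, 10]^d
project = lambda z: np.clip(z, -10.0, 10.0)

x = np.zeros(2)
grad = np.array([1.0, 2.0])
g_map = gradient_mapping(x, grad, alpha=0.1, project=project)
# For this interior point the mapping equals the gradient itself.
```

Any algorithm that drives the expected squared norm of this mapping to zero can play the role of SGD3 here, which is why the authors note the choice is not unique.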
**W2:**
* An ablation study would have been appreciated at illustrating the importance of using SGD3 over other optimizers; is it absolutely necessary?
**A:** We conducted additional experiments to observe the behaviors of simpler algorithms. A figure that illustrates the comparison is included in the attached pdf file. In conclusion, all algorithms perform similarly in the middle to high unfairness regime, while those based on SGD3 are more stable in the low unfairness (high fairness) regime. However, since the stochastic minimization of the norm of the gradient of a convex function is a relatively niche topic, there are only a few algorithms with established convergence rates. As we aim for end-to-end guarantees, we have chosen methods based on SGD3. Nonetheless, other stochastic minimization methods could potentially perform just as well in practice.
* Also, could the authors discuss potential limitations with SGD3?
**A:** The SGD3 algorithm, as described by [Allen-Zhu (2021)](https://arxiv.org/pdf/1801.02982), serves primarily as a theoretical model that highlights a unique phenomenon in stochastic convex optimization with an unconventional criterion. Between the submission and the rebuttal, we extended and simplified the theory from [Foster et al. (2019)](https://arxiv.org/pdf/1902.04686) to be applicable to our problem. This has enabled us to provide a theoretical guarantee for a simpler algorithm that combines an SGD3-like approach with accelerated stochastic gradient descent. While this improved analysis slightly enhances our fairness and risk guarantees, it does not alter the main message of the paper.
**W3:** The authors mentioned that there are several hyperparameters associated with SGD3 (line 236), and with discretization (i.e., the number of bins). But it does not seem to be discussed how these are chosen for the experiments in section 6, and whether mis-specification of these hyperparameters would impact performance.
**A:** In practice, we normalize the target variables to the range $[-1, 1]$, so we set $B = 1$ as our default practical recommendation, which aligns with actual practice. We have found that the other hyperparameters suggested by our theory already produce good empirical results. The exact choices of these parameters are detailed in *Appendix G* (line 693), and, aside from some multiplicative constants, they closely follow the theoretical values. These will be the default settings in the package we plan to release.
*Questions*
**Q:** Does the analysis rely on properties unique to the demographic parity fairness criterion? If not, is there a path to extend to other criteria (potentially taking inspirations from [1, 2])?
**A:** This is an excellent question. As the reviewer may have noticed, the key feature that ensures everything works is the compatibility of the demographic parity constraint with discretization. This approach can be adapted to any other notion of fairness that is "friendly" to discretization. However, care must be taken when introducing new notions of fairness. The demographic parity constraint is convenient because there are always predictions that satisfy it (e.g., constants), but this may not be true for other fairness notions. Since our algorithm relies heavily on a primal-dual approach, the finiteness of the optimal dual variables can only be guaranteed if certain constraint qualification conditions are met. This is not an issue for the demographic parity constraint but could be a potential obstacle for other fairness definitions.
We appreciate the reviewer's suggestion to include those two references, and they will be added to the final version. We want to emphasize that both references heavily depend on the ability to express the form of the optimal classifier under a given fairness constraint analytically. However, this is not applicable to regression in the unawareness setup as no such formula exists. Nevertheless, combining our discretization approach with the two references could potentially lead to a sensible method, and it is indeed an interesting avenue for future research.
---
Rebuttal Comment 1.1:
Comment: The reviewer thanks the authors for the response.
Could the authors also (briefly) comment on how the number of bins is chosen (in the experiments), and how it affects performance (theoretically)?
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer's feedback. In Lemma B.1 (Appendix B, line 518), we show that discretization increases the risk by $4B/L + 1/L^2 + \log(2L+1)/\beta$. Additionally, as demonstrated in the proof of Theorem 5.1 (Appendix D, line 615), setting $L = \sqrt{T}$ achieves a risk rate of $1/\sqrt{T}$. Therefore, we use $L = \sqrt{T}$ in our experiments, resulting in $2\sqrt{T} + 1$ bins, where $T$ is the number of unlabeled samples. We also note that using more bins wouldn't improve statistical performance but would complicate the optimization due to higher dimensionality. Hence, we adhere to the theoretically recommended setting in practice. | Summary: This paper proposes an algorithm that takes in a fitted regression function and a sensitive attribute predictor and outputs a prediction function satisfying the demographic parity constraint. It designs a smooth convex objective function with discretization to solve for a prediction function with small risk and controlled violation of the demographic parity constraint. Stochastic minimization techniques are applied to solve the proposed optimization problem and recover the statistical rate $1/\sqrt{T}$.
Strengths: The proposed algorithm is supported by theoretical analysis and error bounds. The proofs are well-organized and clear to read. The algorithm deploys suitable stochastic minimization techniques to achieve statistical guarantees. Moreover, based on the experiment results, the proposed algorithm is much more computationally efficient than existing methods.
Weaknesses: Typos:
* falls withing -> falls within (page 2)
* statisitcal properties -> statistical properties (page 5)
* out approach -> our approach (page 7)
* phenomenons -> phenomena (page 7)
* outperfomce -> outperform (page 8)
Technical Quality: 4
Clarity: 3
Questions for Authors: In practice, how do we pick algorithm parameters such as L, B and \beta?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you to the reviewer for the careful reading and evaluation. We agree with the feedback and will correct the suggested typos in the revision. We will also address the question raised below.
**Q**: In practice, how do we pick algorithm parameters such as L, B and $\beta$?
**A**: In practice, we normalize the target variables to the range $[-1, 1]$, so we set $B = 1$ as our default practical recommendation, which aligns with actual practice. We have found that the other hyperparameters suggested by our theory already produce good empirical results. The exact choices of these parameters are detailed in Appendix G (line 693), and, aside from some multiplicative constants, they closely follow the theoretical values. These will be the default settings in the package we plan to release.
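As a concrete illustration of these defaults, here is a minimal sketch; `default_hyperparams` is a hypothetical helper name, combining the $B = 1$ normalization described here with the $L = \sqrt{T}$ discretization level (giving $2\sqrt{T} + 1$ bins) recommended in the earlier reply:

```python
import math

def default_hyperparams(y, T):
    """Hypothetical helper restating the stated defaults: normalize targets
    to [-1, 1] (so B = 1) and set the discretization level L = sqrt(T),
    which yields 2L + 1 bins."""
    lo, hi = min(y), max(y)
    y_norm = [2.0 * (v - lo) / (hi - lo) - 1.0 for v in y]  # targets in [-1, 1]
    B = 1.0                    # bound on the normalized targets
    L = max(1, math.isqrt(T))  # L = floor(sqrt(T)), the theoretical recommendation
    n_bins = 2 * L + 1
    return y_norm, B, L, n_bins
```

For example, with $T = 100$ unlabeled samples this gives $L = 10$ and 21 bins.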
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their elaborate rebuttal. This does address my concerns and questions. | Summary: This paper presents a post-processing algorithm designed to enforce demographic parity in regression tasks without access to sensitive attributes during inference.
Strengths: The solution is a versatile one for enforcing demographic parity, as it can be applied on top of different optimizers.
The paper provides a rigorous theoretical foundation to quantify the risk.
The fairness problem is relevant.
Weaknesses: While the algorithm presented in the paper appears promising, its evaluation is hampered by the presentation style, which tends to obscure clarity. Technical details such as the reliance on discretization and sophisticated methods for controlling the gradient norm are emphasized in the abstract, which could be streamlined to focus on broader impacts and significance instead.
The paper contains numerous remarks that disrupt the flow of discussion; for instance, Remark 2.1 seems tangential and could be relegated to an appendix or omitted entirely. Other generalizations and discussions, such as those in Remark 2.2 and the paragraph on line 304, should be consolidated and presented at the end of the paper to maintain focus. The 'abuses notation' in Remark 3.1 could be resolved by providing clear definitions. Comparisons with other literature are inconsistently integrated within the text, appearing at disparate locations such as lines 56, 228, and 334, which could be better organized to aid in comparative analysis and enhance readability.
There is excessive use of sub-titles or mini-sections. Some of these sections are notably brief (line 176, for example), where the short content under each title does not justify the need for a separate heading.
Technical Quality: 1
Clarity: 2
Questions for Authors: The paper focuses on demographic parity as a fairness metric. Are there impacts on other fairness metrics like Equalized Odds and Equal Opportunity when using the proposed algorithm? Does it improve these metrics, or could it potentially worsen them?
Regarding the risks plot in Figure 1, could you clarify its purpose given there are no comparative benchmarks provided? How should the unfairness score presented in the plot be interpreted?
Reference [1] addresses a similar topic with a minimax approach. Is this minimax result applicable or relevant to the methodology used in your paper?
In Algorithm 1, "DP" is mentioned in the name. Could you specify what "DP" stands for?
The algorithm "fairname" appears in line 305 without a prior definition. Could you explain what this term means within the context of your study?
[1] Fukuchi, Kazuto, and Jun Sakuma. "Demographic parity constrained minimax optimal regression under linear model." Advances in Neural Information Processing Systems 36 (2024).
Confidence: 3
Soundness: 1
Presentation: 2
Contribution: 2
Limitations: Not applicable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's time and effort in reviewing our work, but we respectfully disagree with the evaluation.
*Weaknesses*
The reviewer has criticized several stylistic choices in our paper. While we recognize that these choices may not align with the reviewer's preferences, it is worth noting that neither of the other two reviewers raised similar concerns. Our approach is driven by theory and is based on a carefully developed methodology rooted in mathematical reasoning, rather than common sense. Given this theoretical foundation, we believe our stylistic choices are justified and aid the reader in understanding our thought process. Below, we address the specific stylistic comments raised:
1. *"Remark 2.1"*. We disagree with the assessment that *Remark 2.1* is tangential. On the contrary, clearly defining the joint distribution of every random variable involved is crucial for understanding the problem. Including *Remark 2.1* helps prevent potential misconceptions, such as assuming that $\hat{Y} = Y$ is a valid prediction function for any distribution of $(X, S, Y)$. *Remark 2.1* explicitly clarifies that $\hat{Y} = Y$ is not a valid prediction function unless $Y$ is measurable with respect to $X$.
2. *"Remark 2.2"*. *Remark 2.2* anticipates potential questions from readers about possible extensions of our methodology. While we do not insist on keeping this remark in the main body, we believe it could be useful for some readers.
4. *"Remark 3.1"*. We do not see any issue with the abuse of notation, as it is extremely common in the mathematical sciences. Yet we find it equally important to be extremely clear when such an abuse happens, so that the reader is aware and can easily understand the logic behind our choice. The reviewer suggests providing clear definitions, but we would appreciate more specific guidance: which quantities are not clearly defined in our paper?
4. *"There is excessive use of sub-titles or mini-sections"*. This is a stylistic choice we made to clarify the purpose of each paragraph. We believe it helps in setting the context clearly. We prefer to keep this as is unless the reviewer can provide a clearly better alternative.
*Questions*
**Q1:** The paper focuses on demographic parity as a fairness metric. Are there impacts on other fairness metrics like Equalized Odds and Equal Opportunity when using the proposed algorithm? Does it improve these metrics, or could it potentially worsen them?
**A1:** Equalized Odds and Equal Opportunity typically refer to fairness conditions in binary classification, while our work addresses fairness in the context of regression. While extensions of these notions to regression are possible, our approach focuses on demographic parity. As such, it might improve, worsen, or maintain fairness under other definitions. Since we have not claimed to address all definitions of fairness simultaneously, studying the trade-offs between different fairness notions is beyond the scope of this work. However, we will include a discussion emphasizing that our algorithm is tailored to a specific fairness definition, as highlighted by the title.
**Q2:** Regarding the risks plot in *Figure 1*, could you clarify its purpose given there are no comparative benchmarks provided? How should the unfairness score presented in the plot be interpreted?
**A2:** The purpose of *Figure 1* is to illustrate the post-processing dynamics of our proposed method, not to compare it with other algorithms, which are based on different approaches. A comparison with other algorithms is shown in *Figure 2*. Additional details on the implementation are provided in *Appendix G*, and the code is available via the link provided in the paper.
**Q3:** Reference [1] addresses a similar topic with a minimax approach. Is this minimax result applicable or relevant to the methodology used in your paper?
**A3:** The paper suggested by the reviewer has little in common with our contribution. Firstly, it deals with the scenario where sensitive attributes are available for prediction (the awareness setup) and relies on an explicit form of the optimal prediction provided by [Chzhen et al. (2020b)](https://arxiv.org/pdf/2006.07286) and by [Le Gouic et al. (2020)](https://arxiv.org/pdf/2005.11720). Such an explicit form is not yet available for regression in the unawareness setup, which remains an open question in the field. Secondly, that paper focuses exclusively on linear regression and provides a minimax statistical analysis for this case. In contrast, our work presents a versatile, theoretically grounded post-processing algorithm that can be applied to any pre-trained model in the unawareness setup for regression.
**Q4:** In Algorithm 1, "DP" is mentioned in the name. Could you specify what "DP" stands for?
**A4:** DP in the name of *Algorithm 1* "DP post-processing" stands for *demographic parity*. We have indeed not introduced this term in the main body and will include it in the revision.
**Q5:** The algorithm "fairname" appears in line 305 without a prior definition. Could you explain what this term means within the context of your study?
**A5:** The name "fairname" in $\texttt{FairName}(L, T, \beta, \boldsymbol{p}, B, \hat\eta, \hat{\boldsymbol{\tau}})$ in the section on the extension to unknown $\eta$ and $\tau$ is an unfortunate typo. It should be $\texttt{DP post-processing}$ and will be corrected upon revision.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response, which has addressed most of my concerns. I believe the authors will improve the readability in the final version. I am no longer opposed to the acceptance of this paper and will adjust my score accordingly. | null | null | Rebuttal 1:
Rebuttal: We conducted an ablation study to observe the behaviors of other algorithms, as suggested by Reviewer qMMa. A figure that illustrates the comparison is included in the attached pdf file.
In conclusion, all algorithms perform similarly in the middle to high unfairness regime, while those based on SGD3 are more stable in the low unfairness (high fairness) regime. However, since the stochastic minimization of the norm of the gradient of a convex function is a relatively niche topic, there are only a few algorithms with established convergence rates. As we aim for end-to-end guarantees, we have chosen methods based on SGD3. Nonetheless, other stochastic minimization methods could potentially perform just as well in practice.
Pdf: /pdf/6dac7b87471f7028e76bb98537634bed2fd23ff3.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Interventionally Consistent Surrogates for Complex Simulation Models | Accept (poster) | Summary: In their use by practitioners, complex simulators may be run under intervention scenarios. To reduce the overall cost of running many scenarios, relying on cheaper-to-compute surrogate models is standard practice. The authors thus propose surrogates that can additionally learn these interventions' effects, relying on a causal inference representation. Specifically, the method divides the training into a parametric surrogate family and abstraction map parameters that are learned with the Kullback-Leibler divergence. An epidemiology example is provided to show the interest of learning this decomposition for surrogate consistency.
Strengths: - the paper discusses a relevant practical use case for surrogate modeling
- the concepts are clearly described under the low page limit
Weaknesses: - the discussion of the results could be improved
- the test case is perhaps a bit simple
Technical Quality: 3
Clarity: 3
Questions for Authors: - the naive way for surrogate modeling in this context (say with neural networks or random forests) would be to also give as inputs the intervention parameters corresponding to the observation. It is not completely clear to me if this is the case for the RNN in the example.
- More analysis of the performance of interventionally trained surrogates on observational data would be beneficial.
- perhaps some discussion on surrogates with causal inference would be relevant, see e.g.,
Witty, S., Takatsu, K., Jensen, D., & Mansinghka, V. (2020, November). Causal inference using Gaussian processes with structured latent confounders. In International Conference on Machine Learning (pp. 10313-10323). PMLR or Toth, C., Lorch, L., Knoll, C., Krause, A., Pernkopf, F., Peharz, R., & Von Kügelgen, J. (2022). Active bayesian causal inference. Advances in Neural Information Processing Systems, 35, 16261-16275.
- A related topic is the use of monotonicity information, e.g., on intervention effect, see e.g., Riihimäki, J., & Vehtari, A. (2010, March). Gaussian processes with monotonicity information. In Proceedings of the thirteenth international conference on artificial intelligence and statistics (pp. 645-652). JMLR Workshop and Conference Proceedings.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Improving the discussion of results**
We will use some of the additional space in the revision to provide further discussion. In particular, we will:
- state and discuss the relative computational costs of our complex simulators and our surrogates (briefly: we see that our surrogates run approximately 3-30 times faster across the experiments we consider); and
- provide a more extensive discussion of the advantages of training surrogates for interventional consistency. For example, with respect to the epidemic case study we present, we can discuss a hypothetical scenario in which a policymaker must decide whether and when to introduce a lockdown in order to minimise the average number of infections over the simulated time horizon. Please see the Global Rebuttal for further details. (Is this the kind of discussion you would like to see? We hope this provides the discussion you are looking for on the relative performance of the interventional vs observational surrogates (as in your second question); please do let us know in the discussion period if there are specific points you would like us to discuss.)
**Further experiments**
To address your point that "the test case is perhaps a bit simple": the reason we focus on this particular epidemic simulator is that it allowed us to clearly and transparently demonstrate that simulators can be expressed as SCMs, which is important for establishing that the theory of causal abstraction can be used to learn surrogates. To nonetheless demonstrate our framework in a further, more complex setting, we have prepared another case study in which we model an ecology of species in a predator-prey relationship and the possible effects of reintroducing an additional species into the mix. Please see the Global Rebuttal for details. We emphasise that this additional case study is indeed more complex, as requested, in that it consists of agents that move around a spatial environment and reproduce and die, and consists of stocks of natural resources that grow and are consumed over time. We hope this addresses this comment, and that it provides further evidence that our framework can be applied in various settings.
**Question about alternative approaches to surrogates and RNN**
We believe the approach you suggest could in principle be taken for a limited set of surrogate families. (This is not what we do in the `LRNN` model: as described in the Appendix, we feed $\tilde{\boldsymbol{\theta}}$ into the surrogate at each time step, such that intervening at time $t$ amounts to feeding in a modified $\tilde{\boldsymbol{\theta}}$ at that time.)
However, this will not work generally. For example, it would not work when the surrogate family is mechanistic, such as in the case of the `LODE` family we consider in our experiments. Thus it can't be the basis of a general framework for training interventionally consistent surrogates. In contrast, our framework is useful because it handles both cases, and is thus more general.
This approach would also have the undesirable property that an intervention at time $t$ would have a causal influence on variables "located" at times $<t$, which by definition should not be the case. This would mean that time has different meanings in the two models, which amongst other things will reduce interpretability.
Note that interventions in our surrogates are explicitly specified on abstract causal variables via $\omega$. Consider a pandemic sim where music venues can be closed. In our surrogates, $\omega$ specifies how venue closure can be interpreted as a direct intervention on, say, $\alpha$ in the `LODE` surrogate. If interventions aren't transformed via $\omega$ you lose this interpretability. You also lose out on scalability: $\omega$ reduces the space of possible interventions significantly.
**Literature recommendations**
Similarly to us [R4.1] & [R4.2] focus on improving causal reasoning methods in complex systems with causal inference. [R4.1] investigates hierarchical data settings with latent confounders, while [R4.2]'s active learning framework performs inference without assuming access to the true SCM. Instead, it assigns a Bayesian prior over the causal model class & jointly infers a posterior over both causal models (discovery) & queries of interest (inference). Both frameworks aim to handle complex causal relations which is crucial for surrogate modelling of complex systems.
Additionally, both frameworks employ GPs to model causal relationships, aligning with our objective of building surrogates when it's hard to work directly with the underlying SCM. However, they rely on several simplifying structural assumptions in order to restrict their focus to a class of causally sufficient/identifiable models. In contrast, we allow for a wider class of SCMs, resulting in a more robust framework. Further, these papers overlook the interventional consistency between the surrogate & simulator, a topic we thoroughly examine via causal abstraction. A notable distinction is our approach for addressing the computational difficulties of large-scale simulators, especially within policymaking scenarios, through a causally consistent & computationally efficient surrogate paradigm. Our goal of developing interventionally consistent surrogates aligns with the goal of accurately estimating causal effects discussed in these papers. We will update the Related Work section of the revision, & include [R4.1] & [R4.2] as examples of surrogates for causal inference.
[R4.3] is indeed relevant to our work when domain expertise is available & can be incorporated into the surrogate by enforcing monotonicity on certain features. As noted, an example could be when we identify a monotonic effect in interventions. For instance, one may enforce that longer lockdowns lead to less infections. We'll incorporate this discussion into the revision.
**Refs**
[R4.1] Witty et al. (2020)
[R4.2] Toth et al. (2022)
[R4.3] Riihimäki et al. (2010)
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed replies to my comments and for proposing a newer test case. I think there is room for this work, which relies partly on domain knowledge, but an effort should be made to incorporate the above responses.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback and for your considered assessment of our work. If you believe our improvements warrant an increase in your initial score, we would greatly appreciate it if this could be reflected in your updated score prior to the deadline in ~25 mins.
Both the simulation and the surrogate model are treated as causal models, where the surrogate is an element of a parametric family of causal models.
The map from simulation variables to surrogate variables is assumed to be known.
For training the surrogate, interventional consistency is used: interventions in the simulation should be well-represented by the surrogate.
That is, a map from simulation interventions to surrogate interventions has to be learned.
As a training loss, an abstraction error is used, which measures the distance between the pushforward of intervened simulation variables and intervened surrogate variables.
The theory section shows that zero approximation error leads to an exact transformation and that approximation errors for a given intervention are bounded.
The approach is demonstrated on an agent-based simulation of an epidemic model (SIRS), with interventions corresponding to different spreading rate settings (some represent lock-downs).
Strengths: - The paper is a pleasure to read: the writing is well-structured and very clear.
- The mathematical formalism is well-defined, consistent and easy to follow.
- Thus far, the causal abstraction literature has mostly focussed on the theoretical foundations and mathematical structure of abstractions. This paper shows an interesting application of causal abstractions and thereby bridges the gap between the causality ivory tower and useful applications.
Weaknesses: While I very much enjoyed reading the paper, it is unclear to me what its contribution is with respect to previous literature. The main elements (e.g. treating simulations as causal models and learning a simpler surrogate, making the connection to causal abstractions and using some form of approximation error for training) have been introduced before in [1] with only minor differences as far as I can tell.
References:
[1] Kekić et al. "Targeted Reduction of Causal Models." UAI (2024)
Technical Quality: 3
Clarity: 3
Questions for Authors: - L96 "we restrict our attention to hard interventions": It looks like you don't use this fact later on in the theory. Could you relax this requirement? In fact, you could argue that the interventions applied in the case study look more like soft interventions. You intervene on $\theta$, which really just encodes the parameters of the causal mechanisms responsible for propagating infections and other state variables through time. So $\theta$ looks more like something that encodes some mechanisms, rather than a state variable.
- L216 "we assume that the base model M is implicitly represented by a simulation model of a complex socio-technical system": Is this just a helpful picture to have in mind for the reader, or is there some technical assumption that you make? It doesn't seem to me that this is used later on. Maybe I'm confused by "socio-technical system": for what type of simulations would the method not work?
- Fig. 4: Why are there edges from $\tilde{\theta}$ to $\tilde{y}_2$ and to $\tilde{y}_3$? I thought $\tilde{\theta}_t$ encodes the transition parameters from one time step to the next. Similarly, for $\tilde{I}_0$: wouldn't it only affect the first time step?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 1
Limitations: The method has some severe limitations in how easily it can be applied to real-world simulations, as it does not specify how to tackle the following:
- How to make the distribution of the surrogate family tractable and differentiable.
- How to select the surrogate family.
- How to find the variable map between the simulation and the surrogate.
These elements are assumed to be given, but I think they are the main difficulty when modelling simulations. I understand that modellers need to provide some level of domain expertise and that makes every application somewhat different. But the paper doesn't give much guidance on how the difficult points above can be addressed.
While Prop. 1 and 2 are reassuring, they don't give the modeller many insights as to how the approach behaves. For example, how do you need to choose the set of interventions in order to get a good surrogate? That is touched on in the case study, but to make the method more reliable in practice it would be great to have some more general theoretical analysis. Another open question is: What happens when you misspecify the surrogate by choosing a causal structure that doesn't match that of the simulation?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 2
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time & kind comments ("paper is a pleasure to read..very clear", "well-defined, consistent and easy to follow", "bridges the gap between.. causality.. and useful applications").
The reviewer identifies as a single weakness the novelty of our paper wrt a very recent UAI 2024 paper by Kekić et. al. [R3.1], presented at UAI less than 3 weeks ago. Despite this being categorized as a contemporaneous work by the NeurIPS Reviewer Guidelines, we are happy to detail the main differences between the two papers, highlight our novel contributions, & discuss this contemporaneous work in our paper.
The two papers share some similarities as they both start from an existing causal abstraction framework, adapt a measure of interventional consistency, and use it to learn new models. However, the two works differ substantially in aims, methodology & results:
- [R3.1] has an _explanatory focus_. They deal with _targeted reduction of causal models_ aimed at generating a model "explaining causal influences on an observable target variable $Y$"; they describe the target variable $Y$ as a "property of interest" & the abstraction to $Y$ as a "detector" quantifying "the presence or magnitude of a phenomenon in the data". Thus, reduction works around a chosen focal point ($Y$) & the only causal dynamics of interest are the ones converging on that variable. In contrast, our work has a _simulation focus_: we deal with _learning surrogate models_ that would capture the whole causal dynamics of a system of interest at a different level of abstraction. Our work does not require the modeller to commit to any target variable: in the context of our SIRS simulation, [R3.1]'s approach would require the modeller to choose which target variable is of interest (e.g.: number of infected or number of recovered); our approach, instead, provides a simplified & accurate surrogate that simulates the dynamics of all relevant variables. To provide analogous simulation results, [R3.1]'s approach would require instantiating as many reduced models as there are variables of interest
- There are also fundamental methodological differences. Since reduction is interested in a single target variable, only constructive transformations equivalent to clustering are considered & the reduced causal model is a simple collection of nodes $Z_i$ only affecting node $Y$. This trivial structure does not encode complex causal dynamics, such as influences between variables $Z_i$; it has indeed a resemblance to ICA, as explained in their discussion. On the other hand, we consider a broader class of $\tau$ maps and arbitrary surrogate models describing complex causal structures
- [R3.1] introduces further simplifications (linearity of $\tau, \omega$, affineness of the high-level mechanisms) for the sake of identifiability analysis. Experiments are run on simple physical models, as these simplifications rarely hold in real-world systems. Not requiring them, our approach is applied to actual ABMs which, as the reviewer states, "bridges the gap between the causality ivory tower and useful applications"
The most striking similarity between our works is how we adapted the existing interventional consistency loss by replacing a JS distance with a KL divergence, & substituting a max operator with an expectation. But these are common simplifications in ML adopted in causal abstraction papers predating both our works. Interestingly, from a similar loss function, we derived propositions that are relevant wrt our different aim: [R3.1] proves positivity, invariance to invertible reparametrization & zero for exactness; we prove zero for exactness and an upper bound on the divergence.
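For concreteness, the expectation-over-interventions form of such a loss can be sketched as follows. This is a toy illustration on discrete distributions; the function names and interfaces are ours, not the paper's:

```python
import numpy as np

def kl(p, q):
    """KL divergence between two discrete distributions (assumes q > 0 wherever p > 0)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def consistency_loss(pushforwards, surrogate_dists):
    """Mean over sampled interventions i of KL(tau#P_i || Q_i), i.e. an
    expectation over interventions of a KL divergence, in place of a
    max-over-interventions of a JS distance."""
    return float(np.mean([kl(p, q) for p, q in zip(pushforwards, surrogate_dists)]))
```

The loss is zero exactly when every pushforward of the intervened simulator distribution matches the corresponding intervened surrogate distribution, matching the "zero for exactness" property.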
We will briefly discuss this contemporaneous work in the Related Work section & add in the Appendix an exhaustive comparison along the lines above. A detailed comparison will be valuable to practitioners for choosing the approach that best suits their needs, whether modelling locally the dynamics of a single variable or whether simulating an entire system at a coarser scale.
**Questions & limitations**
Q1: Hard interventions are a requirement of the used abstraction framework as the loss is defined between posets of interventions which arise only in such a case. Our interventions are defined on parameters controlling the dynamics of the system; this modelling choice allows us to: disentangle the dynamics of the system & the params controlling it; rely on known ODEs for the dynamics; express parsimonious (defined by a single value) & interpretable (lockdown equal to setting the param to zero) interventions. In general, our work may be expanded to deal with soft interventions [R3.2], e.g., to model realistic uncertain/"fat-fingers" interventions by policy-makers.
Q2: Just a helpful picture! The method is general.
Q3: The $\tilde{\theta}_t$ determine the update of the hidden state $z_t$ of the ODE/RNN at time $t$, and $z_t$ mediates the dependency of $\tilde{y}_t$ on the $\tilde{\theta}_{1:t}$ and $I_0$ (please see Appendix E). However, we have wrongly put arrows from $\tilde{y}_t$ to $\tilde{y}_{t+1}$; we will remove them in the revision.
L: It is incorrect that we do not specify "how to make the distribution of the surrogate family tractable and differentiable": we show three examples for how this can be done by combining neural networks, differentiable simulators, & standard distributions in the experimental section. We will nonetheless discuss other strategies in the revision, e.g. the use of neural generative models (e.g., normalising flows) to specify the stochasticity of surrogates, & of sample-based interventional consistency losses (e.g., MMDs) for the case of differentiable surrogates with intractable distributions.
For other limitations, please see our answer to Reviewer fR4E: specifically, on choosing interventions/finding a variable map/misspecification see Q1/L1/Q2.
**Refs**
[R3.2] Massidda et al. "Causal Abstraction with Soft Interventions." CLeaR (2022)
---
Rebuttal Comment 1.1:
Comment: Thank you for providing a detailed rebuttal to my review. I went over the rebuttal, the other reviews and had another look at the paper and will try to comment on the rebuttal below.
## On the contributions with respect to prior work
From the section on contemporaneous work from the [Call for Papers](https://neurips.cc/Conferences/2024/CallForPapers):
> For the purpose of the reviewing process, papers that appeared online within two months of a submission will generally be considered "contemporaneous" in the sense that the submission will not be rejected on the basis of the comparison to contemporaneous work. Authors are still expected to cite and discuss contemporaneous work and perform empirical comparisons to the degree feasible.
Since [R3.1] appeared online in November 2023 (see the [arxiv version](https://arxiv.org/abs/2311.18639)) it is not considered contemporaneous work for this submission, as the submission deadline was in May 2024.
The paper's contributions, as summarised in the first part of the paragraph starting in L 49:
> To address this, we build on recent developments in causal abstraction [...]. We view the complex simulator and its surrogate as structural causal models [...], and propose a framework for constructing and learning surrogate models for expensive simulators of complex socio-technical systems that are interventionally consistent, in the sense that they (approximately) preserve the behaviour of the simulator under equivalent policy interventions. This perspective enables treating the surrogate model as a causal abstraction of the simulator.
The points in the part above have been covered in [R3.1] (with some minor differences that were mentioned in the rebuttal and which we will discuss below). Such that the statement in L58:
>Our approach establishes, for the first time, a connection between complex simulation models and causal abstraction, and a practical approach to learning interventionally consistent surrogates for complex simulators.
is incorrect (same goes for L369).
Now we can consider the merits of the paper regarding the second part of the contributions (L55):
> We motivate our proposed methodology theoretically, and demonstrate with simulation studies that our method permits us to learn an abstracted surrogate model for an epidemiological agent-based model that behaves consistently in multiple interventional regimes.
and as an extension of prior work.
The epidemiological case study in Sec. 5, does encode more complex causal structure and has a nonlinear (hand-crafted, rather than learned) variable map from the simulation to the surrogate.
However, as discussed in the limitations section of my review, the method does not specify how to find them.
Prior work [R3.1] defines a method to learn the maps between variables and interventions under some simplifications (as outlined in the rebuttal).
This submission, however, generalises this to exactly one case study (or two if you count the results promised in the rebuttal) and does not provide a method that is directly applicable to the wider ML/simulation community.
The crucial parts are assumed to be given through domain expertise and/or not learned by the method.
The part of the proposed approach that is general (Sec. 4) is vague, and most of the heavy lifting has to be done by the person who runs the simulation.
Furthermore, in order to consider this an extension of prior work, one would expect an experimental comparison, which is also missing. The comparison could be done by "instantiating as many reduced models as the variables of interest" as mentioned in the rebuttal, or by choosing one variable to compare against.
I commented above on the theoretical results in the initial review.
Therefore, besides some of the main contributions having been covered by prior work, I think the limitations outweigh the contributions of the proposed approach when viewed as an extension of it.
## Questions
Q1: Couldn't you always consider the parameters defining the causal mechanism as a separate variable and intervene on this separate parameter variable? Suppose your soft intervention changes the mechanism from the original one to a member of a family of mechanisms and the parameter variable acts as a "mechanism selector". In that case, wouldn't the approach be applicable?
## Conclusion
I appreciate the effort put into the rebuttal and the clarifications provided. However, the primary concerns regarding the novelty and applicability of the proposed method, especially in light of prior work, remain.
---
Reply to Comment 1.1.1:
Title: (1/2)
Comment: (1/2)
Thanks for your response. We provide a summary TL;DR before providing a detailed response in our next comment.
## TL;DR
1) **[R3.1] is entirely unsuitable** for our setting:
a) **[R3.1] does not handle multiple interdependent target variables**;
b) **[R3.1] does not handle discrete or bounded target variables**; and
c) **[R3.1] scales extremely poorly** requiring $\sim 10^7$ trainable parameters for our epidemic case study, and _billions_ of parameters for more realistic simulators.
In contrast, **our approach does not suffer these limitations**.
2) We disagree that fixing $\tau$ does the "heavy lifting": **knowing the concepts you care about doesn't mean you know the mechanism that relates them**. Moreover, we believe that fixing an interpretable $\tau$ is easier than setting opaque hyperparameters to learn a potentially uninterpretable $\tau$, as in [R3.1].
3) **Methods for nonlinear and non-Gaussian abstractions are recognised by [R3.1] _themselves_ as important**. Our work has developed such a method contemporaneously with [R3.1].
Whilst we believe our work is contemporaneous with [R3.1], we are still consulting the ACs about the evidence we can provide without breaching review policy. Thus, we will address the point regarding arXiv in a later comment. | Summary: The paper proposes a novel framework for learning surrogate models of expensive simulators by formulating them as structural causal models. Specifically, the authors focus on learning interventionally consistent surrogate models for large-scale complex simulation models. The authors' claims are supported by theoretical and empirical results.
Strengths: a. The paper is well-written and easy to follow. Concepts are clearly explained and equations well-commented
b. Empirical results are sound
Weaknesses: a. Empirical tests seem to be limited in assessing the framework's true power. Tests on different, perhaps more complex, scenarios would help practitioners to better understand the benefits of the proposed framework and use it.
b. One motivation for the proposed framework is to provide policy-makers with "fast" and high-fidelity surrogates to test the effect of different interventions. However, the empirical results do not clearly indicate what the advantage of surrogates compared to simulators is in terms of speed of simulations.
Technical Quality: 3
Clarity: 3
Questions for Authors: See Weaknesses
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations of the work are reported in section 7
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and suggestions for improving our paper. We provide itemised responses to your two questions below.
**a.** To expand our empirical assessment of the framework we propose, and to provide practitioners with a further example of how they may use our framework, we have prepared a further case study that we will include in the revised paper. We provide details on this experiment in the Global Rebuttal above. We will also include a discussion of how causal surrogates can be benchmarked going forward, such as by assessing the degree to which the surrogates preserve the optimality or ordering of interventions in the original simulator with respect to some downstream optimisation task. Please see the Global Rebuttal for further details. We believe this would also be a useful guide to practitioners, and hope that this also addresses the reviewer's question here.
**b.** We will include in the revision a comparison of the runtimes of the simulators and surrogates. Briefly: in the epidemic case study presented, the `LRNN`/`LODERNN`/`LODE` surrogates each run approx. 3 times faster than the ABM; in the new experiment described in the previous bullet point and in the Global Rebuttal, the surrogates run 10-30 times faster. Obviously, the exact speed up will be different for different simulators. That said, there are additional considerations that motivate the use of smaller-scale surrogates with tractable likelihood functions, such as interpretability, improved communication (e.g., in our epidemic case study, we would be able to explain to a decision-maker who is familiar with the `LODE` surrogate but not with the complex ABM that "applying intervention X in the ABM is like applying intervention Y in the `LODE`"), and the ability to optimise probabilities/probability densities directly, thus even potentially obviating the need for any simulation from the surrogate in any case. We will discuss this more fully, using some of the additional space in the revision, and the appendix if necessary.
---
Rebuttal 2:
Comment: Dear authors, thank you for the rebuttal and the new experiments you mentioned in the global rebuttal.
I still think that, although the topic is relevant for the ML community, there are strong limitations in its applicability by practitioners without strong domain expertise.
---
Rebuttal Comment 2.1:
Title: (1/2)
Comment: Thank you for your response.
We are a bit unsure about what your comment about domain expertise is referring to without any additional context. We have nonetheless taken an educated guess as to your concern and address this below (TL;DR now; detailed response to follow).
### TL;DR
1) **Our reliance on domain knowledge is minimal** relative to some areas of interest to the NeurIPS community. Our assumption of a fixed $\tau$ map is entirely reasonable given that **modellers always know why they are building a model**. (Why would they build a model if they _didn't_ know this?) We will in any case include several detailed examples for how to choose $\tau$ in different settings; we include some of these examples below. (We believe this would also help to address the limitation listed by Reviewer `fR4E`.)
2) The principles of **keeping humans in the loop**, of **incorporating human knowledge into ML pipelines**, and **adequately evaluating a model's objectives prior to deployment**, often appear in guidelines on the **responsible use of AI** in decision-making (e.g., Principles 1, 3 & 6 of `The Institute for Ethical AI & Machine Learning`'s "Responsible Machine Learning Principles"; "Assessment and Accountability" in `The Center for AI and Digital Policy`'s "Universal Guidelines for AI"). Our assumption that there exists a domain expert in the loop who informs $\tau$ is therefore **consistent with the responsible use and development of interventionally consistent surrogates** and in this sense is a strength of our approach.
---
Reply to Comment 2.1.1:
Title: (2/2)
Comment: ## More detailed response
1) Using human knowledge & expertise is ubiquitous in areas of interest to the NeurIPS community. For instance, universal differential equations [1] require domain experts to specify an ODE roughly capturing the physical laws underlying a system. Further, many modern neural architectures rely on physical symmetries found through human domain expertise to improve efficiency (e.g.: permutation symmetries in graphs [2], equivariances in NNs for point cloud data [3], time reparameterisation invariance in NNs for time series [4]). Other active areas of research that incorporate strong biases from human knowledge include physics-informed NNs [5] & human-in-the-loop RL [6].
**Our use of domain knowledge is minimal** in comparison to these areas. We only require the modeller to supply the variables of interest to them so that $\tau$ is defined. (The modeller generally _will_ know what they are interested in using the model for: if they have sufficient domain knowledge to build a model, they will know what they want to achieve with the model, i.e. how to define $\tau$.) Our minimal reliance on human input is exemplified by our `LRNN` surrogate, which is an off-the-shelf/general purpose network that reproduces the complex simulators' behaviours of interest by taking in interventions through the learned $\omega$ map.
We will however provide in our revision several detailed practical examples (beyond the two case studies we present) illustrating how $\tau$ may be chosen for large-scale simulators, as a guideline for practitioners. Some examples we will expand on in the revision:
- Consider the model of forced migration in [7]. Variables of interest to these modellers are the total number of displaced people by location over time by age, gender, and other demographic characteristics. $\tau$ would therefore be defined by counting the number of agents in each of these states at each location, i.e. $\tau_{l,d}(x_t) = \sum_{a\in A} \mathbb{I}[\text{agent }a\text{ has demographic features }d\text{ and is in location }l\text{ at time }t]$ where $l$ labels locations, $d$ are demographic features, $x_t$ is the state of the simulation at time $t$, $A$ is the set of all agents, & $\mathbb{I}$ is the indicator function.
- Consider the model of flood risk mitigation behaviours in [8]. The modellers want to model what precautions households take to protect themselves from floods in high-flood-risk areas under different policy interventions. Households can: do nothing; purchase insurance; purchase property-level protection; or purchase property-level protection & insurance (see Fig. 3). Here, $\tau$ would count the number of households taking such actions in this case (as in the example above).
- Consider the UK housing market model in [9]. Tables 2-6 define macroeconomic indicators such as inflation rates, unemployment rates & real interest rates that the modellers care about. Here, $\tau$ would be defined by standard macroeconomic formulae for these quantities.
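To make the first of these examples concrete, here is a minimal sketch of such a counting map $\tau$; the agent attributes (`location`, `demographic`) and the toy micro-state are hypothetical, not taken from [7]:

```python
from collections import Counter

def tau(agents):
    """Aggregate a micro-level simulation state into macro-level counts.

    `agents` is a list of dicts with hypothetical keys 'location' and
    'demographic'; tau counts agents per (location, demographic) pair,
    mirroring the indicator-sum definition above.
    """
    return Counter((a["location"], a["demographic"]) for a in agents)

# Toy micro-state with three agents.
state = [
    {"location": "camp_A", "demographic": "adult_f"},
    {"location": "camp_A", "demographic": "adult_f"},
    {"location": "camp_B", "demographic": "child"},
]
counts = tau(state)
print(counts[("camp_A", "adult_f")])  # 2
```

The same pattern (count agents in each state of interest) covers the flood-risk example as well, with household actions in place of demographics.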
2) Various guidelines on the **responsible use of ML** (e.g. those highlighted in our TL;DR) recommend that ML **should always be applied in conjunction with domain experts** to adhere to principles governing the responsible use of ML. This is especially true when it comes to high-stakes decision-making in complex systems (the setting we consider). In this way, rather than being a limitation of our approach, our assumption that $\tau$ is informed by a human expert provides a useful & important way to integrate human knowledge into the surrogate by allowing the decision-maker to fix the properties they care about for their decision-making – all while still allowing for flexibility through the learnable $\phi$ and $\psi$ parameters of the $\omega$ map & surrogate itself, respectively – & thus aligns with recommended guidelines for the responsible use of AI.
[1] White et al. "Stabilized neural differential equations for learning dynamics with explicit constraints." NeurIPS (2023)
[2] Gilmer et al. "Neural message passing for quantum chemistry" ICML (2017)
[3] Schütt et al. "Equivariant message passing for the prediction of tensorial properties and molecular spectra." ICML (2021)
[4] Kidger et al. "Deep signature transforms" NeurIPS (2019)
[5] Krishnapriyan et al. "Characterizing possible failure modes in physics-informed neural networks." NeurIPS (2021)
[6] Guan et al. "Widening the pipeline in human-guided reinforcement learning with explanation and context-aware data augmentation." NeurIPS (2021)
[7] Ghorbani et al. "Flee 3: Flexible agent-based simulation for forced migration." Journal of Computational Science 81 (2024)
[8] Geaves et al. "Integrating irrational behavior into flood risk models to test the outcomes of policy interventions." Risk Analysis (2024)
[9] Bardoscia et al. "The impact of prudential regulations on the UK housing market and economy: insights from an agent-based model" Bank of England Working Paper (2024)
---
Rebuttal 3:
Comment: Thank you again for your feedback – we hope we have been able to address your remaining concern about the limited role that domain knowledge plays in defining the variables of interest to the modeller. If you believe our improvements warrant an increase in your initial score, we would greatly appreciate seeing this be reflected in your updated score prior to the deadline in ~20 mins. Thank you again! | Summary: The paper introduces a framework for learning surrogate models for complex simulations that preserve simulation behavior under changes in the structural parameters of the underlying model (i.e. the intervention) using causal inference. The framework is tested on epidemiological agent-based models of disease spread, against a set of ablative baselines.
Strengths: S1: The paper addresses an important problem in the simulation of complex socio-technical, highly dynamical, systems; i.e. reducing the computational cost of running large-scale simulations when critical parameters of the model change. The proposed framework not only has potential to significantly accelerate experimentation in policy-adjacent fields, but also in other technical environments where that's the case (physics, many engineering sciences, etc).
S2: The theoretical foundation is solid, building on established concepts from causal inference and abstraction. The authors provide formal definitions and proofs for their key results on abstraction error and interventional consistency, and I found the reading relatively clear (even though causal models are not my field of work).
S3: The empirical evaluation on the SIRS epidemiological model provides a concrete demonstration of the method's effectiveness on what I think is a fairly representative model of the field (modulo again my superficial familiarity with such fields). The comparison between interventionally and observationally trained surrogates highlights the importance of considering interventional consistency, which neatly justifies both the paper narrative as well as the methodology.
S4: Overall the connection between complex simulation models and causal abstraction feels underexplored, and this work opens up new avenues for applying such techniques across the board.
Weaknesses: W1: While the SIRS model provides a good initial test case, the empirical evaluation is limited to a single domain. It would strengthen the paper to demonstrate the method's applicability to other types of complex simulations, such as economic or social systems models.
W2: The paper does not provide a thorough comparison to existing surrogate modeling techniques beyond a basic observational training baseline. It would be extremely valuable to see how the proposed method compares to state-of-the-art surrogate modeling approaches, even if they don't explicitly consider interventional consistency.
W3: The scalability of the proposed method to very large and complex simulations is not thoroughly addressed. It's unclear how well the approach would work for simulations with hundreds or thousands of variables and complex interdependencies.
W4: The paper lacks a discussion of how to choose appropriate interventions for training the surrogate. In practice, the space of possible interventions may be very large, and it's not clear how to select a representative set for training and testing.
Technical Quality: 3
Clarity: 3
Questions for Authors: Q1: How sensitive is the method to the choice of the interventional distribution $\eta$? How would you recommend practitioners choose this distribution in real-world applications? If you were to read the paper afresh, what would you need to have in the manuscript to be able to expand its experimental results so as to form a consistent set of benchmarks for the community?
Q2: Have you explored the performance of the method under model misspecification, i.e. where the surrogate family does not include a model that can perfectly capture the behavior of the original simulator?
Q3: The paper mentions that the method does not require explicit knowledge of the simulator's SCM. How does this compare to methods that do utilize such knowledge, and are there cases where having this knowledge would be beneficial (or where it's a necessity and/or the proposed method critically fails)?
Q4: How does the computational cost of training the surrogate compare to running the original simulator? Is there a break-even point in terms of the number of interventions one needs to evaluate for the surrogate to be worthwhile?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: L1: The current framework assumes that the map $\tau$, which defines the aggregate quantities of interest, is pre-specified. In practice, determining the right level of abstraction and which quantities to preserve may be challenging. The paper would benefit from a discussion of how to choose appropriate τ maps.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Q1 & W4: Generally, we expect $\eta$ to be informed by domain experts/policymakers based on downstream tasks, perhaps accounting for economic/political constraints. E.g. in a pandemic, economic constraints may preclude lockdowns of length >2 weeks, and political pressure may demand action soon; here, $\eta$ could be uniform over 2-week lockdowns starting within a month.
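As a toy illustration of that example, a sampler for such an $\eta$ (uniform over 2-week lockdowns starting within a month; the dict keys are hypothetical) might look like:

```python
import random

def sample_intervention(rng):
    """Hypothetical eta: uniform over 14-day lockdowns whose start day
    is drawn uniformly from the next month (days 0..30 inclusive)."""
    return {"start_day": rng.randrange(0, 31), "length_days": 14}

rng = random.Random(0)
samples = [sample_intervention(rng) for _ in range(100)]
```

Economic or political constraints would then enter simply by restricting (or re-weighting) the support of this sampler.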
We now turn to the reviewer's question on benchmarks. Whilst numerous datasets & benchmarks have been proposed in the causality literature [R1.1-3], they are typically confined to SCMs much smaller than those of large-scale simulators. Likewise, to our best knowledge, there is no consensus in the social simulation community regarding benchmark simulators, besides simple examples (e.g. the Schelling model). Further, benchmarking surrogates requires access to a set of realistic downstream tasks. For instance, to properly assess a pandemic surrogate one must know the problems epidemiologists are interested in. Summarising, benchmarking surrogate models of complex systems requires:
- A set of large-scale standardised simulators spanning application domains.
- A set of downstream tasks for each simulator on which surrogate models can be deployed.
While we focus on presenting a new methodology for surrogate modelling grounded in causal abstraction theory, as we will discuss in revision, one way to devise a benchmark on downstream tasks is to check that interventional properties are preserved. For example, one can check that the internal ordering of interventions wrt their efficacy is preserved (see the Global Rebuttal).
Q2: As recognised by the reviewer, the surrogate family may not fully capture the behaviour of the simulator. Note that the task of learning a surrogate may be formulated as minimising $\text{KL}(P \Vert Q(\psi, \phi))$ over $\Psi \times \Phi$. Here, $P$ is a joint distribution over interventions and abstract states given by first sampling an intervention $\iota \sim \eta$ and then sampling from the base model under $\iota$, while $Q(\psi, \phi)$ is a joint distribution defined by first sampling $\iota \sim \eta$ and then sampling from the corresponding interventional distribution in the surrogate. Given the form of this problem, classical results for maximum likelihood estimation can be adapted to provide misspecification guarantees; by leveraging [R1.4], one can show that our surrogate estimation is asymptotically normal w/ sandwich covariance & mean corresponding to the surrogate model w/ minimum KL-divergence to the perfect abstract model under $\tau$.
Q3: Our method assumes no knowledge of the simulator's SCM. This ensures our method is generally applicable as it's difficult to directly characterise the SCM of large-scale simulators. It's possible that access to the base SCM/DAG may expedite abstraction by allowing us to focus on minimal intervention sets [R1.5-6], or leverage the identifiability of interventional distributions to reduce the number of simulations required from the base model [R1.7-8]. However, it's unclear that applying the do-calculus on large causal graphs is more efficient than simulating interventions directly. We will discuss these ideas in more detail in the appendices of the paper.
Q4: Many of the surrogates we consider are neural networks, thus the time complexity of sampling an interventional outcome scales linearly w/ the dimension of interventions. Moreover, our neural surrogates benefit from parallelisation within each time step, massively reducing their run-time. In contrast, social simulators often rely on pairwise interactions between potentially millions of agents that are difficult to parallelise. Thus, once trained, neural surrogates can be orders of magnitude faster to evaluate than the simulator. Precisely characterising this speed up is non-trivial & depends on the internal structure of the simulator and the surrogate. For a comparison of runtime costs in our experiments, see part (b) of our response to Reviewer fUGP.
Of course, there is a training cost associated with the surrogate, but this will be amortised over downstream tasks. That is, when the collective sample complexity of downstream tasks exceeds the sample complexity of causal abstraction, learning a surrogate is beneficial.
Furthermore, running complex simulators may require technical expertise & high performance hardware not widely available. Providing interpretable surrogates that run on commodity hardware reduces the barrier to entry in such cases. Note that this mirrors the release of low memory LLMs that run on commodity hardware commonplace today.
W1: Please see the Global Rebuttal.
W2: Our baselines are already complex wrt SOTA in social simulation (see e.g. [R1.9]). Please let us know if there are specific baselines you would like us to compare against.
W3: We agree that more complex simulators are used in practice, but we prioritised clear presentation by focusing on experiments that are easy to understand, familiar to the social simulation community, & complex enough to demonstrate our method's viability. As in our response to Q1: applying our method to more complex models is complicated by a lack of standard benchmarks in the social simulation community.
L1: We agree that selecting the appropriate granularity via $\tau$ is hard, but in many cases domain experts know what emergent properties interest them. For instance, epidemiologists/labour economists/central bankers often care about the number infected individuals/unemployment numbers/inflation rates over time. In other words, domain expertise can be leveraged to select a suitable $\tau$. We do not know of existing work that automatically selects the right granularity for abstraction; the closest we know of applies rate distortion theory to learn low dimensional representations of MAB and RL problems [R1.10-11] but do not have a causal flavour & focus on single downstream tasks.
**Refs**
In Official Comment. (Too few chars sorry!)
---
Rebuttal 2:
Title: References in Rebuttal
Comment: **Refs**
[R1.1] Geffner et al. "Deep end-to-end causal inference" NeurIPS Workshop on Causal Machine Learning for Real-World Impact (2022)
[R1.2] Melistas et al. "Benchmarking counterfactual image generation" arXiv (2022)
[R1.3] Mooij et al. "Distinguishing cause from effect using observational data: Methods and benchmarks" JMLR (2016)
[R1.4] White "Maximum likelihood estimation of misspecified models" Econometrica (1982)
[R1.5] Aglietti et al. "Causal Bayesian optimization" AISTATS (2020)
[R1.6] Lee and Bareinboim "Structural causal bandits: where to intervene?" NeurIPS (2018)
[R1.7] Lattimore et al. "Causal bandits: Learning good interventions via causal inference" NeurIPS (2016)
[R1.8] Bilodeau et al. "Adaptively exploiting d-separators with causal bandits" NeurIPS (2022)
[R1.9] Angione et al. "Using machine learning as a surrogate model for agent-based simulations" Plos one (2022)
[R1.10] Arumugam and Van Roy "Deciding what to learn: A rate-distortion approach" ICLR (2021)
[R1.11] Arumugam and Van Roy "Deciding what to model: Value-equivalent sampling for reinforcement learning" NeurIPS (2022)
---
Rebuttal Comment 2.1:
Comment: Thank you once again for your feedback – we hope we have been able to address your questions in our responses. If you believe our improvements warrant an increase in your score, we would be very grateful to see this reflected in your updated score ahead of the deadline in ~25 mins. | Rebuttal 1:
Rebuttal: We thank all reviewers for their feedback. We have responded to each of your points individually & are happy to expand on anything in the discussion period. We address some common points here.
**Additional experiment**
A common recommendation was to test our method in another experiment. We have therefore designed and run a further experiment, and provide details & results here. We'll use the extra space & appendix to describe & present the results of this extra case study, & hope this addresses `fR4E`, `fUGP`, & `3hmr`'s requests to include a further example from a different policy setting.
In this extra case study, we consider a different policy scenario: reintroducing a species into an ecology, & simulating the ensuing population dynamics. Specifically, we slightly adapted a model from [G1]: we model an environment initially consisting of `grass`, `sheep`, & `wolves`, in which `grass` grows & is eaten by `sheep`, `sheep` eat `grass` & reproduce & get eaten by `wolves`, and `wolves` eat `sheep` & reproduce. The intervention we consider entails reintroducing a third animal species -- `bears`, which eat both `sheep` and `wolves`, & also reproduce -- whose population is originally zero but is made non-zero at some intervention time $t$. (We imagine that $t$ is the variable the policymaker wants to optimise here.) This additional simulator is suitable for similar reasons to the SIRS ABM: predator-prey population dynamics models as in [G1] are easy to understand & are familiar to the social simulation community, yet are complex enough to demonstrate our method's viability.
We simulate the interactions between these 4 species in a spatial model, in which members of each animal species move around the grid and interact with the other species. We are then interested in understanding how the reintroduction of the `bears` affects the overall population dynamics (i.e., the counts of each animal in each species, along with the quantity of `grass` (i.e. the natural resource sustaining life) over time). As in the epidemic case study, we consider the problem of learning interventionally consistent surrogates for this complex spatiotemporal simulator, & once again examine three possible approaches for constructing surrogate families:
- a family of deterministic mechanistic models based on a discrete-time Lotka-Volterra model of population dynamics [see, e.g., G2], where (analogously to the `LODE` surrogate family discussed in the epidemic case study) the underlying deterministic dynamics of the population dynamics model index a probability distribution at each time step (in this case, a Binomial distribution for each of the 4 species);
- an `LRNN` family, exactly mirroring the `LRNN` family considered in the epidemic case study presented already;
- and a third family considers a hybrid approach, where (as in the `LODERNN` family considered in the epidemic case study) we pass a recurrent network over the underlying Lotka-Volterra-type population dynamics model first before taking the output of the recurrent network to index the Binomial distributions for each of the four species.
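As a toy illustration of the deterministic core of the first (mechanistic) family, one step of a discrete-time Lotka-Volterra update might look like the following; the rate values are illustrative, not taken from the rebuttal, and in the actual family this deterministic trajectory would index a Binomial distribution per species at each time step:

```python
def lotka_volterra_step(prey, pred, a=0.1, b=0.002, c=0.0025, d=0.2):
    """One discrete-time (Euler-style) Lotka-Volterra update.

    Rates: a (prey growth), b (predation), c (predator growth per prey
    eaten), d (predator death). All values here are illustrative.
    """
    new_prey = prey + a * prey - b * prey * pred
    new_pred = pred + c * prey * pred - d * pred
    # Populations cannot go negative.
    return max(new_prey, 0.0), max(new_pred, 0.0)

# One update starting from 100 prey and 20 predators.
p1, q1 = lotka_volterra_step(100.0, 20.0)
print(p1, q1)
```

Extending this to the four-species setting (grass, sheep, wolves, bears) amounts to adding the corresponding interaction terms, with the bear population switched on at the intervention time $t$.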
A table of the results for this additional case study is shown in the pdf document accompanying the Global Rebuttal; the results are qualitatively very similar to the epidemic case study already presented: training surrogates using our framework yields significant improvements in the surrogates' interventional consistency over observationally trained baselines, and interventionally trained surrogates see only a minor decrease in performance on observational data compared to the drop in performance the observational surrogates see on interventional data.
**Discussion of results & building benchmarks**
Some reviewers (namely `fR4E`, `fUGP` & `3hmr`) also suggested we expand our discussion of the results and of what benchmarks should be established for causal surrogate modelling. To this end, we will specifically discuss one further approach to benchmarking (beyond the metrics appearing in Table 1) based on the performance of causal surrogates on downstream decision-making tasks. In the context of the epidemic case study presented, we will discuss and use as a benchmark metric the question of how well the different surrogates preserve the ordering of interventions (lockdown vs. no lockdown) in terms of the degree to which they reduce the number of infections over time. The SIRS ABM predicts that any lockdown is better than no lockdown at all, and we would like for our surrogates to preserve this property (i.e. that no lockdown in the surrogate is also worse than introducing any lockdown).
We have checked for this property in both the interventional & observational surrogates, and see that the latter often do not predict that no lockdown is the worst option in this respect, and in some cases mistakenly predict that no lockdown is the _best_ option (e.g., the observational `LRNN` predicts that no lockdown was the best option in 1 of 5 training repeats, and was not the worst option in all 5 of 5 training repeats). In contrast, none of the interventional surrogates predict that no lockdown is the best option, and only the interventional `LODE` model predicts that no lockdown is not the worst option (in only 2 out of 5 training repeats).
We will use some of the extra space & appendix to expand on the discussion of the results in this way, and use this as an example of benchmarks that can be established & used in the future literature on interventionally consistent surrogate modelling.
**Refs**
[G1] Wilensky and Reisman "Thinking like a wolf, a sheep, or a firefly: Learning biology through constructing and testing computational theories—an embodied modeling approach." Cognition and instruction 24.2 (2006): 171-209.
[G2] Sabo "Stochasticity, predator–prey dynamics, and trigger harvest of nonnative predators." Ecology 86.9 (2005): 2329-2343.
Pdf: /pdf/7bd688e32d127d8b9cc876f8e4e8438e65346adb.pdf | NeurIPS_2024_submissions_huggingface | 2024 | null | null | null | null | null | null | null | null |
Predictor-Corrector Enhanced Transformers with Exponential Moving Average Coefficient Learning | Accept (poster) | Summary: This work presents a transformer architecture inspired by predictor-corrector methods for solving ODE problems. It adopts the Adams-Bashforth-Moulton method and utilizes an exponential moving average (EMA) method to compose the predictor, discussing two correctors (the EMA-based one and the simple backward Euler method). Without the ODE background, the proposed method can be seen as a special case of a transformer with cross-layer connections: the residual connection only links two layers, while the proposed method links multiple layers in a high-order way.
Strengths: This work provides theoretical results to enhance the connections between residual connections and the ODE solver, providing an interesting perspective on viewing cross-layer connections within neural networks.
Using EMA as a high-order predictor is a simple and flexible solution, and the experiments demonstrate its effectiveness.
Weaknesses: The experiments are insufficient and out-of-date. As the main contribution is the new transformer architecture, scaling law style experiments and testing on modern LLMs' benchmarks are recommended. I saw that Appendix D has some results, but it's better to have a formal experiment.
Technical Quality: 3
Clarity: 3
Questions for Authors: What's the efficiency of the proposed method, such as inference time and GPU memory consumption? I saw that inference efficiency has been mentioned in the limitations; it's better to have quantitative results and also memory states.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Agree with the limitation section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Following your suggestion, we have scaled up our large language model (LLM) experiments, focusing on how our PCformer performs in the LLM setting. We aim to demonstrate the capabilities of PCformer from both model capacity and data volume perspectives. The updated results are as follows:
| Model (Params & Tokens) | Wiki. (ppl) | LMB. (ppl) | LMB. | PIQA | Hella. | SCIQ | ARC-c | Winogrande | Avg. |
| -------------------------- | ---------- | --------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- |
| Transformer++ (340M & 6B) | 38.5 | 96.1 | 21.4 | 60.3 | 29.1 | 69.2 | 21.5 | 50.4 | 41.9 |
| PCformer (340M & 6B) | 35.3 | 78.8 | 23.6 | 61.6 | 30.1 | 71.6 | 22.9 | 51.8 | 43.6 |
| Transformer++ (340M & 16B) | 28.3 | 65.3 | 29.8 | 63.2 | 33.9 | 73.2 | 23.1 | 51.4 | 45.8 |
| PCformer (340M & 16B) | 25.6 | 39.7 | 34.5 | 65.2 | 36.9 | 79.6 | 23.2 | 52.2 | 48.6 |
| Transformer++ (1.3B & 16B) | 23.8 | 26.2 | 37.3 | 65.7 | 37.6 | 78.6 | 23.7 | 51.5 | 49.0 |
| PCformer (1.3B & 16B) | 20.9 | 23.2 | 42.5 | 68.3 | 43.4 | 81.5 | 25.1 | 52.4 | 52.2 |
| Transformer++(1.3B & 100B) | 16.3 | 11.8 | 51.6 | 71.0 | 51.7 | 86.7 | 28.1 | 54.6 | 57.2 |
| PCformer (1.3B & 50B) | 16.2 | 9.38 | 55.1 | 71.9 | 54.8 | 88.6 | 29.6 | 57.2 | 59.5 |
| PCformer (1.3B & 100B) | **14.0** | **7.46** | **59.6** | **73.8** | **60.0** | **90.7** | **31.7** | **61.7** | **62.9** |
- In our initial submission, we reported results on a 340M parameter configuration trained with 6B tokens of SlimPajama data. Here we include additional perplexity metrics on Wikitext and Lambada to provide more comprehensive results. Furthermore, for more convincing results, we scaled the model size from 340M to 1.3B parameters, the maximum size feasible within the limited time. Scaling to larger models requires additional engineering effort, such as implementing tensor or pipeline parallelism for robust and efficient training (using the Megatron codebase). We plan to report results on 7B or larger models in our next version, and we believe PCformer can indeed perform strongly in those settings.
- For the data, we scaled from 6B to 16B and 100B tokens. Our findings show that average performance improves significantly with more training data, demonstrating that our model benefits from increased data size without experiencing diminishing returns. This indicates that model bias persists and plays a crucial role in our setting (larger settings are still worth exploring in future work). As the model size increases (from 340M to 1.3B), PCformer shows substantial performance gains, both in accuracy on the LM harness evaluation and in lower perplexity on the Wikitext and Lambada tests.
- We trained the 1.3B model on a cluster of 256 A100 GPUs. Note that the 1.3B models consist of 24 layers, with a hidden size of 2048, an FFN size of 5432 (8/3 × hidden size), 16 attention heads, and SiLU activation functions. The baseline (1.3B + 100B tokens) took up to 20 hours to train, and our PCformer nearly 40 hours; thus PCformer (1.3B + 50B tokens) was trained in a similar time.
- Additionally, given that PCformer consumes more FLOPs per forward pass, we compared models with similar total FLOPs consumption. Our PCformer trained on less than 50B tokens outperformed Transformer++ trained on 100B tokens by a significant margin. Moreover, PCformer (340M & 16B) achieved results comparable to Transformer++ (1.3B & 16B) with nearly 1/4 of the parameters. These results demonstrate that PCformer remains competitive in settings with larger model capacity and data volumes, highlighting the potential for further research in model designs that fully utilize available data.
- While the increased inference and training costs in the decoder-only paradigm are notable, the substantial performance gains justify continued exploration of PCformer, including parameter-efficient training algorithms.
- We believe these updated results address concerns about the relevance and sufficiency of our experiments. Thank you for your valuable feedback. Please let us know if you have any further concerns.
> Q1: What's the efficiency of the proposed method, such as inference time and GPU memory consumption? I saw that inference efficiency has been mentioned in the limitations; it's better to have quantitative results and also memory states.
This question was a common one among reviewers; please see the global response W1 for more details.
Due to the limited space in the global response, we provide a comparison of the total inference time between PCformer and the baseline in the 1.3B setting (trained on 100B tokens) here. Using 8 A100 GPUs with CUDA 12.2, we measured the inference time across all benchmarks listed in the table. PCformer took 1428 seconds, while Transformer++ took 1037 seconds. Although PCformer's inference time is longer, the performance gains make the additional time investment worthwhile. | Summary: This paper takes inspiration from established high-order approaches in numerical analysis for solving differential equations to improve the architectural design of Transformers. Prior work has shown that residual networks can be seen as discrete approximations of Ordinary Differential Equations (ODE) and explored methods to improve the quality of the solution. Specifically, this paper introduces a predictor-corrector learning framework to minimize approximation errors and an exponential moving average-based coefficient learning method. These advancements are used within Transformer architectures that are evaluated on several tasks such as translation, summarization, language modeling and language understanding tasks, improving over standard and previous ODE-based Transformers.
Strengths: - The paper is well motivated and clearly described for the most part. It provides sufficient background to make the reading self contained and explain what is the novel contribution of the work. It's also well situated and compared to prior work on ODE Transformers as it discusses prior first order and high-order methods with a single step.
- Builds on top of predictor-corrector methods from numerical analysis and extending them to improve their stability when training ODE Transformers. The key novelty lies in the selection of the predictor and corrector methods, and the proposal to use exponential moving average method to learn the coefficients in high-order methods instead of using constant values.
- The proposed method uses a high-order method as predictor and a multi-step method as corrector and aims to provide a better approximation to the implicit ODE problem in Transformers than previous studies.
- Presents empirical results on a variety of tasks that are better than the original and previous ODE Transformers. This makes the work of interest to researchers working on ODE Transformers.
Weaknesses: - W1. There is lack of emphasis in the experiments on the computational overhead introduced by the high-order predictor and multi-step methods. Achieving better quality is not sufficient for adoption in practical settings. I'd suggest quantifying the training and inference cost compared to standard and ODE Transformers.
- W2. In terms of parameter efficiency, the proposed model achieves better performance with 1/3 of parameters only on one of the examined datasets. It's not clear if this is by chance or if the result holds on other settings. It would be useful to report performance with a smaller model size on the rest of the datasets to better establish the underlying claim.
- W3. The impact to research communities beyond the ones focusing on ODE transformers is somewhat limited because the results are with relatively small model sizes. Showcasing that the results hold on larger architectures such as 7B Mistral would make the results more convincing.
- W4. The improvement of the proposed multi-step high-order method compared to alternative predictor-corrector methods is small (Table 10), and the benefit compared to simpler first-order methods from the predictor-only paradigm in a controlled setting is not provided (similar to the ablation in Table 10).
Technical Quality: 3
Clarity: 3
Questions for Authors: - Q1. Where does the parameter efficiency of the model stems from and how a fair comparison with other models was ensured? In Appendix D, there is a section that discusses this but the comparison does not seem to be apples-to-apples (i.e. same config and #params) and whether the result holds in all the tasks in the experiment section.
- Q2. The method description wasn't entirely clear to me and I have the following questions:
- The parameterization of $\mathcal{F}(P_{t+1}, \theta_t)$ is not clear from the textual description. What kind of transformation is used here?
- How are the coefficients parameterized exactly? Do you parameterize a single coefficient for each order of an $n$-order predictor, or is the same coefficient shared?
- The text mentions that a larger weight ($a=0.5$) is assigned to the estimated $F_{t+1}$; is this value fixed, or have you experimented with different values? It would be useful to clarify which coefficients among $\alpha$ and $\gamma$ are hyper-parameters and which ones are learned.
- Q3. Is the observed improvement worth the additional computational cost for obtaining a better approximation? It would be useful to show the difference compared to a standard or a simpler ODE Transformer.
- Q4. Do you mean "ROUGE results" instead of "Rough results" in Table 4? In the same table, what is the number of parameters used by each model?
- Q5. What is the size of the models in Table 8 and is it the same for all variants?
- Q6. In the ablation experiment and other results, there is lack of comparison with a simple 1-order single/multi-step method or high-order 1-step method. Is the proposed method substantially better than them?
- Q7: Did you perform continued training of BERT directly on language understanding tasks or on some pre-training data first in Table 7? Please also report the number of parameters for both models and the exact configuration for PCformer.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitation section covers the shortcomings when the proposed methods are applied to different model designs but lacks discussion about the experiment shortcomings related to model size, computational cost, and comparison with simpler methods in a controlled setting.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your recognition of our motivation and the organization of our method.
We also appreciate your constructive feedback on the shortcomings of the current manuscript, and we believe all the concerns will be well addressed in our improved version. Here, we would like to address the details of your main concerns:
> W1: Computational overhead: e.g., training and inference cost.
We hope you can refer to the details of the general response W1. We report the training and inference cost of machine translation tasks and give a detailed analysis of large language models.
> W2: Parameter efficiency: performance with a smaller model size on the rest of the datasets.
- Besides OPUS, we have actually already done similar experiments in Figure 3 (Appendix), where a 28M PCformer can beat larger models.
- In our general response W2 with our newly proposed results on LLMs, we found that **PCformer can beat the vanilla Transformer by training on only half of the training data (1.3B models, 50B vs. 100B tokens)!** More excitingly, our 340M PCformer (16B tokens) achieves results comparable to the 1.3B models with only 1/4 of the parameters!
- For summarization tasks, baselines with 12- and 18-encoder-layer configurations achieve RG-1, RG-2, and RG-L scores of 41.18, 18.23, 38.03 and 41.33, 18.31, 38.20, respectively. In comparison, our 2-order method yields scores of 41.96, 18.99, 38.74, **outperforming the baselines while utilizing only 1/3 to 1/2 of the layers.**
We will add more comprehensive comparisons in the next version.
> W3: Results on larger models such as Mistral 7B.
- We appreciate your suggestion and completely agree. Our current results demonstrate that our model performs exceptionally well on smaller model sizes (less than 1B), indicating that PCformer has significant potential to scale up to larger model sizes. Given the interest from reviewers in seeing how PCformer performs in LLM evaluations, we have scaled up the model size from 340M to 1.3B parameters and increased the training data from 6B tokens to 100B tokens. These efforts have pushed the limits of our current computational resources. The summarized results are provided in the general response W2. We hope these results address your concerns.
- We recognize the importance of showcasing results on even larger architectures like the 7B Mistral. However, achieving this would require additional computational resources. At present, we plan to train a PCformer based on the Mistral checkpoints and restart the training process with significantly less data compared to what was originally consumed in their pretraining. We are committed to conducting these more challenging experiments as soon as we have access to the necessary computational resources. Thank you for your valuable feedback.
> W4: More details of the ablation (Table 10)
Very good suggestions. Here we add the simpler first-order methods and ODE Transformer for each setting. The results are below:
| # | Predictor | Corrector | En-De | En-Fr | En-Ro | OPUS |
| ---- | -------------------- | --------------------- | ----- | ----- | ----- | ---- |
| 1 | First-order baseline | - | 29.91 | 43.22 | 34.20 | 31.5 |
| 2 | ODE Transformer | - | 30.77 | 43.96 | 35.28 | 32.3 |
| 3 | RK2-block with EMA | Multistep Method | 30.70 | 44.27 | 35.55 | 33.7 |
| 4 | RK2-block with EMA | Backward Euler Method | 30.95 | 43.68 | 36.00 | 33.2 |
| 5 | Multistep Method | Multi-step Method | 30.30 | 43.92 | 35.30 | 33.0 |
| 6 | Multistep Method | RK2-block with EMA | 29.78 | 42.68 | 34.40 | 32.5 |
| 7 | Multistep Method | Backward Euler Method | 30.30 | 43.62 | 35.27 | 32.8 |
We observe that all settings, except the multistep predictor with a high-order corrector (#6), show consistent BLEU improvements over the baseline (#1). This phenomenon is explained in our current manuscript: the predictor needs to be accurate enough, otherwise the initial value may cause the subsequent corrector computations to deviate. Additionally, both the multistep and backward Euler correctors show substantial improvements over the ODE Transformer. Our proposed PCformer is a framework in which a proper corrector can be chosen according to the complexity of the training data, and the RK2-block with EMA as the predictor always behaves well. We hope these results address your concern.
> Q1: About fair comparison in experiments.
We mainly want to compare the PCformer with the vanilla Transformer in two settings.
- Firstly, as we only introduce quite small parameters, just a 1D tensor for EMA coefficient learning and several layernorm to achieve RK-Norm, we compare these two models in similar (near the same) parameters. The PCformer beats the vanilla Transformer by a quite large margin nearly in all scenarios.
- Secondly, we can compare them in the same FLOPs, for example, a 6-layer PCformer with much fewer parameters can beat a 12-layer vanilla Transformer, meanwhile fewer computations.
- Thirdly, for LLMs, we find that PCformer beats the Transformer even when the former is trained with 50B tokens and the latter with 100B tokens. As we all know, increasing the training data is the most effective and simplest way to improve performance, yet even in this severe setting PCformer still performs strongly.
Due to the limited time and page, we will add more comparisons in all experiments for a much stronger claim.
> Q2.1 The parameterization of $\mathcal{F}(P_{t+1}, \theta_t)$ is not clear from the textual description. What kind of transformation is used here?
As we emphasized, $\mathcal{F}(P_{t+1},\theta_t)$ is the function that computes the derivative of the input. Thus, either a self-attention network, an FFN, or even a whole encoder block can be regarded as an $\mathcal{F}$ function; they can also be seen as functions at different granularities. In this work, for a fair comparison with the ODE Transformer, we choose the whole block as the function.
---
Rebuttal 2:
Title: Further response to the remaining questions
Comment:
> Q2.2 Details of $\alpha$
Just a single coefficient; the code is as follows:
```
# a single learnable scalar coefficient, initialized to 0.5
self.alpha = torch.nn.Parameter(torch.Tensor(1))
self.alpha.data.fill_(0.5)
```
Take a 2-order EMA as an instance; the final layer output is computed as follows:
```
# EMA-weighted combination of the residual and the stored
# intermediate approximations (the most recent term gets weight alpha)
x = residual + self.alpha * (1 - self.alpha) * runge_kutta_list[0] + self.alpha * runge_kutta_list[1]
```
where `runge_kutta_list` stores the previously obtained intermediate approximations ($\hat{F}_i$).
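For illustration, this 2-order combination generalizes to $n$ stored approximations with weights $\alpha(1-\alpha)^k$; the sketch below (our own standalone function, not taken from the released codebase) makes the weighting explicit and works on tensors or plain scalars:

```python
def ema_combine(residual, approximations, alpha):
    # EMA-style weighting: the most recent approximation gets weight alpha,
    # and each older one is damped by an extra factor of (1 - alpha).
    out = residual
    for k, f_hat in enumerate(reversed(approximations)):
        out = out + alpha * (1 - alpha) ** k * f_hat
    return out

# With alpha = 0.5 and two approximations [F0, F1], this reproduces the
# 2-order combination above: residual + 0.5*0.5*F0 + 0.5*F1
```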
We have uploaded our code to an anonymous GitHub repository; more details can be found in our codebase.
> Q2.3 Whether $\alpha$ is fixed
Similar to the previous question, we use 0.5 as an initial value for $\alpha$, but it is learnable; $\alpha$ is therefore updated by gradient descent during the training phase. We apologize for the missing details on $\alpha$ and $\gamma$: both are learnable tensors with an initial value of 0.5. We will include this in the next version of the paper.
> Q3. Is the observed improvement worth the additional computational cost for obtaining a better approximation?
For sequence generation models that follow an encoder-decoder paradigm, our models are quite competitive, and the observed improvement is indeed worth the additional computational cost. Actually, we have already compared our PCformer with the simpler ODE Transformer on sequence generation tasks. We can see that PCformer (RK2) beats ODE Transformer (RK4) with stronger performance and less computational cost. For example, in Table 1, PCformer (2-order) beats RK4-block (EMA) on both the En-De and En-Fr tasks with one fewer forward computation (the former uses 2 predictor and 1 corrector forward computations, while the latter consumes 4 forward computations for the high-order solution). Similar phenomena can be observed on the abstractive summarization tasks (Table 2).
> Q4: typos and the parameters of the models displayed in Table 4.
Yes, we apologize for the typo; we mean "ROUGE results" instead of "Rough results" and will correct it in the next version. Regarding the number of parameters, all models listed in Table 4 have a similar number of parameters, approximately 63M. This corresponds to a Transformer-base configuration, which includes 6 encoder layers and 6 decoder layers, each with a hidden size of 512 and 8 attention heads.
> Q5: What is the size of the models in Table 8 and is it the same for all variants?
The configurations of the models used to approximate the truncation errors are detailed in Appendix C.3. Specifically, we used a Transformer language model (decoder-only) with a hidden size of 512, tested in both 1-layer and 2-layer settings. Note that all variants are the same size. By comparing the results of the 1-layer and 2-layer models, we can see that both the ODE Transformer and our PCformer significantly reduce truncation errors, as measured by PPL. For instance, the PCformer (2nd order, but with 1-layer parameters) outperforms the Residual-Block (2-layer) and even surpasses the RK4-block (EMA) with fewer forward computations. This also provides context and a partial answer to your Q3 regarding computational efficiency.
> Q6: Comparison with a simple 1-order single/multi-step method or high-order 1-step method.
Yes, our PCformer significantly outperforms single/multi-step methods and high-order 1-step methods. Perhaps there was some oversight, but it is important to note that the single 1-step 1-order method corresponds to the vanilla Transformer (denoted as "Residual-block" in most tables). Transformer-DLCL represents a specific case of a multi-step method. We compared these models with our PCformer in Table 1. Additionally, the high-order 1-step method is represented by the ODE Transformer. We have already included comparisons with these methods in our MT experiments. We will ensure that these results are included in subsequent versions to provide a more comprehensive evaluation.
> Q7: Details of the BERT training.
Both the PCformer and the BERT models in Table 7 share the same parameter count, with approximately 335M parameters each. Specifically, both models have a hidden size of 1024, an FFN filter size of 4096, and 16 attention heads. We have aligned all settings with those specified in the original BERT paper to ensure a fair comparison. For PCformer, we pre-trained the model from scratch using a combination of WikiText and BookCorpus datasets. After the pre-training phase, we fine-tuned PCformer on the GLUE downstream tasks.
---
Rebuttal Comment 2.1:
Title: Response to authors
Comment: Thank you for the detailed responses to my questions and additional results! My main concerns have been addressed and I decided to increase my score:
- The scaling experiments with LLMs are quite promising and increased my confidence that the results hold on larger model sizes.
- In terms of efficiency, the results show that there is no significant overhead introduced, which addressed my main concern. It was also great to see results that justify the parameter efficiency (e.g., performance of PCformer 340M vs. Transformer 1.3B).
- The new ablation results clarify the differences compared to simpler baselines. It appears that the proposed methods have a fairly good improvement for translation that is greater than 0.5-1 point.
The rest of the answers provided helpful clarifications, it would be great if they are reflected in the final version.
---
Reply to Comment 2.1.1:
Title: Thanks for the reconsideration
Comment: Thank you for reconsidering our work. We greatly appreciate the valuable suggestions you have provided, which we believe will further enhance our research. We plan to include these results and make thorough revisions in our next version. Thank you once again for your efforts! | Summary: This paper presents an approach to improve the performance of Transformer models for conditional natural language generation (machine translation and summarization). The authors introduce a predictor-corrector framework, inspired by numerical methods for solving ordinary differential equations (ODEs), to enhance the accuracy and stability of the models. The proposed model is evaluated against standard Transformer models on multiple benchmark datasets. The presented results show improvements in prediction accuracy and efficiency when compared to standard models.
Strengths: The application of the predictor-corrector paradigm to Transformer models is innovative and offers a new perspective, extending the ODE transformer proposed in (Li et al., 31).
The integration of high-order ODE solutions into the Transformer architecture is a novel contribution. The authors use EMA (Exponential Moving Average) coefficient learning to enhance training stability.
The experimental results look comprehensive, with evaluations on multiple benchmark datasets demonstrating the efficacy of the proposed model.
The paper provides a detailed description of the methodology.
The paper is well-organized and clearly written, with each section leading to the next.
Weaknesses: The paper does not compare its results with state-of-the-art models, which could have provided a more comprehensive evaluation of its effectiveness. For instance, on WMT14 En-Fr, one can reach 43 BLEU, compared to 41 for Attention Is All You Need. Moreover, the difference with the ODE Transformer is very limited.
The theoretical justification for the predictor-corrector framework could be more detailed, particularly in explaining how it relates to and improves upon existing methods.
The computational overhead induced by the proposed method is only lightly discussed in the Appendix and would require a more in-depth discussion.
Technical Quality: 3
Clarity: 3
Questions for Authors: - How does the predictor-corrector framework perform in comparison to state-of-the-art pretrained Transformer models?
- Do you think that all the 72 references are necessary?
- The quotation of Newell 59 is maybe a bit too much. Don't you think?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors should discuss the potential computational overhead introduced by the predictor-corrector framework.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your constructive advice; we believe all the concerns will be well addressed in our improved version. We would like to address your concerns as follows:
> W1: The paper does not compare its results with state-of-the-art models, which could have provided a more comprehensive evaluation of its effectiveness.
Thank you for pointing out this issue. Due to page limitations, we focused our comparisons on the most closely related work, particularly methods designed based on numerical methods, such as Transformer-DLCL, MacroNet, and the ODE Transformer. Notably, the ODE Transformer is a strong baseline on the WMT En-De and En-Fr benchmarks, having achieved state-of-the-art results on these tasks. In addition, for the OPUS task, we compared PCformer with SoTA models like DeepNet and BranchNorm, which are leading models on this benchmark. To the best of our knowledge, PCformer indeed achieves SoTA results on WMT En-De (without pretrained models), En-Fr, En-Ro, and OPUS among these machine translation tasks. As for other tasks, such as summarization, PCformer shows competitive results compared to baselines trained from scratch. The current SoTA models on these tasks are primarily fine-tuned versions of BART and other advanced pretrained models. We will aim to make a fair comparison with these models in our next version. Thank you for your understanding and your valuable feedback.
> W2: The theoretical justification for the predictor-corrector framework could be more detailed, particularly in explaining how it relates to and improves upon existing methods.
- Compared to existing high-order methods such as Runge-Kutta (ODE Transformer), our PCformer leverages a predictor-corrector paradigm to estimate intermediate approximations more accurately. Notably, we introduce an EMA-based coefficient learning method to enhance the robustness and stability of our high-order predictor, and the corrector then applies a multistep method to further reduce truncation errors.
- Theoretical Improvements: Truncation error is a crucial factor in numerical solutions. In Section 3.1.2, we provide a theoretical analysis demonstrating that higher-order intermediate approximations tend to be more accurate, leading to more reliable final results. In Table 8, we show that PCformer achieves lower PPL (analogous to truncation error) than the 1-step and high-order methods.
- Practical Superiority: As demonstrated in our experimental results section, our framework significantly outperforms existing models, including the robust 3.8B DeepNet, on several large-scale datasets while using only one-third of the parameters.
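To make the truncation-error argument concrete, recall the standard local-error expansions from numerical analysis (textbook results for a smooth solution $y$, not bounds specific to PCformer): for the first-order (Euler-style) step and the 2-step Adams-Bashforth predictor,

$$y(t_{n+1}) - \big[y(t_n) + h\,f(t_n, y(t_n))\big] = \frac{h^2}{2}\,y''(\xi_1),$$

$$y(t_{n+1}) - \Big[y(t_n) + \frac{h}{2}\big(3f(t_n, y(t_n)) - f(t_{n-1}, y(t_{n-1}))\big)\Big] = \frac{5h^3}{12}\,y'''(\xi_2),$$

so moving to higher-order (and corrector-refined) updates shrinks the per-step error from $O(h^2)$ to $O(h^3)$ or better; this is the sense in which more accurate intermediate approximations yield more reliable final results.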
> W3: The computational overhead induced by the proposed method is only lightly discussed in the Appendix and would require more in depth discussion.
Thank you for your advice. We regret that we were unable to include this discussion in the main body of the paper due to time constraints. We would like to provide the following summary of the complexity and computational overhead introduced by our proposed method; for details, please refer to the general response W1.
In a nutshell, PCformer introduces only slight inference overhead, as we mainly apply the predictor-corrector on the encoder side, which accounts for a very small amount of computation in the process of autoregressive decoding.
> Q1: How does the predictor-corrector framework perform in comparison to state-of-the-art pretrained Transformer models?
We have conducted extensive experiments across several benchmarks. Our proposed PCformer not only significantly outperforms the vanilla Transformer but also shows a substantial improvement over the strong ODE Transformer. We believe your question may be directed towards understanding how PCformer performs in comparison to other large pretrained language models (LLMs). If our interpretation is incorrect, please let us know, and we will address the specifics accordingly.
Due to the limited number of response characters, please refer to the general response W2 for the newly proposed results on the LM harness evaluation. We can see that PCformer beats the Transformer baseline across all settings. We will further scale the model to a much larger capacity in the future, e.g., 7B/8B models, to make a fair comparison with LLaMA models. While training even larger LMs from scratch is costly and time-consuming, it is a promising direction for further research.
> Q2: Do you think that all the 72 references are necessary?
We have carefully reviewed all the references cited in our work and believe that almost all of them are necessary. The large number of references is due to the comprehensive nature of our study, which spans five distinct tasks: machine translation, abstractive summarization, language modeling, natural language understanding, and LM harness evaluation. For each task, we report results on several widely used benchmarks, such as WMT En-De, En-Fr, En-Ro, OPUS, nine test sets for BERT, and eight test sets for LM harness evaluation. To ensure a fair comparison and to validate our results, it was essential to cite the datasets and the results from related work. Consequently, the extensive referencing reflects the breadth and depth of our study rather than an excess. We will further check the details to avoid including unnecessary references.
> Q3: The quotation of Newell 59 is maybe a bit too much. Don't you think?
Thank you for pointing this out. Our intention was to illustrate that the predictor-corrector paradigm aligns with the seminal ideas of Newell, as also mentioned in the Tree-of-Thought paper. Specifically, the predictor first provides an initial estimate. This estimate is then refined by the corrector for greater accuracy, mirroring the problem-solving process proposed by Newell. We appreciate your feedback and will carefully reconsider and reorganize this section in the next version of our manuscript to ensure it is succinct and relevant.
---
Rebuttal 2:
Comment: Thank you for all your comments and discussions. I acknowledge I have read them, but I stand by my original review. | Summary: The paper presents advancements in Transformer architecture to minimize errors in approximating solutions to Ordinary Differential Equations (ODEs).
The contributions are:
- Introducing a learning paradigm with a high-order predictor and multistep corrector to reduce truncation errors.
- Proposing an exponential moving average-based method to enhance the predictor's learning ability and stability by replacing constant coefficients with dynamic ones.
- Demonstrating superior performance across various benchmarks, including machine translation, abstractive summarization, language modeling, and natural language understanding, achieving notable improvements in BLEU scores and parameter efficiency.
The work shows improvements in translation tasks and highlights the general applicability of the proposed methods across different natural language processing domains.
Strengths: The paper introduces a new predictor-corrector framework within Transformer architectures, which is a fresh and innovative approach to addressing errors in approximating solutions to ODEs. The integration of a high-order predictor and multistep corrector, combined with an exponential moving average (EMA) coefficient learning method, represents a creative combination of established numerical analysis techniques with modern neural network architectures.
The paper successfully applies the method across various natural language processing tasks. The paper is well-structured and clearly written, making complex concepts accessible.
Weaknesses: The complexity of implementing these methods might be a barrier for practical adoption. The paper could benefit from providing more detailed guidelines or code to facilitate easier implementation and replication of the results.
The experiments primarily focus on natural language processing tasks. While the results are impressive, it remains unclear how well the proposed methods generalize to other domains, such as time series forecasting. Including preliminary results or discussions on the potential applicability to these other areas could strengthen the paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: Can you provide more details on the computational cost of the proposed predictor-corrector framework and EMA coefficient learning method? Specifically, how do these methods impact training time and resource utilization compared to traditional Transformer models?
How does the proposed method scale with increasing model size and dataset complexity? Have you encountered any challenges in scaling up your approach, and if so, how did you address them?
Confidence: 1
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Provide a detailed analysis of the computational complexity and resource requirements of the proposed methods. This should include a comparison with standard Transformer models and an explanation of any trade-offs between performance improvements and computational costs.
Discuss the scalability of the proposed methods in more detail. Explain any challenges encountered when scaling up to larger datasets or models and how these were addressed. Provide insights into potential limitations in terms of scalability and how future work could overcome these challenges.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for recognizing our writing and the way we present our ideas. We would like to answer your questions as follows:
> W1: The complexity of implementing these methods might be a barrier for practical adoption. The paper could benefit from providing more detailed guidelines or code to facilitate easier implementation and replication of the results.
We open-source our core code at https://anonymous.4open.science/r/Neurips-PCformer/. This allows others to easily reproduce our results and apply PCformer to additional benchmarks. We will also release the complete codebase after finalizing the README.md to facilitate better reproducibility.
> W2: The experiments primarily focus on natural language processing tasks. While the results are impressive, it remains unclear how well the proposed methods generalize to other domains, such as time series forecasting. Including preliminary results or discussions on the potential applicability to these other areas could strengthen the paper.
We mainly evaluate our PCformer on NLP tasks, covering both natural language understanding and natural language generation. However, our method is quite general and can be applied to other Transformer variants in other areas, e.g., Swin-Transformer in computer vision. For example, our PCformer can also improve Swin-Transformer in the tiny and base configurations. These experiments were conducted with a 224*224 image size, using a 2nd-order predictor with a multistep corrector.
| Model | Params. | Image size | Top1-acc |
| ---------------------- | ------- | ---------- | -------- |
| Swin-Transformer(tiny) | 29M | 224*224 | 81.3 |
| PCformer(tiny) | 29M | 224*224 | 82.0 |
| Swin-Transformer(base) | 88M | 224*224 | 83.5 |
| PCformer(base) | 88M | 224*224 | 84.0 |
Besides this, we also follow your suggestion and evaluate PCformer on time-series forecasting tasks. We select 10 multivariate datasets from the UEA Time Series Classification Archive, following the setting and the codebase provided by Flowformer [1]. We therefore choose Flowformer as the baseline, which is also a strong model on these test sets. Specifically, we build PCformer upon Flowformer and use a 2nd-order predictor with an Euler corrector, as the training data is very small. We also use RK-Norm to prevent overfitting, since the authors of Flowformer trained their models for up to 100 epochs (or even 400 epochs on some tasks). The results are evaluated by the best accuracy. We can see that PCformer beats Flowformer by 2 points in average score across the 10 test sets, which demonstrates its effectiveness on time-series forecasting tasks. We hope these results address your concern, and we would like to add them to the updated version of this paper.
| Dataset | Flowformer | PCformer |
| -------------------- | ---------- | -------- |
| EthanolConcentration | 30.3 | 33.9 |
| FaceDetection | 67.0 | 68.2 |
| Handwriting | 29.1 | 33.5 |
| Heartbeat | 77.0 | 78.5 |
| JapaneseVowels | 98.4 | 99.2 |
| PEMS-SF | 87.2 | 87.9 |
| SelfRegulationSCP1 | 89.0 | 92.2 |
| SelfRegulationSCP2 | 55.0 | 56.1 |
| SpokenArabicDigits | 98.0 | 100.0 |
| UWaveGestureLibrary | 85.3 | 86.3 |
| Average Score | 71.6 | 73.6 |
[1] Flowformer: Linearizing Transformers with Conservation Flows, ICML 2022.
> Q1: Can you provide more details on the computational cost of the proposed predictor-corrector framework and EMA coefficient learning method? Specifically, how do these methods impact training time and resource utilization compared to traditional Transformer models?
Our proposed PCformer is parameter-efficient as we share the parameters between the predictor and the corrector, especially for the F functions in the high-order predictor. Please refer to the general response W1 for more details.
> Q2: How does the proposed method scale with increasing model size and dataset complexity? Have you encountered any challenges in scaling up your approach, and if so, how did you address them?
Our PCformer can be easily scaled in both model capacity and training tokens. We summarize the results for larger model capacity (from 340M to 1.3B) and more training tokens (from 6B to 100B) in the general response W2, where PCformer beats LLaMA-like models in all scenarios. Additionally, the results in Table 3 have already shown the superiority of PCformer in a 1.2B setting on OPUS multilingual translation tasks.
We think there is only one major challenge: the computational overhead of PCformer is somewhat higher than that of the vanilla Transformer (e.g., LLaMA models). This limitation has been discussed in our paper. We are actively working to reduce the training and inference overhead in our ongoing research.
---
Rebuttal 2:
Title: Any Further Concerns About Our Work
Comment: Dear Reviewer 1pKC,
We apologize for reaching out as the discussion deadline approaches. We are grateful for the comprehensive and useful feedback you provided, and we have responded in detail to your comments during the rebuttal phase, e.g., new results on time-series forecasting, the practical cost of training and inference, more strong results on larger LLMs using PCformer and so on.
We are eager to know if our proposed results and clarifications have adequately addressed your concerns. If so, we would appreciate it if you could reconsider your score.
Thank you once again for your time and effort.
Best regards,
The Authors
---
Rebuttal Comment 2.1:
Title: Looking Forward to follow-up discussion
Comment: Dear Reviewer 1pKC,
Thank you once again for your suggestions and valuable feedback. We apologize for reaching out again as the deadline approaches. To address your concerns, we have conducted experiments on both the time-series forecasting task and the image classification task. Additionally, we have provided data on the practical costs of training and inference for both the encoder-decoder and decoder-only paradigms. We have now uploaded the results of a 3B PCformer using 16B and 50B tokens, respectively. The results show that there are no major challenges for its scaling up. If our results and explanations help address your concerns, we would be grateful if you could acknowledge our rebuttal and consider adjusting your score accordingly.
Best wishes
The Authors. | Rebuttal 1:
Rebuttal: Thank you to all four reviewers for your efforts and instructive comments on our paper. We believe these updated results address your concerns regarding the efficiency and effectiveness.
> W1: about the computation overhead, inference and training cost comparison.
| Model | Layers | Inference | Memory | BLEU |
| ------------------------ | ------ | --------- | ------ | ---- |
| Transformer | 6 | 98.7 | 13.2 | 29.2 |
| Transformer | 12 | 94.5 | 18.7 | 29.7 |
| Transformer | 24 | 87.3 | 23.5 | 29.8 |
| ODE Transformer (RK2) | 6 | 93.5 | 15.1 | 30.7 |
| PCformer (RK2 predictor) | 6 | 90.3 | 16.2 | 30.9 |
| ODE Transformer (RK4) | 6 | 87.1 | 17.3 | 30.5 |
1. The table above compares the inference speed (sentences/s) and memory consumption (GB) of various models in the big configuration on WMT En-De. The experimental results show that the proposed PCformer models achieve inference speeds comparable to the baselines. This is primarily because MT models typically follow an encoder-decoder paradigm, with the main computational overhead coming from the decoder side due to auto-regressive decoding, rather than the encoder, on which we primarily experimented.
2. In terms of memory consumption, PCformer is also competitive. It consumes slightly more memory during the forward computation phase because we need to store previously obtained approximations for the multistep method (corrector), as well as iteratively generated inner step approximations for higher-order predictors. This slightly increases the memory consumption for encoder-decoder models.
3. However, if the proposed predictor-corrector paradigm is applied to encoder-only or decoder-only models, such as BERT for the former and LLMs for the latter, the additional overhead is non-negligible. For example, our PCformer would consume about twice the training time compared with the vanilla baseline in LLM training. Despite this, the performance gains are still noticeable. These observations motivate us to develop more efficient variants to overcome the computational overhead.
4. Fortunately, we have already collected some promising results in this direction. We attempted to compute the high-order approximations in latent space, and current experimental results show only a quite small performance gap compared with the full PCformer on most tasks, while being much more computation-friendly.
> W2: More results on LLMs.
| Model(Params & Tokens) | Wiki.(ppl) | LMB.(ppl) | LMB. | PIQA | Hella. | SCIQ | ARC-c | Winogrande | Avg. |
| -------------------------- | ---------- | --------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- |
| Transformer++ (340M & 6B) | 38.5 | 96.1 | 21.4 | 60.3 | 29.1 | 69.2 | 21.5 | 50.4 | 41.9 |
| PCformer (340M & 6B) | 35.3 | 78.8 | 23.6 | 61.6 | 30.1 | 71.6 | 22.9 | 51.8 | 43.6 |
| Transformer++ (340M & 16B) | 28.3 | 65.3 | 29.8 | 63.2 | 33.9 | 73.2 | 23.1 | 51.4 | 45.8 |
| PCformer (340M & 16B) | 25.6 | 39.7 | 34.5 | 65.2 | 36.9 | 79.6 | 23.2 | 52.2 | 48.6 |
| Transformer++ (1.3B & 16B) | 23.8 | 26.2 | 37.3 | 65.7 | 37.6 | 78.6 | 23.7 | 51.5 | 49.0 |
| PCformer (1.3B & 16B) | 20.9 | 23.2 | 42.5 | 68.3 | 43.4 | 81.5 | 25.1 | 52.4 | 52.2 |
| Transformer++(1.3B & 100B) | 16.3 | 11.8 | 51.6 | 71.0 | 51.7 | 86.7 | 28.1 | 54.6 | 57.2 |
| PCformer (1.3B & 50B) | 16.2 | 9.38 | 55.1 | 71.9 | 54.8 | 88.6 | 29.6 | 57.2 | 59.5 |
| PCformer (1.3B & 100B) | **14.0** | **7.46** | **59.6** | **73.8** | **60.0** | **90.7** | **31.7** | **61.7** | **62.9** |
1. Here we scaled the model size from 340M to 1.3B parameters, the maximum size feasible within the limited time. For the data, we scaled from 6B to 16B and 100B tokens. PCformer significantly beats the baseline at similar parameter counts, and with more training data, the average performance improves significantly. This demonstrates that our model benefits from increased data size without experiencing diminishing returns, indicating that model bias persists and plays a crucial role in our setting (larger settings are still worth exploring in future work). As the model size increases (from 340M to 1.3B), PCformer shows substantial performance gains, both in terms of accuracy on the LM harness evaluation and lower perplexity on the Wikitext and Lambada tests.
2. We trained the 1.3B model on a cluster of 256 A100 GPUs. Note that the 1.3B models consist of 24 layers, where the hidden size is 2048 and the FFN size is 5432 (8/3 × hidden size), with 16 attention heads and SiLU activation functions. The baseline (1.3B + 100B tokens) took about 20 hours to train, and our PCformer nearly 40 hours. Thus PCformer (1.3B + 50B tokens) was trained within a similar time.
3. Additionally, given that PCformer consumes more FLOPS per forward pass, we compared models with similar total FLOPS consumption.
- Our PCformer trained on less than 50B data outperformed Transformer++ trained on 100B data by a significant margin. Additionally, PCformer (340M & 16B) achieved results comparable to Transformer (1.3B & 16B) with nearly 1/4 of the parameters. These results demonstrate that PCformer remains competitive in settings with larger model capacity and data volumes, highlighting the potential for further research in model designs that fully utilize available data.
- While the increased inference and training costs in the decoder-only paradigm are notable, the substantial performance gains justify continued exploration of PCformer, including parameter-efficient training algorithms. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
GenArtist: Multimodal LLM as an Agent for Unified Image Generation and Editing | Accept (spotlight) | Summary: This paper presents a method by which an MLLM (gpt 4) is used as a task planner for image editing tasks. A catalog of off the shelf models is used as a set of tools and the planner constructs a task tree, executes proposed tasks, performs verification, and optionally backtracks.
The authors claim the following contributions:
* a unified image generation and editing system
* a planner which can construct a task tree and perform verification
* the ability to perform tool selection
Strengths: I really like the concept. It matches intuition that given a strong enough controller / planner, you might be able to perform editing more easily with a bag of tools.
Quantitative evaluations show substantial improvements.
Good writing.
Weaknesses: Probably should cite Gupta, et. al. "Visual Programming: Compositional Visual Reasoning Without Training", CVPR 2023.
No comparison with other MLLMs.
Contributions could be further refined and clarified.
Technical Quality: 3
Clarity: 3
Questions for Authors: I would recommend picking one or two more MLLMs and doing a comparison. I think it's fine that this paper serves as an existence proof that there exists an MLLM with planning capabilities suitable for multi-step image editing, but I didn't get a very good sense of how that performance is impacted by product-specific fine tuning and alignment that might happen behind the scenes with gpt 4.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: seems fine
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Probably should cite Gupta, et. al. "Visual Programming: Compositional Visual Reasoning Without Training", CVPR 2023.**
A1:
Thank you for your suggestion. This paper uses language models to generate visual programs for compositional visual tasks, which are then executed. In concept, it indeed shares some similarities with AI agents. The main difference is that our GenArtist focuses specifically on the field of image generation and editing. In addition to tool execution, we also introduce verification and self-correction mechanisms to ensure the reliability of the results. In terms of planning, we propose the planning tree method, tailored to the specific characteristics of image generation. We will cite this paper in the related work section.
**Q2: No comparison with other MLLMs.**
A2:
Table 3-1: The performance of GenArtist on T2I-CompBench with various MLLMs.
| |color|shape|texture|
| :--: | :--: | :--: | :--: |
|llama2 (70B) + llava|0.7874|0.6223|0.6876|
|Mixtral-8x7B + llava|0.8142|0.6467|0.7342|
|GPT4-V|0.8482|0.6948|0.7709|
It is relatively difficult for most open-source MLLMs to produce output in the user-specified format, which makes them inconvenient to use within the system. For better results, we only utilized GPT-4V as the MLLM agent in the original paper.
To experiment with more MLLMs, we utilize LLaVA for multimodal verification, feeding its outputs into large language models such as LLaMA or Mixtral for generating specific planning results. In this way, we conduct experiments on T2I-CompBench with two more MLLM agents and list the results in the above Tab. 3-1. As can be seen, our method can be applied to various MLLMs, demonstrating its flexibility in agent selection. Although the performance with open-source MLLMs is not as high as with GPT4-V, it still shows a significant improvement compared to existing single models. This further demonstrates the effectiveness of GenArtist.
**Q3: Contributions could be further refined and clarified.**
A3:
Thanks for your suggestion, we will refine and clarify our contributions in the introduction section of the final version.
Overall, we propose a new paradigm for the field of image generation and editing. Instead of utilizing a single model to directly generate the corresponding image, we delegate different functions to different models. An AI agent manages the utilization and sequence of these different models and performs verification and correction on the outputs. By leveraging the strengths of multiple models and the image understanding capability of the MLLM, our GenArtist significantly enhances the reliability of image generation and editing. We hope this innovative approach can provide new insights for future work in this field.
In terms of method design, compared to other existing agent-related methods, our framework is specifically tailored to the characteristics of the image generation and editing field. To address the challenges of complex prompts and the often unreliable generated images, we conduct decomposition and design a planning tree for task planning. Additionally, we introduce position information to enhance the spatial knowledge of the MLLM agent. These designs enable our GenArtist to more effectively tackle image generation related tasks, resulting in improved performance.
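To make the paradigm concrete, the overall control flow can be sketched as follows. This is a schematic toy only (strings stand in for images; `run_agent`, `Step`, `verify`, and `correct` are hypothetical stand-ins, not GenArtist's actual API):

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Step:
    tool_name: str
    goal: str

def run_agent(steps: List[Step],
              tools: Dict[str, Callable[[str, str], str]],
              verify: Callable[[str, str], bool],
              correct: Callable[[Step], Step],
              max_corrections: int = 3) -> str:
    """Schematic plan-execute-verify-correct loop. `tools` maps tool names
    to editing functions, `verify` plays the role of the MLLM checker, and
    `correct` revises a failed step (standing in for backtracking in the
    planning tree)."""
    image = "blank canvas"
    for step in steps:                         # linearized planning-tree order
        image = tools[step.tool_name](image, step.goal)
        tries = 0
        while not verify(image, step.goal) and tries < max_corrections:
            step = correct(step)               # self-correction / backtracking
            image = tools[step.tool_name](image, step.goal)
            tries += 1
    return image

# Toy usage: strings stand in for images; "verification" is substring matching.
tools = {"t2i": lambda img, goal: f"image({goal})",
         "edit": lambda img, goal: f"{img}+{goal}"}
result = run_agent(
    steps=[Step("t2i", "a red car"), Step("edit", "add a dog")],
    tools=tools,
    verify=lambda img, goal: goal in img,
    correct=lambda step: step)
print(result)  # image(a red car)+add a dog
```

The key design point this sketch illustrates is that verification sits between every tool execution and the next planning decision, so unreliable single-model outputs can be caught and corrected rather than propagated.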
---
Rebuttal Comment 1.1:
Title: Rebuttal response
Comment: Rebuttal looks good, thank you for the MLLM comparison. I'm bumping my rating up to weak accept.
---
Reply to Comment 1.1.1:
Title: Thanks
Comment: Dear Reviewer
We are happy that our reply addresses your concerns on the MLLM comparison. We are sorry to bother you, but it seems that the score has not been changed yet. We would appreciate it if you could update the score at your convenience.
Thanks | Summary: This paper introduces a new multi-modal agent for image generation and editing that break down these tasks into subproblems to solve with external tools, including self correction module with verification feedback.
Strengths: - While the idea of augmenting a large (multimodal) language model with tools and turning it into an agent for complex tasks is not new, this work presents a novel multi-modal agent for image generation and editing that combines different existing ideas such as tool use, tree planning, and verification, etc.
- The authors demonstrate strong positive results on two benchmarks for image generation and editing respectively compared to baselines, which wouldn’t have been possible without great execution.
- The authors conducted ablation studies and present evidence that shows the importance of the different components e.g. planning tree vs. chain in the system.
- The paper is well written with informative and clear illustrations.
Weaknesses: - While the experiments authors conducted are well-motivated and support their claims, the paper would be stronger if it performed a deeper analysis, for example, on common error cases, or via more finegrained ablation studies on the tools / positive-aware tool execution.
- It would also be stronger if the authors evaluated the system on additional benchmarks such as GenAI-Bench: A Holistic Benchmark for Compositional Text-to-Visual Generation, although the reviewer understands that it might be impossible due to short rebuttal period.
- the paper lacks some technical details about the method such as the underlying model.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Did authors use language-only GPT4 or GPT4-V or GPT4o? For reproducibility, which version of GPT4 was used?
- One of the common failures of such multi-modal agent systems is due to errors in tool outputs. I wonder if the same issue exists in this system, and if so how the authors address such issues e.g. when the bounding boxes output by the localization/detection module are wrong?
Nits: table3: why is smart Edit missing L2 number?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: - No, the authors have not addressed the limitations of the work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: The paper would be stronger if it performed a deeper analysis, for example, on common error cases, or via more finegrained ablation studies on the tools / positive-aware tool execution.**
A1:
* error cases: We show two error cases in the Fig. 2 and Fig. 3 of the rebuttal document. As can be seen, sometimes, despite the agent correctly planning the specific execution of tools, the limitations of the tools themselves prevent correct execution, leading to incorrect results. For example, in Fig. 2 of the rebuttal document, it is required to add a very small blue cup. However, due to the lack of fine resolution ability in existing editing tools, the generated blue cup's size is inaccurate. In addition, as shown in Fig. 3 of the rebuttal document, errors in the output of localization tools can also affect the final result. For instance, when asked to remove the lettuce in the middle of a sandwich, the segmentation model fails to accurately identify the part of the object, leading to the erroneous removal operation.
* fine-grained ablation studies on the tools/position-aware tool execution: Regarding position-aware tool execution, we conduct the corresponding ablation study and list the results in Tab. 2-1 below. We evaluate performance on the spatial and complex aspects of T2I-CompBench. Since multimodal large models are usually not sensitive to position information, the performance without it is limited, showing only a slight improvement from tool selection alone. After introducing position information, which enhances spatial awareness, there is a significant improvement in both the spatial and complex aspects. This validates the reasonableness of our design.
Table 2-1: Ablation study on the position-aware tool execution on the spatial and complex aspects of T2I-CompBench.
| | spatial| complex|
| :--: | :--: | :--: |
|w/o position-aware tool execution| 0.4577 | 0.4083|
|w/ position-aware tool execution | 0.5437 | 0.4499 |
**Q2: It would also be stronger if the authors evaluated the system on additional benchmarks such as GenAI-Bench**
A2:
Table 2-2: The performance of GenArtist on the 'basic' prompts of GenAI-Bench.
| method |attribute|scene|spatial|action|part|overall|
| :--: | :--: | :--: | :--: | :--: | :--: | :--: |
|SDXL|0.84|0.84|0.82|0.83|0.89|0.83|
|DeepFloyd-IF|0.83|0.85|0.80|0.82|0.89|0.83|
|Midjourney v6|0.88|0.87|0.87|0.87|0.91|0.87|
|DALL-E 3|0.91|0.90|0.92|0.89|0.91|0.90|
|GenArtist|0.92|0.90|0.93|0.89|0.92|0.91|
Table 2-3: The performance of GenArtist on the 'advanced' prompts of GenAI-Bench.
| method |count|differ|compare|negate|universal|overall|
| :--: | :--: | :--: | :--: | :--: | :--: | :--: |
|SDXL|0.71|0.73|0.69|0.50|0.33|0.63|
|DeepFloyd-IF|0.74|0.74|0.71|0.53|0.68|0.66|
|Midjourney v6|0.78|0.78|0.79|0.50|0.76|0.69|
|DALL-E 3|0.82|0.78|0.82|0.48|0.80|0.70|
|GenArtist|0.79|0.82|0.79|0.56|0.78|0.74|
We conduct experiments on the GenAI-Bench and report the VQAscores on the basic and advanced prompts separately on the above Tab. 2-2 and Tab. 2-3. As can be seen, GenArtist achieves better performance in almost all aspects of this benchmark, further demonstrating the effectiveness of our approach.
**Q3: the paper lacks some technical details about the method such as the underlying model.**
A3:
In our framework, we utilize GPT4-V as our MLLM agent. For our tool library, we have listed the utilized tools in the Tab. 1 of our original paper. For auxiliary tools, we utilize Grounding DINO as the object detector, SAM as the object segmentor, ControlNet utilized models for ControlNet-related auxiliary tools, and language-only GPT4 as the layout generator. We will also add more detailed introductions about these tools in the final version.
**Q4: For reproducibility, which version of GPT4 was used?**
A4:
We use GPT4-V as our MLLM agent.
**Q5: I wonder if the same issue exists in this system, and if so how the authors address such issues e.g. when the bounding boxes output by the localization/detection module are wrong?**
A5:
Regarding error cases, we analyze this issue in the error case section of "A1" above and include two cases in the rebuttal document. It can be seen that problems within the tool outputs or the localization module can lead to some erroneous results. Utilizing more powerful tools or incorporating human feedback during the verification stage can effectively address this issue. For example, in the case shown in Fig. 3, using open-vocabulary segmentation models with stronger recognition capability, or integrating human feedback to adjust the segmentation mask, can resolve such errors.
**Q6: table3: why is smart Edit missing L2 number?**
A6:
We directly copy the MagicBrush benchmark results from the original SmartEdit paper. Since the original paper only reports four other metrics, we do not include the L2 metric value here.
---
Rebuttal Comment 1.1:
Title: Looking forward to Feedback as Discussion Deadline Approaches
Comment: Thanks for your thorough reviews, which are very helpful to improving the quality of our paper. We apologize for any inconvenience caused, but as the deadline for discussion (Aug 13 11:59 pm AoE) draws near, we would like to provide an update on our progress.
If you need further clarification or have additional questions, please don't hesitate to contact us. Again, we sincerely thank you for your time and effort in reviewing our paper.
Thanks
---
Rebuttal 2:
Title: Concerns addressed?
Comment: Dear reviewer, thank you for providing constructive feedback to the authors! Are your questions satisfactorily addressed by the rebuttal? Would you like to revise your rating or do you need any more information from the authors to make that decision?
---
Rebuttal 3:
Comment: Thanks for the response and additional evaluations! My questions have been addressed and I have no other concerns. I will keep my rating as it is. Please update the revision with additional technical details e.g. GPT4 version for reproducibility. | Summary: This paper presents GenArtist, a system that utilizes MLLMs as agents for image generation and editing, especially for complex language descriptions. The key idea is to first use MLLM to decompose the generation task as an execution tree of various tools such as SDXL and LMD, and then utilize all the tools in a predefined tool library. Extensive experiments on T2I-CompBench and MagicBrush shows promising results.
Strengths: - The problem of unified image generation and editing over arbitrary language instructions is very relevant for both academia and industry. How to leverage the existing powerful LLMs such as GPT-4 and other tools is also very important for the deployment of current academic progress.
- The proposed method is quite simple and is reasonable. The paper is clearly written and the results look good to me.
Weaknesses: Overall, I think this paper has made some good contributions and shown some promising results. However, I hold some concerns as follows,
**Major concern.** My major concern is a possible limitation of the decomposition tree. All images for complex texts are produced by first generating an initial image and then applying editing tools to match the text description. This could be very limited if the initially generated images are not decent: even if the final edited image matches the language description better semantically, the overall quality and layout may not be natural in some cases. The editing trace may be easily detected, which may be attributed to the weakness of the editing models. The editing operations in the paper mostly modify local regions while keeping the original overall structure, and this could be the problem. For example, the generated result with "hot dogs" in Figure 1 does not look natural or of decent quality to me compared to results generated by models like Playground: the image is blurred and the hot dogs look a little fake.
**Human-aligned evaluation.** The current evaluation is mainly conducted on T2I-CompBench and MagicBrush. However, it is well known that traditional metrics such as DINO scores are not aligned with human judgment. As a result, an image of lower quality than another single-expert result may still receive a better score, which can be misaligned with human preference. Can the authors conduct more evaluations on more advanced benchmarks like DreamBench++ [1]?
Minor suggestion: I think the paper organization could be improved by keeping each section or subsection from being split across pages where possible. For example, Sections 4.1 and 4.3 both start at the bottom of a page with very little content, which fragments the reading flow.
[1] DreamBench++: A Human-Aligned Benchmark for Personalized Image Generation.
Technical Quality: 2
Clarity: 4
Questions for Authors: As stated in weakness, I am curious about the results of modern human-aligned benchmarks, such as DreamBench++. I would be happy to see more human-aligned evaluation, if possible.
I am looking forward to the authors' response.
Confidence: 5
Soundness: 2
Presentation: 4
Contribution: 3
Limitations: Yes, the authors have discussed the limitations. However, I would suggest the authors discuss the potential limitation brought by the initial generated results and editing artifacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: My major concern is about the possible limitation in the decomposition tree.**
A1:
* Thank you for your suggestion. We have indeed considered this issue. Therefore, during verification, in addition to verifying the accuracy of the generated images, the agent is also required to assess their aesthetic quality (as illustrated in L173). If the overall quality of the generated image is poor, the agent will utilize different generation tools or choose different random seeds to regenerate the images, in order to ensure their overall quality. For instance, in Fig. 9 of our main paper, the agent regenerates the image due to the low quality of the initially generated image, thereby ensuring the aesthetic quality of the final generated image.
* By explicitly requiring the agent to maintain higher standards for the aesthetic quality of generated images, the overall quality of the generated images can be further improved. We apply this to regenerate the "hot dogs" case in Fig. 1, and list the generated image in the Fig. 1 of the rebuttal document. As can be seen, the image quality is further improved.
* The quality issue is indeed related to the editing tools used. The main reason is that most of the currently utilized editing tools are based on Stable Diffusion 2 or even v1.4. Due to these weaker base models, such editing tools are usually limited in preserving overall image quality compared to larger generative models like SDXL. We expect that as more powerful editing models emerge, we can replace the corresponding models in the current editing tool library and this issue will be alleviated.
* As an agent-centered system, our framework is also flexible in terms of human-computer interaction. During verification, human feedback can be appropriately integrated. By incorporating human evaluation and feedback on the overall quality of the images, the quality of the generated images can be further improved.
**Q2: Can authors conduct more evaluations on more advanced benchmarks like DreamBench++?**
A2:
Table 1-1: The performance on the DreamBench++ benchmark in the GPT score.
| method | concept preservation | prompt following |
| :--: | :--: | :--: |
|BLIP-Diffusion | 0.547 | 0.495|
|Emu2 | 0.528 | 0.689 |
|IP-Adapter-Plus | 0.833 | 0.413 |
|GenArtist| 0.848 | 0.603 |
|GenArtist (more tools) | 0.852 | 0.753 |
We conduct experiments on DreamBench++ and compare our results (in GPT score) with several tuning-free methods listed in the original DreamBench++ paper. Since DreamBench++ is primarily about single-object customization generation, where our current GenArtist framework includes only a few relevant tools, we also expand our tool library by introducing more customization tools, and then conduct the corresponding experiments. The comparative results are listed in the above Tab. 1-1.
As can be seen, the current version of GenArtist has already achieved strong performance on DreamBench++, surpassing all listed tuning-free methods in concept preservation and outperforming most existing models in prompt following. After expanding the tool library, further improvement can be observed, particularly in prompt following. This demonstrates that our framework can achieve better results in both image accuracy and image quality on such a recent benchmark.
**Q3: I think the paper organization could be improved by trying to put each full paragraph on one page or two.**
A3:
Thank you for your suggestion. We will improve the paper organization by adding some additional explanations, such as a discussion of image quality, to ensure that Sections 4.1 and 4.3 do not span across pages.
---
Rebuttal 2:
Title: Thanks, rating kept
Comment: Thank you for your rebuttal efforts and response. The hot dog case does look better, but it is still a little bit fake. However, this issue can be alleviated by using more advanced image editing and generative tools. I appreciate you letting me know about the aesthetic quality correction procedure, which is helpful. I strongly suggest that the authors discuss this issue properly in the revised paper and incorporate all rebuttal experiments into the final paper. Besides, I suggest the authors use the latest GPT-4o instead of GPT-4V and conduct all experiments again, since GPT-4o is the more advanced GPT tool to date, and I think it would be valuable to keep using state-of-the-art tools. All details, including the precise GPT-4o version (e.g., `turbo-2024-04-09`), should be explicitly recorded in the paper.
Overall, I think my concern is resolved, and I will keep my rating for a supportive assessment for this paper. | null | null | Rebuttal 1:
Rebuttal: We include some essential images for the rebuttal in the PDF file here, mainly comprising the regenerated images for the hot dog case and some analysis about error cases.
Pdf: /pdf/7bf66d245cdeb3ecee8358f0df44996956ace38a.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
End-to-End Ontology Learning with Large Language Models | Accept (poster) | Summary: This paper proposes a new methodology for ontology learning using large language models and ontology paths, called OLLM. After introducing two ontology datasets, for Wikipedia and arXiv, the authors train an LLM on titles, summaries, and ontology paths to enable subgraph generation. The method is further improved with post-processing techniques that sum and prune the subgraphs in order to build a complete ontology. New metrics are also presented to facilitate comparison of the generated ontology with the ground truth and with other baseline techniques.
Strengths: • This work presents a novel technique leveraging the power of LLMs and ontology paths for ontology learning.
• Two different ontology-datasets (Wikipedia and ArXiv) are constructed and used for the experiments.
• New metrics for ontology evaluation are proposed based on semantic similarity.
• OLLM seems to outperform most of the baseline methods in the experiments.
• Overall, the paper is well-written.
Weaknesses: • While the OLLM results are better than most of the baseline methods, that is not the case for the motif distance metric (Wikipedia: Finetune = 0.05 while OLLM = 0.080; arXiv: Memorisation = 0.037 while OLLM = 0.097). This is not well explained in the paper.
• Performance seems to drop significantly when using a smaller dataset such as arXiv.
• It is unclear whether OLLM can generalize well to different domains without having an initial ontology for extracting the paths.
Technical Quality: 2
Clarity: 3
Questions for Authors: • Usually the training set constitutes the largest portion of a dataset, larger than the validation and test sets. Why did you choose such a small portion of the datasets for training? Could that be one reason for the initially poor generalisation?
• It is also unclear how much the overlapping parts of training/validation/test sets contribute to the final result.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The main limitation, to my opinion, is that OLLM cannot generate a complete ontology for a different domain without already having an initial ontology for extracting the paths first. This means that there is still the need of reusing or constructing some part of an ontology.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed feedback.
> While the OLLM results are better than most of the baseline methods that is not the case using the motif distance metric (Wikipedia: Finetune = 0.05 while OLLM = 0.080, Arxiv: Memorisation = 0.037 while OLLM = 0.097). That is not well explained in the paper.
We thank the reviewer for raising this point. As the data splits are constructed symmetrically (i.e., the train, validation, and test splits have the same distribution), we expect all data splits to have similar structures. This implies that Motif Distance is biased towards overfitting, since the structure of the train and test splits tend to be similar, even though the represented concepts are different. This explains why methods that tend to overfit (i.e., Memorisation and Finetune) achieve the best scores. We will include this discussion in Section 5.3 in the final version.
> Performance seems to drop significantly when using a smaller dataset such as arXiv.
We would like to emphasise that the two experiments are designed to answer different questions and thus the results are not suitable for direct comparison. We use Wikipedia to test the model’s ability to generalise in-domain, whereas the arXiv experiment tests the transferability using only a small number of training examples. The arXiv task also appears to be more difficult in general, as indicated by the worse performance of the baselines.
> It is unclear whether OLLM can generalize well to different domains without having an initial ontology for extracting the paths.
We agree that training examples are needed to give the best performance. Ontologies come in different “styles” (e.g., the granularity of concepts) which is difficult to specify without training examples. However, we argue that our end-to-end modelling approach is useful in other domains:
1. In the arXiv experiment, we only used 2000 finetuning examples to transfer from Wikipedia to arXiv and achieved good performance. This is a modest cost if one were to apply our model to a new domain.
2. Even without training examples, our 0/1/3-shot prompting method outperforms existing baselines (including the new ablation methods, see general response).
> Usually training sets consist of the biggest part of a dataset than validation and test sets. Why did you choose such small portion of the datasets for training? Could that be one reason for the initial poor generalisation?
We deliberately chose to use a larger test split as we wanted to ensure that sufficiently many concepts and relations were unseen in the training split. This tests the model’s ability to generalise in-domain. Using a larger split for evaluation also helps to get more reliable metrics, as we did not do repeated runs.
As described in Section 5.1, the fundamental cause of the poor generalisation of direct finetuning is the imbalanced sampling of high- and low-level concepts. This results in the model learning high-level concepts much faster, overfitting them while underfitting low-level ones. Increasing the training set size does not solve this issue.
> It is also unclear how much the overlapping parts of training/validation/test sets contribute to the final result.
It is inevitable and desirable that there is an overlap between the data splits. We attempt to quantify the contribution of such overlap to the final result by introducing the “Memorisation” baseline method, which demonstrates the quality of resultant ontology if one were to blindly exploit such overlap. The fact that Memorisation performs poorly suggests that the overlaps do not contribute significantly to the final result.
---
Would the reviewer please consider raising the score if we have addressed their concerns?
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed explanations and additional results. Since my concerns were addressed, I am increasing my score. | Summary: The paper is well-written, addresses a valuable real-world problem, and proposes a simple, intuitive, and effective method. The experiments are rigorously designed with comprehensive evaluation metrics. Overall, this is a high-quality work with notable contributions.
However, there are concerns about whether the improved performance is due to end-to-end modeling or LLM capabilities, the assumption that each document is associated with at least one concept, the consistency of the generated ontology, and the post-processing procedures.
Strengths: - S1: The paper is well-written and easy to follow
- S2: The investigated ontology learning problem is of great value for real-world applications
- S3: The proposed method is simple, intuitive, yet problem-oriented, novel, and effective
- S4: The experiments are well-designed; in particular, the baselines and the data-splitting strategy are rigorously constructed
- S5: The set of evaluation metrics is comprehensive and covers various aspects of performance
Weaknesses: - W1: The paper's main claim is that end-to-end modeling is better than pipelined methods. However, it is unclear whether the improved performance is credited to the end-to-end modeling approach or the capabilities of LLMs. Although I am convinced that end-to-end + LLM is a good solution, I am curious if applying LLMs for the subtasks in the pipeline will work even better, especially considering LLMs are good at decomposing problems and solving them step-by-step.
- W2: The authors assumed each document is associated with at least one concept in the ground truth ontology. However, in real-world applications, it would be very possible that the whole document does not contain any relevant edges or even concepts. Can the proposed OLLM say no if a random irrelevant document is given?
- W3: Consistency is a crucial property of ontologies, which is the prerequisite to run most logical reasoners. Is the generated ontology guaranteed to be consistent?
- W4: I am concerned about the post-processing procedure. Are there any entity (concept) resolution mechanisms involved to merge semantically equivalent concepts? For example, if “English” and “english” are both present, are they treated as the same concept? And are “English Language” and “The Language Named English” merged?
- W5: Why is using 3 prompts generally worse than 1-shot?
Technical Quality: 3
Clarity: 4
Questions for Authors: Please refer to W1-W5.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed feedback.
> W1: The paper's main claim is that end-to-end modelling is better than pipelined methods. However, it is unclear whether the improved performance is credited to the end-to-end modelling approach or the capabilities of LLMs. Although I am convinced that end-to-end + LLM is a good solution, I am curious if applying LLMs for the subtasks in the pipeline will work even better, especially considering LLMs are good at decomposing problems and solving them step-by-step.
We thank the reviewer for their suggestion and have run additional ablations based on applying LLMs for subtasks. The full experiment setup can be found in the general response. Here, we summarise the results:
1. The concept discovery + link prediction-based approach (studied in LLMs4OL [2]) suffers from serious scalability bottlenecks. In particular, it requires $O(n^2)$ inferences for link prediction, which is impractical for large models and/or a large number of concepts. This is the reason why this baseline was not used originally. For the ablation, we made ad-hoc fixes to the method to make it manageable for our large problem sizes (see general response).
2. The results show that it is generally worse than our zero-shot prompting method based on subgraph modelling.
We believe this is further evidence that our subgraph modelling method is a good solution.
> W2: The authors assumed each document is associated with at least one concept in the ground truth ontology. However, in real-world applications, it would be very possible that the whole document does not contain any relevant edges or even concepts. Can the proposed OLLM say no if a random irrelevant document is given?
We do not view this assumption as a fundamental limit to OLLM. One simple modification we can make is to include empty targets in the training set such that the model learns to output an empty graph if the document does not contain any relevant concepts. We made this assumption as it applies to our datasets and simplifies the implementation.
> W3: Consistency is a crucial property of ontologies, which is the prerequisite to run most logical reasoners. Is the generated ontology guaranteed to be consistent?
Ensuring consistency is a challenge for many OL methods and is not specific to OLLM. There exist generic post-processing methods that can guarantee consistency [6]. We performed further analysis of the output generated by OLLM and found that it is near-consistent: the output ontology for Wikipedia contains only 97 simple cycles, and no cycles were found in arXiv. Using the greedy algorithm of repeatedly removing the edge that breaks the most simple cycles (a heuristic for finding the smallest set of edges whose removal makes the graph acyclic), we prune all such cycles and make the ontology consistent by removing just 26 out of 10414 edges. This is quite surprising considering we did not explicitly optimise our model for consistency.
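The greedy pruning heuristic described above can be sketched as follows. This is a minimal illustration rather than our actual implementation: the names `simple_cycles` and `break_cycles` are ours, and the brute-force cycle enumeration is only practical for small graphs (a production version would use an efficient algorithm such as Johnson's).

```python
from collections import Counter

def simple_cycles(edges):
    # Brute-force enumeration of simple cycles in a small directed graph,
    # each returned as a tuple rotated to start at its smallest node.
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
    found = set()

    def dfs(start, node, path):
        for nxt in sorted(adj.get(node, ())):
            if nxt == start:
                i = path.index(min(path))       # canonical rotation
                found.add(tuple(path[i:] + path[:i]))
            elif nxt not in path:
                dfs(start, nxt, path + [nxt])

    for n in sorted(adj):
        dfs(n, n, [n])
    return found

def break_cycles(edges):
    # Greedy heuristic: repeatedly drop the edge lying on the largest
    # number of remaining simple cycles, until the graph is acyclic.
    edges = set(edges)
    while True:
        cycles = simple_cycles(edges)
        if not cycles:
            return edges
        counts = Counter()
        for cyc in cycles:
            for i, u in enumerate(cyc):
                counts[(u, cyc[(i + 1) % len(cyc)])] += 1
        edge, _ = counts.most_common(1)[0]
        edges.discard(edge)
```

On a toy graph with two overlapping cycles, `a -> b -> a` and `a -> b -> c -> a`, removing the single shared edge `(a, b)` breaks both at once, which is why very few removals (26 of 10414 edges above) can suffice.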
> W4: I am concerned about the post-processing procedure. Are there any entity (concept) resolution mechanisms involved to merge semantically equivalent concepts? For example, if “English” and “english” are both there, are then treated as the same concept? And for “English Language” and “The Language Named English”, are they merged?
We do not perform any concept-merging in our post-processing steps. While we agree that applying such normalisation strategies can produce better ontologies, we do not think it contributes to answering the research question of “Does OLLM produce better ontologies?”. Instead, we decided to restrain the post-processing steps to be simplistic and minimal without relying on many heuristics. This allows us to attribute any differences in output quality to the method itself without being confounded by interactions with the post-processing procedure.
> W5: Why using 3 prompts is generally worse than 1-shot?
We thank the reviewer for raising such concerns. In response, we did a short investigation focusing on arXiv (as that’s where 3-shot seemed to perform worse than 1-shot). We found that 3-shot appears to generate unnecessarily long responses, which we hypothesise leads to noisier outputs. Across the training set, the ground truth subgraph targets contain a median of 4.0 concepts (IQR: 4.0). In contrast, 1-shot predicts a median of 9.0 (IQR: 8.0) and 3-shot predicts a median of 11.0 (IQR: 9.0). We see that giving more examples tends to result in longer responses, which might be an inherent bias of Mistral 7B.
We note that the performance gap between 1-shot and 3-shot is generally insignificant so we do not believe the claim “3-shot is generally worse than 1-shot” is fully justified.
---
Would the reviewer please consider raising the score if we have addressed their concerns?
[6]: Sun, Jiankai, et al. "Breaking cycles in noisy hierarchies." Proceedings of the 2017 ACM on Web Science Conference. 2017.
---
Rebuttal Comment 1.1:
Comment: Again, we thank the reviewer for their constructive feedback. As we are near the end of the discussion period, we would like to confirm if our answers are comprehensive and satisfactory. Should any new questions arise, we would be happy to answer them.
---
Rebuttal Comment 1.2:
Comment: Thanks for the rebuttal. It solved most of my concerns. I will keep my evaluation. | Summary: The paper introduces a novel method called OLLM (Ontology Learning with Large Models) for automating the construction of ontologies using large language models (LLMs). Ontologies are crucial for structuring domain-specific knowledge, but their manual construction is labor-intensive. The authors aim to address the limitations of partial ontology learning approaches by proposing an end-to-end method to build ontologies from scratch.
This paper demonstrates OLLM's effectiveness through quantitative and qualitative results on Wikipedia data, showing that it outperforms subtask composition methods in producing more semantically accurate ontologies while maintaining structural integrity. Additionally, OLLM's adaptability to new domains like arXiv is showcased, requiring only a small number of training examples for effective adaptation.
Strengths: 1) The authors propose a custom regularizer that reweights concepts based on their frequency of occurrence, which helps to mitigate overfitting on common concepts and enhances the model's ability to generalize to new, unseen data.
2) The paper introduces a novel suite of evaluation metrics that use deep learning techniques to measure semantic and structural similarity between ontologies. These metrics provide a more robust and nuanced assessment compared to traditional methods.
3) The authors demonstrate the effectiveness of OLLM through comprehensive experiments on Wikipedia and arXiv datasets. The results show that OLLM outperforms existing subtask-based methods, indicating its practical utility.
Weaknesses: 1) The paper focuses on building ontologies with concepts and taxonomic relations, which is only a part of the full spectrum of ontology components. A comprehensive ontology includes not only hierarchical relationships but also properties, semantic relations, constraints, and other elements that provide a richer and more explicit semantic content. The paper does not address how these additional aspects are captured and integrated into the learned ontologies.
2) The paper uses Wikipedia categories and the arXiv taxonomy as the basis for ontology learning. However, these structures are more accurately described as taxonomies or folksonomies rather than full-fledged ontologies, which can lead to confusion about the capabilities and outputs of the proposed method.
3) The paper does not provide a clear and detailed explanation of the end-to-end ontology learning model's training process. Key aspects such as the construction of the training dataset, filtering of irrelevant categories, and the specific model architecture and training regimen are not thoroughly described. For example, it is not clear how the model deals with Wikipedia categories that have little semantic content but are used for page management.
4) The paper does not include a comparative analysis with other ontology learning methods, such as the LLMs4OL (LLMs4OL: Large Language Models for Ontology Learning, https://arxiv.org/abs/2307.16648.). Without such comparisons, it is difficult to assess the relative advantages and disadvantages of the proposed method in terms of effectiveness, efficiency, and scalability.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 1
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed feedback.
> The paper focuses on building ontologies with concepts and taxonomic relations, which is only a part of the full spectrum of ontology components. A comprehensive ontology includes not only hierarchical relationships but also properties, semantic relations, constraints, and other elements that provide a richer and more explicit semantic content. The paper does not address how these additional aspects are captured and integrated into the learned ontologies.
> The paper uses Wikipedia categories and arXiv taxonomy as the basis for ontology learning. However, these structures are more accurately described as taxonomies or folsonomies rather than full-fledged ontologies, which can lead to confusion about the capabilities and outputs of the proposed method.
We agree that our method only focuses on the basic components of an ontology, and such structures are perhaps better described as taxonomies. We thank the reviewer for raising this point and will refrain from phrases like “solving the full task of building an ontology” (L7) in the final version of the paper for better clarity.
The main contribution of this paper is to use LLMs to construct basic ontologies end-to-end as one task, and doing so yields more accurate concepts and taxonomic relations. We view more complex aspects of ontologies as extensions and thus beyond the scope of this paper. We hope the reviewer can view our achievements in comparison to existing methods for modelling concepts and taxonomic relations, and agree that improving the quality of the fundamental components of an ontology is a sufficient contribution already.
> The paper does not provide a clear and detailed explanation of the end-to-end ontology learning model's training process. Key aspects such as the construction of the training dataset, filtering of irrelevant categories, and the specific model architecture and training regimen are not thoroughly described.
In Section 4.1, we give a detailed description of how we construct our datasets, including the process of obtaining the raw data and how we construct the data splits. Section 3.1 describes how we obtain document-subgraph pairs from the source data. We did not perform any additional filtering of categories, hence no such mention in the paper. Due to space limitations, we did not include all implementation details. We will consider rewording relevant parts of the paper for better clarity and include more details in the appendix.
Similarly, the specific model architecture and training details are broadly mentioned in the main text. As described in Section 5.1, OLLM is a LoRA finetune of Mistral 7B which has its architecture described in [5]. We also included a detailed description of all the LoRA and training hyperparameters in Appendix A.1.1 to aid reproducibility.
We also share our dataset, model and code (including dataset construction and training), which hopefully resolves all ambiguities.
> For example, it is not clear how the model deals with Wikipedia categories that have little semantic content but are used for page management.
We did not apply any filtering to the source data, hence our model will learn to construct and organise every concept in the dataset as part of the subgraph modelling stage. We chose not to filter the source data to minimise external bias on how the data “should” look. We note that it is often not clear-cut whether a Wikipedia category exists purely for page management: for example, Wikipedia categories of the form “Lists of [subject]” refer to a special type of article whose main body is a bullet-point/table listing of the subject, which is a useful concept in the Wikipedia domain.
> The paper does not include a comparative analysis with other ontology learning methods, such as the LLMs4OL (LLMs4OL: Large Language Models for Ontology Learning, https://arxiv.org/abs/2307.16648.).
We thank the reviewer for their suggestion and have run additional ablations based on LLMs4OL. The full experiment setup can be found in the general response. Here, we summarise the results:
1. The link prediction-based approach studied in LLMs4OL suffers from serious scalability bottlenecks. In particular, it requires $O(n^2)$ inferences for link prediction, which is impractical for large models and/or a large number of concepts. This is the reason why this baseline was not used originally. For the ablation, we made ad-hoc fixes to the method to make it manageable for our large problem sizes (see general response).
2. The results show that it is generally worse than our zero-shot prompting method based on subgraph modelling.
We will include the above discussion points regarding LLMs4OL in the final version of the paper.
---
Would the reviewer please consider raising the score if we have addressed their concerns?
[5]: Jiang, Albert Q., et al. "Mistral 7B."
---
Rebuttal Comment 1.1:
Comment: Again, we thank the reviewer for their constructive feedback. As we are near the end of the discussion period, we would like to confirm if our answers are comprehensive and satisfactory. Should any new questions arise, we would be happy to answer them. | Summary: The paper aims to address the challenge of constructing ontologies, which traditionally require substantial manual effort. Ontologies are structured representations of domain knowledge used for automatic machine processing. Previous methods for ontology learning (OL) using large language models (LLMs) focused on solving individual subtasks, but this approach failed to capture the interactions between these subtasks. Experimental results on Wikipedia demonstrate that the authors' proposed approach, called OLLM, outperforms traditional subtask composition methods, producing more semantically accurate ontologies while maintaining structural integrity. The model also shows effective adaptation to new domains, such as arXiv, with only a small number of training examples, indicating its scalability and domain-independence.
Strengths: This is an interesting problem, and could serve as the foundation for important neurosymbolic applications in the future.
The authors introduce OLLM, a general and scalable method that builds ontologies from scratch by fine-tuning an LLM. Instead of focusing on individual relations between entities, OLLM models entire subcomponents of the target ontology, reducing overfitting on high-frequency concepts through a custom regularizer. The principles of the approach are moderately innovative.
The paper also introduces new metrics for evaluating the quality of the generated ontology, measuring its semantic and structural similarity to the ground truth. This is laudable, but in going through the metrics, I noted some significant potential flaws that I note below.
Weaknesses: I would question the validity of the fuzzy F1 metric. Using embedding similarity is not a good fit here, as for difficult samples, it would likely be wrong. There is also bias that comes from pretraining (so the metric can't really be used in 'unusual' domains for which such embeddings are not available; but that's where the true applications of this proposal would be!) The gap between the literal and fuzzy F1 in the performance table should have been a warning to the authors.
The experiments could have been much stronger. A more difficult benchmark than either of the ones the authors selected would have been more valuable for assessing the true potential or limitations of the method.
The baselines could similarly have been stronger. It doesn't look like the authors considered some form of sophisticated (e.g., chain of thought) prompting, which could have made a big difference here.
Technical Quality: 2
Clarity: 3
Questions for Authors: Given that there are so many ontologies and corpora out there, why only limit to these two very well-known datasets? I feel that the approach could have been better evaluated with multiple ontologies, some of which are unusual and hence not captured as well by LLMs.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The study focuses on constructing simple ontologies that include only concepts and taxonomic relations. To extend OLLM to produce non-taxonomic relations, tags indicating the relation type could be added to each edge during the linearization of subgraphs for sequence modeling. New evaluation metrics may also be needed to handle multiple types of relations. Another limitation is that the taxonomic relations in the generated ontologies are not always transitive due to the presence of cycles, a common issue in many ontology learning methods. There are existing algorithms for cycle removal to clean hierarchies. Additionally, the study could not fully control for data contamination since the pretraining dataset of Mistral 7B is not publicly known. However, the generated ontologies were found to be sufficiently different from the ground truth, suggesting that OLLM does not directly remember samples from its pretraining stage.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed feedback.
> Using embedding similarity is not a good fit here, as for difficult samples, it would likely be wrong. There is also bias that comes from pretraining (so the metric can't really be used in 'unusual' domains for which such embeddings are not available; but that's where the true applications of this proposal would be!)
For the tasks considered in this paper, we did not observe instances where concepts with clearly different meanings were given similar embeddings. This can be credited to the quality of the pretrained embedding model and the relatively common concepts present in Wikipedia and arXiv. Note that the pretraining of the embedding model aims to cover every domain and to find the best-generalising embeddings. We expect such embeddings to remain informative even when applied out-of-distribution.
We agree that for more exotic domains (e.g., protein ontology), a generic embedding model like the one used in this paper is unlikely to give accurate results. However, we argue that the evaluation framework proposed in this paper is still applicable as long as a more specialised embedder is available. We believe applications with accurate embeddings are much more prevalent than those with accurate ontologies.
> The gap between the literal and fuzzy F1 in the performance table should have been a warning to the authors.
We take this as an argument _for_ using embeddings. The cases where Literal and Fuzzy F1 disagree the most (e.g., when Literal F1 is comparatively high but Fuzzy F1 is comparatively low) occur on methods that show signs of overfitting, particularly Memorisation and Finetuning. We observe that Literal F1 has a strong bias towards methods that memorise the training set. This is because Literal F1 is sensitive to semantically insignificant syntax differences such as casing and word form, whereas Fuzzy F1 (like Continuous F1 and Graph F1) is generally agnostic to syntax.
We also emphasise that we do not claim that any single metric can truthfully reflect the quality of an ontology. Given the many aspects of what constitutes a good ontology, it is an oversimplification to summarise the performance with a single value. We thus use multiple metrics to get a holistic understanding of the output. From this perspective, it is not surprising that different metrics will arrive at different conclusions, otherwise it would not have been useful to use multiple metrics in the first place.
> The experiments could have been much stronger. A more difficult benchmark than either of the ones the authors selected would have been more valuable for assessing the true potential or limitations of the method.
We chose Wikipedia as our main benchmark as it covers a wide range of topics, so many specific domains of interest can likely be found as a subgraph of the Wikipedia ontology. We believe the diversity of topics makes it more challenging than specialised ontologies like WordNet (focusing on word-level relations) or GeoNames (focusing on Geography). We would like to know whether the reviewer has any particular benchmark in mind that they believe would be a strong addition to the paper. We are open to including more benchmarks if necessary.
> The baselines could similarly have been stronger. It doesn't look like the authors considered some form of sophisticated (e.g., chain of thought) prompting, which could have made a big difference here.
We designed our prompting baselines to study the merits and failure modes of using LLMs to build ontologies out-of-the-box. The main weakness of zero-shot prompting (as discussed in the paper) is the inability to produce ontologies that are structurally similar to the target. Many prompting techniques, such as chain-of-thought, focus on improving reasoning and logic (e.g., the original CoT paper) which do not appear to be the main bottleneck in ontology learning.
We nonetheless thank the reviewer for this suggestion and have included an additional ablation baseline using a more deliberate prompting method (inspired by zero-shot CoT [1]). A detailed description of the experiment setup can be found in the general response. The results support our hypothesis above, showing no significant improvement over basic zero-shot prompting.
> Given that there are so many ontologies and corpora out there, why only limit to these two very well-known datasets? I feel that the approach could have been better evaluated with multiple ontologies, some of which are unusual and hence not captured as well by LLMs.
We designed OLLM to be very general and demonstrated its effectiveness on Wikipedia as a proof-of-concept, and on a different dataset (arXiv) to show its generalisation. We believe they are sufficient in answering our primary research question “How can LLMs be used effectively and scalably for OL?”. While we agree that it would be useful to study which domains are well-captured by LLMs, it is beyond the scope of this paper.
However, as mentioned above, we are open to suggestions from the reviewer if they think a particular dataset would be a strong addition to the paper.
---
Would the reviewer please consider raising the score if we have addressed their concerns?
---
Rebuttal Comment 1.1:
Comment: Again, we thank the reviewer for their constructive feedback. As we are near the end of the discussion period, we would like to confirm if our answers are comprehensive and satisfactory. Should any new questions arise, we would be happy to answer them. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their time. The feedback is constructive and helpful.
We are happy to see that most reviewers found the end-to-end OL task interesting, and agree that our core contribution, OLLM, is a novel approach to leveraging LLMs for OL. Reviewers also agree that the paper is well-written.
Most of the reviewers’ concerns revolve around our experiment results and evaluation strategy, such as inconsistent ranking among different metrics, and insufficient baselines. We thank the reviewers for raising such points and in response have performed additional ablations and analysis. We give a summary of our new results here. Further discussions can be found in our response to each reviewer in the context of their concerns.
# Consistency
Reviewer 3YVJ questioned the consistency of the output ontology. For example, it does not guarantee an anti-symmetric relation under transitivity (i.e., there may be cycles). We analysed the ontologies generated by OLLM and found that they are almost consistent, with only 97 simple cycles in Wikipedia and no cycles in arXiv. All the cycles in Wikipedia can be broken by removing just 26 of the 10414 edges. This is quite surprising considering we did not explicitly optimise our model to satisfy consistency.
# Ablations
See supplementary pdf for full metrics.
## Chain of thought (CoT)
**Motivation**: Reviewer Qo2b suggested that a stronger baseline can be established if we employ more sophisticated prompting techniques, such as CoT.
**Method**: Prediction involves two rounds of inference: In the first round, we ask the model (Mistral 7B instruct) to describe the possible relevant concepts for the input document and to explain its reasoning. Then, we ask the model to predict the subgraph in the specified format given the additional, self-generated context.
**Result**: We tested the CoT method on Wikipedia and found no significant difference from basic zero-shot prompting.
**Interpretation**: Most advanced prompting techniques, including CoT, primarily aim to improve logic and reasoning. We hypothesise that the performance in OL is more dependent on the model’s understanding of natural language than its ability to perform multi-step reasoning, hence we do not observe any significant improvement from CoT.
## LLMs4OL
**Motivation**: Reviewer cxnM and 3YVJ both suggested that a LLM-based subtask composition baseline (as studied in LLMs4OL [2]) would be useful for evaluating whether the improvement by OLLM is due to the improved methodology (end-to-end modelling) or simply just because we used LLMs.
**Method**: The subtasks studied in LLMs4OL are concept discovery (given a document, predict its relevant concepts) and link prediction (given two concepts, predict whether they are taxonomically related). Unfortunately, constructing a baseline from these two subtasks is non-trivial. We encountered significant scalability issues in the link prediction stage as it required $O(n^2)$ inferences. We make two modifications to overcome this limitation:
1. After the concept discovery stage, we discard all but the N most frequent concepts to limit the number of inferences required during link prediction, where N is the number of concepts in the ground truth.
2. Instead of using zero-shot Mistral 7B as the link predictor, we use a finetuned BERT as the link predictor as it runs much faster. Given that [2] demonstrated that finetuned models perform much better than zero-shot inference on link prediction, we expect the finetuned BERT to be at least as good, if not better, than zero-shot Mistral 7B on this subtask.
In summary, the method is as follows:
1. Use zero-shot Mistral 7B for concept discovery, allowing it to tag more than one concept per document.
2. Discard all but the top N most common concepts. Manually add the root concept.
3. Perform link prediction with a finetuned BERT for all concept pairs.
4. Apply post-processing, as described in section 3.2.
We design this baseline such that it is comparable to zero-shot end-to-end modelling: both use zero-shot Mistral 7B as the backbone, just utilised in different ways.
**Result**: We tested this method on Wikipedia and found that it is worse than zero-shot end-to-end modelling on all metrics except Motif Distance.
**Interpretation**: We take this as evidence that our end-to-end modelling approach is a clear improvement over traditional subtask-based OL. Not only does the link prediction-based method suffer from significant scalability bottlenecks for large ontologies, but its performance is also worse. The results suggest that we can more effectively and efficiently leverage the capabilities of LLMs beyond just solving subtasks, such as by predicting subgraphs.
# Final remark
We ask the reviewers to evaluate our contribution in the context of existing methods for using LLMs in OL. Our experiments and ablations (above) suggest that our method is more effective and scalable than traditional LLM-based subtask composition methods (e.g., LLMs4OL). Additionally, in contrast to prior attempts to use LLMs in a more end-to-end manner that relies on qualitative evaluation [3, 4], our evaluation framework is more systematic and quantitative, laying the foundation for more rigorous research in the future. We hope the reviewers will consider raising the score if we have addressed any of their concerns.
[1]: Kojima, Takeshi, et al. "Large language models are zero-shot reasoners."
[2]: Babaei Giglou, Hamed, Jennifer D’Souza, and Sören Auer. "LLMs4OL: Large language models for ontology learning."
[3]: Funk, Maurice, et al. "Towards ontology construction with language models."
[4]: Trajanoska, Milena, Riste Stojanov, and Dimitar Trajanov. "Enhancing knowledge graph construction using large language models."
Pdf: /pdf/4db05a6411adef21c8262bde49956aaac4fd78a1.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
EReLELA: Exploration in Reinforcement Learning via Emergent Language Abstractions | Reject | Summary: The work presents the idea of employing emergent language (EL) abstractions combined with count-based approaches to improve exploration in sparse-reward reinforcement learning (RL) settings.
Strengths: Overall, the major strength of this paper comes from its **novelty**.
* The idea of emergent languages in RL has been scarcely explored and the idea presented is sensible and interesting.
* The Compactness Ambiguity Metric (CAM) definition is useful to compare how emergent languages compare to natural languages.
* The results presented in simple Minigrid environments show that the method has the potential to improve exploration capabilities.
Weaknesses: The paper presents some weaknesses that make my opinion lean towards rejection.
* **Presentation**: from an aesthetic point of view, there are things that make the paper hard to read, such as the use of bright colours for text on a white background (see Experiments section) or wrapped Figures and equations that are too close to the main text (see captions of Fig. 1 and 2)
* **Clarity**: some of the explanations provided are not completely clear. For instance, I struggle to understand what is happening in Figure 1 and the caption of the Figure (which is a long description with no sentence breaks) does not clarify enough. Similarly, for Figure 2, it is not clear what is happening, e.g. why some events are above or below the black line, and the Figure is not clearly explained in the text or caption.
* **Insights on the learned representation**: given that the use of emergent language abstractions is the main contribution, it would have been useful to get more insights into the representation being learned by the agent. While the authors present quantitative results in terms of performance or distance from natural language, it is not clear what are the properties of the emerging language, e.g. sentence length, number of unique utterances, etc. Assuming the RL community is one of the targeted audiences for the paper, it would be crucial to present some insights into this to ensure the contribution is clear.
* **Limited evaluation**: the evaluation is extensive in terms of ablations (also to be found in the Appendix) but is quite limited in terms of environments and baselines tested. Differences from other related works, e.g. reference [51], should at least be described in more detail if running additional baselines is not feasible
Technical Quality: 2
Clarity: 1
Questions for Authors: I strongly recommend the authors revise the presentation of the work and consider the above limitations to improve the paper. I also leave here some other minor issues I found:
* Line 122, missing whitespace before "In the context"
* RANDOM baseline, the authors had to dedicate one paragraph to explain why the "RANDOM" baseline is not actually random. I would rather change its name in the legend to improve clarity
* the intrinsic reward function is not stated in the main text. How is this defined?
Confidence: 3
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: Some limitations are presented at the end of the experiments section
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Once again we thank the reviewer for their time and thorough review.
We will try to address most review points below.
## Regarding the lack of insights on the learned representations:
We thank the reviewer for this comment and their curiosity towards these results. Our paper initially did not include these measures in order to simplify the narrative, but we are more than happy to add them, possibly in appendices with analysis, and reference them in the main text. Most interestingly, we found that a high number of unique utterances over the validation set of the referential game is crucial for the RL agent to achieve high performance.
## Regarding limited comparison and evaluation against related works:
We agree that further comparison to related works, and especially [paper 51], both in terms of results and method, would improve the quality of the work. We will follow your advice and aim to include reference [paper 51] at least by describing it in more detail and comparing it to our approach, and we will aim to include learning curves on the KeyCorridorSXR3 environments, as these are the ones used in [paper 51]. However, such experiments are greatly time-consuming, as they require a far greater observation sampling budget than experiments with the KeyCorridorSXR2 environments (from 1M to at least 20M, i.e. 20x). We have started those experiments and hope to have results ready by the time of a possible camera-ready submission. Preliminary results show that EReLELA increases sample-efficiency by roughly 2x compared to L-AMIGo. In the meantime, we acknowledge the need for further comparison to related ‘competitive baselines’, as flagged similarly by Reviewer ra5c, and we propose to update Figure 4 (left) accordingly. Please refer to our rebuttal answer to Reviewer ra5c for further details.
---
Rebuttal Comment 1.1:
Title: Rebuttal (Part 2/2)
Comment: ## Regarding Presentation and Clarity Issues:
We thank the reviewer for their thoughtful advice on the matter, and mean to redirect that part of the conversation towards the global rebuttal answer as the matter is shared among reviews. Please let us know if all our proposal improvements are addressing all your concerns.
Please let us know if in light of all other reviews and discussions you identify any other improvements that we could perform in order to improve your rating of our work.
## QAs :
Regarding the RANDOM baseline, thank you for catching this issue; we propose to relabel it as NoRGTraining-EReLELA and to update the caption of the related figure to explain it as an 'ablated version of EReLELA where the RG agents are left untrained'.
Regarding the intrinsic reward function, thank you for catching this mistake; we define it for timestep $t$ as follows:
$r_{int}(s_t, a_t, s_{t+1}) = \begin{cases}
1 & \text{if } N(s_{t+1}) = 1, \\\\
0 & \text{otherwise,}
\end{cases}$
where $N(s)$ is the number of times the state $s$ has been visited in the current RL episode (because of the intra-life framing, as opposed to the inter-life framing, where it would consider the whole RL training process). | Summary: This paper investigates using an emergent communication protocol as an auxiliary reward in navigation reinforcement learning settings, particularly those where exploration is difficult (i.e., sparse reward settings). The experiments show that certain emergent communication games can be effective in solving the RL problem.
Strengths: In conjunction with standard criteria, there are three characteristics that are particularly important for emergent communication research: reusability (how easily can another researcher use the products of this research), generalizability (how much do the findings of this research apply broadly to our knowledge of emergent communication), and directedness (does this research contribute concretely to particular questions in emergent communication research).
### Quality
- (minor) The experiments demonstrate that some of the proposed approaches beat the baseline.
### Clarity
- Nothing of note.
### Reusability
- Nothing of note.
### Generalizability
- Nothing of note.
### Directedness
- (minor) Comparing emergent and natural language's utility for reinforcement learning is an important problem in emergent communication research.
- (minor) If emergent communication protocols are effective abstract representations of environment states, this could be useful to RL more generally.
Weaknesses: ### Quality
- See Clarity.
- (major) Is there a "competitive baseline" tested in these experiments; that is, the state-of-the-art, no-frills method that the proposed solutions would be competing against in the real world? If there is, the comparison needs to be clearer as to what exactly the advantage of using EReLELA is.
- (minor) The natural language baseline does not seem to actually be "natural"; synthetic seems more accurate. If the language is procedurally generated, I do not think it can be considered natural.
### Clarity
- (major) I found this paper (esp. Section 3) very difficult to understand, even after rereading certain sections. Overall, I do not have a concrete idea of what EReLELA is or why it is important. For example, I understand that the emergent communication protocol is supposed to abstract observations and that the referential game is used as an "Intrinsic Reward Generator", but I do not understand how this is incorporated into the RL algorithm. Furthermore, how is the referential game distinguished from more straightforward ways of generating auxiliary rewards?
### Reusability
- See Clarity.
### Generalizability
- (minor) How would the core findings of EReLELA apply more generally to other RL or emergent communication settings?
### Directedness
- Nothing of note.
Technical Quality: 2
Clarity: 2
Questions for Authors: What are EReLELA and CAM, as simply as possible? I do not feel like I can effectively evaluate the rest of the paper until I understand these concepts. I believe too much time is spent introducing the paper and discussing background relative to the half-page spent discussing EReLELA, the core contribution of the paper. A good potential approach for explicating the contributions of the paper is to start with a short, clear, high-level intuitive explanation before introducing the details, hearkening back to the components of the original explanation they correspond to.
### Comments
- The term "agnostic" is used to describe some of the agents used, but I think this is only defined in the appendix. The main body of the paper should stand by itself, that is, key experimental elements should be defined in the main body.
- Figure 1: It is difficult to determine what about the agents is being represented in the visualization.
- Line 139: "would entail to" -> "would entail"
- Line 151: "constraint" -> "constrain"
- Line 151: "a specific semantic" -> "specific semantics"
- Line 193: "these are compactness" -> "these as compactness"
- Line 198: "ensues" -> "ensures"
- Noticed a number of other typos or awkward phrasing; I would recommend a careful proofread.
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: N/A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and thorough review. We will try to address most reviewing point below.
## Regarding the three characteristics that are particularly important to EmeCom:
We appreciate the characteristics highlighted and tend to agree with their evaluations, with the exception of the following:
Firstly, in terms of reusability, we proposed the CAM, which is reusable in any EmeCom setting involving temporally-correlated stimuli, i.e. video-like stimuli. We agree that EmeCom has not been targeting that kind of stimuli so far, only recently upgrading from symbolic stimuli to pixel-based, image-like stimuli, but we expect video-like stimuli to become of interest, especially as EmeCom research permeates mainstream NLP (i.e. LLMs and multi-modal LMs), which has been finding applications in robotics, where it is common to consider video streams.
Secondly, our paper also proposes the STGS-LazImpa framework, which is an incremental research contribution, but a contribution nonetheless, and this part of our agent is reusable as a plug-and-play feature. We propose to emphasise it further by moving some elements of Appendix G(.1) into the main paper, as already suggested by Reviewer 85kw.
## Follow-up and regarding the need for some competitive baselines:
After having discussed the EmeCom-focused characteristics/criteria raised, we would like to highlight that our paper is not constrained solely to EmeCom, as it is an attempt to apply EmeCom principles and results in another field, to wit, hard-exploration RL. Thus, we would like to make sure that the importance of our contributions to hard-exploration RL is not eclipsed by the EmeCom framing of the discussion here. More specifically, we would like to point at a possible misunderstanding (and resulting possible mis-evaluation of the contributions): the NLA agent, which is referred to as a baseline in the current review, is actually not a baseline but a fair-comparison state-of-the-art agent, as it is adapted from [3: Tam et al., 2022]’s SotA approach; the adaptation consists in using a count-based exploration paradigm (instead of novelty/surprise) with an off-policy RL algorithm (i.e. R2D2 instead of PPO). In our view, a baseline would rather be the backbone RL algorithm alone. We can include its results (which are actually almost null across the board) if you think this addition would clarify the value of our contribution; please let us know. We also propose to (i) emphasise further the SotA-ness of our NLA agent in the main text, making it clearer that it is even more than a ‘competitive baseline’, and (ii) include and discuss the results of several other SotA/competitive baseline approaches, such as:
1. ICLR2020’s RIDE [1]: for MultiRoom N7-S4 : success ratio of ~0.8 after ~ 500k observations (as the value cannot be easily extracted, we would re-run their implementation to get exact values)
2. ICLR2021’s RAPID [2] : for MultiRoom N7-S4 : success ratio of 0.787 ± 0.001 after ~500k observations / for KeyCorridor-S3-R2 reaching 0.934 ± 0.004 after ~1M observations (extracted from tables in the paper).
## Regarding the Synthetic-ness of our Natural Language Abstraction (NLA):
We appreciate that the captions used for the NLA agent being procedurally-generated is a point of contention, but we followed the previous practice of [3], which uses the adjective ‘natural’ to specify the quality and form of the caption rather than the process by which it is obtained (i.e. not produced by human beings). Moreover, our considerations and results are agnostic to the process through which captions are obtained; we only care about their quality and form, i.e. which vocabulary and grammar are being used, which here refers to the English natural language. That being said, it clearly ought to be flagged as a limitation of our study, because using natural language captions produced by human beings would have yielded a more varied and rich distribution, which would impact the resulting RL agent’s performance (presumably detrimentally). Like [3], we make the choice here to use synthetically generated natural language captions because they can be generated “accurately and reliably, and at scale”. We propose to add that disclaimer to our paragraph on Natural Language Oracles and re-title it Synthetic Natural Language Oracle. Please let us know whether this fully addresses your concerns.
### References:
[1]: Raileanu, Roberta, and Tim Rocktäschel. "RIDE: Rewarding Impact-Driven Exploration for Procedurally-Generated Environments." International Conference on Learning Representations 2020.
[2]: Zha, Daochen, et al. "Rank the Episodes: A Simple Approach for Exploration in Procedurally-Generated Environments." International Conference on Learning Representations 2021.
[3]: Tam, Allison, et al. "Semantic exploration from language abstractions and pretrained representations." Advances in neural information processing systems 35 (2022): 25377-25389.
---
Rebuttal Comment 1.1:
Title: Rebuttal (Part 2/2)
Comment: ## QAs:
Thank you for your advice towards making our paper more readable. We propose below (as in the global rebuttal answer) what we hope are short enough, clear, high-level intuitive explanations to introduce EReLELA and the CAM:
EReLELA is a wrapper around any off-/on-policy RL algorithm that augments the reward signal by linearly combining the original extrinsic reward signal with an intrinsic reward signal derived using an intra-life count-based exploration method, whose state abstraction is obtained from the speaker agent of a referential game. Thus, it effectively embeds complex, high-dimensional RL observations into captions/linguistic descriptions in the emergent language arising from referential game training.
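As a purely illustrative sketch of our reading of the above (all names are ours, not taken from the paper's implementation), the reward augmentation can be pictured as an intra-life, count-based bonus over the speaker's linguistic descriptions, linearly combined with the extrinsic reward:

```python
from collections import defaultdict

class IntraLifeCountBonus:
    """Hypothetical illustration of an intra-life count-based intrinsic reward,
    linearly combined with the extrinsic reward (not the authors' code)."""

    def __init__(self, beta=1.0):
        self.beta = beta
        self.counts = defaultdict(int)  # N(s) over the current episode only

    def reset(self):
        # Called at the start of each episode (intra-life framing).
        self.counts.clear()

    def reward(self, abstraction, r_ext):
        # `abstraction` stands in for the speaker's linguistic description
        # of the next state; the bonus fires only on its first occurrence.
        self.counts[abstraction] += 1
        r_int = 1.0 if self.counts[abstraction] == 1 else 0.0
        return r_ext + self.beta * r_int

bonus = IntraLifeCountBonus(beta=0.5)
bonus.reset()
rewards = [bonus.reward(a, 0.0) for a in ["red box", "red box", "blue key"]]
print(rewards)  # → [0.5, 0.0, 0.5]
```

Revisiting a state whose description is unchanged yields no further bonus within the same episode, which matches the intra-life framing described above.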
The Compactness Ambiguity Metric characterises the kind of state abstraction that a language performs over a state space, in particular an RL state space but not limited to it. It assumes that the state space can be organised as a set of video streams, that is to say sequences of temporally-correlated states, like an RL trajectory over an episode. Along with sequences of temporally-correlated states/observations, it takes as input the timestamp-aligned state abstractions/linguistic descriptions of each state in the language whose abstraction power is being evaluated. Intuitively, the metric relies on sorting those linguistic descriptions into a histogram's buckets based on the length of the temporal interval over which a given linguistic description remained unchanged.
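The histogram construction just described can be sketched as follows; this is a rough, hypothetical rendering of our own wording above, not the paper's implementation:

```python
from collections import Counter

def cam_histogram(descriptions):
    """Bucket a timestamp-aligned sequence of linguistic descriptions by the
    length of the temporal interval over which each description stays unchanged."""
    histogram = Counter()
    run_length = 0
    for i, d in enumerate(descriptions):
        run_length += 1
        # Close the current run when the description changes or the stream ends.
        if i + 1 == len(descriptions) or descriptions[i + 1] != d:
            histogram[run_length] += 1
            run_length = 0
    return histogram

# Three states described "red", then two "blue", then one "red":
print(dict(cam_histogram(["red", "red", "red", "blue", "blue", "red"])))
# → {3: 1, 2: 1, 1: 1}
```

A language whose histogram mass sits at long intervals abstracts compactly (descriptions persist over time), whereas mass at length-1 intervals indicates descriptions that change at every timestep.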
Please let us know if these attempts at writing high-level intuitive explanations of EReLELA and the CAM are easier to understand, especially in light of our updated Figure 3 in the rebuttal PDF attached to our global rebuttal answer. We are looking forward to any further input that would help us improve those weaknesses of our paper.
## Regarding comments:
Thank you for those comments.
Firstly, we will make sure to move the definition of ‘agnostic’ back into the main text, and will include the output of a glossary and acronyms package ([https://www.overleaf.com/learn/latex/Glossaries](https://www.overleaf.com/learn/latex/Glossaries)) in the appendix in order to ease the reading experience.
Regarding Figure 1, as mentioned in the global rebuttal answer, we will remove it in order to make space for other elements, since it was found unhelpful across reviews.
---
Reply to Comment 1.1.1:
Title: Kind request for your input on our rebuttal
Comment: Dear Reviewer,
As the discussion period is nearing its end, we kindly request your input on our rebuttal. Your further engagement would be extremely helpful in refining the paper.
We look forward to hearing your thoughts and addressing any remaining questions or concerns you may have.
Thank you once again for your time and consideration.
---
Rebuttal Comment 1.2:
Comment: I appreciate the responses to the rebuttal as they are clear and improve my understanding of the paper. That being said, given the number of technical details in the paper, I am not sure to what degree the clarity would be improved even if these summaries were incorporated into the paper. I will raise my score to a 4 due to the clarifications provided in the rebuttal but lower my confidence to a 2 since it seems that I have only a limited understanding of the paper. | Summary: This paper proposes to leverage the Emergent Communication paradigm via the use of referential games to learn state abstractions for a Reinforcement Learning domain. The authors claim that using this approach, their proposed method is able to learn abstractions that boost exploration for an RL agent, and leads to performance that is comparable to Natural Language-based state abstractions.
Strengths: 1) The paper strongly motivates the problem, highlighting the need for state-based abstractions for RL agents, the advantages and limitations of Natural Language-based abstractions, and how Emergent Language-based abstractions can avoid those limitations while achieving comparable results.
2) In my honest opinion, this paper greatly stands out for explaining the relevant literature and how it situates itself within the existing works. I thoroughly enjoyed reading the Introduction, and Section 2.1 even more so!
3) The method is well-explained, and the experimental setup flows very logically, making the insights presented by the paper and the questions that arise intuitive to readers.
Weaknesses: Please see the questions section for more details.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1) Is there any specific advantage gained from introducing the problem setting (lines 101-113) early on? It may be useful to have it right before Section 3, but this is a very minor concern.
2) What are some other applications that could potentially benefit from the proposed Compactness Ambiguity Metric? How generalizable is the metric to measure other abstractions, or what problems may arise in that case?
3) While authors mention that currently the work only covers 2D environments and their abstractions, what would be the potential issues with using CAM for 3D environments or robotic settings?
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The authors have discussed important limitations of the work, and also a well thought-out section on the broader impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and thorough review, and are grateful for their appreciation of the work. We address below the review points:
## Q1: Could we gain readability from moving ln101-113 into Section 3?
Thank you for mentioning this. Lines 101-113 were indeed a remnant from a previous version of Section 2.1, and now that you point it out we wholeheartedly agree that it would be better to move them to Section 3, more specifically Section 3.2.
## Q2: What are some other applications that could potentially benefit from the proposed CAM? How generalizable is the metric to measure other abstractions, or what problems may arise in that case?
The CAM is general enough to measure any abstraction without requiring any extension. However, we realise from your question that our current application of it in the main paper’s experimental section inadvertently conflates it with the distance metric we built over the CAM measures to make the analysis of our experiment easier to follow: we compute CAM distances between ELs and the oracle-based languages, as opposed to showing the CAM **measures** directly. In order to emphasise the generality of our CA metric and clearly distinguish it from the distance metric we built over it, we propose to add an extra paragraph in Section 4, after the Natural Language Oracles paragraph, entitled **CAM-based Distance Metric**, to clearly highlight this extra layer of analysis that we add but that is specific to our MiniGrid’s NL Oracles.
## Q3: What would be the potential issues with using CAM for 3D environments or robotic settings?
There is no issue with using the CAM for 3D environments; we actually provide such results in Appendix E.1 and E.2 using the 3D environment MiniWorld. As far as robotic settings are concerned, we do not provide any results, but would like to emphasise again that our proposed CAM makes no assumption about the type of stimuli being considered apart from the fact that they are temporally correlated, i.e. that they can be understood as multiple video streams of any duration, and should therefore be general enough to accommodate any setting we can think of. We point at those results in 3D environments at the end of Section 3.1 about the CAM, but in order to highlight them further we propose to include another reference to those results in the Limitations paragraph of Section 4, where we discuss that the EReLELA architecture is yet to be externally validated in 3D environments.
---
Rebuttal Comment 1.1:
Title: Response to Author Rebuttal
Comment: Thanks for your response! I will maintain my assessment of the paper.
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: Dear Reviewer,
Thank you for your acknowledgment of our rebuttal.
Please do not hesitate to let us know more of your thoughts towards refining the paper if any comes about.
Thank you once again for your time and consideration. | Summary: EReLELA investigates whether asking the agent to learn and describe the environment through emergent language (EL) can help with hard exploration tasks, compared to using natural language (NL) descriptions alone.
I personally find this angle interesting and refreshing -- and the connection to count-based exploration bonus is novel.
In theory, EL should work better than NL because, by definition of pragmatics, through the referential game (RG) the description from EL should be more compact and discriminative than NL. However, I appreciate the authors' honesty in pointing out that there was no significant difference between EL and NL.
I thought back and forth about whether to accept or reject this paper. My current stance is that -- if the authors cannot substantially rewrite the experiment section and update their figures to make their conclusions very easy to understand, I don't think this paper meets the bar of acceptance.
Strengths: 1. The direction is novel. The idea and execution are both solid. Ablations are great.
2. The first 4 pages of writing (intro, related work, background) are clear.
3. The experimental hypotheses are very clear (H1/H2/H3) and reasonable
4. Evaluation environments (KeyCorridor of MiniGrid) make sense and is commonly used.
Weaknesses: 1. I can only vaguely understand CAM. The way it is described is still very confusing to me, and I already have some background on speaker-listener models. I recommend the authors consider rewriting it with a general audience in mind -- maybe present an algorithm box that shows how it's computed? Currently this section interleaves intuitions with the actual procedure. Maybe separate them to some extent? (I see Appendix F/G is about agent architecture and the RG. Would you consider condensing the paper and moving these two sections into the main text?)
2. Sec 3.2 is very brief.
3. Experiment figures are labeled in a way that is beyond confusing. I would urge the authors not to use `Agnostic STGS-Lazlmpa-5-1 ELA+AccThresh=90+Distr=256+UnifDSS` as labels in their figures. Such a label is fine for internal presentations/reports, but it is difficult for reviewers to quickly understand what the figure is saying.
My main concern of this paper is not about the content nor the experiment, just about the presentation.
Technical Quality: 3
Clarity: 2
Questions for Authors: N/A
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and thorough review. Please refer to the global rebuttal answer extensively as most of the reviewing points raised were addressed there. We address below the remaining one.
Please let us know if there are improvements you can think of, in light of our rebuttals and the other reviews, that would substantially raise your appreciation of the work.
## Regarding the labels on experimental figure:
We thank you for this comment, we entirely agree and will make the change to more sensible labels.
---
Rebuttal Comment 1.1:
Title: Kind request for your input on our rebuttal
Comment: Dear Reviewer,
As the discussion period is nearing its end, we kindly request your input on our rebuttal. Your further engagement would be extremely helpful in refining the paper.
We look forward to hearing your thoughts and addressing any remaining questions or concerns you may have.
Thank you once again for your time and consideration. | Rebuttal 1:
Rebuttal: We thank reviewers for their time and thorough reviews. We believe that their advice will greatly improve the paper.
In this global rebuttal answer, we will address the main review points, shared among reviewers. We will address the remaining points in individual rebuttals to each reviews.
## Clarity issues:
As highlighted by all reviewers, Section 3 is difficult to understand possibly because:
1. the CAM details mix procedural and intuitive details;
2. the CAM details lack a precise list of the assumptions made about the inputs of the metric;
3. both the CAM and the EReLELA sections lack a simple, short, clear, high-level intuitive explanation;
4. the related figures (1, 2 and 3) are unclear and confusing.
In order to address those clarity issues, we propose the following:
1. Following the reviewers' advice, we will clearly separate procedural from intuitive details in the CAM section by including an Algorithm specifying how to compute the metric, and by updating Figure 2 for more clarity;
2. We updated figure 3 (cf. rebuttal PDF) to provide a bigger picture viewpoint about the whole RL feedback loop and how EReLELA's components are organised around the underlying RL algorithm and the environment, as well as the relationship between the RG and the intrinsic reward generator that uses a count-based exploration method.
3. We propose some simplified, short, clear, and high-level descriptions of both EReLELA and CAM and we will add them at the beginning of their related subsections:
EReLELA is a wrapper around any off-/on-policy RL algorithm that augments the reward signal by linearly combining the original extrinsic reward signal with an intrinsic reward signal derived using an intra-life count-based exploration method, which relies on a state abstraction obtained from the speaker agent of a referential game, effectively embedding complex, high-dimensional observations/states into captions in the emergent language learned during referential game training.
The Compactness Ambiguity Metric is a metric to characterise the kind of state abstraction that a language performs over a state space, in particular an RL state space but not limited to it. It assumes that the state space can be organised as a set of video streams, that is to say sequences of temporally-correlated states. Along with these sequences, it takes as input the timestamp-aligned state abstractions/linguistic descriptions of each state in the language whose abstraction power is being evaluated. It relies on sorting those linguistic descriptions into a histogram's buckets corresponding to the temporal interval over which a given linguistic description remained unchanged.
4. We will remove figure 1 in order to make space to include some elements of Appendix G into Section 3.2, and formally clarify how the intrinsic reward is generated from the EL state abstraction.
## Limited comparison to related competitive baselines:
As highlighted by **Reviewer ra5c** and **1YKp**, the paper lacks comparison to related works' competitive baselines. We agree that including further baselines would strengthen the paper and propose to address it in four ways:
1. We will clarify that our NLA agent is adapted from [3:Tam et al., 2022] and is here to enable a fair comparison to a competitive/SotA approach.
2. We will include in Figure 4 results on KeyCorridorS3R2 from [1:Raileanu et al., 2020] and [2:Zha et al., 2021], using horizontal dotted lines rather than the learning curves themselves in order to clarify that the comparison is not entirely fair since they rely on on-policy RL algorithms whereas we present results using an off-policy RL algorithm.
3. We hope to be able to include results on KeyCorridorS3R3 in order to enable a performance comparison with the SotA approaches from [paper 51:Mu et al., 2022], provided that we can run sufficiently many seeds before a possible camera-ready version of the paper (up to a 20M observation sampling budget). Our preliminary results show that EReLELA increases the sample-efficiency by roughly 2x compared to [paper 51]'s best-performing L-AMIGo.
4. We will include further details and discussion about related works from [1,2,4] and highlighting how our method differs, in the extra page of a possible camera-ready version.
## References:
[1]: Raileanu, Roberta, and Tim Rocktäschel. "RIDE: Rewarding Impact-Driven Exploration for Procedurally-Generated Environments." International Conference on Learning Representations 2020.
[2]: Zha, Daochen, et al. "Rank the Episodes: A Simple Approach for Exploration in Procedurally-Generated Environments." International Conference on Learning Representations 2021.
[3]: Tam, Allison, et al. "Semantic exploration from language abstractions and pretrained representations." Advances in neural information processing systems 35 (2022): 25377-25389.
Pdf: /pdf/a0ce383028ca553ceba13ae384c2f61bf226992c.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Spectral Adapter: Fine-Tuning in Spectral Space | Accept (poster) | Summary: In summary, this paper investigates advancements in Parameter-Efficient Fine-Tuning (PEFT) for pre-trained neural networks by integrating spectral information from pretrained weights, aiming to enhance the classic LoRA approach. By employing Singular Value Decomposition (SVD), the authors introduce two spectral adaptation techniques: additive tuning and orthogonal rotation of the top singular vectors.
Strengths: The topic of LoRA adaptation addressed in this paper is valuable due to its applicability to a wide range of high-level and low-level tasks. The authors introduce two variants of the proposed enhanced LoRA method. In specific datasets and tasks, these variants demonstrate improvements over state-of-the-art (SOTA) methods.
Weaknesses: There are some typos that need to be corrected. For example, a spelling mistake in the word "Apppendix" in line 32, it should be corrected to "Appendix." Additionally, the word "digged" in line 20 should be corrected to "dug" to use the proper past tense of the verb "dig.".
It is not common to put the whole literature review in Appendix.
In line 53, the phrase "orthogonal rotating the top singular vector space." This phrase should be corrected to "orthogonally rotating the top singular vector space" to properly use the adverbial form "orthogonally," which modifies the verb "rotating."
Additionally, if the spectral space is modified, the rank will also change. This implies that the optimal ranks of the two spectral spaces differ, making it unfair to compare different LoRA-based methods using the same rank.
There is a logic error in the claim stated in lines 82-93: Specifically, the statement that "these methods require storing all U, S and V during training while only the diagonal vector of S is tuned, which nearly doubles the storage requirement compared to pretraining when fine-tuning on downstream tasks" is misleading. Storing all components (U, S, and V) does indeed increase storage, but it's not clear why this would "nearly double" the storage requirement. The increase in storage would depend on the specifics of the matrix dimensions and the storage format. The phrase needs to clarify how the storage requirement nearly doubles to avoid logical inconsistency.
Technical Quality: 3
Clarity: 3
Questions for Authors: Literature [36, 4, 56] already studied the spectral space of model weights; it is not clear what is new in this paper, and highlighting the difference would make the contribution of this paper much clearer.
The inclusion of the revised Singular Value Decomposition (SVD) based fine-tuning, as compared to classic LoRA, is time-consuming and the computational burden increases with the number of fine-tuned blocks or layers. How does the paper address this issue?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Instead of fine-tuning the singular values of the weights, this paper proposes fine-tuning the singular vectors of the weights. However, the motivation behind this approach is not convincingly presented. For example, The motivation for fine-tuning the singular value vectors is not clearly articulated. As we know, U is an orthogonal matrix representing the left singular vectors, S is a diagonal matrix of singular values, and V is an orthogonal matrix representing the right singular vectors. A critical question that arises is whether U and V maintain their orthogonality after fine-tuning. Orthogonal U and V matrices provide an optimal basis for representing the weight matrices. Losing orthogonality results in a suboptimal basis, which can lead to less efficient representations of the neural network weights. Also, without orthogonality, the interpretability and distribution of the singular values will also be affected. Furthermore, orthogonal matrices are numerically stable and well-conditioned, meaning small changes in the data lead to small changes in the results. Without orthogonality, the resulting matrices may become ill-conditioned, causing numerical instability and issues in the training and optimization algorithms.
Additionally, the key contribution of the paper lacks clarity and contains logical errors. It would be helpful if the authors could address these concerns and provide further clarification.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the careful reviews. Given the rebuttal character limit, we address $\textbf{most important questions}$ here, followed by some $\textbf{other comments later}$.
$\textbf{Weaknesses 4.}$ Additionally, if the spectral space is modified, the rank will also change. This implies that the optimal ranks of the two spectral spaces differ, making it unfair to compare different LoRA-based methods using the same rank.
$\textbf{Answer to Weaknesses 4.}$ The general goal of parameter-efficient fine-tuning is simply to achieve better training loss (and hopefully validation loss as well) with fewer trainable parameters; it is thus not to decide what the optimal rank is or how the spectral space changes. Our proposed method shows better train/test performance (e.g., Figure 1), leads to better model fusion results (e.g., Figure 5), and is more parameter-efficient (e.g., Figure 6). It only requires one round of SVD of the weight matrices, which induces negligible overhead (e.g., Figure 7).
Moreover, we are not simply comparing with LoRA or LoRA-based models, we compare with state-of-the-art fine-tuning models including OFT which is less like LoRA, and another spectral adaptation mechanism SVDiff. See Figure 6 for example.
We acknowledge it's possible that optimal ranks for different fine-tuning models may differ, thus we test over different ranks. See Figure 1 and 8 for results with rank 4, 8, and 64, which are most commonly used ranks in nowadays fine-tuning tasks. Our method is better than LoRA/LoRA-based/other methods for all cases.
$\textbf{Weaknesses 5.}$ There is a logic error in the claim stated in lines 82-93: Specifically, the statement that "these methods require storing all U, S and V during training while only the diagonal vector of S is tuned, which nearly doubles the storage requirement compared to pretraining when fine-tuning on downstream tasks" is misleading. Storing all components (U, S, and V) does indeed increase storage, but it's not clear why this would "nearly double" the storage requirement. The increase in storage would depend on the specifics of the matrix dimensions and the storage format. The phrase needs to clarify how the storage requirement nearly doubles to avoid logical inconsistency.
$\textbf{Answer to Weaknesses 5.}$ For a simple demonstration, assume we have a full-rank weight matrix $W\in\mathbb{R}^{n\times n}$. After singular value decomposition, we get $U\in\mathbb{R}^{n\times n}$, $S\in\mathbb{R}^n$, $V\in\mathbb{R}^{n\times n}$. SVDiff proposes to tune all singular values, thus it requires storing all of $U,S,V$ in their full format, which results in $(2n^2+n)\cdot\text{sizeof(float)}$ storage overhead plus $n$ trainable parameters. On the contrary, our spectral adapter tunes only the top-$r$ columns of $U$ and $V$; thus we form and store $W_2=U[r:]\text{diag}(S[r:])V[r:]^T$ together with the top-$r$ columns of $U,V$ and the top-$r$ elements of $S$. In our training, $W_2$ can be summed up with the top spectral part. This paradigm results in $(n^2+2rn+r)\cdot\text{sizeof(float)}$ storage overhead with an additional $2nr$ trainable parameters for the additive spectral adapter and $2r^2$ trainable parameters for the rotational spectral adapter. Since $n$ is usually much larger than $r$ when training PEFT models, SVDiff requires $\sim 2n^2\cdot\text{sizeof(float)}$ storage, which doubles the $\sim n^2\cdot\text{sizeof(float)}$ storage needed by our spectral adapter. This storage explosion is also observed in our empirical training of SVDiff compared to the spectral adapter.
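As a quick numerical sanity check on this near-doubling claim, one can count stored floats for both schemes (the values of `n` and `r` below are illustrative, not from the paper):

```python
def svdiff_storage(n):
    # Full U (n*n) and V (n*n) plus the singular-value vector S (n).
    return 2 * n * n + n

def spectral_adapter_storage(n, r):
    # Frozen residual W2 (n*n), top-r columns of U and V (2*r*n),
    # and the top-r singular values (r).
    return n * n + 2 * r * n + r

n, r = 4096, 8  # a typical hidden size and adapter rank
print(svdiff_storage(n) / spectral_adapter_storage(n, r))  # ≈ 1.99
```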
$\textbf{Questions 1.}$ Literature [36, 4, 56] already studied the spectral space of model weights; it is not clear what is new in this paper, and highlighting the difference would make the contribution of this paper much clearer.
$\textbf{Answer to Questions 1.}$ [36] studies specifically the singular value distribution of the weight matrices of LLMs; it is more of a theoretical work in the field of statistical ML. The conclusion there is that the singular value distribution is more structured for larger models and varies across different training phases. [56] has nothing to do with the spectral space of weight matrices; it only observes that strong attention scores are often attached to initial tokens in LLM training, which they dub "attention sink". [4] establishes a connection between attention sink and the spectral space of weight matrices; it suggests that the bottom spectral space of NN weights plays a critical role in the attention sink phenomenon. None of these works proposes to exploit the spectral space of NN weights in fine-tuning tasks. Thus the difference between these works and ours is substantial.
$\textbf{Questions 2.}$ The inclusion of the revised Singular Value Decomposition (SVD) based fine-tuning, as compared to classic LoRA, is time-consuming and the computational burden increases with the number of fine-tuned blocks or layers. How does the paper address this issue?
$\textbf{Answer to Questions 2.}$ We note that the overhead brought by computing the SVD of the pretrained weights is a one-time cost and can be cached. After the matrix decomposition is obtained and stored, online training induces a burden similar to the classic LoRA setting. To further investigate the practicality of our proposed method, we include in Section 4.4 both runtime and storage comparisons of our method against LoRA. With models of current size, the runtime overhead induced by the SVD procedure is marginal compared to the training time. Meanwhile, the GPU storage needed by our method is close to LoRA's. We have also discussed both online and offline training and testing costs in Appendix E.
---
Rebuttal 2:
Comment: $\textbf{Weaknesses 1.}$ There are some typos that need to be corrected. For example, a spelling mistake in the word "Apppendix" in line 32, it should be corrected to "Appendix." Additionally, the word "digged" in line 20 should be corrected to "dug" to use the proper past tense of the verb "dig".
$\textbf{Answer to Weaknesses 1.}$ Thanks for pointing out these issues, we've corrected the spelling and the tense of the word in our revision.
$\textbf{Weaknesses 2.}$ It is not common to put the whole literature review in Appendix.
$\textbf{Answer to Weaknesses 2.}$ The decision to put the literature review into the appendix is due to the page limit; we will move it into the main content with the larger page budget of the camera-ready version if our work gets accepted. Moreover, since we already include baseline method explanations and all their relevant properties whenever a prior method is compared against (e.g., the bottom of page 6 and the middle of page 8, with Table 3 summarizing the key features), we believe that postponing the literature review to the appendix doesn't affect reading and understanding. Also, our introduction (Section 1) and methodology (Section 2) already draw connections to the most relevant work; the literature review section (Appendix A) includes additional works that are less relevant and is presented for completeness and to credit all related work in related research fields.
$\textbf{Weaknesses 3.}$ In line 53, the phrase "orthogonal rotating the top singular vector space." This phrase should be corrected to "orthogonally rotating the top singular vector space" to properly use the adverbial form "orthogonally," which modifies the verb "rotating."
$\textbf{Answer to Weaknesses 3.}$ Thanks for the suggestion, we've corrected the phrase in our revision.
$\textbf{Limitations.}$ Instead of fine-tuning the singular values of the weights, this paper proposes fine-tuning the singular vectors of the weights. However, the motivation behind this approach is not convincingly presented. For example, The motivation for fine-tuning the singular value vectors is not clearly articulated. As we know, U is an orthogonal matrix representing the left singular vectors, S is a diagonal matrix of singular values, and V is an orthogonal matrix representing the right singular vectors. A critical question that arises is whether U and V maintain their orthogonality after fine-tuning. Orthogonal U and V matrices provide an optimal basis for representing the weight matrices. Losing orthogonality results in a suboptimal basis, which can lead to less efficient representations of the neural network weights. Also, without orthogonality, the interpretability and distribution of the singular values will also be affected. Furthermore, orthogonal matrices are numerically stable and well-conditioned, meaning small changes in the data lead to small changes in the results. Without orthogonality, the resulting matrices may become ill-conditioned, causing numerical instability and issues in the training and optimization algorithms.
$\textbf{Answer to Limitations.}$ Thanks for raising this point. In our paper, we propose two spectral fine-tuning paradigms: an additive version and a rotational version. Our Spectral Adapter$^R$ applies a trainable rotation to the top-$r$ left and right singular vectors and preserves orthogonality. This is achieved by using the differentiable Cayley parameterization. Our Lemma 4.1 proves that Spectral Adapter$^R$ with the Cayley parameterization maintains orthogonality of the singular vectors. This follows by parameterizing a skew-symmetric matrix $Q=(A-A^T)/2$ and letting the rotation matrix be $(I+Q)(I-Q)^{-1}$. In Appendix C, we provide a proof that $(I+Q)(I-Q)^{-1}$ is orthogonal.
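The orthogonality of this construction is easy to verify numerically; the sketch below (ours, not the paper's code) checks that the Cayley transform of a skew-symmetric matrix is orthogonal:

```python
import numpy as np

rng = np.random.default_rng(0)
r = 8
A = rng.standard_normal((r, r))

# Cayley parameterization: skew-symmetric Q, then R = (I + Q)(I - Q)^{-1}.
# I - Q is always invertible since Q's eigenvalues are purely imaginary.
Q = (A - A.T) / 2
I = np.eye(r)
R = (I + Q) @ np.linalg.inv(I - Q)

print(np.allclose(R.T @ R, I))  # True: R is orthogonal up to float error
```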
On the other hand, our second version, Spectral Adapter$^A$, drops the orthogonality constraint. However, after fine-tuning, the weights remain close to orthogonal, since only the top-$r$ part gets updated and the gradient updates are of small magnitude. We find in our experiments that both adapters work very well and outperform low-rank adapters. For this version, the reviewer is correct that dropping orthogonality invalidates the optimality of the SVD in low-rank matrix approximation problems. However, in fine-tuning foundation models, a requirement of orthogonality of the fine-tuned weights is hard to justify. Meanwhile, we agree with the reviewer that the interpretability of the fine-tuned model is improved when the fine-tuned weights are orthogonal; this is provided by our Spectral Adapter$^R$. If one wants to perform interpretability tasks that require singular values with our Spectral Adapter$^A$, one can simply re-compute the SVD of the tuned matrix and continue the analysis with the new set of bases.
---
Rebuttal Comment 2.1:
Title: Response to author rebuttal
Comment: The reviewer has carefully examined both the authors' rebuttal and the feedback from other reviewers. While many of the concerns have been addressed satisfactorily, the implementation of the revised Singular Value Decomposition (SVD)-based fine-tuning method, in contrast to the traditional LoRA approach, remains time-intensive. Furthermore, the computational load escalates with each additional fine-tuned block or layer. The authors have not proposed a solution to this issue in the current version. However, the overall quality of the manuscript has indeed improved post-rebuttal compared to the initial submission. Consequently, the reviewer has decided to increase the initial evaluation score.
---
Rebuttal 3:
Comment: Please let us know whether our replies address all your concerns. If so, could you please kindly consider increasing the score? If not, we are willing to address any other question in more details. Thanks! | Summary: The paper proposes fine-tuning pretrained model weights in the spectral space for parameter efficiency. It explores two spectral adaptation mechanisms: additive tuning and orthogonal rotation of top singular vectors. The authors introduce these methods, providing theoretical analyses on rank capacity and robustness to support their approach. Experiments on language and diffusion model fine-tuning demonstrate the proposed method's superiority over previous parameter-efficient fine-tuning techniques.
Strengths: 1. The proposed spectral adapters, which introduce spectral adaptation, are interesting.
2. The theoretical analysis showing that spectral adapters have a larger rank capacity than LoRA is reasonable.
3. Experiments on language and diffusion model fine-tuning demonstrate that the proposed method outperforms other parameter-efficient fine-tuning techniques while maintaining efficiency.
Weaknesses: 1. The analysis of spectral adaptation robustness in Section 3.2 could be clearer. It would be helpful to provide what $\mathcal{R}(X)$ denotes and more clearly explain why fine-tuning $u^*$ is considered noiseless.
2. The paper's organization might benefit from some restructuring. Consider moving certain content, such as Lemma 4.1 and Table 3, from the experiments section to the methods section. Additionally, restructuring the experiments section could improve clarity.
3. While spectral adaptation is proposed, it would be valuable to more clearly demonstrate in the methods section how this approach is superior to prior works like SVDiff and OFT.
4. In Figures 1 and 8, using validation loss instead of training loss only for comparing PEFT methods could provide more meaningful insights.
5. It might be beneficial to discuss limitations in a separate section rather than within the Experiments section.
6. Including quantitative measures alongside the qualitative comparisons in the image generation results could strengthen the analysis.
Technical Quality: 3
Clarity: 2
Questions for Authors: N/A
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors mention some limitations in the checklist part.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the careful reviews. Given the rebuttal character limit, we address the $\textbf{most important questions}$ here, followed by $\textbf{other comments}$ later.
$\textbf{Weaknesses 1.}$ The analysis of spectral adaptation robustness in Section 3.2 could be clearer. It would be helpful to provide what $\mathcal R(X)$ denotes and more clearly explain why fine-tuning $u^\ast$ is considered noiseless.
$\textbf{Answer to Weaknesses 1.}$ Sorry about the confusion. Here $\mathcal R(X)$ denotes the row space of the data matrix $X$, i.e., $\mathcal R(X):=\text{row}(X).$ We have added this definition in our revision.
By "noiseless'', we mean the following: the theory tells us that, for the toy example considered in Section 3.2, the optimal weights need to lie in a certain subspace. In practice, however, they can deviate from it due to optimization errors. We find this subspace via SVD, which effectively denoises the weights, and then perform fine-tuning within it. To explain the math derivation in Section 3.2 in more detail, we provide a concrete example in the official comment below.
$\textbf{Weaknesses 4.}$ In Figures 1 and 8, using validation loss instead of training loss only for comparing PEFT methods could provide more meaningful insights.
$\textbf{Answer to Weaknesses 4.}$ In Figure 1, we provide the training loss plot together with the test accuracy plot in the right panel (evaluated on the GSM8K benchmark using the lm-evaluation-harness, https://github.com/EleutherAI/lm-evaluation-harness). It can be seen that the spectral adapter achieves both lower training loss and higher test accuracy. For Figure 8, since the reviewer raised this point, we have added similar test scores in our ``one-page attached pdf (left two panels in Section B)``. We have also included these new figures in our revision.
$\textbf{Weaknesses 5.}$ It might be beneficial to discuss limitations in a separate section rather than within the Experiments section.
$\textbf{Answer to Weaknesses 5.}$ Thanks for the suggestion; we will add a separate limitations section in our revision. The main potential limitation of the proposed method is the overhead of computing the SVD of weight matrices. However, for most modern networks (Stable Diffusion, Llama 3, and Mistral) this is tractable since the layer weight dimensions are moderate; see Figure 7, which shows that the SVD time/memory overhead is negligible. For certain other models with larger weights, our method may require more resources.
Another limitation is that we focus only on tuning the top spectral space, although we do investigate tuning other parts of the spectrum in places, i.e., Figures 9, 10, 11, Section 4.2, and ``Section B (right most plot) of our added one-page pdf``. There might be cases where the minor spectral space should be changed (as pointed out by reviewer Nthn), and the top spectral space might also shift during training (as pointed out by reviewer N6x8). Though the proposed method works well empirically, more advanced spectral space scheduling techniques are worth exploring.
$\textbf{Weaknesses 6.}$ Including quantitative measures alongside the qualitative comparisons in the image generation results could strengthen the analysis.
$\textbf{Answer to Weaknesses 6.}$ Thanks for the suggestion. To consolidate our work further, we have provided quantitative measurements for the diffusion model results in our attached one-page pdf, and we have also appended these new results in a revision of the paper. For evaluation, we follow the same metrics as papers [12] and [42], where the cosine similarity between CLIP vectors is used as the distance measure. Specifically, in ``Section A.1 in that pdf``, we provide quantitative results for our Figure 5. For the text alignment score, we compute with the following prompts corresponding to each column:
1) "two dogs and a cat in front of Mount Fuji",
2) "two dogs and a cat in a galaxy",
3) "two dogs and a cat on a playground".
For the image alignment score, since we generate multi-character images, we crop each generation vertically into three parts, each containing a single animal. We then compute the alignment score of each cropped part against reference images of the same animal. For column 1, FedAvg scores best, but its average is only 0.0005 higher than ours. For columns 2 and 3, our method achieves the highest average and is around 0.01 better than the other methods.
``Section A.2 of the attached pdf`` contains quantitative results for our Figure 6. We include both the trend curve of alignment score vs. parameter budget (left panel) and the exact scores (right panel; please expand to see the numbers). We also shade the region of trainable parameter budgets achievable only by our method. Notably, our method already generates sensible images with very few parameters ($\leq$ 20K), a budget no other method can reach. For larger parameter budgets, our method generates images of quality comparable to SOTA. This exemplifies the parameter efficiency of our method.
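To make the metric concrete, here is a minimal sketch of the alignment score described above: cosine similarity between embedding vectors. In real use, the vectors would be CLIP encodings of a generated image and its prompt (or reference image); the helper name and the placeholder vectors below are ours.

```python
import numpy as np

# Minimal sketch of the alignment metric: cosine similarity between
# CLIP embeddings. Here `emb_a` and `emb_b` are placeholder vectors;
# in practice they would come from a CLIP image/text encoder.
def alignment_score(emb_a, emb_b):
    a = np.asarray(emb_a, dtype=float)
    b = np.asarray(emb_b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(alignment_score([1.0, 0.0], [1.0, 0.0]))  # identical embeddings -> 1.0
print(alignment_score([1.0, 0.0], [0.0, 1.0]))  # orthogonal embeddings -> 0.0
```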
---
Rebuttal 2:
Comment: $\textbf{Additional Answer to Weaknesses 1.}$ consider the same two-layer ReLU model trained for minimizing squared loss, i.e.,
$\qquad\qquad\qquad\qquad\qquad \min_{W^{(1)},W^{(2)}} \|(XW^{(1)})_+W^{(2)}-y\|_2^2+\beta\left(\|W^{(1)}\|_F^2+\|W^{(2)}\|_2^2\right)$,
where $X\in\mathbb{R}^{n\times d},W^{(1)}\in\mathbb{R}^{d\times m},W^{(2)}\in\mathbb{R}^{m}, y\in\mathbb{R}^n.$ Now we decompose each first-layer neuron $W_j^{(1)}\in\mathbb R^d$ (there are $m$ of them in total) into two parts $W_j^{(1)}=w_{j1}+w_{j2}$, where $w_{j1}$ lies in the row space of $X$ and $w_{j2}$ is perpendicular to it, i.e., $w_{j1}\in\mathcal R(X)$ and $w_{j2}\perp\mathcal R(X).$ This implies $Xw_{j2}=0$, so $w_{j2}$ contributes nothing to reducing the first quadratic loss term above. Since the second term is a non-negative weight decay penalty, training to optimality then forces $w_{j2}=0$.
Therefore, when the minimal loss is achieved, all first-layer neurons should lie in the row space of $X$, i.e., $W_j^{(1)}=w_{j1}\in\mathcal R(X),~\forall j.$ However, due to optimization errors, some neurons may not be exactly aligned with the row space of $X$ and may have a small component in the perpendicular direction. For demonstration, consider a toy example with two data points, each of dimension three, i.e., $X\in\mathbb R^{2\times 3}$, taking the value
$$X=\begin{bmatrix}
1 & 0 & 0 \\
0 & 1 & 0
\end{bmatrix}.$$
Consider the case of three neurons, with the first and second neurons in the row space of $X$ and the third perpendicular to it with a small value:
$$
W = \begin{bmatrix}
5 & 0 & 0 \\
0 & 7 & 0 \\
0 & 0 & 0.1
\end{bmatrix}.
$$
Then, using torch.svd on $W$ results in
$$
U = \begin{bmatrix} 0 & 1 & 0\\ 1 & 0 & 0\\ 0 & 0 & 1 \end{bmatrix}, \quad S = \begin{bmatrix}7 & 5 & 0.1\end{bmatrix}, \quad V=\begin{bmatrix}0 & 1 & 0\\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}.
$$
The top, second, and third rank-1 spectral components $u_i s_i v_i^\top$ would be
$$\text{top: } \begin{bmatrix}0\\1\\0\end{bmatrix} \begin{bmatrix}7\end{bmatrix} \begin{bmatrix}0 & 1 & 0\end{bmatrix}=\begin{bmatrix}0 & 0 & 0\\ 0 & 7 & 0\\ 0 & 0 & 0\end{bmatrix}, \quad \text{second: }\begin{bmatrix}1\\0\\0\end{bmatrix}\begin{bmatrix}5\end{bmatrix}\begin{bmatrix}1 & 0 & 0\end{bmatrix}=\begin{bmatrix}5 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0\end{bmatrix}, \quad \text{third: }\begin{bmatrix}0\\0\\1\end{bmatrix}\begin{bmatrix}0.1\end{bmatrix}\begin{bmatrix}0 & 0 & 1\end{bmatrix}=\begin{bmatrix}0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0.1\end{bmatrix}.$$
Therefore, if we focus on tuning the top two directions, we only deal with components aligned with the row space of $X$ and are thus protected against small optimization errors; this is what we consider more robust and refer to as "noiseless''.
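The toy example above can be checked numerically. The following sketch (using `numpy.linalg.svd` in place of `torch.svd`) verifies that the small "noise" direction perpendicular to the row space of $X$ is isolated in the last spectral component:

```python
import numpy as np

# Toy example from the discussion: two data points in R^3, so the row
# space of X is span{e1, e2}.
X = np.array([[1., 0., 0.],
              [0., 1., 0.]])

# Three first-layer neurons; the third has a small component
# perpendicular to the row space of X (the "optimization error").
W = np.diag([5., 7., 0.1])

U, S, Vt = np.linalg.svd(W)   # singular values sorted in descending order
print(S)                       # approximately [7., 5., 0.1]

# Rank-1 spectral components u_i * s_i * v_i^T
components = [S[i] * np.outer(U[:, i], Vt[i]) for i in range(3)]

# The top-2 components lie entirely in the row space of X, while the
# noisy perpendicular direction is pushed into the last component.
assert np.allclose(components[0], np.diag([0., 7., 0.]))
assert np.allclose(components[1], np.diag([5., 0., 0.]))
assert np.allclose(components[2], np.diag([0., 0., 0.1]))
```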
$\textbf{Weaknesses 2.}$ The paper's organization might benefit from some restructuring. Consider moving certain content, such as Lemma 4.1 and Table 3, from the experiments section to the methods section. Additionally, restructuring the experiments section could improve clarity.
$\textbf{Answer to Weaknesses 2.}$ Thanks for the suggestions. Our current structure is: methodology (Section 2), theory (Section 3), experiments (Section 4), where the experiments are split further into language models (Section 4.1), diffusion models with the additive spectral adapter (Section 4.2), diffusion models with the rotational spectral adapter (Section 4.3), and runtime comparison (Section 4.4). We have worked hard on the layout and deliberately kept the methodology part (Section 2) short so that the newly proposed methods are presented clearly and succinctly. We discuss baselines separately in Section 4.2 (bottom of page 6) and Section 4.3 (middle of page 8, including Table 3) because the two sections compare against two different sets of baseline methods, which is why Table 3 is not in the methodology section. The baselines diverge because we study different characteristics of the fine-tuned model, i.e., model fusion in Section 4.2 and parameter efficiency in Section 4.3, so in each section we compare with the prior methods most proficient in that aspect.
Lemma 4.1 is indeed closer to methodology, but since it is specific to the rotational adapter, we placed it in Section 4.3. We will consider moving it to the methodology section or creating a separate implementation section for this detail.
We thank the reviewer again for raising the structure issue, which suggests that our current layout may still cause some confusion; we will further optimize it in our revision.
---
Rebuttal 3:
Comment: $\textbf{Weaknesses 3.}$ While spectral adaptation is proposed, it would be valuable to more clearly demonstrate in the methods section how this approach is superior to prior works like SVDiff and OFT.
$\textbf{Answer to Weaknesses 3.}$ Comparison/advantage over SVDiff: idea-wise, SVDiff proposes to tune only the singular values, which is more constrained than our method, which tunes the singular vector space. To see this, first, tuning singular values can be achieved by tuning the singular vector space (just tune the scale of each row/column pair in $U$ and $V$); second, tuning singular values leaves the row/column spaces unchanged, whereas tuning the singular vector space has more freedom. Practically, SVDiff requires storing $U,S,V$ in full, while our spectral adapter tunes only the top spectral components, so the bottom part can be merged. For example, consider a full-rank weight matrix of dimension $n\times n$; after singular value decomposition we have $U\in\mathbb R^{n\times n}, S\in\mathbb R^n, V\in\mathbb R^{n\times n}$. SVDiff must store all of $U,S,V$, taking $(2n^2+n)\cdot$sizeof(float) GPU storage. For our spectral adapter tuning the top-$r$ spectral components, we only store the top-$r$ parts of $U, S, V$ and merge the rest as $W_2=U[r:]S[r:]V[r:]^T$, which is summed with the top part. Our method thus requires $(n^2+2rn+r)\cdot$sizeof(float) storage, roughly half of SVDiff's. Note we cannot go below $n^2\cdot$sizeof(float), which is needed to store the original weight matrix itself.
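The top/bottom split and the storage comparison above can be sketched as follows (a minimal numpy illustration with our own variable names, not the actual implementation):

```python
import numpy as np

# Keep the top-r SVD factors of a weight matrix as trainable tensors and
# merge the remaining components into a single frozen residual matrix.
n, r = 64, 4
W = np.random.randn(n, n)

U, S, Vt = np.linalg.svd(W)
U_r, S_r, Vt_r = U[:, :r], S[:r], Vt[:r]   # trainable top-r factors
W_rest = (U[:, r:] * S[r:]) @ Vt[r:]       # frozen, merged residual W_2

# Reconstruction check: top part + residual recovers W exactly.
assert np.allclose((U_r * S_r) @ Vt_r + W_rest, W)

# Storage comparison in floats: SVDiff keeps full U, S, V.
svdiff_floats   = 2 * n * n + n            # (2n^2 + n)
spectral_floats = n * n + 2 * r * n + r    # (n^2 + 2rn + r)
assert spectral_floats < svdiff_floats
```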
Comparison/advantage over OFT: OFT proposes to tune the row space of the weight matrix by rotating it orthogonally. This is related to our rotational spectral adapter, which rotates the top columns of the singular vector matrices. Intuitively, OFT preserves the distances between neurons, while our rotational spectral adapter preserves the orthogonality of the singular vector basis. These are two different lines of reasoning, so it is hard to claim our method is strictly superior to OFT in a rigorous sense. However, our method does have an advantage in how it achieves parameter efficiency. Vanilla OFT is not parameter-efficient: for any weight matrix $W\in\mathbb R^{n\times n}$, it needs an orthogonal matrix $A\in\mathbb R^{n\times n}$ of the same size and uses $AW$ in place of $W$. The OFT paper injects parameter efficiency manually by constraining $A$ to be block diagonal with orthogonal blocks, and the authors claim this constraint has little effect in their empirical experiments. Moreover, with such a block diagonal $A$, the trainable parameter budget is constrained to $r^2\cdot(n/r)$, where the block size is $r\times r$ and $r$ must divide $n$. In contrast, our rotational spectral adapter orthogonally rotates the top $r$ columns of the $U$ and $V$ matrices, so we are guaranteed more parameter budget choices, i.e., our $r$ can take any value between $1$ and $n$, and our method is naturally parameter-efficient.
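For intuition, an orthogonal rotation of the top-$r$ singular vectors can be realized via the Cayley map, in the spirit of the rotational adapter and Lemma 4.1. This is a hedged sketch with our own names: a trainable $r\times r$ matrix is skew-symmetrized, and $R=(I-A)(I+A)^{-1}$ is then orthogonal, so $U_r R$ keeps the singular-vector columns orthonormal.

```python
import numpy as np

# Cayley-map sketch: skew-symmetrize a raw trainable r x r matrix, then
# R = (I - A)(I + A)^{-1} is orthogonal (I + A is always invertible for
# skew-symmetric A).
def cayley(A_raw):
    A = A_raw - A_raw.T                    # skew-symmetric part
    I = np.eye(A.shape[0])
    return (I - A) @ np.linalg.inv(I + A)

rng = np.random.default_rng(0)
n, r = 16, 4
U = np.linalg.svd(rng.standard_normal((n, n)))[0]
R = cayley(rng.standard_normal((r, r)))    # r^2 raw trainable parameters

U_rot = U[:, :r] @ R                       # rotate the top-r columns
assert np.allclose(R.T @ R, np.eye(r))         # R is orthogonal
assert np.allclose(U_rot.T @ U_rot, np.eye(r)) # columns stay orthonormal
```

The rotation touches only an $r\times r$ block, which is why the trainable parameter count is independent of the weight dimension $n$.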
---
Rebuttal 4:
Comment: We hope that our response has positively influenced your perception of our work. Please let us know if your queries have been addressed satisfactorily. If so, could you please kindly consider increasing the score? If you require further clarifications of any points, we are enthusiastic about engaging in further discussion. Please do not hesitate to contact us. We highly value the generous contribution of your time to review our paper. Thanks!
---
Rebuttal Comment 4.1:
Comment: Dear reviewer, since there are only two days left for author-reviewer discussion, we would like to confirm whether our responses have effectively addressed your concerns. We provided detailed responses to your concerns a few days ago, and we hope they have adequately addressed your issues. If you require further clarification on any of these points, please do not hesitate to contact us. Thanks! | Summary: In this paper, the authors proposed to modulate top-r singular vectors after performing SVD on the pretrained weights. Both theoretical analysis and experiments have shown that the proposed two types of spectral fine-tuning methods can improve the representation capacity of low-rank adapters.
Strengths: 1. The introduction of the proposed method is pretty clear, and the theoretical analysis and review of other PEFT methods help the readers quickly and comprehensively understand the core of the proposed method.
2. The advantages of the proposed method under low-rank conditions are very obvious (Figure 1 and Figure 8).
Weaknesses: 1. The proposed method has two versions, spectral adapter^A and spectral adapter^R. However, the application boundaries of these two methods are not clear: some experiments use adapter^A while others use adapter^R. It would be better to further discuss the difference or relationship between these two versions.
2. It's better to study the selection of columns of U and V. For example, bottom-r selection and random-r selection.
3. Can addition and rotation be combined?
Technical Quality: 3
Clarity: 2
Questions for Authors: My main concerns are listed in the weaknesses.
Additional question: As illustrated in the paper, the spectral adapters are initialized once at the beginning, by carrying out SVD on the pre-trained model weights and identifying the most significant components to fine-tune. However, analysis from GaLore (GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection) shows that the primary components could shift from one to another during training. So it would be better to examine the possibility of periodically re-parametrizing the spectral adapters back into the model weights, doing SVD again, and creating new spectral adapters to catch the new primary components during training. By doing this we might come up with a better result.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the careful reviews. Here are our answers to the questions.
$\textbf{Weaknesses 1.}$ The proposed method has two versions, spectral adapter-A and spectral adapter-R. However, the application boundaries of these two methods are not clear: some experiments use adapter-A while others use adapter-R. It would be better to further discuss the difference or relationship between these two versions.
$\textbf{Answer to Weaknesses 1.}$ Thanks for the very reasonable suggestion. Our language model experiments all use the additive spectral adapter. For diffusion models, Section 4.2 uses the additive spectral adapter and Section 4.3 the rotational one; the two sections study their different characteristics. In Section 4.2, we investigate the additive spectral adapter for the adapter fusion task: for different fine-tuning tasks, we propose tuning different spectral vector bases, which improves generation results after the adapters are merged. In Section 4.3, we explore the parameter efficiency of the rotational spectral adapter. Notably, its number of trainable parameters is only $\mathcal O(r^2)$ and can thus be very small for small $r.$ As far as we know, this is the only PEFT method whose trainable parameter budget scales only with $r$; prior models that focus solely on reducing trainable parameters, such as VeRA and LiDB (see Table 3), still have budgets that scale with the weight size. Remarkably, with only $r=2$, i.e., each weight matrix carrying only $2\cdot 2^2=8$ trainable parameters, fine-tuning already takes effect in some cases, e.g., Figure 15. No previous model achieves so small a trainable parameter budget.
Overall, we suggest the additive spectral adapter for general fine-tuning and adapter fusion tasks, and the rotational spectral adapter as a remedy under strict trainable parameter budgets. Thanks for pointing out this source of confusion; we will make the distinction clearer in our revision.
$\textbf{Weaknesses 2.}$ It's better to study the selection of columns of U and V. For example, bottom-r selection and random-r selection.
$\textbf{Answer to Weaknesses 2.}$ Thanks for the suggestion. We have included diffusion model results for bottom-$r$ selection (right plot of Figure 9, third columns of Figures 10 and 11), which behaves much worse than top-$r$ selection and the original LoRA model. We will consider moving some of these to the main text for better comparison.
Since the reviewer has raised this point, we have also run additional experiments with top-$r$, middle-$r$, and bottom-$r$ tuning; see the third panel of ``Section B in our attached one-page pdf``. This experiment fine-tunes the Llama 3 8B model (same as Figure 1) with $r=4$. For middle-$r$ tuning, we start at the $20$th column and tune the $4$ consecutive columns beginning there, i.e., the $20$th$\sim 24$th columns of $U$ and $V$. The results show that top-$r$ tuning is better than middle-$r$, which in turn is better than bottom-$r$. One future direction is to study random-$r$ selection techniques further and perhaps vary the choice of $r$ dynamically based on some criterion, which would be close in spirit to AdaLoRA [64].
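The selection variants above amount to slicing different column ranges of the singular vector matrices. A minimal sketch with our own names (the offset of 20 mirrors the middle-$r$ setting described above):

```python
import numpy as np

# Pick which r consecutive singular directions to make trainable,
# mirroring the top-/middle-/bottom-r comparison.
def select_spectral_columns(U, r, mode="top", offset=20):
    n = U.shape[1]
    start = {"top": 0, "middle": offset, "bottom": n - r}[mode]
    return U[:, start:start + r]

U = np.linalg.svd(np.random.randn(32, 32))[0]
assert select_spectral_columns(U, 4, "top").shape == (32, 4)
assert select_spectral_columns(U, 4, "middle").shape == (32, 4)
assert select_spectral_columns(U, 4, "bottom").shape == (32, 4)
```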
$\textbf{Weaknesses 3.}$ If we can combine addition and rotation?
$\textbf{Answer to Weaknesses 3.}$ Technically yes, but one would then need to store a set of heterogeneous fine-tuned adapters and distinguish them in the forward pass, since different merging methods must be adopted. This may add some complexity.
$\textbf{Questions.}$ as illustrated in the paper, the spectral adapters are initialized once at the beginning. This is done by carrying out SVD on pre-trained model weights, identifying the most significant components to finetune on. However, analysis from GaLoRE (GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection) shows that the primary components could shift from one to another during the training. So, it’s better to examine the possibility of re-parametrizing the spectral adapters back into model weights, doing SVD again, and creating new spectral adapters to catch new primary components periodically during the training. By doing this we might come up with a better result.
$\textbf{Answer to Questions.}$ We thank the reviewer for raising this question. In the GaLore paper, the authors propose to re-run SVD every $T$ iterations, where $T$ is a free hyperparameter (page 5, Section 4 of the GaLore paper); the reason is that they suspect the spectral distribution of the "gradient" shifts during training. Overall, GaLore deals with subspace projection of the gradient matrix rather than the weight matrix itself.
We note that a spectral shift is more plausible for gradient matrices than for weight matrices: gradients capture the fast descent directions, which are more likely to change during training, while weight matrices are more invariant due to their strong impact on modeling capacity. For example, when a vision model pretrained on human face pictures is tuned on animal pictures, there is likely some spectral shift in the weight matrices, but we suspect it is not large, since both kinds of pictures obey real-world physical regularities, from structured shapes to color consistency. On the other hand, if we want to be more careful about spectral shifts in the weight matrices during training, we can employ a method similar to GaLore and re-run SVD after some number of iterations, though the re-SVD procedure may bring some overhead.
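The periodic re-SVD idea can be sketched as follows (a hedged illustration with our own names, not the authors' implementation): every $T$ steps, merge the adapter back into the weights, re-run SVD, and re-open the possibly shifted top-$r$ subspace for tuning.

```python
import numpy as np

# Merge the adapter into the weights, then refresh the trainable
# top-r spectral factors and the frozen residual from a fresh SVD.
def refresh_spectral_adapter(W_merged, r):
    U, S, Vt = np.linalg.svd(W_merged)
    top = (U[:, :r], S[:r], Vt[:r])         # new trainable factors
    rest = (U[:, r:] * S[r:]) @ Vt[r:]      # frozen merged residual
    return top, rest

n, r = 16, 2
W = np.random.randn(n, n)                   # current merged weights
(Ur, Sr, Vtr), rest = refresh_spectral_adapter(W, r)

# Merging the refreshed top part with the residual recovers W exactly,
# so the refresh itself does not change the network's function.
assert np.allclose((Ur * Sr) @ Vtr + rest, W)
```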
Please let us know whether our replies address all your concerns. If so, could you please kindly consider increasing the score? If not, we are willing to address any other questions in more detail. Thanks!
---
Rebuttal Comment 1.1:
Comment: Dear reviewer, since there are only two days left for author-reviewer discussion, we would like to confirm whether our responses have effectively addressed your concerns. We provided detailed responses to your concerns a few days ago, and we hope they have adequately addressed your issues. If you require further clarification on any of these points, please do not hesitate to contact us. We highly value the generous contribution of your time to review our paper. Thanks! | Summary: The paper presents a new low rank adapter for large models. The idea is to apply the adapter in the SVD decomposition of a weight matrix. Two methods are proposed. First, train parameters that get added to top r columns of U and V matrices. Second, train parameters that rotate top r columns of U and V matrices. The resulting methods are shown to perform well on LLMs and diffusion models.
Strengths: I like the idea presented in the paper. It is intuitive and simple. A similar idea was presented recently in the PiSSA paper [1]. I think the experiments are sufficient and interesting. Moreover, compared to PiSSA, which directly tunes the top r eigenvectors, additively tuning them has the benefit of providing a clear intuition when merging different adapters. I especially enjoyed reading the discussion on adapter merging, as this is a topic I have been thinking about for a while now.
[1]: https://arxiv.org/abs/2404.02948
Weaknesses: Evaluation is a big problem in this paper.
- LLM evaluations are few, while diffusion evaluations seem to be mostly qualitative. I want to caveat this by saying that, while few, the GSM-8k experiments with Mistral-7B are good enough to convince me that the method works. However, a comprehensive evaluation would have been more convincing and could shed light on cases where the method fails.
- I wish the authors had focused more on LLM evaluations. I may be biased as I work on LLMs.
- Figure 8 of the supplementary (above line 586) is interesting and the authors should consider moving it to the main body of the paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Does training for more epochs change the results?
- Why did you only pick GSM8K?
- How do you accommodate that different methods may require different LRs to learn optimally?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations section is missing. One limitation that I can think of: is it possible for a model to overfit, since we are always tuning the most important eigenvectors? What if the fine-tuning dataset is structured so that only select non-top eigenvectors need to be updated? Datasets like GSM-8k restrict the model output to a narrow pattern that is significantly different from what the model outputs normally, so updating the top eigenvectors makes sense. However, what if the fine-tuning dataset is supposed to modify only a few things that the model has learnt? For instance, a LoRA that switches the model output from Joe Biden to whoever is the next US president. I believe that updating the top eigenvectors is actually a problem here and, as the authors note in the LoRA merging section (lines 210-214), there may need to be some eigenvector scheduling.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the careful review. Given the rebuttal character limit, we address $\textbf{most important questions}$ here, followed by some $\textbf{other comments later}$.
$\textbf{Weaknesses 1.}$ LLM evaluations are few while diffusion evaluations seem to be mostly qualitative.
$\textbf{Answer to Weaknesses 1.}$ Thanks for the suggestion. To consolidate our work further, we have provided quantitative measurements for diffusion model results in our attached one-page pdf, and we have also appended these new results in a revision of our current paper.
For LLM evaluations, we have evaluated on both the GLUE benchmark with a DeBERTa model (Table 1) and the GSM8K benchmark with Mistral 7B (Table 2) and Llama 3 (Figure 1). Since GLUE targets general language understanding and GSM8K targets math, we feel these cover common LLM reasoning tasks. Moreover, the DeBERTa model we tried is of size 86M, the Mistral model 7B, and Llama 3 8B, so we hope these cover both small and large models (given our resource constraints). We will consider exploring more tasks in our revision.
With respect to the quantitative evaluation for vision tasks, we follow the same metrics as papers [12] and [42], where the cosine similarity between CLIP vectors is used as the distance measure. Specifically, in ``Section A.1 in that pdf``, we provide quantitative results for our Figure 5. For the text alignment score, we compute with the following prompts corresponding to each column:
1) "two dogs and a cat in front of Mount Fuji",
2) "two dogs and a cat in a galaxy",
3) "two dogs and a cat on a playground".
For the image alignment score, since we generate multi-character images, we crop each generation vertically into three parts, each containing a single animal. We then compute the alignment score of each cropped part against reference images of the same animal. For column 1, FedAvg scores best, but its average is only 0.0005 higher than ours. For columns 2 and 3, our method achieves the highest average and is around 0.01 better than the other methods.
``Section A.2 of the attached pdf`` contains quantitative results for our Figure 6. We include both the trend curve of alignment score vs. parameter budget (left panel) and the exact scores (right panel; please expand to see the numbers). We also shade the region of trainable parameter budgets achievable only by our method. Notably, our method already generates sensible images with very few parameters ($\leq$ 20K), a budget no other method can reach. For larger parameter budgets, our method generates images of quality comparable to SOTA. This exemplifies the parameter efficiency of our method.
$\textbf{Questions 1.}$ Does training for more epochs change the results?
$\textbf{Answers to Question 1.}$ The plots in Figure 1 show training/testing results across training steps (though we only train for a single epoch). As can be observed, progress at the end of training is already quite slow for all methods. Theoretically, we believe the globally minimal training loss should be close for all methods, since $r=8$ is a small value and should not matter much, but reaching it empirically can take long.
$\textbf{Questions 2.}$ Why did you only pick GSM8K?
$\textbf{Answer to Question 2.}$ As explained in our answer to Weaknesses 1, we evaluated on both GLUE and GSM8K with various model sizes, which we hope captures general reasoning capability. These are all the metrics we have experimented with; we are not cherry-picking GSM8K. We prefer GSM8K over other specialized reasoning tasks since it consists of math problems and we are in STEM.
$\textbf{Question 3.}$ How do you accommodate that different methods may require different LRs to learn optimally?
$\textbf{Answer to Question 3.}$ We usually follow the lr setting in prior works, which we describe in each of the corresponding appendix sections. For example:
1. For Figure 1 results (GSM8K score), we follow the parameter setting in the QDoRA blog https://www.answer.ai/posts/2024-04-26-fsdp-qdora-llama3.html, where the same lr is used for LoRA, DoRA, and QDoRA. We use the same setting for our spectral adapter and all baseline methods, since we train on the same dataset and the same model. This experiment is not covered in any of the primary reports of our baseline methods.
2. For Table 1 results (GLUE score), we tune the lr for our spectral adapter. For the baselines, we follow the hyperparameter settings of LoRA and AdaLoRA in their original reports for the same benchmark; we do not cite the scores there since we are not tuning exactly the same NN components/models. We use the same hyperparameter setting as LoRA for DoRA (the DoRA paper has no evaluation on this benchmark), and for OFT we follow the setting used in BOFT, a variant of OFT, since the primary OFT report only includes vision tasks while BOFT compares to OFT on the GLUE benchmark.
3. For vision tasks for adapter fusion, we use the default parameter setting for both our spectral adapter and all baseline methods as in original mix-of-show repo (https://github.com/TencentARC/Mix-of-Show) since our code is adapted from this repo.
The reason we generally do not tune each method individually is that granular lr values such as the $2.2e-3$ used by AdaLoRA for GLUE (Table 8 in AdaLoRA's report) are hard to find and thus impractical. We have tried hard to keep our comparisons fair for all methods. The lrs we use for LLM tasks in the current paper range from $1e-3$ to $1e-5$, covering common lrs used for fine-tuning LLMs.
---
Rebuttal 2:
Comment: $\textbf{Strengths.}$ A similar idea was presented recently in the PiSSA paper.
$\textbf{Comments on Strengths.}$ Thanks for mentioning PiSSA, which is a very recent work concurrent with ours. That work explores a similar idea to our additive spectral adapter; we noticed it after completing our work. To distinguish our work from PiSSA and to explain the difference to other reviewers who may not be familiar with this line of work, we make several notes here. While PiSSA focuses only on LLM tasks and frames its method as a specific initialization of the classic LoRA model, we take a different view and consider the fine-tuning procedure from a spectral decomposition angle. Here are some of the main components we consider that are not involved in PiSSA:
1) PiSSA does not consider rotational fine-tuning and does not maintain the orthogonality of SVD.
2) PiSSA does not explore adapter fusion, i.e., merging fine-tuned models.
3) PiSSA has no vision or diffusion model experiments.
4) PiSSA does not contain theoretical analysis.
In contrast, we cover both additive and rotational SVD fine-tuning, adapter fusion by combining models fine-tuned via different spectral spaces, and we experiment with both LLM and generative diffusion models. Moreover, we have a theoretical analysis of adapter rank capacity (Lemma 3.1), weight subspace alignment (Section 3.2) and orthogonality via Cayley parameterization (Lemma 4.1). We believe there is still room for future research on NN weight spectral decomposition.
Additionally, our rotational spectral adapter pushes parameter efficiency to an extreme: in Figure 15, for example, we observe that our rotational adapter with $r=2$ already performs quite well, which means each weight matrix is attached with only $2*2^2=8$ trainable parameters. As far as we know, this is the first PEFT model whose trainable parameter budget scales with $r^2$, where $r$ is the rank, and is thus fully independent of the weight dimension. Prior models such as VeRA and LiDB also focus on reducing the trainable parameter budget, but their number of parameters still scales with the original weight dimensions.
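To illustrate the orthogonality mechanism mentioned above, here is a minimal sketch (our own illustration, not the authors' code; the names and the $r=2$ shape are assumptions) of how a Cayley map turns an unconstrained parameter matrix into an orthogonal factor, so that gradient updates preserve orthogonality by construction:

```python
import numpy as np

# Hypothetical sketch of Cayley parameterization (illustrative only):
# map an unconstrained r x r matrix to an orthogonal matrix via
# Q = (I - A)(I + A)^{-1}, where A is skew-symmetric.
r = 2
raw = np.arange(r * r, dtype=float).reshape(r, r)  # trainable parameters
A = raw - raw.T                                    # skew-symmetrize: A^T = -A
I = np.eye(r)
Q = (I - A) @ np.linalg.inv(I + A)                 # Cayley map

# Q is orthogonal regardless of the values in `raw`
assert np.allclose(Q @ Q.T, I)
```

Because orthogonality holds for any value of `raw`, plain gradient descent on `raw` keeps the learned rotation valid throughout training, which matches the role Cayley parameterization plays in maintaining SVD orthogonality.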
$\textbf{Weaknesses 2.}$ I wish the authors had focused more on LLM evaluations. I may be biased as I work on LLMs.
$\textbf{Answer to Weakness 2.}$ We will consider adding more LLM tasks in our revision. We also feel LLM tasks are more convincing since their quantitative evaluation metrics are better established.
$\textbf{Weaknesses 3.}$ Figure 8 of the supplementary (above line 586) is interesting and the authors should consider moving it to the main body of the paper.
$\textbf{Answer to Weakness 3.}$ Thanks for the suggestion; we will consider moving them to the main text in our revision. We have also provided newly generated test scores for the Figure 8 experiments; see the left and middle panels in ``Section B of our attached one-page pdf`` for the results. They show that increasing the rank can close the gaps between different methods.
---
Rebuttal 3:
Comment: $\textbf{Limitations.}$ 1. The limitations section is missing. One limitation that I can think of: is it possible for a model to overfit since we are always tuning the most important eigenvectors? What if the fine-tuning dataset is structured so that only select non-top eigenvectors need to be updated? Datasets like GSM-8k restrict the model output to a narrow pattern that is significantly different from what the model outputs normally. Hence, updating the top eigenvectors makes sense. However, what if the fine-tuning dataset is supposed to modify only a few things that the model has learnt? For instance, there's a LoRA that switches the model output from Joe Biden to whoever is the next US president. I believe that updating the top eigenvectors is actually a problem here, and, as the authors note in the LoRA merging section (lines 210-214), there may need to be some eigenvector scheduling.
$\textbf{Answer to Limitations.}$ We apologize for the missing limitations section; we will add a discussion of limitations in our revision. We feel the main potential limitation of the proposed method is the overhead of computing the SVD of weight matrices. However, for most modern networks (Stable Diffusion, Llama 3, and Mistral), this is tractable since the dimensions of the layer weights are moderate. See Figure 7, which shows that the SVD time/memory overhead is negligible. For certain other models with larger weights, our method may require more resources.
Regarding the reviewer's concerns, we would first like to thank the reviewer for their careful consideration and detailed reading. We have in fact thought about similar ideas around using non-principal spectral spaces before, and we would like to share some of our thoughts on this.
First, if a fine-tuning task requires modification of non-top eigenvectors and the exact subspace that needs fine-tuning is known a priori, we can apply the spectral adapter to that particular subspace. In Figures 10 and 11 of Appendix F.5, we illustrate fine-tuning the bottom-$r$ eigenvectors for diffusion models, and we have added results for fine-tuning starting at the $20$th column of $U$ and $V$ for Llama3 in the right panel of ``Section B of our attached one-page pdf``. We also experiment with fine-tuning different singular vectors in our adapter fusion experiments (Section 4.2), where we assign different objects to different spectral spaces (e.g., singular vector indices 1 to 8 for object A and indices 9 to 16 for object B). This prevents the learned adapters from overlapping in the weight space and improves the visual quality (Figures 5, 12 and 13). We agree with the reviewer that some fine-tuning tasks may require modifying certain non-top eigenvectors, and knowing exactly which eigenvectors to tune is usually hard. A simple approach would be to run spectral adapter fine-tuning over different bands of singular vectors, evaluate each on the validation loss, and then dynamically choose the best basis for fine-tuning. This can be done once at the beginning, or also after a fixed number of training iterations to accommodate any spectral shift that may happen.
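The band idea above can be sketched as follows (a minimal illustration under our own assumptions — matrix sizes, names, and the band choice are hypothetical, not the authors' implementation): take the SVD of a weight matrix, carve out a chosen band of singular vectors as the trainable component, and freeze the remainder:

```python
import numpy as np

# Illustrative sketch: attach a "spectral adapter" to a chosen band of
# singular vectors of a weight W; only that band's factors would be
# trained, while the residual stays frozen.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 32))

U, S, Vt = np.linalg.svd(W, full_matrices=False)

start, r = 0, 4                  # band [start, start + r); top-r when start = 0
band = slice(start, start + r)

# Frozen residual: W with the selected band's contribution removed
W_frozen = W - U[:, band] * S[band] @ Vt[band, :]

# Trainable factors, initialized from the band (updated by SGD in practice)
U_t, S_t, Vt_t = U[:, band].copy(), S[band].copy(), Vt[band, :].copy()

# At initialization the adapted weight reproduces W exactly
W_adapted = W_frozen + U_t * S_t @ Vt_t
assert np.allclose(W_adapted, W)
```

One would then repeat this for several candidate values of `start`, fine-tune briefly, and keep the band with the lowest validation loss.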
Second, while the training loss of fine-tuned models can be comparable across different adapters, their robustness levels can differ. Capacity-wise, if a large low-dimensional change $\Delta W$ of weight $W$ is required during fine-tuning, then both a LoRA component and our spectral adapter's component can be dedicated to modeling $\Delta W$. Thus, these methods would have similar training loss landscapes.
We feel that the difference between these adapter models lies more in their robustness and the stability of their optimization procedure. As shown in our analysis in Section 3.2, our spectral adapter can be more robust against optimization error in certain cases.
Regarding the connection to eigenvector scheduling, we propose to schedule different tuning tasks over different spectral spaces so as to better utilize their orthogonality. We believe that modifications happening within different orthogonal bases affect each other less; this is also the underlying logic of [42]. Consider an extreme case where task 1 modifies exactly the $5$-th column of $U$ and $V$, and task 2 modifies exactly the $10$-th column of $U$ and $V$. In this case, we agree that it would be ideal if one could recognize a priori which indices need to be tuned, though this is nontrivial. Our hope is that, with rank $r=1$ for example, tuning the top component would capture a bit of task 1 (the top part may change from $U_1S_1V_1^T$ to some combination of $U_1S_1V_1^T$ and $U_5S_5V_5^T$), and tuning the second component would capture a bit of task 2. These changes would happen more independently with respect to each eigenbasis, so the integrity of each task is better preserved.
---
Rebuttal 4:
Comment: Please let us know whether our replies address all your concerns. If so, could you please kindly consider increasing the score? If not, we are willing to address any other questions in more detail. Thanks!
---
Rebuttal 5:
Comment: Dear reviewer, given that this is the last day of the author-reviewer discussion period, we would like to check one final time whether our above responses have addressed your prior concerns satisfactorily. We hope that, given the newly incorporated quantitative scores for our diffusion model experiments, which demonstrate the improvement of our method, and the different models/benchmarks for LLM tasks we have already considered, your prior concerns regarding evaluation have been alleviated.
If our responses effectively tackled all your questions, could you please kindly consider raising your score? If any point requires further clarification, please let us know and we will be here to address any remaining questions until the final second of the discussion period. Thanks again for spending time reviewing our work! We highly value the generous contribution of your time to review our paper. | Rebuttal 1:
Rebuttal: Since different reviewers have asked similar questions, and there are some points we want to make clear to all reviewers, we summarize several important points here. We also reply individually to each reviewer regarding their specific concerns in more detail. We have highlighted the newly added content in the one-page pdf attached below; all Sections/Figures mentioned in our reply that are not highlighted can be found in our original paper.
$\textbf{Quantitative evaluations for diffusion model results:}$ as suggested by both reviewer 7N9o and reviewer Nthn, we have added quantitative evaluations for the diffusion model experiments in ``Section A of our attached one-page pdf``. We follow the evaluation metrics in papers [12] and [42] to compute the cosine similarity between CLIP vectors. From the results it can be observed that our method achieves the best overall average (Section A.1) and generates good results with a small parameter budget (Section A.2; we shade the region of parameter budgets achievable only by our method).
$\textbf{Tuning non-top spectral space:}$ as pointed out by both reviewer Nthn and reviewer N6x8, there might be scenarios in which only some non-top spectral space needs fine-tuning, and there might also be spectral distribution shift during fine-tuning. We note first that, if one knows a priori which part of the spectral space needs fine-tuning, one can directly apply the proposed spectral adapter method to it. We provide results for tuning the bottom-$r$ singular vectors in Figures 9, 10, and 11. We have also provided additional results for fine-tuning Llama3 8B starting at the $20$th column of the $U,V$ matrices in ``Section B (right most panel) of our attached one-page pdf``, from which it can be observed that tuning the top spectral component performs better. Moreover, in Section 4.2, we study adapter fusion by distributing each tuning task to a different spectral space. While knowing a priori which indices of the singular vector matrix need fine-tuning is usually hard, a more advanced scheduling strategy can be adopted. For example, one can run spectral adapter fine-tuning over different bands of singular vectors, evaluate each on the validation loss, and then dynamically choose the best basis for fine-tuning. This can be done once at the beginning, or also after a fixed number of training iterations. To deal with spectral shift during training, re-computing the SVD after a fixed number of iterations can help.
Pdf: /pdf/c1c309169ea52b99f81cf2a43e4fe889d590cd65.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
No Filter: Cultural and Socioeconomic Diversity in Contrastive Vision-Language Models | Accept (poster) | Summary: This paper discusses the impact of the common practice of filtering for English-only data on training vision-language models, showing that this negatively impacts performance on tasks covering diverse cultural regions and backgrounds. The work proposes foregoing this filtering step to improve this representation, as well as potentially using a short English-only fine-tuning stage to achieve a more acceptable balance between standard task performance and diversity.
Strengths: Bias in vision-language models, and generally in machine learning, is an important problem. The proposed mitigation method (training on global data without English-only filtering) makes sense and the results seem to support this practice.
Weaknesses: The contribution of the paper seems limited, as it is unsurprising that training on global data improves performance on benchmarks measuring global data understanding. The findings are not contextualized with prior works on the effect of training CLIP-style models on large, weakly-curated datasets or for multilingual understanding [1-3]. Results are only shown for one particular type of dual-encoder VLM (SigLIP). There is also no consideration of the effect of limited resources for training, which may make training on a larger less-curated dataset less feasible and use more computational resources, which may disproportionately impact global research communities, calling into question the paper’s categorical call to stop filtering such datasets for English-language data (L161).
[1] Carlsson et al. Cross-lingual and multilingual CLIP. LREC 2022
[2] Chen et al. mclip: Multilingual clip via cross-lingual transfer. ACL 2023
[3] Cherti et al. Reproducible scaling laws for contrastive language-image learning. CVPR 2023
Technical Quality: 2
Clarity: 3
Questions for Authors: Why is only SigLIP tested? Even restricted to CLIP-style models, it seems like the same methodology could be applied to any vision-language dual-encoder model, and to initialization schemes other than the random initialization used (L577) which presumably could have a significant effect on global representation.
Can you elaborate on how filtering hurts performance for “low-income households” (L50)? The datasets used (L98 on) are explicitly chosen to cover diverse geographical regions, which includes developing countries, but it is not clear whether socioeconomic status has a clear effect on results when controlling for geographic region and/or language. This also relates to the paper title which explicitly mentions socioeconomic diversity, though the paper seems to demonstrate the effect of geographic diversity specifically.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 1
Limitations: Overall the presentation of limitations is thoughtful and fairly exhaustive. There lacks a discussion of resource and compute limitations, which affect the ability to train on larger, less-filtered datasets.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the careful and insightful feedback. We especially appreciate their positive assessment of the importance and presentation of our work. We provide our answers to the reviewer’s raised questions and concerns below, are looking forward to their response, and hope for a favorable reassessment of our scores.
**Q1: Shouldn’t it be expected that the *globe* models outperform the *en* model on benchmarks measuring global understanding?**
While we agree that this might seem intuitive at first sight, it is not necessarily true. There are English websites published in most countries and pictures of landmarks are frequently captured by tourists, so it is possible that the English subset of the web is sufficient to capture cultural diversity. Our contribution is to provide a comprehensive evaluation of cultural diversity in SOTA contrastive VLMs, and we hope that these results suffice to clear any lingering doubts about the necessity of training on global data when building foundation models. Please refer to our global response for more detailed information.
**Q2: Are the findings related to prior work on multilingual understanding?**
We thank the reviewer for the additional pointers to prior work on multilingual understanding and will make sure to include these references in the revised version of our paper. In the context of this work, however, we purposely disentangled multilinguality from cultural diversity because we believe these measure different aspects. As noted in Section 5, we do, however, see the investigation of their intersection as an intriguing subject for future research and are looking forward to new insights in that area!
**Q3: Do the findings hold for other contrastive VLMs beyond SigLIP (e.g., CLIP)?**
SigLIP and CLIP operate on the same principle of aligning representations/embeddings for texts and images and the difference is only in the choice of the loss function. We expect the same results to hold for CLIP as well. The reason we use SigLIP is because of its superior performance and widespread use (>1M downloads/month in Huggingface).
**Q4: How does additional, quality-based data filtering affect the findings presented in this work?**
Thank you for raising this question. We have conducted new experiments to verify this. Please refer to the global response for a summary of our findings. Generally, we observe that the same conclusions hold even after applying quality filters based on image-text similarity. We will add more details about this in the revised version of the paper.
**Q5: Are the findings limited to geographic diversity?**
While geographic and socioeconomic diversity are inherently linked due to different income levels and economic circumstances in countries and regions around the world, Dollar Street does contain data from different income groups for every region. Kindly refer to Figure 3 (left), in which we specifically address these differences.
**Q6: Does the proposed removal of the English-only data filter increase computational cost?**
We believe our findings are orthogonal to having additional, quality-based data filters that can be applied in practice to limit computational cost incurred during training. In particular, while we do not challenge the use of quality filters, our work warns against using those filters that are based on English-only datasets or favor English-only captions. Please refer to the global response for a more detailed response.
---
Rebuttal Comment 1.1:
Title: Follow up
Comment: Dear reviewer,
Thank you again for the detailed and constructive comments.
As the discussion period is about to end, we would like to ensure we've addressed all of your questions and concerns. If you feel we have satisfactorily responded, please let us know. Otherwise, please let us know your remaining concerns so we can address them before the discussion period closes.
Thank you
---
Rebuttal Comment 1.2:
Comment: Thank you for your thoughtful response. My concerns are generally addressed, particularly the point that all experiments are compute-matched. I would recommend highlighting this in the revised version and acknowledging that underrepresented communities may have limited compute resources. I am still not fully convinced that the result is unexpected (it seems natural that landmarks are more captured in global data) but the findings seem overall valuable to the community and I have adjusted my rating accordingly.
---
Reply to Comment 1.2.1:
Comment: Dear reviewer,
Thank you for engaging with us and for adjusting your rating. We appreciate the detailed and constructive comments, and we are happy that we have addressed your concerns. We will make sure to highlight these points in future revisions of our work.
Sincerely | Summary: The paper shows that vision-language models trained solely on English filtered data displays a bias towards western-centric benchmarks. They then present a simple solution for training models that perform well on globally diverse datasets while not sacrificing performance on gold standard datasets such as ImageNet. They accomplish this by training on the unfiltered dataset then fine-tuning on the relevant subset for the downstream task.
Strengths: **Originality:**
The paper presents a relevant and unaddressed problem of cultural bias. The remedy of only filtering for english data after pretraining is simple and effective.
**Quality:**
The experiments are thorough and properly support the claims of the paper.
**Significance:**
The paper could impact how researchers train foundational models and help to mitigate cultural bias in future models.
**Clarity:**
The paper is reasonably clear to understand. There are small issues, such as with Figure 1, but nothing glaring.
Weaknesses: Overall the weaknesses are minor.
- One potential confounding factor is CLIP-filtering which most text-image models use to improve accuracy. It’s unclear how this approach would interact with this and other filtering techniques. For example, it’s known that CLIP models [1] tend to filter out non-english captions.
- One other potential weakness is that the technique could be computationally expensive since CLIP needs to be retrained on a potentially large English subset.
Technical Quality: 3
Clarity: 2
Questions for Authors: Minor Comments:
- It would be useful to add the size of WebLI for comparison to LAION since most CLIP-like models have been trained on LAION.
- I think the commas in the paper have been messed up for the numbers.
I think this statement in the contributions is a bit strong and overclaiming. The logical leap from zero-shot classification on Dollar Street to harming low socioeconomic groups is a bit far:
"filtering the training data to English image–text pairs negatively impacts cultural diversity and disproportionately hurts communities of lower socioeconomic status, exacerbating existing disparities. Its impact is demonstrably captured by zero-shot classification accuracy on Dollar Street."
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Limitations are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the careful and insightful feedback. We especially appreciate their positive assessment of the originality, quality, significance, and clarity of our work. We hope that our rebuttal below addresses all of the reviewer’s questions, are happy to provide more details, and look forward to the reviewer’s response to our rebuttal.
**Q1: How does additional, quality-based data filtering affect the findings presented in this work?**
Thank you for raising this question. We have conducted new experiments to verify this. Please refer to the global response for a summary of our findings. Generally, we observe that the same conclusions hold even after applying quality filters based on image-text similarity. We will add more details about this in the revised version of the paper.
**Q2: Does the proposed removal of the English-only data filter increase computational cost?**
We believe our findings are orthogonal to having additional, quality-based data filters that can be applied in practice to limit computational cost incurred during training. In particular, while we do not challenge the use of quality filters, our work warns against using filters that are based on English-only datasets or favor English-only captions. Please refer to the global response for a more detailed response.
**Q3: Size of WebLI**
As is detailed in Section 2 of our paper, all models are trained on 10B image–text pairs (roughly 610k training steps). Hence, all models are compared on a *compute-matched* setup: *en* models are trained for ~3 epochs, while *globe* models are trained for a single epoch. Please refer to the global response for more details.
**Q4: What evidence supports the claim that English-only data filtering exacerbates existing disparities?**
We show examples of this in Figure 1.c and Figure 3 (left). For instance, accuracy on Dollar Street for low-income groups ($0-200) drops from 35% using *globe-tl* to less than 29.9% when trained on English-only data. Similarly, accuracy on examples from the African continent drops from 38.4% using *globe-tl* to less than 35.8% using English-only data. Pretraining on English-only data therefore exacerbates existing disparities: there is already a large gap in performance between low-income and high-income regions (as shown in the figures), and training on English-only data widens those gaps.
We also thank the reviewer for the comment regarding the commas and will make sure to carefully check this for future revisions of our paper.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response. My concerns have been addressed in the rebuttal. I am maintaining my rating of weak accept. My justification is that the paper provides some good empirical insights on how to handle english and non-english data for large-scale VLM training. I think the scope and generality of the findings are relatively small which is why my score isn't higher, but I like the paper and think it could be useful for other researchers.
---
Reply to Comment 1.1.1:
Title: Follow up
Comment: Dear reviewer,
Thank you for engaging with us. We appreciate the detailed and constructive comments, and we are happy that we have addressed your concerns.
Sincerely | Summary: For the field of training contrastively learned-based VLM, this paper firstly displays that training from English data would lead to worse cultural diversity in zero-classification evaluation. By discarding the influence of languages in evaluation, this paper proposed a geo-localization task, which could observe a trained model underperforms globe and globe-tl data. Besides, the paper also explores the possible strategies for achieving a better trade-off between standard performance and cultural diversity. According to the observations, the authors found that pre-train on the globe and fine-tuning in English is the optimal strategy with less cost.
Strengths: - The paper points out that cultural diversity is important but neglected in standard benchmarks.
- The paper discussed the practice of development strategies in pretrain and fine-tuning the constrastive VLM for the trade-off between standard performance and cultural diversity, which makes the practice benefit from this research.
Weaknesses: - The writing is not clear enough. The structure and relation between different parts of this paper is not clear.
- The necessity of the proposed geo-localization task remains unclear in this paper. We don’t know the necessity of geo-localization compared with prior metrics that could reflect the cultural diversity of constrastive VLM.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Could you please restate the definition of the concept of Socioeconomic Diversity for me? Additionally, how can it be measured? What level of diversity do we expect the model to achieve?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the careful and insightful feedback. We provide our answers to the reviewer’s raised questions and concerns below, are looking forward to their response, and hope for a favorable reassessment of our scores.
**Q1: How is the paper structured?**
We are sorry to hear that the reviewer found the structure of our paper to be unclear. We motivate and present a brief overview of our work in Section 1, before moving on to discussing the chosen model architecture as well as training and evaluation datasets in Section 2. We also briefly summarize our key findings at the end of Section 2 and explain them in more detail in Section 3. We conclude with a discussion of related work, limitations and future work. If the reviewer has any specific suggestions for improving clarity, we would be happy to hear them and incorporate them in the revised version of the paper.
**Q2: How does the proposed geo-localization task compare to alternative metrics of cultural diversity in contrastive VLMs?**
As can be seen in Figure 5 (right), the proposed geo-localization task is well correlated with other metrics of cultural diversity in contrastive VLMs (such as zero-shot classification on Dollar Street, GLDv2, GeoDE and MaRVL). One important distinction, however, is that the few-shot geo-localization task only evaluates the learned image embeddings while discarding the text tower, which can provide insightful additional information. In addition, it provides a very strong signal as shown in Table 2, where differences in accuracy can reach up to 10%.
**Q3: How is socioeconomic diversity defined and measured in the context of this work?**
As noted in Section 5, we do not claim to offer a precise definition of cultural or socioeconomic diversity in the context of VLMs. Instead, we use model performance across different countries, regions, and income groups as a useful proxy for these concepts. We acknowledge in Section 5 (limitations) that this may not cover all aspects of cultural and socioeconomic differences.
---
Rebuttal Comment 1.1:
Title: Follow-up
Comment: Dear reviewer,
Thank you again for the detailed and constructive comments.
As the discussion period is about to end, we would like to ensure we've addressed all of your questions and concerns. If you feel we have satisfactorily responded, please let us know. If you have any further questions, we are happy to address them before the discussion period closes.
Thank you
---
Rebuttal Comment 1.2:
Title: Official Comment by Reviewer mxHk
Comment: Thanks for the response which addressed most my concerns. I would keep my rating. | Summary: The paper compares performance of VLMs trained on just English data, multilingual data and the English translations of the multilingual data, on a range of benchmarks including Western-centric ones as well tasks that involve geographically diverse inputs. The paper finds that (i) training on just English data negatively impacts cultural diversity, (ii) existing multilingual evaluations may not work as well as intended due to their images still being Western-centric, (iii) it is possible to improve cultural understanding without sacrificing performance on the popular Western-centric benchmarks
Strengths: 1. The evaluations are done throughly, and a new test is proposed, which addresses limitations of existing evaluations (e.g. XM3600 images are still Western-centric).
2. Despite some concerns about the setup (see Weaknesses below), I think the paper raises an important message and some of the findings probably would hold for other training setups
3. The paper proposes a new training pipeline to improve cultural understanding without impacting performance on Western benchmarks (ImageNet). This probably has practical value for a lot of ML practitioners.
Weaknesses: 1. In the "Summary of Findings" section, the claims should be made more specific and grounded in evidence, e.g.
(i) "exacerbating existing disparities" --> the authors should show how much the performance *gap* between best and worst groups increase by training on **en** versus **globe**,
(ii) instead of saying "world knowledgeable", quote specific numbers and findings.
2. In Figure 1 (and other parts of the paper), the authors claim that models trained on only English data do poorly on geographically diverse benchmarks, e.g. by confusing landmarks with similar ones in Western countries. However, this result is expected given that the **en** data is a subset of **globe**, and the landmarks test may be in-distribution for models trained on **globe** and out-of-distribution for those trained on **en**. I think the paper's claim could be made stronger by better controlling for these confounding factors (e.g. by removing test-set knowledge contamination).
3. Besides, I find the dataset construction used in the experiments unrealistic: the authors state that "globe denotes the raw, multilingual data with minimal filtering applied" - many VLMs are trained on highly filtered multilingual datasets (e.g. LAION)
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Since **en** is a subset of **globe** (and thus presumably contains fewer training samples), are all models just trained until convergence? How are the different training set sizes controlled for in the experiments?
2. In Conclusion, the authors claim that "there seems to be a trade-off between optimized performance on standard benchmarks and maintaining cultural diversity" - I thought the paper already shows in Section 3.4 that this trade-off could be avoided?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The authors have sufficiently addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the careful and insightful feedback. We especially appreciate their positive assessment of the thoroughness, importance and practicality of our work. We hope that our rebuttal below addresses all of the reviewer’s questions; we are happy to provide more details and look forward to the reviewer’s response to our rebuttal.
**Q1: How much does training on en instead of globe increase the performance gap between groups of high and low socioeconomic status?**
We show examples of this in Figure 1.c and Figure 3 (left). For instance, zero-shot classification accuracy on Dollar Street for low-income groups ($0-200) drops from 35% with *globe-tl* to less than 29.9% with *en*. Similarly, accuracy on examples from the African continent drops from 38.4% using *globe-tl* to less than 35.8% using *en*. Thus, pretraining on English-only data exacerbates existing disparities: there is already a large gap in performance between low-income and high-income groups (as shown in the figures), and training on English-only data widens these gaps.
**Q2: How does additional, quality-based data filtering affect the findings presented in this work?**
Thank you for raising this question. We have conducted new experiments to verify this. Please refer to the global response for a summary of our findings. Generally, we observe that the same conclusions hold even after applying quality filters based on image-text similarity. We will add more details about this in the revised version of the paper.
**Q3: Shouldn’t it be expected that the globe models outperform the en model on benchmarks measuring global understanding?**
Not necessarily. Please refer to the global response for our answer to this question.
**Q4: How and for how long are the models trained?**
As is detailed in Section 2 of our paper, all models are trained on 10B image-text pairs (roughly 610k training steps). Hence, all models are compared on a *compute-matched* setup: *en* models are trained for ~3 epochs, while *globe* models are trained for a single epoch. Please refer to the global response for more details.
**Q5: Can the trade-off between good performance on standard benchmarks and maintaining cultural diversity be avoided?**
While pre-training on global data and fine-tuning on English data makes it possible to balance performance on standard and culturally diverse evaluation metrics, neither fine-tuning nor data mixing allows for complete avoidance of the tradeoff between these metrics (as can be seen in Figure 5 (left)). It is however possible that other approaches (such as different training paradigms or model weight merging) might lead to improved tradeoffs and perhaps eventually eliminate the tradeoff entirely. We are looking forward to future work in this area!
---
Rebuttal Comment 1.1:
Title: Follow-up
Comment: Dear reviewer,
Thank you again for the detailed and constructive comments.
As the discussion period is about to end, we would like to ensure we've addressed all of your questions and concerns. If you feel we have satisfactorily responded, please let us know. If you have any further questions, we are happy to address them before the discussion period closes.
Thank you | Rebuttal 1:
Rebuttal: We would like to thank all reviewers for the detailed and constructive questions and comments. We especially appreciate the positive feedback on the thoroughness of our experiments, significance of findings, and clarity of work. Below, we answer the concerns shared by multiple reviewers. We hope that these, in addition to our responses to individual reviewers, help clarify any open questions.
**Q1: How does additional, quality-based data filtering affect the findings presented in this work?**
This is an excellent point and we plan to include this in the revised version of the paper. We have pretrained two additional SigLIP models with quality-based filtering on 1B image–text pairs each: *en* and *globe-tl*. We evaluate these on Dollar Street, GeoDE, ImageNet and COCO. We observe that the same qualitative conclusions continue to hold even after applying a quality filter based on image-text similarity. For instance, *globe-tl* performs better than *en* on 0-shot Dollar Street, GLDv2, and GeoDE but performs worse on Western-oriented benchmarks, such as ImageNet and COCO retrieval. In addition, the improvement in geolocalization is particularly significant.
| Task | en | globe-tl |
| ---- | -- | -------- |
|Dollar Street | 46.1% | 48.3% |
|GLDv2 | 20.7% | 28.2% |
|GeoDE | 90.5% | 90.5% |
|0-shot ImageNet | 67.0% | 66.3% |
|COCO I2T R@1 | 56.7% | 52.8% |
|COCO T2I R@1 | 37.3% | 34.5% |
|Dollar Street 10-shot | 9.4% | 9.8% |
|GeoDE 10-shot (country) | 10.1% | 14.7% |
|GeoDE 10-shot (region) | 22.0% | 28.2% |
We plan to additionally train all three variants *en*, *globe*, and *globe-tl* for 10B quality-filtered image–text pairs as was done in the paper, and include the results in the supplementary material of the paper. But as shown above, the same conclusions hold even after applying quality filters.
**Q2: Shouldn’t it be expected that the globe models outperform the en model on benchmarks measuring global understanding?**
We thank the reviewers who raised this important question. While this might seem intuitive at first sight, it is not necessarily true. There are English websites published in most countries and pictures of landmarks are frequently captured by tourists, so it is possible that the English subset of the web is sufficient to capture global diversity. In addition, we have found that certain seemingly global datasets (such as XM3600) seem to fail in capturing cultural nuances. Hence, a key contribution of our work is identifying four zero-shot classification datasets actually capturing global understanding, as well as introducing the few-shot geo-localization task. Given that this is, as far as we know, the first comprehensive evaluation of cultural diversity in SOTA contrastive VLMs, we hope that these results suffice to clear any lingering doubts about the necessity of training on global data when building foundation models.
**Q3: How and for how long are the models trained?**
As is detailed in Section 2 of our paper, all models are trained on 10B image–text pairs (roughly 610k training steps). Hence, all models are compared on a *compute-matched* setup: *en* models are trained for ~3 epochs, while *globe* models are trained for a single epoch. We have observed that performance gaps between *en* and *globe* models continue to persist even for significantly longer training durations. We will add more details about this last point in the revised version of the paper.
**Q4: Does the proposed removal of the English-only data filter increase computational cost?**
All models presented in our paper incur the same computational cost because they are trained on the same total number of examples (albeit different numbers of unique examples). As can be seen in our response to Q1, our findings are orthogonal to having additional, quality-based data filters that are often applied in practice to limit the computational cost incurred during training. In particular, while we do not challenge the use of quality filters, our work warns against using filters that are based on English-only datasets or favor English-only captions.
We are grateful again for the reviewers' detailed and constructive feedback. If we have satisfactorily answered your questions, we hope you would consider revising your scores. Otherwise, please let us know if there are any other questions or concerns, so we can respond to them during the discussion period. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
AGILE: A Novel Reinforcement Learning Framework of LLM Agents | Accept (poster) | Summary: This paper jointly introduces an evaluation environment for LLM agents and an LLM agent architecture called AGILE.
The environment is a question-answering environment concerning the characteristics of commercial products sourced from Amazon. The benchmark itself is composed of 3 tasks: 1) a fact retrieval task where the agent should retrieve information about a product, 2) a product query task where the agent should retrieve a product that matches query criteria, and 3) a reasoning task where the agent should reason from the knowledge it has about a product to answer questions about possible uses or suitability of a product.
The agent is composed of an LLM backbone that, under the control of an executive module, can retrieve memories of previous interactions stored in its database and use tools. Tools are used for agentic flow (reflect, submit answer), for memory management (retrieve memory, put new memory in database), and for asking advice from a human expert (access to ground truth about a product). The LLM is fine-tuned with SFT on these tasks. The agent is further fine-tuned on a scored version of the tasks with reinforcement learning (PPO).
The authors demonstrate the superiority of their architecture on their proposed environment and on a medical QA benchmark over one-shot prompting of GPT-4, and demonstrate that reflection, tool use, memory, ground truth product advice, and reinforcement learning contribute positively, to varying degrees, to agent performance.
Strengths: * This paper combines known LLM agent modules and studies their impact in two question-answering domains that are of interest to industry practitioners: product recommendation and medical knowledge;
* Agents asking for help has been studied at length in the embodied question answering literature with previous-generation agents (see for instance https://arxiv.org/abs/1909.01871 or https://arxiv.org/abs/2302.04865), but as far as I am aware has not been studied with LLM-based agents;
* The paper is fairly clear and easy to read, the proposed contributions are easy to understand, examples are provided for the task, and the results are comprehensive;
* The claims of the paper are well substantiated with extensive experiments, and the ablations make it easy to disentangle the effect of different components of the agent;
* The data collection process for creating the benchmark seems sound and the task looks challenging and interesting;
* Reinforcement-Learning studies of complete LLM agents are rare and needed.
* The tables in the related work make the relationship between this work and previous art clearer.
Weaknesses: * An overarching and pervasive weakness of this paper that is difficult to ignore and that has a great influence on my judgement is the claim of novelty of the agent architecture. In a post-Voyager (https://arxiv.org/abs/2305.16291) world, LLM-based agents equipped with tool use and retrieval are standard and the AGILE agent presented in this work introduces little conceptual novelty in the agent architecture, while even the title of the paper claims that the agent is novel. LLM agents of this type are so much part of the landscape that many reviews exist that organize existing approaches (see for instance the excellent https://arxiv.org/abs/2309.02427 that is not that recent either). The approach is standard enough that advanced software stacks (langchain, llamaindex) are devoted to speeding up building retrieval-augmented generation (RAG)-based agents for industry practitioners. The paper framing should be updated accordingly to reflect that the approach being studied is standard, but applied to a new task.
* Relatedly, it would have been important for the authors to exactly pinpoint the difference between their agent and for instance Voyager, and either implement baselines from existing work on their task or test their agent on other tasks (eg WebShop which is very similar in spirit).
* There are still novel elements in the proposed approach. The first is the use of reinforcement learning. While the use of reinforcement learning to improve language agents is not novel (see https://arxiv.org/abs/2302.02662 for a textworld task or https://arxiv.org/abs/2403.04642 for a math word task), the use of RL in RAG agents has not been studied as far as I am aware. It also makes up a significant portion of the method section and the appendix; however, the experiments demonstrate only a marginal improvement due to RL. Why is this the case? What do the training curves look like? Has the training converged? I also do not see any standard deviation information for the RL part. The authors justify the lack of error bars with the computational cost of experiments, which is fair and completely acceptable where the comparison between methods is straightforward, but error bars would have been helpful in the SFT/PPO comparison, where the difference is small and could have been due to chance; it also makes sense to concentrate computational effort on the truly novel part of the contribution;
* The other novel element of this paper is the asking for advice part: the model can opt out of the task by seeing the ground truth which it can then report, at a cost (for the RL agent). We can see that allowing this action to the agent increases its accuracy, that increasing the cost of advice results in RL agents using advice less often, and that memoryless agents use advice more often. This is interesting, and could have been a core contribution of the paper, if it were explicit and investigated as such. The conditions under which LLM agents are aware of their own knowledge, as well as them taking appropriate steps to reduce lack of information is a research programme in itself, as has been discussed by John Schulman in his Berkeley lecture in 2023: https://www.youtube.com/watch?v=hhiLw5Q_UFg&t=215s Can the authors predict when the model is going to ask for advice? Does the RL training help the model know what it knows (metacognitive skills)? These are interesting questions that the authors could have investigated;
* Relatedly, the related work paragraph on uncertainty looks unfinished. Relevant work on LLM’s metacognitive abilities are cited but there is no word on how the current work contributes to the discussion;
* (minor) Time of training and number of gpus required for training should appear after the type of gpu; does the training use lora?
* The evaluation for longform answers is based on GPT4, but no evidence is presented to correlate agreement of GPT4 with human judges on this task.
Technical Quality: 3
Clarity: 2
Questions for Authors: * My first and most important suggestion is that the authors should change the framing of the paper. The experimental work is sound and the tasks are worthwhile, but in the current state of the paper the conclusions seem to be that RAG agents work better than prompting on question-answering tasks, which is known. However, the paper could be: 1) a study on RL in RAG agent settings, 2) a study on training metacognitive abilities of RAG agents, and/or 3) a paper introducing a new benchmark (but the tasks seem not that challenging compared with tasks like SWE-bench https://www.swebench.com/ or reasoning tasks like theorem proving https://github.com/openai/miniF2F, https://leandojo.org/).
* [Conclusion, l305] why is LLM system 1 and interaction with tools system 2? What does it mean? If the claim is to be left in the paper it should be explained and adequate citations should be provided (also is this analogy fruitful? What purpose does it serve? is this a claim about the cognitive plausibility of the proposed agent architecture or its resemblance to humans?).
* [Clarity] The paper should explain what mcqa is when first mentioned, since it is confusing for people not familiar with the benchmark. Same goes for the meerkat model.
* Why is the improvement from RL only 2%?
* The original RAG paper should be cited: https://arxiv.org/abs/2005.11401
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: * Some limitations with respect to lack of resources are acknowledged in the appendix (small open source models are trained, only a subset of the training set is used). This is fine; the methods used are pretty powerful and can be expected to scale with increasing compute.
* If understood as a way to help agents answer queries, expert advice is definitely non-scalable. It is fine as a research question, maybe less so for modelling real-world applications. This could be mitigated with long-term memory of the agent (give advice once, retrieve forever). In any case, a discussion on the feasibility of expert advice in real world scenarios would be nice.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Comment: We sincerely thank you for your effort and valuable comments in reviewing our paper. We appreciate your recognition of our contributions and your insightful feedback. We have addressed these concerns in the Rebuttal Section within the given time constraints to the best of our ability.
---
Please let us know if our responses address your concerns. We are grateful for your time and effort in helping us enhance our work.
---
Rebuttal 2:
Rebuttal: > **Q1 (Weakness 1 & Question 1):** Paper framing
A1: Regarding novelty, we would like to note that we have summarized our core contributions at the end of the Introduction section [line 65], aligning exactly with your suggestions. The Methods section also aligns with your summary. We acknowledge that the "novel framework" statement was somewhat overstated. Hence, we will revise our paper's title to: "**AGILE: A Novel Reinforcement Learning Framework for LLM Agents**," to emphasize that our main contribution is the end-to-end RL formulation and training of LLM agents. We will also properly update other parts of the paper to clearly convey this point.
Beyond the RL aspects, we want to highlight other novel contributions of our paper. First, training the agent to proactively seek advice requires it to silently estimate its own confidence before making decisions. This capability cannot be achieved through SFT alone, further demonstrating the importance of RL training.
Second, the ProductQA dataset is a novel contribution. It is the first agent task involving very long trajectories, with each containing thousands of correlated QA sessions and millions of tokens in total. The actions the agent takes in one session influence all future sessions. This characteristic is markedly different from other benchmarks where the impact of actions is much shorter-term.
---
> **Q2 (Weakness 2):** Difference between AGILE and other agents
A2: Agents like Voyager are based on prompt engineering, while AGILE is an RL-formulated framework that allows end-to-end training. Application of AGILE to other tasks, such as WebShop, is left to future work.
---
> **Q3 (Weakness 3 & Question 4):** Additional results of RL training
A3: First, we provide training curves in the global response PDF file, which indicates that RL training converged after about 500 steps.
Second, following your suggestion, we conducted multiple independent trials of PPO training to study the variation of the result (see Table 2 in the PDF file). On average, RL training improves the total score by 2.6%, with a standard deviation of 0.3%, demonstrating the significance of RL improvements.
Third, we would like to discuss why RL training improved performance by a moderate 2%. We believe the reasons are: (1) The agile-vic13b-sft model is a strong baseline as it imitates the policy of the agile-gpt4-prompt; (2) SeekAdvice decisions made by agile-gpt4-prompt are nearly optimal under the default advice cost (c=0.3). To further study the impact of RL training, we conducted additional experiments in two more settings:
- We re-generated SFT training data for agile-vic13b-sft such that the agent performs SeekAdvice randomly in 25% of cases. This initial policy is simpler but more general. In this setting, we name the SFT model agile-vic13b-sft-random, and the final model trained with RL on top of it agile-vic13b-ppo-random. As shown in the table in global response, RL training brings a **7.1%** improvement in this setting. Interestingly, the performance of agile-vic13b-ppo-random is better than that of agile-vic13b-ppo that we reported in the paper. We conjecture that random SeekAdvice is a better initial policy because it enables exploration in all directions.
- In the second experiment, we lowered the advice cost to 0.1. After PPO training, the agile-vic13b-ppo-random agent quickly adapted to the new cost, performing SeekAdvice much more aggressively than the initial agent trained by SFT. In this scenario, RL training brings a **22.3%** improvement.
See Table 3 in the PDF file for concrete numbers.
---
> **Q4 (Weakness 4):** RL training for metacognitive skills
A4: RL training helps the model's metacognitive skills. Experiments on ProductQA and MedMCQA datasets show that, after PPO training, the agent seeks advice less often and achieves higher accuracy, indicating improved self-assessment in resolving queries versus seeking assistance.
> **Q5 (Weakness 5):** Related work and our contributions to the topic of uncertainty
A5: We note that AGILE solves a complex self-evaluation problem because the agent must make multiple seeking advice decisions in a long trajectory, each having a long-term influence on future decisions. This differs from existing work in uncertainty. We will add discussion to the revised version.
---
> **Q6 (Weakness 6):** Training details
A6: On ProductQA, SFT takes 3.6 hours and PPO takes 5.5 hours on 8 H800 GPUs. On MedMCQA, SFT takes 0.9 hours and PPO takes 2 hours. The LLM is trained without using LoRA.
---
> **Q7 (Weakness 7):** Agreement between GPT-4 evaluator and human
A7: We conducted an additional evaluation by randomly selecting 100 triplets (question, reference long answer, model-predicted long answer) from ProductQA and manually labeled the correctness. Our results show a 94% agreement rate between the GPT-4 evaluator and the author.
---
> **Q8 (Question 2):** Analogy to System 1 and System 2 processes
A8: According to the dual-process theory [1], human thinking involves two systems: System 1 (fast and unconscious) and System 2 (slow and conscious). Recent research has explored AI systems incorporating both processes [2]. We propose that AGILE uses LLM for System 1 and integrates external tools for System 2, mirroring human thinking. We will include further discussion in the revised version.
[1] D. Kahneman. 2003. Maps of bounded rationality: Psychology for behavioral economics.
[2] Y. Bengio, Y. Lecun, and G. Hinton. 2021. Deep learning for AI.
---
> **Q9 (Question 3):** Further explanations of MCQA and the Meerkat model
A9: MedMCQA is a multiple-choice QA dataset from medical school entrance exams. Meerkat is a medical LLM trained with high-quality CoT reasoning paths from 18 medical textbooks and diverse instruction-following datasets. We will include these explanations in the revised version.
---
> **Q10 (Question 5):** Citation of RAG paper
A10: We will add it to the paper.
---
Rebuttal Comment 2.1:
Title: Additional questions
Comment: I thank you for your reply, that answers some of my questions. I am especially happy that you took the time to perform several seeds of the RL experiments (the results are much more robust now) and that you performed comparison with humans judgements in the answer to Q7 (I now trust the results of evaluations).
I also appreciate that you performed additional RL experiments to try and show it helps compared to the strong SFT baseline. To make sure I really understand: the original SFT policy had 0% [SeekAdvice] rates? And what your new experiment does is start out with higher advice rates and show it leads to more efficient use of human advice (especially when the cost of advice falls to 0.1, effectively solving the task by soliciting advice more than half of the time)?
As to your remark:
> A2: Agents like Voyager are based on prompt engineering
I think nothing prevents researchers to build a Voyager agent (with an open weights model) and perform RL on it right?
I also appreciate that you emphasize that you build a new task, but from what I can see it doesn't seem very hard since agents (with quite standard architecture, again I think the type of agent you implement is quite common, see also https://arxiv.org/abs/2304.03442) based on 13b models, without RL or search, are able to solve most of it. That's why I was calling for evaluations on other datasets, to be able to compare. (ReAct is not a very strong baseline). I appreciate your new experiments on HotpotQA, and would be interested to know how it compares to the SotA listed in the benchmark homepage (in terms of metrics), and how you implemented advice in the case of this dataset.
---
Rebuttal 3:
Title: Response to additional questions (Part 1)
Comment: Thank you for your valuable feedback. We would like to address each of your questions as follows.
---
> **Q1:** I also appreciate that you performed additional RL experiments to try and show it helps compared to the strong SFT baseline. To make sure I really understand: the original SFT policy had 0% [SeekAdvice] rates? And what your new experiment does is start out with higher advice rates and show it leads to more efficient use of human advice (especially when the cost of advice falls to 0.1, effectively solving the task by soliciting advice more than half of the time)?
**A1:** In all training experiments, we consider three types of SeekAdvice rates: 1) the rate in the SFT training data, 2) the rate predicted by the SFT agent after training, and 3) the rate predicted by the RL agent after training.
For both the original and additional experiments, the SeekAdvice rate in the SFT training data (rate 1) is 25%. In the original experiment, GPT-4 makes SeekAdvice decisions. In the additional experiment, these decisions are made randomly but maintain the 25% rate.
In the original experiment, the SFT agent (agile-vic13b-sft) predicts SeekAdvice in 25.6% of cases, closely matching the training data. In contrast, in the additional experiment, the SFT agent (agile-vic13b-sft-random) predicts SeekAdvice in only 1.4% of cases. This discrepancy is due to the LLM's greedy decoding. Although it predicts SeekAdvice with logit scores corresponding to a 25% probability, the greedy decoder typically selects other actions with higher logit scores. We also tried random sampling decoding, and the results are attached to the table below.
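To illustrate why greedy decoding can suppress an action that carries roughly a quarter of the probability mass, here is a small self-contained sketch; the action names and logit values are hypothetical, chosen purely for illustration. A token can hold ~25% probability yet never be the argmax, so greedy decoding never emits it, while sampling emits it at close to its true rate:

```python
import math
import random

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits over three agent actions; [SeekAdvice] carries
# roughly a quarter of the probability mass but is never the argmax.
actions = ["SubmitAnswer", "SeekAdvice", "RetrieveMemory"]
logits = [2.0, 1.2, 0.5]
probs = softmax(logits)  # roughly [0.60, 0.27, 0.13]

# Greedy decoding always picks the highest-logit action,
# so [SeekAdvice] is emitted 0% of the time.
greedy_choice = actions[logits.index(max(logits))]

# Sampling decoding emits [SeekAdvice] at roughly its probability.
random.seed(0)
n = 10_000
sampled_rate = sum(random.random() < probs[1] for _ in range(n)) / n
```

Under greedy decoding the SeekAdvice rate collapses to zero even though the action holds ~27% probability here, mirroring the 1.4% greedy rate versus ~29% sampling rate observed for agile-vic13b-sft-random.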
In the original experiment, the RL agent (agile-vic13b-ppo) has a SeekAdvice rate of 23.3%, while in the additional experiment, the RL agent (agile-vic13b-ppo-random) has a rate of 30.6%. The new agent achieves a higher overall score, likely because it starts from a simpler and more general initial policy, allowing PPO to find a better final policy. In contrast, the original agent starts with a strong GPT-4 policy and only makes slight adjustments.
When the SeekAdvice cost is reduced to 0.1, PPO training converges to a much more aggressive SeekAdvice rate of 67.1%. This experiment demonstrates that RL training is sensitive to the SeekAdvice cost, always optimizing the policy under specific costs. In contrast, the GPT-4 agent remains insensitive to cost changes, even when the cost is provided as input. The SFT agent is also insensitive since generating optimal SFT data for a given cost is difficult.
| Model | Advice Cost | [SeekAdvice] Rate in SFT Training Data | Advice Rate (greedy) | Accuracy (greedy) | Total Score (greedy) | Advice Rate (sampling) | Accuracy (sampling) | Total Score (sampling) |
|---|---|---|---|---|---|---|---|---|
| agile-vic13b-sft | 0.3 | 0.25 | 0.256 | 0.843 | 0.766 | 0.308 | 0.839 | 0.747 |
| agile-vic13b-ppo | 0.3 | - | 0.233 | 0.854 | 0.784 | 0.278 | 0.842 | 0.759 |
| agile-vic13b-sft-random | 0.3 | 0.25 | 0.014 | 0.749 | 0.745 | 0.291 | 0.823 | 0.736 |
| agile-vic13b-ppo-random | 0.3 | - | 0.306 | 0.89 | 0.798 | 0.363 | 0.896 | 0.787 |
| agile-vic13b-sft-random | 0.1 | 0.25 | 0.014 | 0.749 | 0.748 | 0.291 | 0.823 | 0.794 |
| agile-vic13b-ppo-random | 0.1 | - | 0.671 | 0.981 | 0.914 | 0.573 | 0.941 | 0.884 |
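As a sanity check on the table, the total scores are consistent with total = accuracy − advice_cost × advice_rate, i.e., each SeekAdvice is penalized by the advice cost. The short script below (values copied from the greedy-decoding columns above) verifies this relation; note the formula itself is inferred from the numbers rather than quoted from the paper:

```python
# Rows: (advice_cost, advice_rate, accuracy, total_score), taken from the
# greedy-decoding columns of the table above.
rows = [
    (0.3, 0.256, 0.843, 0.766),  # agile-vic13b-sft
    (0.3, 0.233, 0.854, 0.784),  # agile-vic13b-ppo
    (0.3, 0.014, 0.749, 0.745),  # agile-vic13b-sft-random
    (0.3, 0.306, 0.890, 0.798),  # agile-vic13b-ppo-random
    (0.1, 0.014, 0.749, 0.748),  # agile-vic13b-sft-random (cost 0.1)
    (0.1, 0.671, 0.981, 0.914),  # agile-vic13b-ppo-random (cost 0.1)
]

for cost, advice_rate, accuracy, total in rows:
    predicted = accuracy - cost * advice_rate
    # All rows agree with the inferred formula up to rounding.
    assert abs(predicted - total) < 1e-3, (cost, advice_rate, accuracy, total)
```

The same check holds for the sampling-decoding columns, which supports reading the total score as accuracy net of the advice penalty.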
---
> **Q2:** I think nothing prevents researchers to build a Voyager agent (with an open weights model) and perform RL on it right?
**A2:** We agree with you that our main contribution is the end-to-end RL formulation and training of LLM agents.
In addition, our agent can manage very long trajectories. In the ProductQA task, there can be hundreds of question-answering rounds, generating very long trajectories whose training sequences span millions of tokens. To address this challenge, two key capabilities are necessary: 1) some actions can erase or modify the context, preventing it from growing indefinitely and becoming unmanageable for the LLM; 2) session-level RL training. These challenges cannot be resolved simply by applying PPO training to Voyager-like agents. We discuss the solutions to these issues in our work.
---
Rebuttal 4:
Title: Response to additional questions (Part 2)
Comment: > **Q3:** I also appreciate that you emphasize that you build a new task, but from what I can see it doesn't seem very hard since agents (with quite standard architecture, again I think the type of agent you implement is quite common, see also https://arxiv.org/abs/2304.03442) based on 13b models, without RL or search, are able to solve most of it. That's why I was calling for evaluations on other datasets, to be able to compare. (ReAct is not a very strong baseline). I appreciate your new experiments on HotpotQA, and would be interested to know how it compares to the SotA listed in the benchmark homepage (in terms of metrics), and how you implemented advice in the case of this dataset.
**A3:** Thanks for your comments.
1. Experiment implementation.
In the HotPotQA task, the agent has the option to either use search tools, seek advice, or directly predict an answer. If the agent chooses to use search tools, it generates a search query to retrieve relevant information, which is then appended to the agent's context. If the agent seeks advice, it obtains a human answer (ground-truth answer in our setting).
2. Comparison to SoTA listed in the benchmark homepage.
Thank you for your reminder. We will include the SoTA results in our results table. However, we want to highlight two key differences between our AGILE agent and the methods on the leaderboard:
- Our agent uses an external retrieval tool that returns the most relevant document based on a query. This retriever is the same as the one we used for ProductQA and MedicalQA and is not fine-tuned on the HotpotQA dataset. During training, we focus solely on training the LLM of the agent. In contrast, the top five systems on the leaderboard [1,2,3] (except the second one, which lacks a reference link) all train task-specific retrieval models on the 90K HotpotQA training examples. Training such models significantly boosts accuracy, and these works claim it as their main technical contribution. Our paper primarily studies the training of LLMs and treats external tools as black boxes, so using a task-independent retriever aligns better with our objectives.
- The top systems on the leaderboard [1,2,3] train separate answer extraction models to extract answers from document spans. Our agent directly generates answers from the LLM. While the generative nature may lower exact match (EM) accuracy, we believe it enhances simplicity and generalizability. Besides EM accuracy, we also calculated a GPT-4 accuracy where answers are compared for correctness, recognizing (USA, United States) as correct. As shown in the table below, our system's actual accuracy is much higher than the EM accuracy.
Regarding baselines, we noted that the original ReAct baseline implementation in [4] was suboptimal. We reproduced their results using GPT-4, leading to improved performance. In the table below, we also provide results of other works that use the same experimental setting as ours (using a black-box retrieval tool and generative answers). Our system's overall performance surpasses all these baselines.
| Method | Advice Rate | Accuracy (Exact match) | Accuracy (GPT-4 Evaluator) | Total Score (Exact match) |
|---|---|---|---|---|
| ReAct [4] | - | 0.351 | - | - |
| ReAct (gpt-4) | - | 0.482 | - | - |
| CRITIC [5] | - | 0.443 | - | - |
| Expel [6] | - | 0.390 | - | - |
| AutoAct [7] | - | 0.384 | - | - |
| agile-gpt4-prompt | 0.194 | 0.664 | 0.842 | 0.567 |
| agile-vic13b-w/o Advice | 0.000 | 0.553 | 0.751 | 0.553 |
| agile-vic13b-w/o RL | 0.171 | 0.668 | 0.857 | 0.617 |
| agile-vic13b-ppo (ours) | 0.156 | 0.675 | 0.858 | 0.628 |
| Supervised SoTA [1] | - | 0.727 | - | - |
[1] J. Zhang, et al. End-to-End Beam Retrieval for Multi-Hop Question Answering, NAACL 2024.
[2] Z. Yin, et al. Rethinking label smoothing on multi-hop question answering, CCL 2023.
[3] XY. Li, et al. From easy to hard: Two-stage selector and reader for multi-hop question answering, ICASSP 2023.
[4] S. Yao, et al. ReAct: Synergizing reasoning and acting in language models, ICLR 2023.
[5] Z. Gou, et al. CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing, ICLR 2024.
[6] A. Zhao, et al. Expel: Llm agents are experiential learners, AAAI 2024.
[7] S. Qiao, et al. AutoAct: Automatic Agent Learning from Scratch for QA via Self-Planning, ACL 2024.
---
Rebuttal 5:
Comment: Dear Reviewer b4yX,
Thank you for taking the time to review our manuscript. We sincerely appreciate your insightful feedback. As the deadline for the discussion phase nears, we would like to kindly remind you of our recent response, in which we diligently addressed the concerns you raised and provided detailed explanations.
Should you have any further questions or require additional clarifications, we would be eager to address them promptly.
Thanks again for your valuable feedback.
---
Rebuttal Comment 5.1:
Comment: Thank you for your thorough response!
In light of the new experiments and additional data, I find that some of my concerns have been addressed.
This paper:
* introduces a long-horizon task that has real-world relevance, in which one can ask for expert advice;
* introduces a RL formulation (and training algorithm) for LLM-agents;
* shows that traditional LLM agent components help solve the task (known);
* shows that RL training helps solve the task, and, importantly, that RL training allows the agent to ask for advice when it is not able to solve the task, which leads to high scores when the cost of advice is low;
* implements and tests the agent on HotPotQA where it achieves scores competitive with specialized baselines.
I am still concerned that RL training seems not so effective and that large-scale, cheap advice is an unrealistic scenario, but the agent learning to defer when it doesn't know the answer, as well as the experiments on HotpotQA have convinced me to raise my score. I would also kindly request that the formulation of the paper is amended to reflect the exact components of the method that are novel.
---
Reply to Comment 5.1.1:
Comment: We really appreciate your thoughtful feedback and recognition of our work's contributions. We are grateful for the improved score, which is truly encouraging. Your constructive feedback helped us enhance our work, and we will certainly integrate your suggestions into our revised version.
In response to your concerns about the effectiveness of RL training, we would like to highlight two key advantages:
1. RL training enables the discovery of a better policy compared to the one obtained from SFT training alone.
2. RL training is particularly sensitive to the SeekAdvice cost, optimizing the policy according to the specific cost, whereas SFT training is insensitive, since generating optimal SFT data for different costs is difficult.
Thank you again for your effort and valuable feedback, which helped us improve our paper. | Summary: This work presents a novel framework for LLM agents named AGILE. The entire AGILE system is trained end-to-end using reinforcement learning. A key feature of AGILE is its ability to seek advice from external human experts. Additionally, the authors have developed ProductQA, a challenging dataset of complex QA, to comprehensively evaluate the capabilities of the agent. Extensive experiments show that an agent within this framework, even when based on a smaller model and trained with RL, can outperform GPT-4.
Strengths: 1. The paper introduces a novel framework, AGILE, for LLM agents that integrates memory, tools, and expert consultations, all optimized through reinforcement learning.
2. The development of the ProductQA dataset.
3. The paper is well-organized and clearly written. The architecture and workflow of the AGILE framework are explained with clarity.
4. The experiments on ProductQA and MedMCQA verify the AGILE framework and show that AGILE agents based on 13B and 7B LLMs trained with PPO can surpass GPT-4 agents.
Weaknesses: 1. It is somewhat like WebGPT (in terms of actions and policy learning strategy), with extensions supporting memory, expert consultation, and reflection.
2. The experiments are all related to a single agent. It would be better to show the application for multiple agents and include planning.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. How can this framework be extended to support multiple agents?
2. In Table 5, the accuracy of "w/o Memory" and "w/o Tool-Use" is higher than that of agile-vic13b-ppo. Is the metric "Accuracy – Advice Rate" a more straightforward one than Total Score?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors have addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your positive feedback and effort in reviewing our paper. Thank you for your constructive questions. In our response, we will quote each question and provide our answers accordingly.
## Response to comments
---
> **Q1 (Weakness 1):** It is somewhat like WebGPT (in terms of actions and policy learning strategy), with extensions supporting memory, expert consultation, and reflection.
**Answer:** Thanks for bringing this to our attention. We would like to clarify the key distinctions between our proposed AGILE framework and WebGPT:
- WebGPT primarily uses a policy learning strategy to train the agent for web operations and does not incorporate memory, expert consultation, or reflection. In contrast, AGILE emphasizes proactively seeking advice from human experts, allowing the agent to achieve high accuracy when handling complex and challenging questions. The agent can also enhance its performance over time by reflecting on expert feedback and memorizing it for future use.
- While WebGPT utilizes reinforcement learning, our work addresses the significant challenge of training the policy for invoking various modules and the reasoning, planning, reflection, and seeking advice abilities of the LLM agent in an end-to-end manner. This challenge is particularly pronounced in long trajectory scenarios. To overcome this, we propose a session-level optimization algorithm that facilitates policy learning at the session level, thereby mitigating the difficulties associated with long trajectory optimization.
- As shown in Table 5 in our paper, incorporating memory, seeking advice, and reflection into AGILE results in relative total score improvements of 4.0%, 5.0%, and 1.7%, respectively, on the ProductQA dataset. These results demonstrate the necessity of these modules for LLM-based agents in practical scenarios.
---
> **Q2 (Weakness 2):** The experiments are all related to a single agent. It would be better to show the application for multiple agents and include planning.
**Answer:** We appreciate your valuable suggestions. Applying AGILE to multi-agent systems is indeed an interesting direction. The AGILE framework can be extended to facilitate interactions with machine agents in various roles, such as students or teachers, and in different formats, such as debates or coordination. For example, a multi-agent version of AGILE could seek advice from a human expert at a high cost or from a more capable machine agent at a lower cost. Furthermore, the planning for multiple agents can be enhanced using end-to-end RL training.
In our paper, we primarily focus on the RL formulation and end-to-end training of LLM agents, along with their seeking advice capabilities, which we believe are fundamental and essential for agent performance. Due to space and time limitations, the application of AGILE to multi-agent systems will be addressed in future work. Thank you for your constructive feedback.
---
> **Q3 (Question 1):** How can this framework be extended to support multiple agents?
**Answer:** Thanks for your question. Please refer to our response to Q2.
---
> **Q4 (Question 2):** In Table 5, the accuracy of "w/o Memory" and "w/o Tool-Use" is higher than that of agile-vic13b-ppo. Is the metric "Accuracy – Advice Rate" a more straightforward one than Total Score?
**Answer:** Thank you for this valuable comment. In Table 5, although the accuracy of "w/o Memory" and "w/o Tool-Use" is higher than that of agile-vic13b-ppo, they exhibit a significantly higher Advice Rate compared to agile-vic13b-ppo. This higher Advice Rate results in lower overall performance for "w/o Memory" and "w/o Tool-Use" when assessed using our Total Score metric.
In our paper, the total score is defined as the average reward across all sessions, taking both the advice rate and the accuracy into account. Specifically, the total score is expressed as "Total Score = Accuracy - Seeking Advice Cost * Advice Rate". This formulation encompasses the "Accuracy – Advice Rate" metric as a special case where the Seeking Advice Cost is set to 1.
It's important to note that the Seeking Advice Cost can vary across different real-world applications. For instance:
- In a math problem-solving task, the required human expert might need high professional expertise in mathematics, resulting in a high cost (e.g., a cost of 1).
- In product question-answering scenarios, the human expert could be a normal person who has received short-term customer service training, leading to a lower cost (e.g., a cost of 0.3).
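As a quick sketch, the Total Score formula above can be written out directly; the numeric accuracy and advice-rate values below are illustrative, and the 0.3 cost is the customer-service example just mentioned:

```python
def total_score(accuracy: float, advice_rate: float, advice_cost: float) -> float:
    """Total Score = Accuracy - Seeking Advice Cost * Advice Rate."""
    return accuracy - advice_cost * advice_rate

# Lower-cost advice (e.g. 0.3) penalizes seeking advice less
print(round(total_score(0.675, 0.156, 0.3), 3))  # 0.628

# With a cost of 1, the metric reduces to "Accuracy - Advice Rate"
assert total_score(0.8, 0.2, 1.0) == 0.8 - 0.2
```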
---
Please let us know if our replies address your concerns. Thanks for taking the time to consider this discussion. We appreciate your time and effort in helping us improve our work.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for the detailed response. | Summary: This paper proposes AGILE, a reinforcement-learning based framework for finetuning LLMs for conversational QA tasks. The models are initially trained using imitation learning, then further finetuned with RL. Once finetuned, the models show strong performance, surpassing GPT-4 while using much smaller models.
Strengths: - The proposed framework is interesting and novel, distilling the usage of various tools, reflection, memory retrieval/writing, and human advice seeking from larger models' trajectories, to a smaller model, then further training it using RL to surpass the performance of the larger model
- The performance is strong
- Introduces a new dataset, ProductQA which is a useful resource for conversational QA in online shopping scenarios.
- Cost-benefit analysis of advice seeking is insightful.
Weaknesses: I only have two main concerns:
- Because the model is only evaluated on two conversational QA datasets (and not on any of the mentioned datasets in Table 8), it's difficult to judge the generality of the proposed method.
- There is no comparison with standard PPO.
Technical Quality: 3
Clarity: 3
Questions for Authors: Why is there no comparison with an RL finetuning baseline (standard PPO)?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We genuinely appreciate your positive feedback and the time you invested in reviewing our paper. Thank you for your insightful questions. In our response, we will address each question individually, quoting them and providing our answers accordingly.
## Response to comments
---
> **Q1 (Weakness 1):** Because the model is only evaluated on two conversational QA datasets (and not on any of the mentioned datasets in Table 8), it's difficult to judge the generality of the proposed method.
**A1:** Thanks for the helpful suggestion. To address the concern regarding the generality of AGILE, we conducted additional experiments on the HotPotQA dataset, which is one of the tasks listed in Table 8. The main results and ablation study are presented below.
Compared with ReAct implemented by prompting GPT-4, our method (agile-vic13b-ppo) shows a 19.3% relative improvement in accuracy. Furthermore, agile-vic13b-ppo achieves a 10.7% relative improvement in the total score over agile-gpt4-prompt, which is the AGILE agent implemented by prompting GPT-4. The ablation study underscores the indispensability of seeking advice and PPO training in achieving the agent's strong performance.
| Method | Advice Rate | Accuracy | Total Score |
|---|---|---|---|
| ReAct (gpt-4) | - | 0.482 | - |
| agile-gpt4-prompt | 0.194 | 0.664 | 0.567 |
| agile-vic13b-w/o Advice | 0.000 | 0.553 | 0.553 |
| agile-vic13b-w/o RL | 0.171 | 0.668 | 0.617 |
| agile-vic13b-ppo (ours) | 0.156 | 0.675 | 0.628 |
---
> **Q2 (Weakness 2 & Question 1):** Why is there no comparison with an RL finetuning baseline (standard PPO)?
**A2:** Thanks for your question. In the ProductQA task, there could be hundreds of question-answering rounds, and these rounds are correlated: actions in earlier rounds can write to the memory and thus have lasting effects on subsequent rounds. For example, knowledge distilled from seeking advice can help the agent respond in the future. These long trajectories yield training sequences that span millions of tokens, which is impractical for standard PPO training. To address this issue, we proposed a session-level training algorithm that takes these lasting effects into account, as detailed in Appendix A of our paper.
---
Please let us know if our replies address your concerns. Thanks for taking the time to consider this discussion. We appreciate your time and effort in helping us improve our work.
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: Thank you to the authors for the additional results. This response addressed my concerns. | Summary: The paper "AGILE: A Novel Framework of LLM Agents" introduces a new framework for Large Language Model (LLM) agents designed to handle complex conversational tasks. The framework, named AGILE (AGent that Interacts and Learns from Environments), incorporates LLMs, memory, tools, and interactions with experts. AGILE is formulated as a reinforcement learning (RL) problem and is fine-tuned using Proximal Policy Optimization (PPO). The authors present a new dataset, ProductQA, for evaluating the framework and report that AGILE agents outperform GPT-4 in their experiments. The paper claims significant improvements in performance due to the integration of memory, tools, expert consultation, and RL training.
Strengths: **Comprehensive Evaluation:** The creation of the ProductQA dataset and the extensive experiments conducted on both ProductQA and MedMCQA provide a robust evaluation of the framework’s capabilities.
**Significant Performance Improvements:** The reported improvements over GPT-4 agents in both ProductQA and MedMCQA are noteworthy, indicating the effectiveness of the AGILE framework.
**Detailed Methodology:** The paper provides a thorough explanation of the RL formulation, training processes, and the roles of different components within the framework, which enhances reproducibility.
Weaknesses: **Limited Novelty:** The reliance on human experts for advice, while useful, is not a novel concept and has been explored in previous works and becomes a common practice. The paper does not sufficiently differentiate its approach to human expert interaction from existing methods.
- Xiao, H. and Wang, P., 2023. Llm a*: Human in the loop large language models enabled a* search for robotics. arXiv preprint arXiv:2312.01797.
- https://python.langchain.com/v0.1/docs/use_cases/tool_use/human_in_the_loop/
**Scalability Concerns:** The scalability of AGILE with more complex environments is not addressed. It is unclear how well the framework would perform with task complexity.
Technical Quality: 2
Clarity: 2
Questions for Authors: **Benchmark Selection:** Why were ProductQA and MedMCQA specifically chosen as the benchmarks for this study? Are there other benchmarks where AGILE could be tested to validate its generalizability?
**Human Expert Advice:** Is there a ground truth for when the agent needs to seek advice, and how is this optimized?
**Memory Component:** Can you provide more details on how the memory component scales with an increasing number of interactions and how it ensures efficient retrieval of relevant information?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: See weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Comment: We sincerely appreciate your effort and valuable comments in reviewing our paper. Your recognition of our contributions and your insightful feedback are greatly valued. We have addressed your concerns in the Rebuttal Section to the best of our ability within the given time constraints.
---
Please let us know if our responses satisfactorily address your concerns. We are grateful for your time and effort in helping us improve our work.
---
Rebuttal 2:
Rebuttal: > **Q1:** Limited Novelty: The reliance on human experts for advice, while useful, is not a novel concept and has been explored in previous works and becomes a common practice. The paper does not sufficiently differentiate its approach to human expert interaction from existing methods.
> - Xiao, H. and Wang, P., 2023. Llm a*: Human in the loop large language models enabled a* search for robotics. arXiv preprint arXiv:2312.01797.
> - https://python.langchain.com/v0.1/docs/use_cases/tool_use/human_in_the_loop/
**A1:** Thanks for bringing this to our attention. In the AGILE framework, the agent can proactively seek advice from human experts when its confidence is low. This is distinct from existing human-in-the-loop methods, which rely on passively receiving human feedback (Xiao and Wang, 2023) or following pre-defined rules (https://python.langchain.com/v0.1/docs/use_cases/tool_use/human_in_the_loop/). Proactively seeking advice is a much more complicated decision-making problem, since the agent must estimate its own confidence in the current state, predict the potential value of the advice for future sessions, and consider the cost of experts. We will cite the works you mentioned and clearly differentiate our proactive seeking-advice mechanism from existing human expert interaction methods in our revised paper.
---
> **Q2:** Scalability Concerns: The scalability of AGILE with more complex environments is not addressed. It is unclear how well the framework would perform with task complexity.
**A2:** Thanks for your feedback. We acknowledge the importance of addressing the scalability of AGILE. To this end, we have introduced ProductQA, a complex benchmark designed to evaluate the comprehensive capabilities of the agent. ProductQA tests an agent's ability to handle historical information and accumulated knowledge, leverage tools, interact with humans, perform self-evaluation, and conduct reflection. In addition, the training and testing tasks are made disjoint to assess the agent's ability to adapt to new tasks.
Our experimental results show that the AGILE agent, based on a 13B LLM and trained with PPO, outperforms the GPT-4 agent on ProductQA. This demonstrates the AGILE framework's potential to scale and manage complex tasks effectively. Furthermore, AGILE is a general agent framework, allowing for various extensions, such as integrating additional tools, which further enhances its scalability with increasing task complexity.
---
> **Q3:** Benchmark Selection: Why were ProductQA and MedMCQA specifically chosen as the benchmarks for this study? Are there other benchmarks where AGILE could be tested to validate its generalizability?
**A3:** While the proposed agent framework is general, in this paper we evaluate it on complex question answering, a task where an LLM agent has the potential of outperforming existing solutions such as the use of an LLM alone. We use ProductQA since it assesses a variety of agent capabilities, including knowledge accumulation, tool use, human interaction, and reflection, making it ideal for demonstrating the comprehensive ability of our framework. To validate the generality of AGILE, we select MedMCQA as an additional benchmark. This task requires extensive medical knowledge and the ability to effectively seek expert advice.
In response to your suggestion about exploring more diverse benchmarks, we perform experiments on HotPotQA, which features natural, multi-hop questions. We train an AGILE agent using Vicuna-13b as the base model. The experimental results show that the AGILE agent outperforms the GPT-4 agent by 10.8% in relative total score. Ablation studies verify that PPO training improves the total score by 1.8%. Detailed results of the additional experiments will be included in the revised version to underscore the generality of the AGILE framework.
| Method | Advice Rate | Accuracy | Total Score |
|---|---|---|---|
| ReAct (gpt-4) | - | 0.482 | - |
| agile-gpt4-prompt | 0.194 | 0.664 | 0.567 |
| agile-vic13b-w/o Advice | 0.000 | 0.553 | 0.553 |
| agile-vic13b-w/o RL | 0.171 | 0.668 | 0.617 |
| agile-vic13b-ppo (ours) | 0.156 | 0.675 | 0.628 |
---
> **Q4:** Human Expert Advice: Is there a ground truth for when the agent needs to seek advice, and how is this optimized?
**A4:** Thanks for your questions. Determining when the agent should seek advice depends on the model's confidence and the cost of human experts. Since the optimal decision is model-dependent, it is difficult to establish a ground truth and perform supervised fine-tuning. However, our RL framework can effectively address this issue. By defining rewards for both correctly predicting answers and seeking advice, we can optimize this skill as part of the policy model through end-to-end RL training. For example, we assign a reward of +1 for a correct prediction and a penalty of -c for seeking advice, where c represents the human cost. This approach allows the agent to learn an optimal trade-off between relying on its own predictions and seeking external advice. We will explain the method more clearly in the revised paper.
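A minimal sketch of this reward assignment (the action names and the c = 0.3 value are illustrative, not taken from the paper):

```python
def reward(action: str, is_correct: bool = False, advice_cost: float = 0.3) -> float:
    """Illustrative per-action reward: +1 for a correct prediction,
    -c for seeking advice, 0 otherwise."""
    if action == "seek_advice":
        return -advice_cost          # penalty of -c for consulting the expert
    if action == "predict_answer":
        return 1.0 if is_correct else 0.0
    return 0.0                       # intermediate actions get no direct reward

print(reward("predict_answer", is_correct=True))  # 1.0
print(reward("seek_advice"))                      # -0.3
```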
---
> **Q5:** Memory Component: Can you provide more details on how the memory component scales with an increasing number of interactions and how it ensures efficient retrieval of relevant information?
**A5:** Thank you for your question. The memory is designed as a scalable database that stores the agent's trajectories, including question-answering pairs and agent reflections. We employ vector-based retrieval for efficient information access. Specifically, we use the all-MiniLM-L6-v2 model to embed the text data. When an instance is stored, it is embedded into a vector representation that serves as its key in memory.
For retrieval, we embed the user question using the same model and then perform a cosine similarity search among the stored embeddings. This is a well-studied process and can be performed efficiently even when the memory is large.
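A minimal sketch of this cosine-similarity lookup, with toy 3-d vectors standing in for all-MiniLM-L6-v2 embeddings (the function and variable names are illustrative, not from the paper):

```python
import numpy as np

def retrieve(query_vec, keys, values, top_k=1):
    """Return the stored values whose embedding keys are most
    cosine-similar to the query embedding."""
    q = query_vec / np.linalg.norm(query_vec)
    k = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    sims = k @ q                        # cosine similarities to each stored key
    top = np.argsort(-sims)[:top_k]     # indices of the best matches
    return [values[i] for i in top]

# Toy memory of three stored trajectories
keys = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.7, 0.7, 0.0]])
values = ["trajectory A", "trajectory B", "trajectory C"]
print(retrieve(np.array([0.9, 0.1, 0.0]), keys, values))  # ['trajectory A']
```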
---
Rebuttal 3:
Comment: Dear Reviewer 5wwv,
We sincerely appreciate the time and effort you have invested in reviewing our manuscript. As the deadline of the discussion phase approaches, we would like to kindly remind you of our rebuttal, which addresses each concern you raised in your feedback.
If you have any further questions or concerns, we would be happy and eager to address them promptly in the discussion period.
Thank you once again for your valuable feedback. | Rebuttal 1:
Rebuttal: We sincerely appreciate the valuable feedback from all reviewers. We have tried our best to address each question raised in the respective reviews.
Additionally, we have conducted supplementary experiments to address certain concerns and incorporated the results in a one-page PDF file attached to this global response. The PDF includes the following:
- **Results on HotPotQA.** Experiment results and ablation study on a new task: HotPotQA.
- **Robustness of RL training.** Results of multiple PPO training runs, providing the mean and standard deviation of the improvement brought by RL training.
- **Additional results of RL training.** Improvement achieved by our RL training with different initial policy models or different reward values.
- **RL training curves.** Reward and value function loss curves during the PPO training process.
Pdf: /pdf/84889371293ecaa925a03a893fa039c216b42341.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
StrategyLLM: Large Language Models as Strategy Generators, Executors, Optimizers, and Evaluators for Problem Solving | Accept (poster) | Summary: The paper proposes a prompt engineering method to address generalizability and consistency issues in existing prompting approaches for LLM-based problem solving. It utilizes four LLM-based agents including strategy generator, executor, optimizer, and evaluator. The proposed method works by generalizing knowledge from instances through induction and applying generalized knowledge to solve a given problem through deduction.
Authors evaluate the performance of their method empirically on multiple data sets and against a few alternative methods including self consistency with chain of thoughts. Results show superior performance for the proposed method compared with a few alternative approaches.
Strengths: Authors have evaluated their method's performance across multiple data sets and against various alternative methods. In addition, they conduct experiments to assess the universality of their proposed method, the effectiveness of task level strategy for reasoning. They also analyze the cost of strategy-based prompt engineering.
Weaknesses: The main gap in the paper is the lack of detailed explanation to bridge between the superior performance of the proposed method and other studied methods including CoT through concrete examples and clear intuitions. The intuition explained in the paper relies on generalization and consistency as the main traits of the proposed method. It would have been helpful to use a concrete example showing how alternative methods such as CoT fail to provide the right solution for a given problem while the method succeeds at doing so. It could be argued that the raw information is already available in the Appendix section, however, the reviewer is asking for interpretation of the way CoT is solving a problem vs. the proposed method by highlighting the generalization and consistency gaps in CoT's approach.
Technical Quality: 1
Clarity: 2
Questions for Authors: - Where does the feedback in strategy optimizer come from? Is it automatically generated by LLM? If yes, can you elaborate more on how it's generated? If not, can authors clarify where feedback is coming from?
- The authors report the biggest gain from their method on symbolic reasoning tasks, but the Addendum does not provide any examples of such tasks or the corresponding generated strategies.
- How does StrategyLLM generate multiple sets of strategies? Is it by asking it to re-run the same prompt multiple times, or does it involve changing the original prompt?
Confidence: 3
Soundness: 1
Presentation: 2
Contribution: 1
Limitations: The authors have listed the lack of domain-specific knowledge in underlying LLMs, due to a lack of domain-specific training data, as a limitation of their work. Beyond this limitation, StrategyLLM might be susceptible to bias by generating discriminating strategies that perform well on the train set, which is not addressed in the paper.
Flag For Ethics Review: ['Ethics review needed: Discrimination, bias, and fairness']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful comments. We address each question as follows:
> The intuition explained in the paper relies on generalization and consistency as the main traits of the proposed method. It would have been helpful to use a concrete example showing how alternative methods such as CoT fail to provide the right solution for a given problem while the method succeeds at doing so.
We use an example from the test set of the CP dataset in the MATH benchmark for illustration.
**Question**: Suppose that I have 6 different books, 2 of which are math books. In how many ways can I stack my 6 books on a shelf if I want a math book on both ends of the stack?
**Gold Answer**: 48
The CoT prompt for the CP dataset is illustrated in Figure 33 on Page 36 of our paper, while the strategy generated by GPT-3.5 is detailed in Figure 18 on Page 22. The predictions of CoT and StrategyLLM are as follows:
**CoT Prediction**:
```
We can treat the two math books as a single entity. So we have 5 entities to arrange on the shelf, which can be done in $5!$ ways. However, within the math book entity, the two math books can be arranged in $2!$ ways. Therefore, the total number of ways to arrange the books is $5! \cdot 2! = \boxed{240}$.
Answer: The answer is 240.
```
**StrategyLLM Prediction**:
```
1. Understand the problem statement:
- We have 6 different books, 2 of which are math books.
- We need to stack the books on a shelf in a way that a math book is on both ends of the stack.
2. Identify the counting or probability concept:
- We need to use the concept of permutations to solve this problem.
3. Break down the problem:
- We have 6 books, 2 of which are math books.
- We need to arrange these books on a shelf in a way that a math book is on both ends of the stack.
- The math books can be arranged in 2 ways (one on the left end and one on the right end).
- The remaining 4 books can be arranged in 4! = 4 x 3 x 2 x 1 = 24 ways.
4. Calculate the required counting or probability:
- To calculate the total number of ways to stack the books, we need to multiply the number of ways to arrange the math books with the number of ways to arrange the remaining books.
- Total number of ways = 2 (ways to arrange math books) x 24 (ways to arrange remaining books) = 48.
5. Simplify or reduce the problem:
- There are no further simplifications or reductions possible for this problem.
6. Communicate the solution clearly:
- There are 48 ways to stack the 6 books on a shelf such that a math book is on both ends of the stack.
Answer: The answer is 48.
```
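The arithmetic behind the two predictions can be checked directly; this is a sketch for illustration, not part of the rebuttal:

```python
from math import factorial

# StrategyLLM's count: 2 arrangements of the math books on the two ends,
# times 4! arrangements of the remaining four books in the middle
assert 2 * factorial(4) == 48           # matches the gold answer

# CoT's flawed count glues the two math books into one adjacent block
# (5 entities, then 2! internal orders), answering a different question
assert factorial(5) * factorial(2) == 240
```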
The CoT prediction fails to account for the requirement of placing a math book at both ends, resulting in an incorrect solution. In contrast, the generalizable strategy employed by StrategyLLM allows for a comprehensive understanding of the problem, effective utilization of relevant concepts, and a breakdown of the task. This approach ensures that critical details are not overlooked, leading to the correct answer. Additionally, consistently applying a strategy across few-shot examples illustrates how that strategy can be utilized in various contexts and ensures that the same strategy will be applied to the test instance.
Beyond accuracy, the solutions derived from StrategyLLM are more detailed, interpretable, and accessible, making them particularly suitable for contexts that demand clarity and comprehensibility, such as educational settings.
> Where does the feedback in strategy optimizer come from? Is it automatically generated by LLM? If yes, can you elaborate more on how it's generated? If not, can authors clarify where feedback is coming from?
The feedback in the strategy optimizer is indeed automatically generated by LLMs. Following the execution of a strategy, we can identify which examples yield correct solutions and which do not. We then prompt the LLMs to analyze the incorrect solutions, providing insights into potential reasons for their failures, such as calculation errors, misunderstandings of the problem, or the application of incorrect formulas. The LLM also offers suggestions for enhancing the strategy, which may include adding subtasks, decomposing complex subtasks into simpler components, or revising ineffective subtasks. Subsequently, we utilize the LLM to refine the strategy based on this feedback.
> Authors report the biggest gain from their method for symbolic reasoning tasks, but Addendum does not provide any examples of such tasks and the corresponding generated strategies.
We would like to clarify that we have included a strategy and an execution example for the LLC task in Figure 27 on Page 30 of our paper. This example illustrates the application of our method on symbolic reasoning tasks.
> How does strategy LLM generate multiple sets of strategies? Is it by asking it to re-run the same prompt multiple times or does it involve changing the original prompt?
StrategyLLM generates multiple strategies by re-running the same prompt multiple times, utilizing temperature sampling to introduce variability in the outputs and allowing for the exploration of diverse strategies.
> StrategyLLM might be susceptible to bias by generating discriminating strategies that may perform well on the training set, which is not addressed in the paper.
We emphasize that StrategyLLM is specifically designed to address potential biases present in few-shot prompts and to enhance the generalizability of the prompts through the use of task-level strategies. Our comprehensive results demonstrate that the strategy-based prompts exhibit greater generalizability compared to traditional CoT prompts.
---
Rebuttal Comment 1.1:
Comment: I'd like to thank the authors for their detailed explanations. After reading the concrete math example, I'm still unclear why missing the fact that we need the math books at both ends of the stack is something specific to CoT. This seems to be a miss on the LLM that is in charge of interpreting the question. It could be argued that if the strategy generator LLM misses this fact the same way that CoT's LLM missed it, then the proposed method by authors would make the same mistake as CoT.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer D2vF
Comment: Thank you for your continued feedback and for highlighting the need for further clarification regarding the differentiation between the CoT approach and our proposed StrategyLLM method. We appreciate the opportunity to elaborate on this critical aspect.
**Clarification:**
We would like to clarify that the underlying LLM for the strategy generator is the same as the LLM being tested. Additionally, CoT and StrategyLLM leverage the same LLM for problem interpretation and resolution. However, the fundamental difference between these methods lies in their approach to guiding the LLM’s reasoning process.
**Understanding LLM Capabilities:**
It’s important to recognize that the LLM itself has the capacity to understand the details of a problem, such as the requirement to place math books at both ends of a stack. However, how effectively the LLM applies this understanding can vary significantly depending on the prompting method used.
**The Role of Structured Strategy in Mitigating Misinterpretation:**
The StrategyLLM approach is designed to mitigate the likelihood of misinterpretations through a structured reasoning process. Unlike CoT, which primarily guides the model to think step-by-step, StrategyLLM encourages a more holistic understanding of the problem by enforcing a structured strategy that must be followed consistently across examples.
For instance, in the provided example, StrategyLLM ensures a systematic approach to solving it:
*Step 1: Problem Understanding:* The strategy mandates a clear understanding of the problem requirements, including the necessity for math books at both ends of the stack.
*Step 2: Identifying Relevant Concepts:* The strategy requires identifying the relevant counting or probability concepts needed to solve the problem, ensuring that the LLM considers the correct mathematical approach.
*Step 3: Breakdown of the Problem:* The strategy explicitly requires identifying the critical elements of the problem, such as the placement of the math books and the arrangement of the remaining books.
*Step 4: Calculation:* The strategy then guides the model to apply relevant mathematical concepts (e.g., permutations) to derive the correct solution.
We can observe that the strategy itself does not directly identify critical details; rather, it strongly encourages the LLM to recognize these details for each problem. This approach significantly reduces the risk of overlooking crucial aspects.
**Differentiating from CoT:**
CoT allows for a more flexible reasoning process, which can sometimes lead to inconsistencies or omissions in understanding and solving problems. In the case of the example provided, CoT failed to capture the requirement of placing math books at both ends because the step-by-step reasoning did not enforce a structured breakdown of the problem. Instead, it focused on treating the two math books as a single entity, leading to an incorrect interpretation.
**Conclusion:**
In conclusion, while both CoT and StrategyLLM rely on the LLM’s interpretation, StrategyLLM’s structured approach reduces the likelihood of critical misinterpretations. This structured reasoning process ensures that all essential aspects of a problem are considered, leading to more accurate and reliable solutions.
We hope this clarifies the distinction and addresses your concern. We appreciate your feedback and are committed to addressing any further questions or concerns you may have. | Summary: This work uses an LLM to describe the strategy that it should use to solve classes of problems, and then applies that strategy to the different test cases as required. In addition, there is a caching and evaluation process whereby these strategies are improved in an in-context learning prompt training/refinement cycle. This approach is tested across a number of datasets and underlying (commercial and open-source) LLMs.
Strengths: Well written paper, with a clear idea and thorough experimentation.
Good to see both commercial and open-source models being tested side-by-side.
The strategy artifacts produced by the model are very interesting in their own right. They are produced by the 'portable' prompts in the Appendix; however, by the time they have been validated/cached/refined, they will likely be model-specific.
Weaknesses: The actual prompt (Figure 7, in Appendix C) on which *the whole approach relies* should be included in the body of the paper.
Text Tweak:
* L560: "the optimal strategies for various datasets" - seems unlikely. Perhaps "the best strategies discovered ..." ?
Technical Quality: 4
Clarity: 4
Questions for Authors: * L221-233: Clarification: Is the Strategy Generator the same LLM as being 'tested' - or is the strategy being generated originally by GPT-3.5?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: As described in the text, this approach is very dependent on whether the underlying LLM has a 'high level' understanding of how to approach the problem domain. This is a dependency that somewhat undercuts the broad claims of the work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful comments. We address each question as follows:
> Regarding the actual prompt and text tweak
We appreciate your suggestions. We will incorporate these revisions in the new version of our paper.
> Is the Strategy Generator the same LLM as being 'tested' - or is the strategy being generated originally by GPT-3.5?
We would like to clarify that the underlying LLM for the strategy generator, executor, optimizer, and evaluator is the same as the LLM being tested.
---
Rebuttal Comment 1.1:
Comment: Thank you for your responses to my questions. Having read those (and those in response to the other reviewers' questions), I'll stick with my original Rating (with apologies to the AC for not making the final decision any easier) | Summary: This paper introduces a novel prompting strategy for large language models called StrategyLLM, which it employs to solve a range of tasks in math, commonsense reasoning, word sorting, and last letter concatenation. The approach involves (1) prompting a language model to generate task-specific instance-general strategies (sequences of subtasks) from a few examples of a given task, (2) prompting a language model to execute these strategies in order to grade them, (3) prompting a language model to refine strategies, (4) evaluating the strategies on a validation set to select inference time strategies, and (5) employing one or more of the selected strategies to solve new instances of the tasks of interest. The key attribute of this approach is that the strategies generated are not instance-specific, and indeed the paper finds improved generalization compared with prompting with instance-specific solutions as in the CoT baseline. The paper evaluates the StrategyLLM approach against a selection of standard prompting baselines, finding improvements across all tasks considered and across all models considered, and provides additional analyses.
Strengths: The main high-level thrust of the paper, that LLM can generate strategies specific to a task, and then carry out these strategies is an important direction. This paper shows that LLMs can indeed use strategy generation to achieve greater generalization than an LLM performing in-context learning from example solutions. The experiments are adequately wide-ranging and the results are compelling; strategy generation as employed in StrategyLLM yields improvements over the baselines compared against. These baselines are well selected, and while they don't represent the full suite of prompting strategies that people employ with LLMs today, they make for a fair comparison that allows for measuring the effect of the core idea of StrategyLLM, having the LLM generate task-specific strategies.
Prompting methods to elicit improved reasoning from large language models is an active area, ripe with a large number of unexplored ideas. This paper does a good job of placing its approach in the context of some of the recent work in this area. This is not the first paper to employ prompting strategies that share this high-level thrust, and as the paper cites a closely related learning-to-plan approach [15] has previously been explored.
Overall the paper is clearly written, though in the Weaknesses section I remark on how the SolutionLLM baseline should be explained more clearly.
Weaknesses: The approach in StrategyLLM is modestly complicated. For example, a simpler approach that still tests the core idea of the paper would be to keep the strategy generator and inference steps, but skip the intermediate grading and optimization of the strategies. Would the method work without these extra prompting phases, or are they essential to the method? An ablation would be valuable. I would expect these phases are essential, since they provide a data-driven validation of strategies before they are employed. This raises another natural question which is: is it the data-driven validation of the prompt (in this case, a strategy) that is the main advantage of StrategyLLM, or is it specifically that the strategies are not instance-specific? An experiment that compares with a prompt selection approach analogous to StrategyLLM's (i.e. generating multiple prompts, grading them on a validation set, refining them as needed, using a validation set to select between them, and finally employing SC or ZC to get a final answer), but generating e.g. few-shot prompts instead of strategies would yield a fairer comparison.
In my view, the SolutionLLM baseline is not explained clearly. Two points in particular could use clarification. First, state more clearly what is meant by a "solution"; this is understandable by looking at the prompt in the appendix, but could use clarification in the text. Second, state explicitly how the SolutionLLM prompt is used; the prompt in the appendix merely outputs the solution to a specific example (I presume this is a dev/validation example), but it is not stated (if I am not mistaken) how this results in answers for novel tasks.
Technical Quality: 3
Clarity: 3
Questions for Authors: Clarity: Line 85 describes the role of the evaluator, constructing few-shot prompts for a strategy "using itself and its execution result". Please clarify the interface of the evaluator (line 122), and specifically what the form of the few shot examples p_{st} are. Providing the prompt used by the evaluator would be a welcome change, as it is the main absent component in the current revision.
See also the clarify questions about the SolutionLLM baseline in the Weaknesses section of the review.
Let's have some discussion on how two rounds of validation are used before applying the strategy to the task instances of interest. Specifically, the execution phase grades a strategy on a small set of task examples $\mathcal{E}$ (the same examples used to generate the strategy), and then the evaluator E grades the best performing strategies on a larger validation set $\mathcal{V}$. Both rounds of validation are being used for filtering the set of strategies. In the paper, these are presented as separate phases of the StrategyLLM method with different type signatures and different descriptions. Clarity-wise, I fear this might obscure the fact that they represent two rounds of validation and filtering.
The final prompts used during inference include both the generated strategy as well as consistent few-shot prompts (consistent because they all employ the same strategy). Do you have an evaluation of the relative value of the presence of the strategy vs. the presence of the consistent few-shot prompts?
I would encourage you to write out what StrategyLLM is in contrast with StrategyLLM-SC and StrategyLLM-ZS; my assumption is that StrategyLLM employs only one selected strategy, but I don't see this stated explicitly.
Terminology: Why do you describe your approach as involving multiple LLM agents? To me it seems like it is simply an algorithm that calls a single LLM with multiple prompts orchestrated by an (ordinary) algorithm. I understand that there are separate components for G, X, O, E, and inference, and my question is specifically around why you describe each of G, X, O, and E as a separate "agent".
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper includes an adequate Limitation and Impact section, though it does not address societal impacts in particular. Per the checklist response, this is because the paper aims "to enhance the capabilities of large language models in common task-solving scenarios and does not introduce privacy, security, or fairness issues."
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful comments. We address each question as follows:
> A simpler approach that still tests the core idea of the paper would be to keep the strategy generator and inference steps, but skip the intermediate grading and optimization of the strategies. Would the method work without these extra prompting phases, or are they essential to the method?
We would like to emphasize that these phases are indeed essential to the method. Specifically, StrategyLLM relies on the strategy executor to apply the generated strategies to the provided examples. The execution results are necessary for formulating the strategy-based prompts. Additionally, the strategy evaluator plays a vital role in determining the most effective strategy for inference. Without these phases, we would be unable to generate the few-shot prompts necessary for each strategy or to make informed decisions regarding which strategy to employ.
> Is it the data-driven validation of the prompt (in this case, a strategy) that is the main advantage of StrategyLLM, or is it specifically that the strategies are not instance-specific? An experiment that compares with a prompt selection approach analogous to StrategyLLM's (i.e. generating multiple prompts, grading them on a validation set, refining them as needed, using a validation set to select between them, and finally employing SC or ZC to get a final answer), but generating e.g. few-shot prompts instead of strategies would yield a fairer comparison.
We appreciate your valuable suggestion. In response, we implement a baseline approach, which involves initially generating $n$ prompts, retaining $k$ of these for validation, and subsequently selecting the prompt with the highest validation accuracy for inference. The results of Meta-Llama-3-8B-Instruct on the CP, StrategyQA, and MA datasets are presented in the table below. We can observe that StrategyLLM significantly outperforms this baseline across all datasets, highlighting the benefits of integrating generalizable strategies.
| | CP | StrategyQA | MA | Avg |
|------|------|-----|-----|------|
| Baseline | 19.5 | 71.0 | 53.3 | 47.9 |
| StrategyLLM | 24.5 | 74.0 | 64.7 | **54.4** |
> Regarding the SolutionLLM baseline
A solution is a step-by-step reasoning path. In this baseline, we utilize LLMs to generate a solution for each example in the prompt, which is the same as the examples used in the CoT prompt. These examples, along with their respective solutions, are then combined to formulate a few-shot prompt for inference, analogous to the CoT approach.
> Regarding the evaluator and constructing the few-shot prompt
Unlike other agents, the strategy evaluator does not utilize a specific prompt. Instead, to assess the effectiveness of a strategy, the evaluator constructs a few-shot strategy-based prompt tailored to that particular strategy. An illustrative example of the strategy-based prompt is depicted in Figure 3 of our paper. This prompt is structured as follows: **Strategy + (Question_1, Solution_1, Answer_1) + … + (Question_n, Solution_n, Answer_n)**. The solutions, Solution_1 through Solution_n, are derived during the strategy execution phase by applying the strategy to the $n$ examples in the prompt. The strategy evaluator then infers using each prompt on the validation set and calculates its validation accuracy.
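A sketch of how such a strategy-based prompt might be assembled from the structure described above; the field labels and function name are our illustrative assumptions, not the paper's exact formatting:

```python
def build_strategy_prompt(strategy, solved_examples):
    """Concatenate: Strategy + (Question_1, Solution_1, Answer_1) + ... blocks."""
    parts = [f"Strategy:\n{strategy}"]
    for question, solution, answer in solved_examples:
        parts.append(f"Question: {question}\nSolution: {solution}\nAnswer: {answer}")
    return "\n\n".join(parts)

prompt = build_strategy_prompt(
    "1. Understand the problem statement ...",
    [("Q1", "S1", "A1"), ("Q2", "S2", "A2")],  # solutions from the execution phase
)
print(prompt.count("Question:"))  # 2
```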
> In the paper, the execution and validation are presented as separate phases of the StrategyLLM method with different type signatures and different descriptions. Clarity-wise, I fear this might obscure the fact that they represent two rounds of validation and filtering.
Thank you for your observation. We acknowledge that these phases represent two rounds of validation. However, they differ in their approach to obtaining solutions. During the execution phase, we directly prompt the LLM to generate a solution for a specific example by following the strategy. Next, in the validation phase, the strategy and the examples with their solutions are utilized to construct the strategy-based prompt. LLMs use this strategy-based prompt to infer solutions for each validation example. We will clarify these distinctions in the revised version of our paper to enhance understanding.
> Do you have an evaluation of the relative value of the presence of the strategy vs. the presence of the consistent few-shot prompts?
We conduct experiments to assess inference performance without the presence of the strategy, utilizing only the consistent few-shot prompt. The results for Meta-Llama-3-8B-Instruct are summarized in the table below. The findings indicate that omitting the strategy leads to a decrease in performance. This is because the consistent prompt merely illustrates how to apply the strategy to specific examples and may not encapsulate all the nuances of the strategy itself.
| | CP | StrategyQA | MA | Avg |
|-----|------|------|------|-------|
| StrategyLLM wo/ strategy | 24.5 | 66.0 | 63.3 | 51.3 |
| StrategyLLM | 24.5 | 74.0 | 64.7 | **54.4** |
> I would encourage you to write out what StrategyLLM is in contrast with StrategyLLM-SC and StrategyLLM-ZS; my assumption is that StrategyLLM employs only one selected strategy, but I don't see this stated explicitly.
Yes. StrategyLLM employs the most effective strategy identified during our experiments. We will state this clearly in the revised version of our paper.
> Why do you describe your approach as involving multiple LLM agents?
We follow the terminology used in existing popular agent frameworks such as BabyAGI (https://github.com/yoheinakajima/babyagi). Multiple agents collaborate to accomplish a task, with each agent assigned a subtask of the task. While the underlying LLMs for these agents can differ, our experiments utilize the same LLM across all agents to ensure a fair comparison.
---
Rebuttal Comment 1.1:
Comment: Thank you for your responses to my questions and for the new experimental results.
The results are helpful for clarifying the necessity and relative importance of the different facets of the StrategyLLM approach.
For the parts of your response that clarify ambiguities in the paper, I encourage you to incorporate those clarifications into the paper itself.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer WbXZ
Comment: Thank you for your thoughtful feedback and for taking the time to review our rebuttal. We are pleased to hear that the new results have helped clarify the necessity and importance of the various facets of our StrategyLLM approach. We appreciate your suggestions and will incorporate these clarifications into our revised manuscript to enhance the overall clarity and comprehensiveness of our work. | Summary: This paper proposes StrategyLLM, a pipeline for improving the few-shot reasoning performance. The main intuition is that when solutions to few-shot exemplars are inconsistent in terms of the reasoning process, the performance can be suboptimal compared to those with consistent solutions. Based on this intuition, StrategyLLM includes 3 prompting-based components: strategy generator, executor, and optimizer. The strategy generator generates task-level reasoning guidelines, the executor generates solutions to few-shot exemplars based on the generated strategies, and the optimizer improves the strategy when the accuracy on examples is lower than a threshold. Experiments on several reasoning benchmarks demonstrate that StrategyLLM outperforms other prompting methods, including CoT and plan-and-solve.
Strengths: 1. The empirical results are good compared to the baseline prompting methods, and the evaluation covers multiple LLMs.
2. There is a thorough breakdown analysis of different numbers of samples, different numbers of optimization steps, the gap to the oracle selection accuracy, etc.
Weaknesses: 1. The few-shot setting for StrategyLLM is unclear. Specifically, the paper says that a strategy is applied when it passes an accuracy threshold, which is <1. Does it mean that when using the few-shot prompt constructed by StrategyLLM, some examples have wrong solutions? If this is the case, it is surprising that StrategyLLM still outperforms the baselines.
2. Have the authors tried self-consistency with a single strategy? How much performance gain is from aggregating multiple strategies compared to sampling from a single strategy?
3. On MATH, it is unnatural to construct separate prompts per category. Have the authors tried constructing a single prompt for all MATH problems, and how does it compare to baselines?
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Please clarify the few-shot setting for StrategyLLM. Specifically, does the few-shot prompt constructed by StrategyLLM contain wrong solutions when the generated strategies do not pass all examples?
2. Have the authors tried self-consistency with a single strategy? How much performance gain is from aggregating multiple strategies compared to sampling from a single strategy?
3. Have the authors tried constructing a single prompt for all MATH problems, and how does it compare to baselines?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful comments. We address each question as follows:
> The paper says that a strategy is applied when it passes an accuracy threshold, which is <1. Does it mean that when using the few-shot prompt constructed by StrategyLLM, some examples have wrong solutions?
Yes, and we would like to clarify the following points: (1) In most scenarios, we set the accuracy threshold at 0.75 and the number of examples used is fewer than 8, so there can be at most 1 example with an incorrect solution in these scenarios. (2) These incorrect examples may not largely degrade the performance. As pointed out by existing research (e.g., [1]), the relevance of the solution to the question and the presentation of intermediate reasoning steps can be more critical than the correctness of the final answer.
[1] Towards Understanding Chain-of-Thought Prompting: An Empirical Study of What Matters. ACL 2023. https://aclanthology.org/2023.acl-long.153/
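The bound in point (1) follows from the threshold: a passing strategy must solve at least ceil(0.75 * n) of the n prompt examples, so for n < 8 at most one example can be wrong. A quick arithmetic check (the values of `threshold` and the example counts are those stated above):

```python
import math

threshold = 0.75
for n in range(4, 8):  # fewer than 8 prompt examples
    min_correct = math.ceil(threshold * n)  # minimum correct to pass the threshold
    max_wrong = n - min_correct
    print(n, max_wrong)  # max_wrong is 1 for every n in this range
```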
> Have the authors tried self-consistency with a single strategy? How much performance gain is from aggregating multiple strategies compared to sampling from a single strategy?
We have explored self-consistency using a single strategy. The performance of Meta-Llama-3-8B-Instruct on the CP, StrategyQA, and MA datasets is summarized in the table below. "StrategyLLM-Single-SC" refers to self-consistency applied with the best-performing strategy identified, which generates a set of solutions via temperature sampling to obtain multiple answers. The results indicate that StrategyLLM-Single-SC underperforms compared to StrategyLLM-SC and even StrategyLLM, highlighting the advantages of aggregating multiple strategies.
| | CP | StrategyQA | MA | Avg |
|---------------------------|------|------------|------|-------|
| StrategyLLM | 24.5 | 74.0 | 64.7 | 54.4 |
| StrategyLLM-Single-SC | 23.5 | 69.8 | 59.3 | 50.9 |
| StrategyLLM-SC | 25.0 | 74.0 | 66.0 | **55.0** |
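The aggregation step behind both self-consistency variants is a majority vote over the sampled answers; a minimal sketch (our illustration, with ties broken arbitrarily):

```python
from collections import Counter

def majority_vote(answers):
    """Return the most frequent final answer among the sampled solutions."""
    return Counter(answers).most_common(1)[0][0]

# e.g., five sampled answers, whether from one strategy or several:
print(majority_vote(["48", "240", "48", "48", "36"]))  # 48
```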
> Have the authors tried constructing a single prompt for all MATH problems, and how does it compare to baselines?
Thank you for your suggestion. We would like to clarify that constructing a distinct prompt for each subject may be more appropriate, as the strategy can be more specific to the subject and incorporate more of its domain-specific knowledge.
Additionally, to address your question, we conduct experiments using a single CoT prompt and a strategy-based prompt, both utilizing the same 7 examples, with each example drawn from a different subject in the MATH benchmark. Both the StrategyLLM and CoT prompts were evaluated on the test sets across all MATH subjects. The results for Meta-Llama-3-8B-Instruct are presented in the table below. The findings indicate that StrategyLLM consistently outperforms CoT across all subjects, demonstrating the advantages of our framework.
| | AL | PA | IA | CP | NT | GE | PC | Avg |
|---------------------------|------|------------|------|-------|------|-------|------|------|
| CoT | 36.5 | 44.5 | 4.5 | 23.0 | 18.0 | 16.0 | 13.5 | 22.3 |
| StrategyLLM | 39.0 | 48.5 | 8.5 | 24.0 | 22.5 | 18.0 | 17.5 | **25.4** |
---
Rebuttal Comment 1.1:
Comment: I thank the authors for adding additional evaluation in the rebuttal. I will keep my original rating. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
ProTransformer: Robustify Transformers via Plug-and-Play Paradigm | Accept (poster) | Summary: This paper proposes a robust attention mechanism to improve the resilience of transformer-based architectures. The method does not need additional training or finetuning with only four lines of code, which is simple yet effective. To show the effectiveness of the proposed mechanism, the paper conducts experiments across different tasks including topic classification, sentiment analysis, textual entailment, jailbreak attack, on different backbone architectures such as large language models. The results show promising resilience.
Strengths: 1. The method is simple yet effective. With only four lines of codes, it's able to provide good resilience performance across diverse tasks and model architectures.
2. The experiments are quite comprehensive. The study of robust transformers includes language modeling, image classification, and graph representation learning. For language modeling, the evaluations consider classic text attacks, prompt attacks, and jailbreak attacks.
3. The results demonstrate promising performance.
4. The plug-and-play nature of ProTransformer makes it practical for real-world applications, as it can be easily integrated into existing model architectures without significant overhead.
Weaknesses: 1. As the paper points out, one of the advantages of the method is its efficiency (the efficient Newton-ISLR algorithm), so there should be more discussion of the runtime of the proposed method across different architectures. The current version only includes the study in Tab.2 on a single model type. What is the influence on LLMs, as they are more computation-intensive? And how does the runtime compare with other robust designs?
2. For the results on image classification tasks, there is no comparison. How does the performance of the proposed method compare with other robust designs such as [*]?
[*] Robustifying Token Attention for Vision Transformers
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. For the main results in Tab.1, how many layers (K) in the transformer models are equipped with ProAttention?
2. For the results in Tab.2, what is the robustness performance when employing different numbers of ProAttention layers?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Please refer to weakness and questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your recognition of the efficiency and effectiveness of our method. We are glad to address your concerns and answer your questions with the following illustrations.
**W1**: As the paper points about one of the advantages of the method is the efficiency (efficient Newton-ISLR algorithm), there should be more discussions about the runtime of the proposed method across different architectures. The current version only includes the study in Table 2 on a single model type. What is the influence on LLMs as they are more computation intensive? And how is the runtime compared with other robust designs?
**Answer.** We include the average running time across different architectures. The results are consistent with the conclusion on the BERT backbone: our ProTransformer incurs only 1-2 times the inference time while achieving great robustness.
We want to point out that robustifying models is usually very time-consuming and labor-intensive. For example, adversarial training methods usually require substantial training time (e.g., 7-100 times the normal training time [1]), while our ProTransformer can directly plug the robust attention into a pre-trained backbone model without any training. Moreover, randomized smoothing methods usually require many inference passes (e.g., 1,000 times for the language domain [2] and 100,000 times for the vision domain [3]). Additionally, most robust architectures are quite complicated, require training the models from scratch, and exhibit limited robustness. Therefore, our ProTransformer, which only requires 1-2 times the inference time, is quite efficient and economical.
[1] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. "Towards deep learning models resistant to adversarial attacks." In International Conference on Learning Representations, 2018
[2] Lou, Qian, et al. "CR-UTP: Certified Robustness against Universal Text Perturbations." arXiv preprint arXiv:2406.01873 (2024).
[3] Cohen, Jeremy, Elan Rosenfeld, and Zico Kolter. "Certified adversarial robustness via randomized smoothing." International conference on machine learning. PMLR, 2019.
[4] Han, Xing, et al. "Designing robust transformers using robust kernel density estimation." Advances in Neural Information Processing Systems 36 (2024).
### Table 5: Average running time (ms) on AG-NEWS.
| # Layers (K) | 0 (vanilla) | 1 | 2 | 3 | 4 | 5 | 6 |
|--------------|-------------|-------|-------|-------|-------|-------|-------|
| ProBERT | 6.14 | 9.04 | 11.67 | 14.34 | 17.33 | 19.89 | 21.87 |
| ProALBERT | 5.63 | 8.70 | 13.03 | 16.10 | 19.37 | 22.77 | 23.27 |
| ProDistilBERT| 2.17 | 4.16 | 6.88 | 8.38 | 9.17 | 11.95 | 13.19 |
| ProRoBERTa | 4.95 | 3.10 | 11.86 | 14.93 | 18.40 | 20.89 | 23.04 |
**W2**: For the results on image classification tasks, there is no comparison. How is the performance of the proposed method compared with other robust designs such as [*]?
[*] Robustifying Token Attention for Vision Transformers
**Answer.** We validate the advantage of our ProTransformer over RVT [1] and TAP&ADL [2] in the following table.
### Table 6: Adversarial robustness under PGD.
| Model | Clean | PGD |
|---------------------|--------|--------|
| ViT | 98.74 | 0.26 |
| RVT [1] | 97.65 | 18.71 |
| FAN-B-Hybrid +TAP&ADL [2] | 98.45 | 22.23 |
| Pro-ViT (Ours) | 98.40 | 33.40 |
[1] Towards Robust Vision Transformer
[2] Robustifying Token Attention for Vision Transformers
**Q1**: For the main results in Tab.1, how many layers (K) in the transformer models are equipped with ProAttention?
**Answer.** We set K = 3, i.e., ProAttention is equipped in three transformer layers.
**Q2**: For the results in Tab.2, what is the robustness performance by employing different numbers of ProAttention layers?
**Answer**: We have already included the ablation study on the number of layers in Appendix G.6.2. The results validate that our algorithm converges well within 3 steps.
| Model | Clean | Textfooler |
|---------------------|-------|------------|
| Pro-BERT-MCP K = 5 | 93.6 | 39.2 |
| Pro-BERT-MCP K = 4 | 93.7 | 37.2 |
| Pro-BERT-MCP K = 3 | 93.2 | 37.8 |
| Pro-BERT-MCP K = 2 | 93.7 | 33.9 |
| Pro-BERT-MCP K = 1 | 93.9 | 26.8 |
---
Rebuttal Comment 1.1:
Comment: Thanks for the response from the authors. The response has answered my questions and I will maintain my rating of acceptance for the paper. | Summary: This paper proposed a robust transformer architecture named ProTransformer through a plug-and-play paradigm without further training or fine-tuning. The authors include robust token estimators in the self-attention blocks which are more resilient to the dominating impact of input tokens, and apply Newton-ISLR to approximate these estimators. The comprehensive experiments for various tasks, modalities, attacks, and backbones show that ProTransformer can significantly improve the robustness of transformers without compromising benign performance.
Strengths: 1. This paper is well-written with clear descriptions of their method with both motivation/intuition analysis and theoretical analysis.
2. Comprehensive experiments are conducted to prove their claims of the enhanced robustness of Transformers. This includes robustness analysis for both the traditional adversarial attacks within the word or character level and the recent jailbreak attacks for large language models. Detailed ablation studies and transformers for alternative modalities are also considered.
3. The ProTransformer is simple to use (Plug-and-Play) and has significantly improved the robustness. Moreover, ProTransformers can not only work on language modeling but also show its generalization for transformer-based architectures in vision and graph domains.
Weaknesses: 1. Lack of analysis of clean performance in jailbreak attacks. I want to know if ProTransformer would hurt the generation quality of LLMs.
2. Less discussion about the semantic level attacks and robustness of corruption. For example, the AdvGlue Benchmark (https://adversarialglue.github.io/) for corruption robustness and PAIR[1] for semantic-level jailbreak attacks.
3. Lack of comparison with baseline defense methods.
[1] Chao, Patrick, Alexander Robey, Edgar Dobriban, Hamed Hassani, George J. Pappas, and Eric Wong. Jailbreaking black box large language models in twenty queries. arXiv preprint arXiv:2310.08419 (2023).
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. When using white-box adaptive attacks like PGD and GCG, do you compute the gradient through your ProAttention blocks?
2. For the jailbreak attack results in Figure 6, why do you use Huber instead of MCP attack constraints? As shown in Section 4.2.1, MCP shows the best performance.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations of the paper are not adequately discussed. The cost of the evaluation with ProTransformer should be included in the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your recognition of the novelty and effectiveness of our method. We are glad to address your concerns and answer your questions with the following responses.
**W1**: Lack of analysis of clean performance in jailbreak attacks. I want to know if ProTransformer would hurt the generation quality of LLMs.
**Answer.** In ProTransformer, using the Huber loss barely hurts the generation quality, while using L1 or MCP slightly sacrifices generation performance. We construct 100 synthetic texts (including prompts and reference texts) and evaluate the generation with the BLEU score and a series of ROUGE scores, using vicuna-7b as the backbone. The following results verify that the Huber model keeps scores similar to L2 (vanilla), while L1 and MCP may sacrifice a little quality.
### Table 2: BLEU and ROUGE score on generation quality.
| Penalty in ProTransformer | L2 (Vanilla) | Huber | L1 | MCP |
|----------------------------|--------------|-------|------|-------|
| BLEU | 0.151 | 0.154 | 0.145| 0.136 |
| ROUGE-1 | 0.385 | 0.390 | 0.368| 0.375 |
| ROUGE-2 | 0.379 | 0.353 | 0.344| 0.314 |
| ROUGE-L | 0.396 | 0.390 | 0.385| 0.368 |
**W2**: Less discussion about the semantic level attacks and robustness of corruption. For example, the AdvGlue Benchmark ([https://adversarialglue.github.io/](https://adversarialglue.github.io/)) for corruption robustness and PAIR[1] for semantic-level jailbreak attacks.
[1] Chao, Patrick, Alexander Robey, Edgar Dobriban, Hamed Hassani, George J. Pappas, and Eric Wong. "Jailbreaking black box large language models in twenty queries." _arXiv preprint arXiv:2310.08419 (2023)_.
**Answer.** For the AdvGlue Benchmark, we have already included evaluations under the typo-based TextBugger and the embedding-similarity-based TextFooler. We add an experiment with the context-aware BERT-Attack in the following table; the results show that our Pro-BERT outperforms all other baselines.
### Table 3: ASR (%) under BERT-Attack.
| Method | Aua% | ASR% |
|---------|------|------|
| BERT | 5.8 | 93.7 |
| FreeLB | 21.7 | 76.6 |
| PGD | 21.0 | 77.6 |
| MixADA | 7.6 | 91.8 |
| TA-VAT | 19.2 | 79.4 |
| SAFER | 38.5 | 58.8 |
| RanMASK | 36.0 | 58.4 |
| Pro-BERT (Ours) | 42.7 | 54.4 |
Additionally, we also evaluate the advantage of our ProTransformer under semantic-level jailbreak attack PAIR in the following table.
### Table 4: ASR (%) under PAIR Jailbreaks.
| Model | ASR(%) |
|--------------------------|--------|
| Vicuna | 98.7 |
| Vicuna+SmoothLLM | 51.4 |
| Pro-Vicuna | 48.9 |
| Pro-Vicuna+SmoothLLM | 41.6 |
**W3**: Lack of comparison with baseline defense methods.
**Answer.** We have in fact included comparisons with baseline defense methods for each domain:
- Language domain: backbone baselines ALBERT, DistilBERT, RoBERTa and BERT, and popular defenses including MixADA, PGD-Adv, FreeLB, TA-VAT and SmoothLLM.
- Vision domain: We include DeiT, ConViT, BEiT, Swin, and ViT.
- Graph domain: GCN, GAT, GNNGuard, RGCN, GRAND, ProGNN, Jaccard-GCN and Soft-Median.
**Q1**: When using white-box adaptive attacks like PGD and GCG, do you compute the gradient through your ProAttention blocks?
**Answer.** Yes. We compute the gradient through the ProAttention blocks.
**Q2**: For the jailbreak attack results in Figure 6, why do you use Huber instead of MCP attack constraints? As shown in Section 4.2.1, MCP shows the best performance.
**Answer.** In Section 4.2.1, the task is classification, so both the Huber and MCP losses do not sacrifice much clean performance. In jailbreak defense, however, we evaluate the model by its generated response. In this scenario, the Huber loss applies an $L_2$ penalty in the low-value region, which enables the model to maintain better generation quality, whereas MCP hurts the generation quality, resulting in non-semantic answers.
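For intuition, here is a small sketch comparing standard textbook forms of these penalties (the exact parameterization used in the paper is not restated in this rebuttal, so the formulas below are an assumption). It shows why Huber matches L2 on small residues while MCP saturates for large ones:

```python
# Hedged sketch: standard textbook Huber and MCP penalties (parameter values
# delta and gamma are illustrative assumptions, not the paper's settings).

def l2(r):
    return 0.5 * r ** 2

def huber(r, delta=1.0):
    a = abs(r)
    # quadratic near zero (like L2), linear beyond delta
    return 0.5 * a ** 2 if a <= delta else delta * a - 0.5 * delta ** 2

def mcp(r, gamma=2.0):
    a = abs(r)
    # grows for small residues, then flattens to a constant beyond gamma
    return a - a ** 2 / (2 * gamma) if a <= gamma else gamma / 2

# Small residues: Huber coincides with L2, so clean behavior is preserved.
assert huber(0.5) == l2(0.5)
# Large residues: MCP is constant (bounded influence), Huber grows only linearly.
assert mcp(10.0) == mcp(100.0)
assert huber(10.0) < l2(10.0)
```

This mirrors the answer above: Huber's quadratic low-value region keeps generation quality close to vanilla attention, while MCP's saturation more aggressively suppresses (and can distort) token influence.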
**L1**: The limitations of the paper are not adequately discussed. The cost of the evaluation with ProTransformer should be included in the limitations.
**Answer.** Please refer to the **global response for the concern of limitations of ProTransformer.**
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. I would maintain my score. | Summary: This paper proposes an interpretable robust attention layer to robustify transformer architecture via a plug-and-play paradigm.
Strengths: The proposed method is practical and can be plugged into the given transformer as a plug-and-play layer.
The experiments are robust and conclusive, signifying the efficacy and satisfactory performance of the proposed method.
Weaknesses: It seems that the complexity of the proposed ProAttention is still greater than that of linear attention [1] [2] [3].
[1] Han, Dongchen, et al. "Flatten transformer: Vision transformer using focused linear attention." Proceedings of the IEEE/CVF international conference on computer vision. 2023.
[2] Katharopoulos, Angelos, et al. "Transformers are rnns: Fast autoregressive transformers with linear attention." International conference on machine learning. PMLR, 2020.
[3] Zhu, Lianghui, et al. "DiG: Scalable and Efficient Diffusion Models with Gated Linear Attention." arXiv preprint arXiv:2405.18428 (2024).
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. Could the authors compare the proposed ProAttention with linear attention under the same experimental setting?
Confidence: 1
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The comparison with different attention mechanisms is missing.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your recognition of the efficiency and effectiveness of our method. We are glad to address your concerns and answer your questions with the following responses.
**W1**: It seems that the complexity of the proposed ProAttention is still greater than that of linear attention [1] [2] [3].
[1] Han, Dongchen, et al. "Flatten transformer: Vision transformer using focused linear attention." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.
[2] Katharopoulos, Angelos, et al. "Transformers are rnns: Fast autoregressive transformers with linear attention." International conference on machine learning. PMLR, 2020.
[3] Zhu, Lianghui, et al. "DiG: Scalable and Efficient Diffusion Models with Gated Linear Attention." arXiv preprint arXiv:2405.18428 (2024).
**Answer**:
Thanks for your comments. We would like to emphasize that while we aim to develop an algorithm with acceptable efficiency, efficiency is not the focus of this work, so linear attention is not directly relevant in this context. Instead, our ProAttention targets improving the robustness of transformers. Linear attention aims to improve the efficiency of transformers but remains vulnerable to adversarial attacks, as verified in the next answer. The two contributions are orthogonal.
**Q1**: Could the authors compare the proposed ProAttention with linear attention under the same experimental setting?
**L1**: The comparison with different attention mechanisms is missing.
**Answer:**
Thanks for your comments. We compare different attention mechanisms (vanilla attention, Linear attention [1], Longformer attention [2], and our ProAttention) on IMDB across various attacks in the following table. Our method shows better robustness compared to other attention mechanisms including the linear one.
[1] Katharopoulos, Angelos, et al. "Transformers are rnns: Fast autoregressive transformers with linear attention." International conference on machine learning. PMLR, 2020.
[2] Beltagy, Iz, Matthew E. Peters, and Arman Cohan. "Longformer: The long-document transformer." arXiv preprint arXiv:2004.05150 (2020).
### Table 1: The results of sentiment analysis on IMDB.
| Model | Clean | Textfooler | TextBugger | DeepWordBug | PWWS |
|-------------------|-------|------------|------------|-------------|------|
| Vanilla attention | 92.3 | 11.8 | 11.3 | 32.8 | 26.4 |
| Linear attention | 89.4 | 6.9 | 5.4 | 20.7 | 17.7 |
| Longformer attention | 87.5 | 6.4 | 7.9 | 28.5 | 23.2 |
| ProAttention (L1) (Ours) | 93.3 | 24.6 | 13.0 | 36.0 | 32.7 |
| ProAttention (Huber) (Ours) | 93.0 | 24.8 | 13.4 | 36.9 | 31.5 |
| ProAttention (MCP) (Ours) | 93.5 | 22.1 | 44.6 | 55.5 | 56.3 | | Summary: In this paper, the authors intend to robustify transformer architectures against adversarial attacks to enhance their resilience across various machine learning tasks. Specifically, they propose the ProAttention mechanism. They use a novel interpretation of the self-attention mechanism as a weighted least squares (WLS) estimator, along with robust WLS token estimators and an efficient Newton-IRLS algorithm with convergence guarantees, which enhances the robustness of transformers without additional training or fine-tuning. Further, the authors develop a plug-and-play layer, ProAttention, to be integrated into existing transformers. They demonstrate several experiments to show the performance of this architecture in improving robustness across language modeling, image classification, and graph representation learning.
Strengths: 1. The idea of ProAttention is novel. Most existing work focuses on either architecture-agnostic defenses or robustness improvements through adversarial training, but these approaches often fail to generalize across different tasks or require substantial computational resources. In this paper, the authors propose a plug-and-play paradigm to enhance robustness against adversarial attacks without additional training or fine-tuning.
2. The ProAttention framework is reasonable, and the adaptation of the Newton-IRLS algorithm makes it efficient and effective. The framework leverages the robust weighted least squares estimator to mitigate the effects of adversarial attacks.
3. The experiments are extensive, and sufficient hyper-parameters are provided for reproduction. The authors detail their experimental setup and provide comprehensive ablation studies to demonstrate the robustness and efficiency of their approach under various attack scenarios.
Weaknesses: 1. The paper does not include error margins for experiments, especially for ablation studies in Section 4. The paper provides extensive experimental results; however, it does not include error bars or confidence intervals to indicate the variability or statistical significance of the results.
2. This paper could benefit from better presentation and clearer structure. For example, the captions for the figures, especially Fig. 1 and 2, are too simple and lack the necessary explanations. This may lead to misunderstandings about the designs or results.
3. I do not see a discussion of the method's limitations, despite the title of the final section mentioning “Limitation.”
4. Although the paper proposes enhancing transformer robustness through the ProAttention mechanism, it needs a thorough explanation and analysis of why this method performs better under various attacks. I suggest that the authors further discuss this aspect in more detail
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The paper claims that the Newton-IRLS algorithm converges efficiently within three steps. However, can the authors provide a more detailed analysis or proof of this convergence rate, especially for larger models or more complex datasets?
2. The authors show the effectiveness of ProAttention in enhancing robustness across language, vision, and graph domains. However, each domain's specific implementation details and hyper-parameter tuning strategies seem different. Can the authors clarify how the ProAttention framework can be uniformly applied across these diverse domains without extensive domain-specific modifications?
3. The ProAttention framework relies on the weighted least squares (WLS) estimator and its robust variants. Can authors provide a deeper theoretical justification for choosing WLS and its robustness under adversarial attacks? Specifically, how do the assumptions of WLS hold up under different types of adversarial perturbations, and are there scenarios where the WLS assumptions might fail, leading to reduced robustness?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Despite being designed for efficiency, the ProAttention framework introduces additional computational overhead during inference. The paper mentions that the ProAttention layer requires 1-2 times more inference time, which may limit its deployment in real-time applications or on devices with limited computational resources.
The ProAttention framework relies on WLS estimators and their robust variants. While the paper provides a theoretical justification for choosing WLS, its effectiveness under different adversarial perturbations needs further investigation. In some scenarios, the assumptions of WLS may fail and reduce robustness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your recognition of the efficiency and effectiveness of our method. We are glad to address your concerns and answer your questions with the following responses.
**W1**: We want to clarify that the proposed ProAttention is plugged into fixed pre-trained models without fine-tuning, hence there is no randomness from training optimization. Also, the attack strategies strictly follow deterministic procedures. Besides, our evaluation protocol is common in language-model robustness research and is consistent with existing benchmarks [1] and text-attack works.
[1] Searching for an effective defender: Benchmarking defense against adversarial word substitution, 2021.
**W2**: Thanks for pointing it out. We have referred to the figures and included the explanations in the main text. We will include necessary captions for the figures in the revised version.
- For Figure 1, we present three attack mechanisms to manipulate and harm the language models:
1. Classic text attacks modify the input content.
2. Prompt attacks perturb the prompt template.
3. Jailbreaks add adversarial non-semantic suffixes.
- For Figure 2, we show the schematic overview of the ProTransformer: We design our ProTransformer by plugging ProAttention into transformers, replacing the vanilla attention mechanism without additional training. Furthermore, our ProTransformer is versatile and can be applied across various domains, including language, image, and graph processing.
**W3**: Please refer to the **global response for the concern of limitations of ProTransformer**.
**W4**: We have provided the detailed motivations and explanations for the robustness in Section 3. We are glad to summarize them again here:
1. In Section 3.1, we highlight the vulnerability of vanilla attention due to the quadratic impact of the L2 penalty in the WLS (L2) estimator. In particular, an attacker can easily manipulate the input tokens to dominate the output because of the quadratic penalty on the difference between input and output tokens.
2. In Section 3.2 and 3.3, we demonstrate that robust penalties, such as L1 or MCP, can mitigate the quadratic impact to a linear or constant impact. Specifically, when some tokens are attacked and introduce large residues, our ProAttention can adaptively downweight their attention weights to mitigate their impact.
To aid understanding, it is helpful to consider the special case where all attention weights are equal: median estimation (the L1 case) is notably more robust than mean estimation (the quadratic case), because the median is the minimizer of the sum of absolute deviations (the L1 norm), which reduces the impact of outliers.
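This equal-weight special case can be illustrated numerically (the token values below are arbitrary, chosen only for illustration):

```python
# Equal attention weights: the L2 estimate is the mean, the L1 estimate is
# the median. One adversarially perturbed token drags the mean far away,
# while the median barely moves.
tokens = [1.0, 1.1, 0.9, 1.0, 1.05]
attacked = tokens[:-1] + [100.0]   # one token replaced by a large outlier

mean = lambda xs: sum(xs) / len(xs)
median = lambda xs: sorted(xs)[len(xs) // 2]

assert abs(mean(attacked) - mean(tokens)) > 15       # mean is dominated
assert abs(median(attacked) - median(tokens)) < 0.1  # median is stable
```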
**Q1**: We would like to clarify that, in general, there is no way to prove that an optimization algorithm converges in exactly three steps, especially for the challenging non-convex and non-smooth problems studied in this paper. In addition, there is no clear connection between the convergence rate of the neural network layers and the size of models or datasets. What we provide in the paper is a theoretical guarantee that each iteration step decreases the objective value, so running more steps will at least not hurt. We then provide empirical evaluations showing that it indeed converges within a few steps, which makes the algorithm practical. The exact convergence rate is neither the focus nor the primary contribution of this work, but it can be an exciting direction for future research.
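The per-step descent guarantee can be illustrated with a generic IRLS sketch for the 1-D L1 location problem; this is a simplified stand-in under the equal-weight special case, not the paper's actual Newton-IRLS update:

```python
# Generic IRLS sketch (illustrative, not the paper's Newton-IRLS): minimize
# sum_i |x_i - mu| by iteratively reweighted least squares. Each reweighting
# step is a majorize-minimize step, so the objective never increases.
EPS = 1e-8

def l1_objective(mu, xs):
    return sum(abs(x - mu) for x in xs)

def irls_step(mu, xs):
    # weight w_i = 1 / |x_i - mu| majorizes the L1 penalty around mu
    ws = [1.0 / max(abs(x - mu), EPS) for x in xs]
    return sum(w * x for w, x in zip(ws, xs)) / sum(ws)

xs = [0.9, 1.0, 1.05, 1.1, 100.0]   # one outlier token value
mu = sum(xs) / len(xs)              # start from the mean (L2 estimate)
objs = [l1_objective(mu, xs)]
for _ in range(5):
    mu = irls_step(mu, xs)
    objs.append(l1_objective(mu, xs))

# each step decreases (never increases) the objective ...
assert all(a >= b - 1e-9 for a, b in zip(objs, objs[1:]))
# ... and a few steps land near the median (1.05), far from the mean (~20.8)
assert abs(mu - 1.05) < 0.2
```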
**Q2**:
It is common in adversarial defenses to control the balance between clean and robust performance through key hyperparameters. For instance, the well-known TRADES method [1] needs to carefully tune the combination of clean loss and robust loss to achieve the best performance, even across different attack budgets for the same model architecture and dataset, let alone the entirely different domains evaluated in this paper. Since we plug our robust layers into the given models without training, it is expected that some reasonable hyperparameter tuning is necessary to obtain optimal performance.
We will provide some general guidance in the revision: (1) Huber loss better maintains clean performance so it typically works better when the attack is weak; (2) MCP loss works better when the attack is strong; (3) L1 loss is a special case of Huber loss; (4) the proposed Huber-MCP penalty can flexibly reduce to different options such as Huber, MCP, and L1 based on the values of hyperparameters δ and γ as shown in Figure 3 in the paper.
As future work, we could make those hyperparameters learnable through fine-tuning or adversarial training, which would avoid hyperparameter search but would also incur more training cost.
[1] Zhang, Hongyang, et al. "Theoretically principled trade-off between robustness and accuracy." (ICML, 2019)
**Q3**:
We want to clarify that WLS is not an assumption; the vanilla attention is exactly equivalent to the WLS estimator (L2) under our new interpretation. Our robust ProAttention is an improved variant.
**L1**: Please refer to the **global response for concern of efficiency of ProTransformer**
**L2**: Please refer to Q3.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I look forward to seeing the updated manuscript.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback. We have carefully incorporated your suggestions into our revision, but the updated manuscript cannot be uploaded at this moment. We would greatly appreciate it if you could consider our clarifications in the rebuttal. | Rebuttal 1:
Rebuttal: Thanks to all the reviewers for recognizing the novelty and effectiveness of our method. Since several reviewers share a concern about the efficiency of ProTransformer, we elaborate our response as follows.
## **Response for the concern of efficiency of ProTransformer**
We want to point out that robustifying models is usually very time-consuming and labor-intensive. For example, adversarial training methods usually require substantial training time (e.g., 7-100 times the normal training time [1]), while our ProTransformer can directly plug the robust attention into a pre-trained backbone model without any training. Moreover, randomized smoothing methods usually require a lot of inference time (e.g., 1,000 times for the language domain [2] and 100,000 times for the vision domain [3]). Additionally, most robust architectures [4] are quite complicated, require training the models from scratch, and exhibit limited robustness. Therefore, our ProTransformer, which only requires 1-2 times more inference time, is quite efficient and economical. Certainly, there is still some space to further improve the efficiency of ProAttention, which can be the focus of future work.
[1] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. "Towards deep learning models resistant to adversarial attacks." In International Conference on Learning Representations, 2018.
[2] Lou, Qian, et al. "CR-UTP: Certified Robustness against Universal Text Perturbations." arXiv preprint arXiv:2406.01873 (2024).
[3] Cohen, Jeremy, Elan Rosenfeld, and Zico Kolter. "Certified adversarial robustness via randomized smoothing." International Conference on Machine Learning. PMLR, 2019.
[4] Han, Xing, et al. "Designing robust transformers using robust kernel density estimation." Advances in Neural Information Processing Systems 36 (2024).
## **Response for the concern of limitations of ProTransformer**
Our limitations are listed as follows:
- Despite the acceptable complexity of our ProTransformer, there is still potential to further improve the efficiency of our models.
- We mainly claim and validate the effectiveness of our model under the plug-and-play paradigm. We are excited about the future of the proposed ProTransformer architecture and hope to see its full potential with training or fine-tuning on large models. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null
Ad Auctions for LLMs via Retrieval Augmented Generation | Accept (poster) | Summary: The paper considers the integration of ads into large language model (LLM) generation, as well as the design of a mechanism for ad allocation and pricing that comes with this integration. The paper introduces the notion of a "segment auction", where the output discourse is broken down into segments. Following the retrieval augmented generation (RAG) framework, the auction selects a winner for each segment based on bids and relevance, and provides the winning ad to the LLM as part of the prompt to generate a segment with the ad incorporated. The paper provides theoretical analysis of the desirable properties of the segment auction, as well as empirical analysis in a synthetic setting.
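The per-segment allocation described in this summary (winner selection from bids and relevance) could be sketched roughly as follows; this is a hypothetical single-segment instantiation using bid-times-relevance scoring with a quality-adjusted second price, not necessarily the paper's exact (e.g., randomized) mechanism:

```python
# Hypothetical sketch of a relevance-weighted second-price selection for one
# segment; the paper's actual segment auction may differ in allocation and
# pricing details.

def run_segment_auction(bids, relevances):
    """bids/relevances: dict ad_id -> float. Returns (winner, price)."""
    score = {a: bids[a] * relevances[a] for a in bids}
    ranked = sorted(score, key=score.get, reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    # winner pays the smallest bid that would still have won
    price = score[runner_up] / relevances[winner]
    return winner, price

winner, price = run_segment_auction(
    bids={"ad_a": 2.0, "ad_b": 3.0, "ad_c": 1.0},
    relevances={"ad_a": 0.9, "ad_b": 0.4, "ad_c": 0.8},
)
assert winner == "ad_a"   # score 2.0 * 0.9 = 1.8 beats 3.0 * 0.4 = 1.2
assert price < 2.0        # second price never exceeds the winner's own bid
```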
Strengths: - The paper does a good job connecting the ideas to theoretical concepts, and provides solid theoretical analysis
- The paper is well-motivated, tackling a novel problem that has great practical implications
Weaknesses: - The paper relies on the strong assumption that the retrieval component is calibrated to the click-through rate, which forms the basis of most theoretical results provided. In the experiment section, the relevance measure is instead estimated by a model from the sentence-transformers library, which leaves a gap between the theoretical guarantees and the empirical approach.
- Under the general segment auction exposited in Appendix C, where each relevance measure additionally depends on all previous segments and allocations, the assumption of calibrated relevance seems even harder to achieve. Without this assumption and its associated theoretical properties, the paper presents only an incremental technical contribution, since running a second-price auction and incorporating the winner into the LLM prompt lacks technical novelty.
- The experiments section in the paper is not compelling enough.
- The five types of auctions considered in the Experiment section, which vary in whether replacement or the relevance measure is used, are not well-motivated and appear disconnected from the earlier theoretical section, which focuses only on the segment auction with replacement. The authors do not provide compelling reasoning for considering these five variants, and the result analysis does not provide a strong case or intuition for which should be chosen (see lines 273-287); the writing and analysis of these parts could be improved.
- Both the output-quality numbers and the qualitative example in Figure 4 suggest that adding ads to the generation quite notably impairs the quality of the generated content. Particularly worrisome is the fact that in both the single-allocation and multi-allocation examples (as well as in the examples in Appendix G.2), as soon as the paragraph starts pivoting to the ads *in the first sentence*, it never goes back to providing *any* additional useful information toward answering the original question *"Can you suggests books similar to `to kill a mocking bird'"*. This empirically suggests that the presence of ads in an earlier segment may impair the quality of content generated in the following segments.
- The experiments assume that each segment is one sentence and add an ad to every segment. This is arguably too extreme to be deployed; a more realistic setting might integrate only a few ads into an entire page of results, which suggests segments longer than a single sentence. Additional experimental results should demonstrate that the proposed mechanism works as well with longer segments.
- When the number and categories of the advertisers are not diverse enough, augmenting with an irrelevant ad appears to come at a significant cost to output quality. The bid system also introduces the potential problem of misleading and corrupting the accuracy of the response, a new problem not faced by traditional ad auctions. This should be either empirically evaluated or discussed in more detail (see the Limitations part).
- Typo in line 140: both the per-click and per-impression prices are denoted by the same symbol.
Technical Quality: 2
Clarity: 2
Questions for Authors: - How will the relevance measure $q_i$ be obtained if the mechanism is put into practice, and will there be ways of ensuring accuracy and calibration?
- The qualitative empirical results (e.g., Figure 4) show that the generated output stops providing useful information to the user once the ad appears in the first sentence, and all subsequent generations are just rhetorical transitions to another ad. Is this effect prevalent? Are there additional qualitative or quantitative results showing the effects of ads on subsequent segments and potential solutions to preserve an informative output?
- Current pricing uses the VCG mechanism for incentive compatibility. In ad auctions, GSP and first-price auctions are frequently used. Can these alternative pricing schemes be easily implemented under the current scheme?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 1
Limitations: The authors mention the inherent trade-off between revenue and quality that is evident from the empirical observations. Unlike traditional ad auctions, under LLM generation, where later text depends on prior generations, integrating additional ads leaves open potential room for attack and for injecting biased information through the type of the ad itself. The authors could provide a more detailed exposition of the potential social impact of such a mechanism.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer for thoughtful and detailed comments.
> The paper uses strong assumptions on the retrieval component
We agree that implementing a performant, calibrated retrieval component is a challenging engineering task in practice. However, it is well studied in the literature. We refer to McMahan et al. [1] and Graepel et al. [2] for descriptions of actual click-through rate estimation systems at Google and Microsoft. Both papers have sections on calibration using isotonic regression. We will cite these works and add some more discussion of calibration to the paper based on the reviewer’s comments.
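To make the calibration step concrete, below is a minimal sketch of our own (not the systems in [1, 2]) of the pool-adjacent-violators (PAV) procedure underlying isotonic-regression calibration; real click-through-rate systems add binning, sample weighting, and features, all omitted here. The function name and the toy click data are hypothetical.

```python
def isotonic_fit(y):
    """Non-decreasing least-squares fit of a sequence via pool-adjacent-violators."""
    blocks = []  # each block holds [sum_of_values, count]
    for v in y:
        blocks.append([float(v), 1])
        # Merge adjacent blocks while the monotonicity constraint is violated.
        while len(blocks) > 1 and blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    fitted = []
    for s, c in blocks:
        fitted.extend([s / c] * c)
    return fitted

# Click outcomes sorted by predicted score; the monotone fit acts as a calibrated CTR curve.
calibrated = isotonic_fit([1, 0, 1, 1])  # [0.5, 0.5, 1.0, 1.0]
```

In practice the input would be click labels ordered by the raw model score, so the fitted step function maps raw scores to calibrated probabilities.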
For our experimental evaluation, we view retrieval and calibration as modular components, where various implementations could be used in practice. Our evaluation uses embedding similarity as a measure of similarity because RAG commonly uses vector embeddings for retrieval. Vector embeddings capture semantic information and enable efficient similarity search, but as mentioned above, other options are possible. As we did not conduct a real-world experiment with an implemented system, our experiments are bound to use a proxy measure. We also kindly refer to simultaneous work by Dubey et al. [3] that poses the same assumption on the calibrated relevance (in their words, prominence).
1. McMahan et al.: Ad Click Prediction: a View from the Trenches, KDD’13
2. Graepel et al.: Web-Scale Bayesian Click-Through Rate Prediction for Sponsored Search Advertising in Microsoft's Bing Search Engine, ICML'10
3. Dubey et al.: Auctions with LLM Summaries, KDD'24
> Under the general segment auction exposited in Appendix C
Modern click-through rate estimation and retrieval systems take many contextual features to make their predictions. We refer again to papers [1, 2] cited above. In our application, prior generated output would be included as contextual features. We agree this is important and nontrivial to handle in practice, but again view this aspect as orthogonal/modular to the question of auction design tackled in our paper.
> The five types of auctions considered in the Experiment section
Thank you for raising this point. In our camera-ready draft, we will elaborate on our choice of baselines and how our results highlight trade-offs between mechanisms in terms of output quality, revenue, and social welfare. Here’s a brief intuition. We refer to L#242-250 regarding different baselines and L#271-287 for interpreting the results.
In summary: (1) Naive II generates the highest revenue but results in the lowest relevance and social welfare, potentially incorporating unrelated ads and diminishing user experience; (2) Segment auctions with replacement achieve the best social welfare and relevance but lead to lower minimum social welfare; (3) Multi-allocation produces competitive results with segment auctions without replacement but sacrifices revenue.
Tables 2 and 7 evaluate LLM output under different mechanisms. We measure relevance to the original output (without ads). Multi-allocation mechanisms generally produce better outputs, as longer segments with more ads lead to more coherent results. Single-allocation auctions with shorter segments result in lower quality. Naive I and II lead to the lowest quality due to the inclusion of irrelevant ads. Segment auctions without replacement show marginal improvement over those allowing replacement.
These results highlight the trade-offs between different criteria, guiding the auctioneer to select the appropriate mechanism or a combination based on their requirements.
> Figure 4 suggests that the addition of ads impairs..
We kindly refer the reviewer to the global rebuttal. As seen in the attached pdf, in the 3rd paragraph of the first sample and, in particular, the 2nd paragraph of the second sample, the output still answers the query. These can be further improved with more prompt engineering.
> Augmenting an irrelevant ad appears to come with a significant cost to the quality of the output
Like standard search auctions, target ads can be prefiltered to exclude irrelevant ones. Most ad platforms use prefiltering to reduce computational load and maintain user satisfaction. Therefore, we assume all ads are relevant in L#31.
> Privacy / reliability issue
We agree that integrating ads without privacy or robustness issues is crucial. However, our main focus is to introduce the RAG integration into the LLM ad system, validate its feasibility, and derive theoretical and empirical evidence. Our paper doesn’t address every practical issue. Similar perspectives are taken by Duetting et al., Dubey et al., and Soumalias et al. For broader technical concerns like privacy and reliability in LLM ad systems, refer to Feizi et al.'s survey paper.
1. Dubey et al.: Auctions with LLM Summaries, KDD’24
2. Duetting et al.: Mechanism Design for Large Language Model, WWW’24
3. Soumalias et al.: Truthful Aggregation of LLMs with an Application to Online Advertising
4. Feizi et al: Online Advertisements with LLMs: Opportunities and Challenges
> How will the relevance measure be obtained if the mechanism is put into practice?
We refer to the cited papers on CTR prediction for calibration. While we use vector similarity as a proof of concept, more research is needed on designing effective metrics to measure relevance between queries and ads.
> The qualitative results show that the generated output stops providing useful information.
Please see our response above.
> Can these alternative pricing schemes can be easily implemented?
As can be seen from our randomized segment auction, any pricing scheme can be implemented from our idea: randomly perturb $q_i \cdot b_i$, and apply any mechanism as desired. For instance, perturbing the bids and then running GSP (Figure 3) does not induce an incentive-compatible mechanism as VCG does; however, it would be a reasonable analogue of the GSP auction widely adopted in sponsored search.
---
Rebuttal Comment 1.1:
Title: Response
Comment: I thank the authors for their detailed response.
- I'm still not convinced by the novelty.
- The main theoretical innovation is to connect the RAG framework with ad auctions. This is achieved in the paper by incorporating bids into the generation probability (Eq. 3) through Gumbel perturbations, as the authors highlighted in the global rebuttal. I agree with the authors that this is a nice incorporation of a method used in econometrics. However, it appears lacking if the use of this trick is the main theoretical contribution of the paper.
- The new notion of logarithmic social welfare is interesting. However, as the authors mentioned, it is a new concept that has never been studied in the literature, so I would be more convinced of its importance if there were more evidence supporting its merits and properties theoretically and/or empirically. Other theoretical derivations, like the proof of DSIC and the pricing rule, bear close semblance to the ad auctions literature. Overall, I find the research question interesting, but still do not find a lot of theoretical innovation in the proposed approach.
- I'll be more convinced of the merits and practical importance of the approach if there's strong empirical evidence. I thank the authors for the additional pdf they uploaded in the global rebuttal. I'm still not fully convinced that it resolved my point in the review:
> Particularly worrisome is the fact that in both the single allocation and multi allocation example (as well as in the examples in Appendix G.2), as soon as the paragraph starts pivoting to the ads in the first sentence, it never goes back to providing any additional useful information to answering the original question "Can you suggests books similar to `to kill a mocking bird'".
- The authors pointed to the "3rd paragraph in the first sample and in particular 2nd paragraph in the second sample". I don't really see it in the 3rd paragraph in the first sample, maybe I'm missing something? For the 2nd paragraph in the second sample, I do see it: *"Additionally, consider delving into classics such as "Cry, the Beloved Country" by Alan Paton, which, like Harper Lee's masterpiece, offers profound insights into social justice and empathy"*. This seems good. Although in this case *both* the first and the second paragraph are advertising **MassMart**, so it seems to be a case where LLM doesn't need to make transitions. Are there examples where the paragraphs are advertising different companies but the subsequent paragraphs still add meaningful materials to the question? Would also help if we could see how much prompt engineering will be needed to achieve that and whether it's ad hoc to a single question. Ideally we want a prompt structure that works for at least a subset of questions.
---
Rebuttal 2:
Comment: We appreciate the reviewer's responsiveness.
### Novelty
We first emphasize that our main contribution is to propose the idea of combining RAG and ad auctions for LLMs and to lay down a foundation for this line of research. Indeed, even though applying RAG to LLM ad auctions seems a very plausible direction, there has been no single work dedicated to it. That is to say, our primary objective and contribution is to propose a potentially promising framework for LLM ad auctions, which we believe will inevitably emerge in the near future, and to validate its feasibility.
Although RAG was introduced in 2020 and advertising has long been a fascinating field of study, the integration of ads within the RAG framework (to potentially monetize LLMs) is a novel approach that was neither studied theoretically nor validated through empirical research.
As the reviewer agreed regarding the idea of perturbation: although this idea is easily understandable in hindsight, it is not easy to come up with at first glance. Also, we think that the idea of reverse-engineering the mechanism design by starting from an exogenously given allocation function from RAG is not a trivial perspective to derive. We respectfully disagree that a paper must necessarily include complicated theoretical techniques or proofs; we believe that simple and concise but novel ideas often advance fields in many directions.
Finally, as the reviewer pointed out, delving more into logarithmic social welfare seems a very promising research direction. We have indeed exposited several initial properties in Appendix D, including that it can be viewed as a version of weighted Nash social welfare, but further theoretical results remain open problems, e.g., whether it is consistent with other notions of fairness in terms of reachability/approximability. We will make this point clearer in the main body of the paper.
### Experiments with longer segments
We would like to emphasize that optimizing the incorporation of ads into the LLM output via prompt engineering is not a goal of this paper. Our intention in the experiments was to provide a proof of concept with basic, straightforward prompts to confirm that the approach is viable (and surely, in practice, we may apply this approach to longer answers, etc., for quality purposes).
Having said that, below we provide another sample that we obtained using the single-ad segment auction without replacement, even without much prompt engineering (we just reused the original prompts with a slight addition: keep answering the query without deviating from it by fully focusing on the ads). We conducted this experiment several times and found that the LLM has been fairly consistent across almost all of them, i.e., a rather simple prompting structure would work.
> (Segment 1) If you appreciated the thematic depth and moral introspection of "To Kill a Mockingbird," you might enjoy exploring similar narratives like "The Help" by Kathryn Stockett or "A Time to Kill" by John Grisham. Find these titles easily at BookHaven, your ultimate online bookstore. With an extensive collection and personalized recommendations, BookHaven ensures a seamless shopping experience. Dive into a world of endless possibilities with BookHaven, where every book finds its perfect reader.
> (Segment 2) Additionally, consider delving into classics such as "Cry, the Beloved Country" by Alan Paton, which, like Harper Lee's masterpiece, offers profound insights into social justice and empathy. Pair your reading experience with a visit to EspressoEdge, where each sip of their high-quality, handcrafted beverages provides a moment of luxury. Savor rich espressos or creamy lattes while you immerse yourself in timeless literature at EspressoEdge.
> (Segment 3) For a contemporary twist, you might also enjoy "Small Great Things" by Jodi Picoult, a novel that tackles race and prejudice in modern society. Enhance your reading experience with Velora's range of tablets and e-readers, which offer crisp displays and user-friendly interfaces. Velora's smart devices ensure your favorite books are always accessible, whether you're at home or on the go. Elevate your tech experience with Velora.
We can definitely include these samples in the appendix for the camera ready if the reviewer finds it informative.
---
Rebuttal 3:
Comment: Thanks for the detailed response and the new example, this is helpful to know. I'll think about the authors response, and wait for the other reviewers' response and discussion before making a final decision on my rating.
Edit: The new example given by the authors partly addresses a concern raised in my original review so I will slightly adjust my rating accordingly. My other points remain the same at the moment. | Summary: This paper studies an interesting and timely application of ad auctions for LLMs via retrieval augmented generation. They propose a segment auction that takes the bid and relevance as the input and outputs the price by a randomized second price auction. This auction maximizes the logarithmic social welfare that is proposed in this paper.
Strengths: - The combination of RAG and Ad Auction is pretty interesting and novel.
- The presentation of the figures is clear and informative.
- This paper has empirical experiments.
Weaknesses: - The underlying assumption of this paper, i.e., [line 188-189] the relevance is independent of the previous segment is too strong.
Technical Quality: 3
Clarity: 3
Questions for Authors: In the paper, does the auction repeat $T$ times, one for each segment?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s valuable comments.
> Independent assumption
We in fact extend our independent segment auction to a more general segment auction with a history-dependent relevance measure in Appendix C. We will make sure to add a pointer to this part in the main body of the paper. In the dependent segment auction, our notion of relevance depends on the previous tokens. In this case, since the relevance measure that determines the value of the objective function LSW might depend arbitrarily on the previous tokens, from a computational perspective, exactly optimizing the objective function without any assumption requires enumerating every possible token sequence. Interestingly, we show that our segment auction can be viewed as a local greedy algorithm that approximates the globally optimal allocation rule. More precisely, our segment auction chooses the next tokens to maximize LSW given that the previous tokens are fixed.
> Question on the repetition
The reviewer is correct - our segment auction runs repeatedly to determine the output for each segment, with flexibility in determining what to choose as a segment. Our multi-ad segment auction further allows flexibility in determining how many ads to allocate within a segment.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. After I reviewed all the reviews/rebuttals, I believe this paper is an important and timely starting point for pricing RAG-based LLM, so I raised my score accordingly.
---
Reply to Comment 1.1.1:
Comment: We appreciate your positive feedback. Thanks a lot for raising the score!
---
Rebuttal 2:
Comment: After reviewing the authors' rebuttal and other reviews, I have several points to raise:
**Modeling Approach**: The paper employs hard insertion instead of model fusion for a RAG-based sponsored search. This choice seems suboptimal, especially considering concurrent theoretical work (see https://arxiv.org/pdf/2407.04471) that utilizes specific model fusion methods for similar tasks. However, given that this concurrent work was published post the NeurIPS 2024 deadline, it's understandable that the authors did not reference it.
**Empirical Performance**: I concur with Reviewer 2L4f's concerns regarding empirical performance. More comprehensive evaluations are needed to substantiate the proposed method's efficacy.
I still maintain my current score, but I want to second that there are indeed some points worth noticing in this paper. | Summary: In this paper, the authors integrate the auction mechanism into RAG LLMs for computational advertising. They propose a novel segment auction method where an auction is run to integrate single or multiple ads into each segment output of LLMs. Experiments on several auction scenarios are conducted to verify the effectiveness and feasibility of the proposed framework.
Strengths: 1. The research problem of this paper is interesting. This paper combines the popular RAG LLM framework with traditional auction mechanisms, exploring the prospects of integrating computational advertising with LLMs.
2. This paper is well-structured with a clear and coherent logic.
Weaknesses: 1. The technical novelty of the proposed method seems limited.
2. The experimental evaluation should include more baseline methods.
3. The evaluation method is not comprehensive enough.
Detailed Comments:
1. The proposed method lacks innovation. While combining computational advertising with LLMs is an interesting direction, this paper merely provides a simplistic integration of auction mechanisms and RAG, lacking innovation in its overall approach.
2. In the evaluation of the proposed method, this paper only compares two naive baselines (without relevance score / without an LLM), lacking comparison with other existing auction methods. The comparison should be made within the same RAG framework, between the proposed auction mechanism and other existing auction mechanisms, to demonstrate that the proposed mechanism is most compatible with RAG LLM.
3. The effectiveness of the whole proposed framework is not well verified. First, simply comparing the cosine similarity of embeddings between the output text and the original text without ads may not sufficiently indicate the text quality. On one hand, comprehensive metrics like perplexity could be incorporated. On the other hand, the quality of the advertising content in the text has not been considered. Second, there is a lack of overall results that can demonstrate how well this method achieves the tradeoff between advertising effectiveness, output quality, allocating efficiency, and fairness.
Technical Quality: 2
Clarity: 2
Questions for Authors: Refer to detailed comments.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Refer to detailed comments.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer for the detailed review. We hope that our rebuttal can address your concerns.
Note that our framework consists of several components, each of which could be further researched to improve real-world deployment. Our focus was on proposing the framework, analyzing it theoretically, and evaluating it with simple yet efficient methods.
> The technical novelty of the proposed method seems limited.
We kindly ask the reviewer to see our general response in “Novelty of the paper”
> The experimental evaluation should include more baseline methods.
Ours is indeed the first study to explore an ad auction framework within the LLM's textual output, so we compare our mechanisms with simple mechanisms that one might naively think of. There are simultaneous works by Aranyak and Soumalias; however, we remark that these works do not explicitly compare their mechanisms with one another due to the lack of existing/standard benchmarks, and they also only consider naive versions of auctions, e.g., greedy appending of ad documents similar to our naive mechanism. We kindly ask the reviewer to let us know if they have any specific auction to compare in mind.
> First, simply comparing the cosine similarity of embeddings between the output text and the original text without ads may not sufficiently indicate the text quality.
Measuring text quality while including advertisements is a challenging research question that requires further exploration by the NLP community. Traditional metrics like ROUGE may be unsuitable due to ads in the output and the absence of ground truth. While our framework's validation employed semantic similarity as a proxy for quality, qualitative analysis suggests our framework generally produces high-quality outputs. We acknowledge that this aspect could open a promising research area.
> On one hand, comprehensive metrics like perplexity could be incorporated.
Thank you for your valuable suggestion. We found that comparing perplexity is indeed useful and conducted experiments using the LLaMA3 8B model. We measured perplexity (given the query) for outputs in scenario 1 (100 samples per mechanism). Perplexity was low for original outputs but increased significantly when ads were included. Auction mechanisms that allow replacements showed slightly lower perplexity due to ad replication, which reduces model surprise. Other baselines performed competitively in terms of perplexity.
| mechanism | perplexity |
|----------------------------------|------------|
| original | 4.93 |
| single-alloc w replacement | 12.66 |
| multi-alloc greedy | 14.69 |
| single-alloc w/o replacement | 14.44 |
Note that while perplexity is a useful measure, it doesn't fully capture how well the model responds to the query or the overall output quality. This limitation highlights an interesting problem that requires further research to develop better metrics for evaluating response relevance and quality.
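For reference, perplexity as used above is the exponentiated average negative log-likelihood per token. The sketch below (ours; the actual numbers in the table were computed with the LLaMA3 8B model, which is not reproduced here) shows the computation from a list of token log-probabilities:

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp(average negative log-likelihood per token)."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# A model assigning probability 1/2 to every token has perplexity 2.
ppl = perplexity([math.log(0.5)] * 10)  # ~= 2.0
```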
> On the other hand, the quality of the advertising content in the text has not been considered.
Thanks to the reviewer for pointing this out. It indeed raises an interesting and fundamental research question for the general RAG framework: how effectively does the model utilize retrieved documents? There has been extensive study on evaluating RAG output given its retrieved documents. We believe that this problem requires further analysis in the ad domain specifically.
In our framework, we assumed for simplicity that when an ad is selected, the LLM can effectively integrate it into the output. Qualitative results support this assumption, demonstrating that the model properly blends ads into the generated output.
> Second, there is a lack of overall results that can demonstrate how well this method achieves the tradeoff between advertising effectiveness, output quality, allocating efficiency, and fairness.
We note that qualitative results show that our proposed approach is feasible and that recent LLMs like GPT-4 are useful for this framework. We used simple but efficient proxies for measuring ad relevance and output quality to compare different baselines, and show that the output quality of our mechanism is quite reasonable. A bit more formally, advertising effectiveness and allocation efficiency can be captured via social welfare, output quality can be captured via relevance, and fairness can be captured via minimum social welfare, as stated in L#260~265.
---
Rebuttal Comment 1.1:
Comment: We thank you again for your valuable feedback and comments which have helped to strengthen our paper. As the discussion period is ending soon, we would really appreciate if you could let us know if our responses have addressed your concerns. We will be happy to answer any further questions and address any remaining concerns to the best of our abilities in the remaining time!
---
Reply to Comment 1.1.1:
Comment: Dear reviewer, As the discussion period is ending soon, we would really appreciate if you could let us know if our detailed responses have addressed your concerns and that possibly you can upgrade your score given other reviews with high score for this paper. Thanks again for your valuable comments and time. | Summary: The work studies ad auctions integrated in LLM output powered by retrieval-augmented generation. The authors propose an ad auction (so-called segment auction) where an ad is put in retriever with some probability. An efficiency-fairness balance is maximized (through logarithmic social welfare). An extension to multi-ad setup is proposed. Evaluation of the framework has been done through synthetic experiments.
Strengths: - Highly important novel topic for research (due to restructuring of web search market and LLM/RAG-based search system deployment around the world)
- Both theoretical guarantees and empirical evaluation
Weaknesses: - Unclear selection of optimization function (see questions)
- It would be nice to have real-life evaluation as the topic is highly relevant for production search engines
Technical Quality: 3
Clarity: 3
Questions for Authors: - It is not very clear why LSW has been selected for optimization. Was it chosen after the segment auction's invention? Or did you purposely search for an auction that optimizes LSW?
- Does segment auction maximize SW? What is the auction design for SW optimality?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: the authors adequately addressed the limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We first appreciate the reviewer for insightful comments.
> About unclear selection of optimization function
The reviewer is correct that it was chosen after the allocation function of the segment auction was determined. We remark that our selection of the optimization function stems from the probabilistic retrieval nature of the RAG framework. The RAG framework suggests retrieving each document proportionally to the probability $p_\eta(a_i | x)$. Thus, we restrict our mechanism to have the same allocation function so that the overall architecture exactly resembles the RAG framework but with ad documents. In hindsight, we observe that such an allocation function maximizes LSW, which has never been studied in the literature to the best of our knowledge. We firmly believe that LSW may play a crucial role as an objective function in ad auctions for LLMs in future works, and we also found that it achieves a balance between fairness and efficiency in a similar vein to the Nash social welfare.
> Does segment auction maximize SW? What is the auction design for SW optimality?
In fact, a (single-allocation) auction that maximizes SW would be any auction that selects an ad with the highest value of $q_i \cdot b_i$, e.g., a second-price or first-price auction. As per the revenue-equivalence theorem of Myerson, all these auctions would induce the same revenue, whereas our segment auction intentionally deviates from such a SW-maximizing allocation function to be consistent with the probabilistic retrieval of RAG, i.e., in order to improve the overall quality of the LLM outputs themselves.
Rebuttal: We thank the reviewers for their constructive feedback. In this section, we address the concerns regarding design choices and engineering components in our proposed framework, along with a recap over our main contributions, and further results for multi-ad segment auction.
------
### Segments and Prompt Engineering
+ choice of segments: the segment size is intentionally set and is not constrained by our mechanism or theoretical results. It's determined based on our understanding of the large language model's (LLM) ability to incorporate ads within each segment's output, in order to focus on delivering our main takeaway. In the paper, we used smaller segments to produce shorter, more efficient outputs and provide clearer qualitative results. The success of the LLM with shorter segments suggests that our approach will also work for longer segments. We have included some qualitative examples for **longer segments (e.g., paragraphs) in the PDF file**.
+ output quality: The quality of the outputs can be improved by refining the prompts given to the language model, ensuring better integration of ads within the output while effectively responding to users’ queries. Effective prompt engineering can lead to more coherent outputs that meet the needs of both advertisers and users.
While we appreciate the reviewers' insights on these choices, we emphasize that the main contribution of our paper is to propose and theoretically analyze this framework, alongside empirical evaluation. There is always room for improvement in each component by selecting better options.
------
### Novelty of the paper
We believe that the main technical novelty lies in applying the RAG framework to ad auctions for LLMs and revealing several of its properties.
+ First, our main mechanism considers a completely different approach than the standard mechanism design literature since we consider the allocation function as given from the RAG’s probabilistic retrieval nature. Then, given the allocation function as fixed, we aim to discover which objective such an allocation function tries to achieve, which turns out to be the logarithmic social welfare we defined. We firmly believe this approach could be a stepping stone to explore the intersection of RAG and mechanism design.
+ In addition, our perturbation technique inspired by discrete choice methods is indeed a novel component that implements any such randomized allocation rule in a simple manner. For instance, if one wants to naively implement a linearly proportional allocation rule, the straightforward approach would be to simply randomize the allocation function and sample from the corresponding distribution. In this approach, however, the payment function is not easy to obtain. One may apply Myerson's lemma to obtain the expected payment given the randomized allocation function; however, this only guarantees incentive compatibility in an ex-ante manner. In stark contrast, our perturbation technique ensures that the segment auction is ex-post incentive-compatible if the mechanism given the perturbed bids is incentive-compatible. Even more interestingly, in our framework, one can apply arbitrary pricing schemes given by any auction in the literature, e.g., the generalized second-price auction (GSP), even though it is not incentive-compatible. The approach is again to apply random perturbations to the bids, and then use the GSP pricing rule with the resulting fixed bids. Note that this is not possible without the perturbation technique, e.g., it is not straightforward to define the analogue of GSP when the allocation function needs to be randomized.
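The core of the perturbation idea can be sketched as follows. This is a minimal illustration of the Gumbel-max trick (not the paper's implementation): perturbing $\log(q_i \cdot b_i)$ with i.i.d. Gumbel(0, 1) noise and taking the argmax selects ad $i$ with probability proportional to $q_i \cdot b_i$, after which any pricing rule can be applied to the now-fixed perturbed bids. The scores and function name below are hypothetical.

```python
import math
import random

def perturb_and_rank(scores, rng):
    """Rank ad indices by Gumbel-perturbed log-scores (scores[i] = q_i * b_i)."""
    gumbel = lambda: -math.log(-math.log(rng.random()))  # Gumbel(0, 1) sample
    perturbed = [math.log(s) + gumbel() for s in scores]
    return sorted(range(len(scores)), key=lambda i: -perturbed[i])

rng = random.Random(0)
scores = [3.0, 1.0, 1.0]  # hypothetical q_i * b_i for three ads
wins = [0, 0, 0]
for _ in range(50_000):
    wins[perturb_and_rank(scores, rng)[0]] += 1
freqs = [w / 50_000 for w in wins]  # empirically close to [0.6, 0.2, 0.2]
```

The empirical winner frequencies match the proportional (RAG-style) allocation, while each individual draw yields an ordinary fixed-bid ranking to which a second-price or GSP-style payment can be attached.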
We remark that there has been no work to consider the idea of RAG + LLM ad auction and investigate the underlying theoretical / empirical nature of the corresponding framework in detail.
+ Finally, we evaluated various segment auction mechanisms and other ad auction baselines within the RAG framework. Our analysis provides insights into auction outcomes, such as social welfare and revenue, in some realistic scenarios, providing practical guidance for implementing this framework in real-world use cases.
------
### Further result on the multi-ad segment auction
We actually have obtained the allocation function of the GSP analogue under the RAG framework. Precisely, for the multi-ad segment auction presented in Figure 3, we observe that the allocation probability can be computed as follows:
$$ P(S \text{ wins }) = \sum_{T \subseteq S} (-1)^{|T|+1} \frac{\sum_{j \in T} q_jb_j}{\sum_{i \in \bar{S}\cup T}q_ib_i},$$
which strictly generalizes the single-allocation segment auction, recovered by taking $S$ to be the singleton set containing $i$. We will add a brief remark on this result in the camera-ready version.
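As a sanity check on this formula, a direct implementation (a sketch of our own; `weights` stands for the products $q_ib_i$) confirms that singletons recover the proportional single-allocation probabilities and that the grand set wins with probability 1:

```python
from itertools import combinations

def win_probability(S, weights):
    """P(S wins) = sum over nonempty T ⊆ S of (-1)^{|T|+1} *
    (sum_{j in T} w_j) / (sum_{i in S-bar ∪ T} w_i), with w_i = q_i * b_i.
    The empty T is skipped since its numerator is zero."""
    S = set(S)
    comp = set(range(len(weights))) - S  # the complement S-bar
    prob = 0.0
    for r in range(1, len(S) + 1):
        for T in combinations(S, r):
            numer = sum(weights[j] for j in T)
            denom = sum(weights[i] for i in comp | set(T))
            prob += (-1) ** (r + 1) * numer / denom
    return prob
```

For example, with weights $[2, 3, 5]$, the singleton $\{0\}$ wins with probability $2/10 = 0.2$, matching the proportional rule.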
Pdf: /pdf/cb239b872c505f42be12933f09538db3566bd16b.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Cross-modal Representation Flattening for Multi-modal Domain Generalization | Accept (poster) | Summary: This paper extends the analysis of unimodal flatness to the domain of multi-modal domain generalization (MMDG) for the first time, and proposes the Cross-Modal Representation Flattening (CMRF) method. By constructing a shared representation space and cross-modal knowledge distillation, it addresses the issues of competition and inconsistency in flatness across multiple modalities. Experimental results demonstrate that CMRF significantly outperforms all baseline methods in the multi-modal multi-source domain generalization setting, effectively enhancing the model's generalization capability. In particular, it achieves an average improvement of up to 3.52% in the video-audio modality combination.
Strengths: 1. The new Cross-Modal Representation Flattening (CMRF) method is used for multi-modal domain generalization. This method constructs a consistent flat loss region in the representation space and utilizes cross-modal knowledge transfer to enhance the generalization ability of each modality, which is a novel approach in the multi-modal domain.
2. The paper delves into two key issues in multi-modal domain generalization: inter-modal competition and inconsistent flatness between modalities. This analysis lays a solid foundation for proposing effective solutions.
3. The paper mentions that applying SAM can only improve the generalization of the better modalities in multi-modal networks, while it has little to no benefit and may even be harmful for the weaker modalities. This indicates that traditional SAM-based single-modal methods, due to generalization gaps, fail to find the flat minimum value where each modality coexists, leading to inconsistent flatness and thus not fully utilizing the generalization ability of all modalities. The analysis and conclusions are correct.
Weaknesses: 1. The paper mentions using simple moving average to build the teacher network, but how to extract knowledge from the mixed representation and how to transfer this knowledge to each modality is not clear enough.
2. The paper points out that inter-modal competition is a key issue, but the specific performance of the competition and the detailed evaluation of the proposed mitigation strategies are not thorough. It is hoped that the authors can provide more quantitative analysis on inter-modal competition and validation of the effectiveness of the mitigation strategies.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In Section 3.3, the authors proposed a method for cross-modal representation flattening, but further explanation is needed on how to construct interpolation representations and how to optimize these interpolations to achieve flattening of the loss region.
2. The authors should provide more details on how to dynamically adjust weights based on the differences in the generalization ability of each modality.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable comments. We respond to each of the concerns below.
>1. The paper mentions using simple moving average to build the teacher network, but how to extract knowledge from the mixed representation and how to transfer this knowledge to each modality is not clear enough.
**Response:** The reason why using the mixed representation promotes multimodal domain generalization is twofold. First, the mixed representations are constructed from the moving-averaged teacher network, which has a flatter loss landscape than the student network [A]; thus, the representations from the teacher network naturally lead to better generalization. Second, mixing the representations of one modality with those of another modality amounts to applying perturbations, and these perturbations guide the current modality to learn about the other modality. Therefore, through distillation, we extract knowledge from the mixed representation and transfer it to each modality. Moreover, as the representations of each modality learn towards these interpolations, we also optimize the interpolations with uni-modal classification losses, which ensures that when the losses of the interpolations are large, they can be driven to lower values, so that the representation loss landscape between modalities is flattened.
> 2. The paper points out that inter-modal competition is a key issue, but the specific performance of the competition and the detailed evaluation of the proposed mitigation strategies are not thorough. It is hoped that the authors can provide more quantitative analysis on inter-modal competition and validation of the effectiveness of the mitigation strategies.
**Response:** Modality competition refers to the mutual inhibition between modalities in joint training, which is directly reflected in in-domain performance, as studied in previous literature. Below we give the uni-modal validation results (in-domain) on EPIC-Kitchens with video and audio data. Modality competition manifests in that each single modality of Base performs worse than under uni-modal training, which further leads to worse out-of-domain performance for each modality, as in the results of Tabs. 1 and 4, where the generalization ability of each modality degrades under different modality combinations and datasets, indicating the existence of modality competition.
More results of the in-domain validation accuracy will be released later.
| Method | Video D2,D3->D1 | Video D1,D3->D2 | Video D1,D2->D3 | Video Avg | Audio D2,D3->D1 | Audio D1,D3->D2 | Audio D1,D2->D3 | Audio Avg |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|Uni-modal|79.58|75.58|75.19|76.78|**60.32**|54.29|53.16|55.92|
|Base|75.78|73.60|72.40|73.93|54.58|52.23|49.11|51.97|
|SAM|77.03|73.81|73.75|74.86|54.90|51.60|49.67|52.06|
|EoA |78.94|73.20|75.12|75.75|56.85|52.76|52.45|54.02|
|SimMMDG|80.86|74.81|74.57|76.75|54.58|53.34|52.90|53.60|
|CMRF(ours)|**81.26**|**77.21**|**75.69**|**78.05**|58.77|**54.89**|**54.38**|**56.01**|
In contrast, our method achieves the best uni-modal in-domain performance as shown above, indicating that **it mitigates modality competition effectively**, which in turn improves generalization to other domains, as in Tab. 4. The detailed results with each domain as target and more results on the HAC dataset are shown in Tabs. 8-13 in the appendix, demonstrating both the presence of modality competition and that our approach comprehensively mitigates this problem.
> 3. In Section 3.3, the authors proposed a method for cross-modal representation flattening, but further explanation is needed on how to construct interpolation representations and how to optimize these interpolations to achieve flattening of the loss region.
**Response:** How the interpolation representations are constructed is given by Eq. 8. The representations extracted from the teacher network are combined as a weighted sum, where the weight $\delta$ is sampled from the distribution Beta($\alpha$, $\alpha$). Here we set $\alpha <1$, so that $\delta$ is biased towards 0 or 1. Considering the semantic gap between modalities, we let the interpolation closer to the student modality act as its teacher signal. For example, for two representations $z_M$ and $z_N$ from modalities M and N, one interpolation is $z_{M,N}=\delta z_M + (1 - \delta ) z_N$; if $\delta = 0.8$ (close to M), $z_{M,N}$ will be the teacher signal for modality M, and if $\delta = 0.2$ (close to N), $z_{M,N}$ will be the teacher signal for modality N, as in Eq. 9.
How these interpolations are optimized depends on two losses: the distillation loss and the uni-modal classification loss (L186, L187). As in our response to Question 1, through distillation we make the representations of each modality learn towards the interpolations. Then, through the uni-modal classification losses, if the losses of the interpolations are large, we can optimize them to lower values, so that the representation loss landscape between modalities is flattened.
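A minimal sketch of the interpolation-and-routing step (our own illustration of Eqs. 8-9; the variable names and the 0.5 routing cutoff are assumptions):

```python
import random

def make_interpolation(z_m, z_n, alpha, rng):
    """Mix teacher representations of modalities M and N with a weight
    delta ~ Beta(alpha, alpha); with alpha < 1 the draw is biased toward
    0 or 1. The interpolation serves as the teacher signal for whichever
    modality it is closer to."""
    delta = rng.betavariate(alpha, alpha)
    z_mix = [delta * a + (1.0 - delta) * b for a, b in zip(z_m, z_n)]
    teacher_for = "M" if delta >= 0.5 else "N"
    return z_mix, teacher_for, delta
```

During training, the distillation loss would pull each modality's student representation toward its assigned interpolations, while the uni-modal classification loss drives the interpolations themselves to low loss.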
> 4. The authors should provide more details on how to dynamically adjust weights based on the differences in the generalization ability of each modality.
**Response:** The weights are adjusted across epochs. We use the teacher network with its uni-modal classifiers to obtain each modality's average validation accuracy on the validation set, i.e., $\hat{A}_k$. If the performance gap between two modalities is greater than a certain threshold $\mu$, we put a larger weight (1.0) on the better modality, so that it plays a greater role in the distillation process, and a smaller weight (0.5) on the other.
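The rule above can be sketched as follows (our own illustration; only the 1.0/0.5 split for a large gap is stated in the rebuttal, so the equal-weight fallback when the gap is within $\mu$ is our assumption):

```python
def distillation_weights(val_acc, mu):
    """val_acc: dict mapping the two modality names to the teacher's
    uni-modal validation accuracies (the A-hat_k values). If the gap
    exceeds mu, weight the better modality 1.0 and the other 0.5;
    otherwise treat both equally (assumed fallback)."""
    (m1, a1), (m2, a2) = val_acc.items()  # exactly two modalities assumed
    if abs(a1 - a2) > mu:
        better = m1 if a1 > a2 else m2
        return {m: (1.0 if m == better else 0.5) for m in (m1, m2)}
    return {m1: 1.0, m2: 1.0}
```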
[A] Devansh Arpit, et al. Ensemble of averages: Improving model selection and boosting performance in domain generalization. NeurIPS 2022.
---
Rebuttal 2:
Comment: The authors' rebuttal dispelled some of my concerns, and I choose to maintain the score. | Summary: The paper addresses the problem of multi-modal domain generalization (MMDG). The authors identify two main challenges in MMDG: modality competition and discrepant uni-modal flatness and propose a novel method called Cross-Modal Representation Flattening (CMRF). This method optimizes the representation-space loss landscapes instead of the traditional parameter space, allowing for direct connections between modalities. The authors demonstrate the effectiveness of CMRF through extensive experiments on benchmark datasets EPIC-Kitchens and Human-Animal-Cartoon (HAC), showing superior performance in both multi-source and single-source settings.
Strengths: 1. The introduction of Cross-Modal Representation Flattening is novel to address the challenges of MMDG
2. The paper provides a detailed analysis of modality competition and discrepant uni-modal flatness, which are crucial factors in multi-modal generalization
3. Comprehensive evaluation demonstrates the effectiveness of the proposed methods
4. The paper is very well written and easy to follow
Weaknesses: 1. Are there any visualization or quantitative results to show that after CMRF training the loss landscape of different modalities get consistent flat region?
2. What is the influence of α in the Beta distribution? What if it is larger than 1?
Technical Quality: 3
Clarity: 3
Questions for Authors: Are there any visualization or quantitative results to show that after CMRF the loss landscape of different modalities get consistent flat region? What is the influence of α in the Beta distribution? What if it is larger than 1?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Currently, the paper need to test on validation set to estimate generalization of each modality for Eq. 11, which can be time-consuming with the scale increase of validation set.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable comments. We respond to each of the concerns below.
> 1. Are there any visualization or quantitative results to show that after CMRF training the loss landscape of different modalities get consistent flat region?
**Response**: Thanks a lot for the valuable comments. We can evaluate loss flatness by applying low-frequency perturbations sampled from a Gaussian distribution to the representations, as in [A], where the variance controls the perturbation strength. The magnitude of the performance drop indicates how flat the loss landscape is. The results are shown in Figs. 1 and 2 of rebuttal.pdf. As the variance increases, the performance drop of the different modalities grows quickly. Although SAM can obtain a flatter loss landscape for the better modality (video or optical flow), its improvement to the weak modality (audio) is small or even harmful. The same phenomenon also exists in some multimodal domain generalization methods such as SimMMDG. In contrast, our method has the smallest performance drop on each modality, indicating that it achieves a flatter loss landscape for both modalities simultaneously, i.e., a consistent flat region. More results on different modality combinations and datasets will be released later.
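The evaluation protocol can be sketched as follows (a toy of our own with synthetic representations and a fixed nearest-centroid classifier, not the paper's models, and plain i.i.d. Gaussian noise rather than the low-frequency variant): accuracy under growing perturbation variance serves as the flatness proxy.

```python
import random

def perturbed_accuracy(reps, labels, centroids, sigma, rng):
    """Accuracy of a fixed nearest-centroid classifier after adding
    Gaussian noise of std sigma to each representation; a larger drop
    as sigma grows indicates a sharper loss landscape."""
    correct = 0
    for z, y in zip(reps, labels):
        z_p = [v + rng.gauss(0.0, sigma) for v in z]
        dists = [sum((a - b) ** 2 for a, b in zip(z_p, c)) for c in centroids]
        correct += dists.index(min(dists)) == y
    return correct / len(reps)

# Synthetic two-class representations clustered around two centroids.
rng = random.Random(0)
centroids = [[0.0] * 8, [2.0] * 8]
reps, labels = [], []
for y, c in enumerate(centroids):
    for _ in range(200):
        reps.append([v + rng.gauss(0.0, 0.3) for v in c])
        labels.append(y)

accs = [perturbed_accuracy(reps, labels, centroids, s, rng) for s in (0.0, 1.0, 3.0)]
```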
> 2. What is the influence of α in the Beta distribution? What if it is larger than 1?
**Response**: In this paper, we set the parameters in Beta($\alpha$, $\beta$) equal, i.e., $\alpha=\beta$, so it becomes Beta($\alpha$, $\alpha$). In this way, the distribution is symmetric around the midpoint of the interval [0, 1]. When $\alpha <1$, the Beta distribution is U-shaped, with more weight towards the boundaries (0 and 1). When $\alpha >1$, the Beta distribution becomes bell-shaped around 0.5, and the larger the value of $\alpha$, the more peaked the distribution around 0.5. In this paper, we want to flatten each modality's loss landscape by distilling knowledge from the mixed multimodal representations. However, because of the knowledge heterogeneity between modalities, if the mixed representations contain too much inconsistent knowledge from other modalities, cross-modal distillation will lead to worse performance [B]. Therefore, we want the mixed teacher information used for distillation to contain more information about the student modality, i.e., the values sampled from the Beta distribution should be biased towards 0 or 1. If $\alpha >1$, the sampled values are more likely to be near 0.5, where the knowledge of the two modalities is mixed equally; in this case, if the other modality contains knowledge inconsistent with the student modality, distillation performance degrades. This is also illustrated by Fixed Mix (mixing ratio 0.5) in Tab. 6, which is worse than our method with $\alpha=0.1$.
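The shape difference can be checked numerically (a quick sketch of our own; the central interval and sample count are arbitrary choices):

```python
import random

def central_mass(alpha, n=20000, seed=0):
    """Monte Carlo estimate of the Beta(alpha, alpha) probability mass in
    the central interval [0.25, 0.75]: small when alpha < 1 (U-shaped,
    mass near 0 and 1), large when alpha > 1 (peaked around 0.5)."""
    rng = random.Random(seed)
    hits = sum(0.25 <= rng.betavariate(alpha, alpha) <= 0.75 for _ in range(n))
    return hits / n
```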
[A] Yixiong Zou, et al. Flatten Long-Range Loss Landscapes for Cross-Domain Few-Shot Learning. CVPR 2024.
[B] Zihui Xue, et al. The Modality Focusing Hypothesis: Towards Understanding Crossmodal Knowledge Distillation. ICLR 2023.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for their rebuttal. Most of my concerns have been well-addressed and I thus increased my score to 6. | Summary: In this paper, the authors identify two primary limitations in multi-modal domain generalization (MMDG): modality competition and discrepant uni-modal flatness. To address these challenges, they propose a novel approach called Cross-Modal Representation Flattening (CMRF). CMRF constructs interpolations by mixing multi-modal representations from a teacher model and uses feature distillation to optimize high-loss regions between modalities. It adopts a supervised multi-modal contrastive loss to enhance the flat loss region in the shared feature space. Furthermore, CMRF employs adaptive weighting based on modal validation set accuracy to better utilize each modality. The effectiveness of the proposed method is validated through extensive experiments.
Strengths: 1. Flatness analysis within multi-modal domain generalization (MMDG) is innovative. Identifying modality competition and discrepant uni-modal flatness as key limitations offers a fresh perspective on the challenges in the field and has the potential to advance understanding in the broader area of multi-modal learning.
2. Extensive experiments in various modality combinations and settings demonstrate the effectiveness of the proposed model.
Weaknesses: 1. This paper lacks a detailed discussion on the practical implementation aspects of the proposed method, such as computational requirements and scalability.
2. Although the authors' claims about addressing modality competition and discrepant uni-modal flatness are intuitive, the paper does not adequately address how to evaluate the loss flatness for multi-modal data. Additionally, the proposed method lacks sufficient evidence and empirical results to convincingly demonstrate its effectiveness in optimizing these specific limitations within multi-modal domain generalization.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. What does the comparative data in Table 1 represent?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors have discussed the limitations in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable comments. We respond to each of the concerns below.
> 1. This paper lacks a detailed discussion on the practical implementation aspects of the proposed method, such as computational requirements and scalability.
**Response:**
**1) Computational requirements:** First, the projector (a 3-layer MLP) for each modality is very small and is also used in previous methods such as SimMMDG, so the additional computation from the extra parameters is negligible. Second, although our method requires additional computation for the teacher model, it only requires forward propagation, not gradient backpropagation, so this additional computation is also limited.
Finally, the main additional computational cost comes from testing on the validation set for the adaptive weights (Eq. 11), as discussed in our Limitations section. The one-epoch training times on D2,D3->D1 with video-audio data from EPIC-Kitchens for different methods are shown below (in seconds). SWAD and our CMRF need more computation, as both require testing on the validation set (the value in parentheses indicates the validation time). Searching for an efficient method to evaluate the generalization of each modality, such as adding low-frequency noise to simulate domain shift, is left as future work.
| Base | SAM | SAGM | SWAD | EoA | RNA-Net | SimMMDG | CMRF (ours) |
|--------|---------|----------|-----------|-------|--------------|----------------|-------------------|
| 308.5 | 346.4 | 345.6 | 367.9 | 309.2 | 311.3 | 327.7 | 369.9 (59.4) |
**2) Scalability:** Our method places no restrictions on the type or number of modalities. Since our method operates in the representation space rather than the parameter space, the only condition for applying it is that a consistent representation space across modalities is available, which is readily satisfied in multimodal deep learning via neural network mappings. This also demonstrates the flexibility of our approach across model structures. In this paper, our experiments cover three common modalities (video, audio, optical flow) as well as different numbers of modalities to verify our claims. All of this shows that our method scales well.
> 2. Although the authors' claims about addressing modality competition and discrepant uni-modal flatness are intuitive, the paper does not adequately address how to evaluate the loss flatness for multi-modal data. Additionally, the proposed method lacks sufficient evidence and empirical results to convincingly demonstrate its effectiveness in optimizing these specific limitations within multi-modal domain generalization.
**Response:** Thanks a lot for the valuable comments. We can evaluate loss flatness by applying low-frequency perturbations sampled from a Gaussian distribution to the representations, as in [A], where the variance controls the perturbation strength. The magnitude of the performance drop indicates how flat the loss landscape is. The results are shown in Figs. 1 and 2 of rebuttal.pdf. As the variance increases, our method has the smallest performance drop on each modality, indicating that it achieves a flatter loss landscape for both modalities simultaneously and in turn a flatter multi-modal loss landscape.
In order to better demonstrate the effectiveness of our approach in addressing the two specific limitations, please allow us to restate the two problems. **Modality competition** refers to the mutual inhibition between modalities in joint training, which is directly reflected in in-domain performance, as studied in previous literature. Below we give the uni-modal validation results (in-domain) on EPIC-Kitchens with video and audio data. Modality competition manifests in that each single modality of Base performs worse than under uni-modal training, which further leads to worse out-of-domain performance, as shown in Tab. 8 in the appendix. Our method achieves the best uni-modal in-domain performance, indicating that **it mitigates modality competition effectively**, which in turn improves generalization to other domains, as in Tab. 4.
| Method | Video D2,D3->D1 | Video D1,D3->D2 | Video D1,D2->D3 | Video Avg | Audio D2,D3->D1 | Audio D1,D3->D2 | Audio D1,D2->D3 | Audio Avg |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|Uni-modal|79.58|75.58|75.19|76.78|**60.32**|54.29|53.16|55.92|
|Base|75.78|73.60|72.40|73.93|54.58|52.23|49.11|51.97|
|SAM|77.03|73.81|73.75|74.86|54.90|51.60|49.67|52.06|
|EoA |78.94|73.20|75.12|75.75|56.85|52.76|52.45|54.02|
|SimMMDG|80.86|74.81|74.57|76.75|54.58|53.34|52.90|53.60|
|CMRF(ours)|**81.26**|**77.21**|**75.69**|**78.05**|58.77|**54.89**|**54.38**|**56.01**|
**Discrepant uni-modal flatness** refers to the fact that the generalization gaps between modalities make it difficult to optimize flat regions for each modality simultaneously. According to our analysis in Sec. 3.2 and the results in Tabs. 1 and 4 and the table above, previous SAM-based methods contribute more to the better modality and contribute less, or are even harmful, to weak modalities. Fig. 1 in rebuttal.pdf also shows that SAM obtains a flatter loss for the better modality (video) and a sharper loss for the weak modality (audio). In contrast, our method **achieves a flatter loss landscape for each modality simultaneously, leading to consistent flatness**.
The above demonstrates the effectiveness of our method for both modality competition and discrepant uni-modal flatness; the improvement in multimodal domain generalization comes from solving both problems at the same time, not just one of them.
> 3. What does the comparative data in Table 1 represent?
**Response:** The data in Tab. 1 are the average results obtained by treating each domain as the target domain; e.g., each entry is the average of the results from A,C→H, H,C→A, and H,A→C on HAC.
[A] Yixiong Zou, et al. Flatten Long-Range Loss Landscapes for Cross-Domain Few-Shot Learning. CVPR 2024.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for addressing my concerns. I tend to increase my score to 5. | null | null | Rebuttal 1:
Rebuttal: We thank all reviewers for their positive feedback:
1. A fresh perspective of modality competition and discrepant uni-modal flatness for MMDG [eDBu, iu3y, 8XoV];
2. Flatness analysis within multi-modal domain generalization (MMDG) and the proposed method for flattening the representation loss landscape are innovative [eDBu, 8XoV];
3. Good writing and easy to follow [iu3y]
4. Extensive experiments demonstrate the effectiveness of the proposed model [eDBu, iu3y].
We respond to each reviewer's comments below. We provide some figures in the rebuttal.pdf.
Pdf: /pdf/ec9e9f23c207f798d667df2b68e2b804339ef910.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
A Decision-Language Model (DLM) for Dynamic Restless Multi-Armed Bandit Tasks in Public Health | Accept (poster) | Summary: The paper titled "A Decision-Language Model (DLM) for Dynamic Restless Multi-Armed Bandit Tasks in Public Health" introduces a novel method for improving public health resource allocation. By combining Restless Multi-Armed Bandit (RMAB) models with the interpretive power of Large Language Models (LLMs), the authors have created a system that can adaptively fine-tune health policies based on human-language commands. This innovation allows for more flexible and responsive policy adjustments, crucial for addressing the changing needs of public health programs.
A significant contribution of this work is showing how LLMs can generate and refine reward functions for RMABs, enhancing decision-making in resource-limited settings. The authors demonstrate this through a collaboration with ARMMAN, an Indian non-profit focused on maternal health. Their simulations reveal that the DLM can significantly improve the allocation of health workers, leading to better engagement and outcomes. This approach promises to make public health interventions more effective and adaptable, closely aligning with the evolving needs of the community.
Strengths: Originality:
I think the originality of this paper is quite impressive. The way it combines Large Language Models (LLMs) with Restless Multi-Armed Bandits (RMABs) to dynamically fine-tune public health policies is really innovative. Using human-language commands to adjust these policies is a clever idea that bridges advanced AI techniques and practical decision-making in a unique way. This creative approach brings a fresh perspective to public health, making it much more adaptable and responsive to changing needs.
Quality:
The quality of the research really stands out. The authors did a good job detailing their methodology, from the reward proposal loop to the simulation stages and the reflection mechanism for refining reward functions. For example, they explain how LLMs interpret policy preferences and generate reward functions, which are then fine-tuned through simulations. The experiments are well-thought-out and use real-world data from ARMMAN, which adds a lot of credibility. The results are impressive, demonstrating that the DLM can achieve near human-level performance. The authors also provide a comprehensive analysis, comparing their model to baseline methods using clear performance metrics. This thorough validation highlights the potential for further real-world applications, showcasing how the approach can dynamically adjust policies to meet evolving public health needs.
Clarity:
The paper is generally well-organized and easy to follow. The tables and figures do a great job of illustrating the key concepts and results. However, some parts could be more engaging and less technical, making it easier for a wider audience to understand. Simplifying some of the technical language and adding short summaries at the end of sections would help readers quickly get the main points without getting lost in the details. For example, a brief recap of the key findings at the end of the results section would be really helpful.
Significance:
Despite being based on simulations, this work shows great potential for real-world impact. The DLM can dynamically adjust RMAB policies to meet changing public health needs, which is incredibly important. The evaluation on real-world data shows that this approach isn't just theoretical but has practical relevance. The findings suggest that the DLM could significantly improve how resources are allocated and how effective policies are, which is very promising. Being able to dynamically prioritize different demographic groups or regions based on evolving needs can lead to more targeted and efficient use of resources. This approach could have wide applications in various public health interventions.
Weaknesses: One major limitation of the paper is its reliance on a simulated environment for validation, which is understandable given the complexity of real-world testing. While the simulations use real-world data, the findings would be significantly strengthened by real-world trials. A practical next step would be to outline a detailed plan for field testing the Decision-Language Model (DLM) in actual public health settings. This would include partnerships with health organizations for pilot studies, addressing potential ethical and logistical challenges, and establishing metrics for real-world success.
Technical Quality: 3
Clarity: 3
Questions for Authors: Is there any theoretical foundation you rely on for designing the prompts used in your model? Do you think referring to sociological theories or frameworks could help in designing more effective and contextually relevant prompts?
The paper mentions issues with ambiguous language prompts leading to misaligned policies. Do you have any specific methods or future plans to quantify and address these ambiguities to improve the model's reliability?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The simulations in this work are impressive, but to make a real-world impact, the authors need to address some key areas. These include transitioning to actual field trials, expanding language support, ensuring the model can handle larger datasets, and dealing with ambiguous prompts. It's also important to consider potential negative societal impacts like data privacy, bias, and the effects on vulnerable groups. By tackling these issues and outlining a clear path for real-world use, the authors can significantly enhance the practical and ethical value of their work.
Flag For Ethics Review: ['Ethics review needed: Data privacy, copyright, and consent']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1) Simulated Environment Reliance: “The paper relies on simulations for validation. Real-world trials would strengthen findings. A plan for field testing DLM in actual public health settings, including partnerships for pilot studies and addressing ethical and logistical challenges, is needed.”
This work will be followed by a real-world evaluation. Domain experts from the NGO played an integral role throughout the model iteration and design process. We would also like to re-emphasize (see Section 5 and Appendix A) that any subsequent real-world evaluations are only undertaken after securing ethics approval from the NGO's ethics board, which is registered with the ICMR (Indian Council of Medical Research). Please see the global Note on Ethical Considerations above for more information. Obtaining approvals and designing and conducting a real-world evaluation require significant effort from the NGO and may take up to several months due to the complexities involved in real-world trials. Success in simulations is required for any subsequent approvals, and our simulations indicate substantial potential gains from the system (as demonstrated in this paper).
2) Field Trials and Testing: “Transitioning to real-world testing is essential. This includes expanding language support, handling larger datasets, and addressing ambiguous prompts.”
We thank the reviewer for this comment and agree that these are critical steps for future envisioned deployment. We discuss potential strategies to handle ambiguous prompts below. As referenced in Sec. 5.6, we agree also that expanded testing across different languages is critical. This may intersect with ongoing research in multilingual LLM evaluation [10], especially in LLM reasoning from Indic languages [11] which are highly represented in the ARMMAN deployment regions. Please see above global comment for additional details in consideration of future envisioned real-world testing.
3) Ethical Considerations: “Address potential negative societal impacts such as data privacy, bias, and effects on vulnerable groups. Tackling these issues and outlining a clear path for real-world use will enhance the practical and ethical value of the work.”
We thank the reviewer for the comment. We summarize the ethical considerations discussed in our paper in the global comment above, and highlight these key sections below.
We include a discussion on societal impacts in Appendix A and a discussion on data usage in Appendix C. The real world dataset is completely anonymized by ARMMAN. Data exchange from ARMMAN to the researchers was regulated through clearly defined exchange protocols including anonymization, read-only researcher access, restricted use of data for research purposes only, and approval by the ARMMAN ethics review committee. Our method helps ARMMAN in training dynamic policies to support vulnerable groups and allows ARMMAN program managers to monitor state-feature distributions over underrepresented demographic groups (see Figure 1 description and Section 5). We will further emphasize above mentioned points on ethical considerations and societal impacts in the paper.
4) Prompt Design Foundation: “Is there a theoretical foundation for designing the prompts used in the model? Referring to sociological theories or frameworks could help in creating more effective and relevant prompts.”
The prompts in our model are primarily designed to capture the practical, real-world scenarios that could be encountered through callership patterns among mothers in India, and were designed in collaboration with ARMMAN. While there isn't a direct theoretical foundation for the prompt design, the prompts are inspired by common decision-making processes and challenges faced by ARMMAN. In addition, we design them in such a way that they become progressively more difficult to reason about, including a mixture of tasks that require recovering just one feature, recovering multiple features, and inferring the correct features despite ambiguity. Thus, a substantial amount of time was spent on designing the prompts, although the current prompts could benefit from more principled approaches.
We believe that integrating sociological theories or frameworks is a wonderful idea, and could indeed enhance the contextual relevance and effectiveness of prompts. For instance, a list of criteria such as the Social Determinants of Health [12] could be used to help guide the development of prompts that better reflect community needs and values. In addition, a large body of literature is now emerging along the direction of “prompt engineering” [13, 14], i.e., finding methods for improving prompt efficacy.
5) Ambiguous Language Prompts: “The paper notes issues with ambiguous prompts leading to misaligned policies. Are there methods or plans to quantify and address these ambiguities to improve reliability?”
One simple idea is to reject ambiguous prompts and ask for further clarifying information; this is tackled in other work that uses the disagreement between sampled outputs of the model [15] to quantify ambiguity. This is a well-studied area [16, 17], and several strategies are available that effectively mitigate ambiguous questions.
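The sampling-disagreement idea from [15] can be sketched roughly as follows; the disagreement measure, the threshold, and the `sample_model` interface are illustrative assumptions of this sketch, not details from the cited work:

```python
from collections import Counter

def disagreement(answers):
    """Fraction of sampled answers that differ from the majority answer."""
    counts = Counter(answers)
    majority = counts.most_common(1)[0][1]
    return 1.0 - majority / len(answers)

def answer_or_clarify(prompt, sample_model, n_samples=10, threshold=0.3):
    """Sample the model several times; reject the prompt as ambiguous
    (returning None so the caller can ask for clarification) when the
    sampled answers disagree too often."""
    answers = [sample_model(prompt) for _ in range(n_samples)]
    if disagreement(answers) > threshold:
        return None
    return Counter(answers).most_common(1)[0][0]
```

A system could then route `None` results to a clarifying follow-up question instead of producing a reward function from an ambiguous prompt.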
---
Rebuttal Comment 1.1:
Comment: I have read the Rebuttal, thanks for answering my question. | Summary: This paper proposes using a decision-language model for restless multi-armed bandit tasks (RMAB) in the public health domain. The authors evaluated their method in a simulation environment developed from a real-world dataset. The authors conducted experiments with 16 different prompts and compared their approach with baselines, demonstrating the effectiveness of their method. Their study provided insights into the generated reward design for RMABs.
Strengths: 1. The authors evaluated their method in a simulation environment developed from a real-world dataset and provided comparisons in the DLM-generated model with the baseline model.
2. The author provides the parameters of the model training process, which helps other researchers reproduce the process.
Weaknesses: 1. Avoid citing sources in the Abstract
2. The approach provided in this paper lacks real-world validation.
3. This paper did not discuss the ethical implications of using AI for decision-making in health resource allocation.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Why use the ARMMAN dataset exclusively in this study? What language is used in this dataset? Given that the test involves prompts in English, was there any preprocessing of the dataset required?
2. In some parts of the training process, it is crucial to explain why specific numbers are chosen. For instance, the downstream policy is trained for 5 epochs, why 5? why are 100 simulation steps selected?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The approach provided in this paper lacks real-world validation. This paper should also discuss the ethical implications of using AI for decision-making in health resource allocation.
Flag For Ethics Review: ['Ethics review needed: Data privacy, copyright, and consent']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1) Abstract Citations: “Avoid citing sources in the Abstract.”
We thank the reviewer for the suggestion. We will remove this citation from the abstract and instead include it at the end of the introduction.
2) Real-World Validation: “The approach provided in this paper lacks real-world validation.”
This work will be followed by a real-world evaluation. Domain experts from the NGO played an integral role throughout the model iteration and design process. We would also like to re-emphasize (see Section 5 and Appendix A) that any subsequent real-world evaluations are only undertaken after securing ethics approval from the NGO's ethics board, which is registered with the ICMR (Indian Council of Medical Research). Please see the global Note on Ethical Considerations above for more information. Obtaining approvals and designing and conducting a real-world evaluation require significant effort from the NGO and may take up to several months to complete due to the complexities involved in real-world trials. Success in simulations is required for any subsequent approvals, and our simulations indicate substantial potential gains from the system (as demonstrated in this paper).
3) Training Process Explanation: “In some parts of the training process, it is crucial to explain why specific numbers are chosen. For instance, the downstream policy is trained for 5 epochs, why 5? Why are 100 simulation steps selected?”
We thank the reviewer for this suggestion and will include additional details on the selection of training hyperparameters in the Appendix. Choices for these hyperparameters were made following quantitative assessments over a hyperparameter search which showed improved stability and low overfitting for the selected values for our given dataset. Our choices were additionally qualitatively guided by prior works studying network-based multi-armed bandit policy training, which similarly use 100 steps in buffer for simulated settings [9]. We will include this information on hyperparameter selection in the Appendix.
4) Ethical Implications: “This paper should also discuss the ethical implications of using AI for decision-making in health resource allocation.”
We discuss this extensively in Appendix A, Social Impact Statement, as well as Appendix B, Dataset Description and Appendix C, Dataset Consent for Collection and Analysis. We summarize these points discussed in our paper in the global comment above, including an overview of the ethical considerations taken prior to and during this work with our partner NGO.
5) Dataset Usage: “Why use the ARMMAN dataset exclusively in this study? What language is used in this dataset? Given that the test involves prompts in English, was there any preprocessing of the dataset required?”
We focus on the ARMMAN dataset because it is vetted, real-world data collected ethically and with consent. Such a dataset, created from a study over 7,668 mothers, has the potential for great impact, helping us gain insight into key policy design techniques that can improve the deployment of public health resources across millions of beneficiaries. Furthermore, the partner organization ARMMAN has an operational AI/RMAB deployment that works to allocate resources. The collected dataset is also rich rather than sparse, allowing a diverse set of tasks to be tested within it. Thus, there are multiple important reasons to work with this dataset. The dataset was already collected in English, so no preprocessing of the dataset was required. Due to these key strengths and benefits, we focus our analysis entirely on this highly valuable source of data.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for answering my questions. | Summary: Restless multi-armed bandits (RMAB) are effective for resource allocation in public health but lack adaptability to changing policies. Large Language Models (LLMs) have potential in healthcare for dynamic resource allocation through language prompts but are understudied in this area. This work introduces a Decision-Language Model (DLM) for RMABs to fine-tune resource allocation policies using language prompts. It proposes using LLMs to clarify policy preferences, propose reward functions, and iteratively refine these functions through simulations. Key contributions include pioneering the use of LLMs for adapting public health resource allocation and demonstrating near-human-level policy tuning in a maternal and child care task. They introduce a reward proposal loop that improves LLM-generated reward functions using feedback from restless multi-armed bandit (RMAB) simulations. This allows LLMs to iteratively refine reward designs to achieve specific, human-specified policy outcomes.
Strengths: - It is very well written and fluent.
- They highlighted the comments and important parts which is really helpful to follow the context.
- The method is new in this application.
Weaknesses: - The related works are not comprehensive. It should include works in healthcare from other approaches and also applications of LLMs in other healthcare examples.
- The policy and critic are not identified until Algorithm 1, which confuses the reader, since from the beginning it seems the policy is also an LLM. This requires more clarification and adjustment.
- It is true that DLM (with reflection) works better, but it is not a fair comparison. Justifying why the variant without reflection works well or poorly is necessary.
- The baselines are not fair. It is true that other LLM-based methods may not yet have been applied to this healthcare problem, but to justify their choice of algorithm, comparing with other algorithms that use LLMs is necessary. They need to provide enough evidence for their selection; in this case they need baselines from work with LLMs.
Technical Quality: 3
Clarity: 3
Questions for Authors: - What are the features z exactly? It would be very helpful to provide some clear examples in that section or at the beginning of Section 4.
- How is the buffer used in the algorithm? It is not clear.
- Figure 2 is not a clear way to show the summary of the results. Maybe a table would be better, since the differences are currently not clear. The question, then, is why Default in some cases has the same performance as DLM (no reflection). Could it be because of the prompt?
- Why are methods like CoT not used as a baseline? Or ReAct or Reflexion?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: It still needs a human prompt in the algorithm, which makes it less practical depending on the type of community.
The amount of automation is not very clear. It would be good to add this to the discussion.
They need to compare against more baselines.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1) “The related works section is not comprehensive…”
We thank the reviewer for the comment. There is growing research in LLMs for healthcare (see Sec. 1). In particular, a question summarization framework in healthcare domains has been proposed [1, 2]. Methods based on contrastive language-image pretraining (CLIP) and LLMs to identify medical disorders, generate relevant context, and craft summaries with visual information have been studied [3, 4]. ChatGPT applications in medical, dental, pharmacy, and public health settings have been investigated [5, 6]. However, the potential of LLMs to dynamically adapt resource allocation using language prompts, potentially enabling automated policy tuning using expert and community health insights, remains unstudied (see Sec. 1). We will highlight healthcare-related works in the paper.
2) Clarify "policy and critic", "features (z)", and "buffer"
We describe the details of actor-critic algorithms in Appendix F. To improve clarity, we will move this explanation to earlier in Section 3. Our novel reward proposal loop enhances LLM-generated reward functions using RMAB simulation feedback, and is compatible with various policy learning algorithms, including the widely used actor-critic algorithms. The features describe demographic information such as age range, education level, and income; examples of features (z) are shown in Sec. 4.2, Sec. 4.4 and Sec. 5. We will provide examples earlier at the beginning of Section 4. We train each policy network using RMAB simulations following the concept of experience replay buffers first introduced by [7], widely used to train RL agents using previously observed transitions [8]. We record per-arm state, action, reward, next state, and lambda over n_s timesteps (see Alg. 1 line 11), and store these “trajectories” of arms in the buffer D, which is then used to update our actor-critic algorithms (see Alg. 1 line 13, App. F).
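As a rough illustration of the experience replay buffer usage described above, a minimal sketch follows; the class name, fields, and capacity are our own illustrative choices, not taken from the paper's code:

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity store of per-arm transitions, sampled uniformly
    to update an actor-critic learner (experience replay)."""

    def __init__(self, capacity=10_000):
        # deque with maxlen silently evicts the oldest transitions.
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, lam):
        # One per-arm transition, including the Lagrange multiplier lambda.
        self.buffer.append((state, action, reward, next_state, lam))

    def sample(self, batch_size):
        # Uniform minibatch of past transitions for a gradient update.
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))
```

Trajectories recorded over the simulation steps would be pushed into such a buffer and later sampled in minibatches during the actor-critic updates.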
3) “The DLM with reflection works better, but the comparison is not a good fair comparison. Justifying why without reflection works good or bad is necessary.”
We appreciate the reviewer’s feedback here, but are unclear on why the comparison appears unfair. Could the reviewer clarify their concerns? We refer to DLM (No Reflection) as DLM-NR below. As highlighted (see Sec. 4.5, App. C), LLMs are trained on real-world data, and can therefore recognize and retrieve features relevant to natural language prompts. This is confirmed by our empirical findings: DLM-NR outperforms the Default reward policy (see Fig. 2 and Sec. 5.4b), achieving a mean normalized reward (MNR) score of 0.87 +- 0.008 vs. a Default score of 0.57 +- 0.027, with DLM-NR significantly outperforming Default in 11/16 tasks (see Sec. 5.4c). We justify the strong performance of DLM-NR in Table 1, demonstrating more than 70% feature recall in 9/16 tasks; thus, while DLM with Reflection potentially helps fine-tune feature selection and usage (see Fig. 2), DLM-NR still captures relevant features for reward functions without reflection.
4) “Figure 2 does not clearly summarize results… not clear why Default method sometimes matches the performance of DLM (no reflection).”
We present numerical results in Tab. 4, and observe that DLM-NR significantly outperforms Default in 11/16 tasks. In the remaining tasks, while DLM-NR still performs well in feature selection for reward proposals (see Tab. 1), reflection iterations may be required to fine-tune the usage of these features (see Fig. 3). In some cases, a prompt may have a ground truth Base reward that is close to the original Default reward, yielding higher Default performance; yet even in these cases (see Tab. 4, Idx. 2 “Hindi Speakers”) we find that DLM (Reflection) can help fine-tune features and feature weights to improve over DLM-NR and Default.
5) “The baselines are not fair … [why not use] CoT, ReAct, or Reflexion”
We note that DLM is a full framework for the use of LLMs in the reward design of an RMAB. Other cited methods, such as Chain-of-Thought reasoning (CoT), improve base LLM reasoning and, unlike our proposed model, are not a full framework. The listed methods are in fact compatible with our framework; we indeed tried using CoT, but found it did not improve DLM reasoning by any significant margin (see attached PDF for results). We further emphasize that we compare against random (Fig. 2, red line) and no-action (Fig. 2, purple bar) allocation policy baselines. Thus, we compare against five other baselines in total. The ReAct and Reflexion frameworks do not specify how to choose the state space, action space, and reward function for the LLM, which are all significant design decisions that DLM solves. It is unclear how they would be directly comparable to our multi-agent RMAB reward design loop.
6) “More baseline comparisons are needed”
See (5). We respectfully disagree. As no existing RMAB reward design loop existed before our method, all other comparisons would be additions on top of the proposed DLM framework. We would also clarify that in Figure 2, our main results, we indeed compare against five other baselines: “No Action”, “Random”, “No Reflection”, “Base”, and “Default”. The “Random” baseline is the red line in Fig. 2.
7) “The level of automation is not clear… the need for human prompts reduces its practicality”
The level of automation is akin to interacting with a chatbot, requiring only prompting and evaluation. Our method is significantly more automated than human design, allowing tasks to be specified via natural language. This significantly reduces the burden that nonprofits face supporting low-resource communities by enabling rapid policy adaptability without significant additional human effort. Note: in an actual deployment (see Sec. 5.5a, Sec. 5.6 and global comment above) a human operator would approve DLM-generated policy outcomes using output state-feature distributions, providing an additional layer of system scrutiny and safety.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for responding to my questions.
1- These need to be added in the paper.
2- I still think feature z is unclear; there should be a clear and separate part for that.
3- That is what you explained for numbers 5 and 6.
4- The explanation for the results should be added to the paper to clarify in which cases this algorithm has limitations.
7- The system depends on the prompt. If the prompt is not designed well, then the reward is not accurate and consequently the response is not correct.
5, 6- I strongly believe the baselines are not aligned with the claim of the paper. If the claim of the paper is that this is the first time an LLM is used in healthcare, then the results of state-of-the-art methods using LLMs should also be in the paper as a reference. It is still not clear how much the novelty of the method is worth with respect to existing methods.
For methods like ReAct and Reflexion, the states could be the same as the states used for the current method; these methods could similarly work here. There are many works that have used these algorithms in interactive/RL settings.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their response and feedback. We agree with the reviewer’s first point, and will include and emphasize all of the above descriptions in the main paper. Specifically, we will include the additional references to work in LLMs for healthcare, and clarify the description of features (z) in a separate section emphasized with a subsection heading. We will additionally emphasize the distinction in performance between Default, DLM-NR, and DLM (Reflection), including cases where Default performance is close to DLM-NR. We note here (point 4) that a higher Default performance does not necessarily imply poorer system performance; rather, this occurs when the Base policy is aligned closely with the Default reward. We find that, even in these cases where the Default reward performs well for a new prompt, we are still able to improve upon reward function proposals through the reflection procedure.
We also thank the reviewer for their response regarding baseline comparisons. We would like to clarify that our claim is not that we provide “the first time that LLM is used in Healthcare”, but rather “the first to propose using LLMs to adapt to changing resource allocation objectives in public health through _reward design in the RMAB setting_” as stated in the contributions in Section 1. Our goal is _not_ to find how best to use LLMs in this setting, but rather to show that it is possible in the first place, _particularly in the new multi-agent RMAB setting_. This motivates comparison against baselines from the RMAB setting, such as Default and Base rewards, rather than extensive analysis of the best LLM method. We note that adapting single-agent LLM methods like ReAct and Reflexion to the multi-agent RMAB setting is nontrivial. This requires designing a feedback mechanism using multi-agent simulations to guide LLM reward function proposals via state-feature distributions, and dually enabling LLM-generated control in the RMAB setting. _The design of these elements, such as the novel feedback mechanism, is one of the primary contributions of our method, rather than improving LLM reasoning agents such as through ReAct and Reflexion_. As RMAB planners solve a complex combinatorial optimization problem and are specialized for resource allocation, we make the intentional design choice to invoke a separate RMAB planner as a tool to verify outcomes and provide feedback, the key novelty of our framework. This avoids the risks involved with LLMs taking direct actions, especially in the healthcare setting; we instead use LLMs to solve the broader challenge involving natural language and language feedback.
_Encouraged by the reviewer feedback_, we also propose to add two additional baselines. The first we highlight above (attached PDF), containing additional experiments with chain-of-thought (CoT) reasoning as mentioned in the original review. While we do not find that CoT improved the DLM reasoning in our setting by any significant margin, the results highlight that the listed method of CoT is indeed compatible with our system. The second baseline we propose is an additional noisy-expert baseline, a perturbed Base reward intended to evaluate how an imperfect operator may perform in reward design given a language prompt. This perturbed-Base reward baseline also serves to demonstrate that the problem of coefficient selection in reward function design is indeed non-trivial. We find that this noisy-expert baseline achieves a mean normalized reward (MNR) of 0.87 +- 0.006, compared to DLM (No Reflection) 0.85 +- 0.008 and DLM (Reflection) 0.92 +- 0.006, demonstrating that our method achieves comparable performance to a noisy-expert designer _zeroshot_, and can improve upon zeroshot proposals effectively with the proposed reward reflection module. | null | null | Rebuttal 1:
Rebuttal: **Thank You**
We thank the reviewers for their insightful feedback and comments. We are encouraged to find that the reviewers recognized the paper as novel, introducing "a new method in this application" (R1), and specifically highlighting its “impressive originality” (R3) and “pioneering” (R1) use of Large Language Models (LLMs) with Restless Multi-Armed Bandits (RMABs) in the public health setting, described as "a clever idea that bridges advanced AI techniques and practical decision-making in a unique way" (R3). We are also grateful that the reviewers appreciated the evaluation of our method in "a simulation environment developed from a real-world dataset," adding credibility to our findings (R2, R3). We are pleased to hear that the reviewers found the paper "very well written and fluent" (R1), with effective use of tables and figures (R3). We also appreciate that the described methodology was noted as aiding reproducibility, including that "the author provides the parameters of the model training process, which helps other researchers reproduce the process" (R2).
**Note on Ethical Considerations**
We appreciate the reviewers raising important points regarding ethical considerations of AI solutions in data consent and privacy. We would like to highlight key sections from the paper that describe our deep consideration of these ethical guidelines during the design process of this work. All of the data collected for use in our _simulated_ public health setting was gathered in close collaboration with our partner NGO, following strict dataset anonymity guidelines (see Sec. 5.1, App. B) and receiving full consent for data collection from each ARMMAN study participant (see App. C.2). In our study, we conduct a _secondary_ analysis of this dataset, representing _simulated_ transition dynamics (see Sec. 5.1, App. C.1). ARMMAN retains full ownership and control over all data; we access _only_ an anonymized version of the data through clearly defined exchange protocols including anonymization, read-only researcher access, and restricted use of data for our secondary analysis (see App. C.2), which was approved by the ARMMAN Board of Ethics (see App. C), registered with the Indian Council of Medical Research (ICMR).
With respect to the use of AI for allocation of resources in _any future deployment_, we note that the proposed system is intended for use _only_ in cases where separate, _additional_ live call resources are available to help underrepresented groups (see Sec. 5.6, App. C.3). These additional live calls provide motivation for users to listen to information in the original voice calls; they do _not_ provide any new information. We therefore do _not_ withhold any original voice call resources through our system, and consider _only_ the additional resources available for the specific use case of dynamic allocation to underrepresented groups as desired by the NGO. This ensures that all the original health voice messages are always available to all participants in the program in any future deployment scenarios (App. C.3). The ARMMAN ethics board has previously approved _deployed_ studies testing RMAB policies for these described additional call resources \[18\], providing evidence of approval of such AI allocation policies (Sec. 1); however, in these cases the RMAB objective was fixed with hand-set objectives, rather than the adaptive policy design we present.
We further note: we do _not_ deploy the proposed method to the ARMMAN program. In any envisioned deployment, we expect that health experts can monitor outcome state-feature distributions of proposed policies, allowing an ethics board to decide whether to adopt system policy suggestions (see Sec. 5.6). Additionally, we note that any potential deployment would follow an extensive real-world evaluation undertaken after securing approval from the ARMMAN Board of Ethics (see Sec. 5.1). Recognizing the significant effort required from the NGO to obtain these approvals and design and conduct such studies, we follow standard deployment protocol to proceed with field trials _only_ when simulations indicate substantial utility to our NGO partner and their ethics board, as demonstrated in this paper. We will ensure that these considerations are emphasized in the paper.
**Response Citations**
[1] Lu et al., "Medical Question Summarization...," ACM TALLIP, 2024.
[2] Caciularu et al., "Long Context Question Answering...," NAACL, 2022.
[3] Ghosh et al., "Clipsyntel: clip and LLM synergy...," AAAI, 2024.
[4] Tiwari et al., "Dr. can see...," CIKM, 2022.
[5] Sallam et al., "ChatGPT applications in medical...," Narra J, 2023.
[6] Fatani, "ChatGPT for future medical...," Cureus, 2023.
[7] Lin, "Self-improving reactive agents...," Machine learning, 1992.
[8] Zhang & Sutton, "A deeper look at experience replay...," arXiv, 2017.
[9] Killian et al., "Restless and uncertain...," UAI, 2022.
[10] Bang et al., "A multitask, multilingual, multimodal evaluation...," arXiv, 2023.
[11] Singh et al., "IndicGenBench: A Multilingual Benchmark...," arXiv, 2024.
[12] WHO, "Social Determinants of Health."
[13] OpenAI, "Prompt Engineering."
[14] OpenAI, "Related Resources Around the Web."
[15] Cole et al., "Selectively answering ambiguous questions...," arXiv, 2023.
[16] Keyvan & Huang, "How to approach ambiguous queries...," ACM CS, 2022.
[17] Min et al., "AmbigQA: Answering ambiguous open-domain questions...," arXiv, 2020.
[18] Verma et al., "Restless Multi-Armed Bandits...," AAMAS, 2023.
Pdf: /pdf/ca02bcaa5a75db32ba90d24c4b234ed83665262c.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Faster Repeated Evasion Attacks in Tree Ensembles | Accept (poster) | Summary: The paper proposes a method to speed up the robustness verification of tree-based classifiers for all examples in a dataset. Previous methods solve the robustness verification problem for tree-based classifiers, which is NP-Complete, for each instance separately. However, the paper highlights that finding adversarial examples from a set of instances of the same dataset often requires perturbing only a small subset of features, called relevant features. The paper proposes a verification algorithm that employs state-of-the-art sound and complete verification algorithms. First, it tries to generate the adversarial example from this subset of relevant features and, if it is not found, tries again, considering all the features. Moreover, the paper proposes an algorithm to find the set of relevant features such that the probability that generating an adversarial example by perturbing only those features fails, even though the adversarial example exists, is less than a threshold with a certain confidence. The experimental evaluation considers two state-of-the-art verifiers and 11 tabular datasets commonly used in robustness verification literature. The experimental results show that verifying robustness by considering only the subset of relevant features results in a speed-up of up to 12x for gradient-boosted models and 6x for Random Forests.
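The two-stage scheme described in the summary can be sketched as follows; `verify` stands in for a sound and complete verifier (e.g. Kantchelian's MILP or Veritas), and its interface is an assumption of this sketch rather than the paper's actual API:

```python
def find_adversarial_example(model, x, eps, relevant_features, verify):
    """Two-stage search: first restrict the l-inf perturbation to the
    relevant features; fall back to all features only if that fails.

    `verify(model, x, eps, features)` is assumed to return an adversarial
    example perturbing only `features`, or None if none exists under that
    restriction (features=None meaning "all features").
    """
    # Stage 1: cheap call, perturbing only the (small) relevant subset.
    adv = verify(model, x, eps, relevant_features)
    if adv is not None:
        # An adversarial example found on the restricted subset is also
        # valid for the unrestricted attacker (Theorem i in the summary).
        return adv
    # Stage 2: complete call over all features, preserving soundness and
    # completeness of the underlying verifier (Theorem ii in the summary).
    return verify(model, x, eps, features=None)
```

The speed-up comes from stage 1 succeeding on most instances, so the expensive full-feature call is only needed for the remainder.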
Strengths: **Original observation and algorithm that allow existing verifiers to speed up robustness verification of tree-based classifiers**: The observation that generating adversarial examples often requires perturbing only a subset of relevant features across instances of the same dataset is novel and leads to optimizing existing robustness verification algorithms. A disciplined approach is also provided to identify the subset of relevant features.
**Theoretical analysis of the guarantees provided by the proposed verification algorithm**: Two theorems and their proofs are provided regarding the guarantees of the verification algorithm: (i) if an adversarial example is found by perturbing only the subset of relevant features, the same adversarial example will also work when considering all the features; (ii) if an adversarial example exists, the algorithm is guaranteed to find it.
**The experimental settings considered are comprehensive and the results are significant**: The experimental evaluation is performed on several datasets and for two state-of-the-art verifiers. The results convincingly demonstrate the speed-up that can be achieved by considering only the relevant features. This is a significant result, as existing state-of-the-art verifiers struggle to verify the robustness of models based on decision trees.
Weaknesses: **Only one threat model is considered**: Even though the $\ell_\infty$ threat model is widely addressed in the literature, it would have been interesting to also consider the generalization of the proposal to other attackers, like the $\ell_1$ and $\ell_2$ attackers that are considered in the literature on tabular data [1]. The Kantchelian verifier can be used, since it also supports $\ell_1$ and $\ell_2$ attackers.
**Missing comparison with other methods to improve the efficiency of robustness verification**: Proposals in the literature have addressed the problem of speeding up robustness verification and adversarial examples generation by training models that are amenable to robustness verification. First, neural networks have been considered [2], but recently, tree-based classifiers that admit robustness verification in polytime have also been explored [3]. The authors should consider comparing their methodology with [3], which is not addressed in the related work. They should highlight the pros and cons of their proposals with respect to this other way to make robustness verification of forests more efficient.
[1] Simonetto et. al., "A unified framework for adversarial attack and defense in constrained feature space", in IJCAI 2022.
[2] Xiao et. al., “Training for faster adversarial robustness verification via inducing relu stability,” in ICLR, 2019.
[3] Calzavara et. al., "Verifiable Learning for Robust Tree Ensembles", in CCS 2023.
Technical Quality: 3
Clarity: 3
Questions for Authors: Is a set of relevant features present when generating adversarial examples against other threat models like $\ell_1$ and $\ell_2$? In other terms, does the proposed approach determine a speed-up when considering other attackers beyond $\ell_\infty$?
How does the proposed approach compare to approaches that speed up the efficiency of robustness verification by exploiting other points of view, like training models amenable to robustness verification [3]? (See the Weaknesses section for more details.)
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have explained the limitations of their approach appropriately, as claimed in the paper checklist. Furthermore, the authors have also discussed the potential negative societal impact of their work in Section 6. The experimental and implementation details have been thoroughly documented.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Why l-inf only?**
We choose to work with the l-inf norm as recent works on approximate evasion
attacks primarily focus on this scenario.
This favours the experimental evaluation, as these methods are the most efficient in the literature, while still corresponding to a perfectly reasonable scenario where the attack is defined by the magnitude of the largest perturbation. At the same time, we believe other attack models should present the same advantages when exploiting our approach, as the number of considered features is always a proxy for the hardness of the evasion.
**Missing comparisons**
We thank the reviewer for the multiple useful references, which we will include and discuss in the paper! Ensuring polynomial verifiability is an exciting and important line of work. A drawback of current approaches to polynomial verifiability is that they result in (large) decreases in predictive performance. Whether this is permissible will depend on the considered application. We would also highlight that Section C in the supplement shows results for applying our approach to GROOT forests, which are robustified ensembles (Vos \& Verwer ICML'21).
---
Rebuttal Comment 1.1:
Title: Thanks for your response
Comment: I thank the authors for their response.
It would have been interesting to see some results on your approach applied with $l_1$ or $l_2$-norm attackers, but I understand that computing these new results requires time. Additionally, since Veritas supports only $l_\infty$-attackers, this tool would have required an extension. However, I agree with your statement that the proposed approach should show advantages even when applied to these two threat models. I hope you will provide some experimental evidence on this fact in a next version of the paper.
I strongly encourage the authors to add the following to the paper:
- Acknowledgment that the choice to work only with $l_\infty$-norm attackers is a limitation of your evaluation. Although it is a popular attacker, $l_2$ and $l_1$ norms have also been adopted in the literature.
- The discussion on polynomial verifiability in the related work section, as you mentioned in your response.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response. We will absolutely add both points that you requested in the next version of the paper! | Summary: This paper studies adversarial attacks on tree ensemble models. The main contribution of this paper is to speed up (compared to *kantchelian* and *veritas*, please let me know if there is any misunderstanding) the process of crafting adversarial examples of tree ensemble models. It is claimed, under the setting of repeated adversarial attacks against the same model, that adversarial examples for tree ensembles tend to perturb a consistent but relatively small set of features, which is an interesting phenomenon. This paper advances the understanding of the adversarial robustness of tree ensemble models.
Strengths: + As its title suggests, this paper proposes a faster method (compared to *kantchelian* and *veritas*) for performing repeated adversarial attacks, which would help the development of more robust tree ensemble models. The claim is backed by comprehensive experiments.
+ The discussion in section 3 is easy to understand.
Weaknesses: + This paper does not comprehensively discuss the **related works**, which would lead to several problems:
+ To my knowledge, neural networks are the most popular models in modern machine learning. I suggest further comparing tree-ensemble models and neural networks in Section 5. Besides, the existence of adversarial examples was first discovered in image classification (using NNs). I suggest providing an example of adversarial examples (in real-world applications) for tree ensemble models. I noticed that some experiments are on vision datasets like MNIST and FMNIST. I wonder whether tree ensemble models can beat (deep) neural networks in some tasks.
+ This paper only mentions two adversarial attack methods for tree ensemble methods. I cannot properly evaluate the contribution of this paper without a comprehensive comparison with related works.
+ As mentioned in the "Questions" part, the presentation of this paper can be further improved.
+ It is confusing to use lowercase italic letters to represent algorithms and datasets.
Technical Quality: 3
Clarity: 2
Questions for Authors: + In Line 20, "generating adversarial examples is an NP-hard problem" seems unreasonable since many existing methods can efficiently craft adversarial examples. Do you mean generating adversarial examples for tree ensemble is an NP-hard problem?
+ In lines 28-29, the authors mentioned that previous works treat the clean examples in isolation while they make use of the regularities between clean examples to speed up adversarial attacks. The term "regularity" appears many times in this paper, but what is this regularity, and what is the intuition behind this idea? I think it is better to illustrate the idea of "regularities between clean examples" with a figure.
+ In Line 48, "we propose a theoretically grounded manner to quickly find this set of features". I cannot find the results corresponding to this claim.
+ What is the meaning of SAT and UNSAT? I suggest explaining the abbreviations when they first occur.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: There is a limitation statement in Lines 312-313.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **NNs vs Tree Ensembles**
Just because "neural network is the most popular model in modern machine learning" does not imply that all research should solely focus on them; we believe that diversity is also important.
Tree ensemble models like XGBoost are extremely popular, very easy to apply and still outperform NNs on tabular data (Grinsztajn et al. 2022).
We will add some references to techniques for the evasion of neural networks in Section 5. However, most state-of-the-art algorithms are now tailored to a specific model type. That is, they exploit properties of the model; e.g., Kantchelian's MILP encoding exploits the logical structure of a decision tree. Hence, there are not always strong parallels between approaches for NNs and decision trees.
**Examples of adv. ex. for tree ensembles**
Adversarial examples are defined for tabular data as well as for image classification, and can therefore be generated for both NNs and tree ensembles. Consider a bank deploying a model to accept or deny loan requests by customers. A hacker may imperceptibly alter some user's sensitive data to force a different outcome: e.g., a loan that would be rejected is accepted after subtly adding one month to the customer's work seniority. We will add this example to the paper.
**Not enough baselines**
The reviewer does not provide any concrete baselines that they would expect to be included in the analysis.
As an exact attack, we used *kantchelian*'s implementation as it proved to be faster than alternative SMT encodings (cites 9-12). We then considered several state-of-the-art approximate evasion attacks. We choose *veritas* as it is the best performing one to the best of our knowledge. In Section B.1 in the Supplement, we show how *veritas* is preferable with respect to another popular evasion method (LT attack). Thus we are unsure what the reviewer expects without specific references.
**Problem Hardness**
In line 20, we mean that Kantchelian et al., ICML 2016 (cite 20 in the paper) showed that determining whether an adversarial example exists with a certain distance epsilon is NP-complete for decision trees. Finding the nearest adversarial example for a given normal example is not a decision problem, and hence we used the term NP-hard.
The same problem has been shown to be NP-complete for other model classes such as NNs [1].
[1] Guy Katz, Clark Barrett, David L Dill, Kyle Julian, Mykel J Kochenderfer, Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks, in Computer Aided Verification: 29th International Conference, CAV 2017.
**Regularities** The term "regularity" refers to the fact that in many attacks (for different test examples) the same small number of features are perturbed to generate a valid adversarial example. This is also shown in Figure 1. We will make this clearer in the text.
**Theoretically grounded manner to identify relevant features** Section 3.2 presents an algorithm to identify a set of relevant features.
The adoption of a statistical test ensures that the extracted feature set guarantees a low enough false negative rate on the *mixed* setting.
Namely, the statistical test guarantees with a $1 - \eta$ probability that the false negative rate is below a threshold $\tau$. Both $\eta$ and $\tau$ are set by the user.
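As a hedged illustration of the kind of guarantee described above (function names and defaults are hypothetical, not the paper's actual implementation), a one-sided exact binomial test over trial attacks can certify that the false negative rate is below $\tau$ with probability $1 - \eta$:

```python
from math import comb

def binom_cdf(k, n, p):
    # P(X <= k) for X ~ Binomial(n, p)
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def fnr_certified(false_negatives, trials, tau=0.05, eta=0.05):
    # Reject H0 (FNR >= tau) when observing <= false_negatives failures
    # is improbable (probability <= eta) under p = tau; if rejected,
    # FNR < tau holds with confidence 1 - eta.
    return binom_cdf(false_negatives, trials, tau) <= eta
```

For instance, 0 false negatives over 100 trial attacks certifies FNR < 0.05 at 95% confidence, while 3 false negatives over only 20 trials does not.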
**SAT/UNSAT** The first place that SAT/UNSAT are mentioned is Line 76, which describes the three possible outcomes of an evasion attack. SAT/UNSAT are defined immediately afterwards in Lines 78-80: the attack is SAT if a valid adversarial example is generated, UNSAT if a solution does not exist, or TIMEOUT if a solution could not be found within the allowed time limit. These are commonly used abbreviations originating from mathematical logic.
---
Rebuttal Comment 1.1:
Comment: Dear authors,
I have read the rebuttal, which addresses some of my concerns. I still have some follow-up questions.
1. About NNs v.s. Tree Ensembles. **The authors seem to have not fully understood my main concern.**
To be clear, I am not saying "all research should solely focus on NNs". As mentioned in my review, adversarial examples are first observed in *vision data* (or more precisely, in the last five years, the term "adversarial examples" in the ML community often refers to vision adversarial examples first observed in 2013). Since the tree ensemble models are "very easy to apply and still outperform NNs on *tabular data*" as claimed by the authors, **there is a gap between tree ensemble models and adversarial examples.**
The hacker's example in the rebuttal helped me understand the adversarial examples in tabular data. I suggest the authors also explain **what the adversarial examples are in Table 1.** Providing some examples of the data and the corresponding adversarial examples in the datasets would help.
I also suggest the authors reconsider the choice of Figure 4. Maybe tabular data is more suitable here.
2. As for the baseline, in vision adversarial attacks, it is common to compare 10+ attack methods, while in this paper only two methods are compared. It is natural for a reader to ask whether this question (i.e., adversarial attacks against tree ensemble models) is in line with the community's taste.
3. About the Problem Hardness. According to the rebuttal, "generating adversarial examples is an NP-hard problem" in Line 20 is a term misuse. The question of "determining whether an adversarial example exists" discussed by Kantchelian et al., ICML 2016 and [1] is referred to as "robust verification/certification" in the literature, which is parallel to adversarial attack, adversarial defense, and adversarial training. The above two references cannot support the claim that "generating adversarial examples is an NP-hard problem". The authors seem to be unfamiliar with the research on the adversarial robustness of machine learning models. In vision data, in most cases, the adversary can craft an adversarial example using a 10-step PGD. Therefore, I do not believe crafting adversarial examples is a computationally hard problem. Nevertheless, this paper significantly speeds up Kantchelian and veritas. The discussion here is mainly about the presentation.
---
Reply to Comment 1.1.1:
Comment: We will respond to your points in reverse order.
3. Section 4.2 of Kantchelian et al. has a section entitled "Theoretical Hardness of Evasion".
This section studies the computational complexity of constructing what they call an evading example, which is what we refer to as an adversarial example for tree ensembles. The proof in this section of their paper shows that it is possible to cast a known NP-complete problem (3SAT) as an instance of an evasion problem for tree ensembles using a linear reduction. This has the usual implication: if one had a polynomial-time evasion algorithm, it would be possible to solve any instance of the well-known NP-complete problem in polynomial time, which would contradict the widely held assumption that P ≠ NP.
Thus Kantchelian et al. is indeed showing what we state in the paper with respect to the hardness of constructing adversarial examples for tree ensembles. We would also highlight that other tree-based evasion attack papers make similar statements about the hardness of the problem (e.g., https://arxiv.org/pdf/2010.11598).
This theoretical result is not at odds with the statement that in practice it may often be possible to easily find evading/adversarial examples, just as one can efficiently solve many 3SAT problems using modern satisfiability solvers.
2. We agree that it is absolutely OK to ask to include additional baselines in an experiment. However, "I cannot properly evaluate the contribution of this paper without a comprehensive comparison with related works" is neither a constructive nor an actionable comment. Could you please provide a specific attack that you would expect to see included in our evaluation?
While there may be 10s of approaches for NN, there are far fewer for trees. As far as we are aware, approaches developed for NNs are not directly applicable to trees. If you have specific attacks for NNs that are applicable to trees, please let us know.
As stated in our initial rebuttal, we have provided justifications for omissions in the paper. We omitted SMT-based approaches (See appendix E) because they are an exact approach like Kantchelian but do not perform as well as Kantchelian. Similarly, as discussed in B.1 we have omitted the LT attack because it has the same success rate while being an order of magnitude (or more) slower than Veritas. We are not convinced that it is meaningful to try to improve on the less performant approaches: even with improved run times one would still prefer the more performant Kantchelian and Veritas approaches in practice.
1. We will include the hacker example and further explain adversarial examples in tabular data. Another example of this in tabular data concerns domain name registrations, which is discussed in the introduction and expanded upon in our response to reviewer SynL.
The proposed method is inspired by the empirical observation that, when attacking various different samples on the same trained model, attacks on decision tree ensembles tend to focus on only a subset of the total space of features, and consistently leave many features un-perturbed for all samples. The key idea is to limit the search space of the attack on later samples to only those features which are likely to be necessary to perturb, based on earlier attacks. This speeds up the attacks on the later samples.
This paper therefore proposes an algorithm that, after fully attacking only a small number of samples (using a "base" attack algorithm that can either be a full-verification MILP solver or a heuristic technique) identifies the features most likely to be involved in an attack; specifically, it ranks all of the features by how often they are perturbed. Then, using several additional small subsets of samples, the algorithm determines, coarsely, the minimum number of features that must be considered in the search space (i.e., the cutoff in the ranking) in order for the attack success rate, perturbing only these features, to be close, with high probability, to the attack success rate when perturbing all features. Finally, the attacker attacks the rest of the samples, only perturbing the identified subset of features.
The paper considers two attack variants: the "pruned" variant which, in the final stage, *only* performs the attack on the subset of features, and the "mixed" variant, which, if the attack fails in the specified L_infinity ball when considering only the subset of features, will then "fall back" on the full-feature-space search attack. The "mixed" variant is guaranteed to have the same success rate as the full search, so only time comparisons are relevant when evaluating its success. Over a wide variety of standard datasets and decision tree ensemble models, the proposed methods are shown to produce significant speedups in attacks.
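The pruned/mixed control flow summarized above can be sketched as follows (the function names are hypothetical placeholders for the base attacks, not the paper's API; attacks are assumed to return `None` on failure):

```python
def repeated_attack(examples, pruned_attack, full_attack, mode="mixed"):
    """Attack each example on the pruned feature subspace first; in
    'mixed' mode, fall back to the full search whenever the pruned
    attack fails, so the success rate matches the full attack's."""
    results = []
    for x in examples:
        adv = pruned_attack(x)            # cheap: relevant features only
        if adv is None and mode == "mixed":
            adv = full_attack(x)          # expensive fallback search
        results.append(adv)
    return results
```

In the "pruned" mode the fallback is skipped, trading success rate for speed, which matches the two variants contrasted above.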
Strengths: - The presentation is extremely clear and precise, and the paper appears to be very technically sound.
- The problem setting is interesting, and the results are compelling.
- Assuming that this is in fact the first paper, as claimed, to consider the "high-throughput attack" setting for adversarial attacks on decision tree ensembles, then it seems to be a highly impactful result. (However, this is not my area of expertise, so there may be prior work that I am not aware of.)
Weaknesses: - The scope of interest in this work is perhaps somewhat limited: it is fairly specific to the problem of attacking many samples on a decision tree ensemble in a batch.
- The algorithm itself is simple, and empirically (rather than theoretically) motivated. However, it appears to be highly effective, so this is not a necessarily a problem.
- The Limitations section could use some work, or be omitted: it does not mention any limitations.
Technical Quality: 4
Clarity: 4
Questions for Authors: I have two suggestions for improvements to the algorithm; perhaps you could try these?
1. It seems odd to use effectively a "sparse" (L_0 constrained) adversarial attack when attacking under the L_infinity threat model. Have you considered randomly perturbing (or perhaps maximally perturbing), within the L_infinity ball, all of the "other" features that are not part of the identified sensitive-feature subset? This may give a somewhat higher success rate on some classifiers, to the extent that the classifier is sensitive to random noise, without significantly increasing attack time.
2. In the "mixed" setting defined in the paper, the entire objective of feature pruning is to reduce runtime: there is no trade-off between final success rate and runtime. Therefore, it occurs to me that during the ExpandFeatureSet loop in algorithm 2, rather than selecting the smallest feature set that is below an arbitrary FNR threshold, one could instead just select the subset size that directly minimizes the average runtime, in the "mixed" case with fall-backs to full attacks. Note that we are already running full attacks and subset attacks on these samples, so this shouldn't take any additional time. For the sake of sample-efficiency, one elegant way to approach this would be as a stochastic multi-arm bandit problem, where the "arms" are the choices of feature set sizes, and the "reward" is the negative runtime. Any existing bandit algorithm could then be applied.
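The bandit idea in suggestion 2 could look roughly like this minimal UCB1 sketch (all names are hypothetical; arms are candidate feature-set sizes and the reward is the negative measured runtime, so the arm with the lowest average runtime wins):

```python
import math
import random

def ucb1_pick_size(sizes, sample_runtime, rounds=500, seed=0):
    """UCB1 over candidate feature-set sizes. sample_runtime(size, rng)
    stands in for timing one mixed-mode attack with that subset size;
    the returned size is the one with the best empirical mean reward."""
    rng = random.Random(seed)
    counts = {s: 0 for s in sizes}
    totals = {s: 0.0 for s in sizes}
    for t in range(1, rounds + 1):
        untried = [s for s in sizes if counts[s] == 0]
        if untried:
            arm = untried[0]              # play every arm once first
        else:
            arm = max(sizes, key=lambda s: totals[s] / counts[s]
                      + math.sqrt(2 * math.log(t) / counts[s]))
        totals[arm] += -sample_runtime(arm, rng)   # reward = -runtime
        counts[arm] += 1
    return max(sizes, key=lambda s: totals[s] / counts[s])
```

Note that UCB1 formally assumes bounded rewards, so a practical version would normalize runtimes; this is only a sketch of the selection loop.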
Confidence: 2
Soundness: 4
Presentation: 4
Contribution: 2
Limitations: The limitations section is perfunctory, and could be expanded: for example, the limits on the situations in which this attack is relevant could be discussed further. Societal impacts are addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the two extremely interesting and insightful suggestions! We are in the process of exploring these and we will mention these possible variations in the final version of the paper. We agree with the reviewer and we think they could be effective in some specific use cases of our algorithm.
**Randomly/maximally perturbing non-relevant features**
This is definitely a great observation! We believe that this is interesting and we are working on implementing this procedure. Perturbing the non-relevant features should be tried when the pruned setting returns UNSAT or TIMEOUT. In these cases, the results of the pruned setting are unclear; therefore randomly perturbing non-selected features might result in a label flip and hence save the time of running a full search (if the pruned setting returns SAT, perturbing other features would not make sense: we have a valid adversarial example and changing other features could affect its predicted label). In these cases, as opposed to starting from the base example x (which would be like CUBE attack (Andriushchenko et al. NeurIPS'19)), one probably wants to start from something found by the search process.
A side effect of adding random/maximal allowed perturbations is that the generated adversarial examples will be quite different than what would be found by the base algorithms (which is not problematic).
More generally, we have noticed that while *veritas* uses l-inf it tends, even in the *full* search, to apply very sparse changes. Our intuition is that this is due to the fact that we are working with trees: each split only uses one attribute and trees have limited depth which means that going from the root to the leaf only involves a small number of attributes.
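A hedged sketch of the fallback discussed above (a hypothetical helper, not the actual implementation): when the pruned search fails, randomly perturb only the non-relevant features inside the l-inf ball around the candidate before resorting to a full search.

```python
import random

def perturb_nonrelevant(x, relevant, epsilon, seed=0):
    """Return a copy of x where every feature NOT in the relevant set
    is shifted by a random amount within the l-inf budget epsilon;
    relevant features are left untouched."""
    rng = random.Random(seed)
    return [v if i in relevant else v + rng.uniform(-epsilon, epsilon)
            for i, v in enumerate(x)]
```

The perturbed candidate would then be re-queried against the model; only if the label still does not flip would the full search be launched.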
**Just minimize runtime rather than selecting the smallest feature set**
This is also an extremely interesting point! The goal of the selected feature subset is indeed to reduce run time. The procedure proposed by the reviewer is effective and could benefit the *mixed* scenario. Our experiments hinted that a smaller feature subset is always reflected in a faster search, therefore we expect differences between the two approaches not to be consistent. However, this is a great idea and we are working to add it in the final version of the paper.
Finally, we highlight that if the user has specific needs in terms of feature selection, a customized feature selection strategy can be applied in place of the one described in 3.2, and the rest of the method can still be employed as it is.
**Limitations**
We will refine the *Limitations* sections to highlight that our approach is tailored to *repeated* evasion, and not cases where a single attack is performed.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: Thank you for considering my suggestions, and agreeing to expand the limitation section. I am keeping my original score of 'Accept'.
I was slightly confused by your comment that "Our experiments hinted that a smaller feature subset is always reflected in a faster search, therefore we expect differences between the two approaches not to be consistent." To clarify, in my suggestion, I meant to refer minimizing to the _average_ runtime, _including the time for the full-feature search in instances where the "mixed" strategy falls back to performing a full search._ Therefore, a smaller feature set should not always lead to a faster average search. (Trivially, for example, if the feature set is of size one, then nearly every instance will fall back to full search, so the total time will be approximately equal to or even slightly greater than simply performing a full search.) By minimizing total average runtime directly, this should eliminate the need for an arbitrary success threshold hyper-parameter, at least in this "mixed" case.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response! We will expand the next version of the paper mentioning the suggested variations.
As for the last point, we meant "a smaller feature subset" that still guarantees the bound on the FNR. Therefore the mentioned scenario with a feature set of size one would not be possible, as it would produce too many false negatives. The proposed alternative is now clear to us and we are working on it! | Summary: This paper proposes a new method for a faster generation of adversarial attacks on tree ensembles such as XGBoost and random forest. The proposed approach has two parts: first, a subset of relevant attributes in a tree is identified. This step is inspired by an empirical observation that on tree ensembles, most of the adversarial examples tend to perturb only a limited subset of features. Once this subset is identified, the tree is pruned accordingly and the adversarial example for this pruned tree is generated. Since generating adversarial examples for a pruned tree requires less computational time, this method can generate an adversarial example faster. Theoretically, it is shown that if an adversarial example for the pruned tree is generated, it is also going to be an adversarial example for the full tree. Experimental results on a few tabular and image datasets show that the proposed method can provide around an order of magnitude speed-up in generating an adversarial example.
Strengths: - The paper is well-written. It guides the reader well, and provides intuitions about the theoretical aspect of the work.
- The proposed method is simple and easy to implement.
- The experimental results demonstrate considerable speed-up in generating adversarial attacks.
Weaknesses: - The main weakness of this paper for me is its setting. As mentioned in Line 106, the threat model in the paper assumes that we have access to a subset of test examples during attack generation. This assumption plays a crucial role in the proposed method, as only by having a subset of test examples can we determine the attribute subset which is subsequently used for pruning the tree. In most of the adversarial example generation literature, the assumption is that the attacker can attack the model with even a single example. Considering this, it casts some doubt on the contributions of the paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Can we generate the adversarial example with only a single example?
- A figure that shows the attack success rate (y-axis) vs the number of parallel test examples (x-axis) used for attack generation would probably provide a better insight on this shortcoming.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: - I would probably encourage the authors to discuss the above limitation in their paper as well.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Assume access to a subset of test examples during attack**
We specifically look at the case where somebody wants to generate **many** adversarial examples.
There are several scenarios where this is the case. In areas like phishing (or fake webshops), attackers need to generate and register many web domains. Registrars are employing automated techniques (see cite 23) to flag suspicious registrations. In this case, the attacker would know what has and has not been approved by a registrar, hence having access to unseen test examples is usually not an issue. This scenario is briefly mentioned in the intro, but we can expand upon it.
Moreover, during model evaluation (i.e. before deployment) all existing methods for robustness checking require generating (or failing to generate) many adversarial examples, i.e., one for each example in a test set. For example, this is the case for adversarial accuracy (Andriushchenko et al. NeurIPS'19, Calzavara et al. DMKD'20, Vos \& Verwer ICML'21) and empirical robustness (Kantchelian et al. ICML'16, Chen et al. NeurIPS'19, Devos et al. ICML'21). Hence, in these relevant practical scenarios our approach would provide consistent run time improvements.
**Can we generate the adversarial example with only a single example**
Yes because the method would just fall back to the original search procedure (i.e., Kantchelian or Veritas) on the full ensemble. However, this would not be an interesting use case for our approach and there would be no benefit to using it to just generate one example.
**Figure with "number of parallel test examples" vs attacks success rate**
We are not sure we correctly interpret the reviewer's suggestion. If the "number of parallel test examples" stands for the number of examples used for the feature selection procedure described in 3.2, this number varies at each run depending on the results of the statistical test: as soon as it is guaranteed that the extracted feature subset ensures a low enough FNR, the procedure terminates, as outlined in Algorithm 2. Therefore in some datasets 100 examples are enough, and in some other cases we need up to 500 examples, making it difficult to summarize everything in a single figure. If the reviewer has a specific suggestion on how to do so, we will definitely insert it in the final version of the paper.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer SynL,
Can you please comment and indicate if your questions/concerns have been addressed by the authors' response? If you have any follow-up clarification questions, please ask your questions _as soon as possible_. We do not want to leave less than 24 hours to the authors to comment/respond to your response.
Your AC | null | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The main goal of this paper is to show that it is feasible to speed up the generation of adversarial examples against tree ensembles by perturbing only a small subset of the feature space. Their approach expands the current literature by offering an algorithmic alternative that takes advantage of the smaller subset of features instead of generating new examples from scratch. The paper shows empirical evidence from experiments on numerous datasets across different domains to show the speedup in the generation of such adversarial examples.
The key to the approach is a mixed strategy for generating examples: the algorithm first uses only the subset of features (the pruned setting), and only if it fails to deliver a satisfactory result or times out does it fall back to the full method, which will find an example if one exists. Naturally, one would ask how to determine such a subset of features; the authors propose a statistical test that keeps adding features until the false negative rate is bounded by a given threshold.
Experimentally, they tested three tree ensembles on 10 datasets that were coerced to binary classification problems. The goal was to see how quickly they could generate 10,000 adversarial examples using the proposed methodology versus the current standard full approach. Evidence suggests that pruning the ensemble to a subset of features significantly speeds up the generation of adversarial examples and helps the solution scale as tree depth increases, compared to the full approach.
Strengths: The biggest strength of this paper is that it is clearly written and easy to follow for a reader that has experience in tree-based methods for machine learning. Claims are well supported by experiments and there appears to be no significant roadblocks to reproducibility given the open nature of the datasets and that the authors plan to release code along with paper acceptance. Results are reported in both summary and detailed formats. The flow of the paper is consistent with current norms for papers in this vein of research.
Given that computational resources for large-scale ML projects are at a premium, results like this, which reduce the time needed for common tasks such as adversarial example generation, are very helpful to practitioners and other researchers. If methods like this are then implemented in common packages, they can be of significant impact to the community writ large.
Weaknesses: The largest weaknesses of this paper, to me, are its originality and overall theoretical impact on the field. This is an incremental improvement to adversarial example generation whose theoretical underpinnings lie in feature selection for trees and tree ensembles. There is significant prior work on feature selection for trees, and although applying the concept to speed up adversarial example generation is a novel implementation, the underlying idea is not particularly novel. The paper's reliance on a statistical test for feature selection introduces an element of sensitivity to parameter choices, such as the threshold for the test and the timeouts for the pruned and full methods. A detailed sensitivity analysis could help in understanding the robustness of the method to these parameter settings.
Technical Quality: 3
Clarity: 3
Questions for Authors: Do you have any data on parameter sensitivity? How sensitive is the method to the choice of parameters, such as the threshold for the statistical test and the timeouts?
How do different dataset characteristics, such as feature correlation and data sparsity, affect the performance of the proposed method? Are there specific types of datasets where the method performs particularly well or poorly?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: There are limitations on the approach’s effectiveness given different dataset characteristics. The approach works better on high-dimensional data, but to what extent? Additionally, the authors could have added some statement of the risks of using this approach to create adversarial examples that could enable evasion attacks against common systems. It may be helpful to explain the implications, or offer informed hypotheses, on whether this approach could better harden tree ensemble models and therefore reduce the threat. On another note, a limitation of this approach is the tradeoff of completeness for speed. It isn’t readily apparent to me how robust this method is to simply cherry-picking the most vulnerable feature and creating all the examples from minor perturbations in that feature. So, while the generated examples are valid, they could represent a fragile set of examples.
Additionally, the datasets used are only for binary classification tasks, is this method appropriate or are the results as significant when given tasks outside binary classification?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Originality / Impact**
The paper's impact and originality lie in the insight that we can efficiently perform evasion attacks by only perturbing a restricted set of features.
To our knowledge, there is little work on exploiting the sequential nature of performing evasion attacks, i.e., existing methods solve problems in isolation and do not exploit similarities among problems. This is useful for robustness checking of learned models (e.g., computing adversarial accuracy, cf. cites 1-4-6-8-20-28 from the paper). See also the response to Reviewer SynL for other cases this may be useful.
One difference with respect to feature selection is that traditionally one selects features prior to training a model, i.e., the model is learned with a reduced feature set. We identify features after the model is learned (perhaps this is more akin to feature importance) and use the found features to simplify the model.
**Parameter Sensitivity**
**Threshold for statistical tests**
This would be interesting. In general, the threshold on the FNR $\tau$ is inversely proportional to the number of chosen features: the larger the selected feature subset, the lower the FNR will be. This is in tension with the goal of using as small a feature subset as possible to speed up the *pruned* setting. Moreover, a larger confidence $1-\eta$ increases the number of selected features, as it shrinks the confidence interval for the empirical FNR (see 3.2).
We ran the sensitivity analysis proposed by the reviewer on a single dataset. The table below shows the *mixed* speed-up for *miniboone* (*veritas*, XGBoost) using different values for $\tau$ (FNR) and $1-\eta$ (confidence).
For all parameter settings, our approach improves upon the run time compared to always running a full search. For a fixed $\tau$, varying $1-\eta$ does not impact the selected features subset or performance. When $\tau = 0.05$, many features are selected and hence there is less pruning. $\tau=0.1$ and $\tau=0.25$ perform identically. When $\tau=0.5$ the feature subset becomes very small, and while the *pruned* setting is extremely fast there are many UNSATs and hence more calls to the *full* search. We will add a more expansive analysis in the Supplement.
| $\tau$ vs $1-\eta$ | 0.8 | 0.9 | 0.95 |
|--------------------|------|------|------|
| 0.05 | 1.5x | 1.5x | 1.5x |
| 0.1  | 2.6x | 2.6x | 2.6x |
| 0.25 | 2.6x | 2.6x | 2.6x |
| 0.5  | 1.7x | 1.7x | 1.7x |
Our experiments used a conservative approach and set the false negative rate $\tau$ below 0.25 with confidence $1-\eta=0.9$. Empirically, this worked well and we achieve better results than the theory guarantees (average FNR of 7.5\%, see Q3 in Section 4).
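For intuition, the selection loop sketched in this discussion (grow the feature subset until the upper confidence bound on the empirical FNR drops below $\tau$) can be written roughly as follows. This is an illustrative sketch only: the greedy ranking by importance, the Hoeffding-style bound, and the `fnr_of_subset` callback interface are our simplifying assumptions, not the exact statistical test of Section 3.2.

```python
import math

def select_features(importances, fnr_of_subset, tau=0.1, eta=0.1):
    """Greedily grow a feature subset until the upper confidence bound
    on the empirical false negative rate (FNR) drops below tau.

    importances:   list of (feature_id, score), higher = more relevant
                   (a hypothetical ranking, e.g. from feature importance)
    fnr_of_subset: callback returning (misses, trials) for a subset,
                   i.e. how often the pruned search fails where the full
                   search succeeds (hypothetical interface)
    """
    ranked = sorted(importances, key=lambda kv: kv[1], reverse=True)
    subset = []
    for feat, _ in ranked:
        subset.append(feat)
        misses, trials = fnr_of_subset(subset)
        fnr_hat = misses / trials
        # One-sided Hoeffding-style bound holding with confidence 1 - eta
        ucb = fnr_hat + math.sqrt(math.log(1 / eta) / (2 * trials))
        if ucb <= tau:
            break
    return subset
```

This also makes the tension discussed above visible: a smaller $\tau$ (or a tighter bound) forces more iterations of the loop and hence a larger subset.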
**Timeouts**
Given the hardness of evasion (Kantchelian et al. ICML'16), timeouts always need to be explicitly handled (see lines 75-80).
The chosen timeouts of 1 (*kantchelian*) and 0.1 (*veritas*) work for all the adopted datasets. Increasing the timeout parameter will not have a big impact, as timeouts are rare (see Table 7 in the supplement). Decreasing it would lead to too many calls to the full procedure, negating the benefits of pruning.
**Dataset Characteristics**
**How do feature correlation and data sparsity affect the performance of the proposed method?**
Given that we select a subset of relevant features, correlation between features is likely unproblematic, as redundant features will likely not be selected by the feature selection algorithm. If a certain feature is sparse, it will likely be left out of the subset of relevant features, as the signal that the model can pick up will be weaker.
**Are there specific types of datasets where the method performs particularly well or poorly?**
The method generally worked well on the diverse set of datasets we considered, comprising tabular data of various dimensionality as well as image data.
The approach would not excel when the dataset has low dimensionality (hence it does not make sense to extract a subset of features) or has a fully categorical domain (where the l-infinity norm loses meaning).
**High dimensional datasets: to what extent?**
We verified that our approach does not bring consistent run time improvements for datasets with less than 25 features.
Therefore, we considered all the popular high-dimensional benchmark datasets we could find in related works. Suggestions for extra datasets are always welcome!
**Risk Statement**
We will add a clearer risk statement. We will highlight that while this work does make attacking tree ensembles faster, being able to efficiently perform repeated evasion attacks is key to understanding what attackers can do, and to improving the applicability of robustness checking and hardening techniques, which ultimately means reducing the threat to deployed models.
**Robustness against simply modifying the most vulnerable features?**
Only modifying the most vulnerable feature would not work well in general and would indeed produce fragile examples. We identify a subset of relevant features so that on one hand we avoid losing time on "irrelevant" features, and on the other hand we still produce diverse enough adversarial examples. We discuss the quality of generated examples in Section D of the Supplement, using two different measures. In general, our method produces valid adversarial examples of sufficient quality (using *veritas*, even better than the *full* setting).
**Tasks beyond binary classification**
Most existing approaches to performing evasion attacks on tree ensembles are defined for binary classification, which is why this paper follows the same approach (cites 6-8-20-33).
However, we do not see any reason why our approach should not work even on multi-class and regression problems.
---
Rebuttal Comment 1.1:
Comment: I have received the rebuttal comments and appreciate the authors' response. In particular, thank you for performing and reporting on the sensitivity analysis for the parameters. I also agree with your statements on the dataset characteristics; thank you for the clarifications. I will continue to review and discuss with the other reviewers.
---
Reply to Comment 1.1.1:
Comment: Thank you. We will add the clarifications and include the sensitivity study in the next version of the paper! | null | null | null | null | null | null |
Improving Adaptivity via Over-Parameterization in Sequence Models | Accept (poster) | Summary: This paper investigates the influence of over-parameterization on the adaptivity and generalization of sequence models. The work highlights the significance of eigenfunctions in kernel regression and introduces an over-parameterized gradient descent method to analyze the effects of varying eigenfunction orders. Theoretical results demonstrate that over-parameterized models can adapt to the underlying signal structure and outperform traditional methods. The research shows that deeper over-parameterization further enhances model generalization. This approach provides insights into improving the flexibility and performance of neural networks, especially in practical scenarios where network architecture and initialization conditions dynamically evolve.
Strengths: The paper presents a novel approach by leveraging over-parameterization in sequence models to enhance adaptivity and generalization. This originality is evident in its creative combination of kernel regression techniques and over-parameterized gradient descent methods, which have not been explored together in this context before. Its significance lies in transforming the understanding of neural network adaptivity beyond the traditional NTK framework, providing a robust theoretical foundation for improving model performance in practical scenarios where network architecture and initialization are dynamic.
Weaknesses: One significant weakness of the paper is the limited experimental validation. While the theoretical results are compelling, the empirical evidence provided is not sufficient to fully support the claims made about the superiority of the over-parameterized models in practical scenarios. The paper would benefit from a more extensive set of experiments across various datasets and model architectures to demonstrate the robustness and generalizability of the proposed methods. Additionally, there is a need for more practical examples that illustrate how the approach can be applied in real-world settings, especially given the dynamic nature of network architecture and initialization conditions mentioned. Including these aspects would enhance the paper's contributions and practical relevance.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Can you provide more detailed theoretical justification or insights into why deeper layers contribute to better performance? Specifically, how does the introduction of additional layers influence the adaptation of eigenvalues, and are there any diminishing returns or optimal depth considerations that should be taken into account?
2. The theoretical results are promising, but the empirical validation is limited. Have you considered testing the proposed over-parameterized gradient descent method on a wider range of datasets and model architectures? How does the method perform in different domains, such as natural language processing or computer vision, compared to traditional methods and other over-parameterization techniques?
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your efforts in reviewing our paper and your comments.
We appreciate that you recognize that "the theoretical results are compelling".
We would like to address your concerns in the following:
> One significant weakness of the paper is the limited experimental validation.
>
> 2.The theoretical results are promising, but the empirical validation is limited.
We appreciate the reviewer’s concern about the experimental validation.
First, we would like to point out that while our experiments are mostly done on the sequence model,
**similar results hold for the kernel model under the non-parametric regression setting.**
We have performed additional experiments and we will include them in the new revision of the paper.
More importantly, the goal of our paper is to capture the dynamic nature of neural networks under the kernel framework
and investigate the **impact of the dynamic evolution of the kernel on the generalization performance**.
The reason why we consider a simplified model is that (1) directly analyzing a deep neural network is technically
significantly challenging;
(2) the simplified model enables us to gain a clear understanding of the dynamic evolution of the kernel and its impact
on generalization.
Moreover, the experiments included in our paper, particularly those in Appendix A, align well with our theoretical
results.
Therefore, our simplified model allows us to **gain insights and provides a solid theoretical foundation for
understanding more complicated over-parameterized models**.
From a higher perspective, our main insight is that over-parameterized models combined with gradient-based training
methods lead to a dynamically adaptive kernel that can adapt to the structure of the truth signal,
which we would like to refer to as the *"adaptive kernel" perspective*.
This perspective would be **a valuable stepping stone for understanding the generalization properties of more
complex neural networks**.
Under the guidance of this perspective, we will be able to consider more complicated over-parameterization and more
realistic models like fully connected deep neural networks.
Therefore, while our current model may not directly apply to real-world datasets in natural language processing or
computer vision, we think that our insights gained in this simplified setting will guide further research that explores
the practical implications of our findings in these and other areas.
We believe that starting from this point, we will be able to consider more realistic settings and understand more
complex models in the near future.
We hope this clarification addresses your concern and highlights the intended scope of our work.
> 1.Can you provide more detailed theoretical justification or insights into why deeper layers contribute to better
> performance? Specifically, how does the introduction of additional layers influence the adaptation of eigenvalues, and
> are there any diminishing returns or optimal depth considerations that should be taken into account?
Thank you for the insightful question. To summarize, the additional trainable parameters introduced by deeper layers
reduce the impact of misaligned initialization on the eigenvalues and enable the model to adapt more effectively to the
structure of the true signal, thereby improving generalization performance.
To provide a more detailed theoretical justification, let’s revisit the parameterization of our model: $\theta_j = a_j
b_j^D \beta_j$ for $j \geq 1$, where the initialization is given by $a_j(0) = \lambda_j^{1/2}$, $b_j(0) = b_0$,
and $\beta_j(0) = 0$. In this framework, the term $a_j b_j^D$ is responsible for learning the eigenvalues, while
$\beta_j$ captures the signal. Then, Proposition 3.4 provides key insights:
1. **For signal components (large $\theta_j^\star$)**: $a_j b_j^D$ increases to approximate $|\theta_j^\star|^{\frac{D+1}{D+2}}$ multiplied by a constant factor. This indicates that the eigenvalues become better aligned with the true signal as $D$ increases, since $\frac{D+1}{D+2}$ is an increasing function of $D$.
2. **For noise components**: If $\lambda_j \leq \epsilon^{2/(D+2)}$, $a_j b_j^D$ does not exceed its initial value
by more than a constant factor, thereby controlling the generalization error contributed by noise. A larger $D$
allows this condition to be met for more components, thus enhancing the model's generalization performance.
While deeper layers facilitate more effective adaptation to the true signal, there are considerations regarding
diminishing returns and optimal depth:
1. **Training Time and Computational Cost**: As shown in Corollary 3.3, the required training time $t \asymp
\epsilon^{-\frac{2D+2}{D+2}}$ increases with $D$, as does the computational cost due to the added layers. This may
not be desirable in practical applications where computational efficiency is a concern.
2. **Limited Benefit**: The primary benefit of deeper layers is to mitigate the impact of misaligned initialization on
the eigenvalues. However, once $D$ is sufficiently large and this impact is no longer dominant, further increasing
$D$ may not yield significant improvements.
Therefore, the optimal depth is problem-dependent, with $D$ being large enough to minimize the influence of misaligned
initialization while balancing computational considerations.
---
Rebuttal Comment 1.1:
Title: Re:
Comment: I thank the authors for responding to my comments. I'd like to increase my score based on the authors' response. | Summary: The authors analyze the generalization error for kernel regression sequence models in the overparameterized regime.
They rigorously show that, due to the overparameterization, the learned parameters are better adapted to the underlying structure.
They also provide some numerical experiments corroborating their findings (in the appendix).
Strengths: - Clear motivation and presentation
- Rigorous analysis that could be useful for further research
Weaknesses: The main concern here, is that the setting is too constrained, in that the kernel map is assumed to be fixed.
Technical Quality: 3
Clarity: 3
Questions for Authors: - The NTK is mentioned multiple times in the introduction, could you elaborate a bit more on the connection of your work to NTK?
- How would you go about approximating the (infinite sequence ) $\lambda_i$ in practice? How does this affect performance and guarantees?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Limitations have been addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your positive feedback and valuable comments.
We will address your concerns in the following:
> The main concern here, is that the setting is too constrained, in that the kernel map is assumed to be fixed.
In fact, the goal of our paper is to investigate the **impact of the dynamic evolution of the kernel on the
generalization performance**, so we are indeed considering a dynamic kernel/feature map.
We have tried to consider a family of feature maps $\Phi_\theta(\cdot)$ parameterized by $\theta$ and the model
$$
y=\Phi_\theta(x)^{T}\beta+\varepsilon
$$
It clearly includes neural networks as sub-models and could (partially) capture the dynamics of feature learning characteristic of neural networks. Investigating the most general adaptive kernel/feature models is clearly a hard task; however, some insight may be obtained from simple families of feature maps.
For kernel regression, the feature map is fixed, namely, $\Phi_\theta(x)= (\sqrt{\lambda_1}e_1(x),\sqrt{\lambda_2}e_2(x),\dots)$, where $(e_j)_{j\geq 1}$ are orthonormal functions, and the kernel is $K(x,y)=\Phi(x)^{T}\Phi(y)$.
As a first attempt to understand the benefit of dynamic feature maps, we study the slightly more complicated family $\Phi_\theta(x)^{T}=(\theta_1 e_{1}(x),\theta_2 e_{2}(x),\dots)$, where the parameters $\theta$ are learned during the training process.
However, there are some technical challenges in theoretically analyzing the differential equation associated with the family $\Phi_\theta(x)$. Thus, we simplify the analysis by transitioning to the sequence model, which is justified by the celebrated Le Cam equivalence.
Here, we point out that while the eigenfunctions are fixed, **the feature map can change by learning the eigenvalues**,
and, as shown in the main results, **this dynamic evolution of the eigenvalues can greatly improve the generalization**.
Therefore, while simplified, our model still **capture the dynamic nature of kernels/feature maps** and
provides insights and a solid theoretical foundation understanding more complicated over-parameterized models.
From a higher perspective, our main insight is that *over-parameterized models combined with gradient-based training
methods lead to a dynamically adaptive kernel that can adapt to the structure of the truth signal*,
which we would like to refer to as the *"adaptive kernel" perspective*.
Starting from this point,
we believe we will be able to explore more realistic settings and more complex models in the near future.
Moreover, as an extension of this work, with extra technicalities, we are now able to directly analyze the parameterization $\Phi_\theta(x)^{T}=(\theta_1 e_{1}(x),\theta_2 e_{2}(x),\dots)$ in Reproducing Kernel Hilbert Spaces (RKHS) (see the Future Work section). Due to the space limit, we plan to explore it further in an extended journal version of this work.
#### Questions
1. > The NTK is mentioned multiple times in the introduction, could you elaborate a bit more on the connection of your
> work to NTK?
The motivation of this work is to go beyond the fixed kernel in the NTK theory and explore how the dynamic evolution
of the kernel improves the generalization performance.
The NTK theory shows that the training dynamics of deep neural networks can be approximated by certain kernel
regression with a fixed tangent kernel if the network width tends to infinity.
However, when the width is finite, the corresponding tangent kernel can be dynamically evolving during training,
which is shown empirically in recent works.
Therefore, in our work, we go beyond the NTK theory and explore the dynamic evolution of the kernel, particularly its
impact on the
generalization performance.
Since the analysis directly over fully-connected networks is extremely challenging, we simplify the setting by
considering parameterizing the kernel only by its eigenvalues and focusing on these eigenvalues.
Our results show that such dynamic evolution of the kernel can greatly improve the generalization performance
compared to the fixed kernel.
2. > How would you go about approximating the (infinite sequence $\lambda_i$) in practice? How does this affect
> performance and guarantees?
In our setting where we parameterize the kernel by the eigenvalues, we actually choose an infinite sequence
$\lambda_i$ first as the eigenvalues and then get the corresponding kernel, so the approximation of the eigenvalues
is not needed.
The usage of "eigenvalues" is just to be consistent with the fixed kernel framework (see the connection to NTK), and
these "eigenvalues" are just the initialization of the (dynamic) kernel in our setting.
Following our main result (Theorem 3.1 and Theorem 3.2), it suffices to choose eigenvalues with a fast decay such as
$\lambda_i = i^{-4}$ so that the extra error caused by the misalignment of the initial eigenvalues and the truth
signal is small.
---
Rebuttal 2:
Comment: I thank the authors for answering my questions. I keep my score. | Summary: This paper proposed an overparameterized gradient descent method. Its benefits on generalization has been verified both theoretically and empirically.
Strengths: 1.The paper is well-written with a clear structure. Motivations are well-explained on why the authors study the problem, and the illustrative examples are helpful in understanding the motivations.
2.Theoretical results are solid and well-organized. The authors made the theoretical settings clear: definitions are well-explained and assumptions are clear. Proofs of the theory are sound as far as I read into.
3.Experiments are provided in Appendix, which can verify the theoretical conclusions.
Weaknesses: 1. Insightful explanations for why the proposed method can improve generalization are lacking.
2.While the experiments in appendix have verified the effectiveness of overparameterized gradient descent, it is recommended to compare the performance of this new method with other algorithms, such as SGD or SAM.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weakness.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: No negative social impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive feedback and recognizing the contribution of our work.
We will address your concerns in the following:
1. > The insightful explanations for why the proposed method can improve generalization are lacked.
We are sorry for not presenting this more clearly.
Generally speaking, the proposed over-parameterized gradient descent can improve generalization by adjusting the "eigenvalues" to fit the structure of the truth signal, so that it can outperform the standard gradient descent.
In Section 3.3, paragraph "Learning the eigenvalues", we further investigate why the over-parameterized gradient descent can improve generalization. Proposition 3.4 in this paragraph shows that for the signal components, the eigenvalues are learned to be at least a constant times a certain power of the truth signal magnitude, while for noise components, the eigenvalues do not exceed the initial values by more than a constant factor. Therefore, the over-parameterized gradient descent effectively adapts the eigenvalues to the truth signal while mitigating overfitting to noise, which leads to better generalization.
2. > While the experiments in appendix have verified the effectiveness of overparameterized gradient descent, it is
recommended to compare the performance of this new method with other algorithms, such as SGD or SAM.
Thank you for pointing this out.
We have conducted more experiments to compare the performance of the proposed over-parameterized gradient descent with other algorithms, such as SGD and SAM.
The results show that without over-parameterization, neither SGD nor SAM achieves the optimal generalization rate that the over-parameterized gradient descent does.
Theoretically, as far as we know, the SGD algorithm in kernel regression achieves the same generalization rate as standard gradient descent ([Lin and Volkan, 2020]), so it also suffers from the misalignment of the eigenvalues in our setting and is inferior to the over-parameterized gradient descent.
#### References
[Lin and Volkan, 2020] Lin, Junhong, and Volkan Cevher. “Optimal Convergence for Distributed Learning with Stochastic Gradient Methods and Spectral Algorithms.” Journal of Machine Learning Research 21, no. 147 (2020).
The authors highlight three key findings through mathematically stylized examples. First, they identify limitations of traditional kernel regression, including neural tangent kernel (NTK) theory, by demonstrating that the alignment (ordering) of eigenvalues can significantly impact generalization properties, even when eigenfunctions are correctly specified (Section 2). Second, by analyzing the gradient flow for a two-layer Hadamard neural network corresponding to the sequence model, they claim that this method can dynamically adjust eigenvalues to adapt to the underlying structure of the signal, achieving nearly the oracle rate with suitable regularization (Section 3.1). Finally, they show that adding depth can mitigate the impact of eigenvalue initialization, thereby improving generalization capability (Section 3.2).
Strengths: This work offers a clear and effective account of the limitations of the traditional kernel regression framework (including NTK theory) and the advantages of over-parameterized gradient descent. The simple yet concrete examples effectively support the authors' claims, and the accompanying intuitions are sensible and easy to understand. Additionally, the two main theorems (Theorem 3.1 and Theorem 3.2) are presented concisely and effectively, and the discussion in Section 3.3 is particularly helpful for interpreting the results.
Weaknesses: While this work makes significant theoretical contributions and provides a fresh perspective on the benefits of over-parameterization, there are areas where it could be improved:
**1. Narrow/Restrictive Settings:** The sequence model and the parameterization in Eq. (8) and Eq. (14), viz., the Hadamard neural network, might be too simplistic. I am curious if the insights in this paper can be extended to more realistic parameterizations, such as fully connected networks.
**2. Dynamically Evolving Kernels:** This paper addresses issues with misaligned eigenvalues and the benefits of over-parameterized gradient flow to mitigate them. However, during training, the kernel itself (and thus its eigenfunctions) may evolve, as empirically evidenced by [Seleznova and Kutyniok 2022] and [Wenger et al., 2023]. While the authors mention in Section 4 that this is left for future work, it would be helpful to understand if the eigenvalue alignment discussed in this paper can shed light on over-parameterization in practical settings beyond the stylized examples provided.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The meaning of the sentence in lines 172-175 is unclear. Could the authors elaborate on this? Additionally, it would be helpful if the sequence model were described earlier to enhance its comprehension. Perhaps the authors could include a background section or rearrange the description of the sequence model in lines 183-186 to be presented upfront, with appropriate references.
2. Regarding the adaptive choice of the stopping time discussed in lines 279-288, I can roughly follow the authors' point. However, it appears that there is an unspecified constant, which is not known a priori, needed to choose the stopping time $t \asymp \epsilon^{-\frac{2D+2}{D+2}}$, making it not practically feasible either. Thus, I wonder whether the advantage is more aligned with universality rather than practicality. Could the authors clarify this?
3. Including the numerical experiments from Appendix A into the main text would be beneficial, if possible. Additionally, could the authors design a simple experiment to support the practical relevance of eigenvalue misalignment and the discussions in this paper? For instance, would it be possible to demonstrate that it is common in real-world datasets that kernels are well-identified but eigenvalues are easily misaligned?
Typos/minor suggestions:
- Line 89: add transpose to $X$
- LIne 149: defines -> defined
- Line 205: Inspired -> Inspired by
- Line 235: shows the generalization error -> presents an upper bound for the generalization error
- Line 260: have -> present?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: This is primarily a theoretical work, and the authors discussed the potential limitations of the work and potential future research directions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your efforts in reviewing our paper and recognizing the contribution of our work.
We appreciate your constructive feedback and will address your concerns in the following:
> Narrow/Restrictive Settings
>
> Dynamically Evolving Kernels
We fully agree with you that a dynamically evolving kernel/feature would explain the superiority of neural networks, and
this is the starting point of this manuscript. We have tried to consider a family of feature maps $\Phi_θ(\cdot)$
parameterized by $θ$ and the model
$$
y=\Phi_θ(x)^{T}β+ε
$$
It is clear that this includes neural networks as sub-models and could (partially) capture the feature-learning dynamics
of neural networks. Investigating the most general adaptive kernel/feature models is clearly a hard task; however, some
insight may be obtained from simple families of feature maps.
For the kernel regression, the feature map is fixed, namely, $\Phi_θ(x)= (\sqrt{λ_1}e_1(x)
,\sqrt{λ_2}e_2(x),...)$, where $(e_j)_{j\geq 1}$ are orthonormal functions, and the kernel is $K(x,y)=\Phi(x)^{T}\Phi(
y)$.
As the first attempt to understand the benefit of dynamic feature maps,
we study the slightly more complicated family $\Phi_θ(x)^{T}=(θ_1 e_{1}(x),θ_2e_{2}(x),....)$,
where the parameters $θ$ are learned during the training process.
However, there are some technical challenges in theoretically analyzing the differential equation associated with the
family $\Phi_θ(x)$. Thus, we looked for alternative approaches.
Fortunately, the celebrated Le Cam equivalence could allow us to simplify the setting to a sequence model.
(Please see the answer of Question 1 for details)
Here, we point out that while the eigenfunctions are fixed, **the feature map can change by learning the eigenvalues**,
and, as shown in the main results, **this dynamic evolution of the eigenvalues can greatly improve the generalization**.
Therefore, while simplified, our model still holds a connection to dynamically evolving kernel/feature models and
provides insights and a solid theoretical foundation for understanding more complicated over-parameterized models.
From a higher perspective, our main insight is that **over-parameterized models combined with gradient-based training
methods lead to a dynamically adaptive kernel that can adapt to the structure of the true signal**,
which we would like to refer to as the *"adaptive kernel" perspective*.
Starting from this point,
we believe we will be able to explore more realistic settings and more complex models in the near future.
Moreover, as an extension of this work, with extra technicalities, we are now able to directly analyze the
parameterization $\Phi_θ(x)^{T}=(θ_1 e_{1}(x),θ_2e_{2}(x),....)$ in reproducing kernel Hilbert spaces (RKHS)
(see the Future Work section).
Due to the paragraph limit, we plan to explore it further in an extended journal version of this work.
### Questions
1. > The meaning of the sentence in lines 172-175 is unclear. Additionally, it would be helpful if the sequence model
> were described earlier to enhance its comprehension.
We apologize for the confusion. In these lines, we are illustrating that **kernel regression can be simplified to a
sequence model using the so-called Le Cam equivalence**.
In detail, the Le Cam equivalence states that the minimax risk of estimating a function in a reproducing kernel
Hilbert space (RKHS) is equivalent to the minimax risk of estimating a sequence of coefficients in a sequence model.
Informally, if we multiply $\Phi(x_i)$ on both sides of the equation $y_i=\Phi(x_i)^{T}β+ε_i$ and take the mean over
$i=1,...,n$, we get
$\frac{1}{n}\sum_i \Phi(x_i)y_i = \frac{1}{n}\sum_i \Phi(x_i)\Phi(x_i)^T β + \frac{1}{n}\sum_i \Phi(x_i)ε_i$.
Using the orthogonality of $e_j$'s in $\Phi(x)^{T}=(\sqrt{λ_1}e_1(x) ,\sqrt{λ_2}e_2(x),...)$ and approximating the
empirical mean with the corresponding integral, we derive $\frac{1}{n}\sum_i \Phi(x_i)y_i \approx \Lambda β+\xi$, where $\xi$ is
a vector of Gaussian noise and $\Lambda$ is a diagonal matrix with entries $\lambda_j$, so we arrive at the Gaussian
sequence model.
This equivalence allows us to focus on a more tractable model while retaining the essential characteristics needed
for our theoretical analysis. We hope this would clarify the point.
In the new revision, we will reorganize the content and make sure the model is properly introduced.
2. > Regarding the adaptive choice of the stopping time discussed in lines 279-288, I can roughly follow the authors'
> point.
We apologize for the confusion. Our point is to show that the over-parameterized gradient descent method also has the
advantage of adaptivity over the vanilla gradient descent method.
For the vanilla gradient descent, the optimal stopping time $t \asymp ε^{-(2q\gamma)/(p+q)}$ is dependent on
the unknown parameters $p$ and $q$ of the signal; in contrast, for the over-parameterized gradient descent, the
choice $t \asymp ε^{-\frac{2D+2}{D+2}}$ does not require prior knowledge of the signal's structure. Moreover,
the hidden constant in this stopping time expression only needs to be large enough, with a dependency only on
Assumption 1, a weak assumption on the span of the signal (but not its specific structure).
So, **the stopping time is adaptive in the sense that it does not require prior knowledge of the signal's specific
structure**.
3. > Including the numerical experiments from Appendix A into the main text would be beneficial, if possible. ...
Thank you for your advice. With the additional content page granted if our paper is accepted, we will include the
numerical experiments from Appendix A in the main text to enhance the readability of the paper.
For real-world datasets, we believe that the eigenvalue misalignment is a common phenomenon in practice.
*Due to the character limit, the details are given in a new comment block.*
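(Supplementary to the answer to Question 1 above.) The reduction from kernel regression to the Gaussian sequence model can be checked numerically. The following is our own minimal sketch with an arbitrarily chosen cosine eigenbasis, eigenvalues, and coefficients (none of these values come from the paper): the empirical statistic $\frac{1}{n}\sum_i \Phi(x_i)y_i$ concentrates around $\Lambda\beta$.

```python
import math
import random

random.seed(0)

# Orthonormal basis on [0, 1]: e_j(x) = sqrt(2) * cos(j * pi * x).
def e(j, x):
    return math.sqrt(2.0) * math.cos(j * math.pi * x)

lam = [1.0, 0.5, 0.25]    # kernel eigenvalues lambda_j (arbitrary choice)
beta = [1.0, -0.5, 0.3]   # true coefficients beta_j (arbitrary choice)
sigma, n = 0.1, 50_000

# Data drawn from y_i = Phi(x_i)^T beta + eps_i with Phi_j(x) = sqrt(lambda_j) e_j(x).
xs = [random.random() for _ in range(n)]
ys = [sum(math.sqrt(lam[j]) * e(j + 1, x) * beta[j] for j in range(3))
      + random.gauss(0.0, sigma)
      for x in xs]

# Empirical statistic z_j = (1/n) sum_i sqrt(lambda_j) e_j(x_i) y_i.
# By orthonormality of the e_j's, it concentrates around (Lambda beta)_j.
z = [sum(math.sqrt(lam[j]) * e(j + 1, x) * y for x, y in zip(xs, ys)) / n
     for j in range(3)]

for j in range(3):
    assert abs(z[j] - lam[j] * beta[j]) < 0.08   # z ~= Lambda beta + small noise
```

The Monte Carlo error of each coordinate is of order $n^{-1/2}$, so with $n = 50{,}000$ the statistic is well within the tolerance of the diagonal model $\Lambda\beta$.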
---
Rebuttal 2:
Title: Remaining response for Question 3
Comment: 3. > Including the numerical experiments from Appendix A into the main text would be beneficial, if possible. ...
For real-world datasets, we believe that the eigenvalue misalignment is a common phenomenon in practice.
* From the theory side, as discussed in Example 2.2 (Low-dimensional example), if the regression function of the
data exhibits an unknown low dimensional structure on the covariates, then using a fixed kernel that takes all
covariates into account would lead to misalignment of the eigenvalues.
Since the low-dimensional structure is very common in real-world datasets, the eigenvalues would be misaligned in
practice.
* To conduct experiments on real-world datasets,
we can estimate the regression function's coefficients over the eigen-basis of the kernel and check whether they
exhibit a sparse structure.
We have conducted new experiments on datasets like "California Housing" and "Concrete Compressive Strength", where we
choose the eigen-basis to be the multi-dimensional Fourier basis.
The results show that the coefficients over the eigen-basis concentrate on a few components, indicating the
sparse structure of the regression function,
so misalignment of the eigenvalues would occur.
Therefore, the eigenvalue misalignment might be very common in real-world datasets.
---
Rebuttal Comment 2.1:
Title: Response to the Authors' Rebuttal
Comment: I thank the authors for addressing my questions and concerns. Since most of my inquiries were for clarification, I trust that the authors will incorporate the additional explanations and reorganization into their revision. However, I was unable to locate the real-world dataset experiments mentioned in response to my Question 3; if the experiments have already been conducted, could the authors attach them to the global rebuttal? It would be also beneficial to include these (along with the authors' response to my Q3) in the revision.
Overall, I believe this work makes a solid theoretical contribution, though I suggest some revisions to further improve clarity regarding the scope of contributions and limitations, as well as their relation to existing work (e.g., following the authors' discussion with Reviewer EWHq and Reviewer kYgJ) and future directions. Specifically, I recommend the following among others, some of which the authors have already committed to:
(i) Properly introduce the sequence model, including a concise yet self-contained exposition of the Le Cam equivalence, potentially in the Appendix if necessary.
(ii) Further clarify the dynamic adaptivity of the kernel. While I agree with the authors that a dynamic update of eigenvalues can lead to adaptive kernel choices, even with a fixed eigenbasis, this offers limited adaptivity compared to a more general adaptive kernel/feature model, as the authors also noted. It would be helpful if the authors could elaborate on the extent to which the current approach is effective and whether they believe an extension to a more general kernel model is needed.
With that understanding, I am inclined to raise my rating to a 6.
---
Rebuttal 3:
Title: Experiment details
Comment: We would like to thank you for raising the score and providing further valuable comments on our paper.
In the new revision, we will follow your advice to improve the clarity.
For the experiments on the real-world dataset, it seems that we missed the opportunity to post a global rebuttal where we can show figures and tables.
We apologize for this. We will include the results in the new revision of the paper.
Nevertheless, we would like to describe in detail the experiments and results here.
First, as discussed in Example 2.2 (Low-dimensional example) and Example 2.3, misalignment happens when the order of the eigenvalues of the kernel mismatches the order of the coefficients of the true function.
For the experiment setup, we consider the multidimensional Fourier basis (the trigonometric functions) as the eigen-basis of the kernel,
which can be given by $e_{\mathbf{m}}(x) = \exp(2\pi i \langle{\mathbf{m},x}\rangle)$, where $\mathbf{m} \in \mathbb{Z}^d$.
For the commonly considered kernels that correspond to Sobolev spaces (see also Example 2.2), the kernel and its eigenvalues are isotropic in the sense that
$\lambda_{\mathbf{m}} \asymp (1+||{\mathbf{m}}||_2^2)^{-r}$ for some $r > 0$.
Therefore, it gives an order of the eigen-basis.
For the real-world dataset, we compute the coefficients of the regression function over the eigen-basis of the kernel by the empirical inner product.
For "California Housing" and "Concrete Compressive Strength", the dimension of $x$ is 8, so we choose $\mathbf{m}$ up to $|m_i| \leq 2$, resulting in $5^8 = 390625$ coefficients.
Then, we plot the magnitudes of the coefficients in the order given by the kernel.
If the kernel is well aligned with the true function, the coefficients should exhibit a smooth decay in this order.
However, the resulting plots show **multiple significant spikes**.
Also, among the coefficients, only very few components have large magnitudes, indicating the **sparse structure** of the regression function.
Together, these results suggest that the eigenvalues of the kernel are misaligned with the true function in these datasets.
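The diagnostic described above can be mimicked on synthetic data. The sketch below is our own toy illustration (not the authors' real-data experiment): the regression function consists of a single high-frequency Fourier mode, so its dominant empirical coefficient sits far down the isotropic Sobolev ordering $\lambda_{\mathbf{m}} \asymp (1+||\mathbf{m}||_2^2)^{-r}$, i.e., the eigenvalues are misaligned with the true function.

```python
import cmath
import math
import random

random.seed(0)
n = 20_000

# True regression function: a single high-frequency Fourier mode m* = (2, 1) on
# [0, 1]^2, f(x) = cos(2*pi*<m*, x>); its nonzero coefficients sit at m = +/-(2, 1).
m_star = (2, 1)
xs = [(random.random(), random.random()) for _ in range(n)]
ys = [math.cos(2 * math.pi * (m_star[0] * x1 + m_star[1] * x2)) for x1, x2 in xs]

# Empirical inner products c_m = (1/n) sum_i y_i * conj(e_m(x_i)), where
# e_m(x) = exp(2*pi*i*<m, x>), for all |m_1|, |m_2| <= 2.
coef = {}
for m1 in range(-2, 3):
    for m2 in range(-2, 3):
        s = sum(y * cmath.exp(-2j * math.pi * (m1 * x1 + m2 * x2))
                for (x1, x2), y in zip(xs, ys))
        coef[(m1, m2)] = abs(s / n)

# A Sobolev-type eigenvalue order ranks low ||m|| first, but the dominant
# coefficient of f sits at ||m||^2 = 5, i.e., NOT among the top-ranked modes:
# this is the eigenvalue misalignment discussed above.
assert abs(coef[(2, 1)] - 0.5) < 0.05   # cos splits into two modes of mass 1/2
assert coef[(0, 0)] < 0.05 and coef[(1, 0)] < 0.05
```

In the misaligned case, a plot of `coef` in the eigenvalue order would show a spike deep in the tail, exactly the "multiple significant spikes" pattern reported above.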
Please feel free to ask if you have any further questions or need more details on the experiments. Thank you. | null | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: Authors consider the problem of fitting data with a given kernel but with a different estimator than kernel regression, which is based on an overparameterized version of the gradient flow for the squared loss. This estimator is inspired by the behavior of first-order algorithms in the area of deep learning theory under overparameterized models. The interesting regime in their theoretical result is when the order of the coefficients of the target function (that we hope to learn) in the eigenbasis of the kernel does not match the decay in the eigenvalues of the kernel itself; in this regime, while the conventional kernel regression estimator is suboptimal and can have undesirable dimension dependencies, they show that their estimator achieves a better rate, and intuitively is able to exploit the low dimensionality and adapt the eigenvalues of the kernel to the coefficients of the target function, a task which kernel regression with a fixed kernel is unable to do. Their analysis starts by using the Le Cam theorem on the equivalence of kernel fitting and sequence models; then, instead of running the gradient flow for the L2 loss of the sequence model only on the function's coefficients, they also train some hyperparameters of the sequence model that resemble the kernel eigenvalues.
Strengths: The theoretical result is interesting and connects to two areas: first, the statistics literature on JS estimators and the benefits of alternative estimators over fitting the data in the $\ell_2$ sense (which is equivalent to kernel regression in our context), and second, the deep learning theory literature, where researchers are trying to figure out the benefits of training neural nets over fitting a fixed kernel, and what the role of overparameterization is in this regard.
Even though the algorithm is gradient flow and not computable in this paper, in a way it illustrates the benefit of training a simple overparameterized model and how it picks a kernel adaptively that works better than kernel regression with a fixed kernel. There are similar results in the deep learning theory literature, such as these:
[1] “Optimization and Adaptive generalization of Three layer Neural Networks”
[2] “What happens when SGD Reaches Zero Loss? – A Mathematical Framework”
[3] “Label noise (stochastic) gradient descent implicitly solves Lasso for quadratic parameterization”
Weaknesses: My major concern is that similar results to this one already exist in the literature, and the authors have not compared their results to them or explained how they differ from them. In particular, running gradient-based methods on overparameterized nets, such as the quadratic parameterization, is known to recover the underlying sparse solutions [3]; see also [2], Section 6.2. I am curious whether your result can imply or be phrased as a sparse-recovery-type argument for high-dimensional regression (i.e., a similar rate to lasso)? And can you quantitatively compare your end results/rates with these papers, independent of the differences between algorithms/initialization? The work of [Zhao et al., 2022], which you have already cited, also seems to have such sparse recovery results for high-dimensional regression with gradient descent run on an overparameterized model. Please also clarify the difference between your setup/rates and theirs quantitatively.
Minor comments
This work [1] also shows that training overparameterized three-layer neural nets with a specific architecture leads to the algorithm adaptively picking the kernel (instead of the NTK, where the kernel is fixed), which seems similar in flavor to your result, so it would be good to mention/compare it.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Can you compare your rate with the results in sparse recovery, if you use lasso instead of your estimator? Specifically, do you think the low-dimensional example that you mention can be viewed as a kind of sparsity that can be exploited by lasso?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The literature review/comparison with the current results still needs work; upon addressing this, I am happy to increase my score.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive feedback and valuable comments.
Following your advice, we will add the related papers that you mentioned.
> My major concern is that similar results to this one already exist in the literature and the authors have not compared
> their results to them and how they differ from them. I am curious if your result can imply or be phrased as a sparse
> recovery type argument for high dimensional regression.
Thank you for pointing out the related works and the connection between our result and sparse recovery.
Indeed, it is known that the over-parameterized nets with gradient-based methods tend to recover the sparse solutions,
as shown in the related works you mentioned and also [Hoff 2017], [Vaškevicius et al., 2019], etc., cited in our paper.
However, there are major differences on the settings and the results between our work and the related works,
which we will clarify in the following:
* **Analysis of Generalization Error**:
The papers [2,3] and also previous works like [Hoff 2017], [Gunasekar et al. 2017] are focused only on the
optimization process using gradient-based methods, showing sparsity properties of the final solution,
but **they do not consider generalization error bounds**.
In comparison, our work (and also the related works [Vaškevicius et al., 2019], [Zhao et al., 2022]) focuses further on
the generalization error of over-parameterized methods, which turns out to be a harder problem.
More specifically, the works on the optimization process often assume the absence of noise in the data and consider
the final interpolator as $t \to \infty$.
However, such an interpolator may not generalize well under the presence of noise.
In fact, as shown in our work and also [Vaškevicius et al., 2019, Zhao et al., 2022], proper early stopping is also
crucial for generalization.
Therefore, we have to analyze the full training process rather than the final convergence point to understand the
generalization error, which is a more challenging problem.
* **Linear regression vs. non-parametric regression**: The most related works
  are [Vaškevicius et al., 2019] and [Zhao et al., 2022], which considered the generalization performance of
  over-parameterized models with gradient training in the setting of high-dimensional linear regression.
  In comparison, our work considers the non-parametric regression setting in reproducing kernel Hilbert spaces (RKHS).
There are several main differences between our result and their results:
* *Problem settings*: They consider high-dimensional linear regression, where the input dimension (while growing)
is finite, whereas we consider non-parametric regression, where the input dimension is infinite.
Also, they focus on the setting with the separation of signal components and noise components, while we consider
the more general setting of signal in the sequence model, allowing the magnitudes of the signal across components
to vary continuously.
* *Over-parameterization setup*:
[Zhao et al., 2022] considers the over-parameterization setup that $\theta = a \odot b$
where the initialization is $a(0) = \alpha \mathbf{1}$ and $b(0) = \mathbf{0}$;
[Vaškevicius et al., 2019] considers $\theta = u^{\odot D} - v^{\odot D}$ with $u(0) = v(0) = \alpha \mathbf{1}$.
However, their parameterization cannot be applied to the infinite-dimensional case, since the components are treated
equally and the $\ell^2$ norm of the estimator would be infinite after any step of training.
In comparison, our work considers parameterization $\theta_j = a_j b^D_j \beta_j$ with $a_j(0) = \lambda_j^{1/2}$,
$b_j(0) = b_0$ and $\beta_j(0)=0$,
so the components are treated differently.
Therefore, we have to tackle both the infinite dimensionality and the interplay of different initializations,
which is more complicated.
* *Interpretation of the over-parameterization*: The previous works view the over-parameterization mainly as a
mechanism for implicit regularization,
while our work provides a novel perspective that over-parameterization adapts to the structure of the true signal
by learning the eigenvalues.
Our perspective could provide more insights on the benefits of over-parameterization.
In the new revision, we will add a detailed comparison and discussion with the related works in the paper.
We hope this will clarify the novelty and contribution of our work.
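To make the contrast between the parameterizations concrete, here is a toy sketch (our own illustration, not the paper's weighted infinite-dimensional parameterization) of the Hadamard-type setup $\theta = u \odot u - v \odot v$ used in [Vaškevicius et al., 2019]-style analyses: with a small identical initialization and early stopping, plain gradient descent fits the large component while keeping a noise-level component near zero, the implicit sparsity effect discussed above.

```python
# Toy sparse recovery with theta = u*u - v*v (a D = 1 Hadamard parameterization),
# fit to a noisy sequence-model observation z by plain gradient descent on
# the loss 0.5 * sum_j (theta_j - z_j)^2. All values are arbitrary choices.
z = [1.0, 0.0, 0.02]          # one true signal component, one zero, one noise-level
alpha, lr, steps = 0.01, 0.1, 300
u = [alpha] * 3
v = [alpha] * 3

for _ in range(steps):
    theta = [ui * ui - vi * vi for ui, vi in zip(u, v)]
    r = [t - zi for t, zi in zip(theta, z)]           # residuals theta - z
    u = [ui - lr * 2 * ui * ri for ui, ri in zip(u, r)]
    v = [vi + lr * 2 * vi * ri for vi, ri in zip(v, r)]

theta = [ui * ui - vi * vi for ui, vi in zip(u, v)]
# Early stopping keeps the small (noise-level) component near zero while the
# large component is fit; the zero component never moves at all, since the
# symmetric init u = v gives theta = 0 and hence a vanishing gradient there.
assert abs(theta[0] - 1.0) < 0.05
assert abs(theta[1]) < 1e-6
assert abs(theta[2]) < 0.01
```

The growth rate of each component scales with its residual, so large signal components are fit exponentially faster than noise-level ones; stopping at `steps = 300` separates the two, which mirrors the role of early stopping for generalization emphasized in the response above.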
> Can you compare your rate with the results in sparse recovery, if you use lasso instead of your estimator?
> specifically do you think the low dimensional example that you mention can be observed as some kind of sparsity that
> can
> be exploited by lasso?
Our results can be phrased for the setting of high dimensional regression with sparsity.
Taking a sparse signal $(\theta_j^*)_{j \geq 1}$ in our paper, e.g., $\theta_j^* = 1$ for $j \in S$, $|S| = s$ and
$\theta_j^* = 0$ for $j \notin S$,
we have $\Phi(\epsilon) = s$ and $\Psi(\epsilon) = 0$,
so we can find that the resulting rate is $O(s/n)$, which is the same as the rate of the lasso estimator.
For the low dimensional example in the sequence model, a JS estimator or thresholding estimator can indeed recover the
sparse signal and achieve the optimal generalization rate, as shown in Chapter 6 and 8 in [Johnstone 2017]. We believe
that a lasso-type estimator can also recover the sparse signal in this case. We think that a deeper connection between
over-parameterization and these estimators can be explored in future work.
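For intuition on the thresholding estimators mentioned above, here is a minimal sketch (ours, with arbitrary signal positions and noise level) of soft thresholding in the Gaussian sequence model with the universal threshold $\sqrt{2\log n}\,\sigma$; with this threshold a few noise components can occasionally survive, so the support check below allows a small margin.

```python
import math
import random

random.seed(0)
n, sigma = 1000, 0.1

# Sparse truth in the sequence model: s = 3 active components (positions arbitrary).
theta_star = [0.0] * n
for j in (10, 200, 777):
    theta_star[j] = 1.0

# Observations y_j = theta_j^* + sigma * xi_j (Gaussian sequence model).
y = [t + random.gauss(0.0, sigma) for t in theta_star]

# Soft thresholding at the universal threshold tau = sqrt(2 log n) * sigma.
tau = math.sqrt(2 * math.log(n)) * sigma
theta_hat = [math.copysign(max(abs(v) - tau, 0.0), v) for v in y]

support = [j for j, t in enumerate(theta_hat) if t != 0.0]
err = sum((a - b) ** 2 for a, b in zip(theta_hat, theta_star))
assert {10, 200, 777} <= set(support)   # all active components detected
assert len(support) <= 6                # at most a few spurious noise survivors
assert err < 1.5                        # squared error scales like s * tau^2, not n
```

The squared error is driven by the $s$ active components (each biased by about $\tau$), matching the $O(s/n)$-type behavior of sparse estimators discussed in the response.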
### References
*Due to character limit, please see the next comment block for references.*
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thank you for your response.
Regarding your claim in the new response that previous work in NN theory only shows structural results about the convergence of gradient-based methods and does not consider generalization: I think that is not true; e.g., reference [1] that I sent above also considers the generalization capability of the final NN weights (which achieves similar rates to what you get in classical sparse recovery). So I think I don't understand that part of your claim.
Regarding your comparison with Zhao et al., 2022: you mentioned they consider the $\ell_2$ case without weights, so it cannot be used for the infinite weighted case. Do you mean there is a fundamental barrier to generalizing their approach to cover the weighted infinite case, or is it that they just did not investigate this case in that paper?
---
Reply to Comment 1.1.1:
Comment: Thank you for your insightful comments.
**Regarding the first point**, we apologize for any miscommunication.
Our intention was to emphasize that while most research on over-parameterized models focuses on structural
results, there has been less exploration into the generalization benefits of over-parameterization. We appreciate your
reference to [1], which is indeed a significant work on the generalization capabilities of over-parameterized neural
networks that we had missed.
Our work shares some conceptual similarities with [1], particularly in viewing over-parameterization through an adaptive
kernel perspective. However, there are still differences in our approach: while [1] explores an adaptive kernel space
in the form of $G\odot K^\infty$ around the NTK space, our study examines an eigenvalue-parameterized kernel space with
a fixed eigen-basis. We believe that the diversity of approaches, including those in [1], will contribute to a deeper
understanding of the generalization properties of over-parameterized models. We will be sure to include [1] in our
revised introduction and add a detailed comparison within the paper.
**Regarding the second point**, there is indeed a fundamental difference between their approach and our approach in the weighted
infinite case. Specifically:
1. We have to track the differently weighted initialization across components;
2. We provide new insights on the benefits of over-parameterization by learning the eigenvalues, see Proposition 3.4;
3. We extend our study to deeper layers, demonstrating additional benefits, whereas Zhao et al., 2022 is limited to the
two-layer setting.
We appreciate your thorough review and the opportunity to improve our work.
---
Rebuttal 2:
Title: References
Comment: [Hoff 2017] Peter D. Hoff. Lasso, fractional norm and structured sparse estimation using a Hadamard product
parametrization. Computational Statistics & Data Analysis, 115:186–198, November 2017. ISSN 0167-9473. doi:
10.1016/j.csda.2017.06.007
[Gunasekar et al. 2017] S. Gunasekar, B. Woodworth, S. Bhojanapalli, B. Neyshabur, and N. Srebro. Implicit
regularization in matrix factorization. In Advances in Neural Information Processing Systems, volume 2017-December,
pages 6152–6160, 2017.
[Vaškevicius et al., 2019] Tomas Vaškevičius, Varun Kanade, and Patrick Rebeschini. Implicit regularization for optimal
sparse recovery, September 2019. URL http://arxiv.org/abs/1909.05122.
[Zhao et al., 2022] Peng Zhao, Yun Yang, and Qiao-Chu He. High-dimensional linear regression via implicit
regularization. Biometrika, 109(4):1033–1046, November 2022. ISSN 0006-3444, 1464-3510. doi: 10.1093/biomet/asac010.
[Johnstone 2017] Iain M. Johnstone. Gaussian estimation: Sequence and wavelet models. 2017. | null | null | null | null | null | null |
DreamSteerer: Enhancing Source Image Conditioned Editability using Personalized Diffusion Models | Accept (poster) | Summary: This paper proposes DreamSteerer, a plug-in method that enhances existing text-to-image personalization techniques by improving the editability of source images.
DreamSteerer proposes a novel Editability Driven Score Distillation (EDSD) objective to improve the structural alignment between the source and edited image by performing score distillation with respect to personalized model parameters.
It identifies and addresses mode-trapping issues in EDSD using regularization with spatial feature-guided sampling. The proposed method enhances the editability of several T2I personalization methods.
Moreover, they introduce two key modifications to the Delta Denoising Score framework to enable high-fidelity local editing with personalized concepts.
Strengths: * The problem of enhancing the editability of personalized image editing conditioned on source images is significant and underexplored. This idea is original since it modifies the score distillation methods to better align with the personalized image editing task. They provide an easy-to-use model-agnostic method for any type of personalization.
* The quantitative experiment results are solid and demonstrate the method's effectiveness over the three baselines.
* The presentation and writing are reasonably clear and easy to follow.
Weaknesses: * The method lacks comparisons with editing baselines outside of personalization. It also lacks qualitative or quantitative comparison to the cited preliminary works [8, 14]. For the images that also appear in the Custom-Edit [8] such as rows 1,2,5 in Figure 1 of this paper, the result images in the Custom-Edit [8] paper appear to have much better quality.
* The "mode shifting regularization" approach is not convincing on the effectiveness of the proposed(Figure 5). In Fig.5, the claimed improvement in editing fidelity from Fig.5 (c) to Fig. 5 (d) does not seem obvious. (also between "(f)" and "(g)" )
* The paper is missing some important metrics for evaluation. In Section 5, the paper finds that automatic metrics do not fully reflect the superior performance of the proposed method. There is no image quality assessment (IQA) metrics used such as [ref1, ref2, ref3]. Also, the structural similarity metrics globally compare the edited image to the source image, which is unsuitable in this scenario since you want to maintain the similarity in the unedited background region, and a local similarity comparison is better suited. [Please check Questions]
* The method is claimed to be efficient in the abstract section. However, the proposed method's speed and cost are not analyzed or numerically measured. The method appears to require significant computation for a single edit. It is recommended that the computational overhead be reported compared to base personalization methods like DreamBooth.
* The example images in the paper have very low resolution, especially when zoomed in.
Technical Quality: 3
Clarity: 2
Questions for Authors: * In the Method section, how is the given personalized model trained, and on which data?
* In Figure 2, what are the reference images? Are they the same as those in Figure 3?
* In Table 1, what do the user preference numbers indicate?
* What is the computational overhead compared to a base personalization method like DreamBooth? How many UNet calls are required to edit one single image?
* How would the performance of the proposed method compare with the recent personalization methods like DreamMatcher?
* The paper is missing some important recent works in personalization:
* DreamMatcher: Appearance Matching Self-Attention for Semantically-Consistent Text-to-Image Personalization (CVPR 2024)
* Style Aligned Image Generation via Shared Attention (CVPR 2024)
* Visual Instruction Inversion: Image Editing via Visual Prompting (NeurIPS 2023)
* MagicFusion: Boosting Text-to-Image Generation Performance by Fusing Diffusion Models
* References for Weakness section:
* [ref1] C. Chen, J. Mo, J. Hou, H. Wu, L. Liao, W. Sun, Q. Yan, and W. Lin. Topiq: A top-down approach from semantics to distortions for image quality assessment. IEEE Transactions on Image Processing, 2024.
* [ref2] J. Ke, Q. Wang, Y. Wang, P. Milanfar, and F. Yang. Musiq: Multi-scale image quality transformer. In Proceedings of the IEEE/CVF international conference on computer vision, 2021.
* [ref3] W. Zhang, G. Zhai, Y. Wei, X. Yang, and K. Ma. Blind image quality assessment via vision-language correspondence: A multitask learning perspective. In IEEE Conference on Computer Vision and Pattern Recognition, 2023.
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Discussed in Appendix J
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive feedbacks. Following are responses to your questions.
>**W1 Further comparison study**
We are targeting image editing with personalized concepts (which do not exist in the vocabulary); thus, editing baselines outside of personalization are not applicable to our task.
**Comparison with Photoswap[1]**: Please refer to general comment part 1 for a setting-level difference and comparison of our work with subject swapping works like Photoswap[1].
**Comparison with Custom-Edit[2]**: In Fig.1, only row 5 uses the same images and the performance is comparable with the same base personalized model, Custom Diffusion. The others (row 1 and 2) either have different source or reference images. Meanwhile, for the cat-statue example in the ablation study shown in Fig.6, we use DreamBooth, which differs from Custom-Edit. For a fairer comparison, in Fig. B (lower) of the attached PDF, we use the same base model, Custom Diffusion, to compare with Custom-Edit. The result of Custom-Edit is more similar to a subject swapping (please refer to general comment part 1), lacking in structural preservation compared to our method (e.g., missing helmet, scarf and incorrect facial structure, etc.). Please refer to general comment part 2 for more comparisons with Custom-Edit and further discussions.
>**W2 Effectiveness in Fig.5**
To emphasize the difference, we provide a higher-resolution version of Fig.5 in Fig.A of the attached pdf. Without the Mode Shifting term, the images in Fig.5f tend to lose a portion of appearance fidelity compared to Fig.5e, resulting in a hybrid appearance closer to the source image (tending to be more silver than brown). Conversely, with the inclusion of the Mode Shifting term, the images in Fig.5g maintain appearance fidelity comparable to Fig.5e while displaying patterns akin to the source image, such as the blue collar and the presence of "two cats." This shows the effectiveness of the Mode Shifting term in steering the model to enhance editability for the source image without compromising the personalized subject's appearance information.
A similar observation can be obtained by comparing Fig.5c and Fig.5d, which depict editing outcomes derived from the same prompt. Without the mode shifting term, Fig.5c adheres to the structure of the source image yet displays inaccurate appearance, such as the incorrect coloration of the cat's face.
>**W3 IQA metrics & globally computed structural metrics**
**IQA metrics:** We have conducted a user study, which is the mainstream way to assess edited image quality. We further enrich the evaluation with the mentioned Topiq, Musiq, and LIQE metrics, computed with [3]. We employed the No-Reference (NR) versions of these metrics, as they are better aligned with the requirements of image editing.
| | Topiq | Musiq | LIQE |
|------------------------|-------|--------|-------|
| Textual Inversion | .577 | 68.391 | 4.234 |
| Textual Inversion+ours | **.593** | **70.064** | **4.390** |
| DreamBooth | .604 | 70.432 | 4.350 |
| DreamBooth+ours | **.608** | **71.857** | **4.441** |
| Custom Diffusion | .591 | 69.760 | 4.240 |
| Custom Diffusion+ours | **.612** | **71.450** | **4.414** |
Our work consistently improves the overall image quality of the baselines.
**Structural similarity metrics computed globally:** Maintaining structural similarity for the unedited regions is also important, as editing may modify subject-irrelevant regions, which is undesired.
>**W4 Computational overhead**
Please refer to general comment part 3.
>**W5 Low resolution of images**
To ensure a smooth submission process, we reduced the size of the images, which resulted in a lower resolution. To present our work with the clarity and detail necessary for thorough evaluation and understanding, we will replace all figures with their original, high-resolution versions in the camera-ready version (matching the image quality in the attached pdf).
>**Q1 Training and data of personalized model**
We conduct evaluation based on publicly accessible checkpoints provided by the open repository[4]. These checkpoints, specifically for Textual Inversion, DreamBooth, and Custom Diffusion, have been trained on 16 identical concepts from the ViCo dataset. They exhibit generation fidelity that aligns with the results reported in the original papers, which ensures a fair and direct comparison across different personalization methods.
>**Q2 Reference images in Fig.2**
Apologies for the oversight regarding the reference images in Fig.2. Indeed, they are identical to those presented in Fig.3. We will ensure that Fig.2 is updated accordingly in the camera-ready version.
>**Q3 User preference numbers**
These ratings assess the overall quality of the edited images where a higher score indicates better quality. The rating criteria include (1) structure and background preservation from the source image, (2) appearance and concept preservation from the reference images, (3) overall realism and quality of the edited image. Please refer to appendix K of the main paper for details.
>**Q4 Computational overhead compared to a base method**
Please refer to the general comment part 3.
>**Q5 Compare with DreamMatcher**
DreamMatcher is an interesting concurrent work. However, its focus is on boosting generation fidelity rather than editing, which is the core of our work. Due to this fundamental difference in purpose, it is not directly comparable to our work. We have acknowledged this work in our extended literature review.
>**Q6 Missing recent works**
We have added an extended literature review on T2I personalization, where the mentioned works are discussed (response to reviewer zDhi W1).
[1] Photoswap: Personalized subject swapping in images
[2] Custom-edit: Text-guided image editing with customized diffusion models
[3] https://github.com/chaofengc/IQA-PyTorch
[4] https://github.com/KU-CVLAB/DreamMatcher
---
Rebuttal Comment 1.1:
Comment: Thank you for your reply.
The concerns were addressed and I am increasing my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your decision to increase the score and your dedicated effort in reviewing our paper! Your comments helped clarify and improve our work. | Summary: The paper introduces DreamSteerer, a method to enhance the editability of source images using personalized diffusion models in text-to-image (T2I) personalization. Existing methods often fail to maintain edit fidelity when applied to new contexts due to limited reference images and adaptability issues. DreamSteerer addresses this by introducing the Editability Driven Score Distillation (EDSD) objective, which enhances image editability. To mitigate a mode trapping issue in EDSD, the paper proposes mode shifting regularization with spatial feature guided sampling. This technique aligns the model more closely with the source image structure while preserving personalized concept appearances. Extensive experiments show that DreamSteerer significantly improves editability and efficiency across various T2I personalization baselines.
Strengths: 1. The method is designed as a plug-in compatible with arbitrary personalization baselines. This versatility enhances its applicability across various models and use cases.
2. DreamSteerer requires only a small number of fine-tuning steps (~10) to achieve significant improvements, making it computationally efficient.
Weaknesses: 1. While the paper presents a novel approach, it would benefit from a more thorough discussion regarding its innovative aspects in comparison to existing work in Object Driven Image Editing, such as DreamEdit[1] and SwapAnything[2]. It appears that similar mechanisms are already in place within these frameworks, and a detailed exploration of how the current method distinguishes itself or builds upon these concepts would be valuable.
2. A broader experimental evaluation is suggested. This should include comparative studies with existing image editing works and other Object Driven Image Editing techniques mentioned in [1,2].
3. The proposed method relies on certain heuristics, such as spatial feature guided sampling and the choice of specific parameters for mode shifting regularization. These heuristics may not generalize well across different datasets or tasks, leading to suboptimal performance in some scenarios.
[1] DreamEdit: Subject-driven Image Editing
[2] SWAPANYTHING: Enabling Arbitrary Object Swapping in Personalized Visual Editing
Technical Quality: 3
Clarity: 3
Questions for Authors: Figures 13 and 14 are the same.
Insufficient resolution of figures provided in the paper
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The paper discussed the limitations of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the valuable feedback. The following are responses to your questions.
> **W1 Comparison to Object Driven Image Editing works**
We thank the reviewer for the constructive suggestion and acknowledgement of the novelty of our method. For a detailed discussion on how our approach differs from previous works in subject swapping, including a comparison with DreamEdit[1], please refer to general comment part 1. SwapAnything[3] is also an interesting concurrent work that follows a similar setting as DreamEdit, which differs from the setting of our work. We will defer a detailed comparison with SwapAnything to future work once its code is released.
> **W2 Broader experimental evaluation**
**Comparison with different subject swapping work**: Please refer to general comment part 1 and response to W1.
**Comparison using different image editing method**:
Please refer to general comment part 2.
>**W3 Dependence on heuristics**
We thank the reviewer for raising this concern. Our method does not require careful tuning of the hyper-parameters. We use the same learning rates as the original personalization baselines. Beyond this, the same set of hyper-parameters is shared among the personalization baselines, irrespective of the concept or source image involved.
> **Q1 Figures 13 and 14 are the same. Insufficient resolution of figures provided in the paper**
* Thanks for noticing the duplicated image, we will avoid this for the camera-ready version.
* We apologize for the insufficient resolution of images in our main paper. To avoid potential network issues and ensure a smooth submission process, we reduced the size of the images, which resulted in a lower resolution. To ensure that our work is presented with clarity and detail necessary for thorough evaluation and understanding, we will replace all figures with their original, high-resolution versions for camera-ready version.
[1] DreamEdit: Subject-driven Image Editing
[2] Custom-edit: Text-guided image editing with customized diffusion models
[3] Swapanything: Enabling arbitrary object swapping in personalized visual editing
---
Rebuttal Comment 1.1:
Comment: Reviewer 2C3N, do you have any additional questions or feedback?
---
Rebuttal Comment 1.2:
Title: comment from reviewer
Comment: Thank you for answering my questions. After reading the author's rebuttal, I think the current version still needs revision in terms of (i) experiments on the sensitivity of the heuristic network to hyperparameters (the specific value of the hyperparameter λ is also missing in the implementation details), (ii) a more significant performance improvement compared to Custom-Edit, and (iii) further revisions to the current paper, such as supplementing the paper with additional details like λ and also the use of lossless compression to reduce the file size while maintaining the sharpness of the images. Therefore, I will keep my rating the same.
---
Reply to Comment 1.2.1:
Comment: Thanks for your suggestions. Following are responses to your concerns.
>**Sensitivity of hyper-parameter $\lambda$**
In all our experiments, regardless of the personalization baseline used, we set $\lambda$ to 15. Figures 15-17 in the appendix illustrate that this parameter choice reliably produces guided samples that not only retain the structural layout of the source image but also ensure the appearance fidelity of the reference subject is preserved. This level of performance is consistent across a wide variety of personalization baselines, source images, and reference images. Our approach eliminates the need for users to meticulously adjust hyperparameters to achieve significant improvements in editability. A comprehensive ablation study on $\lambda$ will be detailed in the appendix of the camera-ready manuscript.
>**Improvement compared to Custom-Edit**
The improvement compared to Custom-Edit is less significant for metrics related to the alignment with source image such as SSIM and LPIPS. This is because Custom-Edit uses Prompt-to-Prompt as the base editing model which enforces the source structural alignment with the attention injection mechanism. However, when our method is applied as a plug-in, we consistently see enhanced performance in reference appearance alignment metrics (CLIP score) and overall image quality (Topiq, Musiq, and LIQE), as illustrated in Fig.B.
Additionally, as noted in part 2 of the general comments, the Prompt-to-Prompt model used by Custom-Edit fails to produce valid edits for DreamBooth due to its reliance on a latent-state inversion process with Null-Text Inversion, which lacks editability for certain personalized models. Unlike Prompt-to-Prompt, the DDS-SM base editing method used in our main paper does not encounter this issue and is effective across various personalization approaches. Our approach, using DDS-SM as the base editing method, is not limited to a specific personalization technique; instead, it offers a versatile plug-in solution that enhances performance across a broad range of metrics for different personalization baselines.
>**Lossless compression**
Thanks for the suggestion. We will consider this for camera-ready version of the manuscript and will ensure that the presented figures have sufficient resolution that matches the quality of images in the attached pdf file.
Thanks again for your valuable suggestions. We sincerely hope you can consider re-scoring our paper. | Summary: Aiming at addressing unsatisfactory editability on the source image, this paper proposes a novel plug-in method for augmenting existing T2I personalization methods. Specifically, this framework finetunes the personalization parameters by training a novel Editability Driven Score Distillation objective under the constraint of a Mode Shifting regularization term based on spatial feature-guided samples. Extensive experiments validate the effectiveness of the proposed method.
Strengths: 1. The proposed method can be applied to current T2I personalization models to perform custom editing as a plug-in.
2. The framework DreamSteerer is efficient, requiring only a small number of fine-tuning steps (∼10) to achieve significant improvement in editability on a source image.
Weaknesses: 1. In the related work, the authors lack a discussion of recent works on T2I personalization; most of the cited works are from the first half of last year.
2. This framework introduces considerable computational load in the modules shown in Fig.3, which raises concerns about the inference efficiency of this method. It is suggested that the authors provide some explanation of the computational complexity in the experiment section.
3. There are some excellent methods for custom editing, such as [1][2]. It is recommended to compare with these strong baselines to show the effectiveness of this method.
4. As for the qualitative results in this paper, the proposed method obviously forces consistency of the object shape between the source object and target object, resulting in the destruction of the edited object's identity compared to the reference. This may not be the editing result that the user wants. It is suggested that the authors provide an analysis of this.
[1] Custom-edit: Text-guided image editing with customized diffusion models.
[2] DreamEdit: Subject-driven Image Editing.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weakness.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Although DreamSteerer achieves high-fidelity editing results, its performance is still limited by the baseline model.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive comments and questions. Following are responses to your concerns.
> **W1 Insufficient related work of T2I personalization**
Thank you for the suggestion. We provide an additional literature review related to T2I personalization mostly from this year below:
Recent studies in encoder-based personalization[1-4] propose new training paradigms to condition a Diffusion Model on single or multiple input images. Although these works enable faster inference, they necessitate extensive pre-training and typically limit application to particular domains. New forms of conditioning, objectives, or adaptors are proposed for different purposes such as stylization[5-7], improved identity preservation[8,9], composability[10-13], or generation fidelity[14]. Unlike these works, we focus on the inherently different task of improving the editability of personalized Diffusion Models.
We will include these in our revised paper.
[1] JeDi: Joint-Image Diffusion Models for Finetuning-Free Personalized Text-to-Image Generation, CVPR 2024.
[2] Instantbooth: Personalized text-to-image generation without test-time finetuning, CVPR 2024.
[3] RealCustom: Narrowing Real Text Word for Real-Time Open-Domain Text-to-Image Customization, CVPR 2024.
[4] Hyperdreambooth: Hypernetworks for fast personalization of text-to-image models, CVPR 2024.
[5] Customizing Text-to-Image Models with a Single Image Pair, arXiv 2024.
[6] Visual instruction inversion: Image editing via image prompting, NeurIPS 2023.
[7] Style aligned image generation via shared attention, CVPR 2024.
[8] When stylegan meets stable diffusion: a w+ adapter for personalized image generation, CVPR 2024.
[9] IDAdapter: Learning Mixed Features for Tuning-Free Personalization of Text-to-Image Models, CVPR 2024.
[10] Attention Calibration for Disentangled Text-to-Image Personalization, CVPR 2024.
[11] Orthogonal adaptation for modular customization of diffusion models, CVPR 2024.
[12] Omg: Occlusion-friendly personalized multi-concept generation in diffusion models, ECCV 2024.
[13] Magicfusion: Boosting text-to-image generation performance by fusing diffusion models, ICCV 2023.
[14] Dreammatcher: Appearance matching self-attention for semantically-consistent text-to-image personalization, CVPR 2024.
> **W2 Computational complexity**
Please refer to general comment part 3.
> **W3 Comparison with existing methods**
**Comparison with DreamEdit[2]**:
Please refer to general comment part 1 for a discussion of how our work differs from subject-driven editing works like DreamEdit[2]. As illustrated in Fig.C of the attached rebuttal pdf, compared with our work, DreamEdit often exhibits severe distortion or fails to maintain structural consistency with the source image.
**Comparison with Custom-Edit[1]**: Please refer to general comment part 2.
> **W4 Forcing shape consistency with source**
* Please refer to general comment part 1 for a clarification on how our work connects with rigid text-driven image editing and differs from subject swapping.
* Here we also illustrate several advantages that the feature of aligning with the source structure might offer for real-world applications:
* Enhanced creative experimentation: content creators have the freedom to explore different appearances based on different reference concepts without compromising the essence of the source image. This facilitates the exploration of diverse aesthetic outcomes with ease.
* Consistency and recognizability: in branding and media content creation, maintaining the original structure ensures that if viewers are familiar with the original content, it enhances brand recognition and ensures continuity of visual content.
* Contextual integrity preservation: in fields such as cultural heritage, aligning edits with the source structure can be crucial for maintaining the context and authenticity of visual information.
[1] Custom-edit: Text-guided image editing with customized diffusion models
[2] DreamEdit: Subject-driven Image Editing
---
Rebuttal Comment 1.1:
Comment: Reviewer zDhi, do you have any additional questions or feedback?
---
Rebuttal Comment 1.2:
Title: Official Comment by Reviewer zDhi
Comment: Thank you for your detailed reply. It addresses most of my concerns.
I will keep my rating to borderline accept.
---
Reply to Comment 1.2.1:
Comment: Thank you for your dedication and valuable input! We also appreciate your decision to maintain a borderline accept rating. Your suggestions have helped to improve and clarify our work. | Summary: DreamSteerer is a proposed fine-tuning pipeline for personalized diffusion models, designed to enhance the custom editability of these models. The authors point out that naively incorporating existing image editing and personalization methods—such as employing score distillation and DDS where the differentiable function is initialized by the source image latent space—results in low fidelity and lack of editability relative to the source image. To address these issues, they propose the following additions that comprise DreamSteerer:
1. Editability driven score distillation (EDSD), to increase the editability of the model by distilling information from the source model $\epsilon_{\phi_0}$ to the personalized diffusion model $\epsilon_{\phi}$ through a single-step perturbed source latent state
2. Spatial Feature Guided Sampling and mode shifting regularization to overcome the mode trapping issue introduced by EDSD (commonly observed in score distillation)
3. Source score bias correction and automatic subject masking to address the distribution shift from personalization, and to focus on the relevant aspects of the image during training respectively.
The authors present thorough qualitative and quantitative results across three metrics: semantic similarity, perceptual similarity, and structural similarity, along with a human preference evaluation. They demonstrate that DreamSteerer consistently improves upon existing baseline methods and include ablation studies to justify various components of the pipeline.
Strengths: Originality: The work is well-motivated and addresses a limitation in the intersection of personalization and image editing, which is a relatively underexplored area. Although DDS has been employed for general text-driven editing, the application of DDS to enhancing editability in the personalization setting is a novel approach. In addition to combining the existing ideas, the authors introduce further modifications to improve final results.
Clarity: The paper clearly explains each learning objective with detailed formulations, supported by an overarching diagram. The results are well-organized and articulated, presenting both qualitative and quantitative analyses. This structure provides a cohesive narrative that justifies the design decisions made throughout the work.
Quality: The proposed algorithm introduces a novel approach which improves over existing methods in the personalized image editing domain, and the work presents sufficient ablation studies over core components to reinforce the reliability of the proposed approach.
Significance: The proposed method for personalized image editing with few fine-tuning steps required has potential for widespread adoption. DreamSteerer's ability to maintain structural integrity while enabling personalization edits makes the work applicable across various creative domains, which may demand precise edits.
Weaknesses: The paper’s results would benefit from benchmarking on more extensive personalization datasets, such as CustomConcept101. Currently, the evaluation is based on only 16 concepts from the ViCo repository, a design decision that lacks strong justification. While the selected concepts show a reasonable level of diversity, evaluating the method across a broader set of personalized concepts would enhance the robustness and generalizability of the results.
Additionally, there are minor implementation details that could be clarified in the Experiments section. Specifically, the paper mentions using pre-trained checkpoints provided by ViCo. However, the ViCo GitHub repository does not link to checkpoints for Textual Inversion, DreamBooth, and Custom Diffusion, leading to some confusion about the source of these pre-trained checkpoints. Providing clear information on where these checkpoints were obtained would help improve the reproducibility of the work.
Technical Quality: 4
Clarity: 4
Questions for Authors: I have minor questions regarding the implementation details to clarify some confusion:
1. Source of Real-World Images: Can you provide more details on the 70 random real-world images used for editing? Were these images scraped from the web or sourced from an existing dataset? Additionally, were images containing a shared superclass used to enhance the feasibility of the editing tasks?
2. Clarification on Figure 5: In Figure 5, it’s unclear where the results in subfigures (e) to (g) originate from. Are these images based on specific source images, or are they entirely generated without any reference images, from the finetuned personalization model?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The authors have adequately addressed the limitations and potential societal impact of their work. Specifically, they describe and visualize the limitations of DreamSteerer, which can be constrained by the baseline personalization model, and suggest ways to avoid fidelity inconsistencies in future similar works.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the insightful comments and questions. Following are responses for your questions.
> **W1 Benchmarking on more extensive personalization datasets**
Thank you for your feedback. Our decision to evaluate based on 16 concepts from the open repository[1] was driven by the availability of comparable, publicly accessible checkpoints. These checkpoints, specifically for Textual Inversion, DreamBooth, and Custom-Diffusion, are trained on the same concepts within the ViCo dataset and demonstrate generation fidelity consistent with their original publications. This approach allowed us to ensure a fair and direct comparison across different personalization methods. We acknowledge the importance of evaluating our method across a broader set of personalized concepts to enhance the robustness and generalizability of our results. We plan to explore this in future work, aiming to provide a more comprehensive evaluation across additional concepts and personalization baselines.
> **W2 Pre-trained checkpoints**
Thank you for pointing out the need for clarification regarding the implementation details. We apologize for any confusion caused by the oversight in our manuscript. The pre-trained checkpoints used in our work were sourced from the DreamMatcher GitHub repository[1] and were trained on the ViCo dataset. We will update our manuscript to include this source information to enhance the reproducibility of our work.
> **Q1 Source of Real-World Images**
The images we use are collected from 3 main sources:
* DreamBench[2]
* PIE-Bench[3]
* Web data
Our selection criteria were particularly focused on images where the source subject exhibited a significant structural difference from the reference subjects, yet maintained a shared superclass with these references. Such selection criteria enable us to better evaluate the robustness of the personalized editing with significant structural misalignments between source and reference subjects. We will include these details in our updated paper.
> **Q2 Clarification on Figure 5**
Thank you for seeking clarification regarding Figure 5 in our manuscript. We understand the confusion and are happy to provide a more detailed explanation.
These figures are random generations **using different diffusion models** based on the prompt 'A photo of a sks cat standing next to a mirror', where sks corresponds to the personalized concept in Fig.5b. Specifically:
* Fig.5e is generated by a base personalized model trained on the images shown in Figure 5b.
* Fig.5f results from fine-tuning the base model with only the Editability Driven Score Distillation (EDSD) term.
* Fig.5g results from fine-tuning the base model with both the EDSD term and the Mode Shifting term.
Our analysis reveals that without the Mode Shifting term, the images in Fig.5f tend to lose a portion of appearance fidelity compared to Fig.5e, resulting in a hybrid appearance closer to the source image. Conversely, with the inclusion of the Mode Shifting term, the images in Fig.5g maintain appearance fidelity comparable to the base model (Fig.5e) while displaying patterns more akin to the source image, such as the blue collar and the presence of "two cats." This demonstrates the effectiveness of the Mode Shifting term in steering the model to enhance editability for the source image without compromising the personalized subject's appearance information. We will include this in our revised paper.
[1] https://github.com/KU-CVLAB/DreamMatcher
[2] https://github.com/nousr/dream-bench
[3] https://github.com/cure-lab/PnPInversion
---
Rebuttal Comment 1.1:
Comment: Reviewer AnAq, do you have any additional questions or feedback?
---
Rebuttal 2:
Comment: Thank you for your positive feedback and the time you invested in reviewing our work. Your suggestions helped clarify and improve our work. | Rebuttal 1:
Rebuttal: ## Global comment
We thank all reviewers for their great effort and suggestions. Some general clarifications are provided below for better understanding of both our task and our method. We will include these details in our revised paper.
> **Part 1**
***Connection with text-driven image editing***
Traditional text-driven image editing methods generally fall into two categories: rigid editing (e.g., [1,2]) and non-rigid editing (e.g., [3]).
We summarize the desiderata of rigid editing as follows:
* the overall structure of the edited image should align with the source image,
* the edited part should follow the input instruction,
* the instruction-irrelevant part should be preserved as much as possible.
Non-rigid editing focuses on changing the view or pose of a subject in the source image while preserving the background.
In this work, **we consider extending text-driven rigid editing** to editing scenarios like 'a photo of a (silver → $v*$) cat', where $v*$ represents a specific concept derived from reference images. This concept is more detailed than generic descriptions (e.g., 'brown'), capturing intricate appearance and semantic information. We provide a flexible plug-in to bridge the gap in editability between specific-concept and textual conditioning with Diffusion Models, offering a unique contribution not addressed by existing methods.
***Difference with subject swapping***
Subject swapping methods like DreamEdit[4] and Photoswap[5] diverge from our approach in their main criteria. While these methods prioritize alignment with the source subject's location and pose, they do not necessitate maintaining the original structural details as our method does. Furthermore, these works demand stricter preservation of subject identity and typically do not require a significant level of concept extrapolation as our work does, e.g., from a short cat to a tall cat. In Fig.C of the attached rebuttal pdf, we provide a comparison with DreamEdit and Photoswap in scenarios involving significant structural gaps. Compared with our work, these methods often exhibit severe distortion or fail to maintain structural consistency with the source image.
[1] Prompt-to-prompt image editing with cross attention control
[2] Instructpix2pix: Learning to follow image editing instructions
[3] Masactrl: Tuning-free mutual self-attention control for consistent image synthesis and editing
[4] Dreamedit: Subject-driven image editing
[5] Photoswap: Personalized subject swapping in images
>**Part 2**
***Comparison using different image editing method***
* We employ a variant of Delta Denoising Score as the base editing model, since this method provides stable editing results with all the personalized models we use.
* However, DreamSteerer is not restricted to a specific type of editing pipeline. We also evaluate the effectiveness of our method as a plug-in for Custom-Edit[3], which directly combines Custom Diffusion with Prompt-to-Prompt. The numerical results are as follows:
| | CLIP B/32 | CLIP L/14 | LPIPS (Alex) | LPIPS (VGG) | SSIM | MS-SSIM | Topiq | Musiq | LIQE |
|-----------------|------------|-----------|--------------|-------------|------|---------|----------|--------|-------|
| Custom-Edit | .748 | .727 | .141 | .210 | .793| .899| .564 | 67.942 | 3.974 |
| Custom-Edit+ours | **.750** | **.729** | .141 | **.209** | .793| .899 | **.565** | **67.948** | **3.981** |
* As shown in Fig.B (upper) of the attached pdf, Custom-Edit might not adequately preserve the appearance information of the reference subject. Our proposed method can effectively improve the reference subject appearance preservation when implemented as a plug-in, without compromising the structural alignment with the source image.
* Meanwhile, as shown in Fig.D of the attached pdf, combining Prompt-to-Prompt with DreamBooth can introduce significant appearance artifacts in the edits, which is the main reason we did not use it as the base editing method. Prompt-to-Prompt relies on a source latent state inversion process, typically through Null-Text Inversion (NTI). However, parameter updates during personalization can shift the model distribution for the source class, compromising the editability of the inverted latent state chain with NTI. In comparison, the Delta Denoising Score based editing method employed in our work does not require an inversion process, providing more robust performance across different types of personalization baselines. We believe this phenomenon is worth further investigation and encourage future works to develop new inversion techniques specifically tailored for personalized models.
>**Part 3**
***The computational cost of DreamSteerer***
The total sampling & fine-tuning time of DreamSteerer is ~2 minutes on a single V100 GPU, which comes from two parts: (1) spatial feature guided sampling: this part is tuning-free and contains a DDIM inversion process on the single source image and a guided generation process. With a DDIM scheduler step size of 50, this process takes ~1 minute on a single V100 GPU; (2) the EDSD fine-tuning of the personalized model parameters: this process requires only 10 optimization steps with a batch size of 1 and a gradient accumulation step of 10. The fine-tuning time depends on the exact base personalized model; for the largest model, DreamBooth, which uses full fine-tuning, it takes ~1 minute. We provide a comparison of the total number of UNet prediction steps (NFE) between our fine-tuning strategy and the baseline personalized models.
| | NFE |
|-------------------|------|
| Textual Inversion | 5000 |
| DreamBooth | 1600 |
| Custom Diffusion | 2000 |
| Ours | 200 |
The fine-tuning and inference of the whole pipeline can also be run on a single 3090 GPU, even for the largest base model, DreamBooth.
Pdf: /pdf/552a86eab161be67fa6bd88a33bda30b3539b997.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Improving Environment Novelty Quantification for Effective Unsupervised Environment Design | Accept (oral) | Summary: After rebuttal
I have upped my score to 7. I think this paper is good, as it makes a small, easy to implement and simple to understand change to existing UED methods, and it delivers improved empirical results.
-----
This paper uses a GMM-based method to quantify novelty of levels in UED. It uses the state-action distribution (unordered, i.e., not trajectory distribution) induced by the agent on a level and fits a GMM to the data from previously sampled levels. A new level's novelty can be determined by computing the likelihood of its induced state-action distribution under the fitted GMM.
Strengths: - The idea of using state-action distributions to represent a level is not new, but it makes sense.
- Using a fitted model and computing the likelihood using this also makes a lot of sense, compared to computing a pairwise novelty score between two levels.
- Having the ability to have different numbers of GMM kernels is an interesting idea, and seems to provide a good way of allowing complexity to be dynamically altered.
- Results demonstrate that the new method performs better than PLR/ACCEL, and the change in code is small.
Weaknesses: - While the focus on unordered transition tuples is mentioned as a benefit, I would think that there are certain environments where the temporal nature of trajectories is important. Could you comment on this please?
- Minor
- Line 88, for minimax one also often cites [1]
- The $\tau$'s beneath the max/expectation in equation (1) don't have the A and P superscripts.
[1] Pinto, Lerrel, et al. "Robust adversarial reinforcement learning." _International Conference on Machine Learning_. PMLR, 2017.
Technical Quality: 3
Clarity: 4
Questions for Authors: - Algorithm 1, please indicate what the blue text means (I assume changes to ACCEL but being clear on this would be helpful.)
- Please provide aggregate results averaged over all of the minigrid eval. levels. It is hard at a glance to see how the algorithms compare.
- Do the GENIE generated levels actually look more diverse than those generated by e.g. ACCEL? Please show a sampling of levels.
- In table 1, PLR-GENIE has the most state-action coverage, but performs much worse than the ACCEL-based methods. This seems to contradict the claim that the increased diversity is causing better performance. Could you explain this please?
- There may be some confusion between your GENIE and [1]
- For car racing you do use images, is the image simply flattened? Could this have problems with e.g. not being translationally invariant?
[1] Bruce, Jake, et al. "Genie: Generative interactive environments." _Forty-first International Conference on Machine Learning_. 2024.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: - I think putting limitations in the main paper would be better. Figure 5 feels not super necessary and can be moved to the appendix if space is an issue.
- Some other limitations I can think of, please comment/explain
- Choosing the range of $K$ (number of GMM) kernels can be challenging?
- Do I understand correctly that you just concatenate the observation and actions to form $x$, which is then used to fit the GMM? How can this scale to much larger observations?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's careful attention to detail and the positive feedback provided. The following clarifications will hopefully address your concerns and strengthen the case for our paper:
### Weaknesses
**Q: While the focus on unordered transition tuples is mentioned as a benefit, I would think that there are certain environments where the temporal nature of trajectories are important. Could you comment on this please?**
A: Reviewer kpCs brought up the same point and we have answered it above (see question 1 in the discussion with Reviewer kpCs). TL;DR: we agree, and GENIE's design choice of prioritising individual transition tuples is flexible enough to accommodate non-Markovian states.
**Q: Minor: 1. Line 88, for minimax one also often cites [1]; 2. The $\tau$'s beneath the max/expectation in equation (1) don't have the A and P superscripts.**
A: We appreciate the sharp eye; we have revised both points.
### Questions
**Q: Algorithm 1, please indicate what the blue text means (I assume changes to ACCEL but being clear on this would be helpful.)**
A: Thank you, we have made this clearer in the revised manuscript.
**Q: Please provide aggregate results averaged over all of the minigrid eval. levels. It is hard at a glance to see how the algorithms compare.**
A: Figure 4a is what you are looking for. The IQM and Optimality Gap are metrics introduced by the rliable library (Agarwal et al., 2021) used for fair aggregation of performances across different tasks (levels) and comparing the performance of algorithms.
**Q: Do the GENIE generated levels actually look more diverse than those generated by e.g. ACCEL? Please show a sampling of levels.**
A: Answered in our global rebuttal. The sections "GENIE Introduces Level Complexity", "Low Regret but High Novelty Levels Provide Interesting Experiences" and their accompanying plots visually demonstrate the diversity of the levels generated by GENIE.
**Q: In table 1, PLR-GENIE has the most state-action coverage, but performs much worse than the ACCEL-based methods. This seems to contradict the claim that the increased diversity is causing better performance. Could you explain this please?**
A: It is important to note that there are fundamental differences between the curriculum generation mechanisms of ACCEL and PLR that are outside of GENIE's control. ACCEL's mechanism initiates the curriculum with "easy" levels (e.g. a Minigrid with no walls and only the goal) and leverages minor edits (mutation) to gradually introduce complexity to the levels. In contrast, PLR relies on domain randomization (DR) to generate new levels. DR lacks the fine-grained control over difficulty progression that ACCEL's mutation-based method offers. As a result, even though GENIE exposes the PLR agent to a wider coverage of state-action pairs, the PLR teacher does not present these experiences to the student in an order that facilitates optimal learning. The inherent difference in curriculum generation mechanism between the two algorithms (i.e. ACCEL and PLR) admits a significant difference in performance from the get-go that cannot be recovered by GENIE. To summarize, GENIE enhances the state-action space coverage for both ACCEL and PLR, but ACCEL's gradual complexity introduction mechanism simply capitalizes on that better.
**Q: There may be some confusion between your GENIE and [1]**
A: Acknowledged in global rebuttal.
**Q: For car racing you do use images, is the image simply flattened? Could this have problems with e.g. not being translationally invariant?**
A: All algorithms incorporate a CNN model over the images, in accordance with previous UED literature. We thank you for bringing up this point and we have included a small section in the appendix to make this clear.
**Q: Choosing the range of $K$ (number of GMM) kernels can be challenging?**
A: Actually, $K$ does not need to be constrained to a fixed range and can be adapted online. Metrics like the silhouette score (which we used), or alternatives such as AIC/BIC, provide a score for the fit of the GMM, and Bayesian methods can be applied to search for the $K$ (not bounded to a range) that fulfils a desired threshold of the metric. The reason we used a fixed range for $K$ in GENIE is simply that we found a range of 6-15 kernels already provided significant improvements to the GENIE-augmented algorithms and did not necessitate further optimization.
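To make the adaptive selection concrete, here is a minimal, hedged sketch of the kind of BIC sweep described above (illustrative scikit-learn code on toy data; this is not GENIE's actual implementation, and the helper name is hypothetical):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Toy stand-in for state-action data: two well-separated 2-D clusters.
rng = np.random.default_rng(0)
data = np.vstack([
    rng.normal(loc=-3.0, scale=0.3, size=(200, 2)),
    rng.normal(loc=+3.0, scale=0.3, size=(200, 2)),
])

def select_k_by_bic(x, k_range):
    """Return the number of GMM components in k_range with the lowest BIC."""
    best_k, best_bic = None, np.inf
    for k in k_range:
        gmm = GaussianMixture(n_components=k, random_state=0).fit(x)
        bic = gmm.bic(x)
        if bic < best_bic:
            best_k, best_bic = k, bic
    return best_k

k = select_k_by_bic(data, range(1, 6))
```

On this toy data the sweep recovers the two underlying clusters; in practice the same loop could be rerun periodically to adapt $K$ online as the replay buffer evolves.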
**Q: Do I understand correctly that you just concatenate the observation and actions to form $x$, which is then used to fit the GMM? How can this scale to much larger observations?**
A: Yes, your understanding is correct. Regarding the dimensionality issue, it can be easily remedied: dimensionality reduction techniques such as Principal Component Analysis (PCA) or learned autoencoders can be employed to scale down large observation spaces without significant loss of information. This direction is also mentioned in the "Future Work and Limitations" section of the appendix of the main paper. High-dimensional state spaces present issues for deep learning methods in general, and remedies used by the policy algorithm to scale down the state space can be applied in parallel for the fitting of GMMs in GENIE.
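As a hedged sketch of how PCA could be paired with the GMM-likelihood novelty score (illustrative scikit-learn code on synthetic data, not GENIE's implementation; the `level_novelty` helper is hypothetical):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Hypothetical high-dimensional (s, a) features from previously replayed levels.
replay_features = rng.normal(size=(500, 64))

# Scale down with PCA before fitting the GMM, as suggested above.
pca = PCA(n_components=8).fit(replay_features)
gmm = GaussianMixture(n_components=3, random_state=0).fit(
    pca.transform(replay_features))

def level_novelty(gmm, pca, level_features):
    """Novelty = negative mean log-likelihood of a level's (s, a) features."""
    return -gmm.score(pca.transform(level_features))

familiar = rng.normal(size=(50, 64))             # same distribution as replay
unfamiliar = rng.normal(loc=5.0, size=(50, 64))  # shifted distribution

nov_familiar = level_novelty(gmm, pca, familiar)
nov_unfamiliar = level_novelty(gmm, pca, unfamiliar)
```

The unfamiliar level's features lie far from the fitted mixture even after projection, so they receive a higher novelty score than the familiar ones.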
---
Rebuttal Comment 1.1:
Title: Discussion
Comment: Thank you for your detailed response!
A few more follow ups
- For car racing I meant specifically how the states are input into the GMM and not the policy. Do you use a CNN as a preprocessing step before the states are passed to the GMM/do you use the agent's CNN, or do you just flatten the obs and pass it to the GMM?
- Could you please expand a bit more on how you can incorporate a temporal relationship between states? Naively the thing to do would be to effectively frame stack to get an augmented state space. Are there other ways?
- Relatedly, the policy's representation does not have to be the same as GENIE's, right? So you could framestack for one but not the other?
- Then regarding the increased state coverage. If I understand correctly, you are saying the order & diversity matters, and not diversity alone? In that case I think rephrasing the discussion in l297-l300 would be beneficial, as I at least did not get that impression from reading it.
---
Reply to Comment 1.1.1:
Title: Replying to Reviewer jT1H
Comment: Thank you for the very prompt response!
**Q: For car racing I meant specifically how the states are input into the GMM and not the policy. Do you use a CNN as a preprocessing step before the states are passed to the GMM/do you use the agent's CNN, or do you just flatten the obs and pass it to the GMM?**
A: For the Car Racing domain, we are also using the CNN-preprocessed states for the GMM.
**Q: Could you please expand a bit more on how you can incorporate a temporal relationship between states? Naively the thing to do would be to effectively frame stack to get an augmented state space. Are there other ways?**
A: There is active research regarding incorporating temporal information between states. Frame-stacking as you mentioned, is a simple method to incorporate temporal information for pixel environments. Other general methods would be to include recurrent or attention layers. For more specific hierarchical RL methods, temporal abstractions via decomposing sequences into simpler sub-tasks and operating over different temporal scales can be considered.
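The frame-stacking option mentioned above can be sketched in a few lines (an illustrative numpy snippet, not GENIE's code; the padding convention is an assumption):

```python
import numpy as np

def frame_stack(observations, k):
    """Concatenate each observation with its k-1 predecessors.

    The earliest frames are padded by repeating the first observation,
    a common convention for frame stacking.
    """
    obs = np.asarray(observations)
    padded = np.concatenate([np.repeat(obs[:1], k - 1, axis=0), obs], axis=0)
    return np.stack([padded[i:i + k].reshape(-1) for i in range(len(obs))])

# Four 2-D observations stacked with window k=2 give 4-D augmented states.
states = frame_stack(np.arange(8).reshape(4, 2), k=2)
```

Augmented states like these could then be fed to the GMM in place of the raw *(s, a)* tuples, giving it a window of local temporal context without changing the rest of the pipeline.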
**Q: Relatedly, the policy's representation does not have to be the same as GENIE's, right? So you could framestack for one but not the other?**
A: That is correct. The representation used in GENIE is flexible and really depends on the degree of abstraction/specificity the developer is interested in regarding novelty in states.
**Q: Then regarding the increased state coverage. If I understand correctly, you are saying the order \& diversity matters, and not diversity alone? In that case I think rephrasing the discussion in l297-l300 would be beneficial, as I at least did not get that impression from reading it.**
A: Not quite. Our use of the word "order" in different contexts can admittedly be confusing, but we appreciate the opportunity to clarify. In Lines 184-187, we expressed a key strength of GENIE being its prioritization of diversity in individual induced experiences of the environment, independent of the order in which they are presented. In this context, "order" is **intra**-environment and refers to the **sequence of experiences (state-action pairs)** presented by a **single environment**.
For our previous response regarding the differences in ACCEL and PLR, we were talking about "order" in the **inter**-environment context, more specifically how the **UED algorithm's teacher** presents the **sequence of environments** (i.e. the curriculum). GENIE enhances the diversity of individual experiences throughout all the environments in the entire curriculum, but the sequence in which these diverse environments are presented in the curriculum depends on the underlying algorithm. The reason ACCEL performs better than PLR is that the former bootstraps its curriculum with simple levels (e.g. empty mazes) and gradually increases complexity via minor mutations, whereas the latter simply uses domain-randomized levels throughout.
We hope the above clarifies the confusion. If not, we're happy to provide more specific examples and clarifications. | Summary: This paper focuses on the unsupervised environment design (UED) problem, whereby a student trains in an adaptive curriculum of environments proposed by a teacher. The authors propose GENIE, a method for assessing novelty of environments, which essentially means the teacher prioritizes environments with high exploration potential/info gain for the student. Experiments are conducted in three different relevant domains and the results are clear and show a reasonable gain. I thus favor acceptance.
One thing to note is that the general idea of choosing environments based on novelty (rather than regret) is itself new; thus, the authors do not need to focus as much on the use of GMMs, which may be limiting.
Strengths: * The method makes sense as it is essentially an intrinsic reward for the policy, i.e. selecting environments where the agent will have higher information gain.
* The empirical results are strong, with the method improving both PLR and ACCEL in three different domains.
* The use of the evaluation protocol from Agarwal et al is always refreshing.
* Ablations are sensible and easy to follow.
Weaknesses: * There is a strong assumption that all transitions are independent, which is not always true. What if there is an environment where an agent needs to conduct some initial behavior (e.g. finding a key) and then using it to act later (e.g. opening a door)? It just feels like the method is designed based on the toy environments we use in RL research, and is not actually scalable for larger more complex domains in the future.
* Hate to be that person, but the name GENIE is taken by multiple works already and recently by Bruce et al (2024) in a highly relevant paper. It would be recommended to find a more relevant acronym or simpler name.
* The arrows in figure 2 are too small.
Technical Quality: 3
Clarity: 3
Questions for Authors: What other approaches could be used to assess novelty at larger scale? Can you show any examples of levels that have high novelty but low regret, and turn out to be useful training levels for the agent?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Discussed in the Appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer's feedback and the recognition of the key strengths of our method. The following clarifications will effectively address your concerns and further enhance our paper:
### Weaknesses:
**Q: There is a strong assumption that all transitions are independent, which is not always true. What if there is an environment where an agent needs to conduct some initial behavior (e.g. finding a key) and then using it to act later (e.g. opening a door)? It just feels like the method is designed based on the toy environments we use in RL research, and is not actually scalable for larger more complex domains in the future.**
A: Temporal information can be accounted for by using augmented states, and it should be addressed by the underlying RL mechanism (hierarchical RL, constrained RL, etc.). We would like to highlight that GENIE's design choice to focus on individual *(s, a)* transitions rather than entire trajectories is agnostic to the augmented state representations used by the policy. On this note, we will make this advantageous characteristic of GENIE's design clearer in the revised manuscript.
**Q: Hate to be that person, but the name GENIE is taken by multiple works already and recently by Bruce et al. (2024) in a highly relevant paper. It would be recommended to find a more relevant acronym or simpler name.**
A: Acknowledged in global rebuttal.
**Q: The arrows in figure 2 are too small.**
A: Thank you and we have fixed this in the revised manuscript.
### Questions:
**Q: What other approaches could be used to assess novelty at larger scale?**
A: With regards to assessing novelty at a large scale, Section B in the appendix touches on how dimensionality-reduction techniques can be paired with GMMs for better scalability to higher-dimensional domains. As you pointed out, there is no need to restrict ourselves to GMMs. However, on the flip side, we showed that such a simple yet general method for quantifying novelty and combining it with regret could result in significant empirical gains. GENIE's main contribution lies in demonstrating the importance of novelty in UED and highlighting how it complements minimax regret. That opens the door to future UED research to look into incorporating novelty into their curriculum.
**Q: Can you show any examples of levels that have high novelty but low regret, and turn out to be useful training levels for the agent?**
A: Addressed in the global rebuttal (see section "Low Regret but High Novelty Levels Provide Interesting Experiences", i.e. Figure 2's explanation)
---
Rebuttal Comment 1.1:
Title: No change to score
Comment: Thank you for the rebuttal. I don't see much scope to increase my score, I think the paper has sufficient merit to be accepted. If it is accepted please include a discussion on scaling the approach in the future work/conclusion. This could even be by combining it with an environment generator like Bruce et al's Genie :)
---
Reply to Comment 1.1.1:
Title: Thank you Reviewer kpCs
Comment: Thank you for considering our rebuttal and for your continued confidence in our paper.
> If it is accepted please include a discussion on scaling the approach in the future work/conclusion. This could even be by combining it with an environment generator like Bruce et al's Genie :)
Absolutely, we are enthusiastic about the potential for future work to further develop and broaden our novelty-driven autocurricula approach. Thank you once again for your time and support. | Summary: This paper proposes using novelty quantification in an unsupervised environment design for training a more generalizable policy. Built on an intuition that environments with unfamiliar states are novel environments, their proposed algorithm uses Gaussian mixture models to allow an RL agent to explore novel environments. The authors of this paper compare their proposed method in various benchmarks against multiple baselines to show empirical improvement in performance.
Strengths: This is a well-written paper with a structure that is easy to follow. The key concepts are simple and easy to understand.
Weaknesses: - The idea of using unfamiliar states to quantify uncertainty seems similar in concept to the curiosity-driven approaches in RL. However, this paper does not address the relevant literature. It would be much better if the authors could explain how their idea fits in with the findings and theory in curiosity-based RL approaches and what the paper contributes to this avenue of thinking.
- When it comes to using UED in curriculum learning to train a more generalizable policy, there is Genetic Curriculum (Song et al., 2022), which also uses UED in curriculum learning to train a generalizable policy. Since that paper was also evaluated on the BipedalWalker and BipedalWalkerHardcore environments, it would be better to compare and contrast GENIE with Genetic Curriculum.
- GENIE uses a fixed-window FIFO buffer, but would this be able to reach an equilibrium? For example, if the agent explores level set A at the expense of forgetting level set B, then goes back to exploring level set B at the expense of forgetting C, and so on, would agents trained by GENIE converge to a steady-state behavior?
- Finally, there are some minor typos and editorial mistakes. For example, line 63, PLR, and ACCEL are mentioned without explaining what those are.
Technical Quality: 3
Clarity: 3
Questions for Authors: The questions I hope to be addressed by the authors are the ones listed above. 1) Explanations on how this paper fits in with curiosity-based approaches and the GENIE's contributions, 2) explanations and comparisons with Genetic Curriculum, and 3) Will the agents trained by GENIE reach a steady-state equilibrium?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have addressed the limitations of this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's time and valuable feedback. We believe the concerns raised generally pertain to broader problems in RL and are not inherent problems of GENIE. The following clarifications will explain our stance and strengthen the case for our paper. We kindly request the reviewer to reconsider the score, given that the issues raised do not detract from the contributions of GENIE and the promising direction of novelty-driven autocurricula:
### Questions
**Q: Using unfamiliar states to measure uncertainty is conceptually similar to curiosity-driven approaches in RL. However, the paper doesn't review the relevant literature. It would be helpful if the authors explained how their idea aligns with curiosity-based RL theories and what their paper adds to this field.**
A: We appreciate the reviewer's astute observation regarding the conceptual similarity between our approach and curiosity-driven RL. While we acknowledge that our original manuscript should have addressed this relationship more explicitly, we are grateful for the opportunity to clarify these connections and distinctions. Indeed, both curiosity-driven RL and our UED approach leverage the concept of novelty or unfamiliarity to guide learning. However, they differ significantly in their application and theoretical foundations. The curiosity-driven learning literature is built on prioritising interesting experiences in a **static environment** [1], or across a set of **predefined tasks** [2]. Meanwhile, UED is focused on **generating environments** that are interesting/useful for learning: UED shapes the learning curriculum itself rather than the exploration strategy within environments. This is analogous to the difference between Prioritized Experience Replay [3] from traditional RL and Prioritized Level Replay [4] from UED. The former is an "inner-loop" method that prioritizes past experiences for training, and the latter is an "outer-loop" method that uses past experiences to inform the collection/generation of future experiences. In the same vein, curiosity-driven learning is focused on prioritizing novel experiences for policy updates, but GENIE is focused on generating/curating levels that can induce these novel experiences. This fundamental difference in purpose means that theoretical and empirical comparison between curiosity-driven approaches and GENIE is not as direct. As such, we focused our attention mostly on current novelty measures in the UED literature, which are of more relevance. Still, we thank the reviewer for making this observation, as the general audience would also appreciate clarity on this matter. We have included a short section clarifying the distinctions between curiosity-driven learning and GENIE in the appendix of the revised paper.
[1] Pathak, D., Agrawal, P., Efros, A. A., & Darrell, T. (2017). Curiosity-driven exploration by self-supervised prediction.
[2] Burda, Y., Edwards, H., Pathak, D., Storkey, A., Darrell, T., & Efros, A. A. (2018). Large-Scale Study of Curiosity-Driven Learning.
[3] Schaul, T., Quan, J., Antonoglou, I., & Silver, D. (2016). Prioritized experience replay.
[4] Jiang, M., Grefenstette, E., & Rocktäschel, T. (2021). Prioritized level replay.
**Q: In UED for curriculum learning to train generalizable policies, Genetic Curriculum (Song et al., 2022) also employs UED. Since both papers were evaluated on BipedalWalker and BipedalWalkerHardcore, it would be useful to compare GENIE with Genetic Curriculum.**
A: Thanks for pointing to the work by Song et al. (2022). Other UED papers have not referenced or compared against this work. This is likely because the problem definition in their paper differs from the *Underspecified Partially Observable Markov Decision Process* (UPOMDP) setting in UED, and there are not many parallels other than sharing a commonly-used domain. Also, the original POET [5] paper, which they compare against, used a 5D-BipedalWalker domain (we used 8D), and it is not clear in their paper which domain they use. To the best of our knowledge and after a thorough search, their code repository is not publicly accessible, so we are unable to replicate their work to compare results.
[5] Wang, R., Lehman, J., Clune, J., \& Stanley, K.O. (2019). Paired Open-Ended Trailblazer (POET): Endlessly Generating Increasingly Complex and Diverse Learning Environments and Their Solutions.
**Q: GENIE uses a fixed window FIFO buffer. Will this achieve equilibrium? For instance, if the agent cycles through level sets A, B, and C, potentially forgetting each set as it explores others, will agents trained by GENIE converge to steady-state behavior?**
A: Your description of a potential "mode collapse" behavior is reasonable. However, this seems to point at the general negative effects of catastrophic forgetting within the broader RL and function approximation literature. While GENIE does not explicitly guarantee convergence to a steady-state behavior with respect to novelty, it is important to note that this challenge is not unique to our approach. The leading approaches, PLR and ACCEL, are also unable to provide robustness guarantees against such oscillatory exploration patterns. It would be interesting to start having conversations on how we can bridge insights from the "continual learning" and "stability-plasticity dilemma" literature with UED, and see how minimax regret and GENIE's novelty could possibly circumvent the negative effects of catastrophic forgetting.
**Q: Finally, there are some minor typos and editorial mistakes. For example, line 63, PLR, and ACCEL are mentioned without explaining what those are.**
A: Thank you for pointing this out, we have fixed this in the revised manuscript.
---
Rebuttal Comment 1.1:
Comment: The reviewer thanks the authors for a well-detailed rebuttal. The reviewer has updated the scores accordingly.
---
Reply to Comment 1.1.1:
Title: Thank you Reviewer KtoT
Comment: We sincerely appreciate your engagement in the review process. We are glad that our rebuttal has strengthened your confidence in our paper. | Summary: This paper proposes adding a domain-general metric for promoting novelty to state of the art UED methods in order to help the environment generator better explore and cover the space of environments. The novelty bonus is based on a surprise of a (state, action) pair model which is a learned Gaussian mixture model.
Strengths: The method is clearly useful and easy to implement, fixing a limitation of existing UED approaches which are known to miss modes in the space of levels. Empirically the method seems to perform strongly, matching or exceeding existing methods. This is a valuable direction to pursue as it is important for the UED community to have a sense for how novelty effects performance, and the approach addresses a well-known flaw in existing UED approaches.
Weaknesses: The method would be more convincing if the generated levels for each method were visualised and compared directly. Ideally one could demonstrate that there are motifs of levels being generated which were not being generated before. For instance, displaying the distribution over stump height late in training for ACCEL and ACCEL-GENIE to show that ACCEL-GENIE has a more even coverage of the space, and sampling a few levels randomly from each so that this can be visually confirmed.
Similarly, the paper would be much improved if the mechanism behind the results was dug into in more depth. If the theory is that the diversity term results in kinds of high-regret levels being presented which were left out of the buffer previously, it would be ideal to demonstrate that this happens directly.
On line 258 results are over-claimed, since ACCEL-GENIE is within the margin for error of ACCEL in all, or nearly all, of the environments. This claim should be corrected. I don't think this is essential for acceptance of the paper, as the minigrid benchmark appears pretty close to being saturated, and it serves largely as an MNIST for the field. The empirical results in the other two environments stand on their own.
#### Clarity:
It would be good to have error bars in Table 1
The plots in Figure 7 should be fixed so the colors of the same method match
There is a stray parenthesis on line 303
Equation 1 shows the PAIRED objective "optimized" for deterministic domains, the PAIRED objective as a direct comparison between the two expected returns is canonical.
I would also suggest reconsidering the name, as another prominent environment-generating model named GENIE has been released recently. I expect that, in talking about this work, people will want to talk about how the two could be combined, which could get quite confusing.
In the abstract I think it is not clear that novelty, on its own, is critical for an agent's generalisation ability. Two situations which are different, but not in a way that is relevant for the task, and are processed by the network in the same way are effectively the same, and would not affect generalisation ability.
It also appears that in a few places the "underspecified" in UPOMDP is taken to mean that there is a one-to-many mapping between parameters and environments. In general this is not the case, as the minigrid environment has a one-to-one mapping between parameters and environments, the "underspecified" simply means that these parameters are not given by the designer. It is fair to want UED algorithms to work even when there is such a one-to-many mapping, but it is not required as part of the problem formulation.
Technical Quality: 3
Clarity: 3
Questions for Authors: What sort of qualitative difference do you see between ACCEL-GENIE and ACCEL levels?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer's insightful comments and recognition of the valuable direction of novelty-driven autocurriculum that GENIE is pushing for the UED field. Also, it is always refreshing to engage with a reviewer who has in-depth knowledge of UED. The new results within the global rebuttal and our following clarifications would be of interest to you and strengthen the case for our paper:
### Weaknesses
**Q: The method would be more convincing if the generated levels were visualized and directly compared. Ideally, demonstrate that new motifs are being generated. For example, display the distribution over stump height late in training of ACCEL and ACCEL-GENIE to show more even coverage by ACCEL-GENIE, and randomly sample a few levels from each for visual confirmation.**
A: Answered in global rebuttal, refer to the section "GENIE Introduces Level Complexity" explanation.
**Q: The paper would be improved by exploring the mechanism behind the results in more depth. If the theory is that the diversity term introduces high-regret levels previously excluded from the buffer, it should be demonstrated directly.**
A: This is something we have thought about while working on GENIE. The main struggle with trying to demonstrate this is that both the regret and novelty metric are inherently policy-dependent. As such, it is impossible to directly measure whether prioritising or not prioritising a level via GENIE would lead to discovering a higher regret level down the road because the divergence in realities would result in different policies with non-commensurable subjective regrets. However, one way we can indirectly measure this is by observing whether a greedy selection of high regret levels (as in ACCEL and PLR) or a balanced regret-novelty prioritization (as in ACCEL-GENIE and PLR-GENIE) results in better cumulative/mean regret in the replay buffer across the training horizon. The section "Prioritizing Novelty Actually Increases Regret" in our global rebuttal demonstrates that GENIE actually results in higher regret across the replay buffer in the Car-Racing domain despite not directly optimising for it.
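To make the contrast between greedy regret selection and balanced regret-novelty prioritization concrete, here is a hypothetical Python sketch; the function names, the 0.5 mixing weight, and the rank-based prioritization scheme (borrowed from PLR) are illustrative assumptions, not GENIE's actual implementation:

```python
import numpy as np

def replay_probs(regret, novelty, alpha=0.5, temperature=1.0):
    """Blend per-level regret and novelty scores into replay probabilities.

    alpha=1 recovers pure regret-greedy selection (as in PLR/ACCEL);
    alpha<1 gives weight to novel-but-low-regret levels.
    """
    def rank_prioritization(scores):
        scores = np.asarray(scores, dtype=float)
        # Rank-based prioritization: the highest score gets rank 1.
        ranks = np.empty(len(scores))
        ranks[np.argsort(-scores)] = np.arange(1, len(scores) + 1)
        weights = (1.0 / ranks) ** (1.0 / temperature)
        return weights / weights.sum()

    probs = alpha * rank_prioritization(regret) + (1 - alpha) * rank_prioritization(novelty)
    return probs / probs.sum()

# Level 2 has mediocre regret but high novelty: balanced prioritization
# doubles its replay probability relative to regret-greedy selection.
p_balanced = replay_probs(regret=[3.0, 2.0, 1.0], novelty=[0.1, 0.2, 5.0], alpha=0.5)
p_greedy = replay_probs(regret=[3.0, 2.0, 1.0], novelty=[0.1, 0.2, 5.0], alpha=1.0)
```

Under this sketch, a level excluded by greedy regret selection can still be replayed for the unique state-action distribution it covers, which is the mechanism the rebuttal argues leads to higher-regret levels being discovered downstream.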
**Q: On line 258, results are overstated, as ACCEL-GENIE's performance is within ACCEL's margin of error in nearly all environments. This claim should be corrected. This isn't essential for paper acceptance, as the minigrid benchmark is nearly saturated and serves as an MNIST for the field. The empirical results in the other two environments are sufficient.**
A: We have amended line 258 to exclude the remark on ACCEL-GENIE's outperformance over its predecessor but the point on PLR-GENIE's clear improvement over PLR still holds.
**Q: Add error bars to Table 1. Ensure matching colors for the same method in Figure 7. Fix the stray parenthesis on line 303. Equation 1 should reflect the canonical PAIRED objective, comparing the two expected returns directly, rather than being "optimized" for deterministic domains.**
A: Thank you for the attentiveness to detail, we have addressed this in our revised manuscript.
**Q: Consider renaming the model since another prominent environment-generating model named GENIE has been released recently. This will avoid confusion when discussing how the two could be combined.**
A: Addressed in global rebuttal.
**Q: In the abstract I think it is not clear that novelty, on its own, is critical for agent's generalisation ability. Two situations that are different but not in a way that is relevant for the task and are processed by the network in the same way are effectively the same, and would not affect generalisation ability.**
A: We are actively refining our abstract to more effectively advocate for the benefits of novelty-driven autocurricula methods, while remaining mindful of brevity throughout this rebuttal process. Regarding the notion that "different situations processed similarly by the network do not affect generalization ability," we believe this holds true primarily for fixed-curriculum methods but not necessarily for autocurricula methods. We hope to have a constructive discussion about this.
Although the two situations present differently while providing similar learning experiences (with respect to the task) at the current moment, they both independently inform the collection/generation of future environments. The fact that these environments behave differently (in state-action distribution) indicates the potential that they can lead to totally different and novel environments down the training horizon, especially with mutation-based methods. Therefore, even if these scenarios yield similar learning signals initially, prioritizing them for the unique state-action distributions they cover via GENIE remains beneficial for generalisation. This is an exciting discussion which can be correspondingly highlighted in our revised manuscript.
**Q: It also appears that in a few places the "underspecified" in UPOMDP is taken to mean that there is a one-to-many mapping between parameters and environments. In general this is not the case, as the minigrid environment has a one-to-one mapping between parameters and environments, the "underspecified" simply means that these parameters are not given by the designer. It is fair to want UED algorithms to work even when there is such a one-to-many mapping, but it is not required as part of the problem formulation.**
A: We acknowledge that the one-to-many mapping between free parameters and environments, while common, is not a mandated characteristic of UED. We have amended the manuscript to reflect that nuance more clearly (e.g., "entails there is a one-to-many mapping" is revised to "possibly entails a one-to-many mapping").
### Questions
**Q: What sort of qualitative difference do you see between ACCEL-GENIE and ACCEL levels?**
A: Answered in global rebuttal, refer to the section "GENIE Introduces Level Complexity".
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal by Authors
Comment: Thank you for your response. The added visualisations increase my confidence in the method, so I will be increasing my score accordingly.
> As such, it is impossible to directly measure whether prioritising or not prioritising a level via GENIE would lead to discovering a higher regret level down the road because the divergence in realities would result in different policies with non-commensurable subjective regrets.
One test that could show this effect would be if the policies from both realities found the levels from the GENIE reality to have higher regret. I agree if this was not the case it would not be negative evidence.
Another good measure would be to fix a policy (and freeze it so it does not train) and run both to see which finds higher regret levels.
---
Reply to Comment 1.1.1:
Title: Thank you Reviewer qR9x
Comment: Thank you for your engagement and for providing such detailed technical feedback. We truly appreciate your confidence in our work.
> One test that could show this effect would be if the policies from both realities found the levels from the GENIE reality to have higher regret. I agree if this was not the case it would not be negative evidence.
> Another good measure would be to fix a policy (and freeze it so it does not train) and run both to see which finds higher regret levels.
Also, thanks for suggesting these creative experimental setups. We'll begin implementing these experiments and will incorporate the results in our revised manuscript. | Rebuttal 1:
Rebuttal: # Global Rebuttal
## 1. Addressing GENIE's Name
There is a general consensus among the reviewers that "GENIE" might be confused with the method recently introduced by Bruce et al. (2024). We agree that it would probably be wise to choose a different name for the framework, allowing both important works to receive their own deserved spotlights. However, in the meantime, we will still use the acronym "GENIE" when referencing our framework during this rebuttal phase to make things less confusing for everyone. We will definitely come up with a different name and implement it across the revised version of the paper.
## 2. Special Thanks to All Reviewers
We would like to express our gratitude to all the reviewers for their time in providing constructive evaluations and generally positive reception of our work.
## 3. New Results and Explanation
We present new results that are collected in response to the reviewers' comments (please refer to the attached 1-page PDF). Note that due to limitations in time and computation, we were only able to selectively run experiments.
### 3.1 GENIE Introduces Level Complexity
Figure 1 presents the difficulty composition (introduced by POET (Wang et al., 2019)) of replayed levels for ACCEL and ACCEL-GENIE over various training intervals (based on metrics defined in Table 1). It is clear that ACCEL predominantly favors "Easy" to "Moderate" difficulty levels. In contrast, ACCEL-GENIE increasingly incorporates "Challenging" levels into its replay set over time. This difference highlights the benefits of integrating GENIE's novelty metric into the level replay selection criteria.
The disparity in level difficulty distribution between ACCEL and ACCEL-GENIE is a critical factor in their observed performance differences. ACCEL's training curriculum tends to remain within a comfort zone, where levels with high regret (approximated by TD-error) are greedily selected. This approach constrains the student to a limited subset of simplest environments where it can minimize its maximum prediction error. However, this narrow focus limits the student's ability to generalize, as it minimizes exposure to more complex scenarios. On the other hand, ACCEL-GENIE’s incorporation of the novelty metric actively selects more challenging levels. This strategy pushes the student beyond its comfort zone, exposing it to unfamiliar and more challenging environment parameters (e.g. higher stump heights and wider pit gaps). As a result, the student is forced to explore a broader state-action space, enhancing its robustness to out-of-distribution scenarios and leading to the discovery of higher regret levels.
Note that our figure differs from Figure 11 in Parker-Holder et al. (2022) which shows the difficulty distribution of the levels **generated and added into the buffer**, but not the actual levels selected by the teacher for the student to **replay/train on**. On that note, this also demonstrates that GENIE remedies an inefficiency in the original ACCEL algorithm, which is the fact that the mutation-based generation constantly produces high complexity levels ("Challenging" and above) but none are actually selected to train the student.
At the moment, we lack statistics on the difficulty composition of levels replayed by PLR and PLR-GENIE, but their similar performances suggest that the difficulty compositions are likely comparable.
[1] Parker-Holder, J., Jiang, M., Dennis, M., Samvelyan, M., Foerster, J.N., Grefenstette, E., \& Rocktaschel, T. (2022). Evolving Curricula with Regret-Based Environment Design.
### 3.2 Low Regret but High Novelty Levels Provide Interesting Experiences
Next, we visually present the effect of the novelty metric on Minigrid levels in the level replay buffer of PLR-GENIE by ablating regret. Specifically, we highlight levels that feature the lowest regret (bottom 10) yet exhibit the highest novelty (top 10); these are showcased in the first row of Figure 2. Conversely, levels that score within the lowest 10 for both regret and novelty are displayed in the second row of the same figure.
Visually, we can observe that levels with high novelty and low regret present complex and diverse scenarios that challenge the student. In contrast, the levels displayed in the second row, characterized by low regret and low novelty, often resemble simple, empty mazes that offer limited learning opportunities.
While it is not feasible to present every example level here, the contrast between the two groups is stark. Levels selected based on low regret but high novelty are significantly more varied and intricate than those chosen for their low novelty, despite both groups having low regret scores. This demonstrates that incorporating novelty alongside regret in the selection process enhances the ability to identify levels that present more interesting trajectories (experiences) to the student for learning.
### 3.3 Prioritizing Novelty Actually Increases Regret
Finally, Figure 3 shows the mean, median and summed regret in the level replay buffer of PLR$^\perp$ and PLR-GENIE across the training horizon. Surprisingly, PLR-GENIE results in comparable/slightly greater levels of regret across the training distribution despite not directly optimising for it. This observation demonstrates that prioritising novelty in the levels can actually lead to higher regret levels being discovered.
Pdf: /pdf/0ff6f41108650b9d14ea0b2cd8ae78cf9506c6f8.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Hierarchical Uncertainty Exploration via Feedforward Posterior Trees | Accept (poster) | Summary: The paper proposes a method to obtain a hierarchical representation of samples from the posterior distribution of inverse problems. This method can be integrated into any existing approach to learn and sample from the posterior distribution. It involves learning a tree structure where each node represents a prototype reconstruction, with child nodes refining their parent node. This allows for the exploration of different modes and levels of detail in the solution space.
Strengths: **Originality.** While the method has connections to post-hoc hierarchical clustering of samples, it is novel in that it allows for faster visualization of the posterior distribution samples at test time.
**Quality and clarity.** Overall, the paper is well-structured, easy to follow, and sufficiently motivates the problem. However, the clarity of the presentation of the proposed loss function could be improved.
**Significance.** The problem of informatively visualizing the variability in samples from the posterior distribution of inverse problems is impactful.
Weaknesses: * While I truly appreciate the importance of informatively visualizing samples from the posterior distribution, the proposed method seems to require several heuristics for training to yield desirable results (e.g., discussions in sections 3.3 and 3.4). Are the computational complexity gains of the proposed method, compared to performing post-hoc hierarchical clustering, worth the need for tuning the parameters in these heuristics?
* It seems like the architecture of the network required to predict the tree depends on the tree depth and width as $K^d$, where $d$ is the tree depth and $K$ is the branching factor. This appears to be a very limiting factor in the depth and width of the tree that can be learned. How does the cost of learning the tree structure as we scale $K$ and $d$ compare to the cost of post-hoc hierarchical clustering?
Technical Quality: 3
Clarity: 3
Questions for Authors: * When augmenting existing methods for learning conditional generative models (for learning the posterior), how does the loss associated with the tree structure affect the learning of the generative model? How do the authors prevent the proposed loss from negatively biasing the learning of the posterior distribution?
* What is the computational overhead of the proposed method compared to simply training a generative model to learn the posterior distribution? What is the cost of post-hoc hierarchical clustering? How many inverse problems need to be solved to justify the extra training time?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Authors provide a discussion on the limitations of their method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Tree loss function and training scheme**
Thanks for pointing out this is not clear enough. We will include an appendix with the explicit training algorithm to better clarify the loss function and our overall training scheme. Kindly note that we trained our models from scratch on a standard dataset of a **single** posterior sample per input, and did not augment existing conditional generative models. After training with a hierarchical MCL loss, for every input, our model learns to predict a tree that hierarchically discretizes the posterior distribution. In our experiments, the various posterior sampling baselines are only used in the comparisons to benchmark our results and quantify the quality of the predicted trees.
**Runtime comparisons**
Table 1 compares the speed of our method and the baselines at test time, as ultimately this is what matters to the user. Our method is at least 2 orders of magnitude faster than the baseline in neural function evaluations (NFEs). However, as we detail next, our method is also orders of magnitude faster than the baselines in training time.
* **Training time**: The diffusion-based baselines we compared against are all zero-shot methods that rely on a pre-trained unconditional diffusion prior (DDPM). Hence, the “effective” training time of these methods is the time it took to learn the DDPM prior, which, for CelebA-HQ $256\times 256$, Ho et al. reported to be $\approx$63 hours on **8** V100 GPUs. For comparison, training our method (from scratch) on a CelebA-HQ $256\times 256$ restoration task took $\approx$5 hours on a **single** A6000 GPU (see Appendix A.3).
* **Testing time**: As reported in Table 1, our method is at least 2 orders of magnitude faster than the baselines. As mentioned in checklist item 8 (Experiments Compute Resources), we reported neural function evaluations (NFEs) as an architecture-agnostic measure. However, our architecture is significantly lighter than the U-Net used in the compared posterior sampling baselines (both in terms of compute and memory footprint). Therefore, the numbers in Table 1 actually underestimate the speedup factor introduced by our method. Putting aside the lower memory footprint, a single forward pass with a batch of 1 on an A6000 GPU with our architecture lasts 5$\pm$0.1 ms, compared to 20$\pm$0.4 ms for the U-Net used by DDRM/DDNM/RePaint, and 140$\pm$8 ms for MAT. To ensure this is clear enough, in the camera-ready version we will include another column in Table 1 translating NFEs to runtime in seconds, further emphasizing the speed advantage of posterior trees. (The rebuttal PDF includes Table 1 with runtime reported in GPU seconds.)
**Required hyper-parameter tuning to stabilize training**
As mentioned in Section 3.4, L179, we observed that fixing the hyperparameters $\varepsilon_0=1, t_0=5$ worked well for all experiments, and we did not need to optimize these further. In general, it might be that for some tasks this strategy is suboptimal, requiring more careful tuning. Nonetheless, given that our training time is relatively short, this added computational burden is manageable, and it is still beneficial to use our method due to its much faster inference at test time.
**Scaling to wider/deeper trees**
Indeed, as mentioned in Section 5 (Discussion and Conclusion), our method is limited by the number of output leaves, as we amortize the inference of the entire tree into a single forward pass. However, note that this limitation also exists for the baselines. To compute the baseline trees, we need to perform a two-step procedure: (1) sampling $N_s=100$ images from the posterior, and (2) performing hierarchical $K$-means $d$ times to build a tree of degree $K$ and depth $d$. In our experiments with $K=3,d=2$ (i.e. a tree with 9 leaves), we noticed that using fewer than $N_s=100$ images often led to degenerate trees with one or more leaves having 1 sample or less. For example, consider the case of using $N_s=9$ samples. Even if the posterior is perfectly balanced (often not the case), each leaf will have only 1 sample to “average” at depth 2. Therefore, if we want to use the baselines for trees with significantly more leaves, we need to sample enough images ($N_s>100$) per test input, which scales poorly as well.
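The two-step baseline procedure described above (posterior sampling followed by recursive $K$-means) can be sketched as follows. This is an illustrative plain-NumPy reimplementation (using Lloyd's algorithm) on synthetic data, not the authors' actual baseline code:

```python
import numpy as np

def kmeans_labels(X, K, iters=50, seed=0):
    """Plain Lloyd's algorithm; returns a cluster label per sample."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=K, replace=False)].copy()
    for _ in range(iters):
        labels = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(axis=1)
        for k in range(K):
            if (labels == k).any():
                centers[k] = X[labels == k].mean(axis=0)
    return labels

def hierarchical_kmeans_tree(samples, K=3, d=2):
    """Build the baseline tree by recursively clustering posterior samples.

    Each node stores the mean of its samples. A node with fewer than K
    samples is left childless -- the degenerate-leaf failure mode that
    arises when too few posterior samples are drawn.
    """
    node = {"center": samples.mean(axis=0), "children": []}
    if d == 0 or len(samples) < K:
        return node
    labels = kmeans_labels(samples, K)
    for k in range(K):
        cluster = samples[labels == k]
        if len(cluster):
            node["children"].append(hierarchical_kmeans_tree(cluster, K, d - 1))
    return node

# Toy posterior: 120 flattened 8x8 "images" drawn around 3 modes.
rng = np.random.default_rng(0)
modes = rng.normal(size=(3, 64))
samples = np.concatenate([m + 0.05 * rng.normal(size=(40, 64)) for m in modes])
tree = hierarchical_kmeans_tree(samples, K=3, d=2)
```

Note that this per-input clustering cost is exactly what the single amortized forward pass of the proposed method avoids at test time.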
Ultimately, the goal is to present the user with only a **few** representative prototypes **summarizing** the different options. Otherwise, navigating the underlying possibilities becomes tedious and time-consuming as the user has to skim through many images for every input.
---
Rebuttal Comment 1.1:
Comment: I appreciate the thorough response from the authors. My concerns have been addressed. I will increase my score. | Summary: This work proposes a technique to predict a tree-structured hierarchical summarization of a posterior distribution using a single forward pass of a neural network. The technique is an amortized hierarchical version of the oracle loss in multiple choice learning. Experiments show the method is effective at hierarchically representing the posterior both qualitatively and quantitatively.
Strengths: 1. The method is simple, efficient, and sound.
2. It addresses a fundamental and practically relevant problem (hierarchically visualizing complex distributions).
3. The presentation of the method and the experiments is excellent.
Weaknesses: 1. Table 1 reports the number of function evaluations, but it is not clear how that translates to runtime, since the time per function call and the extent of parallelization vary.
2. The qualitative evaluations in Figure 1 and Figure 4 are not very informative. It is often hard to tell the differences between many of the nodes or to judge whether there is an unambiguous underlying hierarchical structure.
3. The clustering/hierarchy produced by the method is based on $L_2$ distance, which is not a very useful metric in many applications, including images. Though, as the authors discussed, this can potentially be addressed, e.g., by first embedding the inputs with an autoencoder. Empirically demonstrating that this limitation can be addressed would be important for adoption of this method in many applications.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. Can you apply this method when the reconstruction loss is not an $L_2$ loss but a generic function?
2. How does the runtimes compared in Table 1?
3. Can you show how the methods compare as a function of runtime in Table 1? It's possible that you don't need to run that many function evals for the baselines to get similar performance.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The authors discussed limitations of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Comparing runtime in neural function evaluations vs GPU seconds**
Please see Table 1 in the rebuttal PDF, where runtime is reported in seconds. As mentioned in checklist item 8 (Experiments Compute Resources), we reported neural function evaluations (NFEs) as an architecture-agnostic measure. However, our architecture is significantly lighter than the U-Net used in the compared posterior sampling baselines (both in terms of compute and memory footprint). Therefore, the numbers in Table 1 actually underestimate the speedup factor introduced by our method. Putting aside the lower memory footprint, a single forward pass with a batch of 1 on an A6000 GPU with our architecture lasts 5$\pm$0.1 ms, compared to 20$\pm$0.4 ms for the U-Net used by DDRM/DDNM/RePaint, and 140$\pm$8 ms for MAT. To ensure this is clear enough, in the camera-ready version we will include another column in Table 1 translating NFEs to runtime in seconds, further emphasizing the speed advantage of posterior trees.
**Baselines performance as a function of compute**
Note that to compute the baseline trees we need to perform a two-step procedure: (1) sampling $N_s=100$ images from the posterior, and (2) performing hierarchical $K$-means $d$ times to build a tree of degree $K$ and depth $d$. Therefore, saving computation in this process requires sampling fewer images $N_s$ per test input. However, in our experiments with $K=3,d=2$ (i.e. a tree with 9 leaves), we noticed that using fewer than $N_s=100$ images often led to degenerate trees with one or more leaves having 1 sample or less. For example, consider the case of using $N_s=9$ samples. Even if the posterior is perfectly balanced (often not the case), each leaf will have only 1 sample to “average” at depth 2. In fact, it is likely that achieving optimal performance with the baselines requires more than $N_s=100$ samples, which would be even more computationally demanding. To better clarify this point, we will include an additional appendix discussing the success probability of building a tree with $K^d=9$ leaves as a function of the number of sampled images $N_s$.
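The promised success-probability analysis can be approximated with a small Monte-Carlo sketch. The uniform leaf-probability assumption below is the optimistic best case mentioned above (a skewed posterior only makes degenerate leaves more likely), and the minimum of 2 samples per leaf is an illustrative threshold:

```python
import numpy as np

def tree_success_prob(N_s, K=3, d=2, min_per_leaf=2, trials=20000, seed=0):
    """Monte-Carlo estimate of the probability that every one of the
    K**d leaves receives at least min_per_leaf of the N_s posterior
    samples, assuming a perfectly balanced posterior."""
    rng = np.random.default_rng(seed)
    leaves = K ** d
    # Each trial drops N_s samples uniformly over the leaves.
    counts = rng.multinomial(N_s, np.full(leaves, 1.0 / leaves), size=trials)
    return (counts >= min_per_leaf).all(axis=1).mean()

# With N_s = 9 and 9 leaves, at least one leaf is always degenerate;
# N_s = 100 makes a non-degenerate 9-leaf tree very likely.
p_small, p_large = tree_success_prob(9), tree_success_prob(100)
```

This makes the scaling argument explicit: as the number of leaves $K^d$ grows, the $N_s$ needed to keep the failure probability low grows with it, which is the cost the amortized method sidesteps.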
**Learning posterior trees with other loss functions**
This is a great point. A distinction should be made between the measure used for the clustering (i.e. determining the associations of samples to clusters) and the loss used within each cluster to determine the cluster representative. Our approach can work without modification with any association measure (e.g. LPIPS, some domain-specific classifier, etc.). However, changing the within-cluster loss requires some modifications. Specifically, the use of the $L_2$ loss is what provides us with the hierarchical decomposition of the posterior mean (see Eqs. (2)-(7)). In particular, when using the $L_2$ loss, each cluster representative becomes the posterior cluster mean, and a weighted combination of those representatives gives the overall posterior mean (which is the tree root). This allows us to have the network output only the leaves of the tree, which implicitly define the entire tree (as the nodes of each level are obtained as linear combinations of their children).
We will add an appendix discussing the distinction between association losses and within-cluster losses, and will include an illustration of using an association loss that is not $L\_2$. As for generalizing the method to work with other within-cluster losses, this is a great avenue, which we leave for future research.
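The decomposition described above (each parent node is the probability-weighted mean of its children, so the root recovers the posterior mean) can be illustrated with a short NumPy sketch. The array layout, where consecutive groups of $K$ leaves share a parent, is an assumption for the sake of the example:

```python
import numpy as np

def build_tree_from_leaves(leaves, probs, K=3):
    """Recover every internal tree node from the predicted leaves alone.

    leaves: (K**d, D) leaf representatives (posterior cluster means),
            laid out so consecutive groups of K share a parent.
    probs:  (K**d,) leaf probabilities summing to 1.
    Each parent is the probability-weighted mean of its K children,
    so the root equals the full posterior mean (the MMSE estimate).
    """
    levels = [(leaves, probs)]
    nodes, masses = leaves, probs
    while len(nodes) > 1:
        nodes = nodes.reshape(-1, K, nodes.shape[-1])
        masses = masses.reshape(-1, K)
        parent_mass = masses.sum(axis=1)
        nodes = (masses[..., None] * nodes).sum(axis=1) / parent_mass[..., None]
        masses = parent_mass
        levels.append((nodes, masses))
    return levels  # levels[-1][0][0] is the root

# K = 3, d = 2: nine leaves in a toy 2-D output space.
rng = np.random.default_rng(0)
leaves = rng.normal(size=(9, 2))
probs = rng.dirichlet(np.ones(9))
root = build_tree_from_leaves(leaves, probs)[-1][0][0]
```

By the law of total expectation, the root computed this way is exactly the probability-weighted mean of all leaves, which is why the network only needs to output the leaves.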
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications. I don't see the architecture-agnostic nature of NFE necessarily as a strength at all since what matters in practice is the runtime (and memory considerations), which depends not only on architecture, but also hardware utilization of each method. Regarding the provided runtime table, why did you choose to use a batch size of 1? Using a larger batch size should lead to significant speed up for the baselines since sampling is parallelized.
---
Reply to Comment 1.1.1:
Title: Runtime detailed comparison
Comment: Indeed, memory footprint and hardware parallelization ultimately determine the overall runtime. We tried to refrain from factoring these into our calculation, as these numbers are hardware-specific, depending on the available GPU. Nonetheless, our method also has a lower memory footprint and is just as amenable to parallelization. We apologize if our previous answer was not clear enough. The table below benchmarks the speed of the forward pass and the GPU memory usage as a function of the batch size on an A6000 GPU with 48 GB:
| | Ours | Ours | Ours | DDRM/DDNM/RePaint | DDRM/DDNM/RePaint | DDRM/DDNM/RePaint | MAT | MAT | MAT |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Batch size | 1 | 10 | 430 | 1 | 10 | 97 | 1 | 10 | 30 |
| Forward pass (ms) | 14$\pm$0.2 | 39$\pm$0.2 | 1150$\pm$2 | 34$\pm$0.4 | 245$\pm$0.4 | 2315$\pm$6.5 | 150$\pm$8 | 867$\pm$3.8 | 2599$\pm$13.6 |
| GPU memory (GB) | 1.3 | 2.1 | 46.9 | 1.85 | 7.5 | 46.8 | 3.0 | 16.0 | 47.2 |
For each method, we tested 3 different batch sizes: 1, 10, and the maximal number of samples that fit in memory. The forward pass speed and the memory usage in each setting were measured 100 times using CUDA events (reported as mean$\pm$std). Our method is extremely fast and parallelizable, enabling the inference of 430 test images with a single forward pass of $\approx$1.15 seconds.
**How does this compare to the baselines for a single test image?**
Our method is far superior even for a single test image. For simplicity, let us assume the cost of running hierarchical $K$-means is negligible, and that we can squeeze a batch size of $\approx$100 samples for the diffusion-based samplers, and $\approx$“33.33” samples for MAT. Sampling $N_s=100$ posterior samples with DDRM (the fastest diffusion baseline, requiring only 20 denoising steps) lasts 46.3 seconds, and with MAT lasts 7.8 seconds. In comparison, our method requires only 0.014 seconds, leading to a $557\times$ speedup compared to the fastest baseline. | Summary: This work proposes to solve the problem of quantifying and visualising the uncertainty in the solutions of ill-posed inverse problems like image-to-image translation, image reconstruction, inpainting, etc. The paper proposes to do so by using 'Posterior Trees', where the authors make use of the result that optimizers of Eq. (1) form a CVT of the posterior. Further application of Bayes' rule and simple probability allows for the construction of a hierarchical tree to quantify as well as visualize the uncertainty associated with the predicted solution of the inverse problem at hand.
Strengths: 1. The paper is well written with clear ideas and rationale.
2. The idea of using hierarchical trees to quantify and visualize the uncertainty associated with the posterior of an inverse problem is indeed novel, although the core idea the paper builds upon is already presented in [45].
3. Nonetheless, extending the results of [45] to construct a hierarchical version of it is notable.
4. The results and visualizations are satisfying.
Weaknesses: 1. I think the authors should provide more elaborate background on CVT and results of [45], either in main text or in supplementary for ease of the reader.
2. One thing that I did not understand is - what is the trade-off between breadth and depth of the constructed tree? For example - what is the difference between having more children nodes with low depth and having less children nodes with high depth. This would help in further understanding the advantages and limitations of the proposed method.
3. Another point that I am a bit skeptical about is that the method operates directly in the image space (or the space of the input of the problem at hand). In the case of images, this space is very high-dimensional. In such high-dimensional spaces, capturing all the variations of the solution, especially through a discrete tree-like structure, is problematic. I don't know how the proposed method is able to handle this.
Overall I enjoyed reading the paper, and the paper certainly seems to be novel, albeit, incrementally. Hence, I lean for weak acceptance.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weaknesses
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Providing more background on CVT and the results of \[45\]**
We thank the reviewer for bringing this to our attention. In the camera-ready version, we will include an additional appendix summarizing the main results of [45] upon which we build.
**Tradeoff between breadth and depth of the constructed trees**
Kindly note that we discussed and visualized this tradeoff for the task of digit inpainting in Appendix F and Figs. A7-A8. We apologize for not referring to this appendix from the main text. We’ll add a reference, thanks for catching this.
In short, the choice of $K$ and $d$ determines the layout of the output space tessellation. The degree $K$ controls the emphasis/over-representation devoted to weaker posterior modes. A smaller degree leads to more emphasis on weaker modes of the posterior as the tree depth $d$ grows. However, the optimal layout of $K$ and $d$ is task- and input-dependent, and setting these adaptively is an interesting direction for future research.
**Number of prototypes and operating in input/pixel space**
This is a very good point. Please note that ultimately the goal is to present the user with only a **few** representative prototypes **summarizing** the different options. Otherwise, navigating the underlying possibilities becomes tedious and time-consuming, as for every input the user has to skim through many images. In our case, we chose these prototypes to be the cluster centers in pixel space at different levels of granularity. However, as we mentioned in Section 5 (Discussion and Conclusion), averaging in pixel space might lead to off-manifold “blurry” reconstructions, which could possibly be tackled by applying posterior trees in the latent space of an autoencoder such as VQ-VAE, an intriguing direction for future work.
**Novelty**
While our approach is conceptually indeed a hierarchical extension of \[45\], as the reviewer noted, our work has several non-trivial contributions compared to \[45\], some of which are unique to the setting of working with tree structures:
* Proposing an efficient architecture that enables predicting **diverse** solutions (and their likelihoods) compared to the fully shared (likelihood-free) architecture of \[45\], as we verify in Appendix A (Figs. A1-A2).
* Preventing tree collapse and stabilizing the optimization steps through a novel adaptive regularization scheme and a weighted sampler that ensures proper leaf optimization with Adam (Appendices B-C, Figs. A3-A4).
* Demonstrating, to the best of our knowledge, the first successful application of MCL with diverse results for high-resolution image-to-image regression tasks.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: Thank you for your response. I am convinced with the response to my questions, further the response provided by the authors to Reviewer VHHJ helped me in further clarification. I am increasing my score accordingly! | null | null | Rebuttal 1:
Rebuttal: Updated runtime table in seconds
Pdf: /pdf/25fe6a74ceb9d960e6beb85ea192cb6d004a607d.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Direct Unlearning Optimization for Robust and Safe Text-to-Image Models | Accept (poster) | Summary: The paper proposes a new method for unlearning diffusion-based generative models. The proposed method uses the basic idea of reinforcement learning with human feedback. To collect pairs of images to be unlearned and corrected for preference optimization, the author uses SDEdit to modify the NSFW content. Experimental results show that the proposed model successfully unlearns harmful content from the generated images.
Strengths: The use of SDEdit for curating semantically similar paired images sounds novel and effective.
Weaknesses: - Although this paper shows some promising results about diffusion model unlearning, the main technical contribution is limited to a combination of SDEdit and DiffDPO.
- It is clearly an overstatement that the preference optimization does not affect the quality of the unrelated concepts (in line 149).
- I have concerns about the reproducibility of this work.
- Minor
- The main text and appendix have multiple typos and require careful proofreading.
- Typo: Eq 17 → Eq 11 in the main text
- Typo: line 275: lambda?
- Is DCO in line 285 a typo?
Technical Quality: 3
Clarity: 3
Questions for Authors: - The reason why the gradient ascent term reduces the model's denoising ability, even in the presence of the KL term, is unclear. Could you provide more details on how adding a L_prior loss, which minimizes the difference between denoising directions at time T, can address this issue? The explanation is insufficient to grasp the logic behind the loss.
- What does ‘prior’ mean in ‘prior preservation performance’? If LPIPS measures the perceptual similarity between images, can we just use perceptual similarity? Since the prior in ‘prior preservation performance’ and the prior in ‘L_prior’ are different, it’s a bit confusing to follow section 4.3.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Limitations are well-addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review and for recognizing the novelty and effectiveness of our approach in using SDEdit for curating paired images.
---
> *[W1] Novelty of Technical Contribution*
We want to clarify that our main contribution is proposing **a new image-based unlearning framework** to overcome the limitations of the previous prompt-based approach, not just combining two well-known techniques, SDEdit and DPO.
As our framework is new, we provide **a new perspective on unlearning by formulating the diffusion models unlearning problem as preference optimization**. This perspective naturally arose from our image-based approach and offers a fresh paradigm for tackling unlearning challenges.
We then introduce **an output-preserving regularization** to specifically preserve unrelated features, a crucial aspect not addressed in standard preference optimization. Additionally, we adapt preference optimization for unlearning by using SDEdit to generate paired data, eliminating the need for human labeling.
In summary, our work introduces a novel image-based unlearning framework, reframes unlearning as preference optimization, and incorporates output-preserving regularization. These innovations collectively offer a robust, efficient approach to diffusion model unlearning, addressing key limitations of existing methods.
---
> *[W2] How can we ensure that DUO does not affect unrelated features?*
We acknowledge that claiming DUO has absolutely no impact on unrelated features would be an overstatement. However, our comprehensive evaluations demonstrate that DUO significantly reduces the impact on unrelated features compared to existing baselines.
As detailed in the global rebuttal (paragraph 3 and Fig. R3), we conducted comprehensive evaluations of DUO's prior preservation performance. Our results demonstrate that DUO maintains model prior for unrelated concepts while effectively removing unwanted content.
In our revised manuscript, we will provide a more nuanced discussion, emphasizing that DUO significantly reduces impact on unrelated features compared to existing baselines, while still acknowledging the potential for minor, unintended effects.
---
> *[W3] Reproducibility*
We have taken several steps to address reproducibility concerns:
- **Robustness to Random Seeds**
We have conducted experiments using different random seeds (1 to 4) to demonstrate the stability of our results. Figure R5 in the global rebuttal PDF shows that DUO's Pareto curve is not sensitive to random seed variation and consistently outperforms baseline methods. Figure R6 provides qualitative evidence of similar unlearning results across different seeds.
- **Code Release**
To ensure full reproducibility and contribute to the field's advancement, we commit to releasing our unlearning and evaluation code upon publication.
We believe these measures significantly enhance the reproducibility of our work. We welcome any suggestions for additional experiments or information that could further address reproducibility concerns.
---
> *[W4] Minor errors*
Thank you for bringing these typos and grammatical errors to our attention. We have made the following corrections:
- L104: DSO → DUO
- L175, 180, 506: Eq 17 → Eq 11
- L275: lambda → output-preserving regularization
- L285: DCO → DPO
- L271: Ulike → Unlike
- Figure 6: Quanlitative → Qualitative
We appreciate your careful review. These corrections will be reflected in the final manuscript.
---
> *[Q1] Logic Behind L_prior Loss*
The L_prior loss, i.e., output preservation regularization, was designed to maintain the model's prior effectively, even when the KL divergence regularization is weak. While DPO's KL divergence regularization theoretically prevents significant deviation from the pretrained model's distribution, there's a trade-off:
- Strong KL regularization hinders effective unlearning.
- Weak KL regularization can cause the model to lose its denoising capability due to the gradient ascent term.
L_prior addresses this by **enforcing output preservation for specific inputs**, particularly those unrelated to unsafe concepts.
Consider $x_t^- = \sqrt{\alpha_t} x_0^- + \sqrt{1 - \alpha_t} \epsilon$, where $x_0^-$ represents an unsafe image and $\epsilon \sim N(0, I)$. Our goal is to unlearn features related to $x_0^-$ while maintaining the model's behavior for features related to $\epsilon$. L_prior achieves this by preserving outputs for $x_T = \epsilon$, effectively maintaining the model prior for unrelated concepts while allowing unlearning of unsafe content.
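The noising step referenced above follows the standard diffusion forward process. A minimal NumPy sketch, purely for illustration (the schedule value and function name are assumptions, not the authors' code):

```python
import numpy as np

def forward_noise(x0: np.ndarray, alpha_t: float, seed=None):
    """Sample x_t = sqrt(alpha_t) * x0 + sqrt(1 - alpha_t) * eps, eps ~ N(0, I)."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_t) * x0 + np.sqrt(1.0 - alpha_t) * eps
    return xt, eps

# At alpha_t = 1 the sample equals the clean image x0;
# at alpha_t = 0 it is pure noise eps (the x_T = eps case used by L_prior).
x0 = np.ones((4, 4))
xt_clean, _ = forward_noise(x0, alpha_t=1.0, seed=0)
xt_noise, eps = forward_noise(x0, alpha_t=0.0, seed=0)
```

The `x_T = eps` endpoint is exactly the input at which L_prior pins the model's output, which is why features tied to the noise rather than to the unsafe image are preserved.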
Figure 8 demonstrates that L_prior's effect becomes more pronounced as beta decreases, highlighting its importance in maintaining model performance on safe concepts.
---
> *[Q2] Meaning of Prior Preservation (1-LPIPS)*
Prior preservation is a crucial objective in unlearning, aiming to maintain the model's performance on concepts unrelated to the unlearned content.
To quantify this, we measure how much the model's output changes for unrelated concepts after unlearning. This is done by generating images using MS COCO prompts with both the pretrained and unlearned models. We use LPIPS to measure the perceptual difference between these generated images. Prior preservation is defined as 1-LPIPS. A higher value indicates that the unlearned model generates perceptually similar images to the pretrained model when given the same prompts and initial noise.
This metric helps us ensure that our unlearning method effectively removes unwanted content without significantly impacting the model's performance on unrelated topics.
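This evaluation protocol can be sketched as follows, with a simple per-pixel distance standing in for the learned LPIPS network (the stand-in distance and helper names are illustrative assumptions, not the actual metric or the authors' code):

```python
import numpy as np

def perceptual_distance(img_a: np.ndarray, img_b: np.ndarray) -> float:
    # Per-pixel stand-in for LPIPS; the real metric uses a learned network.
    return float(np.mean((img_a - img_b) ** 2))

def prior_preservation(imgs_pretrained, imgs_unlearned) -> float:
    # Average 1 - distance over image pairs generated from the same prompts
    # and initial noise; higher means the unlearned model stays closer to the
    # pretrained model on unrelated concepts.
    dists = [perceptual_distance(a, b)
             for a, b in zip(imgs_pretrained, imgs_unlearned)]
    return 1.0 - float(np.mean(dists))

# Identical outputs give the maximal preservation score of 1.0.
imgs = [np.random.default_rng(i).random((8, 8, 3)) for i in range(3)]
score_same = prior_preservation(imgs, imgs)
score_diff = prior_preservation(imgs, [np.zeros_like(x) for x in imgs])
```

In practice the distance would be computed with the learned LPIPS network (e.g., the `lpips` Python package on images normalized to its expected range), not this per-pixel proxy.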
To avoid confusion in the final manuscript, we will change the notation from L_prior to L_output for the output-preserving regularization term, distinguishing it clearly from the prior preservation metric.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed responses with additional experiments. After reading the other reviews and rebuttal, I have decided to raise my score to 6. I hope a future revision will address the potential limitations and include additional experiments.
---
Reply to Comment 1.1.1:
Comment: We are pleased that we have been able to address the reviewer's concerns. We sincerely appreciate the time and effort the reviewer dedicated to reviewing our paper and providing valuable feedback. We will incorporate these insights, including the new experiments presented in this rebuttal, into our final manuscript. Thank you for helping us improve the quality of our work. | Summary: This paper proposes a diffusion unlearning optimization framework to achieve NSFW visual content removal in T2I models. Specifically, the authors develop an image-based unlearning method that utilizes curated paired image data (unsafe images and their corresponding safe images generated by the SDEdit model) for preference optimization. Additionally, they introduce a regularization term to preserve the denoising capability of the diffusion model. This work claims to be the first to apply preference optimization to the unlearning problem. Experimental results demonstrate that their method effectively removes unsafe visual concepts without significant performance degradation on unrelated topics.
Strengths: 1. The model is well-explained, and the paper is well-written.
2. The novel perspective of applying preference optimization to the unlearning problem is interesting and promising.
3. The red teaming results are encouraging.
Weaknesses: 1. The authors only fine-tuned Stable Diffusion v1.4 for evaluation, which employs U-Net for denoising. However, other popular text-to-image (T2I) diffusion models, such as PixArt[1] and Stable Diffusion v3, which use Transformers for denoising, are not mentioned. It is somewhat limited in discussing the proposed method on only one T2I model. Can the proposed method be applied to the above models?
2. The unsafe concept may introduce some unrelated visual features when generating images, such as image details. How can the authors ensure that the proposed method does not affect these unrelated features when removing unsafe content?
[1] PixArt-$\alpha$: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis. ICLR 2024
Technical Quality: 4
Clarity: 3
Questions for Authors: Please refer to the above weakness section.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: Refer to previous sections.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for acknowledging our paper's clarity and the novelty of connecting preference optimization to unlearning.
---
> *[W1] Applicability of DUO to Diffusion Transformers*
We appreciate your inquiry about DUO's applicability to other architectures. We have successfully extended our evaluation to include Transformer-based models. As shown in Figures R1 and R2 in the global rebuttal PDF, DUO effectively removes unwanted concepts in Stable Diffusion 3, which employs a Transformer architecture (mmDiT).
We will incorporate these findings into our revised manuscript to provide a more comprehensive evaluation of DUO's capabilities across different model architectures.
---
> *[W2] Impact of DUO on Unrelated Features*
We acknowledge that claiming DUO has absolutely no impact on unrelated features would be an overstatement. However, our comprehensive evaluations demonstrate that DUO significantly reduces the impact on unrelated features compared to existing baselines.
As detailed in the global rebuttal (paragraph 3 and Fig. R3), we conducted comprehensive evaluations of DUO's prior preservation performance. Our results demonstrate that DUO maintains model prior for unrelated concepts while effectively removing unwanted content.
In our revised manuscript, we will provide a more nuanced discussion, emphasizing that DUO significantly reduces impact on unrelated features compared to existing baselines, while still acknowledging the potential for minor, unintended effects.
---
Rebuttal Comment 1.1:
Title: Reply to Authors
Comment: Thanks for your reply. The authors have addressed my concerns. After considering the comments from other reviewers and your reply, I plan to keep my rating.
---
Reply to Comment 1.1.1:
Comment: We are glad to see that the reviewer's concerns have been addressed. We appreciate the time and effort the reviewer dedicated to reviewing our paper and offering valuable feedback. | Summary: The authors introduce synthesized image data and preference optimization for concept unlearning in diffusion models. Additionally, they consider the regularization of model preservation performance to ensure a balanced approach.
Strengths: 1. The presentation is clear and well-structured.
2. There is a solid theoretical basis for the proposed preference optimization.
Weaknesses: 1. The efficacy of the proposed method is heavily dependent on the quality and diversity of the synthesized image pairs. The utilization of a small dataset, consisting of only 64 pairs, raises significant concerns about the potential for overfitting. To enhance the credibility of the findings, it is imperative for the authors to conduct comprehensive ablation studies. These studies should explore the impact of different sets of synthesized image pairs on model performance, potentially revealing critical insights into the method's robustness and generalizability.
2. The current robustness assessments of the study are limited by the employment of relatively weak attack scenarios. For a more robust evaluation of the unlearned models, it is crucial to incorporate a stronger, white-box attack methodology. The use of UnlearnDiffAtk [1], a commonly applied tool for assessing the robustness of unlearned diffusion models, is recommended. This approach would not only adhere to contemporary research standards but also significantly bolster the validity of the results. Furthermore, detailed reporting of the attack success rate (ASR) associated with UnlearnDiffAtk would provide a more precise quantification of the models' resilience against sophisticated attacks.
[1] "To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy to Generate Unsafe Images ... For Now", ECCV 2024.
Technical Quality: 2
Clarity: 3
Questions for Authors: Check the weaknesses section for more details.
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Effectiveness of the proposed method heavily relies on the synthesized image pairs and different synthesized image pairs might cause high performance variance.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: First, thank you for acknowledging that our paper is clear and well-structured.
---
> *[W1] Impact of the number of synthesized image pairs on DUO performance*
Thank you for your constructive comment. Figure R7 from the global rebuttal PDF demonstrates how varying the number of synthesized image pairs affects the Pareto curve. The figure clearly shows that when the number of pairs is less than 64, there is a noticeable improvement in the Defense Success Rate. However, increasing the number beyond 64 pairs does not yield significant changes in the Pareto curve.
This analysis suggests that 64 pairs provide a good balance between performance and computational efficiency for our method. We will include this discussion in our final manuscript to address concerns about the dataset size and its impact on model performance.
---
> *[W2] Stronger White-Box Red Teaming results*
We appreciate your valuable suggestion to incorporate more robust evaluation techniques. We have conducted additional experiments using UnlearnDiffAtk [1], a state-of-the-art white-box attack method designed for assessing the robustness of unlearned diffusion models.
As illustrated in Figure R4 of our global rebuttal PDF, **DUO achieves Pareto-optimal performance compared to existing baselines** when subjected to UnlearnDiffAtk. This means that DUO provides the best trade-off between maintaining model performance and resisting attacks, outperforming other methods in both aspects simultaneously.
We will incorporate a detailed analysis of the UnlearnDiffAtk results in our final manuscript, including qualitative comparisons to baseline methods.
[1]: Zhang et al., To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Unsafe Images ... For Now https://arxiv.org/abs/2310.11868
---
Rebuttal Comment 1.1:
Comment: The authors' response has addressed my concerns, and I am raising my rating to 6. I recommend including the newly conducted experiments in the revision to enhance readers' understanding and provide additional insights.
---
Reply to Comment 1.1.1:
Comment: We are pleased that we have been able to address the reviewer's concerns. We sincerely appreciate the time and effort the reviewer dedicated to reviewing our paper and providing valuable feedback. We will incorporate these insights, including the new experiments presented in this rebuttal, into our final manuscript. Thank you for helping us improve the quality of our work. | Summary: In this paper, the authors address the issue of adversarial attacks that cause diffusion models to generate inappropriate image content, critiquing previous prompt-based unlearning work for unlearning harmful prompts while remaining vulnerable to adversarial prompts. Instead, they propose an image-based unlearning technique to remove unsafe visual features while retaining the ability to generate images for safe concepts. For training, they propose a method to generate pairs of safe and unsafe content and employ preference optimization to encourage the diffusion model to generate the safe counterpart. Their evaluation shows that the model is better at defending against adversarial attacks such as SneakyPrompt, Ring-A-Bell, and Concept Inversion.
Strengths: - The paper is structured in a clear and logical manner, making it easy to follow. Both the introduction and the related work sections are well-written, providing comprehensive context for the problem and a clear overview of previous work.
- The necessity of addressing this problem is convincingly justified, and the proposed solution is closely aligned with this motivation. The derivation of the proposed method is elaborated in detail and is technically sound.
- The authors demonstrate strong results in the evaluation, convincingly showing a successful defense against adversarial attacks. They present an ablation study to prove the effectiveness of the output-preserving regularization.
Weaknesses: - There is a missing detail in experiments setup. The authors mention in line 194 that they decompose the violence concept into four categories and apply DUO to each of them and then merge the final models. If I understand correctly, the authors apply DUO independently for the four concepts resulting in four final models with LoRA. Please clarify how merging works to combine the four models.
Technical Quality: 3
Clarity: 4
Questions for Authors: Please clarify the missing detail
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors discuss the limitation in the conclusion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: First, thank you for your recognition of our paper's clear structure and strong evaluation results.
---
> *[W1] Construction of Violence Unlearning LoRA*
We appreciate the opportunity to clarify this process. As you correctly surmised, we applied DUO independently to four subcategories of violence: "blood", "suffering", "gun", and "horror". For each concept, we trained a separate LoRA using DUO.
To create the final model, we merged these individual LoRAs. Specifically, if we denote the pretrained model weight as $W$, and the LoRA matrices trained on the $i$-th concept as $A_i$ and $B_i$, the merged weight matrix can be expressed as:
$\tilde{W} = W + \sum_i B_i A_i$
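This merge of independently trained LoRAs into a single weight matrix can be sketched in NumPy (layer and rank sizes are hypothetical; this is a minimal illustration, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 16, 16, 4  # hypothetical layer dimensions and LoRA rank

W = rng.standard_normal((d_out, d_in))  # pretrained weight matrix

# One low-rank (B_i, A_i) pair per unlearned sub-concept
# (e.g., "blood", "suffering", "gun", "horror").
loras = [(rng.standard_normal((d_out, r)), rng.standard_normal((r, d_in)))
         for _ in range(4)]

# Merged weight: W_tilde = W + sum_i B_i A_i
W_tilde = W + sum(B @ A for B, A in loras)
```

Each `B_i @ A_i` is a rank-`r` update, so the merged matrix adds all concept-specific updates on top of the shared pretrained weight in one pass.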
This approach allows us to comprehensively address the violence concept while maintaining the efficiency of LoRA-based fine-tuning. We will provide a detailed explanation of this merging process in the appendix of our final manuscript to ensure full clarity and reproducibility. | Rebuttal 1:
Rebuttal: We thank the reviewers for their insightful feedback, which we will integrate into the final manuscript.
We appreciate the acknowledgment of our paper's clear structure and presentation (Ub4z, tvVC, rmhF), the novelty of connecting preference optimization to unlearning (rmhF), the use of SDEdit for curating paired images (huMh), and our robust results in defending against adversarial attacks (Ub4z, rmhF).
---
### Applicability of DUO to Diffusion Transformers
**Reviewer *rmhF* asked about DUO's applicability to diffusion models beyond the U-Net architecture, e.g., diffusion transformers.**
We appreciate this important question. We have extended our evaluation to include Transformer-based models. As shown in Figures R1 and R2 of the global rebuttal PDF, **we successfully applied DUO to Stable Diffusion 3 (SD3)**, which uses the transformer-based mmDiT architecture. The evaluation process for SD3 was identical to that used for SD1.4v. In the Pareto curve, each point from left to right represents $\beta \in \{100, 250, 500, 1000, 2000\}$, with a consistent learning rate of 3e-5 across all experiments.
Additionally, FID scores on MS COCO demonstrate that applying DUO to SD3 maintains comparable fidelity to the pretrained model:
| SD3 | DUO ($\beta=500$) | DUO ($\beta=250$) |
|:-----:|:-----:|:-----:|
| 21.83 | 21.26 | 20.49 |
These results confirm DUO's effectiveness across different diffusion model architectures, including Transformer-based models like SD3.
---
### Impact of DUO on Unrelated Features
**Reviewers *rmhF* and *huMh* asked how DUO affects unrelated features when removing unsafe content.**
We appreciate the opportunity to clarify this important aspect of our work. Our "prior preservation" metric (1-LPIPS) specifically measures the model's ability to maintain unrelated feature generation. This is evaluated by comparing images generated from MS COCO prompts using both pretrained and unlearned models. A high prior preservation score indicates that the unlearned model produces perceptually similar images to the pretrained model when given the same prompts and noise.
To further validate our approach, we evaluated the model's ability to generate visually similar but safe concepts. For example, we compare the generation of red images (e.g., ketchup, strawberry jam) after removing the Violence concept, which is closely related to "Blood". The table below presents the mean and standard deviation of LPIPS scores between 128 images generated from the unlearned model and the pretrained model. Lower scores indicate less impact on unrelated features. These results demonstrate that DUO effectively maintains the capability to generate visually similar but unrelated concepts compared to existing methods.
| unlearned concept | safe concept | ESD | UCE | SPM | DUO (black box) | DUO (white box) |
|:-----------------:|:------------:|:------:|:------:|:------:|:---------------:|:---------------:|
| **Nudity** | Woman | 0.58 ± 0.11 | 0.42 ± 0.16 | **0.31 ± 0.15** | *0.33 ± 0.12* | 0.55 ± 0.17 |
| | Man | 0.58 ± 0.10 | 0.31 ± 0.17 | *0.15 ± 0.13* | **0.14 ± 0.09** | 0.20 ± 0.13 |
| **Violence** | Ketchup | 0.69 ± 0.15 | 0.51 ± 0.16 | **0.20 ± 0.15** | *0.23 ± 0.12* | 0.35 ± 0.14 |
| | Tomato sauce | 0.58 ± 0.19 | 0.38 ± 0.16 | **0.11 ± 0.13** | *0.18 ± 0.12* | 0.28 ± 0.12 |
| | Strawberry jam | 0.56 ± 0.13 | 0.42 ± 0.15 | **0.13 ± 0.12** | *0.20 ± 0.11* | 0.31 ± 0.12 |
**bold**: first place
*italic*: second place
Additionally, Figure R3 in the PDF provides qualitative evidence of DUO's capability to maintain the capability to generate unrelated features.
However, we acknowledge that claiming DUO has absolutely no impact on unrelated features would be an overstatement. In our final manuscript, we will provide a more nuanced discussion, emphasizing that DUO significantly reduces impact on unrelated features compared to existing baselines, while still acknowledging the potential for minor, unintended effects.
Pdf: /pdf/23a4e106470e9971b8b1a44bcff7fba70a9ec59c.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
AverNet: All-in-one Video Restoration for Time-varying Unknown Degradations | Accept (poster) | Summary: This paper studies the time-varying unknown degradations in videos and proposes an all-in-one video restoration network to recover corrupted videos. Specifically, the network consists of two modules named PGA and PCE, which are designed to address the pixel shifts issue caused by time-varying degradations and to tackle multiple unknown degradations, respectively. Through the collaboration of them, the network could effectively handle the time-varying unknown degradations.
Strengths: 1. The problem of time-varying unknown degradations studied in this work is practical and challenging. In real-world scenarios, the degradations in videos dynamically change over time, and their types and levels are always unknown.
2. Compared with classic video restoration methods that deal with one specific degradation, the proposed method could handle time-varying and multiple unknown degradations with one model.
3. The paper comprehensively discussed existing video restoration methods and all-in-one image restoration methods, as well as their differences from the proposed method.
Weaknesses: 1. As shown in Table 2, although the proposed method could effectively handle time-varying degradations with different variation intervals, the variation intervals of the test sets are fixed. Could the proposed method handle degradations with variable intervals?
2. The experiments are only conducted on the test sets with combined degradations. How about the performance on single type of degradation with variable levels?
3. There is a recent method [1] that deals with multiple degradations. What are the differences between this method and the proposed one? Additionally, the authors should include it in the related works.
[1] Yang, et al. Video adverse-weather-component suppression network via weather messenger and adversarial backpropagation. ICCV, 2023.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see the weaknesses.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Potential impacts and limitations have been discussed in the supplementary material.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1:Evaluation on degradations with variable intervals.**
As suggested, we conduct new experiments by synthesizing new test sets with variable degradation intervals. The results show that our method effectively handles degradations with variable intervals. Specifically, the test sets are synthesized based on DAVIS-test and the intervals are randomly sampled from [t-v, t+v].
Table 1. Quantitative results on the test sets with variable intervals.
| Method | PSNR (t=6, v=3) | SSIM (t=6, v=3) | PSNR (t=12, v=6) | SSIM (t=12, v=6) |
| ------- | ----------- | ---------- | ----------- | ---------- |
| RVRT | 33.8849 | 0.9306 | 34.3231 | 0.9330 |
| AverNet | **34.0313** | **0.9338** | **34.3317** | **0.9344** |
**Q2: The performance on single type of degradation.**
As suggested, we synthesize new test sets, each containing only a single type of degradation, and evaluate the models on these sets. The results, as shown in Table 2, demonstrate that AverNet consistently outperforms RVRT across all types of degradation.
Table 2. Quantitative comparisons on single type of degradation.
| Method | PSNR (Noise) | SSIM (Noise) | PSNR (Blur) | SSIM (Blur) | PSNR (Compression) | SSIM (Compression) |
| ------- | --------- | ---------- | --------- | ---------- | ----------- | ---------- |
| RVRT | 36.35 | 0.9603 | 35.65 | 0.9545 | 34.49 | 0.9431 |
| AverNet | **36.88** | **0.9641** | **36.65** | **0.9618** | **34.63** | **0.9466** |
**Q3: The differences between the proposed AverNet and ViWS-Net.**
The differences between AverNet and ViWS-Net [1] are discussed below and will be included in the related works. First, AverNet aims to handle time-varying unknown degradations, whereas ViWS-Net specifically focuses on a single type of degradation within one video. Additionally, AverNet employs prompt-guided modules to conditionally restore videos, while ViWS-Net explicitly optimizes a weather discriminator to classify the degradations and guide the restoration.
[1] Video Adverse-Weather-Component Suppression Network via Weather Messenger and Adversarial Backpropagation.
---
Rebuttal Comment 1.1:
Title: After reading the response and all comments, I decide to raise my score.
Comment: Thank you for providing the additional experiments and comprehensive explanation. The feedback has successfully addressed my previous concerns, and the additional experiments further demonstrate the flexibility and effectiveness of the proposed method. As the first study to address time-varying unknown degradations in videos, this paper presents an effective solution for restoring corrupted videos with a single model. I am confident that this work will make a significant contribution to the field of video restoration and be of great benefit to the community. Accordingly, I have raised the score.
---
Rebuttal 2:
Title: Thanks for the reviewer's reply!
Comment: Dear Reviewer 2PV4,
Thank you for your positive feedback and for raising your score. We appreciate your approval of our revisions and are glad that the additional experiments and discussions have addressed your concerns. Your suggestions throughout the review process have been invaluable. | Summary: The authors propose a prompt learning based framework for all-in-one video restoration with time-varying degradations. Their work employs a prompt-guided alignment module to overcome pixel shifts caused by time-varying degradations. Multiple unknown degradations are learned through a prompt-conditional module.
Strengths: - The authors have a solid motivation to address time-varying degradations in videos in a unified restoration setting
- Their proposed method consistently outperforms prior work in their considered settings
Weaknesses: - The data pipeline used in this study appears overly simplistic and disrupts the content dependencies of corruptions like noise, which varies with overexposed or underexposed frames. Instead of merely adding random degradation types to random video frames, it would be more realistic to simulate the severity of these degradations over time (e.g., increasing JPEG compression, blur, or noise), thereby preserving temporal dependencies.
- Merely increasing the variation intensity by reducing the number of frames per degradation is too simplistic. As mentioned earlier, increasing the severity of degradation is more beneficial.
- Regarding the practicality of the proposed method, the authors do not provide evaluations on realistic degraded videos, such as VideoLQ or NoisyCity4, to show whether their time-varying degradation model can compete with prior work in this more complex setting.
- The efficiency comparison in Table 1 appears inaccurate, as the number of parameters for both PromptIR and AIRNet is incorrect. There is a significant discrepancy between the officially reported numbers and those listed in Table 1.
Technical Quality: 3
Clarity: 3
Questions for Authors: - How does the model performance change under more complex degradation pipelines used in works such as BSRGAN or Real-ESRGAN? It is also not clear why these degradation pipelines were not at least considered as a starting point.
- How does the model perform when adding multiple degradations to the same frame snippets, or when having a collection of different degradations per frame snippet, instead of sequentially adding different single degradations to the video snippets?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: N/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1&Q2:Increase the severity of degradations over time.**
As suggested, we conduct new experiments by synthesizing four test sets with progressively worsening degradations, and the results demonstrate the effectiveness of our method. Specifically, in the first test set, different types of degradations are gradually introduced, with their intensities increasing over time. In the other three test sets, only one type of degradation is added, with its intensity increasing over time. Since the models have seen various degradations and their variations during training, we directly apply them to these test sets without retraining. The results are presented in the table below. From the table, one can observe that our AverNet is effective in handling various degradation changes and outperforms RVRT in the settings where degradations worsen over time.
Table 1. Quantitative results on increasing degradation severity over time. Multiple Degradations denotes that noise, blur, and compression are gradually added and their severity worsens with time. Noise, Blur, and Compression each denote a single type of degradation that worsens with time.
|DAVIS-test|Multiple Degradations||Noise||Blur||Compression||
|-|-|-|-|-|-|-|-|-|
|Metric|PSNR|SSIM|PSNR|SSIM|PSNR|SSIM|PSNR|SSIM|
|RVRT|33.26|0.9201|35.81|0.9541|28.78|0.8264|28.99|0.8613|
|AverNet|**33.41**|**0.9238**|**36.38**|**0.9577**|**29.24**|**0.8389**|**29.31**|**0.8804**|
|**Set8**|**Multiple Degradations**||**Noise**||**Blur**||**Compression**||
|Metric|PSNR|SSIM|PSNR|SSIM|PSNR|SSIM|PSNR|SSIM|
|RVRT|29.63|0.8623|33.64|0.9457|26.99|0.7864|28.69|0.8794|
|AverNet|**29.77**|**0.8656**|**33.88**|**0.9482**|**27.19**|**0.7924**|**28.89**|**0.8876**|
**Q3:Evaluations on realistic degraded videos.**
As we could not find the NoisyCity4 dataset, we conduct evaluations only on the realistic video dataset VideoLQ [1]. The results in Table 2 show that our AverNet is more effective in dealing with realistic and complex degradations.
Table 2. Quantitative results on realistic video dataset VideoLQ.
| Method | RVRT | AverNet |
| -------------- | ------ | ---------- |
| NIQE ↓ | 4.6602 | **4.6234** |
| PI ↓ | 3.6491 | **3.6464** |
| CNNIQA **↑** | 0.5470 | **0.5487** |
| HyperIQA **↑** | 0.4547 | **0.4560** |
| CLIPIQA **↑** | 0.3800 | **0.3899** |
[1] Investigating Tradeoffs in Real-World Video Super-Resolution.
**Q4:Parameters for PromptIR and AirNet.**
To comprehensively compare the models, we use the PyTorch model profiling API THOP to calculate the parameters. THOP counts parameters using hooks on modules, which may result in lower values than those officially reported. We recalculate and update the results, i.e., PromptIR has 35.60M parameters, and AirNet has 8.93M parameters.
**Q5:Clarification on our degradation pipeline and that of BSRGAN and Real-ESRGAN. The performance under pipelines of BSRGAN and Real-ESRGAN.**
We argue that the degradation pipelines used in BSRGAN [1] and Real-ESRGAN [2] are not necessarily more complex than ours; in fact, our pipeline has a comparable level of complexity to theirs. To provide a clearer comparison, we summarize the key components of our pipeline alongside those of BSRGAN and Real-ESRGAN in Table 3 below. Additionally, our pipeline is designed for all-in-one video restoration that addresses time-varying unknown degradations in videos, whereas those pipelines were developed for blind image super-resolution.
Table 3. Comparison of degradation pipelines.
| Degradation Types | Blur | Downsampling | Noise | Compression |
| ----------------- | --------------------------------- | ------------ | ------------------------------------------------------------ | --------------------------------------- |
| BSRGAN | Gaussian Blur| Resize| Gaussian Noise, Processed camera sensor Noise| JPEG Compression |
| Real-ESRGAN | Gaussian Blur, 2D sinc filter |Resize|Gaussian Noise, Poisson Noise, Color Noise, Gray Noise|JPEG Compression|
| Ours | Gaussian Blur, Resizing Blur |-|Gaussian Noise, Poisson Noise, Speckle Noise |JPEG Compression, Video Compression |
As suggested, we synthesize new test sets based on the pipelines of BSRGAN and Real-ESRGAN to evaluate the performance of our models. The results, presented in Table 4, show that our AverNet achieves comparable or even superior performance on both test sets synthesized through BSRGAN and Real-ESRGAN. Note that the downsampling operation was removed to maintain the frame size.
Table 4. Quantitative comparisons on the test sets synthesized through BSRGAN and Real-ESRGAN.
| Pipeline | BSRGAN | | Real-ESRGAN | |
| -------- | --------- | ---------- | ----------- | ---------- |
| Metric | PSNR | SSIM | PSNR | SSIM |
| RVRT | 26.31 | **0.7004** | 25.36 | **0.6036** |
| AverNet | **26.34** | 0.6977 | **25.37** | 0.5971 |
[1] Designing a Practical Degradation Model for Deep Blind Image Super-Resolution.
[2] Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data.
**Q6:Clarification on the degradation pipeline.**
Actually, our pipeline adds multiple degradations simultaneously to the same frame snippets. Specifically, each type of degradation has a 0.55 probability of being sampled and applied to each snippet. In other words, each frame snippet in our test sets usually involves multiple degradations. Consequently, our experiments indeed evaluate the models on test sets containing multiple degradations per snippet, rather than just a single degradation per snippet.
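As a rough illustration of the sampling scheme described above (the 0.55 probability comes from the text; the specific degradation names and function name are placeholders, not the authors' implementation), each degradation type is drawn independently per snippet:

```python
import random

DEGRADATIONS = ["noise", "blur", "compression"]  # illustrative subset only


def sample_snippet_degradations(p=0.55, rng=None):
    """Independently include each degradation type with probability p,
    so a snippet frequently carries several degradations at once."""
    rng = rng or random.Random()
    return [d for d in DEGRADATIONS if rng.random() < p]


# Over many snippets, multi-degradation snippets are common.
snippets = [sample_snippet_degradations(rng=random.Random(seed)) for seed in range(50)]
```

With three degradation types at p = 0.55, most snippets receive at least one degradation and a majority receive two or more, which matches the claim that the test sets contain multiple degradations per snippet.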
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' responses during the rebuttal process. However, my primary concern about the data generation pipeline remains unresolved. The results in Table 2 and Table 4 are still unconvincing. While I agree that addressing the TUD problem is a crucial next step, the current paper does not sufficiently compare the proposed generation pipeline with prior approaches, nor does it seem to be thoroughly developed. As a result, I must maintain my current score.
---
Rebuttal 2:
Title: Clarification on the data generation pipeline.
Comment: Dear Reviewer Zzy9,
We appreciate your approval of the TUD problem raised in the paper and would like to address your concerns as follows.
As you noted, addressing the TUD problem is a crucial next step in the field of video restoration. To study this problem, we developed the pipeline to simulate TUD data. The previous pipelines of BSRGAN and Real-ESRGAN are not applicable, as their goal is to generate images with mixed degradations for blind image super-resolution. In contrast, our pipeline aims to synthesize videos with time-varying degradations, which is well aligned with our research purpose, i.e., all-in-one video restoration for the TUD problem. Experimental results demonstrate that our method can effectively address the TUD problem compared with the baselines.
We hope these clarifications could address your concerns. Thank you once again for your feedback and for helping us improve our work. | Summary: The paper considers the problem of all-in-one restoration in videos, which is fundamentally different from images due to time-varying notion of degradations affecting the videos. The paper proposes prompt based modules to condition the restoration of frames on.
Strengths: **S1.** The paper extends the problem of all-in-one restoration from the image domain to the video setting.
**S2.** A recurrent prompt-based architecture is proposed for the said purpose.
**S3.** Two datasets are synthesized, due to lack of such datasets, based on seven degradations with varying intensity of degradation over time.
Weaknesses: **W1.** In longer videos (Set8), the performance of RVRT is very comparable to that of the proposed AverNet, even though RVRT does not include any prompts (implicit or explicit) to condition the restoration on.
**W2.** The paper lacks thorough exploration of the problem. Considering weather-induced degradations as base (instead of just DAVIS/Set8), and synthesizing the video datasets then would have been a more challenging problem to evaluate the effectiveness of the prompts in longer videos.
**W3.** It would benefit to include a baseline that considers the problem of all-in-one video restoration (or multiple degradations with one model) since those architectures are designed to condition the restoration procedure on the degradation information [1], [2]. I agree with the reasoning about [2] in line 84 onwards, however results on [1], and/or [2] would indicate the importance of the proposed formulation (i.e., the conditioning should take into account/model time-varying degradations).
[1] Video Adverse-Weather-Component Suppression Network via Weather Messenger and Adversarial Backpropagation
[2] Genuine Knowledge from Practice: Diffusion Test-Time Adaptation for Video Adverse Weather Removal
Technical Quality: 3
Clarity: 3
Questions for Authors: **Q1.** It is unclear up until the Table 5 ablation what $t$ refers to, i.e., the interval in key frames. If $t$ refers to the key frame interval, what does variation intensity mean in the "Evaluation on Different Variation Intensity" paragraph, and how is the interval in key frames used to synthesize the datasets?
**Q3.** Have the authors considered a non-prompt based setting for ablation experiments? In Table 4, all scenarios have prompts.
**Q2.** Have the authors considered a controlled setting wherein the degradations necessarily worsen with time i.e., severe noise, blur, etc. are introduced as time increases? This setting would highlight how well the prompts can adapt to the changing degradations.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations, and societal impact are discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1:Effectiveness of prompts in longer video Set8 with complex degradation changes.**
Actually, our prompt-based AverNet is more effective in dealing with complex degradation changes in the longer videos of Set8. To highlight how well the prompts can adapt to changing degradations, we synthesize new test sets with increasing degradation severity based on Set8. The results are shown in Table 1, from which one can observe that the prompts endow AverNet with a greater capacity to handle degradation changes in long videos. In detail, we synthesize four test sets where the degradations progressively worsen over time. In the first test set, different types of degradations are gradually introduced, with their intensities increasing over time. In the other three test sets, only a single type of degradation is added, with its intensity increasing over time.
Table 1. Quantitative results on Set8 with increasing degradation severity over time. Multiple Degradations denotes that noise, blur, and compression are gradually added and their severity worsens with time. Noise, Blur, and Compression each denote a single type of degradation that worsens with time.
|Set8|Multiple Degradations|| Noise||Blur||Compression||
|-|-|-|-|-|-|-|-|-|
|Metric|PSNR|SSIM|PSNR|SSIM|PSNR|SSIM|PSNR|SSIM|
|RVRT|29.63|0.8623|33.64|0.9457|26.99|0.7864|28.69|0.8794|
|AverNet|**29.77**|**0.8656**|**33.88**|**0.9482**|**27.19**|**0.7924**|**28.89**|**0.8876**|
**Q2:Effectiveness on weather-induced degradations.**
As suggested, we synthesize a new dataset with time-varying weather degradations based on the video dataset REVIDE [1]. The results in Table 2 demonstrate that our prompt-based AverNet effectively handles weather-induced degradations. Specifically, we introduce three types of weather degradations (i.e., haze, snow, and rain) through our data pipeline, with synthesis approaches similar to [2,3]. Due to time limitations, we train the models on this dataset for 200k iterations and use the fast baseline BasicVSR++ for comparison. RVRT is not compared since its training is too time-consuming to finish within the rebuttal phase.
Table 2. Quantitative results on weather-induced degradations.
|Method|PSNR|SSIM|
|-|-|-|
|BasicVSR++|37.36|0.9704|
|AverNet|**39.82**|**0.9740**|
[1] Learning to Restore Hazy Video: A New Real-World Dataset and A New Method.
[2] Blind Image Decomposition.
[3] Relationship Quantification of Image Degradations.
**Q3:Comparisons with all-in-one video restoration methods.**
As suggested, we compare our method with ViWSNet [1] and present the results in Table 3. The results show that ViWSNet struggles to handle time-varying degradations and produces unsatisfactory results. We speculate that this is because ViWSNet imposes a strong assumption through its loss function, i.e., that only a single type of degradation exists in a single video.
Note that Diff-TTA [2] is not compared since its code is not available, and we were unable to reproduce it during the rebuttal phase.
Table 3. Quantitative results of ViWSNet on DAVIS-test and Set8.
|Test Sets|DAVIS(t=6)||DAVIS(t=12)||DAVIS(t=24)||Set8(t=6)||Set8(t=12)||Set8(t=24)||
|-|-|-|-|-|-|-|-|-|-|-|-|-|
|Metric|PSNR|SSIM|PSNR|SSIM|PSNR|SSIM|PSNR|SSIM|PSNR|SSIM|PSNR|SSIM|
|ViWSNet|16.38|0.5278|16.38|0.5273|16.38|0.5289|13.73|0.3579|13.72|0.3605|13.70|0.3574|
|AverNet|34.07|0.9333|34.09|0.9339|34.28|0.9356|31.73|0.9219|31.47|0.9145|32.45|0.9189|
[1] Video Adverse-Weather-Component Suppression Network via Weather Messenger and Adversarial Backpropagation.
[2] Genuine Knowledge from Practice: Diffusion Test-Time Adaptation for Video Adverse Weather Removal.
**Q4:Clarifications on variation intensity t and key frame interval T.**
The confusion may arise from conflating the variation intensity 't' in the data pipeline with the keyframe interval 'T' in the PCE module. Specifically, the lowercase 't' controls the interval of degradation changes in the data pipeline; a smaller t corresponds to a higher variation intensity of degradations. For instance, t=6 indicates that the degradation changes every six frames. The uppercase 'T' is the hyperparameter in the PCE module that controls the keyframe interval. For example, T=12 indicates that the PCE module selects one keyframe every twelve frames.
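The distinction between the two intervals can be sketched as follows (a minimal illustration under the definitions above; the function names are hypothetical, not the authors' code):

```python
def degradation_phase(num_frames, t):
    # Lowercase t (data pipeline): the degradation changes every t frames,
    # so frames 0..t-1 share one degradation, frames t..2t-1 the next, etc.
    return [frame // t for frame in range(num_frames)]


def select_keyframes(num_frames, T):
    # Uppercase T (PCE module): one keyframe is selected every T frames.
    return [frame for frame in range(num_frames) if frame % T == 0]
```

For example, `degradation_phase(12, 6)` yields two six-frame phases, while `select_keyframes(24, 12)` picks frames 0 and 12 as keyframes.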
**Q5:Non-prompt ablation studies.**
We carry out the non-prompt ablation study as suggested. As shown in Table 4, the non-prompt baseline shows a significant drop in both PSNR and SSIM, highlighting the effectiveness of the two prompt-based modules.
Table 4. Ablation studies on the proposed prompt-based modules.
||PGA|PCE|DAVIS-test||Set8||
|-|-|-|-|-|-|-|
||||PSNR|SSIM|PSNR|SSIM|
|(A)|||32.43|0.8910|27.99|0.8404|
|(B)||✓|32.59|0.9157|30.14|0.8958|
|(C)|✓||32.99|0.9156|29.80|0.8755|
|(D)|✓|✓|34.09|0.9339|31.47|0.9145|
**Q6:Experiments where degradations worsen with time.**
As suggested, we carry out experiments under this setting. The results are presented in Table 5, from which one can observe that our prompt-based AverNet significantly outperforms RVRT, demonstrating that the prompts effectively adapt to the changing degradations.
Table 5. Quantitative results on test sets with increasing degradation severity over time. Multiple Degradations denotes that noise, blur, and compression are gradually added and their severity worsens with time. Noise, Blur, and Compression each denote a single type of degradation that worsens with time.
|DAVIS-test|Multiple Degradations||Noise||Blur||Compression||
|-|-|-|-|-|-|-|-|-|
|Metric|PSNR|SSIM|PSNR|SSIM|PSNR|SSIM|PSNR|SSIM|
|RVRT|33.26|0.9201|35.81|0.9541|28.78|0.8264|28.99|0.8613|
|AverNet|**33.41**|**0.9238**|**36.38**|**0.9577**|**29.24**|**0.8389**|**29.31**|**0.8804**|
|**Set8**|**Multiple Degradations**||**Noise**||**Blur**||**Compression**||
|Metric|PSNR|SSIM|PSNR|SSIM|PSNR|SSIM|PSNR|SSIM|
|RVRT|29.63|0.8623|33.64|0.9457|26.99|0.7864|28.69|0.8794|
|AverNet|**29.77**|**0.8656**|**33.88**|**0.9482**|**27.19**|**0.7924**|**28.89**|**0.8876**|
---
Rebuttal 2:
Title: Post Rebuttal Comments
Comment: I thank the authors for the thorough rebuttal. I have gone through the other reviewers' comments, and authors' rebuttal. My comments are addressed, and therefore I am raising my score to borderline accept.
I think the work is important, but exploring more complex degradations (such as rain/haze/snow) in more depth in the TUD setting would have been more interesting.
---
Rebuttal Comment 2.1:
Title: Thanks for the reviewer's reply!
Comment: Dear Reviewer wJZg,
Thank you for your positive feedback and for raising the score. We will include additional results about time-varying unknown degradations as you suggested, and provide a more thorough discussion in the revision. Besides, we will continue to explore the weather degradations under the TUD setting in our future works. Thank you again for the constructive suggestions and the approval of this work. | Summary: This paper presents a video restoration method capable of addressing time-varying unknown degradations (TUD) with a single model. The proposed method employs two modules, i.e., the prompt-guided alignment (PGA) module and the prompt-conditioned enhancement (PCE) module in the propagation to leverage the temporal information for restoration. Experiment results on various types of degradations demonstrate the effectiveness of the proposed method.
Strengths: 1. This paper considers a more practical and valuable problem named TUD in video restoration and presents a feasible solution to handle TUD with a single model.
2. The proposed modules take advantage of prompt learning to handle TUD and explicitly consider the degradations during propagation, which is interesting and innovative.
Weaknesses: 1. While the paper first studies time-varying unknown degradations, these degradations are synthesized through a degradation model, which may not accurately reflect real-world degradation distributions.
2. The intervals t of degradation variations in the test sets are all multiples of 6, which is the interval used during training. It is uncertain whether the proposed method could generalize well to other intervals, such as 9.
3. Previous works [1,2] could adaptively select keyframes based on video changes. In contrast, the PCE module selects keyframes at a fixed interval T. Why not adopt the adaptive methods?
4. Some related works [3,4] are not included even though they have significantly guided this area. The authors should discuss them too.
[1] Yule Li, Jianping Shi, Dahua Lin: Low-Latency Video Semantic Segmentation. CVPR 2018: 5997-6005.
[2] Yu-Syuan Xu, Tsu-Jui Fu, Hsuan-Kung Yang, Chun-Yi Lee: Dynamic Video Segmentation Network. CVPR 2018: 6556-6565.
[3] Jiaqi Ma, Tianheng Cheng, Guoli Wang, Qian Zhang, Xinggang Wang, Lefei Zhang: ProRes: Exploring Degradation-aware Visual Prompt for Universal Image Restoration. CoRR abs/2306.13653 (2023).
[4] Yuang Ai, Huaibo Huang, Xiaoqiang Zhou, Jiexiang Wang, Ran He: Multimodal Prompt Perceiver: Empower Adaptiveness, Generalizability and Fidelity for All-in-One Image Restoration. CoRR abs/2312.02918 (2023)
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the weaknesses section.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: 1. The authors should provide the discussion of limitations of this paper.
2. Related works should be addressed in a more sufficient way.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Evaluations on the realistic video dataset.**
We further evaluate the effectiveness of our pipeline and network on the realistic video dataset VideoLQ [1]. The results show that the models trained with our pipeline generalize well to realistic degradations. As shown in Table 1, our network and RVRT are not re-trained on realistic video datasets such as RealVSR [2], yet still show strong performance on VideoLQ.
Table 1. Quantitative results on the realistic video dataset VideoLQ.
| Method | RVRT | AverNet |
| ---------- | ------ | ---------- |
| NIQE ↓ | 4.6602 | **4.6234** |
| PI ↓ | 3.6491 | **3.6464** |
| CNNIQA ↑ | 0.5470 | **0.5487** |
| HyperIQA ↑ | 0.4547 | **0.4560** |
| CLIPIQA ↑ | 0.3800 | **0.3899** |
[1] Investigating Tradeoffs in Real-World Video Super-Resolution.
[2] Real-world video super-resolution: A benchmark dataset and a decomposition based learning scheme.
**Q2:Other degradation intervals.**
As suggested, we conduct experiments on different intervals to show the generalization ability of AverNet. From Table 2, one can observe that AverNet shows better performance on the two additional intervals.
Table 2. Quantitative results on different intervals t=9 and t=15.
| Method | t=9 | | t=15 | |
| ------- | --------- | ---------- | --------- | ---------- |
| Metric | PSNR | SSIM | PSNR | SSIM |
| RVRT | 33.92 | 0.9320 | 34.07 | 0.9347 |
| AverNet | **34.01** | **0.9356** | **34.17** | **0.9373** |
**Q3:Adaptive keyframe selection.**
Following [1,2], we adopt an adaptive strategy that selects the frames with the largest changes as keyframes. The results, presented in Table 3, show that the adaptive strategy brings only a slight improvement in SSIM, while its computational burden is not negligible in practice.
Table 3. Quantitative comparisons between fixed and adaptive keyframe strategy.
| Keyframe Strategy | Fixed | Adaptive |
| ----------------- | ------ | -------- |
| PSNR | 34.09 | 34.09 |
| SSIM | 0.9339 | 0.9341 |
[1] Low-Latency Video Semantic Segmentation.
[2] Dynamic Video Segmentation Network.
**Q4:More related works [1,2] should be discussed.**
As suggested, we discuss the all-in-one image restoration methods [1,2] below and will include them in the related works. ProRes [1] introduces additional visual prompts to incorporate task-specific information and utilizes these prompts to guide the network for all-in-one restoration. MPerceiver [2] proposes a multimodal prompt learning approach that exploits the generative priors of Stable Diffusion to achieve high-fidelity all-in-one image restoration.
[1] ProRes: Exploring Degradation-aware Visual Prompt for Universal Image Restoration.
[2] Multimodal Prompt Perceiver: Empower Adaptiveness, Generalizability and Fidelity for All-in-One Image Restoration.
**Q5:Discussion on limitations.**
The training data for AverNet is based on our video synthesis approach, which generates videos with time-varying unknown degradations close to real-world scenarios. However, corruptions in real-world videos are complex and difficult to simulate. Therefore, in real-world applications, AverNet may need further validation and improvement.
---
Rebuttal 2:
Title: Reminder for review
Comment: Dear Reviewer hhCd, I have noticed that you have not yet responded to the authors' rebuttal. I kindly urge you to engage in a discussion with the authors at your earliest convenience to help advance the review process. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Variational Flow Matching for Graph Generation | Accept (poster) | Summary: This paper proposed a variant of flow matching for graph generation from a variational perspective, called CatFlow. The main idea is not to estimate the marginal velocity, but instead to estimate the posterior first and then evaluate the marginal velocity as the expectation of the conditional velocity w.r.t. the posterior. In the case of linear conditional velocity and Gaussian variational approximation, the equivalence to vanilla flow matching has been established and connection to score-based models have been discussed. A simplified mean-field variational approach has also been proposed based on the observation that marginal posterior matching (for individual dimensioins) is enough for posterior mean estimation for linear conditional velocity. Experiments on several graph/molecule generation tasks demonstrate the effectiveness of the CatFlow.
Strengths: The paper is written clearly. The variational formulation and the connection to standard flow matching is new. The authors have also shown that a weighted average of the training objective of VFM provides a bound on the log-likelihood of the model.
Weaknesses: The main weakness of the paper is that the core idea of learning the posterior first (i.e., the variational formulation) and evaluating the marginal velocity using the expected conditional velocity w.r.t. the posterior has already been proposed in Dirichlet Flow [1]. Although the authors claimed that Dirichlet Flow uses the specific Dirichlet distribution as the conditional probability, this is mainly a design choice which does not affect the generality of Dirichlet Flow for modeling discrete objects such as graphs and molecules. Given this, the lack of an appropriate comparison to Dirichlet Flow in the experiments represents a significant drawback of the current paper.
The paper also lacks a brief introduction to the baseline methods, which would help contextualize the results and better highlight the contributions of the proposed approach.
[1] Hannes Stark, Bowen Jing, Chenyu Wang, Gabriele Corso, Bonnie Berger, Regina Barzilay, and Tommi Jaakkola. Dirichlet flow matching with applications to dna sequence design. arXiv preprint arXiv:2402.05841, 2024.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Although it is reasonable to match the marginal posterior using a mean-field approximation, it is still not clear whether this simple mean-field approximation is good enough to capture the posterior mean. Can the authors provide some intuition about when the mean-field approximation is good enough?
2. In flow matching, an error bound for the generated distribution in terms of the flow matching error is provided. Can a similar result be provided here in terms of the variational approximation error?
3. How would VFM perform when the conditional velocity is not linear?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The limitations has been adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer vH9B,
We would like to thank you for the useful questions and positive words about our work. We will first answer each question and then propose concrete changes to the paper to address them in the final version.
> The main weakness of the paper is that the core idea of learning the posterior first (i.e., the variational formulation) and evaluating the marginal velocity using the expected conditional velocity w.r.t. the posterior has already been proposed in Dirichlet Flow [1]. Although the author claimed that Dirichlet Flow uses the specific dirichlet distribution as the conditonal probability, this is mainly a design choice which does not affect the generality of Dirichlet Flow for modeling discrete objectives such as graphs and molecules. Given this, the lack of an appropriate comparison to Dirichlet Flow in the experiments represents a significant drawback of the current paper.
We agree that this work is directly related to the recently proposed work on Dirichlet FM and that an experimental comparison is warranted in this context. We have included such a comparison in our main response.
To briefly comment further on the relationship to Dirichlet FM: the cross-entropy loss and the velocity field in equations (9) and (10) of the Dirichlet FM paper are analogous to those in our paper, which (incidentally) are *also* described in Equations (9) and (10) in our manuscript. From a methodological perspective, the main distinction is that the Dirichlet FM paper is framed in terms of a probability path on the simplex, whereas we arrive at CatFlow by way of a more general variational perspective that admits both categorical and continuous models as special cases. Moreover, CatFlow is not defined through a forward process on the simplex and simply operates in $\mathbb{R}^n$.
**Concrete Steps:** As shown in the main response, we have now performed additional experiments to compare against Dirichlet FM. As we wrote, we did make an earlier attempt to perform a comparison at the time of submission, but we felt that the (poor) performance we observed in our initial results was not sufficiently representative to merit inclusion. We believe that we have addressed this with these new results and hope this also addresses the main concern raised by the reviewer.
> The paper also lacks a brief introduction to the baseline methods, which would help contextualize the results and better highlight the contributions of the proposed approach.
That is a fair point!
**Concrete steps:** We will mention the most important baselines in the main text, and add a small description of the other ones in the appendix.
> Although it is reasonable to match the marginal posterior using mean-field approximation, it is still not clear if this simple mean-field approximation is good enough to capture the posterior mean. Can the author provide some intuition when the mean-field approximation is good enough?
We are not sure why this might be a concern, perhaps the reviewer could elaborate?
Mean-field variational inference is known to under-approximate the posterior variance when minimizing an exclusive/reverse KL divergence. This is a consequence of the fact that the variational family cannot capture correlations in the posterior. However, the posterior mean estimates are generally known to be accurate. As such, we have *no* indication that these distributions would *not* be able to provide a good estimate of the mean. This also appears to be borne out by the empirical results.
**Concrete Steps:** We will amend the paper to provide some discussion on this point.
> In flow matching, an error bound for the generated distribution in term of flow matching error is provided. Can a similar result be provided here in terms of the variational approximation error?
**Answer:** We are not aware of error bounds in the concurrent works on Flow Matching by Lipman et al., Albergo at al., and Liu et al. Is the reviewer referring to the error bounds developed by Benton and colleagues [1]? It might indeed be possible to derive an analogous result starting from an assumed bound on the approximation error in the posterior mean, rather than an approximation error in the velocity field. However, this is not something that we have considered in the context of this submission.
[1] Joe Benton, George Deligiannidis, Arnaud Doucet. Error Bounds for Flow Matching Methods. TMLR 2024.
> How would VFM perform when the conditional velocity is not linear?
This is a very interesting question.
Just to make sure we are on the same page: The linearity assumption we care about is *the linearity of the conditional field in the endpoint*, i.e. the fact that $u_t(x \mid x_1)$ is linear in $x_1$. This is subtly different from the fact that in flow matching, $u_t(x \mid x_1)$ itself is a linear interpolation, which is a design choice in FM typically referred to as the ‘optimal transport’ formulation.
The linearity in $x_1$ is, however, a standard assumption often made in e.g. diffusion models and flow matching models. We show that under this common linearity assumption, optimising VFM using a 'mean-field approximation' can actually find the exact solution. In this setting the mean-field approximation is not an approximation at all!
The question of what happens if this linearity assumption does not hold is one which, as far as we know, cannot be tackled by the standard diffusion/flow matching techniques out there. However, using VFM one could simply resort to learning with the loss $\mathcal{L}(\theta) = -\mathbb{E}[\log q_t^\theta (x_1 \mid x)]$ through stochastic gradients for some distribution $q_t^\theta$ that is not fully factorized. This is actually a promising direction for future research and might be a good example of where a variational point of view can offer new perspectives relative to the standard point of view.
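To make the fully factorized case concrete, here is a minimal numpy sketch (our illustrative code for this response, not the implementation in the paper) of the mean-field categorical VFM loss: with a factorized $q_t^\theta$, the objective $-\mathbb{E}[\log q_t^\theta(x_1 \mid x)]$ reduces to a per-dimension cross-entropy against the endpoint $x_1$. The function name and shapes are illustrative.

```python
import numpy as np

def mean_field_vfm_loss(logits, x1_idx):
    """Monte Carlo estimate of L(theta) = -E[log q_t^theta(x1 | x)]
    under a fully factorized (mean-field) categorical variational family.

    logits : (B, D, K) network outputs, one categorical per dimension d
    x1_idx : (B, D) integer class labels of the data endpoint x1
    """
    # numerically stable log-softmax over the K classes of each dimension
    logits = logits - logits.max(axis=-1, keepdims=True)
    log_q = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    B, D = x1_idx.shape
    # log q factorizes over dimensions: sum_d log q_d(x1^d | x)
    ll = log_q[np.arange(B)[:, None], np.arange(D)[None, :], x1_idx]
    return -ll.sum(axis=-1).mean()

rng = np.random.default_rng(0)
loss = mean_field_vfm_loss(rng.normal(size=(4, 3, 5)),
                           rng.integers(0, 5, size=(4, 3)))
```

With uniform logits the loss is exactly $D \log K$, as expected for $D$ independent uniform categoricals over $K$ classes.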
Thank you so much for your time and effort reviewing our work.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer vH9B, we wondered if the added results and comparison to Dirichlet FM addressed your main concerns regarding our work. Thanks a lot for your review once more. | Summary: This work presents the variational flow matching (VFM) framework and introduces CatFlow, an application to the generation of categorical data such as graphs. The paper reformulates flow matching from a variational perspective and, with a linearity assumption on the conditional vector field, achieves a tractable objective. The authors derive the connection with score-based models and show that the framework provides a variational bound on the model likelihood.
Strengths: - To the best of my knowledge, the variational framework for flow matching is novel and the provided theoretical works seem solid.
- The connection with score-based models provides intuition on the method.
- Application to discrete data seems reasonable and the benefits over standard flow matching are well explained.
Weaknesses: - The reason CatFlow outperforms previous diffusion models is unclear. Is it because VFM provides variational bound on the model likelihood? Ablation studies on why CatFlow outperforms other diffusion models would strengthen the work.
- CatFlow should be validated on larger datasets used in recent works [1, 2], for example Planar and SBM, since the community-small and ego-small datasets consist of very small graphs, which are not suitable for evaluating generative models.
- In particular, a validity metric should be used to evaluate the method, instead of relying only on MMD. Evaluating only with MMD is not appropriate, as MMD may fail to catch important graph characteristics; for example, small MMD does not guarantee that the generated graphs are actually community graphs. Evaluating on the Planar or SBM dataset and comparing the V.U.N. (valid, unique, and novel) results is necessary to strongly argue the advantage of the proposed method.
- Comparison with discrete diffusion model like DiGress should be done on more datasets other than QM9. This also leads to experiments on Planar or SBM datasets or in molecular generation tasks, like GuacaMol dataset.
Minor correction: citation in line 148 should be fixed
Technical Quality: 3
Clarity: 3
Questions for Authors: I would appreciate it if the authors could address the weakness above.
Also, I recommend more explanation of Figure 1, as it would help readers get a better understanding of CatFlow.
In summary, I believe the paper provides theoretical contributions, but the experimental validation has room for improvement.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have addressed the limitations at the end of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer WAZn,
We would like to thank you very much for your positive comments and thoughtful questions about our work. We will first answer each question and then propose concrete changes to the paper to address them in the final version. Since some questions are related, we grouped them together in our answer.
> The reason CatFlow outperforms previous diffusion models is unclear. Is it because VFM provides variational bound on the model likelihood? Ablation studies on why CatFlow outperforms other diffusion models would strengthen the work.
>
Thanks a lot for this question. Our hypothesis is that CatFlow outperforms diffusion-based baselines for similar reasons that continuous FM has proven a strong competitor to diffusion in e.g. image generation, which is to say that a linear interpolation in the probability path appears to work well in practice. It is of course difficult to demonstrate conclusively why any method works better than any other method. Indeed, to our knowledge no such demonstration exists for continuous FM either.
**Concrete steps:** We will discuss the intuition for why CatFlow might outperform diffusion-based baselines more clearly in the text.
> CatFlow should be validated on larger datasets used in recent works [1, 2], for example Planar and SBM, since the community-small and ego-small datasets consist of very small graphs, which are not suitable for evaluating generative models.
In particular, a validity metric should be used to evaluate the method, instead of relying only on MMD. Evaluating only with MMD is not appropriate, as MMD may fail to catch important graph characteristics; for example, small MMD does not guarantee that the generated graphs are actually community graphs. Evaluating on the Planar or SBM dataset and comparing the V.U.N. (valid, unique, and novel) results is necessary to strongly argue the advantage of the proposed method.
Comparison with discrete diffusion model like DiGress should be done on more datasets other than QM9. This also leads to experiments on Planar or SBM datasets or in molecular generation tasks, like GuacaMol dataset.
>
Thank you for the suggestion – we fully agree! We have now run CatFlow on Planar and SBM and have obtained SOTA performance on these datasets as well, being on par with or slightly better than DiGress on all metrics, including V.U.N. (all reported in the added PDF). We will report these values in the final version of the paper. Moreover, we would like to highlight that the submitted manuscript already contains large-graph experiments (see Appendix B), on which we also obtained strong performance.
Sadly, we have not been able to run the GuacaMol dataset in time for the response deadline, but we will aim to include that in the final version of the paper.
**Concrete steps:** Add results on Planar and SBM in the final version of the paper, including extra metrics. Where appropriate, we will rerun the existing experiments with more metrics to provide a better comparison to existing methods.
We will also make sure that the mentioned citation is corrected and that the text in the figure is improved.
Last, thank you so much for your time reviewing our work. | Summary: This paper introduces a new variational inference framework of flow matching with a focus on applying the framework on discrete data generation. Instead of using the squared norm in standard flow matching, the paper proposes a variational distribution to the conditional path, which is used in the vector field. The paper shows that when the variational distribution is identical to the conditional path, the approximate vector field equals to the target one. The paper then uses mean-field factorisation of the variational distribution and applies linear conditional vector field. Based on the proposed framework, the paper mainly applies it for discrete data generation including the tasks of abstract graph generation and molecular generation. Experiments show that the proposed method outperforms other diffusion-based or flow-matching-based methods.
Strengths: 1. The technical contribution and significance of the paper are good. The paper provides a novel angle of formulating flow matching as a variational inference problem, which is theoretically sound and conceptually intuitive.
2. The paper derives a general framework first and then implements it for the task of discrete data generation. The paper also provides a comprehensive theoretical analysis of the connections to related methods.
3. The experimental results of the proposed method are convincing.
4. The paper is well-written in general.
Weaknesses: In general, the quality of the paper is good. But I have a few questions on clarity. Please see below.
Technical Quality: 4
Clarity: 4
Questions for Authors: I have a few questions below, but not all of them are weaknesses.
1. In standard flow matching, one needs to minimise the squared norm between the parameterised vector field and the true vector field. That's why one needs to compute the (conditional) vector field. But in variational flow matching, it seems that the final objective is the maximisation of the data likelihood in terms of $\theta$. I wonder how the vector fields come into play in this case. Are they used for computing the variational distribution somehow?
2. Related to the previous question, it would be good to have more details of the parameterisation of $q$ or the implementation of $\theta$.
3. Mean-field approximation of the variational distribution and linear formulation of the conditional vector field are used, more for efficiency consideration. Can the authors provide more discussions on when these approximation and assumption will be the bottleneck of performance?
4. Although the code is provided, it might be good to provide an algorithm of pseudo code in the paper.
5. It would be good to have more explanations of Figure 1.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The authors adequately addressed the limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer 5EeZ,
First of all, we would like to thank you very much for your positive words and useful points about clarity. We will first answer each question and then propose concrete changes to the paper to address them in the final version.
> In standard flow matching, one needs to minimise the squared norm between the parameterised vector field and the true vector field. That's why one needs to compute the (conditional) vector field. But in variational flow matching, it seems that the final objective is the maximisation of the data likelihood in terms of $\theta$. I wonder how the vector fields come into play in this case. Are they used for computing the variational distribution somehow?
If we understand the question correctly, the reviewer is wondering how the role of the conditional vector field in VFM differs from its role in standard FM, particularly during training.
During *generation*, we explicitly use $u_t(x | x_1)$ to compute the approximate velocity field $v_t^{\theta}(x) = \mathbb{E}_{q_t}[u_t(x | x_1)]$.
However, during *training*, we indeed do *not* compute it explicitly to evaluate the objective.
With that said, the conditional velocity field $u_t(x | x_1)$ generates a probability path $p_t(x | x_1)$, which in turn defines a posterior probability path $p_t(x_1 | x)$, which we approximate with a variational distribution $q_\theta(x_1 | x)$. In other words, the learned variational approximation implicitly depends on the choice of conditional velocity field $u_t(x | x_1)$, even if this field does not show up explicitly in the objective. Please let us know if this answers your question; we would be happy to clarify further.
**Concrete steps:** We will dedicate a paragraph in the paper to explaining this better. We will also add one algorithm block for training and one for sampling/generation that clearly summarises the steps.
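To illustrate the generation step described above, here is a minimal numpy sketch (our illustrative code for this response, not the implementation in the paper), assuming the standard linear ("optimal transport") conditional field $u_t(x \mid x_1) = (x_1 - x)/(1 - t)$; `predict_mean` stands in for the trained network that estimates $\mathbb{E}_{q_t}[x_1 \mid x]$.

```python
import numpy as np

def vfm_euler_step(x, t, dt, predict_mean):
    """One Euler step of dx/dt = v_t(x), where
    v_t(x) = E_{q_t}[u_t(x | x1)] = (E[x1 | x] - x) / (1 - t)
    under the linear conditional field u_t(x | x1) = (x1 - x) / (1 - t).
    Linearity in x1 lets the expectation pass inside the field."""
    mu = predict_mean(x, t)          # network's estimate of E[x1 | x_t]
    v = (mu - x) / (1.0 - t)         # marginal velocity field
    return x + dt * v

# Sanity check: with an oracle predictor that always returns the target x1,
# integrating from t=0 to t=1 transports x0 onto x1.
x1 = np.array([1.0, -2.0])
x = np.zeros(2)
for k in range(100):
    x = vfm_euler_step(x, k / 100, 1 / 100, lambda x, t: x1)
```

The final step at $t = 0.99$ has $dt/(1-t) = 1$, so the oracle trajectory lands on $x_1$ (up to floating-point error) without dividing by zero.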
> Related to the previous question, it would be good to have more details of the parameterisation of $q$ or the implementation of $\theta$.
>
Yes, we fully agree! Since the goal in VFM is to predict $\mathbb{E}[x_1 \mid x_t]$, we train a network that, given input $x_t$ at time $t$, outputs the parameters needed to compute this expectation, e.g. the mean of a Gaussian distribution or the probability vector of a categorical distribution.
**Concrete steps:** We will add a detailed section in the appendix describing the parameterisation of $q$ and $\theta$. Specifically, we will do this for 1) the general case, 2) the Gaussian case, and 3) the Categorical case in the final version.
> Mean-field approximation of the variational distribution and linear formulation of the conditional vector field are used, more for efficiency consideration. Can the authors provide more discussions on when these approximation and assumption will be the bottleneck of performance?
>
As we understand it, the reviewer has two related questions here, pertaining to how model expressivity/performance is affected by (1) the mean-field parameterization and (2) the assumption of linearity of the conditional vector field in $x_1$.
*Implications of (1)*: Theorem 1 states that if (2) holds, the mean-field parameterization is not an "approximation" so much as a "simplification".
The approximate flow field $v_\theta(x)$ matches the flow field $u(x)$ *exactly* whenever for each component of the posterior mean $\mu_1^d$ under $p_t$ we have that $\mu_1^d = \mathbb{E}_{q_t}[x_1^d | x]$.
This is to say that the *only* requirement in VFM under assumption (2) is that each component $d$ of the mean of the variational distribution must match the mean of the posterior probability path. A mean-field parameterization therefore does not compromise expressivity/performance at all in this setting.
*Implications of (2)*: The assumption of linearity in $x_1$ for the conditional flow field holds for most existing flow matching methods, and indeed an analogous assumption holds for many diffusion-based models as well. We would therefore not consider this a particularly strong restriction in practice, and certainly not something that might become a bottleneck for model performance relative to existing flow matching and diffusion-based methods.
**Concrete steps:** We will dedicate an extra paragraph in the paper to elaborating on these points.
> Although the code is provided, it might be good to provide an algorithm of pseudo code in the paper.
>
We fully agree. This would also help highlight the fact that, from an implementation point of view, VFM is not more complicated than FM.
**Concrete steps:** We will add 1) a general code block, 2) a code block for the Gaussian case, and 3) a code block for the categorical case. We will also add a Jupyter notebook with these cases as examples, and we are currently working on a small PyTorch library supplying all code, which is to be released soon.
> It would be good to have more explanations of Figure 1.
>
Yes, we agree. Moreover, we think that in general some extra figures should be added, with the aim of making the difference between VFM and standard FM clearer.
**Concrete steps:** We will add this description in. Moreover, we will add the extra figures mentioned above.
Again, thank you so much for the time you spent reviewing our work.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I am happy to keep my original rating of the paper. | null | null | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for their thoughtful reading of the manuscript and their detailed comments. We are happy to hear that reviewers overall appear in agreement that this is a clearly written paper that provides a novel variational perspective on flow matching, develops useful connections with related approaches, and demonstrates good empirical results on graph generation tasks.
We respond to individual points and questions by each reviewer below. We would like to highlight two points raised by reviewers about additional experimental evaluations. We include results for these evaluations in the attached PDF, and will incorporate these results in the manuscript:
- Reviewer `WAZn` made the helpful suggestion to evaluate on the **Planar** and **SBM** datasets, which we have done. Results show that CatFlow attains SOTA performance on these datasets. We would also like to call attention to additional results on larger graphs in **Appendix B** (which were already in the original submission).
- Reviewer `vH9B` comments that a comparison with **Dirichlet FM** would be helpful. We completely agree. As we wrote in our manuscript, we were unfortunately not able to get this method working sufficiently well in our initial (limited) experiments. We have since invested additional time and effort and are now able to report a comparison that we believe is representative. The results show that even though Dirichlet FM outperforms standard FM in graph generation tasks, CatFlow obtains better performance in the tasks considered in our work. Though not visible in this table, we note that CatFlow also was faster to train in our experiments.
- We added an additional comparison with a baseline trained with the normal **Flow Matching** objective. To adapt this model to categorical data, we simply select the nearest one-hot vector on the simplex at the final step of generation.
Additional changes that reviewers can expect mainly pertain to points of clarification. In each of our responses, we use the text “**Concrete Steps”** to summarise what changes we intend to make in response to reviewer comments.
Pdf: /pdf/97c28e698f7969c4885b8f2734ce883899b944f7.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
State Space Models on Temporal Graphs: A First-Principles Study | Accept (poster) | Summary: The paper proposes an approach for processing discrete-time temporal graphs via an extension of state-space models (SSMs). In this direction, the main contribution of the paper is a generalization of the HIPPO framework for graph structured data (named GHIPPO), which defines how the state of different nodes should be updated through time to compress the evolution of nodes features (a solution capable of dealing with multiple changes that might happen between two snapshots is additionally introduced in the paper). In experimental evaluation, an SSM-based solution implemented with the proposed GHIPPO framework (named GraphSSM) outperforms a variety of approaches on both the moderately sized DBLP-3, Brain, Reddit and DBLP-10 datasets, and the large-scale arXiv and Tmall datasets.
Strengths: While I’m personally not an expert in SSM models, I generally found the paper to be an interesting read (to the best of my knowledge there is not much work on applications of SSMs to graph-structured data, and as such the work highlights a research direction possibly worthy of further exploration). The presentation of the paper is generally understandable (albeit not always clear in some spots, see weaknesses), and allows a reader unfamiliar with the SSM literature to grasp the overall idea of the paper. The experimental evaluation of the paper appears well done (with the proposed approach outperforming prior art on a variety of different datasets), although there might be room for improvement.
Weaknesses: Generally I do not have many weaknesses to highlight in the paper. From a technical perspective, I would want to highlight that:
1) the proposed approach might face some issues with heterophilic graphs (as the smoothness in the node state imposed in equation (3) is not necessarily a good regularizer in that setting - see point 5 below)
2) the proposed methodology is limited to discrete-time temporal graphs, and further work would need to be done to extend the model to a continuous-time setting (this is also highlighted by the authors themselves in the paper).
From a different perspective, a weakness of the paper is probably its exposition, which in some parts is a little confusing and could be improved. In this direction:
1) lines 67, I believe there is a typo and the authors should replace “features the dynamics” with “the features dynamics”
2) line 74 to 77 are quite unclear and should probably be refactored
3) line 105, “recurrence model” should probably be replaced with “recurrent model”
4) Table 1, while it is true that classic transformers have a quadratic space and time complexity, there are also efficient transformers (e.g. Performers) that show linear space and time complexity. I believe these are not mentioned in the paper and probably they should be, in order to give the reader a complete picture of where we stand with transformer architectures.
5) I believe the smoothness in node state imposed in equation 3 with the dirichlet energy regularizer makes sense only on homophilic graphs, and might not be a meaningful prior to use when dealing with heterophily. This is not pointed out in the paper and probably it should be highlighted as a possible limitation.
6) Lines 180-181, the authors mention that what minimizes equation 3 is a piecewise vector. I believe what they mean is that the solution is piecewise over time, I think this should probably be clarified in the paper as it might be a source of confusion
7) line 184-185, how the nodes' memories are parameterized as polynomials is very unclear here. How are those polynomials defined? What is the input to such polynomials? It might be that a reader familiar with the HIPPO framework would be aware of such details, but the lack of an explanation in the text makes it hard to draw a complete picture here.
8) lines 335-336, GraphSSM with S5 doesn’t always achieve poor performance (it is indeed the best performing model on DBLP-3 in table 4). This should be clarified in the text.
9) line 339, I believe there is a typo and the “and” at the beginning of the sentence should be fixed.
Technical Quality: 2
Clarity: 2
Questions for Authors: One thing that I think would be helpful to improve the strength of the paper is to run some experiments where the proposed SSM module is swapped with a RNN or transformer architecture. As we can see from the ablation study of table 5, it appears that the mixing strategy one might decide to use is relevant to outperform prior art. As a result of this, I wonder what would be the performance that a comparable solution would have with the very same architecture used by the authors but without using any SSM approach. This will clearly provide an indication on whether using SSMs is beneficial for achieving the desired performance (which is the main claim of the paper) or whether the results we see in Table 2 and 3 are only due to the mixing strategy.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your valuable and insightful comments, and we provide our responses below.
**W1: heterophilic graphs issues**
Thank you for your question. We agree with you that equation (3) reflects a prior belief that the node representations generated by GHiPPO should be homophilic at the representation level [3] at any time point. While empirically effective, this poses a significant challenge when the underlying temporal graph has more complex characteristics, and we will discuss it in revisions of our paper. Indeed, we think that defining a proper homophily metric when the underlying graph is treated as a process is itself a valuable future research question.
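To make this prior concrete, here is a small sketch (our illustrative code for this response, not from the paper) of the Dirichlet energy appearing in equation (3), in both its edge-wise and trace forms $\sum_{(i,j) \in E} \|z_i - z_j\|^2 = \mathrm{tr}(Z^\top L Z)$; low energy means neighboring nodes have similar states, which is precisely the homophily assumption at issue.

```python
import numpy as np

def dirichlet_energy(Z, edges):
    """Sum of squared differences of node states over undirected edges:
    low energy <=> neighbors have similar states (homophily)."""
    return sum(np.sum((Z[i] - Z[j]) ** 2) for i, j in edges)

def graph_laplacian(n, edges):
    """Unnormalized graph Laplacian L = D - A of an undirected graph."""
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    return np.diag(A.sum(axis=1)) - A

rng = np.random.default_rng(0)
edges = [(0, 1), (1, 2), (2, 3)]
Z = rng.normal(size=(4, 2))
# the edge-wise form agrees with the trace form tr(Z^T L Z)
e_edge = dirichlet_energy(Z, edges)
e_trace = np.trace(Z.T @ graph_laplacian(4, edges) @ Z)
```

On a heterophilic graph, neighboring nodes legitimately have dissimilar states, so driving this quantity down is no longer a meaningful regularizer — which is exactly the limitation raised above.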
**W2: Typos or misunderstandings in Line 67, Lines 74-77, Line 105, Lines 180-181, and Line 339**
We thank the reviewer for pointing out these careless errors. In the revision, we will proofread thoroughly to enhance readability, correcting any typos and improving the writing.
**W3: Table 1 may not be entirely accurate**
Thank you for bringing this to our attention. We agree that Table 1 may not be entirely accurate in all cases. In this context, "Transformers" refers to traditional "softmax Transformers." We will include a note to clarify this and prevent any potential misunderstandings. We will also provide a comprehensive discussion on this matter, as outlined below:
Indeed, there are close relationships between transformers with finite kernel feature maps (which are essentially what efficient transformers rely on), RNNs, and SSMs [1, 2]. All three allow a recurrent parameterization and constant-memory, linear-compute inference. The primary difference is the update rule during recurrence: linear transformers essentially use a unitary state matrix that is not learnable, standard RNNs use a learnable but not well-controlled state matrix, while SSMs use an improved and learnable state matrix obtained through more careful initialization.
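A minimal sketch of this recurrent view (our illustrative code for this response, not from the paper or the cited works), contrasting the fixed state transition of linear attention with the learnable state matrix of a discretized SSM:

```python
import numpy as np

def linear_attention_step(S, k, v, q):
    """Linear-transformer recurrence: the state transition is the identity
    (a fixed, 'unitary' update); S only accumulates outer products k v^T."""
    S = S + np.outer(k, v)
    return S, q @ S

def ssm_step(h, A_bar, B_bar, C, u):
    """Discretized SSM recurrence h <- A_bar h + B_bar u, y = C h.
    The state matrix A_bar is learnable and carefully initialized
    (e.g. via HiPPO) -- the key difference from the update above."""
    h = A_bar @ h + B_bar * u
    return h, C @ h

# one step of each toy recurrence
S, y_attn = linear_attention_step(np.zeros((4, 4)), np.ones(4),
                                  np.arange(4.0), np.ones(4))
h, y_ssm = ssm_step(np.zeros(2), 0.5 * np.eye(2), np.ones(2), np.ones(2), 1.0)
```

Both run in constant memory and linear time in sequence length; only the state update rule differs.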
**W4: On semantics of node memory**
We will add a more accessible and detailed introduction to the background on the HiPPO abstraction in revisions of our paper. Specifically, by an $N$-dimensional node memory at time $t$ we mean the coefficients of some order-$N$ polynomial (under an orthogonal polynomial basis) that optimally approximates the node features (viewed as a function over time) up until time $t$. The rationale of this representation is that a finite-dimensional function space spanned by polynomials is characterized by the coefficients of its elements, thereby allowing us to **embed functions into finite-dimensional vectors**. To answer your question directly: the input to the approximating polynomial is the time $t$. The coefficients $u(t)$ of the polynomial are determined by an optimal approximation criterion and constitute the node memory at time $t$.
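As a toy numerical illustration of this abstraction (our illustrative code for this response, not the GHiPPO implementation, which updates the coefficients online via an ODE): a scalar feature trajectory observed up to time $t$ is compressed into an $N$-dimensional memory vector by least-squares fitting a Legendre expansion over the history.

```python
import numpy as np
from numpy.polynomial import legendre

def node_memory(ts, feats, N):
    """Compress a feature trajectory observed up to time t = ts[-1]
    into N coefficients of a Legendre expansion (least-squares fit).
    The coefficient vector is the 'node memory': it approximately
    reconstructs the whole history f(s) for s <= t."""
    s = 2.0 * np.asarray(ts) / ts[-1] - 1.0   # map [0, t] to [-1, 1]
    return legendre.legfit(s, feats, deg=N - 1)

ts = np.linspace(0.01, 1.0, 200)
feats = np.sin(4 * ts)                 # a node's scalar feature over time
mem = node_memory(ts, feats, N=8)      # 8-dimensional memory vector
# the memory reconstructs the history with high accuracy
recon = legendre.legval(2 * ts / ts[-1] - 1, mem)
```

A smooth history of 200 observations is thus embedded into 8 numbers with negligible reconstruction error, which is the memory-compression idea behind the abstraction.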
**W5: Performance of GraphSSM-S5**
Thank you for your suggestion. We will address it and provide clarification in the revisions.
**W6: Ablation study on swapping SSMs with RNN and Transformer**
Thank you for your insightful suggestion. We replaced the SSM architecture with an RNN and a Transformer and present the results below. As observed, SSMs prove to be a superior choice compared to RNNs and Transformers. While Transformers excel in NLP tasks, they are not well-suited for discrete graph sequences.
| Model | DBLP-3 Micro-F1 | DBLP-3 Macro-F1 | Brain Micro-F1 | Brain Macro-F1 | Reddit Micro-F1 | Reddit Macro-F1 | DBLP-10 Micro-F1 | DBLP-10 Macro-F1 |
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|GraphSSM-RNN|84.24±1.3|83.10±1.4|92.28±1.5|92.43±1.0|43.11±1.7|43.15±1.7|73.46±1.5|72.93±1.5|
|GraphSSM-Transformer|85.02±1.1|84.98±0.8|93.47±1.5|93.11±1.1|43.48±0.7|43.11±0.5|75.65±0.8|74.32±0.6|
|GraphSSM-S4|85.26±0.9|85.00±1.3|93.52±1.0|93.54±0.9|**49.21±0.5**|**49.05±0.7**|**76.80±0.3**|**76.00±0.4**|
|GraphSSM-S5|**86.29±1.0**|**85.78±0.9**|93.00±0.4|93.01±0.4|44.75±0.4|44.79±0.4|75.19±0.6|73.95±0.4|
|GraphSSM-S6|86.10±0.5|85.70±0.6|**93.80±0.3**|**94.47±0.6**|43.11±0.9|42.85±1.1|74.09±0.3|73.16±0.2|
**Q1: Link prediction results**
Thank you for your suggestion. The results for the link prediction task are presented below. We follow the experimental settings of ROLAND, where the model leverages information accumulated up to time $t$ to predict edges in snapshot $t+1$.
| Model | DBLP-3 AUC | DBLP-3 AP | Brain AUC | Brain AP | Reddit AUC | Reddit AP | DBLP-10 AUC | DBLP-10 AP |
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|STAR|94.21±0.43|91.80±0.51|56.10±0.48|55.24±0.71|98.21±0.59|98.32±0.32|86.81±0.23|85.3±0.31|
|tNodeEmbed|93.19±0.37|90.22±0.42|55.21±0.29|56.32±0.36|98.39±0.41|98.10±0.23|87.32±0.41|86.54±0.25|
|EvolveGCN|94.01±0.62|91.05±0.39|56.33±0.41|56.91±0.55|98.77±0.39|98.80±0.13|88.32±0.24|87.19±0.50|
|SpikeNet|92.53±0.57|90.11±0.51|54.95±0.58|55.88±0.64|97.97±0.12|97.06±0.24|86.88±0.17|85.40±0.18|
|ROLAND|95.01±0.55|91.25±0.38|56.87±0.41|56.02±0.23|98.76±0.11|98.99±0.14|89.42±0.30|88.91±0.23|
|GraphSSM-S4|95.47±0.23|91.58±0.24|57.74±0.45|56.92±0.21|99.59±0.32|99.24±0.20|90.62±0.45|90.12±0.37|
|GraphSSM-S5|**96.45±0.15**|92.41±0.17|**58.49±0.33**|**57.40±0.31**|99.66±0.21|99.35±0.12|90.99±0.29|90.80±0.56|
|GraphSSM-S6|95.88±0.31|**92.52±0.28**|56.77±0.41|57.16±0.25|**99.70±0.13**|**99.42±0.14**|**91.16±0.32**|**91.19±0.24**|
[1] Transformers are rnns: Fast autoregressive transformers with linear attention. ICML, 2020.
[2] Transformers are SSMs: Generalized models and efficient algorithms through structured state space duality. arXiv 2024.
[3] Luan, Sitao, et al. "Is heterophily a real nightmare for graph neural networks to do node classification?." arXiv preprint arXiv:2109.05641 (2021).
---
We hope our responses were helpful in adequately addressing your earlier concerns. In case you have any further questions or comments, please let us know, and we will gladly respond.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their response. I don't have any further questions and I'm happy to raise my score to a 6
---
Rebuttal 2:
Comment: Thank you for your response and for raising the score. We are pleased to address the concerns you have raised. | Summary: The paper introduces GRAPHSSM, a novel state space model framework for temporal graphs. GRAPHSSM extends state space models (SSMs) by incorporating structural information through Laplacian regularization to handle the dynamic behaviors of temporal graphs. The framework aims to overcome limitations of recurrent neural networks (RNNs) and Transformers in modeling long-range dependencies and managing computational complexity. The authors propose the GHIPPO abstraction for memory compression and various mixing mechanisms to handle unobserved graph mutations. Extensive experiments on multiple temporal graph benchmarks demonstrate the effectiveness and efficiency of GRAPHSSM.
Strengths: - The paper is well-written and I enjoyed reading it. The derivation of the method and the extension from SSMs are technically sound
- The overall research direction is well-motivated and interesting. SSMs are indeed a good potential candidate for modelling temporal graphs.
Weaknesses: - There already exists an important family of temporal graph neural networks (TGNNs) that use memory mechanism (see [1] and [2] for examples) and resemble a state transition model. The paper seems to neglect/be unaware of this line of related work.
- There are parts of the method/extension that are not clear (see questions below for more details)
- There are a couple of serious limitations regarding the empirical study:
1) the baselines for the paper are rather outdated. The paper did not compare with the more recent TGNN models (e.g. above-mentioned memory-based temporal graph neural network, which is the SOTA for TGNN)
2) the performance improvement from the proposed baselines is still marginal
3) the canonical task for temporal graph neural networks should be link prediction, which is missing from the current study
[1] Rossi, Emanuele, et al. "Temporal graph networks for deep learning on dynamic graphs." arXiv preprint arXiv:2006.10637 (2020).
[2] Zhang, Yao, et al. "Tiger: Temporal interaction graph embedding with restarts." Proceedings of the ACM Web Conference 2023. 2023.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Q1: I fail to understand the second part of Eq.(3). Why does minimizing the second term of Eq.(3) amount to increasing smoothness? In addition, I don't think the interpretation/remark given in lines 179-180 is correct. If $\alpha$ goes to infinity, the whole objective, Eq.(3), is dominated by the second term. There is a degenerate solution $Z(s) = \mathbf{0}$ (zero vector) that would minimize the objective. Which part of the objective/model prevents this degenerate solution?
- Q2: Theorem 1 is saying that GHIPPO is "nice" because its parameter update can be described by the given ODEs. Can you further explain why this property is nice? Does the benefit come from better efficiency or inference performance, and how?
- Q3: The discretization in the HIPPO paper was for computational purposes. In the case of temporal graphs, events and snapshots are inherently discrete. What is the (optimization) objective for discretization? I.e., what should be considered a "good discretization" in this case?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N.A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1: Missing comparison of related works [1,2] and advanced methods**
Thank you for your suggestions. We have thoroughly reviewed the literature in temporal graph learning and are aware of several advanced works such as [3] and [4]. However, to the best of our knowledge, they focus on **continuous-time temporal graph learning** research, and it is unfair to consider them as baselines for comparison under the discrete-time settings. We have explicitly clarified the situation in Related Work, and we will add a brief discussion comparing GraphSSM with continuous-time methods in the next revision.
**W2: Performance improvement is marginal**
First of all, we respectfully disagree that the performance improvement is marginal. GraphSSM consistently achieves state-of-the-art performance and outperforms the runner-up baselines by significant margins, especially on the Reddit dataset. Secondly, scientific research is not a race; it is essential to place strong emphasis on other aspects of a method besides a single performance metric. GraphSSM shows superior performance while also exhibiting lower memory and computational overheads compared to other methods. This is also an important contribution of our work.
**W3: Link prediction results**
Thank you for your suggestion. The results for the link prediction task are presented below. We follow the experimental settings of ROLAND, where the model leverages information accumulated up to time t to predict edges in snapshot t + 1.
| | **DBLP-3** | | **Brain** | | **Reddit** | | **DBLP-10** | |
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| | AUC | AP | AUC | AP | AUC | AP | AUC | AP |
|STAR|94.21±0.43|91.80±0.51|56.10±0.48|55.24±0.71|98.21±0.59|98.32±0.32|86.81±0.23|85.3±0.31|
|tNodeEmbed|93.19±0.37|90.22±0.42|55.21±0.29|56.32±0.36|98.39±0.41|98.10±0.23|87.32±0.41|86.54±0.25|
|EvolveGCN|94.01±0.62|91.05±0.39|56.33±0.41|56.91±0.55|98.77±0.39|98.80±0.13|88.32±0.24|87.19±0.50|
|SpikeNet|92.53±0.57|90.11±0.51|54.95±0.58|55.88±0.64|97.97±0.12|97.06±0.24|86.88±0.17|85.40±0.18|
|ROLAND|95.01±0.55|91.25±0.38|56.87±0.41|56.02±0.23|98.76±0.11|98.99±0.14|89.42±0.30|88.91±0.23|
|GraphSSM-S4|95.47±0.23|91.58±0.24|57.74±0.45|56.92±0.21|99.59±0.32|99.24±0.20|90.62±0.45|90.12±0.37|
|GraphSSM-S5|**96.45±0.15**|92.41±0.17|**58.49±0.33**|**57.40±0.31**|99.66±0.21|99.35±0.12|90.99±0.29|90.80±0.56|
|GraphSSM-S6|95.88±0.31|**92.52±0.28**|56.77±0.41|57.16±0.25|**99.70±0.13**|**99.42±0.14**|**91.16±0.32**|**91.19±0.24**|
**Q1: On remark 1 and degenerate solutions**
Thank you for pointing out the degenerate solution; we shall restate the remark in future revisions of our paper. Technically, the minimizers of Laplacian quadratic forms satisfy a smoothness condition over connected components of the underlying graph (please refer to our discussion in Appendix B.1). It is correct that zero is a trivial solution that satisfies the smoothness requirement yet contains no useful information. Therefore, the optimal solution is only non-trivial when the objective is a combination of $\ell_2$ approximation and Laplacian regularization. Under such scenarios, the node features might be noisy themselves, and the Laplacian regularizer serves as a way to use neighborhood features as a (dynamic) denoiser.
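To make this concrete, here is a small numerical sketch (the 3-node path graph, feature vector, and $\alpha = 1$ are our own hypothetical choices, not values from the paper) showing that the combined $\ell_2$ + Laplacian objective admits a non-trivial minimizer, whereas the Laplacian term alone is minimized by the zero vector:

```python
import numpy as np

# Hypothetical 3-node path graph; Laplacian L = D - A.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
L = np.diag(A.sum(1)) - A

x = np.array([1.0, 3.0, 2.0])   # noisy observed (1-dim) node features
alpha = 1.0

def objective(z):
    # l2 approximation term + Laplacian smoothness regularizer
    return np.sum((z - x) ** 2) + alpha * z @ L @ z

# Closed-form minimizer of the combined objective: z* = (I + alpha L)^{-1} x.
z_star = np.linalg.solve(np.eye(3) + alpha * L, x)

# The combined objective rules out the degenerate all-zero solution,
# while the regularizer alone would be minimized by z = 0.
assert objective(z_star) < objective(np.zeros(3))
```

The closed-form minimizer $(I + \alpha L)^{-1} x$ shrinks feature differences across edges while staying anchored to the observed features, which is exactly the denoising behavior described above.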
**Q2: What are nice properties of GHiPPO solutions?**
The GHiPPO solutions are desirable in the following aspects:
- It allows a linear dynamical system (LDS) representation under specific choices of approximation configurations, which is a nice property of the HiPPO framework that is inherited by GHiPPO. Without a reasonable ODE representation, the approximation framework would have been only of technical interest but impractical.
- ODE representations allow a constant-memory update rule via a suitably defined discretization procedure. This ensures that the inference-time memory and computational complexity is well-controlled.
- GHiPPO also inherits the versatility of HiPPO in that the LegS configuration is not the only one that allows an ODE parameterization. Indeed, we may use alternative approximation schemes such as Fourier approximation, and the resulting framework is still efficiently computable and practical.
**Q3: The semantics of discretization**
This is a very good question, and we think this is an important algorithmic challenge raised by the GHiPPO abstraction. In our framework, we view the underlying temporal graph as a collection of node feature processes together with a dynamically-changing topological relation among them. Under this framework, we regard graph snapshots as discrete observations of this underlying graph process. The objective of discretization is thus to develop a node state update rule that respects the graph dynamics, which is the central topic we discussed in Section 3.2 of our paper. In particular, the existence of dynamic topological relations brings significant challenges to the discretization scheme (note that without topological relations, we can directly apply the ZOH discretization rule in a node-wise fashion) due to unobserved mutations of the underlying temporal graph, which is conceptually depicted in Equation (7) and Figure 2 of our paper. Therefore, to study what a good discretization should be in this case, we first establish a discretization rule in hindsight (the oracle discretization in Theorem 2) and propose approximations to the oracle discretization thereafter.
[3] DyGFormer: Towards Better Dynamic Graph Learning: New Architecture and Unified Library, NeurIPS 2023
[4] SimpleDyG: On the Feasibility of Simple Transformer for Dynamic Graph Modeling, WWW 2024
---
Thank you again for taking the time to review our paper. We hope our responses could clarify your concerns, and hope you will consider increasing your score. If we have left any notable points of concern unaddressed, please do share and we will attend to these points.
---
Rebuttal Comment 1.1:
Comment: I would like first to thank the authors for their diligent response. However, I still found my main concerns not fully addressed after reading the response.
1. I don't think "they focus on modelling continuous graphs" is a sufficient reason for not comparing them with the proposed method. Continuous vs. discrete are often modelling choices most of the time, and memory-based TGNNs have been applied to the datasets used in the paper [1,2]. I also think the modelling choice of this paper assumes the underlying process is continuous?
The most appealing reason why I think a detailed comparison is necessary between memory-based TGNNs and the proposed method is their memory mechanism. Memory-based TGNNs capture dynamic/temporal information of the graph by incrementally processing information and distilling it into a so-called memory vector, which in a sense can be viewed as the state of the graph/node. Therefore, I think the mechanisms of memory-based TGNNs and the proposed method are very similar. I am very keen to understand the fundamental difference and why we need this potentially new family of models for dynamic graphs.
2. I am very confused with the choice of experimental setting for the link prediction task. Based on my understanding, ROLAND actually focused on the efficiency of TGNN and studied a learning setting that is different from the typical ones (e.g., [1,2]). They studied and proposed a framework that is related to continual/incremental learning.
[1] Huang, Shenyang, et al. "Temporal graph benchmark for machine learning on temporal graphs." Advances in Neural Information Processing Systems 36 (2024).
[2] Poursafaei, Farimah, et al. "Towards better evaluation for dynamic link prediction." Advances in Neural Information Processing Systems 35 (2022): 3
---
Reply to Comment 1.1.1:
Title: Gentle Reminder
Comment: Dear Reviewer PUCW,
We sincerely appreciate your insightful review and feedback comments. As the author-reviewer discussion deadline (Aug 13) is approaching, we would like to check if you have any other remaining concerns about our paper.
**We have faithfully responded to ALL your comments. If our responses have adequately addressed your concerns, we kindly hope that you can consider increasing the score.** We understand that you have a demanding schedule, and we appreciate the time and effort you dedicate to reviewing our paper.
Kind regards,
Authors
---
Rebuttal 2:
Title: Response to Reviewer PUCW (Part 1/2)
Comment: Thanks for acknowledging our efforts in the rebuttal. We greatly appreciate your prompt response during the rebuttal period. We will now address your additional concerns and questions below.
**Response to Q1:** Thank you for the question regarding TGNNs. In our initial rebuttal, due to the character limit, we only tried to answer your question in a somewhat conventional way by pointing out that modeling on discrete-time dynamic graphs (DTDG) and continuous-time dynamic graphs (CTDG) is usually considered as different tasks [3]. We agree with you that there are more profound connections between the modeling of DTDG, CTDG and the GHiPPO abstraction we proposed. Our perspectives are three-fold:
+ (i) We will show that DTDG and CTDG are two different lossy observation schemes of the underlying graph process in the GHiPPO abstraction.
+ (ii) We discuss why the current SSM framework based on discretizations are not yet readily suitable for handling CTDGs.
+ (iii) We return to CTDG models as you have mentioned, and present some empirical results suggesting that they might indeed be not an ideal solution to DTDG problems.
### (i) DTDG, CTDG and GHiPPO
Recall that in our formulation of the underlying graph process, the node features evolve continuously and the topological relations among nodes allow finite (countable) mutations. Let's rewrite the process in equation (2) of the paper:
$G(0) \overset{\mathcal{E}_1}{\longrightarrow} G(t_1) \overset{\mathcal{E}_2}{\longrightarrow} G(t_2) \longrightarrow \cdots \longrightarrow G(t\_{M-1}) \overset{\mathcal{E}_M}{\longrightarrow} G(t_M) = G(T).$
Now we take a closer look at DTDG and CTDG representations of the above process:
- **DTDG**: In DTDG representations, we do not directly observe the events, but we observe the entire graph at certain time spots, resulting in a series of snapshots. In this spirit, **DTDG has complete latitudinal information, but is lossy regarding longitudinal information**.
- **CTDG**: In CTDG representations, we have complete observations of events, but upon each event information, we do not observe the features of the rest of the nodes (that do not participate in those specific events). Therefore, **CTDG has complete longitudinal information, but is lossy regarding latitudinal information**.
Next we proceed to why we choose to only study the former one (DTDG).
### (ii) Handling CTDGs using SSM discretizations is challenging
In Section 3.2 of our paper (especially Theorem 2), we established the discretization scheme upon an ideal, discrete observation (we observe the graph snapshot at each mutation event). We believe this reasonably hints at the gap between possible empirical approximations in either DTDG or CTDG scenarios. In DTDGs, we believe approximations using available snapshots are possible since, in hindsight, the ideal representation is a convex combination of the snapshot representations at the mutation times. The approximation bias mostly comes from fewer snapshots, and we use mixing strategies to mitigate the biases.
However, in CTDG scenarios, we miss the majority of information in each snapshot. Up till now, we have not figured out any practical and sound solutions to this issue. Besides, constructing snapshots from CTDGs is itself a very impractical method. It is possible that the GHiPPO abstraction might not be a very good fit for a principled study of CTDGs (it might be too stringent, in our opinion). Hence, we regard the modeling of CTDGs as beyond the scope of GraphSSM.
### (iii) CTDG models and their performance on DTDG graphs
As you have pointed out, many CTDG models are based on a **Message Passing combined with Recurrent State Update (MPRSU)** scheme, which is closely related to GraphSSM (and indeed, related to many other DTDG methods like ROLAND as well). Recall that SSMs are also a special case of linear RNNs from the computational viewpoint. However, **SSMs utilize a finer-grained definition of memory (by deriving it from an online approximation problem) that yields more stable and performant RNN variants compared to GRU or LSTM.** This is indeed what we tried to establish in our paper---we equip the node-wise memory in a principled way, formalized as the optimal approximation coefficients of the Laplacian-regularized online approximation problem. We have shown that this formulation leads to better DTDG modeling frameworks, and in our ablation study we also show the efficacy of using SSM schemes rather than ordinary RNNs or even softmax transformers. As far as we have noticed, most TGNN methods have no mechanistic interpretation of their memory mechanisms. Besides, as we have discussed earlier, it is unclear if SSM schemes are readily applicable to CTDG frameworks.
Finally, we have gained some additional empirical insights from additional experiments. We conducted evaluations of CTDG methods on the DBLP-3, Brain, Reddit, and DBLP-10 datasets.
---
Rebuttal 3:
Title: Response to Reviewer PUCW (Part 2/2)
Comment: The compared CTDG baselines include the memory-based methods TGN and TIGER (as you mentioned in the initial review), as well as the attention-based methods TGAT and DyGFormer. Results for the node classification tasks are presented below:
||**DBLP-3**||**Brain**||**Reddit**||**DBLP-10**||
|---------|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|
||Micro-F1|Macro-F1|Micro-F1|Macro-F1|Micro-F1|Macro-F1|Micro-F1|Macro-F1|
|TGN|80.43|80.08|89.50|89.59|40.66|40.50|70.01|68.83|
|TIGER|81.53|81.74|90.49|90.33|41.17|40.98|72.48|71.14|
|TGAT|82.59|82.14|90.98|90.25|42.79|42.40|73.15|72.02|
|DyGFormer|84.77|84.51|92.01|91.32|45.24|44.51|72.40|71.86|
|GraphSSM|**86.29±1.0**|**85.78±0.9**|**93.80±0.3**|**94.47±0.6**|**49.21±0.5**|**49.05±0.7**|**76.80±0.3**|**76.00±0.4**|
Due to time and space limitations, we are currently presenting the experimental results of the node classification task on the four datasets. Further experiments are being conducted and can be presented in a few days if necessary. As observed, CTDG methods demonstrate relatively poor performance on the four datasets, which is in line with our above claims. Indeed, many CTDG methods require fine-grained time information to capture the evolution patterns between edges, which is impractical for DTDGs with coarsened snapshot information.
**Response to Q2:** Firstly, we acknowledge that the learning setting in ROLAND differs from that of other works [1, 2]. The main differences arise from diverse contexts of dynamic graphs, as detailed below:
+ ROLAND focuses on **discrete-time dynamic graphs (DTDG)**, which is also the main focus of our work. In this setting, our goal is to predict edges in snapshot $t+1$ based on the snapshots accumulated up to time $t$. This is the general setting for discrete-time temporal graphs and is widely used in the literature, including several representative works, e.g., EvolveGCN [4].
+ For TGB [1] and DGB [2], both benchmarks focus on **continuous-time dynamic graphs (CTDG)**, whose goal is to predict the occurrence of a link between two given nodes at a specific time point.
+ While we agree that "Continuous vs. discrete are often modeling choices most of the time", there are differences in terms of the associated time information in the graphs for modeling. In the context of CTDG, link prediction is **fine-grained**, with each link being associated with an exact timestamp. This allows for predicting evolving edge streams at specific time points. However, in DTDG, link prediction is **coarse-grained**. Each edge in a snapshot has relative time information, and we can only predict whether a link will occur in the next time span, such as in the next week or month. In this regard, we were unable to perform link prediction under the setting of [1,2].
Secondly, while ROLAND studied and proposed a framework that is related to continual/incremental learning, the experimental setup for the link prediction task exactly follows the classic DTDG settings in previous works[4,5,6,7]. This is detailed in the ROLAND paper ("Task" section in Section 4.1).
Finally, to avoid any potential confusion, we detail our experimental setting for the link prediction task:
+ **Train-val-test splits:** For each dataset, we use $G_1, G_2, \cdots, G_{T-1}$ as the training data and $G_T$ as the test graph. Also, we use $G_{T-1}$ as the validation graph during training for hyperparameter tuning. Each model leverages the $T-1$ visible graphs to predict the occurrence of edges in $G_T$.
+ **Positive and negative edges:** For the link prediction problem, the positive test set consists of the edges that appear in $G_T$ but are not present in $G\_{T-1}$, while the negative test set consists of the edges that appear in neither $G\_{T-1}$ nor $G_T$.
+ **Evaluation metrics:** AUC and AP, two standard metrics in link prediction tasks.
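For concreteness, the positive/negative split described above can be sketched with toy snapshots (the edge sets below are purely hypothetical illustrations, not from any dataset):

```python
# Hypothetical snapshot edge sets on 4 nodes (undirected edges as sorted pairs).
E_prev = {(0, 1), (1, 2)}                    # edges in G_{T-1}
E_T = {(0, 1), (2, 3)}                       # edges in G_T
all_pairs = {(u, v) for u in range(4) for v in range(4) if u < v}

positives = E_T - E_prev                     # edges newly appearing in G_T
negatives = all_pairs - E_prev - E_T         # edges absent in both snapshots

# Edge (1, 2) disappeared from G_{T-1}, so by this definition it is in
# neither the positive nor the negative test set.
assert positives == {(2, 3)}
```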
---
If any concern still remains that might prohibit a positive recommendation of this work, we would appreciate if you could let us know.
### Reference
[3] Kazemi, Seyed Mehran, et al. "Representation learning for dynamic graphs: A survey." Journal of Machine Learning Research 21.70 (2020): 1-73.
[4] Pareja, Aldo, et al. "Evolvegcn: Evolving graph convolutional networks for dynamic graphs." Proceedings of the AAAI conference on artificial intelligence. Vol. 34. No. 04. 2020.
[5] You, Jiaxuan, Tianyu Du, and Jure Leskovec. "ROLAND: graph learning framework for dynamic graphs." *Proceedings of the 28th ACM SIGKDD conference on knowledge discovery and data mining*. 2022.
[6] Zhang, Kaike, et al. "Dyted: Disentangled representation learning for discrete-time dynamic graph." *Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining*. 2023.
[7] Sankar, Aravind, et al. "Dysat: Deep neural representation learning on dynamic graphs via self-attention networks." Proceedings of the 13th international conference on web search and data mining. 2020. | Summary: This paper applies SSM theory to temporal graphs by integrating structural information into the online approximation via a Laplacian regularization term.
Strengths: 1. The proposed method has the theoretical support to show the effectiveness of the proposed method.
2. The experimental results show that the proposed method outperforms most of the baseline methods.
3. This paper is well-motivated.
Weaknesses: 1. Since one advantage of SSM-based methods is their small parameter size, what is the size of the model compared with other baseline methods? It would be better to list the number of parameters to show the efficiency of the proposed method.
2. Transformer-based methods show better performance in Table 1; do you include any transformer-based methods in the experimental comparison?
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Since one advantage of SSM-based methods is their small parameter size, what is the size of the model compared with other baseline methods?
2. Transformer-based methods show better performance in Table 1; do you include any transformer-based methods in the experimental comparison?
3. What is the time complexity of the GraphSSM?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors provide the limitation in appendix E.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your positive reviews. Your concerns are addressed as follows:
**W1 & Q1: Parameter size of GraphSSM and baseline models**
Thank you for your insightful suggestion. Per your suggestion, we have now included a comparison of parameter sizes between GraphSSM models and baselines on a large dataset, arXiv.
| | STAR | tNodeEmbed | EvolveGCN | SpikeNet | ROLAND | GraphSSM-S4 | GraphSSM-S5 | GraphSSM-S6 |
| ----- | ---- | ---------- | --------- | -------- | ------ | ----------- | ----------- | ----------- |
| arXiv | OOM | OOM | 2.5M | 780K | 1.2M | 294K | 93K | 229K |
As observed from the table, GraphSSM, particularly GraphSSM-S5, shows superior parameter efficiency compared to other methods. This is attributed to the advantages offered by SSM models.
**W2 & Q2: Comparison of transformer-based methods.**
In this work, we focus on **discrete-time temporal graphs**, which typically have long sequence lengths and are not suitable scenarios for transformers. Current works on transformer-based temporal graph learning mainly focus on **continuous-time temporal graphs**, leaving a significant gap for discrete-time temporal graphs. In this regard, we did not include transformer-based baselines in our experiments, as it would be unfair to directly apply continuous-time methods to our specific settings for comparison. Instead, we conduct an ablation study on GraphSSM by substituting the SSM architecture with a Transformer and present the results below.
| | **DBLP-3** | | **Brain** | | **Reddit** | | **DBLP-10** | |
| -------------------- | :-----------: | :-----------: | :-----------: | :-----------: | :-----------: | :-----------: | :-----------: | :-----------: |
| | **Micro-F1** | **Macro-F1** | **Micro-F1** | **Macro-F1** | **Micro-F1** | **Macro-F1** | **Micro-F1** | **Macro-F1** |
| GraphSSM-Transformer | 85.02±1.1 | 84.98±0.8 | 93.47±1.5 | 93.11±1.1 | 43.48±0.7 | 43.11±0.5 | 75.65±0.8 | 74.32±0.6 |
| GraphSSM-S4 | 85.26±0.9 | 85.00±1.3 | 93.52±1.0 | 93.54±0.9 | **49.21±0.5** | **49.05±0.7** | **76.80±0.3** | **76.00±0.4** |
| GraphSSM-S5 | **86.29±1.0** | **85.78±0.9** | 93.00±0.4 | 93.01±0.4 | 44.75±0.4 | 44.79±0.4 | 75.19±0.6 | 73.95±0.4 |
| GraphSSM-S6 | 86.10±0.5 | 85.70±0.6 | **93.80±0.3** | **94.47±0.6** | 43.11±0.9 | 42.85±1.1 | 74.09±0.3 | 73.16±0.2 |
As observed, GraphSSM-Transformer does not exhibit significantly superior performance in learning from discrete graph sequences compared to SSM-based architectures, despite introducing more memory and computation overheads.
**Q3: Time complexity of the GraphSSM**
GraphSSM is a discrete-time sequence model that performs a message passing step for each graph snapshot, followed by an SSM model for sequence learning. Assuming we have $L$ GNN layers and the sequence length is $T$, the full-batch training and inference time complexity for each graph snapshot with $L$ layers can be bounded by $\mathcal{O}(L|\mathcal{E}|d)$, which represents the total cost of sparse-dense matrix multiplication or message passing. We assume a hidden dimension of $d$ across all layers for simplicity.
| Component | Time complexity |
| ----------------------- | -------------------------------- |
| Graph message passing | $\mathcal{O} (TL\| \mathcal{E}\| d)$ |
| Graph sequence learning | $\mathcal{O}(T)$ |
The main bottleneck of GraphSSM is the graph message passing step, which can be further optimized using techniques such as graph sampling[1] or graph condensation[2].
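As a toy illustration of the $\mathcal{O}(L|\mathcal{E}|d)$ per-layer bound (the graph, dimensions, and edge list below are hypothetical, not from the paper), a single message-passing layer implemented as edge-wise aggregation touches each edge once with a $d$-dimensional feature:

```python
import numpy as np

def message_passing_layer(edges, h):
    """One sum-aggregation layer. edges: list of (src, dst); h: (N, d) features."""
    out = np.zeros_like(h)
    ops = 0
    for src, dst in edges:      # one pass over the edge list ...
        out[dst] += h[src]      # ... each edge touching a d-dim feature vector
        ops += h.shape[1]
    return out, ops

N, d = 4, 8
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
h = np.random.randn(N, d)
out, ops = message_passing_layer(edges, h)

# |E| * d operations per layer; over T snapshots and L layers this
# accumulates to the O(T L |E| d) total cost quoted above.
assert ops == len(edges) * d
```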
[1] GraphSAINT: Graph Sampling Based Inductive Learning Method. ICLR 2020.
[2] Graph Condensation for Graph Neural Networks. ICLR 2022.
---
We would be grateful if you could tell us whether this response answers your questions about GraphSSM; if not, please let us know what is lacking so we can provide better clarification. Thank you for your time.
---
Rebuttal Comment 1.1:
Title: Reply to authors' rebuttal
Comment: I would like to thank the authors for the detailed response. I don't have further question.
---
Rebuttal 2:
Comment: Thank you for your response. We greatly appreciate your thoughtful feedback and the time you dedicated to reviewing our paper! | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
A Tractable Inference Perspective of Offline RL | Accept (poster) | Summary: This paper considers the RvS setting, where a sequence model is learned and used for a control-as-inference policy extraction. Naive sequence models fail due to overly optimistic reward-to-go in stochastic environments. It is possible to alleviate this with rejection sampling, but this suffers from the curse of dimensionality. This paper proposes the use of Tractable Probabilistic Models, which allow the evaluation of probabilistic queries given various degrees of observability. The proposed method uses a beam search procedure to evaluate sequences of actions to locate the best sequence. Results are presented on D4RL Mujoco benchmarks and stochastic game environments.
Strengths: - The paper identifies a key issue in RvS methods, and proposes a new model to address it. It takes advantages of TPM models to evaluate returns under various degrees of masking.
- The paper is written in a clear way.
- Empirical results are promising, showing a consistent improvement in both deterministic and stochastic benchmarks.
- The method is agnostic to the base policy distribution used, meaning it can be easily adapted.
Weaknesses: - It seems like the strength of TPM comes from beam search conducted on trajectories sampled from a separate policy model (e.g. TT, DT). This comes with the downside of a more costly inference and requires training multiple models. Is there a baseline comparison that utilizes the same computational efficiency?
- Is there a simple baseline to implement for the beam search value estimator that does not require a specialized TPM? To be thorough, it would be nice to decouple the beam search idea from the use of a TPM.
- It would be good to define RvS abbreviation early on.
- More detail on the formulation of the TPM model would clarify the method. How is the model trained and with what architecture?
Technical Quality: 3
Clarity: 3
Questions for Authors: See weaknesses above.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: A brief limitations section is present, which somewhat addresses the main limitations of the method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Comment #1: whether beam search is important in Trifle
we will first clarify the relationship between Trifle and beam search. Then we conduct a comprehensive ablation study on beam search algorithms to confirm Trifle's superiority.
As mentioned in the general response, the key insight of Trifle to solve challenge #1 is to utilize tractable probabilistic models to better approximate action samples from the desired distribution $p(a_t | s_{0:t}, \mathbb{E}[V_t] \geq v)$. We highlight that the most crucial design choice of our method for this goal is that: Trifle can effectively bias the per-action-dimension generation process of any base policy towards high expected returns, which is achieved by adding per-dimension correction terms $p_{TPM} (V_t \geq v | s_t, a_{t}^{\leq i})$ (Eq. (2) in the paper) to the base policy.
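As a purely illustrative sketch of this kind of per-dimension biasing (not Trifle's actual implementation; the base policy, correction function, and discretized action space below are hypothetical toys), each dimension's token distribution is reweighted by a correction term and renormalized before sampling:

```python
import numpy as np

rng = np.random.default_rng(0)

def guided_sample(base_probs, corr_fn, n_dims):
    """Sample a discretized action dimension-by-dimension, reweighting the
    base policy's probabilities by a correction term standing in for
    p(V_t >= v | s_t, a_t^{<=i})."""
    action = []
    for _ in range(n_dims):
        p = base_probs(action)                                   # base policy
        w = np.array([corr_fn(action + [a]) for a in range(len(p))])
        q = p * w
        q = q / q.sum()                                          # renormalize
        action.append(int(rng.choice(len(q), p=q)))
    return action

# Toy base policy: uniform over 3 tokens per dimension; toy correction
# strongly prefers token 2 (standing in for high expected return).
base = lambda prefix: np.ones(3) / 3.0
corr = lambda prefix: 0.9 if prefix[-1] == 2 else 0.05

actions = [guided_sample(base, corr, n_dims=2) for _ in range(200)]
frac_high = np.mean([a == [2, 2] for a in actions])
assert frac_high > 0.5   # guided sampling concentrates on the preferred action
```

Even though the base policy is uniform, the per-dimension reweighting concentrates the samples on the high-value action, which is the intuition behind the correction term in Eq. (2).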
While the rejection sampling method can help us obtain more unbiased action samples through a post value(expected return)-estimation session, we only implement this component for TT-based Trifle (not for DT-based Trifle) for fair comparison, as the DT baseline doesn't perform explicit value estimation or adopt any rejection sampling methods. Therefore, the success of DT-based Trifle strongly justifies that the correction term computed by TPM can effectively bias the actions generated by DT towards higher expected returns.
Moreover, the beam search algorithm comes from TT. Although it is a more effective way to do rejection sampling, it is not a necessary component of Trifle.
**Ablations over beam search/rejection sampling**
For TT-based Trifle, we adopted the same beam search hyperparameters as reported in the TT paper (in the official GitHub repo https://github.com/jannerm/trajectory-transformer). We conduct ablation studies on beam search hyperparameters in **Table 2** of the PDF to investigate the effectiveness of each of Trifle's components:
- Trifle consistently outperforms TT across all beam search hyperparameters and is more robust to variations of both planning horizon $H$ and beam width $W$.
- (a) Trifle w/ naive rejection sampling >> TT w/ naive rejection sampling; (b) Trifle w/o rejection sampling >> TT w/o rejection sampling. In both cases, Trifle's superior performance originates from its ability to positively guide action generation (similar to DT-based Trifle vs DT).
- Trifle w/ beam search > Trifle w/ naive rejection sampling > Trifle w/o rejection sampling >> TT w/ naive rejection sampling. Although other design choices like rejection sampling/beam search help to better approximate samples from the desired distribution $p(a_t | s_{0:t}, \mathbb{E}[V_{t}] \geq v)$, the per-dimension correction terms computed by Trifle to guide per-dimension action generation play a very significant role.
### Comment #2: the downside of a more costly inference and requires training multiple models.
Trifle is efficient in training. It only takes 30-60 minutes (~20s per epoch, 100-200 epochs) to train a PC on one GPU for each Gym-MuJoCo task (**note that a single PC can be used to answer all conditional queries required by Trifle**). In comparison, training the GPT model for TT takes approximately 6-12 hours (80 epochs).
To evaluate inference-time efficiency, we conduct a more detailed runtime analysis, and the main results are shown in Figure 1 of the attached PDF. Figure 1 (left) expands Table 5 and plots the stepwise inference-time scaling curve of Trifle vs TT with varying horizons. We can see that, as we increase the horizon, the relative slowdown is mitigated. This is because, across different horizons, **TPM-related computation consistently requires ~1.45s of computation time**, so this additional overhead diminishes as we increase the beam horizon. Figure 1 (right) shows that Trifle's runtime (TPM-related) scales **linearly** w.r.t. the number of action variables, which indicates its efficiency for handling high-dimensional action spaces.
Note that there are recent breakthroughs [1] in designing efficient PC implementations, which can significantly speed up the computation of Trifle (both training and inference).
[1] Liu, Anji, Kareem Ahmed, and Guy Van den Broeck. "Scaling Tractable Probabilistic Circuits: A Systems Perspective." arXiv preprint arXiv:2406.00766 (2024).
### Comment #3: define the RvS abbreviation earlier
We thank the reviewer for the suggestion and will revise the paper accordingly.
### Comment #4: details on the formulation, structure, and training algorithm of the TPM model
We thank the reviewer for their suggestion. We provide a concise and intuitive introduction to the TPM we adopted: Probabilistic Circuits (PCs). We describe the representation and structure of PCs in Section 2 and provide a detailed and formal introduction in Appendix B. We adopted the Hidden Chow-Liu Tree PC structure and used EM to train the model. Please refer to Appendix B.2 for more details about the adopted PC structures and the parameter learning algorithm.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for a detailed response. These clarifications are helpful in giving a more clear picture of the method, and I would encourage the authors to add these rationales into the experimental setup section. The description of the TPM model in Appendix B is brief, and the paper would benefit from a more thorough description as the average RL practitioner would be unaware of the details of the TPM model. I think this is a solid paper integrating the probabilistic circuits framework into RL, and I will opt to maintain my current score.
---
Rebuttal 2:
Comment: Thank you for your thoughtful feedback and appreciation of our response, and we appreciate your recognition of our work as solid. We will revise the paper to include the additional experimental details and a more comprehensive description of TPMs, as suggested. If you have any further questions, we are pleased to discuss them with you. | Summary: The paper provides empirical evidence that sequence models are able to identify promising actions, but that their policies at inference-time can be suboptimal. The paper proposes to use a tractable probabilistic model to bias the generated action sequences towards optimal ones. The paper provides fairly extensive empirical validation for its claims.
I think the paper could benefit from merging and rephrasing Sections 4 and 5. In its current state, I found it difficult to gather the proposed algorithm. I would also consider adding a (reduced) algorithm box to the main paper.
Strengths: * The experimental evaluation of the claims is thorough and includes a wide selection of relevant baselines.
* Trifle appears to be on par or slightly better than a number of other approaches.
Weaknesses: * It is mentioned that a viable alternative to Trifle are sequence models augmented by components learning the Q-function (e.g., QDT). However, Trifle is not compared empirically against these methods. I believe such a comparison would significantly strengthen the paper.
* I find Figure 2 (middle) slightly misleading. Why does it not directly plot the actual returns (and instead the "optimality score")? The correlation between "optimality score" and actual returns appears to be pretty weak in the relevant range. Why are middle and right not merged into one single figure?
* In the Gym-MuJoCo benchmarks, DD (conditional generative modeling) seems to perform roughly as well as Trifle. I believe that a more extensive comparison between the approaches would benefit the paper. Why would Trifle be preferable?
Technical Quality: 3
Clarity: 2
Questions for Authors: * In Figure 2, what is the definition of the "optimality score"?
* Why is the formulation of eq. (1) preferred over training a policy $\pi$ which maximizes expected return plus a (KL-)regularization term that encourages it to stay close to the data-generating distribution (such as common in the RLHF literature, e.g., [1])?
* How do you choose $t'$?
* Why is the additional rejection sampling step (lines 206-207) necessary given that $\tilde{p}$ is already proportional to the probability of having large multi-step value?
* How does the efficiency compare against conditional generative modeling (e.g., DD) and approaches such as QDT, given that TPMs can be slow to train?
[1]: Rafailov, Rafael, et al. "Direct preference optimization: Your language model is secretly a reward model." Advances in Neural Information Processing Systems 36 (2024).
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: See weaknesses and questions above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Comment #1: comparison with models augmented by Q-function learning components (e.g., QDT)
We compare TT-based Trifle with QDT in Appendix 4.1, and DT-based Trifle with QDT in **Table 3(a)** of the rebuttal PDF. The results demonstrate that DT-based Trifle significantly outperforms QDT, supporting our claim that in such scenarios, improving inference-time performance is more critical.
Additionally, since QDT enhances the training side with more accurate training labels and Trifle improves the inference side with better action sampling, both methods could be combined to achieve even better performance.
### Comment #2: definition of the inference-time optimality score
We define the "inference-time optimality score" as a proxy for inference-time optimality. This score is primarily defined over a state-action pair $(s_t, a_t)$ at each inference step. In Figure 1 (middle) and Figure 1 (right), each sample point represents a trajectory, and the corresponding "inference-time optimality score" is defined over the entire trajectory by averaging the scores of all inference steps.
The inference-time optimality score at time $t$ is defined by the quantile value of $R_t := \mathbb{E}_{V_t \sim p^a(\mathrm{RTG}_t \mid s_t, a_t)} [V_t]$ ($a_t$ is the action selected by the agent) under the distribution $p(V_t \mid s_t)$. This value will be high if the sampled action has a high estimated expected return. Please refer to the official comments for more details.
### Comment #3: the correlation between the optimality score and the actual returns
(Following the above response)
The above definition uses $p(V_t \mid s_t)$ (with $V_t = \mathrm{RTG}_t$) to measure the quality of $a_t$. The validity of this proxy rests on two assumptions. First, in environments like Gym-MuJoCo, the $\mathrm{RTG}_t$ labels provided in the offline dataset are of high quality and are a strong indicator of actual return (justified by Figure 1 (left)). Second, the $p(V_t \mid s_t)$ fitted using GPT can accurately approximate $p(V_t \mid s_t) := \sum_{a_t} p(V_t \mid s_t, a_t) \cdot p(a_t \mid s_t)$. Given the high-dimensional action space, the second assumption is challenging to verify directly. However, Figure 1 (right) shows a clear positive correlation between this proxy and the actual return, indicating that trajectories with higher scores often achieve higher final returns, with the small slope largely attributable to scale differences.
In summary, we have theoretically and empirically verified the effectiveness of the "inference-time optimality score" as a proxy. Figure 1 (middle) compares the inference-time performance of different policies by visualizing the distribution of the "inference-time optimality score" over multiple trajectories.
### Comment #4: comparison with DD
We thank the reviewer for drawing our attention to the interesting connection between Trifle and DD. First, as shown in **Table 3(b)** of the PDF, we highlight that Trifle outperforms DD on 7 out of 9 MuJoCo benchmarks in Table 1 and is more robust with smaller stds.
On the methodology side, DD adopts diffusion models to fit state-action sequences and then generates rewarding sequences by conditioning on high return. On the inference side, similar to DT, DD aims to sample actions conditioned on high return, while Trifle aims to improve upon existing RvS algorithms (e.g., TT and DT) by allowing them to sample actions conditioned on the *expected* return. While DD achieves superior performance compared to some other baselines, its main technical contribution is orthogonal to that of Trifle.
Next, in Section 4.2, we highlight Trifle's effectiveness in handling stochastic environments by efficiently and exactly computing the multi-step value estimate (Eq. (3)). This can significantly improve value estimation in stochastic environments. Since DD also requires a distribution $p(V \mid s_{0:t}, a_{0:t})$ to guide action sampling, this value component of Trifle could potentially be incorporated into DD. We will include the above discussions in the next version of the paper.
### Comment #5: connection with methods that train an entropy-regularized policy such as DPO
We thank the reviewer for their insightful comment.
In our humble opinion, we do not need to prefer one over the other as a policy trained with better objectives (e.g., that trained by some value-based offline RL methods) can be further utilized by RvS algorithms to obtain even better policies. For example, by using the Q values from a pre-trained IQL agent (which implies a better policy) for value estimation, TT can be significantly improved to be much better than both the vanilla TT and IQL.
In the case of Trifle, the question is whether we want to pay the additional inference-time overhead to achieve better performance. According to the general response, the additional components of Trifle take about 1.45s, which is negligible compared to the runtime of the base algorithm.
### Comment #6: how to choose t'
We choose $t' = t+3$. See the comments below for a detailed elaboration.
### Comment #7: why is the additional rejection sampling step necessary in lines 206-207
The necessity of rejection sampling stems from the hardness of drawing exact action samples conditioned on high expected returns, as shown in Theorem 1. For m-Trifle in stochastic environments, we need to compute $\mathbb{E}[V_t^{\mathrm{m}}]$ using the TPM via a post-value-estimation step and perform rejection sampling to obtain less biased samples from the desired distribution in Eq. (1).
### Comment #8: training time of Trifle and baseline methods
It takes 30-60 minutes to train a PC (the adopted TPM) on one GPU for each Gym-MuJoCo task. In comparison, training the GPT model for TT takes approximately 6-12 hours.
### Comment #9: detailed elaboration of the proposed algorithm
Please refer to the official comments for a detailed response to this comment, thanks very much!
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed comments to all reviewers and conducting a number of additional experiments and ablation studies.
I believe that these strengthen the paper and I increase my score to 6 accordingly.
> Comment #3: the correlation between the optimality score and the actual returns
Could the authors clarify why in Figure 2 (middle) they do not compare against the actual returns instead of the optimality proxy? In my understanding, the use of the proxy skews the results and makes them harder to interpret correctly. In particular, it appears to me that this biases the results in this figure to look more favorably for Trifle.
---
Reply to Comment 1.1.1:
Title: Further Clarifications about Figure 2
Comment: Thanks for the active feedback! Detailed comparisons against actual returns can be found in Table 1 of the paper, with Section 6.1 providing an in-depth analysis. Additionally, we have included ablation studies to validate the effectiveness of each component of Trifle in achieving superior performance.
More importantly, the final performance is determined by both training-time and inference-time optimality. In Figure 2, our primary objective was to conceptually isolate the inference-time optimality issue and examine the inference-time performance of each method, thus highlighting our argument that existing RvS approaches underperform as a result of suboptimal inference-time performance.
We hope this clarifies our intention. Thank you again for your insightful question.
---
Rebuttal 2:
Title: Details of Inference-time Optimality Score
Comment: The specific steps for calculating the inference-time optimality score for a given inference step $t$, given $s_t$ and a policy $p(a_t \mid s_t)$, are as follows:
1. Given $s_t$, sample $a_t$ from $p_{TT}(a_t \mid s_t)$, $p_{DT}(a_t \mid s_t)$, or $p_{Trifle}(a_t \mid s_t)$.
2. Compute the state-conditioned value distribution $p^s(V_t \mid s_t)$.
3. Compute $R_t := \mathbb{E}_{V_t \sim p^a(RTG_t \mid s_t, a_t)} [V_t]$, which is the corresponding estimated expected value.
4. Output the quantile value $S_t$ of $R_t$ in $p^s(V_t \mid s_t)$.
To approximate the distributions $p^s(V_t \mid s_t)$ and $p^a(V_t \mid s_t, a_t)$ (where $V_t = RTG_t$) in steps 2 and 3, we train two auxiliary GPT models using the offline dataset. For instance, to approximate $p^s(V_t \mid s_t)$, we train the model on sequences $(s_{t-k}, V_{t-k}, \ldots, s_t, V_t)$.
Intuitively, $p^s(V_t \mid s_t)$ approximates $p(V_t \mid s_t) := \sum_{a_t} p(V_t \mid s_t, a_t) \cdot p(a_t \mid s_t)$. Therefore, $S_t$ indicates the percentile of the sampled action in terms of achieving a high expected return, relative to the entire action space.
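The four steps above can be sketched as follows (a minimal NumPy sketch; representing the value distributions by empirical samples, rather than by the auxiliary GPT models we actually use, is an assumption made purely for illustration):

```python
import numpy as np

def optimality_score(r_t, v_samples):
    """Quantile of the estimated expected return R_t under the
    state-conditioned value distribution p^s(V_t | s_t), here
    represented by empirical samples v_samples."""
    return float(np.mean(np.asarray(v_samples) <= r_t))

def trajectory_score(per_step_scores):
    """Trajectory-level score: average of the per-step scores S_t."""
    return float(np.mean(per_step_scores))

# Hypothetical example: R_t = 0.9 sits above ~90% of sampled V_t values,
# so the selected action ranks in roughly the 90th percentile.
v_samples = np.linspace(0.0, 1.0, 100)
s = optimality_score(0.9, v_samples)
```

A score near 1 means the sampled action's estimated expected return ranks near the top of the state-conditioned value distribution.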
---
Rebuttal 3:
Title: Detailed Elaboration of the Proposed Algorithm
Comment: We thank the reviewer for the suggestion. We will include a subsection describing the algorithm and also provide an algorithm table in the next version of the paper.
The high-level idea of Trifle is that it can be built upon many RvS algorithms (e.g., TT, DT) to solve two key problems that are widely observed in the literature: (i) sampling actions conditioned on high *expected* return, and (ii) estimating state-action values under stochastic transition dynamics. In the following, we first explain what modifications Trifle applies to solve both challenges; we then describe how these modifications are used to design algorithms on top of existing RvS algorithms.
**Scenario 1: sampling actions conditioned on high *expected* return**
As described in Section 3, this problem can be formulated as: given a joint distribution over $p(s_{0:t}, a_{t}, V_{t})$, we want to query $p(a_t | s_{0:t}, \mathbb{E}[V_{t}] \geq v)$ (formally defined in Equation (1)). However, Theorem 1 illustrates that it is NP-hard to exactly compute $p(a_t | s_{0:t}, \mathbb{E}[V_{t}] \geq v)$ even when $p(s_{0:t}, a_{t}, V_{t})$ follows a simple Naive Bayes distribution, which guides Trifle to find *good approximations* of the query.
In general, we first train a PC (the adopted TPM) to fit the joint distribution $p(s_{t-k}, a_{t-k}, V_{t-k}, \ldots, s_t, a_t, V_t)$ from the offline dataset. This step is agnostic to the base RvS algorithm Trifle is built on. Then, given any component that computes or samples from $p(a_t \mid s_t, V_t = v)$ (DT) or $p(a_t \mid s_t)$ (TT), we replace it with a good approximation of $p(a_t \mid s_t, \mathbb{E}[V_t] \geq v)$ by augmenting these proposal distributions with a correction term.
Specifically, we utilize the pretrained PC to compute the per-dimension correction term $p_{TPM}(V_t \geq v \mid s_t, a_t^{\leq i})$ by marginalizing out the unseen action variables $a_t^{i+1}, \ldots, a_t^k$, which span an exponentially large action space; this biases the actions generated by the base policy towards high expected return $\mathbb{E}[V_t]$. Notably, we query this conditional probability for each dimension $a_t^i$; as inference proceeds, the index $i$ increases progressively, resulting in different marginal queries. However, thanks to PCs' tractability, we can use a single model to answer arbitrary marginalization queries. Moreover, the computation is exact, without any approximation, and scales **linearly** with the number of action variables, which is highly efficient. In Appendix C.2, we present the concrete algorithm by which PCs (the adopted TPM) compute conditional probabilities.
Note that the rejection sampling method via beam search is only adopted by the specific TT-based Trifle algorithm described in Section 4. Unlike TT, DT does not apply beam search, so we do not implement rejection sampling for DT-based Trifle, ensuring a fair comparison.
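As a toy illustration of how such a correction term reweights the sampling of one discretized action dimension (a sketch, not our actual implementation; `base_probs` and `correction` are hypothetical stand-ins for the base-policy probabilities and the TPM-computed $p_{TPM}(V_t \geq v \mid s_t, a_t^{\leq i})$):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_dimension(base_probs, correction):
    """Sample one discretized action dimension from the reweighted
    proposal p(a^i) proportional to p_base(a^i) * p_TPM(V_t >= v | s_t, a^{<=i})."""
    weights = np.asarray(base_probs) * np.asarray(correction)
    weights = weights / weights.sum()  # renormalize to a distribution
    return int(rng.choice(len(weights), p=weights)), weights

# Hypothetical 4-bin dimension: the correction upweights bin 3, which
# the TPM predicts is most likely to achieve V_t >= v, even though the
# base policy assigns it the lowest probability.
idx, w = sample_dimension([0.4, 0.3, 0.2, 0.1], [0.1, 0.2, 0.5, 0.9])
```

The reweighted distribution shifts probability mass towards bins the correction term favors, while still respecting the base policy's proposal.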
**Scenario 2: estimating state-action values under stochastic transition dynamics**
Many existing RvS algorithms require explicitly evaluating $p(V_t \mid s_{0:t}, a_t)$. However, the single-step value estimate $V_t = \mathrm{RTG}_t$ can be very inaccurate in stochastic environments, because RTGs heavily depend on the randomness that determines state transitions. To combat this inaccuracy, it is common to use multi-step value estimates instead; TD(1) and TD(λ) are notable examples. TD(1) relies on full returns to update value estimates, while TD(λ) introduces a balance between n-step returns and full returns using a parameter λ, allowing for more accurate value estimation.
While theoretically promising, classical multi-step value estimates generally require careful Monte Carlo sampling over different future state sequences, which makes it hard to obtain low-variance estimates. However, as introduced in Appendix C.2, with TPMs' ability to compute arbitrary conditional distributions, we can efficiently (in one forward pass and one backward pass of the TPM) compute the multi-step value estimate in Eq. (3). This allows us to enjoy the better accuracy of multi-step value estimates while mitigating the inefficiency of computing them.
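Concretely, the discounted combination underlying the multi-step estimate can be sketched as follows (a sketch assuming the per-step expected rewards $\mathbb{E}[r_h]$ for $h = t, \ldots, t'$ and the terminal $\mathbb{E}[\mathrm{RTG}_{t'}]$ have already been computed, e.g., by the TPM):

```python
def multi_step_value(expected_rewards, expected_rtg, gamma):
    """Discounted multi-step value estimate:
    E[V_t^m] = sum_{h=t}^{t'} gamma^(h-t) * E[r_h]
               + gamma^(t'+1-t) * E[RTG_t'].
    expected_rewards[k] holds E[r_{t+k}] for k = 0..(t'-t)."""
    est = sum(gamma**k * r for k, r in enumerate(expected_rewards))
    est += gamma**len(expected_rewards) * expected_rtg
    return est
```

Because the TPM supplies the expected rewards and terminal RTG exactly, this estimate avoids the Monte Carlo variance of sampling future state sequences.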
---
Rebuttal 4:
Title: Details of the m-Trifle algorithm and how we choose t'
Comment: For m-Trifle in the stochastic Taxi environment, we largely follow Algorithm 3 in Appendix 3.1; here is a more concise and complete version:
(1) Choose $t' > t$ and compute $p_{GPT}(a_t \mid s_t)$ and $p_{TPM}(V_{t'} > v \mid s_t, a_t)$.
(2) Sample N action candidates from $p_{Trifle}(a_t \mid s_t) = p_{GPT}(a_t \mid s_t) \times p_{TPM}(V_{t'} > v \mid s_t, a_t)$.
(3) For each action candidate $a_t$, sample future actions $a_{t+1}, \ldots, a_{t'}$ autoregressively from $p_{Trifle}$ for future evaluation.
(4) For each action candidate and each $h \in [t+1, t']$, compute $p_{TPM}(r_h \mid s_t, a_{t:h})$ by marginalizing over the intermediate states $s_{t+1:h}$, and also $p_{TPM}(V_{t'} \mid s_t, a_{t:t'})$, where $V_{t'} = \mathrm{RTG}_{t'}$.
(5) For each action candidate, compute:
$\mathbb{E}[V_t^{\mathrm{m}}] = \sum_{h=t}^{t'} \gamma^{h-t}\, \mathbb{E}_{r_h \sim p_{TPM}(\cdot \mid \tau_{\leq t}, a_{t+1:h})}[r_h] + \gamma^{t'+1-t}\, \mathbb{E}_{\mathrm{RTG}_{t'} \sim p_{TPM}(\cdot \mid \tau_{\leq t}, a_{t+1:t'})}[V_{t'}]$
(6) Select the action candidate with the maximum $\mathbb{E}[V_t^{\mathrm{m}}]$ to execute in the environment.
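Steps (2)-(6) amount to a simple score-and-select loop, which can be sketched schematically as follows (`sample_candidate` and `estimate_value` are hypothetical placeholders for the candidate sampling and TPM-based value estimation described above):

```python
def select_action(sample_candidate, estimate_value, n_candidates):
    """Sample N action candidates and return the one with the
    highest multi-step value estimate E[V_t^m]."""
    candidates = [sample_candidate() for _ in range(n_candidates)]
    return max(candidates, key=estimate_value)

# Toy usage with hypothetical stand-ins: candidates 0..4 are scored by
# a dummy value function peaked at 3, so candidate 3 is selected.
it = iter(range(5))
best = select_action(lambda: next(it), lambda a: -(a - 3) ** 2, 5)
```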
We choose $t' = t+3$. | Summary: This paper studies offline reinforcement learning and argues that tractability, e.g., the ability to answer probabilistic queries, is important for performance improvement. As a result, the authors propose a model that utilizes modern tractable generative models to answer arbitrary marginal/conditional probabilities. Comprehensive experiments demonstrate that the new model can achieve better performance in most cases.
Strengths: The paper is overall well-written and easy to follow. The considered problem is interesting and relevant. A comprehensive set of experiments also verify the merits of the proposed approach.
Weaknesses: While the proposed approach seems sound, the novelty and significance might be limited. The key idea in Sec 4 seems to be based on classical rejection sampling methods, though with some additions of correction terms.
Could the authors provide some discussions regarding computation complexity of the proposed methods?
Technical Quality: 2
Clarity: 3
Questions for Authors: See above
Confidence: 2
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Comment #1: connection between rejection sampling and the correction terms, and the corresponding computational complexity
We thank the reviewer for their constructive feedback. As mentioned in the general response, Trifle can be applied to many existing RvS algorithms (e.g., TT, DT) to mitigate the two identified challenges. To highlight, the per-action-dimension correction terms in Eq. (2) computed by the TPM are crucial for enhancing action sampling and mitigating the first challenge. The rejection sampling method via beam search is only adopted by the specific TT-based Trifle algorithm described in Section 4; unlike TT, DT does not apply beam search, so we do not implement rejection sampling for DT-based Trifle. Therefore, the success of DT-based Trifle strongly justifies that the correction term can effectively bias the actions generated by DT towards higher expected returns.
In the following, we will first discuss **how Trifle addresses both challenges** and why TPM matters, as well as the **computation complexity of the proposed methods**. Then we conduct ablation studies on rejection sampling to confirm Trifle's superiority.
**Challenge 1: Sampling actions conditioned on high expected return**
As described in Section 3, this problem can be formulated as: given a joint distribution $p(s_{0:t}, a_t, V_t)$, we want to query $p(a_t \mid s_{0:t}, \mathbb{E}[V_t] \geq v)$ (formally defined in Equation (1)). Note that the expectation over $V_t$ is very important: if we ignore the expectation operator and directly sample from $p(a_t \mid s_{0:t}, V_t \geq v)$, we may sample actions that achieve high returns only with low probability. This causes significant performance degradation, as discussed in [1]. We then naturally want to know whether we can compute this quantity exactly. However, Theorem 1 illustrates that it is NP-hard to exactly compute $p(a_t \mid s_{0:t}, \mathbb{E}[V_t] \geq v)$ even when $p(s_{0:t}, a_t, V_t)$ follows a simple Naive Bayes distribution, which guides Trifle to find *good approximations* of the query.
To achieve better approximations, we utilize the TPM to compute per-action-dimension correction terms $p_{TPM}(V_t \geq v \mid s_t, a_t^{\leq i})$ that bias the actions generated by any prior policy towards high expected return $\mathbb{E}[V_t]$. Note that we compute the correction term exactly for each dimension $a_t^i$, which is highly non-trivial: we have to marginalize out the unseen action variables $a_t^{i+1}, \ldots, a_t^k$, which span an exponentially large action space. This is why intractable models suffer and why we need to leverage the ability of TPMs to compute arbitrary conditional probabilities. Notably, the TPM computation is exact, without any approximation, and scales **linearly** with the number of action variables, which is highly efficient. We will later show ablation results that confirm **the superiority of the correction term both with and without the rejection sampling step**.
**Challenge 2: Estimating state-action values under stochastic transition dynamics**
The second major challenge is the inaccuracy of the RTG labels when the environment transition dynamics are highly stochastic. This is because RTGs heavily depend on the randomness that determines state transitions. To combat this inaccuracy, it is common to use multi-step value estimates instead. TD(1) and TD(λ) are notable examples. TD(1) relies on full returns to update value estimates, while TD(λ) introduces a balance between n-step returns and full returns using a parameter λ, allowing for more accurate value estimation. An application of this is the Proximal Policy Optimization (PPO) algorithm, which uses the eligibility trace governed by TD(λ) to establish better value estimates. This approach helps PPO achieve more stable and accurate value estimation by effectively integrating future reward information over multiple timesteps.
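For reference, the classical TD(λ) target mentioned above can be sketched as follows (a generic textbook-style sketch, not part of Trifle; `values` holds hypothetical bootstrapped value estimates):

```python
def lambda_return(rewards, values, gamma, lam):
    """TD(lambda) target: exponentially weighted mix of n-step returns.
    rewards[k] = r_{t+k}; values[k] = bootstrapped estimate V(s_{t+k+1}).
    lam = 0 recovers the one-step TD target; lam = 1 recovers TD(1),
    i.e., the full (bootstrapped) return."""
    g = values[-1]
    for r, v in zip(reversed(rewards), reversed(values)):
        # Recursive form: G_h = r_h + gamma * ((1-lam) * V(s_{h+1}) + lam * G_{h+1})
        g = r + gamma * ((1.0 - lam) * v + lam * g)
    return g
```

The parameter λ interpolates between the low-variance but biased one-step target and the unbiased but high-variance full return.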
While theoretically promising, classical multi-step value estimates generally require careful Monte Carlo sampling over different future state sequences, which makes it hard to obtain low-variance estimates. However, as introduced in Appendix C.2, with TPMs' ability to compute arbitrary conditional distributions, we can efficiently (in one forward pass and one backward pass of the TPM) compute the multi-step value estimate in Equation (3). This allows us to enjoy the better accuracy of multi-step value estimates while mitigating the inefficiency of computing them.
**Ablation studies on the rejection sampling/beam search**
For TT-based Trifle, we adopted the same beam search hyperparameters as reported in the TT paper (in the official GitHub repo https://github.com/jannerm/trajectory-transformer). We conduct ablation studies on beam search hyperparameters in **Table 2** of the PDF to investigate the effectiveness of each component of Trifle:
- Trifle consistently outperforms TT across all beam search hyperparameters and is more robust to variations of both planning horizon $H$ and beam width $W$.
- (a) Trifle w/ naive rejection sampling >> TT w/ naive rejection sampling
(b) Trifle w/o rejection sampling >> TT w/o rejection sampling.
In both cases, Trifle's superior performance originates from its ability to positively guide action generation (similar to DT-based Trifle vs. DT).
- (a) Trifle w/ beam search > Trifle w/ naive rejection sampling > Trifle w/o rejection sampling >> TT w/ naive rejection sampling.
(b) Trifle w/ naive rejection sampling $\approx$ TT with beam search.
Although other design choices like rejection sampling/beam search help to better approximate samples from the desired distribution $p(a_t \mid s_{0:t}, \mathbb{E}[V_t] \geq v)$, the per-dimension correction terms computed by Trifle to guide per-dimension action generation play a very significant role.
[1] Paster, Keiran, Sheila McIlraith, and Jimmy Ba. "You can’t count on luck: Why decision transformers and rvs fail in stochastic environments."
---
Rebuttal 2:
Comment: Dear Reviewer DwFR,
Thank you again for your valuable feedback and suggestions. In our previous response, we have provided a detailed explanation and supplemented our arguments with theoretical analysis and experimental validation. To help you better understand the improvements we made, I would like to summarize the key points:
1. **One of our main contributions:** Our method introduces per-action-dimension correction terms that can be exactly and efficiently calculated by TPMs to enhance the action sampling procedure. These correction terms significantly improve the performance, and our method does not necessarily need to be combined with rejection sampling or beam search to be effective. For example, **DT-based Trifle achieves significant performance gains compared to DT without using any rejection sampling**, proving that the per-action-dimension correction terms play a critical role in guiding the action generation process.
2. **Additional ablation study over rejection sampling:** We provided additional ablation studies (**Table 1** in the attached PDF) to show that
(i) The proposed correction term works well across different base algorithms (DT and TT). Specifically, as indicated by the last row of Table 1(b) in the attached PDF, TT w/o rejection sampling suffers a significant performance drop, while **Trifle works well even w/o rejection sampling (competitive results with TT w/ beam search)**.
(ii) Trifle can be effectively combined with rejection sampling or beam search to further boost performance, which leads to state-of-the-art results in 7 out of 9 Gym-MuJoCo environments.
3. **Another equally important contribution:** Our method enables **exact and efficient multi-step value estimation**, which significantly enhances performance in stochastic environments as shown in Section 6.2.
We believe we have addressed the concerns raised, but if you have any further questions or issues, we would be happy to discuss them with you. We look forward to your feedback.
Thank you again for your time and consideration. | Summary: The paper introduces Trifle (Tractable Inference for Offline RL) that leverages Tractable Probabilistic Models (TPMs) to enhance the performance of offline RL tasks. The paper emphasizes that beyond the expressiveness of sequence models, tractability--efficiently answering probabilistic queries--is crucial for accurate evaluation in offline RL, particularly in environments with high stochasticity and action constraints. In particular, Trifle uses a type of TPMs called Probabilistic Circuits (PCs), which supports the computation of arbitrary marginal and conditional probabilities in linear time with respect to their size. In practice, Trifle uses a mixture of single-step (return-to-go) and multi-step value estimates to condition action generation, allowing Trifle to handle both optimal and suboptimal return-to-gos effectively. Trifle enhances traditional rejection sampling by incorporating TPMs to adjust the proposal distribution, improving the likelihood of sampling high-return actions. It employs beam search to maintain and refine high-probability action sequences, ensuring actions are within the offline data distribution and likely to yield high rewards. Additionally, Trifle uses an adaptive thresholding mechanism to dynamically select expected return thresholds, maintaining robust performance across various evaluation settings.
The paper provides comprehensive empirical comparisons across nine Gym-MuJoCo benchmarks, a stochastic Taxi environment, and action-space-constrained tasks. Trifle consistently outperforms state-of-the-art baselines, including Trajectory Transformers (TT), Decision Transformers (DT), and various imitation learning and offline TD learning methods. These results demonstrate Trifle's robustness and effectiveness, particularly in challenging stochastic and constrained environments, underscoring its potential to advance offline RL methodologies.
Strengths: **Originality**: The paper presents a novel perspective by highlighting the importance of tractability in offline RL. By introducing TPMs into this domain, it addresses a gap not thoroughly explored by existing approaches, which typically focus on the expressiveness of sequence models.
**Quality**: The methodology is robust and well-supported by theoretical insights and extensive empirical evaluations. The experiments demonstrate that Trifle significantly outperforms strong baselines on diverse benchmarks, including challenging stochastic environments and safe RL tasks with action constraints.
**Clarity**: The paper is well-written and organized, making complex concepts accessible. The inclusion of detailed experimental setups and comprehensive results aids in understanding the advantages of Trifle over existing methods. I appreciate the overview on TPMs and the build up from section 3 to 5 from the theoretical to the practical implementation.
**Significance**: By demonstrating that tractability can greatly enhance evaluation-time performance, the paper paves the way for future research to develop more inference-aware RL approaches. The results on diverse and challenging benchmarks highlight the practical benefits of integrating TPMs into RL algorithms.
Weaknesses: 1. Limited evaluations on complex tasks (e.g. in the D4RL benchmark beyond the nine studied). There is perhaps a challenge (computational?) to scale up the PCs on more complicated environments. Even if the authors do not provide the results for these, perhaps it would be worth discussing any challenges on scaling up Trifle to these more complex tasks.
2. More detailed ablation studies. The paper includes some ablation studies, such as comparisons of Trifle variants with and without Q-value-based action filtering, demonstrating the effectiveness of exact inference. However, it would benefit from additional detailed ablations to isolate the contributions of other components, such as the adaptive thresholding mechanism and beam search strategy, to offer deeper insights into which elements are most critical to Trifle's performance improvements.
3. Scalability / Efficiency Analysis. The paper lacks a thorough analysis of Trifle's computational complexity and scalability. Discussing the trade-offs between the benefits of tractability and the computational overhead introduced, especially in high-dimensional action spaces, would provide a clearer picture of Trifle's practical feasibility. Including detailed runtime comparisons with other methods would be beneficial. The current runtime analysis indicates a 1.5 to 3 times increase in inference time compared to base models, but a more comprehensive analysis would be useful (e.g. plotting out a scaling law curve for Trifle vs TT in table 5 with more horizon, etc.)
Technical Quality: 3
Clarity: 4
Questions for Authors: Related to the above’s weaknesses:
1. How does the adaptive thresholding mechanism and beam search hyperparameters affect the performance of Trifle across different environments and datasets? Having some ablation studies on this would be helpful to suggest practical values for future work.
2. Could you provide more intuitive explanations and discuss the practical implications of the theoretical guarantees provided in Sections 4.1 and 4.2? While the theoretical results are well-presented, further clarity on how these results guide the design and implementation of Trifle would help readers better understand the theoretical contributions and their impact on performance. Specifically, discussing practical scenarios where these theoretical guarantees are most beneficial would strengthen the paper.
3. Have you considered evaluating Trifle on more complex tasks like the Antmaze benchmark?
Post rebuttal: increased my score from 6 to 7.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors have addressed some limitations of their work, specifically the dependency on expressive TPMs and the current inefficiency of PCs compared to neural network packages. However, there is no discussion on the potential negative societal impact of their work as the method was tested on simulated environments with no immediate societal impact.
One possible limitation to discuss is also how Trifle scales in high-dimensional action spaces, which might introduce computational overhead and impact performance.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Comment #1: practical implications of the theoretical guarantees in Secs. 4.1 and 4.2
Thanks for the constructive comment. The theoretical results elaborate on two key inference-side challenges. Both challenges are introduced in the general comment, and we provide further details in the following. We will incorporate the following discussion into the next version of the paper.
**Sampling actions conditioned on high expected return**
As described in Section 3, our goal is to sample from $p(a_t | s_{0:t}, \mathbb{E}[V_{t}] \geq v)$. However, Theorem 1 shows that this quantity is NP-hard to compute exactly even when $p(s_{0:t}, a_{t}, V_{t})$ follows a simple Naive Bayes distribution, which guides Trifle to find *good approximations*.
Specifically, we utilize the TPM to compute the correction term $p_{TPM} (V_t \geq v | s_t, a_{t}^{\leq i})$ to bias the actions generated by any prior policy towards high expected return $\mathbb{E}[V_t]$. Note that we compute the correction term exactly for each dimension $a_t^i$, which is highly non-trivial since we have to marginalize out the unseen action variables $a_t^{i+1},\dots,a_t^k$, which span an exponentially large action space. We provide **more ablation studies** on the effect of the TPM-provided terms in the response to comment #2.
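A minimal sketch of the per-dimension biasing idea described above. The multiply-and-renormalize rule and all names are our illustrative reading of Eq. (2), not the authors' implementation:

```python
import numpy as np

def biased_action_dim(base_probs, accept_probs):
    """Reweight one action dimension's sampling distribution by the
    TPM correction term p(V_t >= v | s_t, a^{<i}, a^i = candidate).
    Illustrative sketch only; the exact combination rule in the paper
    may differ."""
    p = np.asarray(base_probs, dtype=float) * np.asarray(accept_probs, dtype=float)
    return p / p.sum()

# Candidates the TPM deems likely to reach the return threshold get
# upweighted relative to the base policy.
base = np.array([0.5, 0.3, 0.2])    # base policy over discretized a^i
accept = np.array([0.1, 0.4, 0.9])  # p(V_t >= v | ..., a^i = candidate)
biased = biased_action_dim(base, accept)
```

Repeating this for each dimension $i = 1, \dots, k$ biases the autoregressive action generation towards high expected return without ever enumerating the exponential joint action space.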
**Estimating state-action values under stochastic transition dynamics**
As discussed in the general comment, we need to use multi-step value estimates under stochastic environments for better accuracy. While it is generally hard to obtain low-variance estimates, with TPMs' ability to compute arbitrary conditional distributions, we can efficiently compute the multi-step value estimate in Eq. (3). This allows us to enjoy the better accuracy of multi-step value estimates while mitigating the inefficiency of computing them.
### Comment #2: effect of the beam search hyperparameters
We conduct additional ablation studies on both the adaptive thresholding mechanism and rejection sampling/beam search.
**Ablation studies on the adaptive thresholding mechanism**
We report the performance of TT-based Trifle with varying $\epsilon$ vs TT on Halfcheetah Med-Replay in **Table 1(a)** of the rebuttal PDF. Trifle is robust to $\epsilon$ and consistently outperforms TT.
We also conduct ablation studies comparing the performance of the adaptive thresholding mechanism with the **fixed thresholding mechanism** on two environments in **Table 1(b)** by fixing different $v$s. The table shows that the adaptive approach consistently outperforms the fixed value threshold in both environments.
**Ablation studies on the rejection sampling/beam search**
As mentioned in the general response, for the Gym-MuJoCo benchmark, we only adopt rejection sampling/beam search for TT-based Trifle and not for DT-based Trifle. Therefore, the success of DT-based Trifle strongly justifies the effectiveness of the TPM components.
For TT-based Trifle, we adopted the same beam search hyperparameters as reported in the TT paper. We conduct ablation studies on beam search hyperparameters in **Table 2** of the PDF to investigate the effectiveness of each component of Trifle:
- Trifle consistently outperforms TT across all beam search hyperparameters and is more robust to variations of both planning horizon $H$ and beam width $W$.
- (a) Trifle w/ naive rejection sampling >> TT w/ naive rejection sampling (b) Trifle w/o rejection sampling >> TT w/o rejection sampling. In both cases, Trifle can positively guide action generation.
- Trifle w/ beam search > Trifle w/ naive rejection sampling > Trifle w/o rejection sampling >> TT w/ naive rejection sampling. Although other design choices like rejection sampling/beam search help to better approximate samples from the high-expected-return-conditioned action distribution, the per-dimension correction terms computed by Trifle play a very significant role.
### Comment #3: computational complexity analysis
Thanks for the valuable suggestion.
First, we conduct a more detailed runtime analysis, and the main results are shown in Figure 1 of the attached PDF.
Figure 1 (left) in the rebuttal PDF expands Table 5 and plots the step-wise inference-time scaling curve of Trifle vs TT with varying horizons. As we increase the horizon, the relative slowdown is mitigated: the TPM-related computation consistently requires ~1.45s across different horizons, so this additional computational overhead diminishes as the beam horizon increases.
Moreover, Trifle is efficient in training. It only takes 30-60 minutes to train a PC (the adopted TPM) on one GPU for each Gym-MuJoCo task (note that we only need one PC per task). In comparison, training the GPT model for TT takes approximately 6-12 hours (80 epochs).
Note that there are recent breakthroughs [1] in designing efficient PC implementations, which can significantly speed up training and inference of Trifle.
[1] Liu et al. "Scaling Tractable Probabilistic Circuits: A Systems Perspective." arXiv preprint arXiv:2406.00766 (2024).
### Comment #4: evaluation on the Antmaze task
We tried to run the Antmaze task but have not successfully reproduced the performance of TT+Q (as reported in the original paper, vanilla TT fails due to the sparsity of the rewards), since there is no official implementation on GitHub and we have not found a third-party implementation. We will keep trying to reproduce it during the discussion period.
### Comment #5: potential negative societal impact
Thanks. We will include the following discussion:
This paper proposes a new offline RL algorithm, which aims to produce policies that achieve high expected returns given a pre-collected dataset of trajectories generated by some unknown policies. When there are malicious trajectories in the dataset, our method could potentially learn to mimic such behavior. Therefore, we should only train the proposed agent in verified and trusted offline datasets.
---
Rebuttal 2:
Title: Details of the Adaptive Thresholding Mechanism
Comment: the adaptive thresholding mechanism is adopted when computing the term $p_{TPM} (V_t \geq v | s_t, a_{t}^{\leq i})$ of Equation (2), where $i \in \{1, \dots, k\}$, $k$ is the number of action variables and $a_t^{i}$ is the $i$th variable of $a_t$. Instead of using a fixed threshold $v$, we choose $v$ to be the $\epsilon$-quantile value of the distribution $p_{TPM}(V_t|s_t,a_{t}^{< i})$ computed by the TPM, which leverages the TPM's ability to exactly compute marginals given incomplete actions (marginalizing out $a_{t}^{i:k}$). Specifically, we compute $v$ as $v = \max\{r\in\mathbb{R} \mid p_{TPM}(V_t\geq r|s_t,a_{t}^{< i})\geq 1-\epsilon\}$. Empirically we fix $\epsilon$ for each Gym-MuJoCo environment to $0.2$ or $0.25$, selected by running a grid search over $\epsilon \in [0.1,0.25]$.
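A small sketch of the quantile computation above, assuming the TPM returns a discretized distribution over value bins (the function name and discretization are our illustrative assumptions):

```python
import numpy as np

def adaptive_threshold(values, probs, eps):
    """Return the largest r with P(V >= r) >= 1 - eps, i.e. the
    eps-quantile threshold described above. `values` are ascending
    value-bin centers and `probs` is the (TPM-computed) distribution
    p(V | s_t, a^{<i}); both are illustrative stand-ins."""
    probs = np.asarray(probs, dtype=float)
    probs = probs / probs.sum()
    tail = np.cumsum(probs[::-1])[::-1]  # tail[j] = P(V >= values[j])
    feasible = np.nonzero(tail >= 1.0 - eps)[0]
    return values[feasible[-1]]

# Uniform over 10 bins with eps = 0.25: the chosen threshold is the
# largest bin whose upper tail still carries at least 75% of the mass.
v = adaptive_threshold(np.arange(10), np.ones(10) / 10, 0.25)  # v == 2
```

Because the threshold adapts to the conditional value distribution at each action dimension, it stays meaningful even as earlier action choices shift the reachable returns, which a fixed $v$ cannot do.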
---
Rebuttal 3:
Title: Discussion about Trifle's Scalability
Comment: The scalability of Trifle to more complex tasks is also an interesting question. Some of the key challenges to scaling up an offline RL algorithm include (i) high-dimensional action spaces, and (ii) more complex transition dynamics and policies. Figure 1 (right) shows that Trifle's runtime (TPM-related) scales linearly w.r.t. the number of action variables, which indicates its efficiency for handling high-dimensional action spaces. The second challenge largely depends on how well complex sequences can be modeled. While we do not have a definite answer on the scalability of the adopted TPMs, there are abundant recent papers that significantly improve the expressive power of TPMs.
---
Rebuttal Comment 3.1:
Comment: Thank you for the detailed response to my concerns and questions, along with the additional results in the rebuttal PDF. This helped clarify the implications of the theorems, the effects of the beam search parameters, the computational complexity / scalability, and details of the adaptive thresholding mechanism. I encourage the authors to incorporate these details in the final version of the paper. I am raising my score from 6 to 7.
---
Rebuttal 4:
Comment: Thank you very much for supporting our paper and increasing the score! And thanks for your valuable suggestions during the rebuttal period to help us improve the paper. We will incorporate the additional details in the final version according to your comments. We sincerely appreciate your feedback! | Rebuttal 1:
Rebuttal: We thank all reviewers for their constructive feedback and for acknowledging our paper as novel, well-presented, and comprehensively evaluated. We summarize common questions and concerns raised by the reviewers in the following.
**Key technical differences of Trifle compared to other RvS or offline RL algorithms**
As highlighted in our paper as well as in the literature (e.g., [1]), there are several challenges in RvS algorithms from a tractable inference perspective. This paper identifies two such challenges and proposes to solve them using **tractable probabilistic models**.
**Challenge #1**: Sampling actions conditioned on high **expected** return
Given a generative model of the joint probability distribution $p(s_{0:t}, a_{0:t}, V_{0:t})$, our goal is to sample actions $a_t$ conditioned on the current state sequence $s_{0:t}$ and high expected future returns (i.e., high $\mathbb{E}[V_t]$), which can be concisely written as $p(a_t | s_{0:t}, \mathbb{E}[V_t] \geq v)$ (Eq. (1) in the paper). However, the condition on the expected future return makes it a hard inference task given generative models over the joint distribution. We show that this query is NP-hard to compute exactly even when the joint distribution follows Naive Bayes (Thm. 1). We propose a promising approximate sampling algorithm for $p(a_t | s_{0:t}, \mathbb{E}[V_t] \geq v)$ that leverages the ability of TPMs to compute arbitrary conditional distributions. This component can be directly used to replace policy components such as $p(a_t | s_{0:t})$ (e.g., TT) and $p(a_t | s_{0:t}, V_t = v)$ (e.g., DT) used in existing RvS algorithms. Empirical evaluations demonstrate that the addition of this component significantly improves over the base algorithm.
**Challenge #2**: Estimating state-action values under stochastic transition dynamics
Another major challenge comes from the inaccuracy of the return-to-go (RTG), an estimate of the return, provided in the offline datasets. This problem is even worse in stochastic environments, where the labeled RTGs have very high variance. To combat this inaccuracy, it is common to use multi-step value estimates such as TD(1) and TD(λ). Specifically, we want to compute $\mathbb{E} [r_t + \cdots + r_{t'} + V_{t'+1} | s_{0:t}, a_{t:t'}]$ given a joint distribution represented by a generative model. However, computing such values requires implicitly marginalizing out future states $s_{t+1:t'}$, which is computationally intractable for most models. We propose to use a TPM to exactly compute such terms, which leads to a significant performance gain in stochastic environments.
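As a toy illustration of the marginalization involved, the following sketch computes a multi-step value estimate exactly on a tiny tabular chain by pushing the state distribution through the transition matrix (a stand-in for the TPM computation; all names are ours, and actions are folded into the induced transition matrix for simplicity):

```python
import numpy as np

def multi_step_value(P, r, V, s0, k):
    """Exact k-step estimate E[r_t + ... + r_{t+k-1} + V_{t+k} | s_t = s0],
    marginalizing intermediate states of a small tabular chain.
    P is the |S|x|S| transition matrix; r and V are per-state vectors."""
    d = np.zeros(len(r)); d[s0] = 1.0   # current state distribution
    total = 0.0
    for _ in range(k):
        total += d @ r                  # expected immediate reward
        d = d @ P                       # marginalize one step forward
    return total + d @ V                # bootstrap with the value estimate
```

For sequence models without tractable marginals, this sum over intermediate states has no closed form and must be approximated by sampled rollouts, which is the source of the variance/inefficiency discussed above.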
**The relationship between Trifle and rejection sampling/beam search algorithm**
The key insight of Trifle to solve challenge #1 is to utilize tractable probabilistic models to better approximate action samples from the desired distribution $p(a_t | s_{0:t}, \mathbb{E}[V_t] \geq v)$. We highlight that the most crucial design choice of our method for this goal is that Trifle can effectively bias the per-action-dimension generation process of any base policy towards high expected returns, which is achieved by adding per-dimension correction terms $p_{TPM} (V_t \geq v | s_t, a_{t}^{\leq i})$ (Eq. (2) in the paper) to the base policy.
While the rejection sampling method can help us obtain more unbiased action samples through a post-hoc value (expected return) estimation step, we only implement this component for TT-based Trifle (not for DT-based Trifle) for a fair comparison, as the DT baseline doesn't perform explicit value estimation or adopt any rejection sampling method. Moreover, the beam search algorithm also comes from TT. Although it is a more effective way to do rejection sampling, it is not a necessary component of Trifle, either.
Next, following the suggestions of the reviewers, we conduct a comprehensive ablation study over beam search/rejection sampling in Table 2 of the attached PDF. The following results:
- **Trifle with beam search > Trifle with naive rejection sampling > Trifle w/o rejection sampling >> Base policy w/o rejection sampling**
- **Trifle w/o rejection sampling >> TT with naive rejection sampling**
- **Trifle with naive rejection sampling $\approx$ TT with beam search**
strongly justify our claim. (We analyze the results in detail in our response to each reviewer.)
**Computational efficiency/scalability of Trifle**
For the inference-side efficiency, we conduct a more detailed inference-time evaluation, and the main results are shown in Figure 1 of the attached PDF. Specifically, Figure 1 (left) expands Table 5 and plots an inference-time scaling curve of Trifle vs TT with varying horizons, where we can draw the same conclusion as Appendix D.3 of the paper: since the TPM-related computation consistently requires ~1.45s across different horizons, the relative slowdown of Trifle diminishes as we increase the beam horizon. Figure 1 (right) shows that Trifle's runtime (TPM-related) scales **linearly** w.r.t. the number of action variables, which indicates its efficiency for handling high-dimensional action spaces.
Trifle is also efficient in training. It only takes 30-60 minutes (~20s per epoch, 100-200 epochs) to train a PC on one GPU for each Gym-MuJoCo task (**note that a single PC model can be used to answer all conditional queries required by Trifle**). In comparison, training the GPT model for TT takes approximately 6-12 hours (80 epochs).
Note that there are recent breakthroughs [2] in designing efficient PC implementations, which can **significantly speed up** the computation of Trifle (both training and inference).
[1] Paster, Keiran, Sheila McIlraith, and Jimmy Ba. "You can’t count on luck: Why decision transformers and rvs fail in stochastic environments."
[2] Liu, Anji, Kareem Ahmed, and Guy Van den Broeck. "Scaling Tractable Probabilistic Circuits: A Systems Perspective."
Pdf: /pdf/c979cd38d3640acd5ba71b7cf48f789423a38493.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Catastrophic Goodhart: regularizing RLHF with KL divergence does not mitigate heavy-tailed reward misspecification | Accept (poster) | Summary: The authors presents a study on the RM in RLHF. Theoretical results of the paper assume that the reward from the RM consists of a true reward plus a noise term which is modeled as a random variable. Under this model the authors show that
- If the noise terms are heavy-tailed there exists policies with vanishing KL-divergence but infinite proxy reward.
- If the noise terms are IID and light-tailed, then KL divergence suffices to guarantee good outcomes
The authors also investigate the actual rewards given by real RMs. They find that the rewards on random sequences appear to be roughly light-tailed.
Strengths: - The paper is well written and easy to read.
- The theoretical results are relevant.
Weaknesses: - there is probably some discrepancy between the theory and practice. The results are asymptotic; it would be better if the statements held in non-asymptotic regimes.
- The experimental results seem to show that the rewards are not heavy-tailed, and thus the theory suggests that KL divergence should work well. We already know from practice that it does work well, so there is no novel take-home message.
Technical Quality: 3
Clarity: 3
Questions for Authors: - The theoretical results are related to asymptotically heavy tails. What would happen if I take my RM and clip the output to be in [-10000, +10000]? I’m guessing that in practice nothing would happen, but would the theoretical results be invalidated since the tails disappear?
- In practice the output of the RM isn’t a random variable, it’s deterministic. Can you comment on the validity of the assumption that the rewards are random?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: na
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the thoughtful criticism. We considered writing more about how asymptotic results relate to real life, and will do so in the next version of the paper. We additionally hope that this addresses your concerns:
* *Asymptotic vs bounded/clipped:* With a bounded RM the asymptotic results would indeed not apply. In practice, the same pattern of rare failures producing most of the reward and resulting in near zero utility would likely apply, with sufficiently regularized policies having, say, a 0.1% chance of 10000 reward and otherwise matching the base policy. This could change if error and utility are nonindependent and the base policy is close to optimal so that it actually encounters lots of clipped rewards. (See also the next paragraph, and the last bullet point of our rebuttal to SFxG.) Though it would be possible to prove things about the bounded case, we presented the asymptotic results because we thought they were much cleaner.
* *Take-home message:* We think there is an interesting take-home message. It is true that KL divergence penalties work sufficiently well to control overoptimization in current language models, though overoptimization has been demonstrated in practice by the results of Gao et al. But our theoretical results show that light-tailed + independence *prevents* overoptimization. Therefore in such settings we conclude that overoptimization is most likely a result of independence being violated, perhaps as textual features that increase error also decreasing utility. We have improved section 7.4, which discusses this, in the latest version.
* Deterministic RM: Our theorems still apply when the reward model is deterministic. The randomness in our theorems comes from the sentences generated by the LLM (which are random) being passed through the deterministic reward model, which results in a joint distribution over $V$ and $X$. One of the contributions of our paper is the framing that, to study the final achieved reward, it is sufficient to examine this 2d distribution. Did we understand your question correctly?
We welcome any more comments you might have. Thank you for the review! | Summary: This paper analyzes a phenomenon called "catastrophic Goodhart": in RL training, suppose the learned reward function is the sum of the true utility function and some noise, then if (1) the true utility and the noise are independent and light-tailed, maximizing reward with KL regularization can also give high utility, while if (2) the noise is heavy-tailed, then there exists a policy with arbitrarily high reward and arbitrarily small KL penalty, but its true utility is close to the initial policy. Empirical results suggest that open-source reward models could be light-tailed.
Strengths: This paper considers whether KL regularization is enough to handle reward misspecification, which is an important problem. This paper also gives clean answers to this question: if utility and noise are independent and light-tailed, then KL regularization is enough, while if noise is heavy-tailed, we may have very little improvement in utility even with high proxy reward and small KL penalty. I think these results are nice and enhance our understanding of this problem.
Weaknesses: One missing piece is the implicit regularization effects of the optimization algorithms: although there exists a policy with low utility even under a KL penalty, probably the training algorithm can avoid those bad policies and still find a good one. It will be nice if we can understand whether certain optimization techniques can solve this issue.
Some additional comments:
1. The notation $U$ and $V$ are used inconsistently in the paper: on line 34, $U$ denotes the true utility and $V$ denotes the proxy reward, while on line 71 their meanings are reversed.
Technical Quality: 3
Clarity: 3
Questions for Authors: I have one question regarding Theorem 4: the final conclusion is that the expected utility can be arbitrarily large, but if the utility function is bounded, I don't understand why its expectation can be arbitrarily large.
I also have a more open-ended question: if we make the proxy reward function bounded (e.g., by clipping), do we have similar results?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the positive feedback. To address the weaknesses and questions:
* *Implicit regularization:* we acknowledge this as a weakness, and have some ideas for how implicit regularization can prevent Goodharting. But there is not really any evidence yet, and we would be excited to see follow-up work like this.
* *Typo on line 34:* we have fixed this in the latest version. Thank you for pointing it out!
* *Question about theorem 4:* Thank you for pointing out that we need to assume unbounded light-tailed $V$, we inadvertently dropped it and will explicitly state it for the final version. For the result to hold, the proxy reward should be unbounded so that $\lim _{\beta \rightarrow 0^{+}} \frac{1}{2} E[V \mid \pi] = \infty$ in the proof (for reference $\beta$ is the regularization coefficient of the KL term).
* *When proxy reward is bounded:* The optimal policy is Boltzmann rational. So how much utility we get will be determined by whether, for state-actions which are close to optimal, the reward is from utility or error. We roughly think that if you clip at large enough values, the asymptotic results in this paper will basically transfer.
With more aggressive clipping, you tend to get both error and utility components when the reward is optimized, so neither of the scenarios in our theorems holds (neither wholly error in heavy-tailed nor arbitrarily high utility in light-tailed). It is possible we can characterize how much utility or error the solution will have based on their ratios for the largest $V$, but we’ll leave that to future work.
Clipping can also distort the reward (see response to reviewer G3eM), so while it prevents overoptimization somewhat it also rewards incorrect behavior.
We would welcome any more questions or feedback you have. Thank you for your review!
---
Rebuttal 2:
Title: Clarification on bounded case
Comment: We wanted to add some information about the bounded proxy reward case:
* A correction: above, we said the optimal policy was Boltzmann rational, but in addition to the exponential term it also retains the base-policy likelihood factor.
* The best-of-N results from the comment to ngHC look very similar when reward is clipped to [-100,100], though in that case utility will increase again for very large N, and also when noise is Levy-distributed. When reward is clipped to [-10, 10], we no longer see overoptimization if utility is heavy-tailed. | Summary: This paper investigates the effectiveness of using KL divergence for regularization in reinforcement learning from human feedback (RLHF), particularly when dealing with reward functions that have misspecified errors. It introduces the concept of "catastrophic Goodhart," a scenario where policies can achieve extremely high rewards with heavy-tailed errors, but without actual improvement in utility.
Strengths: The paper addresses a critically important problem within the domain of RLHF --- the challenge of dealing with reward misspecification, particularly in scenarios where the error distribution is heavy-tailed. The authors defined the problem as "catastrophic Goodharts" and provided analysis accordingly.
Weaknesses: [writing / presentation] The paper contributes to the field of LLM safety/alignment, yet the writing could be made more engaging and accessible. Improving clarity, enhancing the depth of discussions, and ensuring visual and structural appeal could greatly increase its impact and readability. These changes would not only cater to experts in the field but also to readers who may be new to this area of study.
[Experiments] the experimental evidence could be further bolstered to validate the presented claims comprehensively. Specifically:
- The experiments primarily focus on simulated environments. Including real-world applications could demonstrate the practical implications and limitations of the findings more effectively.
- Exploring a wider variety of reward functions, particularly those commonly used in commercial or industrial settings, could help ascertain the generalizability of the catastrophic Goodhart phenomenon across different contexts.
- Additional experiments comparing KL divergence with other regularization or penalty methods could offer a clearer understanding of its relative efficacy and limitations.
- How about other optimization methods like Best-of-N where KL divergence term is not used?
- Will the implementation like LoRA affect the proposed effect?
Technical Quality: 2
Clarity: 3
Questions for Authors: Please see the weakness section
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: please see weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review, which gives several intriguing suggestions for experiments, as well as presentation improvements.
As for readability and clarity, we have made various edits in the latest revision of the paper, including streamlining the background section for readers unfamiliar with AI alignment, clarifying the relationship of DMRMDP to RLHF, and improving presentation of graphs. Also see several changes we have already or plan to address in the rebuttal to reviewer G3eM.
We agree that further experimental evidence could be valuable, with some caveats.
* *Wider variety of reward functions / real-world experiments:* We think work here would be valuable, but it could risk being redundant with the wealth of examples of overoptimization in the existing literature (see concluding paragraph).
* *Best-of-N experiments:* This would complement our theorems 5 and 6 on optimization by conditioning, which describe the best-of-N distribution. We have already done the experiment to show that best-of-N succeeds in a toy light-tailed regime but fails in a toy heavy-tailed regime, and we will include this in the appendix of the final version.
* *Additional experiments comparing KL divergence with other regularization or penalty methods:* We will add a synthetic experiment with KL divergence, in response to this review and that of reviewer G3eM. The experiment demonstrates whether a reward function with artificially heavy-tailed error causes catastrophic Goodhart in the KL divergence setting. We are not sure which other regularization schemes we should try: the RLHF literature uses KL divergence, and Best-of-N is an alternative proposed precisely to avoid overoptimization. Perhaps an integral probability metric like the Wasserstein distance, or quantilizing optimization, which only goes up to a percentile of the misspecified reward? It would be good if you could clarify which schemes would be important to try here.
* *LoRA:* We think studying LoRA would be valuable to nail down exactly when inductive bias prevents catastrophic Goodhart, but considering that a search over LoRA hyperparameters and environments (see paragraph 4 of rebuttal to G3eM for why a single environment is as yet insufficient) would increase the compute requirements by orders of magnitude, this would be valuable follow-up work.
As for why we do not plan to implement all experiments suggested, we see this work as primarily providing a theoretical framework for experimental work already done by others, especially Gao et al [1] and specification gaming [2, 3]. The latter gives 60 examples of real-world utility loss due to specification gaming, several of which we think are examples of "catastrophic Goodhart". The link to previous specification gaming work will be clarified in the latest version.
We thank reviewer ngHC for their feedback. We hope that the theorems, existing experiments, and additional experiments listed above (demonstration of catastrophic Goodhart, and best-of-N study) address the given concerns. Please let us know what weaknesses remain and what we can do to address them, and consider revising your review if we have addressed your concerns to your satisfaction. We would welcome any further feedback and comments. Thank you!
[1] Gao, L., Schulman, J., and Hilton, J. Scaling laws for reward model overoptimization. In Proceedings of the 40th International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, pp. 10835–10866. PMLR, 2023. URL https://proceedings.mlr.press/v202/gao23h.html
---
Rebuttal Comment 1.1:
Title: thanks for clarify
Comment: I appreciate the authors' detailed response. Can the authors please also provide the results they promised to provide?
---
Reply to Comment 1.1.1:
Title: Best-of-N results
Comment: The best-of-n experiment results are as follows:
We created a synthetic experiment by letting reward U = X + V, where X and V are independent and sampled from different probability distributions, consistent with our theoretical assumptions. We vary N from 1 to 65536, do 100 trials of taking the best-of-N sample with highest U, and note whether V goes towards 0 (overoptimization) or not.
* Possible distributions for V are normal and t-distribution with df=10.
* Possible distributions for X are normal, t with df=3, t with df=5, lognormal, and Levy. All of these are heavy-tailed except for normal.
* V is scaled to a standard deviation of 2 and X has s.d. of 1 (except for the Levy distribution, which has infinite variance), representing that in ordinary regimes most of the variance comes from utility rather than error.
Results (See image at: https://imgur.com/a/mo02KET )
* When the error X is normal and thus light-tailed, V increases monotonically with N, consistent with our Theorem 6.
* In 5 of 6 cases when X is lognormal or student-t, V first increases then starts to decline around N=10^2 or 10^3. When X is (t, df=5) and V is (t, df=10), V instead peaks around N=65536 but declines by N=1048576. This is consistent with Theorem 5.
* When X is Levy-distributed, utility never goes significantly above zero because Levy distribution is too heavy-tailed. In this scenario optimization completely fails.
RLHF experiments are still in progress and we will share results soon. We will be using Llama3 on OpenRLHF and adding noise to an open reward model which we deem "true".
---
Reply to Comment 1.1.2:
Title: PPO results
Comment: In addition to the best-of-N results in the comment above, we examined PPO with artificially heavy-tailed rewards.
* We used OpenRLHF to train Pythia 1B with a reward model derived also from Pythia 1B, on the default OpenRLHF prompt dataset.
* We used the reward model to represent true utility, and a heavy-tailed error term based on the number of "the" tokens was added to get the proxy reward.
* The kl_target=0.5 option was used to dynamically adjust KL penalty, as we mention is done in Ziegler et al (2020).
* Rewards were not clipped. (Reward clipping can be useful to prevent overoptimization, but is not always used in PPO, see point 1 of our rebuttal to G3eM)
* Response length was limited to 256. Other hyperparameters were unremarkable but can be provided if needed.
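The proxy-reward construction above can be sketched as follows. The rebuttal does not specify the exact form of the error term, so the Pareto-distributed multiplier and the function names (`heavy_tailed_error`, `proxy_reward`) are assumptions for illustration only.

```python
import random

def heavy_tailed_error(text, scale=0.5):
    """Illustrative heavy-tailed bonus that grows with the number of
    'the' tokens, so a policy can inflate proxy reward by repetition."""
    n_the = sum(tok == "the" for tok in text.lower().split())
    # Pareto-style tail: a few completions receive very large bonuses.
    return scale * n_the * random.paretovariate(1.5)

def proxy_reward(true_reward, text):
    """Proxy = true reward (from the reward model) + heavy-tailed error."""
    return true_reward + heavy_tailed_error(text)
```

Under this construction, repeating "the" raises the proxy reward without raising true utility, which is the failure mode observed in training.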
Results:
* Initially the policy balanced maximizing error and utility, generating completions like `the suggestions are shown in the images of the Progress Spinner of the completed list view.` (It's only 1B so not very coherent)
* Later in training the policy achieved extremely large values of reward with similar or lower KL divergence by generating normal text followed by long strings of repeated "the" tokens. (Due to the negative correlation, utility became negative. Note theorems 2 and 3 are still relevant because they have no independence assumptions)
We think this validates that the basic pattern of catastrophic Goodhart can occur in RLHF under conditions of heavy-tailed error. Limitations include the artificial nature of the reward and small size of the models, but because the theorems are very generalizable facts, we expect the same to hold in other conditions, and we hope this has addressed your concerns. | Summary: The paper first considers a theoretical stylized model of jointly distributed utility and (mis-specified) rewards and proves a novel result that for any heavy tailed reference distribution, it is possible to find another distribution that arbitrarily approximates it in terms of the KL divergence, yet has an unbounded reward. Beginning with this basic result, the authors then attempt to apply these insights to the popular technique of regularizing the policy optimization procedure in RLHF using a learned reward model with the KL divergence to the base SFT policy. Experiments on open source LLM reward models are conducted to investigate how closely the assumptions made in the theory match with current state of the art models.
Strengths: - The paper identifies a novel conceptual insight around the heavy tailedness of the utility/rewards as being an important factor in reward hacking.
- Theoretical results are sound and novel, along with some attempts to also ground this in experimental validation.
Weaknesses: - The clarity in some sections can be improved -- see questions below.
- The notion of heavy tailed versus light tailed, while theoretically appealing and convenient, seems quite sensitive to parameterization and/or scale in practice. While the authors do mention this briefly in footnote 2 (Page 6), it appears that simple reward reparametrization can transform the resulting distributions in a way that is not robust to the notion of heavy/light tailedness.
Technical Quality: 3
Clarity: 3
Questions for Authors: - The definition of DMRMDP in Section 4 is unclear. Is there any precedent to this particular definition? For instance, are you always required to end a trajectory in one of the sink states? If not, are rewards also defined for partial trajectories?
- Is the converse of Theorem 1 also true? i.e. if Q is not heavy tailed, does the maximum achievable mean stay bounded as $\epsilon \rightarrow 0$?
- In the experimental results section, random sampling and adversarial sampling are used to generate the reward samples which are estimated from another model. However, it is hard to know the true utility, unlike in the case of Gao et al. (2023), who perform a synthetic study with known, controlled true utilities. Since the paper's insights and results are more conceptual than practical at the moment, wouldn't it be interesting to study a synthetic experimental setup similar to Gao et al., where both true utilities and proxy utilities can be measured explicitly?
- When you speak of a "heavy/light tail under a given policy $\pi$", one possibility is to consider the action-conditioned expected utility, or the unconditional version where the action is simply a latent variable that determines the final reward. Is this distinction significant for any of the results? If so, which one is being assumed, and what are the implications for the alternate version on the basic result?
Nits:
- Figure reference in L110 seems like a typo.
- Figure 1 has fonts which are too small to read.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for raising several questions and highlighting where the paper lacks clarity.
1. *On reparameterizing rewards to change whether they are heavy-tailed:*
Reward can be reparameterized; however, in settings where the true reward is heavy-tailed, making the reward artificially light-tailed or bounded can incentivize unintended behavior.
* For example, a stock-trading agent should be rewarded by profit, but stock returns are known to be heavy-tailed. If we cap or otherwise transform rewards into $[-1, 1]$, it will have no incentive to take into account huge gains or losses. Since RLHF rewards as implemented in Ziegler et al are unbounded, clipping or transforming rewards could itself cause reward misspecification. We think this also applies when rewards are bounded but potentially very large.
* In some cases, mostly when the reward is not the true intended one, it is possible to reparameterize the reward without adverse effects. In the RL literature for Atari games, rewards are changes in score clipped to $[-1, 1]$ [1].
Thank you for highlighting this; we will discuss when reward can and cannot be reparameterized in Section 7.
2. *On precedent for DMRMDP*
* As for precedent, deterministic MDPs are common in the literature e.g. in [2], but we have not seen the Markovian returns property in this exact form. With our definition, we are not trying to introduce a new setting, but rather to list the properties of RLHF that imply our theorems, so the relevance of our results is clear.
* We intend that trajectories must terminate in a sink state. Thank you for pointing out it is unclear, we will clarify the definition.
3. *Is the converse of theorem 1 also true?*
Theorem 3 is effectively a converse to Theorem 1, and shows that if Q is light-tailed, the maximum achievable mean is no higher than the mean of Q in the limit as $\epsilon \to 0$. We will restate this theorem as an equivalence statement. Very good point!
4. *Synthetic experimental setup*
We agree that a synthetic experimental setup would add value, and plan to have one in the camera ready. One difficulty is in the modeling choices to make for a realistic setup, especially if we want to relax independence between utility V and error X. (A big takeaway for us was that, because independence + light-tailed error precludes overoptimization in theory, but we observed RMs to have light tails, the overoptimization observed in experiments like Gao et al is likely from broken independence, e.g. a negative correlation between error and utility. This broken independence could take many forms and different models could easily result in different outcomes.)
5. *Precise meaning of heavy/light tail under a policy*
We are taking actions as latent variables that determine the reward, but taking the expectation over randomness in the reward assigned by the environment for each possible trajectory (note in Theorem 2, $g(\tau)$ is the average reward of trajectory $\tau$). So if a certain policy in a certain environment has a 50/50 chance between producing trajectories $\tau_a$ and $\tau_b$, and the final state of $\tau_a$ gets reward with a normal distribution with mean 10, and the final state of $\tau_b$ gets reward with a Student-t distribution with mean 20, the relevant distribution of $U$ will be discrete with a 50/50 chance between 10 and 20.
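The 50/50 example above can be made concrete in a short simulation. Here $g(\tau)$ is the expected reward of each trajectory (10 for $\tau_a$, 20 for $\tau_b$, per the example), and $U$ is the resulting discrete distribution; the names are illustrative only.

```python
import random

# g(tau): the *expected* reward of each trajectory, as in Theorem 2's g.
g = {"tau_a": 10.0, "tau_b": 20.0}  # means of the Normal / Student-t rewards

def sample_U():
    """U under the 50/50 policy: pick a trajectory, take its expected reward."""
    tau = random.choice(["tau_a", "tau_b"])
    return g[tau]

random.seed(0)
draws = [sample_U() for _ in range(10_000)]
print(sorted(set(draws)))       # support of U: just the two values 10.0 and 20.0
print(sum(draws) / len(draws))  # ≈ 15, the policy's expected utility
```

The per-trajectory reward noise (Normal vs. Student-t) is averaged out inside $g$, so the tail behavior of $U$ is determined by the distribution over trajectories, not by the environment's reward noise.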
All these questions, as well as the two nitpicks, will be, or have already been, addressed in the next version of the paper. We welcome any more comments or questions you might have; this feedback has already improved the paper. In addition, we invite you to revise your score if your concerns have been answered to your satisfaction. Thank you for a thoughtful review!
[1]: https://arxiv.org/abs/1709.06009 Machado, Marlos C., Marc G. Bellemare, Erik Talvitie, Joel Veness, Matthew Hausknecht, and Michael Bowling. "Revisiting the arcade learning environment: Evaluation protocols and open problems for general agents." Journal of Artificial Intelligence Research 61 (2018): 523-562.
[2]: Ronald Ortner, Online regret bounds for Markov decision processes with deterministic transitions, in: Algorithmic Learning Theory, 19th International Conference, ALT 2008, Proceedings, 2008, pp. 123–137.
---
Rebuttal Comment 1.1:
Title: Acknowledgement of rebuttal
Comment: Thank you for the response and the clarifications. I have updated my initial review's score accordingly.
Rebuttal: Thank you to all the reviewers for their thoughtful feedback, constructive criticism, and recognition of our paper's contributions. We appreciate the time and effort invested in evaluating our work and suggesting improvements.
The reviewers unanimously agreed the paper is technically sound and that it is relevant to an important problem in RLHF: reward misspecification. The reviewers' main concerns centered around clarity, experimental depth, and the relationship between theory and practice. We have addressed these as follows:
1. Clarity and presentation:
- We have improved the overall clarity and readability of the paper, particularly in the background section and graph presentations (see resp. to ngHC).
- We clarified the relationship between DMRMDPs and RLHF (see resp. to G3eM).
- We've fixed notation inconsistencies and typos (e.g., the missing unboundedness assumption in Theorem 4, thank you SFxG for pointing it out).
2. Synthetic experiments: we've conducted new experiments demonstrating our results with Best-of-N in a synthetic reward model. We also plan to do experiments optimizing the synthetic reward for KL divergence regularization, and possibly other kinds of regularization (see responses to ngHC and G3eM).
3. Implicit regularization/inductive bias in optimization: reviewers SFxG and ngHC asked about studies of how the RLHF setup regularizes the solutions found, beyond what is just optimal. As part of the synthetic experiments in point 2, we will study the implicit regularization of optimization of LLMs. However, we believe studying LoRA should be part of future work.
4. Bounded rewards: in responses to znAw and SFxG, we have provided more intuition on what happens with bounded or clipped rewards, and will add that to the paper. Concrete theoretical results on this are difficult and should be part of future work.
We believe these changes and clarifications significantly improve the paper while maintaining its core contributions. The theoretical framework we provide offers a new lens for understanding existing experimental work on specification gaming and overoptimization (see response to ngHC).
We would be excited to continue discussing, and remain committed to addressing any remaining concerns. We kindly ask the reviewers to consider revising their ratings if they feel we've adequately addressed their initial concerns.
Thank you again for your valuable input, which has undoubtedly strengthened our paper. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Coupled Mamba: Enhanced Multimodal Fusion with Coupled State Space Model | Accept (poster) | Summary: This paper extends the state space model Mamba into the multi-modal domain. The authors propose utilizing separate Mamba blocks to process each modality and suggest conditioning the state of each modality on the others to facilitate modality interaction. They further introduce a parallelism technique for the coupled Mamba transition. The framework is evaluated on three benchmark datasets.
Strengths: 1. The attempt to extend the Mamba model to the multi-modal domain is a motivating direction, which may improve the efficiency of future multimodal foundation models.
2. The model achieved good evaluation results on 3 benchmark datasets, while reducing the GPU memory usage to linear cost.
3. Detailed mathematical derivation of the new global convolutional kernel is provided.
Weaknesses: 1. The demonstration of the parallelism is not clearly illustrated. The original Mamba model's essence lies in the selection mechanism and the hardware-aware parallel scan. The parallelism achieved through the global convolution kernel in S4 is no longer as effective in the Mamba framework, given that the transition matrices at each time step are conditioned on the input. This inspired the introduction of the parallel scan method in the Mamba paper. While the authors show that coupled Mamba can be computed using a global convolution, the question remains, how is the parallelism achieved in the context of Mamba's scanning process?
2. The idea of using a sequential model to model each modality and then employing a simple interaction scheme (in this paper, summation) to enable modality fusion is straightforward, but may be too simple to model the complex modality interactions.
3. Comparison with existing multi-modal Mamba models, for instance, [1-2], should be presented.
[1] VL-Mamba: Exploring State Space Models for Multimodal Learning
[2] Cobra: Extending Mamba to Multi-Modal Large Language Model for Efficient Inference
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. In Table 9, does 'average (concat) fusion' mean averaging (concatenating) the features from all modalities and then going through a single Mamba block?
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed and insightful assessment of our paper; we are deeply grateful for the time and expertise you dedicated.
**W1: Parallelism through global convolution:**
S4 relies on a global convolution kernel to achieve parallelism, while Mamba iterates on Equation 3 to compute intermediate results, ultimately accelerating computation using a global convolution kernel. In our Coupled Mamba, we retain the parallel scanning scheme within each modality. During the parallel scanning process, we iteratively combine intermediate results from each modality, integrating historical states across modalities. Simultaneously, we derive a multimodal global convolution kernel to fuse results from intra-modal parallel scanning. This design ensures that computations for each convolution kernel and input subregion remain independent, facilitating parallel processing of different convolution kernels or input blocks.
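The parallelism argument rests on a general fact: a linear state recurrence is associative, so all hidden states can be recovered by a prefix scan rather than a sequential loop. A minimal single-channel sketch of this mechanism (not the authors' implementation, which operates per-modality on full matrices):

```python
# Associative combine for the recurrence h_t = a_t * h_{t-1} + b_t:
# composing two steps gives (a2*a1, a2*b1 + b2), so a parallel prefix
# scan over (a_t, b_t) pairs recovers every h_t in O(log T) depth.
def combine(s1, s2):
    a1, b1 = s1
    a2, b2 = s2
    return (a2 * a1, a2 * b1 + b2)

def prefix_scan(steps):
    """Sequential reference for the scan; a parallel version applies
    `combine` in a tree, which is valid because it is associative."""
    out, acc = [], (1.0, 0.0)  # identity element of the combine op
    for s in steps:
        acc = combine(acc, s)
        out.append(acc)
    return out

steps = [(0.5, 1.0), (0.9, -0.2), (0.7, 0.3)]
hs = [b for _, b in prefix_scan(steps)]  # h_t with h_0 = 0

# Check against the naive recurrence
h, naive = 0.0, []
for a, b in steps:
    h = a * h + b
    naive.append(h)
assert all(abs(x - y) < 1e-12 for x, y in zip(hs, naive))
```

In the coupled setting, the per-modality scans proceed in this fashion, with the cross-modal state summation folded in at each step.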
**W2: Summation is too simple to model complex modality interaction**
Our method utilizes the state transition matrix $S_m$ to evolve the dynamic process of multimodal fusion. While summing up historical states might seem straightforward, our model effectively filters historical information from different modalities through the coupling matrix $G$, retaining only the filtered, crucial information. Additionally, we introduce self-learning scalars that enable the model to adaptively learn the contribution of each modality to the task. Numerous experiments have demonstrated that, compared to other widely used fusion schemes such as averaging and cascading, our summation scheme not only enhances speed but also delivers superior performance.
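A minimal numpy sketch of the weighted state summation described above (Eqn. 6's form). For brevity the transition and input maps are time-invariant and the values are illustrative; in the actual model they are input-dependent and learned, as are the per-modality scalars `w`.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, T = 3, 4, 5  # modalities, state size, sequence length

# Per-modality transition/input maps and mixing scalars (illustrative)
A = [0.9 * np.eye(N) for _ in range(M)]
B = [rng.normal(size=(N, 1)) for _ in range(M)]
w = np.array([0.5, 0.3, 0.2])  # learnable weights of each modality's history

h = [np.zeros((N, 1)) for _ in range(M)]
x = rng.normal(size=(M, T, 1))

for t in range(T):
    # Fuse the histories of all modalities into one weighted sum ...
    h_sum = sum(w[i] * h[i] for i in range(M))
    # ... then update each modality's state from the fused history.
    h = [A[m] @ h_sum + B[m] * x[m, t] for m in range(M)]

print([float(np.linalg.norm(hm)) for hm in h])
```

The weighted sum is what lets each modality's current state absorb the filtered history of all modalities while the per-modality maps keep the chains independent.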
**W3: Comparison with other multi-modal Mamba models**
Thank you for suggesting the concurrent multimodal Mamba work, including VL-Mamba and Cobra. Both VL-Mamba and Cobra are based on large language models (LLMs) and are designed for VQA tasks. Our Coupled Mamba instead introduces Mamba to general multimodal tasks. In addition, VL-Mamba uses a VSS Block to extract features from visual input, and then aligns textual features with visual features for fusion and inputs them into an LLM for training. Cobra uses two visual models, DINOv2 and SigLIP, to extract visual features, and then aligns visual features with textual features for multimodal fusion. The Mamba models in both VL-Mamba and Cobra serve merely as feature extractors, and the multi-modal feature fusion is conducted within an LLM. They do not present any new schemes for multi-modal fusion. We will add both qualitative and quantitative comparisons with other concurrent multimodal Mamba works, including the proposed VL-Mamba and Cobra, in the revision.
**Q1: meaning of 'average (concat) fusion'**
Yes, 'average fusion' directly averages multimodal features before processing them with a single Mamba, while 'concat fusion' concatenates multimodal features. We will make this point clearer.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response, especially for the part that clarified the process of parallelism. I've raised my score to borderline accept. | Summary: This paper proposes the coupled mamba to address the problem of multimodal data fusion. The core architecture of the coupled mamba is derived in the form of Equation (6) by improving upon Equation (5). Extensive ablation experiments validate the effectiveness of the proposed coupled mamba.
Strengths: The article's logic is clear, the charts are coherent, and the proposed method alleviates the shortcomings of the original Mamba in handling multimodal tasks, contributing to the community.
Weaknesses: 1. I noticed that some of the references in the article are incomplete and inconsistent in format. Please handle this carefully. \
2. Why does $S_o$ in Algorithm 1 have the same form as in SSM? Is it reasonable to keep it consistent with SSM?\
3. The relationship between Equation (5) and the probability transition matrix in CHMM seems insignificant, although they are similar in form.\
4. Equation (6) is a clever maneuver, but directly summing indicates that the coefficients in front of the hidden states of each modality are all 1. Is this design reasonable? Moreover, there is a lack of corresponding ablation experiments to validate this design.\
5. In Algorithm 1, the dimensions of $S_o$: (B, L, E, N) and those of $S_m$ mentioned on page five are inconsistent.\
6. Some experiments, such as Table 1 and Table 2, lack corresponding inference time, FLOPs, or parameter data.\
7. In the Introduction, there is an extra space in 'Transformer-based'.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. Why is $S_o$ consistent with the form in SSM, rather than being obtained through input? What is its relationship with the probability transition matrix in Coupled Hidden Markov Models?\
2. What is the relationship between Formula 5 and the probability transition matrix in CHMM? Are they merely similar in form?\
3. In Formula 5, the term $\bar{A}_{m,m}$ should differ from other $\bar{A}$ terms, as it is most relevant to the current state $h_t^m$. Has this aspect been considered in this paper?\
4. Are there relevant ablation experiments to verify the validity of the simplification from Equation (5) to Equation (6)?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Please refer to Weaknesses and Limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed and insightful comments and feedback; we are deeply grateful for the time and expertise you dedicated to reviewing our paper.
**W1: Incomplete and inconsistent references**
We thank the reviewer for pointing out this issue. We will carefully proofread and correct all incorrect references, formatting issues, and typos.
**W2: The form of $S_o$ is the same as in SSM**
In our Coupled Mamba, $S_o$ is obtained by multiplying the input-related $\overline{A}$ and the devised coupling matrix $G$. Please refer to the detailed proof in the appendix. It can be interpreted as a shared state transition matrix that transfers the coupled states based on a probability derived from the comprehensive state at time $t-1$. We use this same form for consistency with SSMs, for unification and better interpretability.
**W3: The relationship between Eqn. 5 and the probability transition matrix of CHMM**
Thank you for the valuable feedback. CHMM has been widely adopted for multi-modal fusion [1, 2], utilizing the probability transition matrix to integrate information from multiple modalities. Our design of Equation 5 is inspired by CHMM, as the state propagation sequences in SSM resemble the state chains in CHMM. Equation 5 can also be interpreted as the probability transition between continuously evolving states, analogous to the transition of discrete states in CHMM.
**W4: Eqn. 6 sets weights of hidden states from all modalities to 1**
In our implementation, we use a learnable scalar to weight each modality in Eqn. 6. This has not been reflected in the paper, and we will clarify this point in the revision. Additionally, in response to your suggestion, we conducted a comparative experiment by setting the weights of all historical states to 1. The results are shown in Table 5 (in the PDF). By incorporating a learnable scalar to weight each modality in the equation, we address the issue of varying contributions from different modalities to the task, thereby improving the performance of multimodal fusion.
**W5: Inconsistent dimensions of $S_o$ and $S_m$**
Thanks for pointing this issue out, and sorry for the typo. The correct dimensions for $S_{m}$ are [B, L, E]. We will correct this in the revision.
**W6: Lack corresponding inference time, FLOPs, or parameter data in Table 1**
We apologize for the missing information. Beyond reporting inference time, FLOPs, and parameter counts, we further compared our Coupled Mamba with several SOTA methods on these metrics. The results are shown in Fig. 2, Fig. 1, and Tab. 3 of the rebuttal PDF. Additionally, we also compared memory usage in Fig. 3. Our method exhibits the lowest memory consumption and the fastest inference speed.
**W7: In the Introduction, there is an extra space in 'Transformer-based**
Thank you for your careful review. We will make revisions in future versions.
**Q1**: Please refer to the response to **W2**.
**Q2**: Please refer to the response to **W3**.
**Q3:** $\bar{A}_{m,m}$ should differ from $\bar{A}$.
Thank you for your suggestion and feedback. Note that all $\bar{A}_{i,m}$, $i \in 1 \ldots M$, in Eqn. 5 are parameters learned during network training. To preserve the generalizability of the model, we do not specifically emphasize the intra-modal transition $\bar{A}_{m,m}$, and instead let the network learn to prioritize modalities.
**Q4: Ablation experiment on simplification from Eqn. 5 to Eqn. 6**
Thank you for the valuable suggestion. We have conducted an ablation study following your suggestion, as illustrated in **Table 4** in the rebuttal PDF. The results indicate that this simplification reduces both the number of parameters and memory usage, and also improves inference speed, with minimal impact on performance.
[1] Garg A., Naphade M., Huang T. S. Modeling video using input/output Markov models with application to multi-modal event detection. Handbook of Video Databases: Design and Applications, 2003: 23-44.
[2] Zhang Y., Ding S., Wang J., Dai Y., Chen Q. Event Detection by Fusing Multimodal Objects Using HMM. Journal of System Simulation, 2012, 24(8): 1638-1642.
---
Rebuttal Comment 1.1:
Comment: Thanks for the responses. My concerns are resolved. I'll raise the score to WA. | Summary: The paper addresses the challenge of multi-modal fusion in deep learning. Current fusion methods struggle to capture complex intra- and inter-modality correlations. Recent state space models like Mamba show promise but are limited in fusing multiple modalities efficiently. The paper propose Coupled Mamba, key aspects include: A coupled state transition scheme that allows information exchange between modalities while maintaining individual propagation. A state summation and transition mechanism to enable parallel computing with multiple inputs. Derivation of a global convolution kernel for efficient parallelization.
The model architecture consists of multiple Coupled Mamba blocks, each processing different modalities and aggregating states before transitioning to the next time step.
Experiments were conducted on three multi-modal sentiment analysis datasets: CMU-MOSEI, CH-SIMS, and CH-SIMSV2.
Results show improved performance over existing methods as well as faster inference and GPU memory savings compared to transformer-based approaches. Effective performance on both aligned and unaligned multi-modal data.
Strengths: 1. The method demonstrates consistent performance and good efficiency compared with existing works.
2. The paper provides a detailed mathematical derivation of the coupled state transition process,
3. The idea of coupling mamba is novel.
Weaknesses: The authors seem to have combined two Mamba models without providing enough insights into how this combination actually improves multi-modal fusion.
There's no clear analysis or interpretability of what the model is learning or how it's fusing information across modalities. This makes it difficult to understand why their approach is fundamentally better than existing methods, including transformers.
The speed and linear complexity benefits are inherited from the original mamba model, so they are apparent advantages for all state space-related works.
The experiments are limited to sentiment analysis tasks. The generalizability of the method to other multi-modal tasks or domains is not demonstrated or discussed.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In table 9, what does Mamba Fusion mean?
2. Can the authors provide more interpretability behind combining two Mamba structures and explain how the modalities interact between the two Mambas?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have addressed the limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed and insightful comments and feedback; we are deeply grateful for the time and expertise you dedicated to reviewing our paper.
**W1: Provide insights over combining Mamba branches for multi-modal fusion**
Effective multi-modal fusion hinges on balancing inter-modality information exchange while preserving independent intra-modality learning [1,2]. Our approach, Coupled Mamba, achieves this by integrating historical information across modalities while maintaining intra-modal state autonomy. Specifically, Coupled Mamba fuses the state of each modality's chain with adjacent chains from previous time steps across modalities. This integration ensures that each modality's current memory incorporates crucial historical context from multimodal data, progressively building a comprehensive model over time. We will make this point more clear.
**W2: Lack analysis or interpretability of the model's multi-modal fusion mechanism**
Thanks for the insightful feedback! In our Coupled Mamba, we use a separate Mamba branch for each modality, and fuse the state of each modality's chain with adjacent chains from previous time steps using a learnable state transition matrix ($A_{i, m}$ in Eqn. 5 and $S_{m}$ in Eqn. 6) to enhance multi-modality fusion. The state transition matrix selectively integrates important historical information from other modalities, while neglecting information that is insignificant. By utilizing the coupling transition between states, Coupled Mamba is tightly connected across time steps, effectively modeling multimodal data.
This mechanism is fundamentally different from existing multimodal fusion schemes, which can be categorized into:
1. **Attention mechanisms.** This method achieves multimodal information fusion through cross-attention.
2. **The encoder-decoder method.** This method extracts high-level features of different modalities through an encoder, maps them to a low-dimensional space, and then uses a decoder to generate predictions from these latent representations.
3. **Graph neural network methods.** This method models multimodal data by constructing graph-structured data.
These designs all focus on learning good multi-modality features. In contrast, Coupled Mamba ensures intra-modal independence (as we use separate Mamba branches for each modality), and fuses important information across modalities using the learnable state transition scheme, which emphasizes the information fusion process itself. We believe that is why our approach outperforms across all multi-modal tasks in experiments.
We will explain this point clearer in the revision.
**W3: The speed and linear complexity advantage is from Mamba**
We only partially inherit the speed and linear complexity advantage of the original Mamba, by preserving the parallel scan scheme within each modality. However, the original Mamba and other state-space models were originally designed to process unimodal data and cannot handle cross-modal fusion well. When naively adapted for multi-modal data, as indicated by the 'concat' and 'mamba' fusion approach shown in Table 9 in the main paper, they require higher memory and computational resources, resulting in inferior performance compared to our coupled design.
**W4: Experiments are limited to sentiment analysis tasks**
Thanks for this helpful suggestion! We conducted additional experiments on the **BRCA benchmark** and **MM-IMDB benchmark**, and the results are presented in **Tables 1 and 2** in our rebuttal PDF. The BRCA benchmark focuses on predicting the PAM50 subtype classification of breast cancer based on three complex data modalities, i.e., mRNA expression, DNA methylation, and miRNA, while the MM-IMDB dataset is used for movie genre classification from movie intros and images. Despite the differences in data modalities and tasks, our Coupled Mamba achieves the best performance on both benchmarks compared to existing SOTA methods, demonstrating its generalizability.
**Q1: Meaning of "mamba fusion" in Table 9**
"mamba fusion" means establishing a seperate Mamba branch for each modality. Subsequently, the output features of these Mamba branches are merged through a weighted aggregation mechanism, consistent with the late-fusion paradigm embodied by methods such as LMF and Mult. We will further elaborate on this process in the revision.
**Q2: Provide more interpretability of combining Mamba structures**:
Please refer to our response to **W1**
[1] Wang, Yikai, Wenbing Huang, Fuchun Sun, Tingyang Xu, Rong Yu, and Junzhou Huang. 2020. "Deep Multimodal Fusion by Channel Exchanging." NeurIPS 2020.\
[2] Li, Yaowei, Ruijie Quan, Linchao Zhu, and Yi Yang. n.d. "Efficient Multimodal Fusion via Interactive Prompting." CVPR 2023
---
Rebuttal Comment 1.1:
Comment: Thanks for the responses. I am leaning towards accepting.
---
Reply to Comment 1.1.1:
Comment: Thank you for your positive feedback. We need to make a correction to our response to W3. While we do partially inherit the speed and linear complexity advantages of the original Mamba, our focus was on developing a fusion mechanism to extend Mamba's capability for better handling cross-modal data.
When compared to naive adaptation methods like 'concat' and 'mamba fusion,' our Coupled Mamba has similar inference speed but consumes more memory due to the specialized mechanism we developed for cross-modality fusion. Despite the increased memory usage, the performance improvement from Coupled Mamba is relatively significant compared to these naive adaptations. Please refer to the 2nd-5th tables we presented to Reviewer JRDw for more details. | Summary: This work proposes Coupled SSM (State Space Models) to fuse multiple modalities effectively with SSM. Instead of fusing multi-modal features directly, the proposed method couples state chains of multiple modalities while maintaining the independence of intra-modality state processes. Specifically, they first propose an inter-modal hidden states transition scheme to fuse multiple modalities effectively. Then, they propose an expedited coupled state transition scheme to adapt the hardware-aware parallelism of SSMs for efficiency. Experimental results on classification task on CMU-MOSEI, CH-SIMS, CH-SIMSV2 show promising performance of the proposed method.
Strengths: 1. The proposed method couples the state chains of multiple modalities while maintaining the independence of intra-modality state processes.
2. The proposed method is more memory efficient compared to cross-attention with the increasing sequence length.
3. Results on several benchmark datasets show the superior performance of the proposed method.
Weaknesses: 1. The improvement in the regression task on CMU-MOSEI seems marginal compared to the baselines.
2. In line 201, the authors conclude that the results in Tables 3 and 4 show the SOTA performance of the proposed method regardless of whether the data are aligned or not. However, both results in Tables 3 and 4 are obtained on unaligned data. Besides, the robustness of the proposed method needs to be further verified, since I cannot find results that support the claims about robustness.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the weaknesses.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The discussion of the limitations of this work could be further improved.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed and insightful comments and feedback; we are deeply grateful for the time and expertise you dedicated to reviewing our paper.
**W1: The improvement in the regression task on CMU-MOSEI seems marginal**
The CMU-MOSEI dataset mostly consists of short sequences: most of the video, audio, and text clips are only 1-5 seconds long. Mamba is better suited to modeling relatively long sequences, which results in only a marginal improvement of Coupled Mamba on the CMU-MOSEI dataset. In contrast, on datasets with longer sequences, such as CH-SIMS, CH-SIMSV2, BRCA, and MM-IMDB, our Coupled Mamba can effectively enhance multi-modal fusion and outperforms the state of the art by a relatively large margin. We will include a discussion of the impact of sequence length in the limitations section.
**W2: Less experiments on aligned data and evidences of robustness:**
The smaller number of experiments on 'aligned' data reflects the composition of the datasets used, which are predominantly unaligned. This is reasonable, as aligned data is more difficult to collect. Moreover, many existing classification methods for the CMU-MOSEI dataset primarily focus on unaligned data; hence, we use unaligned data for a fair comparison. Besides, the experiments on aligned data in Table 1 of the main paper have demonstrated our excellent performance on aligned input. Furthermore, unaligned data is more challenging to process, and our approach achieves superior performance on both aligned and unaligned data.
For robustness, we have conducted experiments on aligned/unaligned data (Table 1 of the main paper), various modalities (text, audio, video, mRNA, miRNA, DNA, as shown in Tables 1 and 2 in the rebuttal PDF), sequence lengths (ranging from 100 to 500), and text in different language domains (English vs. Chinese). All experiments show that our Coupled Mamba achieves superior performance, demonstrating its robustness across various data input.
---
Rebuttal Comment 1.1:
Comment: First, I mean the claim "It can be seen from the results...the proposed fusion method achieves state-of-the-art (SOTA) regardless of whether the data are aligned or not" is not rigorous, as experiments in Tables 3, 4, and 10 are conducted on unaligned data. It would be better to consider the results from Table 1 and make such a conclusion.
Second, I think the author may have confused robustness with generalization, and I believe robustness is about the model performance under noisy inputs.
---
Reply to Comment 1.1.1:
Comment: Thank you for your further feedback.
**On the unrigorous conclusions from Tables 3, 4, and 10**: We apologize for the misunderstanding. You are correct that the expression was unrigorous if it referred only to Tables 3, 4, and 10. In the revision, we will update the phrase to 'It can be seen from Tables 1, 3, 4, and 10 ...' for a more accurate and precise expression.
**On the evaluation of robustness**: We apologize for the confusion. To validate the robustness of our model, we tested our Coupled Mamba under conditions where part of the data was missing, and with noisy input (by adding a certain level of Gaussian noise).
For testing on missing data, we followed the experimental setup in [1]. We conducted experiments on the CMU-MOSEI dataset by creating a random mask with the same shape as the original tensor, where each element is drawn from the Bernoulli distribution B(1-p). This means each element has a (1-p) probability of being 1 (retained) and a p probability (i.e., the missing rate (MR)) of being 0 (missing). We then multiplied this random mask element-wise with the original tensor, so that positions where the mask is 0 become missing data. The results are presented in the table below, with the numbers for the other baselines copied from [1] due to rebuttal time constraints. Our method demonstrates the best performance. Please note that the left side of **/** shows Acc_2, while the right side denotes the F1 Score.
| Datasets | MR | DCCA | DCCAE | MCTN | MMIN | GCNET | **Coupled Mamba** |
|------------|----|--------|--------|--------|--------|--------|------------------|
| CMU-MOSEI | 0.0 | 80.7/80.9 | 81.2/81.2 | 84.2/84.2 | 84.3/84.2 | 85.2/85.1 | **85.5/85.6** |
| | 0.1 | 77.4/77.3 | 78.4/78.3 | 81.8/81.6 | 81.9/81.3 | 82.3/82.1 | **82.6/82.7** |
| | 0.2 | 73.8/74.0 | 75.5/75.4 | 79.0/78.7 | 79.8/78.8 | 80.3/79.9 | **81.1/80.9** |
| | 0.3 | 71.1/71.2 | 72.3/72.2 | 76.9/76.2 | 77.2/75.5 | 77.5/76.8 | **81.0/81.0** |
| | 0.4 | 69.5/69.4 | 70.3/70.0 | 74.3/74.1 | 75.2/72.6 | 76.0/74.9 | **78.4/78.5** |
| | 0.5 | 67.5/65.4 | 69.2/66.4 | 73.6/72.6 | 73.9/70.7 | 74.9/73.2 | **77.4/77.7** |
| | 0.6 | 66.2/63.1 | 67.6/63.2 | 73.2/71.1 | 73.2/70.3 | 74.1/72.1 | **75.1/75.4** |
| | 0.7 | 65.6/61.0 | 66.6/62.6 | 72.7/70.5 | 73.1/69.5 | 73.2/70.4 | **74.1/74.2** |
| Average | | 70.3/71.2 | 72.6/71.2 | 77.0/76.1 | 77.3/75.4 | 77.9/76.8 | **79.4/79.5** |
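The masking procedure described above can be sketched as follows (a minimal NumPy illustration; the function name, shapes, and rates are our own assumptions, not the actual experimental code):

```python
import numpy as np

def apply_missing(x, missing_rate, rng):
    """Zero out each element independently with probability `missing_rate`.

    The mask is drawn from a Bernoulli distribution B(1 - missing_rate),
    so each element is retained (mask = 1) with probability 1 - missing_rate.
    """
    mask = rng.binomial(1, 1.0 - missing_rate, size=x.shape)
    return x * mask

rng = np.random.default_rng(0)
features = np.ones((4, 8))                    # stand-in for a modality's feature tensor
corrupted = apply_missing(features, 0.3, rng) # roughly 30% of entries zeroed
```

Sweeping `missing_rate` from 0.0 to 0.7 corresponds to the MR settings evaluated in the table above.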
For the evaluation on noisy input, we added Gaussian noise to the input at three noise levels, with standard deviations of 1, 2, and 3. Due to the limited time available for the rebuttal, we were only able to compare our results with Mult [2] as a reference. The performance of Coupled Mamba declines much slower as the noise level increases. The left and right sides of **/** represent Acc_2 and F1 Score, respectively.
| | Std=0 | Std=1 | Std=2 | Std=3 |
|-----------------------|------------------|------------------|------------------|------------------|
| Mult | 82.5 / 82.3 | 79.8 / 80.1 | 77.1 / 76.9 | 74.6 / 74.8 |
| CrossAttention | 84.6 / 84.5 | 82.3 / 82.2 | 80.2 / 80.4 | 78.4 / 78.6 |
| **Coupled Mamba** | **85.6 / 85.5** | **84.6 / 84.3** | **83.2 / 83.3** | **81.4 / 81.3** |
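The noisy-input evaluation can similarly be sketched (an illustrative NumPy snippet; names and shapes are assumptions):

```python
import numpy as np

def add_gaussian_noise(x, std, rng):
    # Additive zero-mean Gaussian noise at the given standard deviation,
    # matching the Std = 1, 2, 3 noise levels evaluated above
    return x + rng.normal(0.0, std, size=x.shape)

rng = np.random.default_rng(0)
clean = np.zeros((2, 5))                     # stand-in for a clean input tensor
noisy = add_gaussian_noise(clean, 2.0, rng)
```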
The experiments on missing data and noisy input demonstrate that our Coupled Mamba model is robust.
**DCCA** Andrew G, Arora R, Bilmes J, et al. Deep canonical correlation analysis[C]//International conference on machine learning. PMLR, 2013: 1247-1255.
**DCCAE** Wang W, Arora R, Livescu K, et al. On deep multi-view representation learning[C]//International conference on machine learning. PMLR, 2015: 1083-1092.
**MCTN** Pham H, Liang P P, Manzini T, et al. Found in translation: Learning robust joint representations by cyclic translations between modalities[C]//Proceedings of the AAAI conference on artificial intelligence. 2019, 33(01): 6892-6899.
**MMIN** Zhao J, Li R, Jin Q. Missing modality imagination network for emotion recognition with uncertain missing modalities[C]//Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). 2021: 2608-2618.
**GCNET** Lian Z, Chen L, Sun L, et al. GCNet: Graph completion network for incomplete multimodal learning in conversation[J]. IEEE Transactions on pattern analysis and machine intelligence, 2023, 45(7): 8419-8432.
[1] Wang Y, Li Y, Cui Z. Incomplete multimodality-diffused emotion recognition[J]. NeurIPS, 2024, 36.
[2] Tsai Y H H, Bai S, Liang P P, et al. Multimodal transformer for unaligned multimodal language sequences[C]. ACL 2019. | Rebuttal 1:
Rebuttal: We thank the reviewers for their insightful feedback and appreciate the time and expertise they have dedicated to evaluating our work. We are encouraged by the positive comments highlighting the strengths of our approach: "superior and consistent performance" (JRDw, pbQ3, FSJZ, qhp2, nXDG), "demonstrates good efficiency" (pbQ3, FSJZ, nXDG), "clear logic & motivating direction" (JRDw, qhp2, nXDG), "novel idea" (FSJZ), and "detailed mathematical derivation" (FSJZ, nXDG).
Reviewers expressed concerns regarding the lack of speed and memory analysis (JRDw, FSJZ, qhp2), the need for experiments on multi-modal tasks other than sentiment analysis (JRDw, FSJZ), and the lack of an ablation on Eqn. 5 and Eqn. 6 (qhp2). In response, we have conducted additional experiments and included detailed figures and tables in the rebuttal PDF to address these concerns, coupled with our reviewer-specific replies.
**Summary of figures and tables in the rebuttal PDF in response to reviewers' feedback:**
**Table 1**: Experimental results on the **BRCA benchmark**, classifying breast cancer PAM50 using mRNA (mR), DNA methylation (D), and miRNA (miR) expression data.
**Table 2**: Experimental results on the **MM-IMDB Benchmark**, classifying movie categories using image (I) and text (T) modalities. Our method achieved the best results in both MicroF1 and MacroF1 indicators, with a **2.06%** improvement in MicroF1.
**Table 3**: Comparison of parameters across different methods under identical conditions (same number of layers, hidden dimensions, etc.), demonstrating Coupled Mamba's superior performance with reduced complexity.
**Figures 1, 2, and 3**: FLOPs, inference time, and memory usage across varying sequence lengths for different models. Coupled Mamba exhibits lower computational requirements, faster inference, and significantly reduced memory usage.
**Table 4**: Comparison between Eqn. 5 and its simplified version Eqn. 6, showing effective reduction in model parameters, inference time, and memory usage with our simplification.
**Table 5**: Results comparing fixed weights (all modalities set to 1) and adaptively learned scalar weights in Eqn. 6 on the CH-SIMS dataset. Adaptively learned scalars improve model performance by capturing the varying contributions of the modalities to the task, enabling more nuanced multimodal fusion.
**Here are some articles that supplement the baseline comparisons in the experiments:**\
**CRidge** : Van De Wiel M A, Lien T G, Verlaat W, et al. Better prediction by use of co‐data: adaptive group‐regularized ridge regression[J]. Statistics in medicine, 2016, 35(3): 368-381.\
**GMU** : Arevalo J, Solorio T, Montes-y-Gómez M, et al. Gated multimodal units for information fusion[J]. arXiv preprint arXiv:1702.01992, 2017.\
**CF**: Hong D, Gao L, Yokoya N, et al. More diverse means better: Multimodal deep learning meets remote-sensing imagery classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2020, 59(5): 4340-4354.\
**MOGONET**: Wang T, Shao W, Huang Z, et al. MOGONET integrates multi-omics data using graph convolutional networks allowing patient classification and biomarker identification[J]. Nature communications, 2021, 12(1): 3445.\
**TMC** : Han Z, Zhang C, Fu H, et al. Trusted multi-view classification[C]//International Conference on Learning Representations. 2020.\
**MM-Dynamics** : Han Z, Yang F, Huang J, et al. Multimodal dynamics: Dynamical fusion for trustworthy multimodal classification[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022: 20707-20717.\
**LRMF** : Liu Z, Shen Y, Lakshminarasimhan V B, et al. Efficient low-rank multimodal fusion with modality-specific factors[J]. arXiv preprint arXiv:1806.00064, 2018.\
**MFM** : Tsai Y H H, Liang P P, Zadeh A, et al. Learning factorized multimodal representations[J]. arXiv preprint arXiv:1806.06176, 2018.\
**MI-Matrix** : Jayakumar S M, Czarnecki W M, Menick J, et al. Multiplicative interactions and where to find them[C]//International conference on learning representations. 2020.\
**RMFE** : Gat I, Schwartz I, Schwing A, et al. Removing bias in multi-modal classifiers: Regularization by maximizing functional entropies[J]. Advances in Neural Information Processing Systems, 2020, 33: 3197-3208.\
**CCA** : Sun Z, Sarma P, Sethares W, et al. Learning relationships between text, audio, and video via deep canonical correlation for multimodal language analysis[C]//Proceedings of the AAAI conference on artificial intelligence. 2020, 34(05): 8992-8999.\
**RefNet** : Sankaran S, Yang D, Lim S N. Multimodal fusion refiner networks[J]. arXiv preprint arXiv:2104.03435, 2021.\
**DynMM** : Xue Z, Marculescu R. Dynamic multimodal fusion[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023: 2575-2584.
Pdf: /pdf/985b794e28d975413eaa8606346bf16a18c4a203.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper introduces a coupled mamba model for multi-modal fusion. The multi-modal hidden states are fused inside of Mamba, so the current state learns not only from a single modality but the correlation of all modalities. The experiments on multi-modal sentiment analysis show that the proposed model outperforms other baselines.
Strengths: - The proposed method makes sense, and it outperforms other baselines.
- The motivation is reasonable.
- The ablation study shows that the coupled fusion is better than other common fusion approaches (cross-attention, average, concat, and simple fusion).
Weaknesses: 1. Lack of speed and memory analysis: An important aspect of the proposed method is the memory and speed overhead. It's expected that the overhead is not more than cross-attention (as shown in Figures 3 and 4). However, the paper missed comparisons with other fusion methods, especially the ones that use the inputs directly instead of hidden states.
2. The experiments are only on multi-modal sentiment analysis. I believe the proposed method is pretty general for any multi-modal data and tasks. Showing a variety of applications will make the paper more convincing and appealing.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. The original Mamba uses a parallel scan (recurrent representation) for parallel computing. However, the authors converted Mamba to the convolutional representation. Since eq 6 mainly adds the addition operation (to combine hidden states from different modalities), it should work with the parallel scan. What is the reason to convert to the convolutional representation for parallelization?
2. The proposed method is compared with average, concat, and mamba fusions in the ablation study, but the explanation of these methods is missing. I assume the average and concat are done directly on the input. How does the mamba fusion work?
3. What does unaligned/aligned mean in the experiments (Tables 1 and 6)?
4. Some details about the dataset are missing:
- What's the sequence length of each dataset?
- What modalities do the datasets contain?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: The limitation is discussed as a part of the ethical implications discussion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thorough and insightful feedback. We genuinely appreciate the effort and expertise you invested in reviewing our paper.
**W1: Lack of speed and memory analysis**
We thank the reviewer for this insightful feedback. In our rebuttal PDF, we have included a comparison of memory usage (**Fig 3**), inference speed (**Fig 2**), FLOPs (**Fig 1**), and parameter size (**Table 3**) with Mult [1], MISA [2], TFN [3], and LMF [4], all of which use inputs directly instead of hidden states. Additionally, Tab. 3 presents a detailed comparison of parameter counts. From the figures and Tab. 3, it is evident that our Coupled Mamba consumes significantly less memory and achieves the fastest inference speed across sequence lengths from short to long.
**W2: Experiments on more multi-modal data and tasks to demonstrate generalizability**
Thanks for this helpful suggestion! We conducted additional experiments on the **BRCA benchmark** and **MM-IMDB benchmark**, and the results are presented in **Tables 1 and 2** in our rebuttal PDF. The BRCA benchmark focuses on predicting the PAM50 subtype classification of breast cancer from mRNA, DNA methylation, and miRNA data, while the MM-IMDB dataset is used for movie genre classification from text and images. Despite the differences in data modalities and tasks, our Coupled Mamba achieves the best performance on both benchmarks compared to existing SOTA methods, demonstrating its generalizability.
**Q1: Reason of converting to the convolutional representation for parallelization**
The parallel scanning scheme in Mamba computes independent partial states recurrently from the inputs and then combines these partial states via a global convolution in a hierarchical manner to achieve parallelism. In our Coupled Mamba, we adapt this approach by computing the partial states recurrently within each modality. We enhance this process with a summation operation through a global convolutional kernel, enabling multi-modal fusion while preserving the inherent advantages of parallelism. More specifically, the parallel scanning process generates intermediate results, which we fuse across modalities using a summation scheme with a global convolutional kernel. This scheme is inherently parallelizable and accelerates the fusion of multimodal data. We will clarify this point further in the revision.
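In its plain sequential (non-parallel) form, a coupled state transition of this kind can be illustrated roughly as follows; all names, shapes, and the unweighted summation are assumptions for illustration, not the paper's precise formulation:

```python
import numpy as np

def coupled_step(h_prev, x_t, A, B):
    """One sequential step of a coupled state transition across modalities.

    Each modality keeps its own transition matrices A[m] and B[m], but its
    transition consumes the sum of all modalities' previous hidden states
    (the cross-modal coupling via summation)."""
    h_sum = sum(h_prev.values())  # fuse previous states across modalities
    return {m: A[m] @ h_sum + B[m] @ x_t[m] for m in h_prev}

rng = np.random.default_rng(0)
mods, d = ["text", "audio"], 4
A = {m: 0.1 * rng.standard_normal((d, d)) for m in mods}
B = {m: 0.1 * rng.standard_normal((d, d)) for m in mods}
h = {m: np.zeros(d) for m in mods}
x = {m: rng.standard_normal(d) for m in mods}
h = coupled_step(h, x, A, B)
```

The parallel scan then computes such steps hierarchically instead of one at a time, with the summation realized through the global convolutional kernel.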
**Q2: Explanation of *average*, *concat*, and *mamba* fusions in the ablation study**
**average** fusion directly averages the multi-modal features, while **concat** fusion concatenates them; in both cases, the fused feature is then fed into a Mamba model, followed by a pooling and head layer for the final output. For **Mamba fusion**, we create a separate Mamba branch for each modality, and the output features of the branches are combined using a weighted summation (following late-fusion schemes as in LMF and Mult). We will clarify this in the revision.
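The three baseline fusion schemes can be sketched as follows (an illustrative NumPy snippet; the random arrays merely stand in for per-modality features or per-branch Mamba outputs, and the weights are arbitrary):

```python
import numpy as np

# Toy per-modality features with shape (batch, seq, dim)
rng = np.random.default_rng(0)
feats = {m: rng.standard_normal((2, 10, 8)) for m in ["text", "audio", "video"]}

# 'average' fusion: element-wise mean of the modality features
avg_fused = np.mean(np.stack(list(feats.values())), axis=0)   # shape (2, 10, 8)

# 'concat' fusion: concatenation along the feature dimension
cat_fused = np.concatenate(list(feats.values()), axis=-1)     # shape (2, 10, 24)

# 'mamba fusion' (late fusion): weighted sum of per-branch outputs;
# here the features stand in for the per-modality Mamba branch outputs
weights = {"text": 0.5, "audio": 0.3, "video": 0.2}
late_fused = sum(w * feats[m] for m, w in weights.items())    # shape (2, 10, 8)
```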
**Q3: The meaning of *unaligned* and *aligned* in experiments**
**aligned** data refers to data from different modalities that are precisely synchronized in both spatial and temporal dimensions, such as synchronized audio and video. In contrast, **unaligned** data refers to instances where different modalities are not synchronized, presenting a more challenging scenario. Our method demonstrates superior performance on both aligned and unaligned data.
**Q4: Some details about the dataset are missing**
Our experiments utilize datasets comprising data from **video, audio, and text modalities**. The CMU-MOSEI dataset includes both aligned and unaligned data, with sequence lengths of 50 for aligned text, audio, and video, and 50, 500, and 375 for unaligned text, audio, and video, respectively. The CH-SIMS and CH-SIMSV2 datasets contain exclusively unaligned data, with sequence lengths of 39, 400, and 55 for text, audio, and video in CH-SIMS, and 50, 925, and 232 in CH-SIMSV2.
[1] Tsai, Yao-Hung Hubert, Shaojie Bai, Paul Pu Liang, J. Zico Kolter, Louis-Philippe Morency, and Ruslan Salakhutdinov. 2019. “Multimodal Transformer for Unaligned Multimodal Language Sequences.” In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. doi:10.18653/v1/p19-1656.
[2] Hazarika, Devamanyu, Roger Zimmermann, and Soujanya Poria. 2020. “MISA: Modality-Invariant and -Specific Representations for Multimodal Sentiment Analysis.” Cornell University - arXiv,Cornell University - arXiv, May.
[3] Zadeh, Amir, Minghai Chen, Soujanya Poria, Erik Cambria, and Louis-Philippe Morency. 2017. “Tensor Fusion Network for Multimodal Sentiment Analysis.” arXiv: Computation and Language,arXiv: Computation and Language, July.
[4] Liu, Zhun, Ying Shen, Varun Bharadhwaj Lakshminarasimhan, Paul Pu Liang, AmirAli Bagher Zadeh, and Louis-Philippe Morency. 2018. “Efficient Low-Rank Multimodal Fusion with Modality-Specific Factors.” In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). doi:10.18653/v1/p18-1209.
---
Rebuttal Comment 1.1:
Title: response to the rebuttal
Comment: Thank you for the answers.
I still have a concern regarding speed and memory analysis.
The authors added Figures 1-3 in the rebuttal pdf (FLOPS, inference speed, and memory comparisons). However, the comparing methods are not SSM/Mamba-based, and none of these models are recent SOTA. It's unclear whether the memory and speed benefits come from Mamba. Could authors add these additional models to Figures 1-3?
1) different fusion methods (avg. concat, and mamba fusion from Table 9)
2) more recent methods like [44]
3) some models from Tables 1 and 2 (rebuttal pdf)
Also, please add citations for Tables 1 and 2 in the rebuttal pdf. It's unclear whether these models are comparable.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful feedback. Following your suggestion, we conducted additional experiments on memory and inference speed using five methods: Average, Concat, Mamba Fusion, IMDer, and DynMM. We present the results in the table below.
| | Seq=100 | Seq=200 | Seq=300 | Seq=400 | Seq=500 |
|--------------|----------|----------|----------|----------|----------|
| **Average** | 24.58/0.0006 | 41.98/0.0006 | 59.39/0.0006 | 76.80/0.0006 | 94.20/0.0006 |
| **Concat** | 49.15/0.0006 | 95.23/0.0006 | 131.04/0.0007 | 174.08/0.0007 | 205.84/0.0007 |
| **Mamba Fusion** | 41.98/0.0068 | 60.42/0.0068 | 80.90/0.0070 | 103.42/0.0071 | 134.57/0.0073 |
| **IMDer** | 464.37/0.7154 | 492.74/0.7352 | 535.41/0.7587 | 650.87/0.7994 | 693.54/0.8124 |
| **DynMM** | 62.46/0.2821 | 226.30/0.3133 | 301.06/0.3314 | 353.28/0.3546 | 467.97/0.3915 |
| **Coupled Mamba**| 58.36/0.0069 | 103.42/0.0071 | 151.00/0.0073 | 205.82/0.0074 | 244.73/0.0075 |
The data on the left and right sides of the **/** represent memory usage (MB) and inference speed (s), respectively. The results show that Coupled Mamba is significantly faster (by 50 to 100 times) and consumes less memory than both IMDer and DynMM. Moreover, our Coupled Mamba outperforms Mamba Fusion (Table 9 in the main paper) with similar speed and only slightly higher memory usage, demonstrating that our design is both efficient and effective.
**Here are some articles that supplement the baseline comparisons in the experiments:**
**CRidge** : Van De Wiel M A, Lien T G, Verlaat W, et al. Better prediction by use of co‐data: adaptive group‐regularized ridge regression[J]. Statistics in medicine, 2016, 35(3): 368-381.
**GMU** : Arevalo J, Solorio T, Montes-y-Gómez M, et al. Gated multimodal units for information fusion[J]. arXiv preprint arXiv:1702.01992, 2017.
**CF**: Hong D, Gao L, Yokoya N, et al. More diverse means better: Multimodal deep learning meets remote-sensing imagery classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2020, 59(5): 4340-4354.
**MOGONET**: Wang T, Shao W, Huang Z, et al. MOGONET integrates multi-omics data using graph convolutional networks allowing patient classification and biomarker identification[J]. Nature communications, 2021, 12(1): 3445.
**TMC** : Han Z, Zhang C, Fu H, et al. Trusted multi-view classification[C]//International Conference on Learning Representations. 2020.
**MM-Dynamics** : Han Z, Yang F, Huang J, et al. Multimodal dynamics: Dynamical fusion for trustworthy multimodal classification[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022: 20707-20717.
**LRMF**: Liu Z, Shen Y, Lakshminarasimhan V B, et al. Efficient low-rank multimodal fusion with modality-specific factors[J]. arXiv preprint arXiv:1806.00064, 2018.
**MFM** : Tsai Y H H, Liang P P, Zadeh A, et al. Learning factorized multimodal representations[J]. arXiv preprint arXiv:1806.06176, 2018.
**MI-Matrix** : Jayakumar S M, Czarnecki W M, Menick J, et al. Multiplicative interactions and where to find them[C]//International conference on learning representations. 2020.
**RMFE** : Gat I, Schwartz I, Schwing A, et al. Removing bias in multi-modal classifiers: Regularization by maximizing functional entropies[J]. Advances in Neural Information Processing Systems, 2020, 33: 3197-3208.
**CCA**: Sun Z, Sarma P, Sethares W, et al. Learning relationships between text, audio, and video via deep canonical correlation for multimodal language analysis[C]//Proceedings of the AAAI conference on artificial intelligence. 2020, 34(05): 8992-8999.
**RefNet**: Sankaran S, Yang D, Lim S N. Multimodal fusion refiner networks[J]. arXiv preprint arXiv:2104.03435, 2021.
**DynMM** : Xue Z, Marculescu R. Dynamic multimodal fusion[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023: 2575-2584. | null | null | null | null | null | null |
Diffusion-based Reinforcement Learning via Q-weighted Variational Policy Optimization | Accept (poster) | Summary: This paper introduces QVPO, a new model-free algorithm that trains a diffusion policy online. It proposes a Q-weighted VLO loss by weighting the original diffusion model objective with Q-values. To address the issue of negative Q-values, the algorithm uses advantage instead. QVPO also encourages exploration by mimicking the uniform distribution and reduces inference variance by executing actions with higher Q-values.
Strengths: - The paper addresses a relevant problem, which is to learn diffusion policy online.
- The paper is generally well-written and easy to read.
- The proposed method demonstrates impressive results on MuJoCo benchmarks compared to SOTA baselines.
Weaknesses: - Several claims require additional support. For instance, while Figure 2 is informative in demonstrating the ideal effect of the entropy term, experimental results on toy examples would provide stronger evidence than an illustration.
- Since QVPO relies on action samples for both optimization and action selection, this will increase computational demands.
- The tasks are limited and relatively trivial, making it difficult to assess performance on more complex tasks, such as robot manipulation. In particular, when the action space is larger, it is unclear how the entropy term will scale given its reliance on uniform action distributions.
Technical Quality: 3
Clarity: 4
Questions for Authors: - Can the author provide an explanation for why the exploration ability of diffusion policies declines with a limited number of diffusion steps? Additionally, what happens when the diffusion steps are even fewer, say 5, a common value in other diffusion policy literature? I suppose the entropy term should perform better in such cases.
- If the maximum entropy term is used throughout the training, the policy may continue to follow the uniform distribution in the later stages, potentially hurting performance and convergence. If this is the case, would a scheduling mechanism improve the final performance?
- Can the author explain why the Q-weighted VLO loss is superior to QSM regarding the additional errors in policy optimization?
- In line 300, it is mentioned that action selection is used during inference, which is a standard technique in offline RL. However, is the same technique applied to the baselines? If not, this would be unfair.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: --
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for your valuable feedback and comments! We itemize the weaknesses you mentioned and answer them.
> **Q1**: While Figure 2 is informative in demonstrating the ideal effect of the entropy term, experimental results on toy examples would provide stronger evidence than an illustration.
**A1**: We perform a toy experiment on a continuous bandit problem to further present the effect of the entropy term. The results are shown in Figure 1 of the attached pdf file, which is similar to the illustration. We will add it in the final version.
> **Q2**: Since QVPO relies on action samples for both optimization and action selection, this will increase computational demands.
**A2**: Actually, that is not a weakness but an advantage of QVPO. QVPO can fully utilize the parallel computing capability of the GPU via its multiple-sampling-and-selection procedure. We observed that previous diffusion-based RL methods generally have a low GPU utilization rate, since the sequential forward process of the diffusion model is the computational bottleneck. In that case, sampling multiple actions in parallel does not add much extra time cost.
Besides, QVPO does not need the gradient from the Q network to optimize the diffusion policy, which also reduces the time cost to a certain extent compared with other diffusion-based RL methods like DIPO. Finally, a comprehensive comparison of training and inference times can be found in the **Global Rebuttal**.
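The sample-and-select procedure mentioned above can be sketched as follows (an illustrative snippet with toy stand-ins for the diffusion policy sampler and the critic, not the actual QVPO implementation):

```python
import numpy as np

def select_action(state, sample_action, q_value, n_samples=8):
    """Sample several candidate actions from the policy and execute the one
    with the highest Q-value.  `sample_action` and `q_value` are placeholders
    for the diffusion policy sampler and the learned critic."""
    candidates = [sample_action(state) for _ in range(n_samples)]
    qs = np.array([q_value(state, a) for a in candidates])
    return candidates[int(np.argmax(qs))]

# Toy stand-ins: a random "policy" and a critic preferring small-norm actions
rng = np.random.default_rng(0)
policy = lambda s: rng.standard_normal(3)
critic = lambda s, a: -float(np.linalg.norm(a))
best = select_action(np.zeros(4), policy, critic)
```

On a GPU, the candidate actions would be drawn as one batched denoising pass rather than in a Python loop, which is why the extra samples add little wall-clock time.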
> **Q3**: The tasks are limited and relatively trivial, making it difficult to tell performance in more complex tasks, such as robot manipulation. In particular, when the action space is larger, it is unclear how the entropy term will scale given its reliance on uniform action distributions.
**A3**: We need to clarify that these 5 MuJoCo environments are the standard online RL benchmarks, and most previous online RL methods (e.g., SAC, DIPO) are evaluated only on them. That is why we chose them to verify our algorithm.
Besides, to further demonstrate the efficiency of QVPO, we also apply it to the recently released HumanoidBench environment, a complex continuous-control benchmark based on the Unitree H1 humanoid with a **151-dimensional observation space** and a **61-dimensional action space**. The results are shown in Figure 3 of the attached pdf file. It can be observed that model-free RL methods (SAC, PPO) fail entirely, while QVPO (also a model-free method) achieves superior performance compared with advanced model-based methods (Dreamer-v3, TD-MPC2).
The entropy coefficient is set to 0.01 here as well. In fact, we found that our algorithm is robust to the scale of $w_{ent}$ within the range 1e-3 to 1e-2.
> **Q4(1)**: Can the author provide an explanation for why the exploration ability of diffusion policies declines with a limited number of diffusion steps?
**A4(1)**: Note that Gaussian noise is added at each denoising step of the diffusion process, so more diffusion steps inject more stochasticity into the generated actions. Thus, the exploration ability of diffusion policies declines when the number of diffusion steps is limited.
> **Q4(2)**: Additionally, what happens when the diffusion steps are even fewer, say 5, a common value in other diffusion policy literature? I suppose the entropy term should perform better in such cases.
**A4(2)**: We have added the ablation result with 5 diffusion steps in the attached pdf file; the performance is poor. Hence, unlike in offline RL methods, a certain number of diffusion steps is required in the online RL setting. This is also verified in previous works like DIPO [R7].
> **Q5**: If the maximum entropy term is used throughout the training, the policy may continue ... potentially hurting performance and convergence. Would a scheduling mechanism improve the final performance?
**A5**: The entropy term is not constant, as you supposed. As shown in Algorithm 1, line 8, the actual entropy coefficient $w_{ent}(s)=w_{ent}w_{eq}(s,a_{max})$ depends on $w_{eq}(s,a_{max})=A(s,a_{max})$, where $A(s,a_{max})\ge 0$. When QVPO converges, $A(s,a_{max})\rightarrow 0$ and hence $w_{ent}(s)\rightarrow 0$.
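A toy sketch of this schedule (variable names are illustrative, not from the implementation; for illustration we take the advantage baseline to be the mean over sampled Q values, which is an assumption on our part):

```python
import numpy as np

def effective_entropy_weight(w_ent, q_values):
    """State-dependent entropy coefficient w_ent(s) = w_ent * A(s, a_max).

    q_values: Q estimates of the candidate actions sampled in state s.
    The best action's advantage A(s, a_max) = max(Q) - mean(Q) >= 0 shrinks
    toward 0 as the policy converges, so the entropy bonus vanishes on its own.
    """
    advantage_max = np.max(q_values) - np.mean(q_values)  # A(s, a_max) >= 0
    return w_ent * advantage_max
```

Early in training the candidate Q values are spread out, so the bonus is active; near convergence they are nearly equal and the bonus goes to zero.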
> **Q6**: Can the author explain why the Q-weighted VLO loss is superior to QSM regarding the additional errors in policy optimization?
**A6**: As mentioned in lines 43-46, QSM has a doubled approximation error from the alignment process. More concretely, the trained Q-value function has an approximation error, and QSM utilizes the gradient of this approximated Q function to train the score model, which leads to the doubled approximation error in the score model. That is why the performance of QSM is worse than that of QVPO, as shown in the attached pdf. Besides, the policy improvement stage of QVPO is sampling-based, which makes QVPO more likely to escape local optima.
> **Q7**: In line 300, it is mentioned that action selection is used during inference, which is a standard technique in offline RL. However, is the same technique applied to the baselines? If not, this would be unfair.
**A7**: We need to clarify that there is a contradiction if we apply this technique to other methods.
As mentioned in offline RL works like IDQL [R6], applying action selection to a diffusion policy yields a deterministic policy during inference. For an MLP or Gaussian policy, we can directly output the deterministic action or the mean of the Gaussian to achieve the same goal, so this trick is not applicable to classical online RL methods (e.g., SAC, TD3).
For diffusion-based RL methods like DIPO, a deterministic policy is obtained by fixing the denoising process (i.e., updating with the mean value at each diffusion step). In that case, additionally applying action selection during inference would be contradictory.
[R6] Hansen-Estruch P, et al. IDQL: Implicit Q-learning as an actor-critic method with diffusion policies.
[R7] Yang L, et al. Policy representation via diffusion probability model for reinforcement learning.
---
Rebuttal Comment 1.1:
Comment: I appreciate the response and the effort made by the authors during the rebuttal. However, I still have a couple of questions.
First, the result of continuous bandit problem looks perfect. Can you elaborate more on how the experiment is actually being done, e.g., how do you sample the initial points?
Second, the authors claim that QVPO has a high GPU utilization compared to baselines. However, do you use any parallel environment during training? I believe that parallel environments, either on CPU or GPU, have been a common technique in RL training. If you use hundreds of parallel environments, say 256 (a common number that most devices can support), the GPU utilization will be high for baselines too, and therefore the multiple sampling and selection procedure of QVPO will become a bottleneck, let alone GPU simulators with thousands of parallel environments.
Third, the authors claim that QVPO can leverage the multimodality of diffusion policies (line 60), however, the authors didn't prove it or even mention it in the rest of the paper. I feel like this claim is very strong and you should verify it in experiments.
---
Reply to Comment 1.1.1:
Comment: Thanks for your comments and questions as well.
**Q8**: Can the authors elaborate more on how the continuous bandit experiment is actually being done, e.g., how do you sample the initial points?
**A8**: The continuous bandit experiment follows Algorithm 1. Concretely, we sample 64 actions from the current diffusion policy and select the best according to the reward function $Q(x)=\sum_{i=1}^3 w_i\frac{1}{2\pi\sigma_i^2}\exp\left(-\frac{1}{2\sigma_i^2}(x-\mu_i)^T(x-\mu_i)\right)$, where $w_i=1.5$, $\sigma_i=0.1$, and $\mu_i=[-1.35,0.65]^T$, $[-0.65,1.35]^T$, and $[-1.61,1.61]^T$ respectively. We then train the diffusion policy on the best action sample (with & without the entropy term) in each training epoch, and plot the evaluation results every 10 training epochs using a large number of samples from the diffusion policy.
The initial points (we assume you mean the first column of Figure 1 in the attached pdf file) are sampled from an initial diffusion policy pre-trained on a Gaussian distribution $\mathcal{N}(0, 0.25)$. Notably, **this pre-training procedure is not required in practical implementation**; pre-training from a given (suboptimal) Gaussian distribution actually increases the convergence difficulty for QVPO. We use it only for better visualization: otherwise, QVPO converges so quickly on such a toy example that the intermediate stages cannot be visualized well.
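The bandit setup above can be sketched as follows (an illustrative sketch, treating each mixture component as an isotropic Gaussian density with the constants listed above; function names are ours, not from the code release):

```python
import numpy as np

# Mixture-of-Gaussians reward for the 2-D continuous bandit (parameters from A8).
W = [1.5, 1.5, 1.5]
SIGMA = [0.1, 0.1, 0.1]
MU = [np.array([-1.35, 0.65]), np.array([-0.65, 1.35]), np.array([-1.61, 1.61])]

def bandit_reward(x):
    """Q(x) = sum_i w_i * N(x; mu_i, sigma_i^2 I) for a 2-D action x."""
    total = 0.0
    for w, s, mu in zip(W, SIGMA, MU):
        d2 = float(np.sum((x - mu) ** 2))
        total += w / (2 * np.pi * s ** 2) * np.exp(-d2 / (2 * s ** 2))
    return total

def select_best(candidates):
    """Sampling-and-selection step: keep the candidate with the highest reward."""
    rewards = [bandit_reward(a) for a in candidates]
    return candidates[int(np.argmax(rewards))]
```

In each epoch, 64 candidates drawn from the current diffusion policy are passed to `select_best`, and the winner is used as the training target.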
**Q9**: Do you use any parallel environment during training? The multiple sampling and selection procedure of QVPO may become a bottleneck, let alone GPU simulators with thousands of parallel environments.
**A9**: No, we do not use any parallel environments during training, for a fair comparison. Firstly, we need to clarify that the parallelism of QVPO lies not in **the online environment interaction stage** but in **the policy optimization stage**. As mentioned in lines 178-179, the difficulty of diffusion policy optimization is obtaining the optimal action samples. The previous diffusion-based RL method DIPO finds the optimal action sample by performing multiple gradient updates on the actions in the replay buffer, and these gradient updates cannot be executed in parallel. In contrast, QVPO finds the optimal action samples via multiple sampling and selection, which can be executed in parallel. In that case, QVPO can speed up policy optimization compared to DIPO.
Besides, although the idea of action selection is also applied to online environment interaction (i.e., the "efficient diffusion policy" in Section 4.4), we believe this trick is not a bottleneck even with parallel environment interaction, because the action selection number in the "efficient diffusion policy" is commonly very small ($K_b=4$ is generally enough) and thus will not affect the training time much. In fact, the GPU utilization and memory consumption of QVPO are modest (55% and 598MB respectively) even in the policy optimization stage, where 256 states are processed in parallel and 64 actions are sampled for each of them.
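A vectorized sketch of this batched sampling-and-selection step (the Q network and diffusion sampler are stand-in callables for illustration, not the actual implementation):

```python
import numpy as np

def select_actions(states, sample_actions, q_fn, k=64):
    """Batched sampling-and-selection: for each of B states, draw k candidate
    actions in one batched call and keep the argmax-Q candidate per state.
    No per-sample gradient loop is needed, so the whole step parallelizes.

    states:         (B, state_dim) array
    sample_actions: maps (B*k, state_dim) states -> (B*k, act_dim) actions
    q_fn:           maps (states, actions) -> (B*k,) Q values
    """
    b = states.shape[0]
    tiled = np.repeat(states, k, axis=0)        # (B*k, state_dim)
    cands = sample_actions(tiled)               # (B*k, act_dim)
    q = q_fn(tiled, cands).reshape(b, k)        # (B, k)
    best = np.argmax(q, axis=1)                 # best candidate index per state
    return cands.reshape(b, k, -1)[np.arange(b), best]
```

With 256 states and 64 candidates each, this is a single forward pass of 16384 state-action pairs, which is exactly where the GPU parallelism is exploited.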
**Q10**: The authors claim that QVPO can **leverage** the multimodality of diffusion policies (line 60), however, the authors didn't prove it or even mention it in the rest of the paper. I feel like this claim is very strong and you should verify it in experiments.
**A10**: Our intention is not to prove the multimodality of diffusion policies (which has been demonstrated in DIPO and QSM) but to efficiently **leverage** it for better performance in continuous control tasks. Multimodality is generally hard to visualize, especially in complex real tasks, so we developed a toy example to show this property. As shown in Figure 1 of the attached file, the proposed diffusion entropy term and the sampling-based policy optimization allow QVPO to explore a broader action space (Figure 1), which verifies that QVPO can better leverage multimodality. | Summary: The paper proposes a novel model-free online reinforcement learning (RL) algorithm called Q-weighted Variational Policy Optimization (QVPO), which leverages the expressiveness and multimodality of diffusion models. By introducing Q-weighted variational loss and entropy regularization, the authors aim to overcome the limitations of unimodal policies and enhance exploration capabilities. The algorithm is validated through experiments on MuJoCo continuous control benchmarks, demonstrating state-of-the-art performance in terms of both cumulative reward and sample efficiency.
Strengths: The paper presents a significant advancement in integrating diffusion models into online RL, a relatively underexplored area. The proposed QVPO algorithm is theoretically sound, with the Q-weighted variational loss being proven as a tight lower bound of the policy objective in online RL. The incorporation of an entropy regularization term and an efficient behavior policy are innovative approaches to enhance exploration and sample efficiency.
Weaknesses: 1. Experiment Scope: The experiments are limited to D4RL datasets, which do not cover the full spectrum of possible RL environments. The generalization of the proposed method to other types of tasks, such as discrete action spaces or tasks with sparse rewards, is not demonstrated, e.g., Adroit, Humanoid, MetaWorld, Maniskill, KUKA Pick-and-Place.
2. Ablation Studies: While the paper includes ablation studies, they are not exhaustive. Additional ablation studies are needed to isolate the contributions of individual components, such as the Q-weighted variational loss and the entropy regularization term. It is unclear how much each component independently contributes to the overall performance improvements.
3. Diffusion Policy Variance Reduction: The proposed method for reducing diffusion policy variance via action selection aims to improve sample efficiency. However, this approach may sacrifice the exploration capabilities of the diffusion policy, thus limiting its multimodal potential. This trade-off is not discussed in the paper, and no ablation study is provided to evaluate the impact.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Can the proposed algorithm be adapted for tasks with discrete action spaces or sparse rewards, and if so, how?
2. How is the diffusion policy initialized, and what impact does this initialization have on the performance of the algorithm? If the policy is trained from scratch, how is the Q-function trained at the beginning stage?
3. The efficient diffusion policy method reduces policy variance but may limit exploration. How do you balance exploration and exploitation, and can you provide an ablation study to verify the effects of this trade-off?
4. From Figure 3, QVPO appears to converge faster. Can you provide an explanation or experiments to support this observation?
5. How does the computational complexity of QVPO compare with existing diffusion-based online RL methods in terms of training time and resource utilization?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors have addressed the limitations of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for his/her careful reading and valuable suggestions. Below we will answer your concerns point-by-point.
> **Q1**: The experiments are limited to D4RL datasets, which do not cover the full spectrum of possible RL environments. Can the proposed algorithm be adapted for tasks with discrete action spaces or sparse rewards, and if so, how?
**A1**: Thanks for your question. We need to clarify two fundamental points:
(1) **We never use D4RL datasets. QVPO is an online RL method, not an offline one, and only offline RL requires datasets such as D4RL.** We perform our experiments in 5 MuJoCo online environments rather than on D4RL datasets; these MuJoCo tasks are standard benchmarks for existing online RL methods. Besides, we also added experiments on the recently released HumanoidBench environment in the attached pdf file. It is a very complex environment, and QVPO still achieves SOTA performance.
(2) **QVPO is designed for complex continuous control tasks rather than for discrete action spaces or sparse rewards.** For discrete action spaces, we can directly sample from the discrete probability simplex output by the neural network and thereby obtain a multimodal policy. However, achieving a multimodal policy over a continuous action distribution is non-trivial, since a neural network cannot directly output the corresponding probability density function. We therefore utilize a diffusion model to enhance the multimodality and exploration ability of the RL agent in continuous control tasks. Besides, QVPO does not conflict with most existing RL methods for sparse rewards, such as HER [R5], so we can incorporate HER into QVPO in environments with sparse rewards.
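As a toy numerical illustration of this point (the action set and numbers are made up for illustration): a categorical head over discrete actions can place mass on two separated good actions directly, while the best-fitting unimodal Gaussian centres between them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two equally good continuous actions, say a = -1 and a = +1.
good_actions = np.array([-1.0, 1.0])

# Discrete case: a softmax output over actions {-1, 0, +1} is bimodal
# by construction -- almost all mass on the two good actions.
logits = np.array([4.0, -4.0, 4.0])
probs = np.exp(logits) / np.exp(logits).sum()

# Continuous case: the maximum-likelihood unimodal Gaussian fit to behavior
# spread evenly over both good actions centres on their mean -- the action
# a = 0 that neither mode recommends.
data = rng.choice(good_actions, size=10_000)
mle_mean = data.mean()
```

This is why multimodality is trivial to represent in the discrete case but requires an expressive model such as a diffusion policy in the continuous case.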
> **Q2**: While the paper includes ablation studies, they are not exhaustive. Additional ablation studies are needed to isolate the contributions of individual components, such as the Q-weighted variational loss and the entropy regularization term. It is unclear how much each component independently contributes to the overall performance improvements.
**A2**: Thanks for raising this concern. We need to clarify that the Q-weighted variational loss is the main body of QVPO: QVPO must use this loss to train the diffusion policy, so it cannot be ablated. The ablation study on the entropy regularization term is shown in Figure 4 of the original paper.
> **Q3**: The proposed method for reducing diffusion policy variance via action selection aims to improve sample efficiency. However, this approach may sacrifice the exploration capabilities of the diffusion policy, thus limiting its multimodal potential. This trade-off is not discussed in the paper, and no ablation study is provided to evaluate the impact.
**A3**: We did conduct such an ablation study, with the trade-off analysis, in our paper. The action selection number $K_b$ in the efficient diffusion policy directly controls the trade-off between exploration and exploitation. As shown in **Figure 5** and discussed in lines 320-330, the diffusion policy converges slowly when $K_b=1$ (i.e., without the efficient diffusion policy), meaning QVPO cannot then exploit the Q-value information well. We also added an experiment with $K_b=20$ to show that too large an action selection number leads to limited exploration. In that case, setting $K_b=4$ balances exploration and exploitation well.
> **Q4**: How is the diffusion policy initialized, and what impact does this initialization have on the performance of the algorithm? If the policy is trained from scratch, how is the Q-function trained at the beginning stage?
**A4**: Thanks for your question. Both the diffusion policy and the Q-function are trained from scratch, without any special initialization trick.
> **Q5**: The efficient diffusion policy method reduces policy variance but may limit exploration. How do you balance exploration and exploitation, and can you provide an ablation study to verify the effects of this trade-off?
**A5**: Please refer to **A3**.
> **Q6**: From Figure 3, QVPO appears to converge faster. Can you provide an explanation or experiments to support this observation?
**A6**: Thanks for your question. Compared with DIPO and QSM, QVPO can explore a broader action space and obtain multimodal training samples via the sampling-based update and the entropy regularization term. We believe that is why QVPO converges faster.
> **Q7**: How does the computational complexity of QVPO compare with existing diffusion-based online RL methods in terms of training time and resource utilization?
**A7**: The training and inference times of QVPO and other online RL methods are shown in the **Global Rebuttal**. The GPU utilization rate of QVPO/DIPO is 55%/28%, and the memory consumption of QVPO/DIPO is 598MB/1354MB. Hence, QVPO makes sufficient use of the parallel computing ability of the GPU.
[R5] Andrychowicz M, Wolski F, Ray A, et al. Hindsight experience replay[J]. Advances in neural information processing systems, 2017, 30.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Dear Authors,
Thank you for your rebuttal. You mentioned that "QVPO can efficiently leverage GPU parallel computing, and the multiple-sampling and selecting procedure does not significantly impact time compared to gradient-based optimization like DIPO." Could you clarify the parallel settings used to evaluate the training and inference times for your method? Thank you.
---
Reply to Comment 1.1.1:
Comment: **Q8**: You mentioned that "QVPO can efficiently leverage GPU parallel computing, and the multiple-sampling and selecting procedure does not significantly impact time compared to gradient-based optimization like DIPO." Could you clarify the parallel settings used to evaluate the training and inference times for your method?
**A8**: Thanks for your comments as well. As mentioned in lines 178-179, the difficulty of diffusion policy optimization is obtaining the optimal action samples. QVPO finds the optimal action samples via multiple sampling and selection, which can be done in parallel. In contrast, the previous diffusion-based RL method DIPO finds the optimal action sample by performing multiple gradient updates on the actions in the replay buffer; these gradient updates cannot be implemented in parallel. In that case, QVPO can speed up training via the parallelism of the GPU. The concrete parallel settings of QVPO in training and in inference are shown in Table 3 and lines 300-301 of the original paper respectively. The action selection number (i.e., the number of actions generated in parallel) is 64 in training and 32 in inference for each state. | Summary: The paper uses a diffusion policy to address online RL. The method works by weighting the diffusion model VLO loss using advantages computed with a Q function. Additionally, they add an entropy maximization term to aid with exploration. The authors present results indicating strong performance on a variety of continuous control tasks as well as ablations verifying the necessity of several components of their method.
Strengths: - The paper is one of the first to consider the application of diffusion policies to online RL and gets reasonably good results. This could be a significant paper with some additional experiments and touch ups.
- The paper is the first to apply the weighted VLO objective to online RL and contributes a novel method for maximizing the entropy of a diffusion policy
- Overall the discussion of the method and experiments is clear
Weaknesses: There are several claims made in the paper that I don't believe are substantiated. Some important ones:
- In the introduction and RW section you make claims about the DIPO and QSM baselines that don't seem to be proven anywhere
- You take credit for several things that aren't yours. Your "efficient diffusion policy" trick is used extensively in other work and the weighted VLO loss is one of the methods presented in EDP (Kang et al.)
I think there need to be some changes to the experiments.
- Most important: the comparison to other RL algorithms is unfair since you tune your algorithm per environment whereas other algorithms have a fixed set of hyperparameters. You need to either tune all relevant hyperparameters for all other algorithms for all environments or fix yours.
- One of your key claims is that the diffusion policy is a good fit for RL because of its ability to model multimodal action distributions and yet you don't consider any environments where this is the case. Rather, you simply demonstrate improved sample efficiency on a handful of tasks that clearly can be solved using a MLP policy parameterization
- You really should compare to QSM
- The ablation studies are a bit incomplete in my opinion. It would be good to see results from all five environments. Also, it would be good to run an ablation where you omit the policy loss weighting, leading to an algorithm similar to IDQL, to verify that the policy weighting is important and that your policy isn't relying on the efficient diffusion policy step. I assume this ablation will go well for you based on the fact that the $K_b=1$ ablation still learns a policy but this will strengthen the paper.
- You mention qcut, then say it doesn't work and never mention it again. Either show an experiment proving that or remove it from the paper
- I believe that slow wall clock time, either for training or online rollouts, is potentially a big limitation of the method. Wall clock time is vaguely mentioned in the appendix but it would be good to see some data about this in the paper
Technical Quality: 3
Clarity: 3
Questions for Authors: Random questions and comments
- The second and third sentence of the abstract seem to contradict one another. How is it that diffusion policies can improve exploration (which is only relevant to online RL) if they have only really been applied to offline RL?
- In the introduction you claim that DIPO has issues with limited exploration but I don't see any proof for that claim in your or their paper
- In the introduction you claim that QSM has issues that prevent it from converging to optimal policies that your method does not have, but you never compare to QSM to demonstrate this win. I think QSM is an important comparison that really should be added to the paper since it is probably the closest to your method.
- I don't understand the purpose of including qcut in the manuscript since you immediately propose a different approximation that works better and then never mention qcut again.
- I'm very confused by Figure 2. How did you generate this? What does 'explorable area' mean?
- Section 4.4 seems to imply that selecting an action by sampling several and then choosing the one that maximizes the Q value is a novel contribution, but this has been done on several occasions including by Diffusion QL, Efficient Diffusion Policy, Implicit Diffusion Q Learning, etc. This should be rewritten and these works should be cited. Also it is frequently referred to as "efficient diffusion policy" in the paper which is confusing since there is another paper (Kang et al.) that is called the same thing.
- The policy improvement objective in this paper is actually very similar to one of the variants of EDP (Kang et al.), specifically shown in Equation 13 of https://arxiv.org/pdf/2305.20081. This paper still has reasonable novelty compared to that paper but that similarity should be better acknowledged.
- Figure 4 is a nice result but it would be helpful to also see steps=100 & ent.
- I'm curious about the wall clock training time of QVPO compared with baselines. Does everything take about 10 hours? I would expect QVPO to be quite a bit slower.
- Is $\omega_\textrm{ent}$ difficult to tune?
Nit picks (not penalizing you for these but thought I should point them out):
- The text in the plot legend is too small and the plots would be easier to read if the authors applied some smoothing
- The transitions used at the beginning of the sentences in lines 163-170 (besides, firstly, notably, etc) are very distracting and the paragraph would sound better if you removed all of them.
- There is an extra " on line 237 for some reason
- Appendix B: hyperparameters should not be hyphenated
- The legends of the plots look sloppy. Matplotlib allows the use of LaTeX and you should make use of that
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors mention one limitation--the lack of adaptive entropy regularization. However, they miss other potential limitations, such as slower wall clock training time or the inability to output extended chunks of actions, as is common for diffusion policies applied to BC.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Q1**: You make claims about DIPO and QSM that don't seem to be proven anywhere. You really should compare to QSM.
**A1(DIPO)**: **We did compare QVPO with DIPO in Figure 4 (original paper)**. Besides, as mentioned in lines 110-114, DIPO creates a dedicated buffer for diffusion policy, updates the state-action pairs in this buffer via the gradient of Q network, and finally uses the state-action pair from this buffer to train the diffusion policy. In that case, the training samples of DIPO will be limited to the vicinity of previously explored actions, thus leading to limited exploration.
**A1(QSM)**: The **code of QSM had not yet been released when this paper was submitted**, so we could not compare QVPO with QSM earlier. We have now **added QSM results** on the 5 MuJoCo benchmarks in Figure 2 of the attached pdf file, following the **recently released** official implementation. The performance of QSM is much worse than that of QVPO and DIPO, and even worse than SAC and TD3 in the Walker and Humanoid environments, **which is also shown in Figure 3 of the QSM original paper**. As we mentioned in lines 43-46, QSM has a doubled approximation error from the alignment process: the trained Q-value function has an approximation error, and QSM utilizes the gradient of this approximated Q function to train the score model.
> **Q2**: You take credit ... "efficient diffusion policy" trick is used extensively in other work and the weighted VLO loss is one of the methods presented in EDP (Kang et al.) ... This should be rewritten and these works should be cited. ... This paper still has reasonable novelty but that similarity should be better acknowledged.
>
**A2**: We will add more clarification and cite these works properly in the final version. Note that these works are all offline RL: they use "efficient diffusion policy" to yield a deterministic policy during inference, whereas QVPO utilizes it in the online interaction with the environment to obtain **more efficient transition data**. Our motivation is therefore different from these existing works.
Besides, the weighted VLO loss of QVPO is different from that of EDP: QVPO trains the diffusion policy with weighted samples from the current policy, while EDP trains the diffusion policy with weighted samples from the policy in the offline dataset (replay buffer).
> **Q3-1**: The comparison to other RL algorithms is unfair. Need to either tune all relevant hyperparameters for other algorithms or fix yours.
**A3-1**: Most hyperparameters in our experiments are shared across environments, except $K_t$ in Hopper and $w_{ent}$ in HalfCheetah, as shown in Table 3. During the rebuttal, we **fixed all hyperparameters of QVPO**, and the **new results** on the **5 MuJoCo benchmarks are shown in the attached pdf file**. QVPO shows similar performance and still outperforms all baselines by a clear margin.
>**Q3-2**: **Is $\omega_{ent}$ difficult to tune?**
**A3-2**: No, it is quite easy to set: any value with scale between 1e-3 and 1e-2 works.
> **Q4**: Claim diffusion policy is a good fit for RL due to its ability to model multimodal action distributions, but don't conduct experiments on such environment.
**A4**: Firstly, existing online RL methods almost all verify their performance on these robot locomotion tasks, so these standard environments allow a relatively fair comparison between QVPO and previous methods.
Second, the multimodality of the policy is important because it helps the policy avoid getting stuck at local optima. Moreover, tasks in our experiments such as Humanoid-v3 cannot be solved well with an MLP policy.
Third, to further test QVPO on more complex tasks, we applied it to the recently released HumanoidBench environment. The benchmark is based on the Unitree H1 humanoid robot with a **151-dimensional state space** and a **61-dimensional action space**. Please refer to the **Global Response** for more details.
> **Q5(1)**: The ablation studies are a bit incomplete in my opinion. It would be good to see results from all five environments.
**A5(1)**: Due to time and resource limits, we could not complete these additional experiments; we will add them in the final version. The results for the other 4 environments are actually similar, but we did not record them.
> **Q5(2)**: It would be good to run an ablation where you omit the policy loss weighting ... I assume this ablation will go well ... $K_b=1$ still learns a policy but this will strengthen the paper.
**A5(2)**: The requested ablation ($K_b=1$) is shown in Figure 5 of the original paper. Note that offline diffusion RL methods like EDP train the diffusion policy with samples from offline datasets or a replay buffer, whereas QVPO trains the diffusion policy with selected samples from the diffusion policy itself.
> **Q6**: You mention qcut. Either show an experiment proving that or remove it.
**A6**: qcut was mentioned at the beginning to present the original motivation of QVPO. We will remove the qcut part in the final version.
> **Q7**: The training and inference time of QVPO.
**A7**: Please refer to the **Global Response**.
> **Q8**: The second and third sentence of the abstract seem to contradict one another.
**A8**: Thanks and we will modify the two sentences.
> **Q9**: I'm very confused by Figure 2. How did you generate this? What does 'explorable area' mean?
**A9**: Figure 2 is a schematic exported from PowerPoint to visualize the effect of the diffusion entropy regularization term. The 'explorable area' means the area to which the diffusion policy assigns a non-negligible generation probability.
> **Q10**: Figure 4 is a nice result but it would be helpful to also see steps=100 & ent.
**A10**: The entropy term is introduced for diffusion models with few denoising steps; adding it to QVPO with steps=100 will not improve the result. We will add this result in the final version.
> **Q11**: Suggestions to improve the presentation (some minor format issues) ...
**A11**: We will modify them.
---
Rebuttal Comment 1.1:
Title: Most comments addressed but I'm worried about performance of all your baselines.
Comment: Thanks for your detailed responses. You addressed most of my concerns about the paper, but I was still not 100% convinced by your results. I thought it was odd that PPO does so poorly on Humanoid and HalfCheetah so I looked around for other papers running the same baselines on the same benchmark tasks. I found a paper titled "DSAC-T: Distributional Soft Actor-Critic with Three Refinements" from Duan et al. which also runs several of your baselines on your benchmark tasks. Notably, their implementation of SAC does at least as well as QVPO on all environments except Hopper, and actually does substantially better on Walker, Halfcheetah and Humanoid. This pattern holds across their implementation of several baselines you compare to. This leads me to question whether your method is truly stronger than these baselines or if the difference in performance that you observe is merely a matter of implementation details. In order to improve my score it would be helpful to understand why their results are so much stronger than yours.
---
Rebuttal 2:
Title: Illustrations about the performance of our baselines ---- Reply to Reviewer Kypj
Comment: Thanks for your valuable suggestions as well. We are glad that our responses help you solve most of your concerns. Here are some further illustrations for your concerns.
**Why are the results of DSAC-T so much stronger than ours?**
We want to clarify that the difference in performance between DSAC-T [R8] and QVPO merely **comes from different implementation settings**. In Figure 2 of DSAC-T [R8], the x-axis is not the number of online interaction steps as in our paper but the number of training iterations. As shown in Table 3 of DSAC-T [R8], DSAC-T performs one training iteration per **20 online interaction steps** (the **sample batch size** in the original paper) for off-policy RL methods. Hence, to fairly compare QVPO with DSAC-T, you need to **multiply all the coordinates on the x-axis by 20**; with that rescaling, you will obtain SAC results similar to ours. Actually, it is more meaningful to use interaction steps (ours) rather than training iterations (DSAC-T) as the x-axis, since the former better reflects the sample efficiency of different algorithms in the online RL paradigm.
We want to highlight that **our setting is the same as that of most existing works**, including SAC [R9], DIPO, QSM [R10], TD-MPC2 [R1], Dreamer-v3 [R2], etc. **Our results on all other baselines, including SAC, are also consistent with these works**. Please refer to Figure 1 of SAC [R9], Figure 6 of DIPO, Figure 3 of QSM [R10], Figure 12 of TD-MPC2 [R1], etc.
**Why does PPO perform poorly on Humanoid and HalfCheetah?** PPO does perform poorly with limited interactions since it is an on-policy RL method. Many previous works, such as DIPO, SAC [R9], and Dreamer-v3 [R2], show results similar to ours. Please refer to Figure 6 in DIPO, Figure 1 in SAC [R9], and Figure 14 in Dreamer-v3 [R2].
[R8] Duan J, Wang W, Xiao L, et al. DSAC-T: Distributional soft actor-critic with three refinements[J]. arXiv preprint arXiv:2310.05858, 2023.
[R9] Haarnoja T, Zhou A, Hartikainen K, et al. Soft actor-critic algorithms and applications[J]. arXiv preprint arXiv:1812.05905, 2018.
[R10] Psenka M, Escontrela A, Abbeel P, et al. Learning a diffusion model policy from rewards via q-score matching[J]. arXiv preprint arXiv:2312.11752, 2023.
---
Rebuttal Comment 2.1:
Comment: Thanks for the response. I incorrectly assumed that "iterations" in that paper was equal to the number of environment interactions. It appears you are correct that they are training on more data and your results do mostly match the DIPO results. Can you please confirm that in your paper one epoch equals one environment step? I couldn't find any details specifying this in the paper. I actually think replacing 'epoch' with something along the lines of the number of environment steps would get rid of a lot of confusion. At the very least you should make the relationship between epochs and training data more explicit in the paper.
As I was looking into this I noticed that algorithm 1 is confusing. You loop over $t$ from 1 to $T$, but never specify what $T$ is. Is each iteration through that loop an epoch? Also, you use $t$ twice, as it simultaneously refers to the step in the iteration from 1 to $T$, (lines 1-3) but also refers to the index in the batch for the TD target (line 10).
Regardless of the small issues I've raised in this response I'm feeling far more confident about the paper. Upon confirmation that one epoch means one environment step I'll increase my score to 6. The main reason I don't feel a higher score is justified is that I really need to see some experiments in more difficult environments for me to believe this paper will have "high impact", as is the criteria for a score higher than 6.
---
Reply to Comment 2.1.1:
Title: Clarification on the meaning of training epoch and the impact of QVPO -- Reply to Reviewer Kypj
Comment: **Q12: Does one training epoch equal one environment step in your paper?**:
Thanks for your feedback. We confirm that one training epoch equals one environment step in our paper. We did not state this explicitly since it is a common setting in the online RL literature. However, to make our paper clearer, we will replace "training epoch" with "environment step" in the final version.
**Q13: $t$ in Algorithm 1 looks confusing.**:
Thank you for pointing out this ambiguity. In Algorithm 1, each iteration corresponds to one training epoch, and $T$ denotes the total number of training epochs (i.e., 1e6 in our experiments on the 5 MuJoCo environments). However, $t$ in line 10 does not denote the training epoch but indexes different transitions in the replay buffer. We will use distinct notations for the training epoch (environment step) and the transition index in the final version.
**Q14: I really need to see some experiments in more difficult environments for me to believe this paper will have "high impact"**:
We would like to emphasize that the HumanoidBench environment [R7] we added is difficult enough to demonstrate the "high impact" of QVPO. As mentioned in A4, this environment is based on the Unitree H1 humanoid robot, with a **151-dimensional state space and a 61-dimensional action space**. As shown in paper [R7], **model-free RL methods such as PPO and SAC cannot even converge in this challenging environment**. However, as shown in **Figure 3 of our attached pdf file, QVPO (although a model-free method) even surpasses the performance of advanced model-based RL methods (TD-MPC2, Dreamer-v3)**. Notably, whole-body control of a humanoid is genuinely very hard! For this reason, existing works on humanoid control such as [R11, R12] tend to learn separate RL policies for the lower body and upper body of the humanoid. However, QVPO performs very well with whole-body humanoid control (the 61 action dimensions include the upper body, the lower body, and two hands). Hence, we believe QVPO will be a revolutionary online RL method for complex continuous control tasks.
[R7] Sferrazza C, Huang D M, Lin X, et al. Humanoidbench: Simulated humanoid benchmark for whole-body locomotion and manipulation[J]. arXiv preprint arXiv:2403.10506, 2024.
[R11] He T, Luo Z, Xiao W, et al. Learning human-to-humanoid real-time whole-body teleoperation[J]. arXiv preprint arXiv:2403.04436, 2024.
[R12] Cheng X, Ji Y, Chen J, et al. Expressive whole-body control for humanoid robots[J]. arXiv preprint arXiv:2402.16796, 2024. | Summary: The method proposes using Q-weight diffusion loss to train agents in online RL. Instead of approximating log probability, the paper uses advantage weight tuning to maintain the entropy of the actor.
Strengths: The paper is well-organized, and the method is straightforward but effective. I found Theorem 1 interesting as it naturally builds a connection between diffusion models and the online RL objective. Experiments show the method is easy to converge and achieves higher rewards.
Weaknesses: My main concern is that the paper has too much heuristic design regarding equivalent Q-weight transformation functions, the entropy regularization term, and Action Selection. Especially, Eq(9) and Eq(10) are called equivalent Q-weight transformation functions, but they are not mathematically equivalent to Eq(5).
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Equivalent Q-weight transformation functions are used to solve the negative Q value problem. Have the authors considered tuning rewards (for example, adding a fixed constant to all rewards to make all rewards positive) to guarantee Q values are all positive? It may be the most straightforward way to avoid the negative Q value issue.
2. There is a typo in line 208 where the right parenthesis should be "]".
3. For Eq(9), how do you estimate $\max_a Q(s,a)$ in practice?
4. Even though critic training is not the main point of the paper, I suggest adding a section in the Appendix to explicitly describe it for completeness.
5. In Figure 5, some $K_b,K_t$ combinations may lead to extreme training collapse. Did the authors experiment with higher values of $K_b,K_t$ while maintaining $K_t<K_b$?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: My main concern is that the design of equivalent Q-weight transformation functions is heuristic and may need some theoretical intuition or backup.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Q1**: My main concern is that the design of equivalent Q-weight transformation functions is heuristic and may need some theoretical intuition or backup.
**A1**: Thank you for raising the concern. Here is a theoretical proof of the convergence of QVPO with the qadv weight transformation function. (We are afraid OpenReview cannot display the equations clearly. You can copy the following equations and text into HackMD for a clear presentation.)
Assume the new diffusion policy after one update can be approximately expressed as
$$\pi_{new}(a\mid s) \approx (1-p_{data}(s)A_{\pi_{old}}(s, a^\star)\eta)\pi_{old}(a\mid s) + p_{data}(s) A_{\pi_{old}}(s, a^\star) \eta \frac{\mathbb{I}_{a\in \mathcal{N}(a^\star\mid s, \epsilon)}(a)}{S_{\mathcal{N}(a^\star\mid s, \epsilon)}}
$$
where $a^\star\mid s$ is the action that maximizes $Q_{\pi_{old}}(s, a)$ in the state $s$, $\mathcal{N}(a^\star \mid s, \epsilon)$ is the neighborhood of $a^\star\mid s$ with a small radius $\epsilon$, $S_{\mathcal{N}(a^\star\mid s, \epsilon)}$ is the area of this neighborhood, and $p_{data}(s)$ is the sampling distribution of the state. This assumption is straightforward: the generation probability of each training sample under the diffusion model increases, and the increase is proportional to the sample's weight.
Now consider the improvement of the RL objective:
$$
\begin{aligned}
\mathcal{J}(\pi_{new}) - \mathcal{J}(\pi_{old}) & = \mathbb{E}_{s\sim \rho_0}\left[V_{\pi_{new}}(s) - V_{\pi_{old}}(s)\right] \\
& = \mathbb{E}_{s\sim \rho_0}\left[\mathbb{E}_{a\sim \pi_{new}(a\mid s)}\left[Q_{\pi_{new}}(s, a)\right] - V_{\pi_{old}}(s)\right] \\
&=\mathbb{E}_{s\sim \rho_0}\left[\mathbb{E}_{a\sim \pi_{new}(a\mid s)}\left[Q_{\pi_{new}}(s, a)-Q_{\pi_{old}}(s, a)\right]\right] \\ &\quad \quad + \mathbb{E}_{s\sim \rho_0}\left[\mathbb{E}_{a\sim \pi_{new}(a\mid s)}\left[ Q_{\pi_{old}}(s, a)\right] - V_{\pi_{old}}(s)\right] \\
&=\mathbb{E}_{s\sim \rho_0}\left[\mathbb{E}_{a\sim \pi_{new}(a\mid s)}\left[Q_{\pi_{new}}(s, a)-Q_{\pi_{old}}(s, a)\right]\right] + \mathbb{E}_{s,a\sim \rho_0, \pi_{new}(a\mid s)}\left[A_{\pi_{old}}(s,a)\right] \\
\end{aligned}
$$
The first term here can be further expanded according to the Bellman equation:
$$
\mathbb{E}_{s\sim \rho_0}\left[\mathbb{E}_{a\sim \pi_{new}(a\mid s)}\left[Q_{\pi_{new}}(s, a)-Q_{\pi_{old}}(s, a)\right]\right] = \gamma \mathbb{E}_{s\sim d^1_{\pi_{new}}}\left[V_{\pi_{new}}(s)-V_{\pi_{old}}(s)\right]
$$
where $d^1_{\pi_{new}}$ denotes the probability distribution of state in time step $t=1$ with policy $\pi_{new}$. Repeating the above operation, we will obtain:
$$
\begin{aligned}
\mathcal{J}(\pi_{new}) - \mathcal{J}(\pi_{old}) &= \sum_{t=0}^\infty \gamma^t\mathbb{E}_{s,a \sim d_{\pi_{new}}^t, \pi_{new}}\left[A_{\pi_{old}}(s,a)\right] \\
&= \frac{1}{1-\gamma} \mathbb{E}_{s \sim d_{\pi_{new}}}\left[\mathbb{E}_{a\sim \pi_{new}(\cdot\mid s)}\left[A_{\pi_{old}}(s,a)\right]\right] \\
&\approx \frac{1}{1-\gamma} \mathbb{E}_{s \sim d_{\pi_{new}}}\Bigg[(1-p_{data}(s)A_{\pi_{old}}(s,a^\star)\eta)\mathbb{E}_{a\sim \pi_{old}(\cdot\mid s)}\left[A_{\pi_{old}}(s,a)\right] \\& \quad \quad + p_{data}(s)A^2_{\pi_{old}}(s,a^\star) \eta \frac{\mathbb{I}_{a\in \mathcal{N}(a^\star\mid s, \epsilon)}(a)}{S_{\mathcal{N}(a^\star\mid s, \epsilon)}}\Bigg] \\
& = \frac{1}{1-\gamma} \mathbb{E}_{s \sim d_{\pi_{new}}}\left[p_{data}(s)A^2_{\pi_{old}}(s, a^\star) \eta \frac{\mathbb{I}_{a\in \mathcal{N}(a^\star\mid s, \epsilon)}(a)}{S_{\mathcal{N}(a^\star\mid s, \epsilon)}}\right] \ge 0
\end{aligned}
$$
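The final equality above drops the first term because the expected advantage of a policy under itself is zero, i.e., $\mathbb{E}_{a\sim \pi_{old}(\cdot\mid s)}\left[A_{\pi_{old}}(s,a)\right]=0$. A quick numerical sanity check of this identity, with an arbitrary made-up policy and Q values over discrete actions:

```python
import numpy as np

rng = np.random.default_rng(0)
q = rng.normal(size=10)                      # arbitrary Q(s, a) over 10 actions
logits = rng.normal(size=10)
pi = np.exp(logits) / np.exp(logits).sum()   # arbitrary policy pi_old(a | s)

v = (pi * q).sum()                           # V(s) = E_{a~pi}[Q(s, a)]
adv = q - v                                  # A(s, a) = Q(s, a) - V(s)

# Expected advantage under the same policy is zero by construction.
print(abs((pi * adv).sum()) < 1e-12)         # True
```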
> **Q2**: Have the authors considered tuning rewards (for example, adding a fixed constant to all rewards to make all rewards positive) to guarantee Q values are all positive?
**A2**: Thank you for your question. Yes, we did test adding a fixed constant to all rewards at the very beginning. However, it does not work well once the policy has improved to a certain extent, because this operation shrinks the relative differences between Q values. For example, if we have $Q(s,a_1)=1$ and $Q(s,a_2)=2$, and $Q^{'}(s,a_1)=11$ and $Q^{'}(s,a_2)=12$ after adding a fixed constant $10$, then the raw relative difference is $\frac{2-1}{2}=0.5$ while the modified relative difference is $\frac{12-11}{12}\approx 0.083$. Besides, in many real applications it is hard to choose a fixed constant that guarantees all $Q$ values are positive.
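The arithmetic in the example above can be checked directly:

```python
# Illustrative check of the example above: adding a constant to all rewards
# shrinks the *relative* difference between Q values, weakening the Q-weights.
q1, q2, shift = 1.0, 2.0, 10.0

rel_raw = (q2 - q1) / q2                                     # raw relative difference
rel_shifted = ((q2 + shift) - (q1 + shift)) / (q2 + shift)   # after the shift

print(round(rel_raw, 3), round(rel_shifted, 3))  # 0.5 0.083
```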
> **Q3**: There is a typo in line 208 where the right parenthesis should be "]".
**A3**: Thank you for pointing out this. We will correct this typo in the final version.
> **Q4**: For Eq(9), how do you estimate $\max_a Q(s,a)$ in practice?
**A4**: As mentioned in lines 219-220, we obtain an approximation of $\max_a Q(s,a)$ from sufficiently many samples drawn from the diffusion policy. In practice, we use around 64 samples.
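A minimal sketch of this sample-based estimate, assuming hypothetical `sample_actions` (the diffusion policy) and `q_fn` (the critic) interfaces that are not part of the paper's actual code:

```python
import numpy as np

def estimate_max_q(sample_actions, q_fn, state, n_samples=64):
    """Approximate max_a Q(s, a) by sampling candidate actions from the
    diffusion policy and taking the maximum critic value.

    `sample_actions(state, n)` and `q_fn(state, actions)` are hypothetical
    stand-ins for the diffusion policy and the critic."""
    actions = sample_actions(state, n_samples)   # (n_samples, action_dim)
    q_values = q_fn(state, actions)              # (n_samples,)
    return q_values.max()

# Toy check with a known optimum: Q(s, a) = -||a||^2 attains its max 0 at a = 0.
rng = np.random.default_rng(0)
sample = lambda s, n: rng.normal(scale=0.5, size=(n, 2))
q = lambda s, a: -(a ** 2).sum(axis=1)
print(estimate_max_q(sample, q, state=None) <= 0.0)  # True
```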
> **Q5**: Even though critic training is not the main point of the paper, I suggest adding a section in the Appendix to explicitly describe it for completeness.
**A5**: Thank you for your valuable suggestion. The critic training is standard and the same as in TD3 and DIPO. We will add a description of critic training to the appendix in the final version.
> **Q6**: In Figure 5, some $K_b, K_t$ combinations may lead to extreme training collapse. Did the authors experiment with higher values of $K_b, K_t$ while maintaining $K_t<K_b$?
**A6**: Thank you for your constructive suggestion. Firstly, we need to clarify that fluctuation of the reward curve is a common phenomenon in RL and should not be viewed as a training collapse, since the curve quickly recovers to its previous level. In Figure 5, we only plot the result of one run for each case, without window smoothing. That is why the reward curves look unstable.
We have conducted experiments with higher $K_b$ and have presented the results in the attached PDF file. It can be found that a high value of $K_b$ will limit the exploration of QVPO and tend to be stuck at a local optimum.
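For reference, the window smoothing mentioned above (and deliberately not applied in Figure 5) is typically a simple moving average over the reward curve; a minimal sketch:

```python
import numpy as np

def window_smooth(rewards, window=10):
    """Simple moving average, the kind of smoothing commonly applied to RL
    reward curves before plotting."""
    rewards = np.asarray(rewards, dtype=float)
    kernel = np.ones(window) / window
    return np.convolve(rewards, kernel, mode="valid")

# A maximally noisy toy curve smooths to its mean.
noisy = [0, 10, 0, 10, 0, 10, 0, 10, 0, 10]
print(window_smooth(noisy, window=10))  # [5.]
```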
---
Rebuttal Comment 1.1:
Comment: Thank you for the authors' detailed response. Most of my concerns have been addressed, but a few questions remain unresolved.
1. I believe the proof is valid. However, could you clarify what $\rho_0$ represents? Is it the discounted state distribution [1]? Additionally, could you explain the reasoning behind assuming the new policy in this manner? Do the authors have any intuition about this assumption?
2. I appreciate the explanation regarding reward tuning. One of my main questions following this is why not use $exp(Q)$ or $exp(A)$, which is a standard transformation when applying AWR-style methods, as referenced in [2,3]. This approach naturally guarantees that the weight is non-negative.
[1] Sutton, R. S., & Barto, A. G. (1998). Introduction to reinforcement learning. mit press. Cambridge, MA.
[2] Peng, X. B., Kumar, A., Zhang, G., & Levine, S. (2019). Advantage-weighted regression: Simple and scalable off-policy reinforcement learning. arXiv preprint arXiv:1910.00177.
[3] Kostrikov, I., Nair, A., & Levine, S. (2021). Offline reinforcement learning with implicit q-learning. arXiv preprint arXiv:2110.06169.
---
Reply to Comment 1.1.1:
Comment: Thanks for your comments as well.
**Q7**: I believe the proof is valid. However, could you clarify what $\rho_0$ represents? Is it the discounted state distribution [1]? Additionally, could you explain the reasoning behind assuming the new policy in this manner? Do the authors have any intuition about this assumption?
**A7**: As mentioned in lines 126-129 of our original paper, $\rho_0$ is the distribution of the initial state $s_0$ rather than the discounted state distribution. The discounted state distribution is represented by $d_\pi$.
The assumption is straightforward. We present a simpler formulation to make it easier to follow. Generally, we can write the updated policy as $\pi_{new}(a\mid s) = (1-\beta(s))\pi_{old}(a\mid s) + \beta(s)\pi(a; a^\star\mid s)$ (a similar update formula can be found in equation 4.1 of [R13]), where $\pi(a; a^\star\mid s)$ is a normalized probability distribution close to a Dirac distribution on $a^\star \mid s$ (i.e., we denote it by $\frac{\mathbb{I}_{a\in \mathcal{N}(a^\star\mid s, \epsilon)}(a)}{S_{\mathcal{N}(a^\star\mid s, \epsilon)}}$). Considering the weight $A_{\pi_{old}}(s, a^\star)$ in the objective, we simply assume the degree of updating is proportional to the weight (i.e., $\beta(s)\propto A_{\pi_{old}}(s, a^\star)$). This assumption is reasonable in that the policy takes larger update steps in states with relatively larger weights.
In that case, we will finally achieve $\pi_{new}(a\mid s) \approx (1-p_{data}(s)A_{\pi_{old}}(s, a^\star)\eta)\pi_{old}(a\mid s) + p_{data}(s) A_{\pi_{old}}(s, a^\star) \eta \pi(a; a^\star\mid s)$.
**Q8**: I appreciate the explanation regarding reward tuning. One of my main questions following this is why not use $\exp(Q)$ or $\exp(A)$, which is a standard transformation when applying AWR-style methods, as referenced in [2,3]. This approach naturally guarantees that the weight is non-negative.
**A8**: That is a good question. It can be observed from Figure 3 of AWR [R3] that AWR converges slowly. One reason for the slow convergence is that $\exp(A)$ is too conservative a weighting for the policy in online RL. For instance, if there exist 10 different actions for training with $A(s,a_1)=1$ and $A(s,a_i)=0.1, i=2,\cdots,10$, the weight of the optimal action $a_1$ is $\exp(1)\approx 2.7$ while the total weight of the sub-optimal actions is $\sum_{i=2}^{10}\exp(0.1)\approx 9.9$. In that case, the policy updated with the $\exp$ weight function still has a high probability of outputting a sub-optimal action. In contrast, QVPO avoids this problem via the action selection procedure, which only uses the optimal action to train the diffusion policy. Another problem with $\exp(A)$ is numerical instability. In practice, if the scale of the reward is not set properly, numerical instability occurs with $\exp(A)$ (e.g., $A(s, a)=10$ gives $\exp(A(s, a))\approx 22026$).
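The arithmetic in the example above can be reproduced directly:

```python
import numpy as np

# Numeric check of the example above: with exp(A) weights (AWR-style),
# the nine sub-optimal actions collectively outweigh the single optimal one.
adv = np.array([1.0] + [0.1] * 9)   # one optimal, nine sub-optimal actions
w = np.exp(adv)

print(round(w[0], 1))         # 2.7  (weight of the optimal action)
print(round(w[1:].sum(), 1))  # 9.9  (total weight of sub-optimal actions)
```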
[R13] Kakade S, Langford J. Approximately optimal approximate reinforcement learning[C]//Proceedings of the Nineteenth International Conference on Machine Learning. 2002: 267-274. | Rebuttal 1:
Rebuttal: We thank the reviewers for their careful reading and for their considerate and meaningful suggestions to help us improve our paper. We sincerely appreciate that the reviewers find our work "straightforward but effective" (nUs6), "a significant work" (Kypj, Qp9e), "novel and innovative as the first to apply the weighted VLO loss to online RL" (Kypj, Qp9e), and that it contributes "a novel method for maximizing the entropy of a diffusion policy" (Kypj). We are further glad that the reviewers agree unanimously that our manuscript is "well-written and easy to read" (nUs6, Kypj) and confirm our contributions on both the "interesting and sound" theoretical analysis and the "reasonably good, impressive, easy to converge" empirical results (Kypj, Qp9e, uSjP, nUs6) that support our algorithm.
In the following, we will try to address the concerns/questions of the reviewers and present a detailed item-by-item response to their comments.
**(1) Illustration of the attached pdf file**: The attached pdf file contains 5 figures.
Figure 1 replaces the schematic diagram in the original paper with an experiment on a continuous bandit toy example according to Reviewer uSjP. The contour lines indicate the reward function of continuous bandit, which is an arbitrarily selected function with 3 peaks. The concrete reward function is $Q(x)=\sum_{i=1}^3w_i\frac1{2\pi\sigma_i^2}\exp\left(-\frac1{2\sigma_i}(x-\mu_i)^T(x-\mu_i)\right)$, where $w_i=1.5$, $\sigma_i=0.1$, and $\mu_i=[-1.35,0.65]^T$, $[-0.65,1.35]^T$ and $[-1.61,1.61]^T$ respectively.
Figure 2 adds QSM as a new comparison baseline and shows rerun experimental results on QVPO with fixed hyperparameters. The legends and plots are all improved according to the suggestions of Reviewer Kypj.
Figure 3 adds experimental results on the recently released HumanoidBench [R4]. The benchmark is very complex: it is based on the Unitree H1 humanoid robot with a **151-dimensional observation space** and a **61-dimensional action space**, and **most existing model-free RL methods do not work at all in this environment.** It verifies that QVPO is competitive even against advanced model-based methods (e.g., TD-MPC2 [R1], Dreamer-v3 [R2]) in complex continuous control tasks with high state and action dimensions.
Figure 4 is the comparison between QVPO with and without entropy term, which adds the experimental result of 5 diffusion steps with entropy term according to Reviewer uSjP. We believe 5 diffusion steps are not enough for diffusion policy in online RL.
Figure 5 is the comparison of QVPO with different action selection numbers for behavior policy $K_b$ and for target policy $K_t$, which adds the new case with $K_b=20, K_t=2$ according to Reviewer Qp9e. It implies that a too high $K_b$ in QVPO will result in a lack of exploration.
**(2) Comparison on training time and inference time**:
The training and inference time comparison is shown in the following tables. **Notably, since the official implementation of QSM is based on JAX while the other algorithms are based on PyTorch, the comparison is not entirely fair.** In practice, the same algorithm implemented in JAX is 6-10 times faster than its PyTorch counterpart. Besides, to fairly compare the diffusion-based RL methods in training and inference time, we set the same number of diffusion steps (T=20) for all of them (i.e., QVPO, QSM, and DIPO). The results imply that QVPO can fully exploit the parallel computing ability of the GPU, and that the multiple-sampling-and-selection procedure does not take much time compared with gradient-based optimization as in DIPO.
Moreover, we need to clarify that **although QVPO is much slower than classical RL methods like SAC in inference, its inference time (6 ms) is still acceptable**. To our knowledge, most existing real robots only require a 50-Hz control policy (i.e., one action every 20 ms). Besides, just like QSM, the inference time of QVPO can be further reduced with the JAX framework if necessary. Hence, inference time is not a bottleneck for applying QVPO to real applications.
It is worth noting that we only ran one program at a time here to avoid interference from other running programs. That is why the training time of QVPO is less than what we reported in the original paper.
Due to the word limit, here we only show the comparison on Ant-v3. Results on other 4 environments are almost the same.
The training time comparison on Ant-v3 Benchmarks.
| Method | QVPO | DIPO | TD3 | SAC | PPO | SPO | QSM (jax) |
|:------------------:|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|
| **Training Time (h)** | 6.8 | 10.5 | 0.5 | 2.5 | 0.3 | 0.3 | 1.0 |
The inference time comparison on Ant-v3 Benchmarks.
| Method | QVPO | DIPO | TD3 | SAC | PPO | SPO | QSM (jax) |
|:------------------:|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|
| **Inference Time (ms)** | 6.2 | 5.7 | 0.2 | 0.3 | 0.2 | 0.3 | 0.9 |
[R1] Hansen N, Su H, Wang X. Td-mpc2: Scalable, robust world models for continuous control[J]. arXiv preprint arXiv:2310.16828, 2023.
[R2] Hafner D, Pasukonis J, Ba J, et al. Mastering diverse domains through world models[J]. arXiv preprint arXiv:2301.04104, 2023.
[R3] Peng X B, Kumar A, Zhang G, et al. Advantage-weighted regression: Simple and scalable off-policy reinforcement learning[J]. arXiv preprint arXiv:1910.00177, 2019.
[R4] Sferrazza C, Huang D M, Lin X, et al. Humanoidbench: Simulated humanoid benchmark for whole-body locomotion and manipulation[J]. arXiv preprint arXiv:2403.10506, 2024.
Pdf: /pdf/b7163ae594122d970760e9c818e4abfe6b888840.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Toward a Well-Calibrated Discrimination via Survival Outcome-Aware Contrastive Learning | Accept (poster) | Summary: This work concerns deep learning for survival analysis. Deep learning models for survival analysis are typically desired to have good discriminative performance, where a model can differentiate between patients of different risk profile, and calibration performance, where the time-to-event is accurately predicted by the model. Methods which seek to improve discrimination performance have often led to poorer model calibration. Consequently, this paper proposes an auxiliary contrastive loss to improve the discriminative performance of deep survival models, while still achieving high calibration performance. The novelty in this method is adapting the SOTA contrastive learning method for tabular data, SCARF, by weighting the negative pairs according to the difference in time-to-event. Hence, samples with greater difference in time-to-event are considered more important negative pairs, enforcing an inductive bias to differentiate the latent representations of samples with different time-to-event. After providing a detailed problem formulation, the contrastive loss and generation of positive and negative pairs is described. The results of several experiments are provided which compare the proposed method with current state-of-the-art methods when applied to a number of real world datasets, and demonstrate the proposed model performance under ablation of each term in the loss function. From these experiments it is concluded that the auxiliary contrastive loss achieves superior discriminative and calibration performance.
Strengths: The strength of this work lies in its originality, quality and clarity. The paper proposes an interesting adaptation to SCARF, a contrastive method for tabular data, by weighting the negative pairs such that samples with greater difference in time-to-event are considered more important negative pairs. Recent work on contrastive learning and deep survival methods are cited and appropriately discussed.
Technically, the paper is sound, with a good mathematical foundation for the importance sampling in the weighted contrastive loss term, and a thorough set of experiments which establishes performance against important baselines and provides good support for modelling choices via ablation studies. Furthermore, the clarity of the paper is for the most part good: the description of the background, methodology, and experiments are clear and detailed.
Weaknesses: While the technical parts of this paper are clearly explained, the organisation and presentation of the paper could be improved. In particular, the introduction uses quite a lot of terminology specific to survival analysis that is unexplained or explained in detail only later in the paper. As a specific example, censoring is first mentioned on line 67, but is not clearly explained. Explaining this terminology, which may not be familiar to all of NeurIPS wide readership, at first use would improve the clarity of the paper. Presentation and clarity of the paper would also be improved if all captions were self-contained. In some cases, for example in Table 1, it is difficult to interpret the results as presented, and this could have been avoided if the captions were more detailed. Specifically, it is not clear what the standard deviations represent in this table i.e. variation across model initialisations or data splits.
This work should be praised for providing a list of experiments that should provide the evidence needed to make the claim that the proposed model provides superior discriminative performance while maintaining calibration performance. However, the clarity and quality of these experiments is sometimes below expectations, and this ultimately reduces the overall significance of this work. Table 1 is used to report that the proposed model is the best performing model. However, it seems that the model is within 1 standard deviation of the second best performing model, and hence the statistical significance of this result is unclear. As previously mentioned, while mean and standard deviation of performance metrics are reported, it is not entirely clear what distribution they refer to – whether it be different data splits, or model initialisations via random seed. This makes it difficult to interpret the significance of these results, and also makes it more difficult to reproduce.
The paper, therefore, addresses an important problem and proposes a novel, well-thought-out method to address it, but the significance is undermined by a lack of clarity and by ambiguity surrounding the statistical significance of the reported results.
Technical Quality: 2
Clarity: 3
Questions for Authors: The questions below focus on the details of the experiments that were unclear. Clarity on these questions may change the overall opinion of the paper.
Table 1: In the caption you refer to the standard deviation - are the reported values in this table the mean?
Table 1: Does the standard deviation refer to variation over different data splits, model initialisations or something else?
Table 1: What do bold and underlined values represent?
Table 1: How do you determine that the superior performance of ConSurv is statistically significant?
Line 528: Can you confirm whether min-max normalisation was performed before or after data was split into train, test and validation sets?
Line 236: What is the marginal corruption process? By what augmentation were positive pairs generated?
Line 197: What does the temperature coefficient scale? What impact does this have on model training?
Line 104: Does ref. [18] show that model training is destabilised by the contrastive loss? If not, what supports this statement? Could this model have been included in your benchmarks?
NOTE: Due to the discussion in the author-reviewer discussion period, I feel that many of these points were clarified. In particular new results have better demonstrated the statistical significance of the performance of ConSurv. I raise my score from a 4 to a 6 to reflect this.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The authors have addressed the potential negative societal impact of their work, stating that model predictions from such work should be under scrutiny by domain experts before being applied in a clinical setting. They stress that incorrect use could lead to inequitable access or discriminatory practices. While this is welcome, it is not specific to this work, but is a concern of all works proposing machine learning for clinical practice. A potential impact more specific to this work might consider why the inductive bias introduced by the auxiliary contrastive loss would lead to unintended consequences for AI-supported decisions on treatment and intervention. For example, the unsupervised nature of the contrastive learning procedure could align patients based on features which may act as a proxy for protected traits and lead to inequitable model performance amongst different demographics.
Additionally, a discussion of limitations is provided but is limited. The authors correctly identify that a significant number of patients are censored in real world datasets, and this undermines the importance sampling based on difference in time-to-event, as weights are assigned according to a function in equation (5). It would improve the discussion of the limitations if it was pointed out that seeking alternatives or amendments to (5) would be a specific avenue to improve upon this work.
The limitation section could benefit from greater specificity. The authors propose “modifying models to account for such uncertainties or developing alternative learning methods”. The uncertainties they refer to are unclear, and a preferred statement might be: future avenues of work would be to develop a model which retains the benefit of contrastive learning with respect to discriminative performance, whilst also being agnostic to the number of censored samples within the training set.
They then state “there is a need for new evaluation metrics or model developments that consider the characteristics of such data”. The clarity of this proposal might be clearer if stated with more specificity i.e. "to develop a model that retains high discriminatory and calibration performance for datasets with arbitrary proportions of censored samples, new metrics must be established that can compare the same model applied to different proportions of censored samples."
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # [ Response to Reviewer VjbS ]
We thank the reviewer for the valuable suggestions on our work. We have addressed the reviewer’s comments in our response below.
## (Details of Table 1)
- Mean and Standard Deviation: The values in Table 1 represent the mean performance metrics, with standard deviations provided in Appendix C.1, Table 5. These standard deviations show the variation across 10 random data splits for each model, ensuring robustness and reliability. We have updated the manuscript to clearly refer to the appendix for details on standard deviation.
- Meaning of Bold and Underlined Values: Bold values indicate the best-performing model for each dataset. Underlined values indicate where the D-CAL value is above 0.05, meaning the hypothesis that the model is calibrated is not rejected.
- Statistical Significance of ConSurv’s Performance: To assess the statistical significance of the performance comparison, we conducted Welch’s t-test. In Table E, we present the findings where the improvement was statistically significant (i.e., p-value < 0.05). We have provided a detailed explanation and statistical analysis to substantiate our claim in the updated manuscript.
*Table E. p-value for Performance Metrics.*
|Model|Metric|METABRIC|NWTCO|GBSG|FLCHAIN|SUPPORT|SEER|
|-|-|-|-|-|-|-|-|
|CoxPH|CI|0.004|0.264|0.047|0.325|0.001|0.047|
||IBS|0.204|0.022|0.612|0.033|0.315|0.000|
|DeepSurv|CI|0.031|0.013|0.009|0.115|0.003|0.000|
||IBS|0.302|0.034|0.487|0.046|0.229|0.000|
|DeepHit|CI|0.020|0.385|0.010|0.301|0.436|0.404|
||IBS|0.110|0.000|0.000|0.300|0.000|0.000|
|DRSA|CI|0.000|0.185|0.385|0.000|0.000|0.050|
||IBS|0.000|0.000|0.000|0.000|0.000|0.000|
|DCS|CI|0.000|0.000|0.346|0.040|0.030|0.191|
||IBS|0.000|0.240|0.312|0.040|0.000|0.000|
|X-CAL|CI|0.025|0.000|0.491|0.128|0.050|0.050|
||IBS|0.012|0.025|0.212|0.044|0.060|0.000|
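For reference, Welch's t-test (a two-sample t-test that does not assume equal variances) can be sketched as follows; the per-split scores below are purely illustrative, not the paper's actual results.

```python
import math

def welch_t(a, b):
    """Welch's t statistic and degrees of freedom for two score samples."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # unbiased sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se2 = va / na + vb / nb
    t = (ma - mb) / math.sqrt(se2)
    # Welch-Satterthwaite approximation of the degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical C-index scores over 10 random splits for two models:
model_a = [0.71, 0.70, 0.72, 0.69, 0.71, 0.70, 0.73, 0.70, 0.71, 0.72]
model_b = [0.67, 0.68, 0.66, 0.69, 0.67, 0.68, 0.66, 0.67, 0.68, 0.66]
t, df = welch_t(model_a, model_b)  # the p-value follows from the t-distribution
```

In practice, `scipy.stats.ttest_ind(a, b, equal_var=False)` computes the same statistic together with the p-value.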
## (Details of Generating Positive/Negative Samples)
We construct the positive and negative samples following the contrastive learning framework for tabular data [R8]. Specifically, we generate an augmented version $\tilde{x}^{(i)}$ of each data point $x^{(i)}$ by randomly selecting a subset of features (up to a pre-specified corruption rate) and replacing their values with samples drawn from the corresponding features' marginal distributions. The augmented version of the reference sample is used as the positive pair, and the augmented versions of the remaining samples in the mini-batch are used as the negative pairs. This corruption process is applied regardless of feature type. We have included these details in the updated manuscript.
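As a concrete illustration, the corruption step described above can be sketched as follows (a minimal sketch assuming NumPy; the function name and the default corruption rate are illustrative, not taken from the paper's code):

```python
import numpy as np

def corrupt(batch, corruption_rate=0.6, rng=None):
    """Replace a random feature subset of each row with draws from the
    corresponding feature's empirical marginal (its column in the batch)."""
    rng = rng or np.random.default_rng()
    n, d = batch.shape
    n_corrupt = int(corruption_rate * d)
    out = batch.copy()
    for i in range(n):
        cols = rng.choice(d, size=n_corrupt, replace=False)
        rows = rng.integers(0, n, size=n_corrupt)  # sample from the marginal
        out[i, cols] = batch[rows, cols]
    return out
```

The corrupted view of a sample forms its positive pair, while corrupted views of the other samples in the mini-batch serve as negatives.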
## (Additional Experiments: New Baselines)
In Table F, we include a new baseline, SupWcon, a variant of our method in which $\mathcal{L}_{SNCE}$ is replaced with a supervised contrastive loss that directly weights negative samples by the actual time differences of comparable pairs, without considering the distributional properties of these weights [R11]. This makes the weighting of negative samples sensitive to the scale of the time differences, potentially reducing the robustness of the contrastive loss. Notably, while weighted contrastive learning based directly on differences in time-to-events shows slight performance gains, the improvement is not significant compared with ours. This suggests the importance of applying a good inductive bias when contrasting samples: it encourages the model to discriminate survival predictions based on a proper distribution defined over the differences in time-to-events.
*Table F. Discrimination and calibration of survival models for ConSurv w/ $\mathcal{L}_{SupWcon}$.*
|Data|CI|IBS|DDC|D-Cal|
|-|-|-|-|-|
|METABRIC|0.637±0.025|0.198±0.032|0.106±0.029|0.098±0.109|
|NWTCO|0.721±0.042|0.106±0.012|0.558±0.055|0.571±0.483|
|GBSG|0.675±0.027|0.186±0.018|0.179±0.025|0.000±0.000|
|FLCHAIN|0.781±0.021|0.105±0.006|0.314±0.053|0.332±0.394|
## (Temperature coefficient scale)
We thank the reviewer for insightful feedback regarding the use of $\sigma$ in our time-to-event contrastive learning framework. Here, we explain how $\sigma$ values affect model performance.
When $\sigma$ is very small, the weight for time differences becomes excessively large, causing even minor time differences to be overstated and all negative samples to be treated equally. This leads to the model ignoring time-to-event information and behaving like standard contrastive learning. Conversely, with a very large $\sigma$, the weight for time differences becomes negligible, making the model insensitive to these differences. As a result, all samples receive similar weights, which also reduces the framework to standard contrastive learning.
Our experiments confirm these findings, demonstrating that $\sigma$ must be chosen carefully. We observed that for contrastive learning to function effectively, $\sigma$ needs to be in a range where the scale of the weight applied to time differences does not diverge excessively from the model output. If the weights become too extreme, either too large or too small, the model's performance suffers. Thus, selecting a $\sigma$ that balances the impact of time differences is crucial for optimal contrastive learning performance.
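To make the two regimes concrete, here is a minimal sketch assuming a Laplace-kernel-style weight of the form $w(\Delta t) = 1 - e^{-|\Delta t|/\sigma}$; the exact weighting function is given in the paper's equation (5), so this particular form is our simplifying assumption.

```python
import math

def time_weight(dt, sigma):
    """Illustrative Laplace-kernel-style weight; saturates at 1 for large |dt|."""
    return 1.0 - math.exp(-abs(dt) / sigma)

# Very small sigma: even a tiny time difference saturates the weight,
# so all negative samples end up with ~equal (maximal) weight.
small_sigma = [time_weight(dt, sigma=1e-3) for dt in (0.1, 1.0, 10.0)]

# Very large sigma: all weights are ~0, so time differences are
# effectively ignored and the loss reduces to standard contrastive learning.
large_sigma = [time_weight(dt, sigma=1e3) for dt in (0.1, 1.0, 10.0)]
```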
## (Implementation Details)
Min-max normalization was performed before the data was split into train, test, and validation sets.
## (Clarification on Limitations)
We thank the reviewer for pointing out the need for clarification on limitations. We understand the importance of discussing the potential biases in detail and have incorporated your suggestions to make our proposals clearer in the updated manuscript.
---
### *Reference*
[R8] D. Bahri et al., “Scarf: Self-supervised contrastive learning using random feature corruption,” ICLR, 2022.
[R11] Kerdabadi et al., “Contrastive learning of temporal distinctiveness for survival analysis in electronic health records,” CIKM, 2023.
---
Rebuttal Comment 1.1:
Comment: Thanks to all authors for their rebuttal. They have clearly gone to great lengths to address the comments that all reviewers have made. I have read the reviews from each of the reviewers, and the authors' response to them, as well as the response of the authors to my own review.
I thank the authors for their clear explanation of the temperature coefficient, censoring, and the generation of positive and negative pairs. The manuscript would be improved by the inclusion of this text.
I also thank the authors for performing further statistical analysis, specifically by providing the p-values associated with the Welch's t-test, to assess the performance of ConSurv relative to the other baselines. Unfortunately, I remain unconvinced by the performance of ConSurv. The proposed benefit of this model is to enhance discrimination, without sacrificing calibration. However, table E shows us that ConSurv outperforms the following models on **both** discrimination (CI) and calibration (IBS):
- METABRIC: DRSA, DCS and X-CAL
- NWTO: DeepSurv and X-CAL
- GBSG: DeepHit
- FLCHAIN: DRSA and DCS
- SUPPORT: DRSA and DCS
- SEER: CoxPH and DeepSurv
In short, it seems that for 36 model-dataset pairs, only 12 show evidence for the proposed property.
I also believe the choice to perform min-max normalisation before data splitting may have inadvertently led to data leakage, and I would expect the performance reported to be an overestimate were normalisation applied using the train/validation sets only.
My score would change if I were convinced that the statistical performance was significant, and may be changed after further discussion.
---
Rebuttal 2:
Title: Response to Reviewer VjbS
Comment: # [ Response to Reviewer VjbS ]
We appreciate the reviewer for the feedback and acknowledging our efforts in addressing the reviewer’s comments. Here, we would like to provide further clarification on the following points:
**Normalization and Data Leakage**:
We apologize for any confusion caused by our initial explanation of the min-max normalization process. To clarify, we performed the min-max normalization **after** splitting the dataset, **not before**. This inadvertent miscommunication may have raised concerns about potential data leakage. We assure you that our method prevents any leakage, as normalization is conducted based solely on the training and validation sets. For verification, this procedure can be confirmed in the supplementary $\texttt{dataset.py}$ file that was submitted at the initial submission deadline.
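A leakage-free pipeline of this kind can be sketched as follows (assuming NumPy; all names are illustrative): min-max statistics are fit on the training split only and then applied unchanged to the held-out splits.

```python
import numpy as np

def fit_min_max(train):
    """Compute per-feature min and range on the training split only."""
    lo, hi = train.min(axis=0), train.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)  # guard against constant features
    return lo, span

def apply_min_max(x, lo, span):
    """Scale any split with statistics fit on the training data."""
    return (x - lo) / span

# Usage: lo, span = fit_min_max(x_train), then apply_min_max to
# x_train, x_val, and x_test with those same statistics.
```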
**Statistical Significance of the Performance Gain**:
We would like to start by emphasizing that our work introduces a deep survival model leveraging contrastive learning to enhance discriminative power without sacrificing calibration. Therefore, a fairer comparison showcasing our contribution would involve comparing our model with i) variations of our model trained solely with the NLL loss or with both the NLL and ranking losses, and ii) deep survival models utilizing traditional ranking loss.
In this context, we evaluated the statistical significance of our results using p-values from Welch's t-test. As shown in Table G, ConSurv achieves superior discriminative power over its variant trained solely with the NLL loss while maintaining calibration; these results are statistically significant except on the NWTCO dataset. The variant trained with both the NLL and ranking losses, on the other hand, often loses calibration. Compared with this variant, ConSurv significantly outperforms in calibration on all evaluated datasets, while also providing gains in discriminative power on all datasets except SUPPORT (the gain on NWTCO is not statistically significant). These results are consistent with our motivation described in the Introduction.
A similar trend can be observed when compared with traditional ranking-based deep survival models (i.e., DeepHit and DRSA). While the gain in discriminative power is sometimes not statistically significant, our method provides statistically significant improvements in terms of calibration power, as shown in Table G.
*Table G. p-value for the variants of our method*
|Model|Metric|METABRIC|NWTCO|GBSG|FLCHAIN|SUPPORT|SEER|
|-|-|-|-|-|-|-|-|
|**NLL**|CI|**0.000**|0.308|**0.001**|**0.000**|**0.000**|**0.008** |
||IBS|**0.000**|**0.001**|0.174|0.194|**0.034**|**0.000**|
|**NLL+Rank**|CI|**0.000**|0.424|**0.000**|**0.000**|-|**0.000**|
||IBS|**0.000**|**0.003**|**0.000**|**0.000**|**0.000**|**0.018** |
**Analysis of Time-Dependent Performance Metrics**:
While the CI and IBS provide valuable overall assessments of a given survival model (over the entire time horizon), these metrics cannot fully capture variations in model performance across different time points. To address this, we included time-dependent performance evaluations, namely the time-dependent C-index [R1] and the time-dependent Brier score [R2], in Appendix C.2, at three different time points (i.e., 25%, 50%, and 75% percentiles of time-to-events as in [R3, R4, R5]). It is worth highlighting that utilizing these time-dependent metrics may reveal subtle differences that might be obscured when using CI and IBS alone.
Table H shows the number of statistically significant performance improvements over each survival model across the datasets. An improvement is considered significant if ConSurv provides statistically significant gains for at least one of the three time points. Again, we use Welch's t-test to calculate the p-values for both time-dependent discrimination and calibration across the datasets.
ConSurv offers superior calibration performance compared to ranking-based deep survival models (DeepHit and DRSA) and outperforms calibration-focused survival models (DCS and X-CAL) in discriminative power. Overall, compared to the deep learning baselines, ConSurv achieves statistically significant improvements in at least 4 out of 6 datasets, demonstrating its robust and consistent performance gains across various scenarios.
*Table H. Number of Significant Performances Across Datasets*
|Model|C-index|Brier score|
|-|-|-|
|CoxPH|3|1|
|DeepSurv|6|6|
|DeepHit|4|6|
|DRSA|5|6|
|DCS|5|5|
|X-CAL|5|5|
|SupWcon|5|4|
---
Rebuttal 3:
Title: Response to Reviewer VjbS
Comment: **Evaluating Calibration from Different Perspectives**:
We further compared model calibration using alternative calibration metrics -- Distributional Divergence for Calibration (DDC) [R6] and D-Calibration (D-CAL) [R7] -- that assess calibration from a slightly different perspective. These metrics offer a more nuanced view of calibration quality, directly evaluating the alignment of predicted survival probabilities with the uniform distribution.
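For intuition, the idea behind D-Calibration [R7] can be sketched as follows: for a well-calibrated model, the predicted survival probabilities $S_i(t_i)$ evaluated at each subject's own event time should be uniform on $[0, 1]$, which is tested by binning. This minimal sketch covers the uncensored case only; handling censored subjects requires the adjustments described in [R7].

```python
def d_cal_chi2(surv_probs, n_bins=10):
    """Chi-squared uniformity statistic over binned S_i(t_i) values
    (uncensored subjects only); small values indicate good calibration."""
    counts = [0] * n_bins
    for p in surv_probs:
        counts[min(int(p * n_bins), n_bins - 1)] += 1
    expected = len(surv_probs) / n_bins
    return sum((c - expected) ** 2 / expected for c in counts)
```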
For the assessment of statistical significance of the performance gain in terms of model calibration, we have additionally included the p-values for the DDC in Table I. (Please note that the D-CAL test already incorporates p-values by its definition and thus is not considered for comparing the statistical significance of the performance gains.) Table I highlights that ConSurv achieves significant calibration gain over the benchmarks -- especially the ranking-based deep survival models -- across most of the datasets.
*Table I. p-value for DDC*
|Model|METABRIC|NWTCO|GBSG|FLCHAIN|SUPPORT|SEER|
|-|-|-|-|-|-|-|
|CoxPH|-|**0.012**|-|**0.018**|**0.000**|**0.017**|
|DeepSurv|**0.030**|**0.000**|-|**0.018**|**0.000**|**0.000**|
|DeepHit|**0.000**|**0.000**|**0.000**|**0.000**|**0.000**|**0.005**|
|DRSA|**0.013**|-|**0.000**|**0.049**|**0.000**|**0.001**|
|DCS|0.341|-|**0.000**|0.318|**0.000**|-|
|X-CAL|0.926|-|0.964|**0.016**|**0.002**|-|
|SupWcon|-|**0.005**|-|**0.043**|**0.045**|**0.000**|
---
## *Reference*
[R1] H. Uno et al., “On the C‐statistics for evaluating overall adequacy of risk prediction procedures with censored survival data,” Statistics in Medicine, 2011.
[R2] E. Graf et al., “Assessment and comparison of prognostic classification schemes for survival data,” Statistics in Medicine, 1999.
[R3] Z. Wang et al., "Survtrace: Transformers for survival analysis with competing events," ACM-BCM, 2022.
[R4] C. Nagpal et al., "Deep survival machines: Fully parametric survival regression and representation learning for censored data with competing risks," IEEE Journal of Biomedical and Health Informatics, 2021.
[R5] C. Lee et al., "Temporal quilting for survival analysis," AISTATS, 2019.
[R6] F. Kamran et al., "Estimating calibrated individualized survival curves with deep learning," AAAI, 2021.
[R7] H. Haider et al., “Effective ways to build and evaluate individual survival distributions,” JMLR, 2020.
---
Rebuttal Comment 3.1:
Comment: Thank you to the authors for taking the time to reply again and provide further discussion and clarification.
I have no further questions. Thank you for pointing me to the relevant code - I agree that min-max normalisation has not led to data leakage. And thanks for clarifying how your results demonstrate a statistical significant improvement over current baselines. I find the arguments above convincing.
---
Reply to Comment 3.1.1:
Title: Thanks
Comment: Dear Reviewer VjbS,
We are sincerely grateful for your time and energy in the review process. In light of your satisfaction with our response, we wonder whether the reviewer would kindly consider revising the rating.
Thank you, Paper 15583 Authors
---
Rebuttal 4:
Title: reminder from your area chair to respond to this author feedback
Comment: Hello reviewer VjbS,
Thanks for already engaging in discussion with the authors on this paper. The authors have responded in detail to your most recent comments. Please try to provide a response before the end of the author/reviewer discussion period (Aug 13 11:59pm AoE).
Thanks,
Your AC | Summary: The authors study discrete-time survival analysis, proposing to train models using a loss function that combines the NLL loss and a modified NCE loss. This NCE loss is modified to take the survival outcomes (event times) into account, mitigating the effect of potential false negatives that have small event time differences.
The method is evaluated on 4 tabular clinical survival datasets, comparing against 6 baseline models and 3 method variations. The proposed method performs well compared to all baselines.
Strengths: - The paper is well-written overall.
- The proposed method is described well in Section 4.
- The proposed method is conceptually quite simple and intuitive. To add a contrastive loss term, and for this use a modified contrastive loss that takes the survival outcomes into account, makes intuitive sense.
- The proposed method seems to perform well compared to baselines, both in terms of discrimination and calibration. In particular, it performs well compared to the baseline of replacing the proposed modified NCE loss term with standard NCE.
Weaknesses: - The technical novelty/innovation is perhaps somewhat limited ("just" adding a contrastive loss term).
- The experimental evaluation could perhaps be more extensive, include more datasets. Would have been nice to see the method being applied also to some non-tabular dataset.
Summary:
- Somewhat limited technical novelty, but this is a well-written paper with a simple/intuitive method that seems to perform well compared to reasonable baselines. Thus, I am definitely leaning towards accept.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Could you perhaps apply the proposed method also to some non-tabular dataset?
- The visualization in Figure 2 is really neat, could you add this for the three other datasets as well (to the appendix)?
Minor things:
- 130: "by minimizing the following negative NLL loss", remove "negative"?
- Perhaps add 1-3 sentences to the start of Section 3, just to describe what is covered here (and why this is covered, why this is relevant for your method)?
- 248: "We compare our proposed method and the benchmarks with...", "benchmarks" --> "baselines"? The same also for line 251?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # [ Response to Reviewer 1ZZS ]
We thank the reviewer for the positive feedback on our work. We have addressed the comments in the updated manuscript and provided a point-by-point response below.
## (Technical Novelty about ConSurv)
ConSurv enhances the discriminative power of survival models by combining a contrastive learning framework with a weighted sampling technique based on the similarity of event times. Instead of directly altering risk or survival predictions, this method distinguishes samples in a latent space according to their survival outcomes, assigning weights based on differences in those outcomes so that the learning process focuses on the most informative samples. While Kerdabadi et al. [R9] apply a contrastive learning framework to time-to-event analysis, ConSurv is the first approach to use hard negative samples specifically to increase discriminative power in survival analysis. In addition, ConSurv achieves excellent calibration performance without a separate calibration-specific loss function [R6, R10]: the contrastive learning process inherently preserves good calibration while enhancing discriminative capability.
## (Challenges of applying the proposed method to non-tabular data.)
Unfortunately, despite the potential of the contrastive learning framework utilized in our method, we were unable to find suitable non-tabular survival data for immediate application. To the best of our knowledge, the only available dataset is the METABRIC dataset, which contains whole slide images (WSIs) of breast tumors [R4, R5] with time-to-event information for each sample. However, these WSIs present a challenge: each sample contains a varying number of image segments (collected from different locations of a tumor). Addressing this would require either domain expertise to choose the correct tumor region of interest or a deep learning technique (such as multiple instance learning) to handle samples with varying numbers of image segments. As applying augmentation in this setting is not straightforward, it was beyond the scope of our current work and remains an interesting avenue for future research.
## (Additional Experiment: t-SNE visualization)
We have included t-SNE visualizations for the real-world datasets used in our experiments in the Global PDF (Figure 2). We show results for FLCHAIN and SUPPORT, where the performance improvements are significant. When using the NLL alone, the representations cluster tightly in a narrow region regardless of time. When SNCE (contrastive learning with time-based weights) is added to the NLL, representations of samples with similar event times are positioned closer together. This helps explain our model's superior performance.
## (Clarification on Notations)
We thank the reviewer for pointing out typos and suggestions for better clarification. We have addressed those comments in the updated manuscript.
---
### *Reference*
[R4] H. Xu et al., "A whole-slide foundation model for digital pathology from real-world data," Nature, 2024.
[R5] P. Mobadersany et al., "Predicting cancer outcomes from histology and genomics using convolutional networks," National Academy of Sciences, 2018.
[R6] F. Kamran et al., "Estimating calibrated individualized survival curves with deep learning," AAAI, 2021.
[R9] Kerdabadi et al., “Contrastive learning of temporal distinctiveness for survival analysis in electronic health records,” CIKM, 2023.
[R10] M. Goldstein et al., “X-CAL: explicit calibration for survival analysis,” NeurIPS, 2020.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response.
I have read the other reviews and all rebuttals.
While two of the other reviewers were leaning towards reject, I think the authors responded well overall to their concerns. I don't see any clear reason to lower my score. | Summary: The paper proposes a contrastive learning loss to regularize maximum likelihood learning of discretized survival times. The main contribution of this work is the utilization of the Laplace kernel to weigh negative pairs inversely proportional to their time difference from the anchor sample, while also accounting for censoring. Specifically, negative pairs with event times close to that of the anchor sample contribute less to the contrastive loss. Experimental results on four datasets demonstrate that the proposed approach achieves a high Concordance Index (C-Index) without a significant loss in calibration when compared to baselines.
Strengths: The use of contrastive learning to improve model calibration in survival analysis seems interesting. Experimental results demonstrate the efficacy of this approach in maintaining a high C-index without significant loss in calibration, a trade-off that is difficult to achieve in practice.
Weaknesses: *Given that the main contribution of this work is the application of contrastive loss to survival analysis, the paper lacks complete details for reproducibility:*
- There are no details on how the model functions $ f_{\theta}, f_{\phi}$, and $g_{\psi} $ are parametrized.
- Details on the construction of positive pairs are very sparse. No motivation is provided in terms of why the SCARF approach is chosen and why it should be expected to work with survival data.
- The paper proposes the Laplace Kernel for weighing samples; it is unclear if alternative kernels were also considered.
- The notation seems sloppy; symbols ${f}$ and $Z$ are overloaded.
- I expect calibration and the C-Index should be sensitive to $\beta $; the paper does not discuss how to choose $\beta $ and what values resulted in the performance results reported. This phenomenon has been discussed in detail in [1]. I encourage the authors to include a comprehensive discussion on this trade-off.
- It is unclear how the margin parameter $\alpha$ was chosen. I encourage the authors to provide visualizations in the main paper to demonstrate sensitivity to both $\alpha$ and $\beta $.
*The proposed approach relies on discretizing event times, which is sensitive to the time-binning approach. Additionally, it seems the contrastive loss could be an indirect way to achieve consistency in event times. I encourage the authors to also benchmark with approaches that directly predict event times such as [2].*
*The four datasets used seem to be of low sample size. I encourage the authors to also consider larger datasets such as SUPPORT and SEER.*
**References**
- [1] Qi et al., "Conformalized Survival Distributions: A Generic Post-Process to Increase Calibration", ICML 2024
- [2] Chapfuwa et al., "Calibration and Uncertainty in Neural Time-to-Event Modeling", IEEE TNNLS 2020
Technical Quality: 2
Clarity: 1
Questions for Authors: - What are the values of $\alpha$ and $\beta $ for the reported results?
- Were alternative weighting kernels considered?
- Were alternative approaches for constructing positive pairs considered?
Confidence: 4
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: - The paper does not discuss modeling assumptions; for example, the assumed independent censoring mechanism could be violated in practice.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # [ Response to Reviewer Lfcy ]
We thank the reviewer for the valuable suggestions on our work. We have addressed the reviewer’s comments in our response below.
## (Sensitivity Analysis of $\alpha$)
Introducing a margin prevents a comparable pair of samples with distant (unobserved) event times from being mistakenly treated as similar due to a small difference between the observed event time of one sample and the censored time of the other. To demonstrate this effect, we conducted an experiment on additional synthetic data, adapting the time-to-event generation process of DeepHit [R2] for both event times $T_i$ and censoring times $C_i$:
$T_i \sim \exp\left((10 x_{i1})^2 + 5 x_{i3}\right)$
$C_i \sim \exp\left((10 x_{i2})^2 + 5 x_{i4}\right)$
Here, we randomly generated 1,000 random samples, where a sample is censored when $T_i > C_i$.
In Figure 1 of the Global PDF, we compare time differences based on censoring times and the (unobserved) ground-truth event times. For each event sample, we compute the time differences between its event time and: 1) the (observed) censoring time of its corresponding comparable censored sample, and 2) the (unobserved) ground-truth event time of that same censored sample. Without a margin, censored samples with censoring times close to event times may be incorrectly treated as having similar event times, leading to very low weight assignments. To avoid this, we introduce a margin to exclude such samples when computing our contrastive loss. Overall, the margin in $|\tau_i-\tau_j| \geq \alpha$ helps prevent failures in accurately distinguishing risk between comparable censored samples.
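The margin rule can be sketched as follows (a minimal sketch; the function name and the exact comparability convention are our illustrative assumptions). For an anchor with an observed event at $\tau_i$, a comparable censored sample (censored at $\tau_j \geq \tau_i$) is retained as a weighted negative only if the gap meets the margin:

```python
def comparable_negatives(tau_i, censored_times, alpha):
    """Keep comparable censored samples whose censoring time is at least
    alpha away from the anchor's event time (|tau_i - tau_j| >= alpha)."""
    return [tau_j for tau_j in censored_times
            if tau_j >= tau_i and tau_j - tau_i >= alpha]
```

Censored samples with censoring times inside the margin are excluded, since their unobserved event times may still be far from $\tau_i$.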
## (Sensitivity Analysis of $\beta$)
*Table D. Impact of balancing coefficient $\beta$ on performance metrics.*
|Dataset|METABRIC||||NWTCO||||GBSG||||FLCHAIN||||
|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
|**Methods**|**CI**|**IBS**|**DDC**|**D-CAL**|**CI**|**IBS**|**DDC**|**D-CAL**|**CI**|**IBS** |**DDC**|**D-CAL**|**CI**|**IBS**|**DDC**|**D-CAL**|
|$\beta$=0.01|0.673|0.089|0.148|*0.254*|0.747|**0.069**|0.493|0.043|0.655|0.149|0.150|0.015|0.746|**0.063**|0.493|*0.353*|
|$\beta$=0.1|0.682|0.124|0.109|*0.320*|0.759|0.091|0.470|0.095|0.677|0.171|0.156|0.012|0.775|0.108|0.364|*0.543*|
|$\beta$=1.0|**0.706**|0.086|**0.093**|*0.524*|**0.773**|0.078|0.222|*0.428*|**0.703**|**0.138**|0.180|*0.075*|**0.816**|0.096|**0.338**|*0.465*|
|$\beta$=10.0|0.652|0.166|0.096|*0.405*|0.759|0.093|0.344|*0.397*|0.686|0.169|**0.139**|0.002|0.777|0.107|0.354|*0.537*|
|$\beta$=100.0|0.687|**0.078**|0.127|*0.492*|0.743|0.071|**0.463**|*0.352*|0.694|0.170|0.363|*0.057*|0.743|0.071|0.551|*0.057*|
F. Kamran et al. [R6] demonstrate that while the ranking losses used in survival analysis improve discriminative power, they often perform poorly on calibration metrics. Similarly, Qi et al. [R7] point out the independence of discrimination and calibration, showing that models with perfect discrimination can still have significant calibration problems. This suggests that finding a good balance is crucial for improving model performance.
We investigated the relationship between discrimination and calibration, focusing on the impact of the balancing coefficient $\beta$. Table D reports results over $\beta \in \{0.01, 0.1, 1.0, 10.0, 100.0\}$, including the additional runs with $\beta=0.1$ and $\beta=10.0$. A very small $\beta$ (0.01) lets the NLL dominate, resulting in good calibration. As $\beta$ increases, discriminative power improves while calibration is maintained. However, when $\beta$ grows too large (beyond 1.0), the contrastive term no longer acts as an auxiliary loss, leading to suboptimal discriminative and calibration performance. An optimal balance is achieved at $\beta=1.0$, which performs well on both metrics. This highlights the need for hyperparameter search to achieve optimal performance, as detailed in Appendix C.3.
## (Details of Generating Positive/Negative Samples)
We construct positive and negative samples using the contrastive learning framework for tabular data (SCARF) [R8]. SCARF addresses the challenge of augmenting tabular data while maintaining semantic integrity. Augmented versions $\tilde{x}^{(i)}$ of data points $x^{(i)}$ are created by corrupting a random subset of features, replacing their values with draws from the corresponding marginal distributions. The augmented reference sample serves as the positive pair, while augmented versions of the other mini-batch samples serve as negative pairs. This marginal-resampling method avoids the out-of-distribution issues common with noise injection (described in Appendix B) and is applicable to various feature types. Further details are provided in the updated manuscript.
## (Laplacian Kernel)
We compared several kernel functions, including the exponential and standard kernels. These kernels assign lower weights to closer samples and higher weights to more distant ones. Our main goal, however, was to identify a kernel that retains this distance-weighting property while saturating at a finite value instead of increasing without bound. Ultimately, we selected a modified version of the Laplacian kernel, whose weights converge rather than diverge as the distance between samples increases.
## (Clarification on Notations)
Each model function—$f$ (encoder $f_\theta$, hazard $f_\phi$) and $g$ (projection head $g_\psi$)—uses an MLP. Parameters were optimized using random search as detailed in Appendix E.2. We have maintained the representation as $z$ and updated the normalizing constant to $C$.
---
### *Reference*
[R2] C. Lee et al., "DeepHit: A Deep Learning Approach to Survival Analysis with Competing Risks," AAAI, 2018.
[R6] F. Kamran et al., "Estimating calibrated individualized survival curves with deep learning," AAAI, 2021.
[R7] Qi et al., "Conformalized Survival Distributions: A Generic Post-Process to Increase Calibration", ICML 2024.
[R8] D. Bahri et al., “Scarf: Self-supervised contrastive learning using random feature corruption,” ICLR, 2022.
---
Rebuttal Comment 1.1:
Title: Official Response by Reviewer Lfcy
Comment: Thank you for your response and for providing additional experimental results on DRAFT, large datasets, and sensitivity analysis.
- Unfortunately, I don't think the comparison to DRAFT is adequate; a fairer comparison would be against DATE, since Chapfuwa et al. have already demonstrated that DATE is superior to DRAFT.
- The connection of contrastive learning to model calibration appears to be superficial. The paper does not provide theoretical guarantees, only experimental results. Given that the experimental results highlight that $\beta$ controls the calibration-discrimination trade-off, this is a key weakness of the work. This weakness is not shared by the conformalized post-processing approach detailed by Qi et al. Hence, revisions should include comparisons to CSD and the Kaplan-Meier estimator.
- The paper will require significant revisions to address the sloppy mathematical notations and provide complete details on the modeling choices.
Considering these issues, as well as the other reviews, I am inclined to maintain my score.
---
Rebuttal 2:
Title: Response to Reviewer Lfcy
Comment: # [ Response to Reviewer Lfcy ]
Once again, we would like to thank you for your invaluable feedback! We were wondering whether our response from Aug 7 has sufficiently addressed your comment. If you have any remaining comments, please let us know. We would be happy to do our utmost to address them!
---
Rebuttal 3:
Title: reminder from your area chair to respond to this author feedback
Comment: Hello reviewer Lfcy,
The author/reviewer discussion period ends very soon (Aug 13 11:59pm AoE). It would be great if you could respond to the authors as they've taken the time to respond to your review.
Thanks,
Your AC
---
Rebuttal 4:
Title: Response to Reviewer Lfcy
Comment: # [ Response to Reviewer Lfcy ]
**Regarding the Comparisons with DRAFT and DATE**:
We appreciate the reviewer's insightful comments regarding the comparison of ConSurv with DRAFT and DATE.
We opted to include DRAFT, not DATE, in our comparative analysis due to the following considerations: DRAFT posits a log-normal distribution for the underlying time-to-event process, $p(t∣x)$. This allows for a direct calculation of the survival function, $S(t∣x)$, facilitating a clear and equitable comparison using the performance metrics presented in our study.
Conversely, while DATE represents a significant advancement in deep survival models by directly generating time-to-event outcomes, it employs a generator that learns the underlying distribution, $p(t∣x)$, implicitly. Consequently, computing $S(t∣x)$ necessitates drawing a large number of samples, complicating the comparison against the reported performance metrics.
Therefore, we restricted our comparison to DRAFT as a deep learning counterpart that explicitly focuses on AFT models.
**Regarding the comparison with CSD (Qi et al.’s Method)**:
We appreciate the suggestion to compare ConSurv with CSD (Qi et al.'s post-hoc calibration method), but we believe a direct comparison is not within the scope of our work. Our focus lies in improving discriminative power while preserving calibration, a distinct advancement from traditional ranking losses. CSD is a post-hoc approach for improving the calibration of survival models, making a direct comparison less relevant.
Furthermore, Qi et al.'s work was published after our submission deadline and thus couldn't be included in our original analysis. However, we acknowledge its value and will incorporate a comparison in the appendix of our final version by applying their post-hoc calibration to existing ranking-based models.
**The Connection Between Contrastive Learning and Model Calibration**:
As the reviewer pointed out, Kamran et al. and Qi et al. have discussed the trade-off between discriminative power and calibration, where the trade-off arises because discrimination is typically assessed at an individual level, whereas calibration is evaluated at the population level. However, the parameter, $\lambda$, in the loss term proposed by Kamran et al. does not directly explain this trade-off. Instead, it controls the balance between two losses in the composite loss function, where each loss influences both the discrimination and calibration of the survival model.
To the best of our knowledge, there is no theoretical proof on why utilizing the ranking loss may lead to poor calibration. We hypothesize that directly modifying the output of survival models (whether the hazard function in DRSA or the PMF of event times in DeepHit) can bias the network towards optimizing the approximated C-index, potentially sacrificing accurate modeling of event time distributions, as conjectured by Kamran et al.
In contrast, our approach employs a contrastive learning framework that adjusts latent representations based on the similarity in the event times, implicitly promoting discriminative power without directly impacting the distribution of event times. Consequently, we were not able to observe a clear trade-off between the two loss functions, a phenomenon more commonly seen with traditional ranking losses.
**Clarification on Notations**:
We've taken steps to improve clarity by clarifying notations and providing comprehensive implementation details in both the appendix and this rebuttal. If you have specific areas that you feel still require attention, please point them out and we'll be happy to address them further. | Summary: This paper presents an approach to survival analysis that aims to improve both calibration and discrimination. It contributes several novelties, such as the handling of right-censoring and an SNCE loss that addresses calibration and ranking simultaneously (NLL + contrastive loss). The paper offers a principled treatment of survival analysis, a reasonable hazard function, and a ranking loss for the domain-specific problem of stratification by risk levels using a generated patient rank list.
Strengths: -> considerable novelty: the SNCE loss and the handling of right-censoring
-> ranking loss introduced can be generally used for any method aiming to generate or classify along ranked lists
-> well presented math for different loss functions and easy to follow
Weaknesses: -> no use of unstructured data
-> censored data could be explained more clearly
Technical Quality: 4
Clarity: 4
Questions for Authors: I understood why there is censored data, but if the authors could explain with an example of a time series what censored data looks like that would make it easier.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: unstructured data, such as doctors' notes, could be added
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # [ Response to Reviewer RNpF ]
We thank the reviewer for the positive feedback on our work. We have included the details of censoring in the General Response Section.
## (Missing Experiments on Unstructured Data)
Unfortunately, despite the potential of the contrastive learning framework utilized in our method, we were unable to find suitable unstructured survival data for immediate application. To the best of our knowledge, the only available dataset is the METABRIC dataset, which contains whole slide images (WSIs) of breast tumors [R4, R5] with time-to-event information for each sample. However, these WSIs present a challenge: each sample contains a varying number of image segments (collected from different locations of a tumor). Addressing this would require either domain expertise to choose the correct tumor region of interest or a deep learning technique (such as multiple instance learning) to handle samples with varying numbers of image segments. As applying augmentation in this setting is not straightforward, it was beyond the scope of our current work and remains an interesting avenue for future research.
---
### *Reference*
[R4] H. Xu et al., "A whole-slide foundation model for digital pathology from real-world data," Nature, 2024.
[R5] P. Mobadersany et al., "Predicting cancer outcomes from histology and genomics using convolutional networks," Proceedings of the National Academy of Sciences, 2018.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thanks for the rebuttal. Agree with the future work. | Rebuttal 1:
Rebuttal: # [ General Response to the Reviewers ]
We thank the reviewers for taking their valuable time to provide insightful comments and suggestions for the paper. We believe the thoughtful reviews and recommendations have substantially improved the quality of the paper. In this response, we aim to address the common comments raised by the reviewers and outline the major changes made:
- Results on additional baselines including DRAFT [R1]
- Results on additional large real-world datasets (i.e., SUPPORT and SEER)
- Explanations on censoring in time-to-event analysis
## (Additional Experiment: A New Baseline)
We have included a new baseline, DRAFT [R1], a deep learning approach to the accelerated failure time (AFT) model that assumes a log-normal distribution for the underlying distribution of event times. This baseline directly models event times and hence does not rely on the discretization that many deep survival models (e.g., DeepHit [R2], DRSA [R3], and our method) use. In Table A, we show that while DRAFT demonstrates comparable performance to other deep learning baselines, our method achieves superior performance in both discrimination and calibration.
*Table A. Discrimination and calibration of survival models for DRAFT.*
| **Data** | **Model** | **CI** | **IBS** | **DDC** | **D-Cal** |
|-|-|-|-|-|-|
| **METABRIC** | DRAFT |0.651±0.005 | 0.209±0.023 | 0.176±0.027 | 0.194±0.387 |
| | ConSurv | 0.661±0.040 | 0.189±0.019 | 0.010±0.026 | 0.234±0.234 |
| **NWTCO** | DRAFT | 0.701±0.002 | 0.125±0.009 | 0.364±0.061 | 0.000±0.000 |
| | ConSurv | 0.731±0.037 | 0.100±0.017 | 0.195±0.042 | 0.653±0.454 |
| **GBSG** | DRAFT | 0.678±0.018 | 0.228±0.006 | 0.356±0.049 | 0.000±0.000 |
| | ConSurv | 0.683±0.026 | 0.178±0.011 | 0.172±0.019 | 0.012±0.027 |
| **FLCHAIN** | DRAFT | 0.788±0.015 | 0.117±0.017 | 0.660±0.018 | 0.000±0.000 |
| | ConSurv | 0.794±0.023 | 0.087±0.045 | 0.293±0.039 | 0.317±0.338 |
## (Additional Experiments: Large Real-world Dataset)
We thank the reviewer for suggesting additional experiments on large real-world datasets. We have conducted experiments on the SUPPORT dataset (8,873 samples) and the SEER Prostate dataset (54,544 samples). As presented in Tables B and C, our proposed method not only significantly outperforms all baseline models in terms of discriminative power, but also demonstrates superior calibration performance across these relatively large datasets.
*Table B. Performance for SUPPORT dataset.*
| **Data** | **Model** | **CI** | **IBS** | **DDC** | **D-Cal** |
|-|-|-|-|-|-|
| **SUPPORT**| **CoxPH** | 0.603±0.006 | 0.196±0.006 | 0.258±0.006 | 0.000±0.000 |
| | **DeepSurv**| 0.596±0.014 | 0.198±0.009 | 0.244±0.035 | 0.000±0.000 |
| | **DRAFT** | 0.593±0.020 | 0.244±0.024 | 0.377±0.033 | 0.000±0.000 |
| | **DeepHit** | 0.615±0.007 | 0.275±0.002 | 0.336±0.003 | 0.000±0.000 |
| | **DRSA** | 0.521±0.048 | 0.268±0.022 | 0.523±0.047 | 0.000±0.000 |
| | **DCS** | 0.582±0.045 | 0.216±0.016 | 0.142±0.033 | 0.000±0.000 |
| | **X-Cal** | 0.611±0.007 | 0.212±0.016 | 0.191±0.034 | 0.000±0.000 |
| | **ConSurv** | 0.615±0.007 | 0.194±0.006 | 0.142±0.014 | 0.000±0.000 |
*Table C. Performance for SEER dataset.*
| **Data** | **Model** | **CI** | **IBS** | **DDC** | **D-Cal** |
|-|-|-|-|-|-|
| **SEER** | **CoxPH** | 0.858±0.016 | 0.009±0.001 | 0.967±0.005 | 1.000±0.000 |
| | **DeepSurv**| 0.764±0.037 | 0.014±0.004 | 1.000±0.000 | 1.000±0.000 |
| | **DRAFT** | 0.825±0.049 | 0.014±0.004 | 0.804±0.017 | 0.126±0.182 |
| | **DeepHit** | 0.870±0.014 | 0.100±0.001 | 0.336±0.003 | 0.000±0.000 |
| | **DRSA** | 0.840±0.051 | 0.017±0.001 | 0.523±0.047 | 0.000±0.000 |
| | **DCS** | 0.860±0.014 | 0.010±0.001 | 0.142±0.033 | 0.439±0.513 |
| | **X-Cal** | 0.838±0.041 | 0.009±0.001 | 0.191±0.034 | 0.459±0.425 |
| | **ConSurv** | 0.865±0.014 | 0.004±0.003 | 0.142±0.014 | 1.000±0.000 |
## (Details of Censoring)
Censoring is a crucial concept in survival analysis (also known as time-to-event analysis). It refers to cases where the event of interest (e.g., death) is not observed for some individuals by the end of the study, either because the study ends before the event occurs or because the individual is lost to follow-up. Since not all events are observed, survival data are frequently right-censored; dealing with censored samples is a critical aspect of survival analysis.
Hence, the survival data for an individual patient $i$ consists of either a time-to-event $T_i \in \mathbb{R}^+$ or a time-to-censoring $C_i \in \mathbb{R}^+$, and an indicator $\Delta_i = \mathbb{I}(T_i < C_i)$; $\Delta_i = 1$ if the patient experienced the event of interest and $\Delta_i = 0$ if the patient was right-censored. Here, in survival analysis, censoring provides the information that the patient had not experienced the event (e.g., was alive) up to time $C_i$.
We will provide a clear explanation of censoring in survival analysis in the updated manuscript for a general audience.
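To make the notation above concrete, a small self-contained sketch (our own toy example; the function and variable names are hypothetical) of how right-censored records arise from latent event and censoring times:

```python
import random

def observe(T, C):
    """Map a latent event time T and censoring time C to the observed record:
    (observed_time, Delta) with Delta = 1 if the event was seen (T < C), else 0."""
    return (min(T, C), 1 if T < C else 0)

# A tiny synthetic cohort: exponential event and censoring times (rates are arbitrary).
random.seed(0)
cohort = [(random.expovariate(0.10), random.expovariate(0.05)) for _ in range(5)]
records = [observe(T, C) for T, C in cohort]
# Each record is (min(T_i, C_i), Delta_i); Delta_i = 0 marks a right-censored patient,
# of whom we only know they were event-free up to the recorded time.
```

For instance, a patient with true event time 7 who drops out of the study at time 2 yields the record (2, 0): only the fact of being event-free up to time 2 is retained.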
---
### *Reference*
[R1] P. Chapfuwa et al., "Calibration and Uncertainty in Neural Time-to-Event Modeling," IEEE TNNLS, 2020.
[R2] C. Lee et al., "DeepHit: A Deep Learning Approach to Survival Analysis with Competing Risks," AAAI, 2018.
[R3] K. Ren et al., “Deep recurrent survival analysis,” AAAI, 2019.
Pdf: /pdf/4b0b472b8ec7b4ff381b07ee9d377cc44cfb04c0.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Gradient-Free Methods for Nonconvex Nonsmooth Stochastic Compositional Optimization | Accept (poster) | Summary: The authors propose a zeroth-order algorithm for Stochastic Compositional Optimization (SCO) problems. Such problems take the form of a composition of two functions, each depending on a random variable, and the corresponding objective is to be minimized in expectation. The authors consider the nonsmooth nonconvex setting and propose an algorithm to tackle such problems. The algorithm is called GFCOM, and a variance-reduced variant, GFCOM+, is also proposed. The authors provide a complexity analysis for the proposed method, with explicit estimates for reaching Goldstein approximate stationary points. They extend their analysis to the convex setting and conclude with numerical experiments and a comparison with the method of Kiefer-Wolfowitz.
I did not read all the details of the proof of the variance-reduced algorithm or of the convex analysis.
Strengths: The paper is well written, the assumptions are clearly presented. The analysis flows well and looks technically solid.
Weaknesses: The numerical section does not seem to use a specific compositional structure, so it constitutes a weak illustration of the relevance of the method. The paper fails to provide a convincing motivating example related to machine learning.
The value of $c$ should be given to actually implement the algorithm.
Technical Quality: 3
Clarity: 3
Questions for Authors: Lemma 3.7 and 3.8, what are the precise references to the paper of Lin et al, which part of the paper do the authors refer to?
Are the random variables $\xi$ and $\zeta$ independent? If yes it should probably be stated.
This sentence is misleading: "hard instances have shown that finding an $\epsilon$-stationary point with respect to the Clarke subdifferential of a Lipschitz function in finite time is computationally intractable". Being $\epsilon$-stationary is not the right notion for nonsmooth optimization: even in the convex setting, the subdifferential does not tend to 0. The hardness result of Kornowski and Shamir is actually different from what is suggested by the sentence and relates to the impossibility of getting close to the critical set.
Table 1: what is WS-GFCOM?
This sentence is misleading: "Consequently, they considered a refined notion of approximate stationary point in terms of the Goldstein $\delta$-subdifferential". Kornowski and Shamir do not use this notion.
Section 6: what is the underlying compositional structure? What are the random variables?
Equality (2) is extremely misleading as a different expression is used in the algorithms and proofs.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Section 4.2: what is the value of $c$? Without it, the algorithm cannot be implemented.
The convex section is probably of more limited interest than the rest.
The paper should probably contain a discussion comparing the obtained complexity estimate with the more usual zeroth-order stochastic optimization literature. What is the added cost of the stochastic compositional structure?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer VEqR for your insightful and detailed review. Here we would like to address your concern.
**Q1:** What are the precise references to the paper of Lin et al.?
**A1:** Lemma 3.7 refers to Proposition 2.3 of Lin et al. [30], and Lemma 3.8 refers to Lemma D.1 of Lin et al. [30].
We will include these details in the revision.
**Q2:** Are the random variables $\mathbf{\xi}$ and $\mathbf{\zeta}$ independent?
**A2:** Yes, these two random variables are independent. We will explicitly state this point in the revision.
**Q3:** The hardness result of Kornowski and Shamir is different from what is suggested by the sentence.
**A3:** We will rephrase this sentence to avoid confusion in the revision.
Specifically, Zhang et al. [13] showed that no deterministic algorithms can find an $\epsilon$-stationary point with respect to the Clarke subdifferential in finite time.
They construct the lower bound with a resisting oracle, so their result only holds for deterministic algorithms that generate a fixed sequence.
Later, Kornowski and Shamir [14] showed that no randomized algorithms can find an $\epsilon$-near approximately stationary point in finite time with a probability of almost 1.
It can be inferred from their result that no deterministic or randomized algorithm can find an $\epsilon$-stationary point in finite time since the $\epsilon$-stationary point is a tighter convergence notion than the $\epsilon$-near approximately stationary point.
**Q4:** What is WS-GFCOM?
**A4:** It is the warm-started GFCOM method (lines 218-219), which is presented in Section 5 for the convergence analysis of the convex nonsmooth SCO problem.
The details of WS-GFCOM are presented in Algorithm 3.
**Q5:** Kornowski and Shamir do not use this notion.
**A5:** Thanks for your careful review. The refined notion in terms of Goldstein $\delta$-subdifferential was proposed by Zhang et al. [13]. We will correct the citation in the revision.
**Q6:** What is the underlying compositional structure? What are the random variables?
**A6:** We provide the details for the compositional formulation of these problems below.
1. For the portfolio management problem (6), we formulate it as
$$\min \Phi(x)= \mathbb{E}[F(\mathbb{E}[G(x;\zeta)];\xi)]$$
where
$$G(x;\zeta)= \left[x_1, \ldots, x_N, \langle r_{\zeta}, x \rangle\right]^{\top}$$
and
$$F(w;\xi)=- \langle r_{\xi}, w_{[N]}\rangle + (\langle r_\xi, w_{[N]}\rangle - w_{N+1})^2 + \beta (x).$$
The random variables in the above formulation are $\xi$ and $\zeta$.
Both $\xi$ and $\zeta$ are uniformly sampled from $\\{1,\ldots,T\\}$.
2. For the Bellman residual minimization problem (after line 271), we formulate it as
$$\min \Phi(w)=\mathbb{E}[F(\mathbb{E}[G(w;\zeta)];\xi)]$$
where
$$ G(w; \zeta) = [\langle \psi_1, w \rangle, r_{1, \zeta_1} + \gamma \langle \psi_{\zeta_1}, w \rangle, \ldots, \langle \psi_n, w \rangle, r_{n, \zeta_n} + \gamma \langle \psi_{\zeta_n}, w \rangle]^{\top}$$
and
$$ F(z; \xi) = h(z_{2 \xi} - z_{2\xi+1}).$$
The random variables in the above formulation are $\zeta = [\zeta_1, \ldots, \zeta_n]^{\top}$ and $\xi$. Specifically, each $\zeta_i$ is uniformly sampled from $\\{P_{i,1}, \ldots, P_{i,n}\\}$ and $\xi$ is uniformly sampled from $\\{1,\dots,T\\}$.
We will provide the above explicitly compositional formulation in the revision.
**Q7:** Equality (2) is extremely misleading as a different expression is used in the algorithms and proofs.
**A7:** Thanks for your careful review. We will use different notations in Equality (2) to distinguish $v_t$ in the algorithm in the revision.
**Q8:** What is the value of $c$.
**A8:** According to Lemma 8 of Duchi et al. [33], we can take $c=1$ when $\mathcal{P}$ is the uniform distribution in the unit ball.
We will provide this explanation in the revision.
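For concreteness, a minimal sketch (our own, not the authors' code) of a two-point randomized-smoothing gradient estimator of this general kind, with directions sampled uniformly on the unit sphere; `two_point_estimator` and all other names are our assumptions, and the exact distribution and constants used in the paper may differ:

```python
import numpy as np

def two_point_estimator(phi, x, delta, batch, rng):
    """Monte-Carlo average of (d / (2*delta)) * (phi(x + delta*w) - phi(x - delta*w)) * w
    over random unit directions w, a standard two-point zeroth-order estimator."""
    d = x.size
    W = rng.standard_normal((batch, d))
    W /= np.linalg.norm(W, axis=1, keepdims=True)      # directions uniform on the unit sphere
    diffs = phi(x + delta * W) - phi(x - delta * W)    # shape (batch,)
    return (d / (2 * delta)) * (diffs[:, None] * W).mean(axis=0)

rng = np.random.default_rng(0)
x = np.array([1.0, 0.0, 0.0])
phi = lambda V: np.sum(V * V, axis=-1)                 # phi(x) = ||x||^2, so grad phi = 2x
g_hat = two_point_estimator(phi, x, delta=0.01, batch=200_000, rng=rng)
```

On the quadratic $\phi(x)=\lVert x\rVert^2$ the estimate concentrates around $2x$ as the batch grows; only function evaluations of $\phi$ are needed, never its gradient.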
**Q9:** The convex section is probably of more limited interest than the rest.
**A9:** We agree that the contribution for the nonconvex case is more interesting, while the convex case is also an important class of problems.
In Section 5, we show that convexity can result in the improved convergence guarantee to find the $(\delta, \epsilon)$-Goldstein stationary point.
**Q10:** The paper should probably contain a discussion, comparing the obtained complexity estimate with the more usual zero-th order stochastic optimization literature.
**A10:** Thanks for your suggestion. We present the function query complexity of zeroth-order methods for different classes of stochastic optimization problems in the following table.
| Methods | Problem | Complexity | Reference |
| -------- | ------- | ------- | ------- |
| GFM | $\min f(x)$ | $\mathcal{O}(d^{1.5}\delta^{-1}\epsilon^{-4})$ | [30] |
| GFM+ | $\min f(x)$ | $\mathcal{O}(d^{1.5}\delta^{-1}\epsilon^{-3})$ | [31] |
| Online to Nonconvex | $\min f(x)$ | $\mathcal{O}(d \delta^{-1}\epsilon^{-3})$ | [32] |
| GFCOM | $\min f(g(x))$ | $\mathcal{O}(d^{3.5} \delta^{-3}\epsilon^{-6})$ | Corollary 4.2 |
| GFCOM+ | $\min f(g(x))$ | $\mathcal{O}(d^{3.5} \delta^{-3}\epsilon^{-5})$ | Corollary 4.4 |
**Reference**
- [13] Jingzhao Zhang, Hongzhou Lin, Stefanie Jegelka, Suvrit Sra, and Ali Jadbabaie. Complexity of finding stationary points of nonconvex nonsmooth functions. ICML 2020.
- [14] Guy Kornowski, and Ohad Shamir. Oracle complexity in nonsmooth nonconvex optimization. Neurips 2021.
- [30] Tianyi Lin, Zeyu Zheng, and Michael Jordan. Gradient-free methods for deterministic and stochastic nonsmooth nonconvex optimization. Neurips 2022.
- [31] Lesi Chen, Jing Xu, and Luo Luo. Faster gradient-free algorithms for nonsmooth nonconvex stochastic optimization. ICML 2023.
- [32] Guy Kornowski, and Ohad Shamir. An algorithm with optimal dimension-dependence for zero-order nonsmooth nonconvex stochastic optimization. JMLR 2024.
- [33] John C. Duchi, Peter L. Bartlett, and Martin J. Wainwright. Randomized smoothing for stochastic optimization. SIAM Journal on Optimization 22.2 (2012): 674-701.
---
Rebuttal Comment 1.1:
Title: Answer to rebuttal
Comment: I have read the authors' response. Thanks. | Summary: This paper investigates stochastic compositional optimization (SCO) problems, which are popular in many real-world applications. The authors focus on nonconvex and nonsmooth SCO, and propose gradient-free stochastic methods for finding the $(\delta,\epsilon)$-Goldstein stationary points of such problems with non-asymptotic convergence rates. Furthermore, they also use a variance reduction technique to accelerate their algorithms with improved results.
Strengths: * The study of the SCO problem is highly significant, and the research on gradient-free algorithms is also of considerable value.
* This paper is well-written and easy to understand.
Weaknesses: * The technical novelty of this work is limited, as the proposed algorithms are mostly based on SGFM [1].
* The authors do not prove that $\mathbf{v}\_t$ is an unbiased gradient estimator of $\nabla \Phi_\delta (\mathbf{x}_t)$ (See Question 1 for details).
* The authors utilize variance reduction technique to accelerate their algorithms. However, they do not review some important related work on variance reduction in this paper, e.g., SVRG, STORM and so on.
[1] Lin et al. Gradient-free methods for deterministic and stochastic nonsmooth nonconvex optimization. NeurIPS, 2022.
Technical Quality: 2
Clarity: 4
Questions for Authors: * My primary concern is whether $\mathbf{v}\_t$ in Algorithm 1 is an unbiased estimator of $\nabla \Phi\_\delta (\mathbf{x}\_t)$. Although Lin et al. (2022) have proven that $\mathbf{v}\_t$ in Eq. (2) is an unbiased estimator of $\nabla f\_\delta (\mathbf{x})$, such gradient estimation cannot be directly applied to the SCO problem by introducing auxiliary variables $\mathbf{y}\_t$ and $\mathbf{z}\_t$ in Eq. (3). Specifically, although the estimation of each layer function and its gradients is unbiased, i.e., $\mathbb{E}\_{\xi\_1} [F(\mathbf{x};\xi\_1)]=f(\mathbf{x})$, $\mathbb{E}\_{\xi\_1} [\nabla F(\mathbf{x};\xi\_1)]=\nabla f(\mathbf{x})$, and $\mathbb{E}\_{\xi\_2} [G(\mathbf{x};\xi\_2)]=g(\mathbf{x})$, the main challenge in SCO lies in obtaining an unbiased estimation of gradient $\nabla f(g(\mathbf{x}))$. This is because the expectation over $\xi\_2$ cannot be moved inside of $\nabla f$ such that $\mathbb{E}_{\xi_1,\xi_2} [\nabla F(G(\mathbf{x};\xi_2);\xi_1)]\neq \nabla f(g(\mathbf{x}))$.
* The proposed algorithms in this paper appear to be a direct application of existing algorithms [1] to the SCO problem. Additionally, the accelerated algorithms employ variance reduction technique, which have already been widely used for other settings in SCO problems [2,3]. Could you elaborate more details on the technical challenge and novelty of your work?
[1] Lin et al. Gradient-free methods for deterministic and stochastic nonsmooth nonconvex optimization. NeurIPS, 2022.
[2] Yuan et al. Efficient smooth non-convex stochastic compositional optimization via stochastic recursive gradient descent. NeurIPS, 2019.
[3] Zhang and Xiao. A stochastic composite gradient method with incremental variance reduction. NeurIPS, 2019.
Confidence: 4
Soundness: 2
Presentation: 4
Contribution: 2
Limitations: See above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer nNfG for your detailed review.
**Q1:** The authors utilize variance reduction technique to accelerate their algorithms. However, they do not review some important related work on variance reduction in this paper, e.g., SVRG, STORM and so on.
**A1:** Thanks for the suggestion. We will provide a more detailed literature review of variance reduction techniques in the revision, including SVRG [10] and STORM [9].
**Q2:** The authors do not prove that $\\mathbf{v_t}$ is an unbiased gradient estimator of $\nabla \Phi_{\delta}(\mathbf{x}_t)$
**A2:** We emphasize that our algorithm design and theoretical analysis do not require ${\mathbf v}_t$ to be unbiased.
In contrast, we focus on providing the upper bound on the expected gradient estimation error, e.g. Lemma B.1 (line 410-415).
The sharp gradient estimation error indeed leads to reasonable convergence rates (e.g., Theorem 4.1) even if the estimator is biased.
Furthermore, solving the nonconvex stochastic (compositional) optimization problem with a biased gradient estimator is very common in the literature [2,3,4,5,8,9].
For example, Zhang and Xiao [3] applied a biased gradient estimator to solve a smooth stochastic compositional optimization problem, and their introduction mentioned
**Using biased gradient estimators can cause various difficulties for constructing and analyzing randomized algorithms, but is often inevitable in dealing with more complex objective functions other than the empirical risk.**
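The bias of the naive plug-in compositional estimator can also be checked numerically. The following toy example is our own (not from the paper or the cited references): with $f(u) = u^3$ and $G(x;\zeta) = x + \zeta$, $\zeta \sim \mathcal{N}(0,1)$, we have $g(x) = x$, yet plugging a single inner sample into $f'$ yields a biased gradient estimate:

```python
import random
import statistics

# Toy SCO instance: f(u) = u^3 and G(x; zeta) = x + zeta with zeta ~ N(0, 1),
# so g(x) = E[G(x; zeta)] = x and the true gradient of f(g(x)) = x^3 is 3*x^2.
# The naive plug-in estimator f'(G(x; zeta)) * G'(x; zeta) = 3*(x + zeta)^2
# has expectation 3*x^2 + 3, i.e., it is biased upward by 3*Var(zeta).
random.seed(0)
x = 2.0
samples = [3.0 * (x + random.gauss(0.0, 1.0)) ** 2 for _ in range(200_000)]
mc_mean = statistics.fmean(samples)   # close to 15.0
true_grad = 3.0 * x ** 2              # 12.0
```

The Monte-Carlo mean sits near 15 rather than the true gradient 12, illustrating why the expectation over $\zeta$ cannot be moved inside $\nabla f$ and why analyses must control the estimation error of biased estimators instead of assuming unbiasedness.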
**Q3:** The proposed algorithms in this paper appear to be a direct application of existing algorithms [1] to the SCO problem. Additionally, the accelerated algorithms employ variance reduction technique, which have already been widely used for other settings in SCO problems [2, 3]. Could you elaborate more details on the technical challenge and novelty of your work?
**A3:** The theoretical results in our work cannot be obtained by combining the techniques of Ref. [1,2,3].
Notice that the convergence analysis of the existing work [2, 3] heavily depends on the smoothness of both the outer function $f$ (or its differentiable term) and the inner function $g$.
In this work, both $f$ and $g$ may be nonsmooth, leading to the nonsmooth compositional function $\Phi(\mathbf{x}) = (f \circ g)(\mathbf{x})$.
We only apply the randomized smoothing technique [1] on $\Phi(\mathbf{x})$ to construct its smoothing surrogate $\Phi_{\delta}(\mathbf{x}) = (f \circ g)_{\delta}(\mathbf{x})$.
However, it does NOT lead to any smoothing surrogate of $f$ or $g$, which means we cannot directly combine the analysis of Ref. [1, 2, 3].
In contrast, we need to use the Lipschitz continuity of $f$ and $g$
(rather than their smoothness) to carefully bound the expected mean squared error with reasonable sample complexity (see Lemmas B.1 and C.1), which is quite different from the analysis in the smooth case [1, 2, 3].
**Reference**
- [1] Tianyi Lin, Zeyu Zheng, and Michael Jordan. Gradient-free methods for deterministic and stochastic nonsmooth nonconvex optimization. Advances in Neural Information Processing Systems 35 (2022): 26160-26175.
- [2] Huizhuo Yuan, Xiangru Lian, and Ji Liu. Stochastic recursive variance reduction for efficient smooth non-convex compositional optimization. arXiv preprint arXiv:1912.13515 (2019).
- [3] Junyu Zhang, and Lin Xiao. A stochastic composite gradient method with incremental variance reduction. Advances in Neural Information Processing Systems 32 (2019).
- [4] Mengdi Wang, Ethan X. Fang, and Han Liu. Stochastic compositional gradient descent: algorithms for minimizing compositions of expected-value functions. Mathematical Programming 161 (2017): 419-449.
- [5] Mengdi Wang, Ji Liu, and Ethan X. Fang. Accelerating stochastic composition optimization. Journal of Machine Learning Research 18.105 (2017): 1-23.
- [6] Yin Liu, and Sam Davanloo Tajbakhsh. Stochastic Composition Optimization of Functions Without Lipschitz Continuous Gradient. Journal of Optimization Theory and Applications 198.1 (2023): 239-289.
- [7] Quanqi Hu, Dixian Zhu, and Tianbao Yang. Non-smooth weakly-convex finite-sum coupled compositional optimization. Advances in Neural Information Processing Systems 36 (2024).
- [8] Cong Fang, Chris Junchi Li, Zhouchen Lin, and Tong Zhang. Spider: Near-optimal non-convex optimization via stochastic path-integrated differential estimator. Advances in neural information processing systems 31 (2018).
- [9] Ashok Cutkosky, and Francesco Orabona. Momentum-based variance reduction in non-convex sgd. Advances in neural information processing systems 32 (2019).
- [10] Rie Johnson, and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. Advances in neural information processing systems 26 (2013).
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response, which has partially addressed my concerns. For the technical novelty, I have some further questions.
**Q:** After reviewing Lemma B.1 and Lemma C.1, I find that the technical contribution in bounding the expected mean squared error may be limited, as the analysis using the Lipschitz continuity seems straightforward and easy. Additionally, I have some concerns regarding the proof of Lemma B.1 (Line 411). Specifically, where does $g_t$ come from in the second inequality? Is this a mistake? If it should be $g$ instead, then the expression in the expectation would be $0$, i.e., $E[\Vert F(g(\mathbf{x}\_t+\delta \mathbf{w}\_{t,j});\xi_{t,j}) - F(g(\mathbf{x}\_t+\delta \mathbf{w}\_{t,j});\xi\_{t,j})\Vert^2]=0$. Is this also a mistake? I think that the authors may omit some steps. Could the authors provide a more detailed and correct analysis of Lemma B.1?
---
Rebuttal 2:
Comment: Thanks for your careful and detailed reply. We would like to address your questions as follows.
**Q1:** I have some concerns regarding the proof of Lemma B.1 (Line 411). Specifically, where does $g_t$ come from in the second inequality? Is this a mistake? If it should be $g$ instead, then the expression in the expectation would be 0, i.e., $E[\|F(g(x_t + \delta w_{t,j});\xi_{t,j})-F(g(x_t + \delta w_{t,j}); \xi_{t,j})\|^2]=0$. Is this a mistake?
**A1:** Please note that we have defined $g_t(\cdot)$ for GFCOM (Algorithm 1) at the beginning of Appendix A (line 399), that is
$$g_t(x) = \frac{1}{b_g} \sum_{i \in [b_g]}G(x; \zeta_{t,i})$$
which is indeed different from
$$g(x) = E_\zeta[G(x;\zeta)].$$
In the proof of Lemma B.1, the notation $g_t(\cdot)$ is **not** a mistake.
Therefore, the terms
$F(g(x_t + \delta w_{t,j}); \xi_{t,j})$ and $F(g_t(x_t + \delta w_{t,j}); \xi_{t,j}) $
are **different**.
The expression $E[\|F(g(x_t + \delta w_{t,j}); \xi_{t,j})-F(g_t(x_t + \delta w_{t,j}); \xi_{t,j})\|^2]$ is also **not** a mistake and cannot be replaced by
$E[\|F(g(x_t + \delta w_{t,j}); \xi_{t,j})-F(g(x_t + \delta w_{t,j}); \xi_{t,j})\|^2].$
**Q2:** I think that the authors may omit some steps. Could the authors provide a more detailed and correct analysis of Lemma B.1?
**A2:** Recall that we have defined $g_t(x) = \frac{1}{b_g} \sum_{i \in [b_g]}G(x; \zeta_{t, i})$ for the GFCOM method in line 399; we can then simplify $y_{t,j}$ and $z_{t,j}$ in Eq.(3) (below line 157) as
$y_{t,j} = g_t(x_t + \delta w_{t,j})$ and $z_{t,j} = g_t(x_t - \delta w_{t,j}).$
Consequently, we can rewrite $v_t$ (Line 6 in Algorithm 1) as
$$v_t = \frac{1}{b_f} \sum_{j \in [b_f]} \frac{d}{2 \delta}(F(g_t(x_t + \delta w_{t,j}); \xi_{t,j}) - F(g_t(x_t - \delta w_{t,j}); \xi_{t,j})) w_{t,j}.$$
Now we can bound the gradient estimation error as
$$E[\lVert v_t - \nabla \Phi_{\delta} (x_t) \rVert^2] \leq 2 E\Big[ \Big\lVert v_t - \frac{1}{b_f} \sum_{j \in [b_f]} \frac{d}{2 \delta} (F(g(x_t + \delta w_{t,j}); \xi_{t,j}) - F(g(x_t - \delta w_{t,j});\xi_{t,j})) \Big\rVert^2 \Big] + 2 E \Big[ \Big\lVert \frac{1}{b_f} \sum_{j \in [b_f]} \frac{d}{2 \delta} (F(g(x_t + \delta w_{t,j}); \xi_{t,j}) - F(g(x_t - \delta w_{t,j});\xi_{t,j})) - \nabla (f \circ g)_{\delta}(x_t) \Big\rVert^2 \Big] ,$$
where the inequality follows the fact $\lVert a + b\rVert^2 \leq 2 \lVert a \rVert^2 + 2\lVert b\rVert^2$.
For the first term on the right-hand side, we have
$$
\begin{aligned}
& 2 E\Big[ \Big\lVert v_t - \frac{1}{b_f} \sum _{j \in [b_f]} \frac{d}{2 \delta} (F(g(x _t + \delta w _{t,j}); \xi _{t,j}) - F(g(x _t - \delta w _{t,j});\xi _{t,j})) \Big\rVert^2 \Big] \\\\
\leq & 2 \Big(\frac{d^2}{2 \delta^2 b _f} \sum _{j \in [b _f]} E\Big[\Big\lVert F(g(x _t + \delta w _{t,j}); \xi _{t,j}) - F(g _t(x _t + \delta w _{t,j}); \xi _{t,j})\Big\rVert^2 \Big] + \frac{d^2}{2 \delta^2 b _f} \sum _{j \in [b_f]}E \Big[\Big\lVert F(g(x _t - \delta w _{t,j}); \xi _{t,j}) - F(g _t(x _t - \delta w _{t,j}); \xi _{t,j})\Big\rVert^2\Big] \Big) \\\\
\leq & 2 \Big(\frac{G _f^2 d^2}{2 \delta^2 b_f} \sum _{j \in [b_f]} E \Big[\Big\lVert g(x _t + \delta w _{t,j}) - g _t(x _t + \delta w _{t,j}) \Big\rVert^2 \Big] + \frac{G _f^2 d^2}{2 \delta^2 b_f} \sum _{j \in [b_f]} E \Big[\Big\lVert g(x _t - \delta w _{t,j}) - g _t(x _t - \delta w _{t,j}) \Big\rVert^2 \Big]\Big)
\end{aligned}
$$
where the first inequality is due to $\lVert a + b\rVert^2 \leq 2 \lVert a \rVert^2 + 2\lVert b\rVert^2$ and the second inequality follows from Assumption 3.1.
Since $g_t(\cdot)$ is an unbiased estimator of $g(\cdot)$, Assumption 3.3 implies
$E[\lVert g(x_t + \delta w_{t,j}) - g_t(x_t + \delta w_{t,j}) \rVert^2 ] \leq \frac{\sigma_0^2}{b_g}$ and $E[\lVert g(x_t - \delta w_{t,j}) - g_t(x_t - \delta w_{t,j}) \rVert^2 ] \leq \frac{\sigma_0^2}{b_g}$.
Combining the above two results, we have
$$2E\Big[ \Big\lVert v_t - \frac{1}{b_f} \sum_{j \in [b_f]} \frac{d}{2 \delta} (F(g(x_t + \delta w_{t,j}); \xi_{t,j}) - F(g(x_t - \delta w_{t,j});\xi_{t,j}))\Big\rVert^2 \Big] \leq \frac{2d^2 G_f^2 \sigma_0^2}{\delta^2 b_g}.$$
In addition, Lemma 3.8 implies
$$2 E \Big[ \Big\lVert \frac{1}{b_f} \sum_{j \in [b_f]} \frac{d}{2 \delta} (F(g(x_t + \delta w_{t,j}); \xi_{t,j}) - F(g(x_t - \delta w_{t,j});\xi_{t,j})) - \nabla (f \circ g)_{\delta}(x_t)\Big\rVert^2 \Big] \leq \frac{32 \sqrt{2 \pi} d G_f^2 G_g^2}{b_f}.$$
Putting everything together, we get the bound
$$E[\lVert v _t - \nabla \Phi _{\delta} (x _t) \rVert^2 ] \leq \frac{2 d^2 G_f^2 \sigma_0^2}{\delta^2 b _g} + \frac{32 \sqrt{2 \pi} d G _f^2 G _g^2}{b _f},$$
which finishes the proof of Lemma B.1.
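For a concrete feel of the two-point estimator $v_t$ analyzed above, the following numerical sketch (our own illustrative construction, not the paper's code) implements it with an inner mini-batch estimate $g_t$. The choices $G(x;\zeta)=x+\zeta$ and $F(y;\xi)=\lVert y\rVert_1$ are hypothetical: they make $\Phi(x)=\lVert x\rVert_1$, which is locally linear near $x=(1,\ldots,1)$ with gradient $(1,\ldots,1)$, so the estimator can be checked against a known answer.

```python
import numpy as np

rng = np.random.default_rng(0)
d, b_f, b_g, delta, sigma0 = 5, 4000, 50, 0.1, 0.1

# Toy compositional pair (illustrative only): G(x; zeta) = x + zeta, so
# g(x) = E[G(x; zeta)] = x, and F(y; xi) = ||y||_1, so Phi(x) = ||x||_1.
# Near x = (1, ..., 1), Phi is locally linear with gradient (1, ..., 1).
x = np.ones(d)

# Directions w_{t,j}: uniform on the unit sphere.
w = rng.normal(size=(b_f, d))
w /= np.linalg.norm(w, axis=1, keepdims=True)

def g_t(points):
    # Inner mini-batch estimator g_t(x) = (1/b_g) sum_i G(x; zeta_{t,i}).
    noise = rng.normal(0.0, sigma0, size=(points.shape[0], b_g, d))
    return points + noise.mean(axis=1)

def F(y):
    # Outer stochastic function F(y; xi); xi is noiseless here for simplicity.
    return np.abs(y).sum(axis=1)

# Two-point zeroth-order estimator v_t (Line 6 of Algorithm 1).
diff = F(g_t(x + delta * w)) - F(g_t(x - delta * w))
v = (d / (2 * delta)) * (diff[:, None] * w).mean(axis=0)

print(np.round(v, 2))  # close to (1, ..., 1), the true gradient here
```

With larger $b_g$ the inner estimation error term $\frac{2d^2 G_f^2\sigma_0^2}{\delta^2 b_g}$ shrinks, matching the bound in Lemma B.1.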
If anything in our response remains unclear, we welcome further discussion!
---
Rebuttal Comment 2.1:
Comment: Thanks for your detailed response! The concerns regarding the correctness of this paper have been fully addressed. The results of this work are indeed valuable. I will increase my score to 5.
---
Reply to Comment 2.1.1:
Comment: Thank you for raising our score! We are happy to hear that we have addressed your concerns. | Summary: This work proposes two zeroth-order methods (including one variance-reduced method) for solving non-convex non-smooth stochastic compositional optimization (SCO). These two methods are further extended to solving convex non-smooth SCO. Theoretical analysis is provided to show the convergence guarantees of all proposed methods.
Strengths: The main contribution of this work is that it proposes the first zeroth-order methods for non-smooth SCO under non-convex and convex setting and presents convergence analysis. The paper is well-written and easy to follow.
Weaknesses: My main concern is the novelty of the proposed methods and their convergence analysis. Based on my understanding, the proposed four methods are extensions of the existing work [31]. [31] proposed four zeroth-order methods (GFM, GFM+, WS-GFM, WS-GFM+) for solving non-smooth optimization without compositional structure under both non-convex and convex settings. Based on GFM in [31], the GFCOM in this work simply replaces the stochastic function values in the gradient estimators with stochastic function values of the compositional function, and uses large batches to ensure the accuracy of the inner function value estimation. The convergence analysis is thus similar to that of GFM as well.
Reference.
[31] Lesi Chen, Jing Xu, and Luo Luo. Faster gradient-free algorithms for nonsmooth nonconvex stochastic optimization. In International Conference on Machine Learning, pages 5219–5233. PMLR, 2023.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see the weakness section.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: No significant limitations in this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer ZyfL for your insightful review.
**Q1:** My main concern is the novelty of the proposed methods and their convergence analysis. Based on my understanding, the proposed four methods are extensions of the existing work [31].
**A1:** This is the first work that proposes stochastic algorithms to solve the general nonconvex nonsmooth stochastic compositional problem with non-asymptotic convergence rates. Note that both the algorithm design and the convergence analysis of previous work for nonconvex nonsmooth stochastic compositional optimization problems require additional assumptions such as weak convexity [11] and relative smoothness [10].
Our algorithm design and convergence analysis are not simple extensions of existing work [31]. In this work, we apply the randomized smoothing on $\Phi(\mathbf{x})$ to construct its smoothing surrogate $\Phi_{\delta}(\mathbf{x})$.
However, achieving a sufficiently accurate function value of the surrogate compositional problem $\Phi_{\delta}(\cdot)$ to guarantee convergence is more difficult than the counterpart in problems without compositional structure. Additionally, obtaining a smoothing surrogate $\Phi_{\delta}(\mathbf{x})$ does not yield any smoothing surrogate of $f$ or $g$.
Therefore, we have to carefully use the Lipschitz continuity of $f$ and $g$ (rather than the smoothness of their surrogate functions) to bound the expected gradient estimation error (see Lemmas B.1 and C.1) with reasonable sample complexity, which is more complicated than previous work for non-compositional problems, where the smoothness of the surrogate of the objective can be used directly [31].
**Reference**
- [10] Yin Liu, and Sam Davanloo Tajbakhsh. Stochastic Composition Optimization of Functions Without Lipschitz Continuous Gradient. Journal of Optimization Theory and Applications 198.1 (2023): 239-289.
- [11] Quanqi Hu, Dixian Zhu, and Tianbao Yang. Non-smooth weakly-convex finite-sum coupled compositional optimization. Advances in Neural Information Processing Systems 36 (2024).
- [31] Lesi Chen, Jing Xu, and Luo Luo. Faster gradient-free algorithms for nonsmooth nonconvex stochastic optimization. International Conference on Machine Learning. PMLR, 2023.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I will raise my score to 5.
---
Reply to Comment 1.1.1:
Comment: We are glad that our rebuttal helped address your concerns. Thank you for raising the score! | Summary: This paper studied the zero-order method for computing an approximately stationary point for a Lipschitz function with a composition structure. The main difficulty lies in the function value evaluation. The composition structure involves multiple expectations, requiring multiple rounds of sampling to obtain a satisfactory estimation. The authors also discuss the situation when the objective function is convex. They show that the complexity can be improved under the convexity assumption.
Strengths: Overall, I think this is an okay paper, in the sense that it makes a meaningful contribution to an important setting that was previously unexplored. The technique is intuitive, and the proof is very neat and clean, which is good and should be easy to follow.
Weaknesses: My impression is that the technique is okay but not that surprising. There are two expectations, so two sampling sequences are needed to evaluate the function value. I would say the technical contribution is a little bit insufficient, but I do not strongly oppose it solely on this point.
The following are minor points:
* I would recommend specifying the distribution P in the main text, rather than in the statement of Lemma 3.7. It seems the definition of f_\delta appears in L134, with a very general distribution P following. The definition of this P is not specified even in Algorithms 1 and 2.
* L118: \mathbb{B}(x, \delta) should be \mathbb{B}_\delta(x), according to L93.
* Theorem 4.1, Corollary 4.2, Theorem 4.3, etc.: The upper bound on gradient norm should be in the sense of expectation, right? It seems highly non-trivial to exactly select one R \in [T] such that the gradient norm is minimal in the telescoping sum, due to the difficulty in evaluating the gradient of the smoothed function.
* For Section 5, I recommend considering studying tighter stationarity notions for convex functions, e.g., those whose definition requires weak convexity. The notion of a Goldstein approximate stationary point is rather loose. Thus, whenever possible, considering tighter solution notions would be better.
Technical Quality: 3
Clarity: 3
Questions for Authors: See above.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer iZH7 for your careful review and insightful suggestions.
**Q1:** I would recommend specifying the distribution $P$ in the main text, rather than in the statement of Lemma 3.7. It seems the definition of $f_\delta$ appears in L134, with a very general distribution P following. The definition of this P is not specified even in Algorithms 1 and 2.
**A1:** Thanks for your suggestion. We will explicitly state that $P$ denotes the uniform distribution on the unit ball in the revision.
**Q2:** L118: $\mathbb{B}(x, \delta)$ should be $\mathbb{B}_\delta(x)$, according to L93.
**A2:** Thanks for your careful review. We will unify the notations in the revision.
**Q3:** Theorem 4.1, Corollary 4.2, Theorem 4.3, etc.: The upper bound on gradient norm should be in the sense of expectation, right?
**A3:** Thanks for your careful review. It should be the gradient norm in expectation, and we will fix it in the revision.
**Q4:** For Section 5, I recommend considering studying tighter stationarity notions for convex functions, e.g., those whose definition requires weak convexity. The notion of a Goldstein approximate stationary point is rather loose. Thus, whenever possible, considering tighter solution notions would be better.
**A4:** We agree that the $(\delta, \epsilon)$-Goldstein stationary point is not a tight notion for convex functions.
For the convex problem, we can study other notions such as the optimal function value gap and the nearly approximate stationary point (Davis et al. [a]).
- For the optimal function value gap, Lemma 5.2 has shown that we can achieve $\mathbb{E}[\Phi(x_T)- \Phi^*]\leq \rho$ with $\mathcal{O}(d \hat{R}^2 G_f^4 G_g^2 \sigma_0^2\rho^{-4})$ function value query calls.
- For the nearly approximate stationary point, Davis et al. [a] have shown that explicit convergence rates can be achieved when minimizing a weakly convex (possibly nonsmooth) function. We believe that convergence analysis based on this notion can also be carried out for the compositional optimization problem by introducing proximal point iterations into the algorithm design.
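As a concrete (and heavily simplified) illustration of the proximal-point idea referenced above — our own 1-D toy, not the algorithm of Davis et al. — the outer loop repeatedly minimizes the strongly convex model $f(x)+\frac{\hat\rho}{2}(x-\bar x)^2$ with subgradient steps, here for the nonsmooth $f(x)=|x|$:

```python
# Minimal 1-D sketch of a proximally guided subgradient method in the
# spirit of Davis & Grimmer [a]; toy illustration, not the actual algorithm.

def subgrad_abs(x):
    # A subgradient of f(x) = |x|.
    return 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)

def prox_guided(x0, rho=1.0, outer=3, inner=500):
    x = x0
    for _ in range(outer):
        center = x  # proximal center \bar{x}
        for t in range(inner):
            # Subgradient of the model f(x) + (rho/2)(x - center)^2.
            g = subgrad_abs(x) + rho * (x - center)
            x -= g / (t + 2)  # diminishing step size
    return x

x_final = prox_guided(2.0)
print(abs(x_final))  # small: iterates approach the minimizer x* = 0
```

Each proximal subproblem is strongly convex even though $f$ is nonsmooth, which is what makes explicit rates tractable in this framework.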
**Reference**
- [a] Damek Davis, and Benjamin Grimmer. Proximally guided stochastic subgradient method for nonsmooth, nonconvex problems. SIAM Journal on Optimization 29.3 (2019): 1908-1930. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Meta Stackelberg Game: Robust Federated Learning against Adaptive and Mixed Poisoning Attacks | Reject | Summary: The paper titled "Meta Stackelberg Game: Robust Federated Learning against Adaptive and Mixed Poisoning Attacks" proposes a new approach to enhancing the security of Federated Learning (FL) systems. The paper identifies that existing FL defenses are inadequate against adaptive and mixed attacks. To address this, they introduce a Meta Stackelberg Game (meta-SG) framework, which employs a Bayesian Stackelberg Markov game (BSMG) and a meta-learning approach. This framework aims to provide a robust and adaptive defense mechanism against various poisoning attacks, including model poisoning and backdoor attacks. The proposed method is theoretically proven to converge to an equilibrium efficiently and is empirically validated to perform well against strong adversarial attacks.
Strengths: + Introducing the Meta Stackelberg Game (meta-SG) framework is innovative, offering a new perspective on defending against adaptive and mixed attacks in FL.
+ The paper provides a solid theoretical foundation, proving that the proposed algorithm converges to a first-order ε-equilibrium, proving the method's efficiency.
+ Extensive experiments demonstrate the effectiveness of the meta-SG framework, showing significant improvements in defense against various attack types compared to existing methods.
+ The meta-learning component allows the defense mechanism to adapt dynamically to different attack scenarios, enhancing its robustness in uncertain environments.
Weaknesses: - The proposed approach seems computationally intensive, requiring significant resources for pre-training and adaptation, which might limit its practicality in real-world applications. Although it proves that it can converge in Theorem 3.3, it would be beneficial to have empirical evidence, such as the method's run-time overhead.
- While the framework is tested against several attack types, the scope of attacks considered might not cover all possible real-world adversarial strategies, limiting the generalizability of the results. The paper especially makes it unclear what attacks are used in pre-training, whether they are the same, and how different they are compared to the real FL environment when testing and generating results.
- The proposed method's scalability to larger and more diverse FL environments remains unclear, especially given its computational demands.
Technical Quality: 3
Clarity: 3
Questions for Authors: Given the above points, I have some questions that need to be addressed:
1. How does the meta-SG framework scale with increasing clients and more complex models? Have you tested its performance in larger FL setups?
2. How does the framework handle adaptive attack types not seen during the pre-training phase? Are there any limits to the adaptive attacks it can defend against?
3. Have you considered applying the meta-SG framework to domains other than image classification, such as natural language processing or time-series analysis in FL?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1. computationally intensive
We stress that our proposed method deals with mixed attacks of unknown and uncertain types, which is beyond the scope of other baselines that focus on specific attacks. Since our problem setup is more complicated, it is not surprising that more computational resources are required.
From a theoretical viewpoint, our meta-Stackelberg learning (meta-SL) in the pre-training phase amounts to a two-timescale, fully first-order policy gradient algorithm for solving a bi-level stochastic programming problem. In terms of sample complexity, meta-SL achieves state-of-the-art efficiency among bi-level algorithms. This computational complexity is not due to a design flaw but to the bi-level structure of the meta-Stackelberg equilibrium defense, which we consider the best fit for combating the defender's incomplete information about mixed attack types involving strong adaptive attacks.
2. generalization
We agree with the reviewer that the considered attacks do not cover all possibilities. However, as presented in Section 3.1, the pre-training considers white-box RL attacks as surrogates for unseen strong attacks in the real world. These white-box RL attacks present the worst-case scenario in training, and the resulting defense can generalize to other attacks. We report the generalization experiment in Table 8 of Appendix D, where the pre-trained defense achieves satisfying performance against unseen adaptive attacks. Please also refer to Table 3 in the Appendix, which showcases the attacks and defenses employed during pre-training and online adaptation, along with their related figures/tables.
4. scalability
All the baseline defenses conducted experiments at a similar or smaller scale than ours. We further tested a larger-scale experiment with meta-SG against LMP on CIFAR-10 with 1000 clients, where 100 clients are selected each round (online adaptation only). The test is conducted on an AWS p2.16xlarge instance with 16 GPUs, 64 vCPUs, 732 GiB of memory, and 192 GiB of GPU memory. We implement parallel computing on 10 clients simultaneously. The average global model accuracy after 500 rounds reaches 0.6954, which is close to our result in Table 1.
5. unseen adaptive attack type
As shown in line 87, page 2, and also in "Online Adaptation and Execution'' in line 799, page 19, the defender collects trajectories of model updates to fine-tune its pre-trained meta policy using gradient adaptation, no matter whether the attack is seen or unseen in pre-training (the defender has no access to the attack type in the online stage). As certified in Proposition 3.4 (and also Proposition F.13), the performance degradation of the meta-SG defense when facing unseen attacks is upper-bounded by the "distance'' between the seen attacks and the unseen one. The "distance'' is defined by the total variation between trajectories produced by the attacks. In other words, given an acceptable defense performance drop, we can quantify the maximum "distance'' of an unseen attack to the set of seen attacks, which becomes the limit of the meta-SG defense. To elaborate on this generalization property empirically, we test our framework with unseen attacks; the results in Table 8 on page 22 show satisfactory defense performance.
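Gradient-based adaptation of a pre-trained meta parameter can be sketched in miniature as follows. This is a hedged toy (quadratic surrogate losses and made-up attack "optima", not the actual meta-SG defense or its RL policy): a meta parameter pre-trained as a compromise between seen attack types is fine-tuned with a few gradient steps against an unseen attack.

```python
import numpy as np

# Toy MAML-style online adaptation; theta stands in for the defense
# policy parameters, and each attack type induces a surrogate loss with
# a distinct optimum. All names and losses here are illustrative.
def loss(theta, attack_optimum):
    return 0.5 * float(np.sum((theta - attack_optimum) ** 2))

def adapt(theta, attack_optimum, lr=0.3, steps=10):
    for _ in range(steps):
        grad = theta - attack_optimum   # gradient of the surrogate loss
        theta = theta - lr * grad       # online gradient adaptation step
    return theta

seen_optima = [np.array([1.0, 0.0]), np.array([-1.0, 0.0])]
theta_meta = np.mean(seen_optima, axis=0)   # pre-trained compromise

unseen = np.array([0.0, 1.0])               # "unseen" attack type
theta_adapted = adapt(theta_meta, unseen)

print(loss(theta_meta, unseen), loss(theta_adapted, unseen))
# adaptation reduces the loss against the unseen attack
```

The same mechanics apply whether the deployed attack was seen in pre-training or not, which is why the defender needs no access to the attack type online.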
6. meta-SG for other application
Please refer to our discussion in the paper's conclusion section. Our defense framework is general and can potentially be applied to other domains beyond image classification. We encourage researchers with different expertise to join in applying this framework or its modified version to more domains. | Summary: The paper presents a game theoretic model for robust federated learning. The technique is composed of pre-training and online adaptation. During pre-training, a meta-policy for the defender is solved as a Bayesian Stackelberg Markov game. The defense policy is further polished during the online adaptation stage.
Strengths: The Stackelberg game is designed to counter unknown/uncertain attacks by an adaptive adversary.
Theoretical bounds for sample complexity are provided.
Empirical results demonstrate the effectiveness of the proposed technique.
Weaknesses: The main weakness is the slight violation of privacy, as the technique needs a portion of ground truth data from the clients. This has been discussed as a limitation in the paper.
Minor comments:
On page 2, "including mixed attacks ," ---> extra space
Technical Quality: 3
Clarity: 4
Questions for Authors: What is the statistical significance of the results shown in Table 1? Without the standard error, it is unclear whether the proposed technique is indeed superior to other existing techniques. I understand there is no room in the table, at least it should be mentioned in the text.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Privacy violation is mentioned as a limitation of the technique. It's unclear whether future development can remove the dependence on the client-side ground truth.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Please refer to our rebuttal to Reviewer a9ia regarding the privacy concern. We have addressed the minor comments. We have not yet calculated statistical significance (this requires more computational resources). In most experiments, we fix the initial model and all random seeds for fair comparisons. Client-side defenses need to be thoroughly designed, which we leave to future work.
---
Rebuttal 2:
Comment: I apologize for the brevity of our initial response, which was due to time constraints. Thank you for your valuable feedback. We have carefully addressed the minor comments you provided.
- Regarding the privacy concerns, please refer to our response to Reviewer a9ia's comments.
- We acknowledge the importance of statistical significance and have made efforts to address this in our study. Specifically, we conducted 20 trials for the experiments shown in Figures 11 (c) and (d) to generate the error bars. However, due to limitations in computational resources, we were unable to calculate statistical significance for all of our results. We plan to address this in future work when we have access to additional resources. For the other graphs and tables, we fixed the initial model and all random seeds across experiments to ensure fair comparisons.
- Additionally, we recognize that client-side defenses are crucial for enhancing privacy and security. We agree that this is an area that requires thorough design and development, and it is indeed part of our future research agenda.
---
Rebuttal Comment 2.1:
Title: Response to authors' rebuttal
Comment: Thanks for the explanations. I'll keep my score. | Summary: The authors propose a defense mechanism in federated learning that has adaptability inspired from meta learning. The authors formulates a Bayesian Stackelberg Markov game (BSMG), focusing on addressing poisoning attacks of unknown or uncertain types. The authors propose an equilbrium inspired by meta learning and then look at a local version of that. Empirical evaluations are done on MNIST and CIFAR.
Strengths: This paper seems to have interesting results.
Weaknesses: The writing is sloppy in many places and quite a few things are unexplored.
Federated learning's primary motivation is privacy, so the privacy loss from a core small dataset must be analyzed. The authors do acknowledge the loss but IMO that is not enough.
Behind all the motivation of federated learning, the core idea is in the equilibrium proposed - IMO, exploring this equilibrium in more detail would make the paper stronger (in fact, the problem makes sense even in simpler adversarial learning problems).
Is Def 3.1 just a differential equilibrium, in the style of "Implicit Learning Dynamics in Stackelberg Games: Equilibria Characterization, Convergence Analysis, and Empirical Study" but missing second order conditions?
Why are there no second order conditions? Just first order may not induce a local equilibrium, which is a meaningful equilibrium to achieve. This is where even more meaning needs to be discovered for the first meta-SE. The authors compare it to PBE in the appendix, but the notions of belief consistency and sequential rationality are what make PBE (SE) convincing. Without any such (or similar) notion, a new equilibrium in a dynamic setting is not principled.
I do not understand why the ball $B(\theta^*)$ is used with a bound of 1? What is special about 1? (same for the other ball)
Propositions written informally (and without explanation) in the main paper do not make sense (e.g., Prop 3.4).
There are many typos when I started to look in appendix:
1) Eq F6, $\tilde{l}$ should have two inputs
2) Line 959, the parameters of $\tilde{l}$ are lost; without this, the equations with $\theta'$ on the left and no $\theta'$ on the right are not well-defined.
3) I do not understand how the equation in line 964 came about - first it is said that it is an inequality but what is written is an equality.
Overall, typos do not inspire confidence.
Also, any defense mechanism should itself be subject to attack with the adversary possessing knowledge of defense mechanism - I do not see that in experiments.
Technical Quality: 2
Clarity: 2
Questions for Authors: Please respond to review
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1. privacy concern
Please refer to our rebuttal to Reviewer a9ia.
2. meta-Stackelberg equilibrium
Indeed, the core of the proposed meta-Stackelberg framework lies in the meta-Stackelberg equilibrium (meta-SE). The essence of meta-SE is to create the strategic adaptation in the interim stage (online) in a data-driven approach (online gradient updates) when the defender has incomplete information about the attacker. Such a data-driven approach deviates from the conventional Bayesian approach in PBE. Since the defender's and attacker's objective functions in FL are non-convex, the exact analysis of the meta-SE is challenging due to the bi-level structure of the equilibrium definition and the dynamic nature of FL processes (i.e., a Markov game). We agree that a simpler adversarial learning problem (e.g., supervised learning), which can be modeled as a static Bayesian game, may lead to a thorough theoretical analysis. However, such a direction is a digression from our FL problem. Even though we are unable to provide strong theoretical characterizations of the proposed meta-SE, we conduct empirical studies to demonstrate the advantage of meta-SE over conventional Bayesian equilibrium (BSE defined in line 251). As shown in Figure 1 in the author rebuttal pdf, the BSE policy (red curve) does not display effective adaptation within the 500 rounds, whereas the meta-SE policy (blue curve) quickly adapts to the LMP attack.
3. Is Def 3.1 just a differential equilibrium?
Def 3.1 is not a differential equilibrium. The differential equilibrium (Def 4) in the reference is a sufficient condition, whereas our definition presents a first-order necessary condition. These first-order conditions, along with the positive-definiteness of the Hessian matrix, constitute the optimality conditions for a local solution of the meta-SE, which may not exist even in zero-sum cases [1, Proposition 6]. Hence, we consider the necessary conditions to guarantee existence.
We agree with the reviewer that belief consistency and sequential rationality provide a solid theoretical foundation for PBE when players are Bayesian: they maintain a prior distribution over the uncertainty (type in our case) and make decisions based on the posterior. However, as we argue in Appendix G, computing Bayesian posterior is intractable in large-scale FL systems with neural network models. Our meta-SE essentially discards the Bayesian approach in handling incomplete information games. Instead, we resort to the online gradient adaptation to process the information feedback without computing the posterior. Even though meta-SE is not as well-grounded as PBE in game theory, it does provide a non-Bayesian data-driven alternative in handling incomplete information that is suitable for complex multi-agent systems such as FL.
4. the ball $\mathcal{B}$
We consider the unit ball to avoid introducing more notation; in fact, we can use any ball with radius $r$. In the setting where the $r$-radius ball $\|\theta-\theta^*\|\leq r$ falls within the space $\Theta$ (i.e., unconstrained setting), the condition simply implies $\|\nabla\_\theta \mathcal{L}\_\mathcal{D} (\theta^*, \phi^*_\xi, \xi) \| \leq \varepsilon/r$. To see this, we let $\theta = \theta^* +r \frac{\nabla\_{\theta} \mathcal{L}\_\mathcal{D} (\theta^*, \phi^*_\xi, \xi)}{\|\nabla\_{\theta} \mathcal{L}\_\mathcal{D} (\theta^*, \phi^*\_\xi, \xi)\|}$, and then direct calculation gives the upper-bound on the gradient norm. We simply let $r=1$ in this work, which is a common practice in optimization literature [2].
Regarding Proposition 3.4, the explanation is that the generalization error, i.e., the difference in expected returns under a learned defense policy across attack types, can be controlled by the total variation between the trajectory distributions induced by the sampled attack types and the unseen attack types. Defining the "discrepancy'' formally requires additional notation that might make the main text less readable. Appendix F.3 gives more detailed explanations.
5. typos in Appendix
(1) $\tilde{\ell}\_{\mathcal{D}}$ is a concave function augmented from $ \ell\_{\mathcal{D}}$ that takes three arguments: $\theta, \theta^{\prime}, \text{ and } \phi^{\prime}$, and there was a typo in $\ell\_{\mathcal{D}}(\theta, \phi)$, which should be $\ell_{\mathcal{D}}(\theta, \phi^{\prime})$, which likely caused the confusion.
(2) It should be $\tilde{\ell}\_{\mathcal{D}}(\theta; (\theta^{\prime}, \phi^{\prime}))$; thanks to the reviewer's careful reading, multiple typos have been corrected.
(3) Apologies for this typo: "inequality" should be "equalities". We augmented the functions here to make them satisfy the fixed-point theorem, which guarantees existence. We then leveraged the equalities to show that an equilibrium of the game $(\tilde{\ell}\_{\mathcal{D}}, \tilde{\ell}\_{\xi})$ is also an equilibrium of the game $(\ell\_{\mathcal{D}}, \ell\_{\xi})$.
[1] Chi Jin, Praneeth Netrapalli, and Michael I. Jordan. What is Local Optimality in Nonconvex-Nonconcave Minimax Optimization? ArXiv:1902.00618, 2019.
[2] Nouiehed, Maher, Maziar Sanjabi, Tianjian Huang, Jason D. Lee, and Meisam Razaviyayn. "Solving a class of non-convex min-max games using iterative first order methods." Advances in Neural Information Processing Systems 32 (2019).
---
Rebuttal Comment 1.1:
Comment: I am not an expert on federated learning, so I leave the significance of that to other reviewers. But, I know game theory very well, and I am not satisfied (and curious) about this new sort of equilibrium idea proposed without really exploring what it means in the sequential setting.
The privacy angle is still a question - I feel that precedent from prior work is not enough.
---
Rebuttal 2:
Title: Thanks for the reviewer's comment
Comment: We appreciate Reviewer Ui9w's insightful suggestions. We would like to clarify that sequential rationality does not apply to the meta equilibrium due to the data-driven gradient adaptation. Our proposed meta-equilibrium concept is a mixed-strategy equilibrium instead of a behavioral-strategy equilibrium if we represent the Markov game in extensive form. The defense policy $\pi_D(a^t|s^t;\theta)$, even though it takes in $s^t$ at each time step, is not a behavioral strategy because $s^t$ does not correspond to an information set in this incomplete-information dynamic game. In the Bayesian framework (e.g., PBE), the information set given by the history is equivalent to a Bayesian posterior belief. A behavioral strategy must take in the information sets (or beliefs) to determine the actions. Since computing Bayesian posteriors is intractable in large FL systems, our framework discards the Bayesian approach and embarks on a data-driven method to handle interim information. The learned policy $\pi_D(a^t|s^t;\theta)$ only determines a probability measure over the action sequence $\{a^1,a^2,\ldots, a^H\}$ to be played. In summary, the proposed meta-equilibrium is not a subgame perfect equilibrium in the Markov game because one cannot define the perfection condition (sequential rationality) [Def. 8.2, 1] without defining information sets in the Bayesian framework. Our motivation is to trade sequential rationality for computational viability and efficiency.
We believe that it is practical for the server to collect a small, clean training dataset, known as the root dataset, for the learning task. For instance, Google uses its employees to type with Gboard, creating a root dataset for its federated next-word prediction [2]. This root dataset may or may not align with the distribution of the overall training data used for the learning task.
[1] Fudenberg, D. and Tirole, J. Game Theory. MIT Press, 1991.
[2] Federated Learning: Collaborative Machine Learning without Centralized Training Data. [Online]. Available: https://ai.googleblog.com/2017/04/federated-learning-collaborative.htm | Summary: This paper considers the problem of backdoor/poisoning attacks in federated learning (FL). In this setting, a single attacker has control over all malicious clients and can employ a different attack type on each controlled client. This paper aims to create a defense mechanism against such adaptive attackers. To this end, the paper proposes a game-theoretic approach with two stages: (1) Pre-training stage: before entering the FL environment, the defender first learns a good defense policy by simulating that environment using a small amount of truthful data against a simulated attacker with known possible actions (e.g., attack types used); and (2) Online-adaptation stage: the defender leverages the pre-trained policy and adapts it to the attacker in the real FL environment. This paper demonstrates the effectiveness of the proposed mechanism and also includes ablation studies where the previous assumptions are not met: the attack methods in the real FL environment differ from (but are similar to) those seen in the pre-training stage.
Strengths: The paper is well-written and easy to follow.
Weaknesses: I have some concerns, mostly about the practicality of the assumptions:
1. Assumption on the accessibility of data: the paper assumes that in the pre-training phase, the defender has access to a small amount of data, which is used to model the data distribution of clients using generative models. (1) It goes against privacy. (2) It is not clear that a small amount of data is representative enough to model client data distribution. (3) Things will be even more complicated if each client has its sub-population (data) that is (reasonably) different from each other. In this case, which malicious clients the attacker has control over will matter.
2. Assumption of the similarity of attack types at the online adaptation stage: the paper assumes that the attack types in a real FL environment, though unseen, should be similar to those of the pre-training phase. This seems impractical, especially in a white-box setting, where the attacker will try to leverage the property of the defense mechanism to create an adaptive attack that is specified for that defense mechanism (see Carlini's works).
I also have concerns about the experiments:
1. Datasets and models used: Would it be possible to use more practical datasets and models instead of MNIST/CIFAR10 and ResNet-18? I would like to see results when the data distribution is complex enough that generative models cannot easily learn with a small amount of data. Right now, the amount of data used is still considerable, considering the dataset used. This would make the assumption of the accessibility to a small amount of data in the pre-training phase more persuasive.
Some comments on writing/paper organizations.
1. In Table 1, please highlight which results are the best. It would be more readable and easier for comparison.
2. In Figure 2, it might be better to show smoothed curves.
3. In Appendix F, before each theorem/lemma/assumption, it would be better if an intuition/proof sketch for each one is provided. Also, if the proof technique/assumption is standard, please mention the corresponding references.
4. In the Conclusion, I think it would be more honest if explicitly stated that the major limitation of this paper lies in the practicability of the assumption. Right now, I only see privacy concerns mentioned, which is misleading.
(Not really a weakness) It could be nice if source code is included (might be using an anonymous repo).
Overall, I think this is a good paper if ignoring the practicability of the assumptions on the attack types/data (on the accessibility in the pre-training phase/distribution of accessed data in the pre-training phase/distribution of data in each client). I also did not find an experiment where the attacker leverages the information about defense mechanisms to instantiate a better attack scheme (white box attack). It is known that many proposed defense mechanisms failed in this scenario, though it seems obvious that it will go against the similarity assumption on the attack types at the online adaptation phase. However, I am also aware of the hardness of defense tasks in adversarial machine learning and it is good to have some initial results even under strong assumptions. I will try my best to be reasonable. Maths were not checked carefully, I will try to go into the details in the rebuttal phase.
Technical Quality: 2
Clarity: 2
Questions for Authors: See Weaknesses above.
Confidence: 1
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: See Weaknesses above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1. practicality of the assumptions
(1) In the conclusion of the paper, we have discussed the potential privacy issue of our approach, pointed out our initial efforts to mitigate this concern, and outlined a future direction to address it in a more principled way (i.e., client-side defense). We also note that some recent studies on developing robust defenses in FL [1][2] also rely on a small amount of root data. (2) Please refer to the empirical results in Appendix D ("Importance of inverting/reversing methods") and the theoretical generalization result in [3] that characterizes the impact of inaccurate data distribution on the performance of RL-based attacks, which directly applies to our RL defense in the face of a fixed attack. (3) Please refer to our non-i.i.d. experiment results in Appendix D (Figure 11 (d)).
2. Assumption of the similarity of attack types at the online adaptation stage
We first clarify that the similarity between two types is measured by the closeness between trajectories produced by different attacks rather than by their attack methods and configurations. Due to the page limit, we postpone the presentation to Appendix F.3. For example, the white-box attack proposed by the reviewer and the RL adaptive attack in the domain, even though they may employ quite different attack algorithms, are considered similar if they generate similar trajectories in the sense of total variation, since they both aim to best respond to the defense mechanism.
As presented in Section 3.1, we do consider white-box RL attacks in the pre-training stage as a surrogate for strong attacks (e.g., the one suggested by the reviewer). These white-box RL attacks present the worst-case scenario the defender can encounter, and the resulting defense can be robust to other weaker attacks. This idea of preparing for the worst naturally leads to the Stackelberg game model proposed in our paper. We test the meta-SG defense against several unseen white-box adaptive attacks, and the results are in Table 8 Appendix D. The key observation is that our framework delivers satisfying performance.
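As a concrete reference for the similarity notion above: for attack types whose induced trajectory distributions have a common finite support, the total variation distance reduces to half the L1 distance between the probability vectors. A minimal sketch (the finite support is an illustrative assumption; the paper's measure is over full trajectory distributions):

```python
import numpy as np

def total_variation(p, q):
    """Total variation distance between two discrete distributions,
    given as probability vectors over the same finite support."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return 0.5 * np.abs(p - q).sum()
```

Two attacks inducing trajectory distributions `p` and `q` would then be considered "similar" when `total_variation(p, q)` is small.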
3. complex datasets/model
MNIST, CIFAR-10, and ResNet-18 are commonly used in federated learning literature. Currently, we lack the computational resources to experiment with larger datasets and more complex model structures, which we plan to address in future work.
4. writing/paper organizations
We have addressed (1) and (3) and discussed (4) above. For point (2), we fix the initial model and use fixed random seeds for fair comparisons in most experiments, and we include error bars in Figure 11 to account for variability.
[1] Xiaoyu Cao, Minghong Fang, Jia Liu, and Neil Zhenqiang Gong. Fltrust: Byzantine-robust federated learning via trust bootstrapping. In Network and Distributed System Security (NDSS) Symposium, 2021.
[2] Yinbin Miao, Ziteng Liu, Hongwei Li, Kim-Kwang Raymond Choo, and Robert H Deng. Privacy-preserving byzantine-robust federated learning via blockchain systems. IEEE Transactions on Information Forensics and Security, 17:2848–2861, 2022.
[3] Henger Li, Xiaolin Sun, and Zizhan Zheng. Learning to attack federated learning: A model based reinforcement learning attack framework. In Advances in Neural Information Processing Systems, 2022.
---
Rebuttal 2:
Title: Reply to the rebuttal
Comment: - **On the dataset used in this work**: my concern is not that the paper lacks experiments with large datasets, but:
- In other federated learning works, it might be fine to use small datasets like MNIST and CIFAR10. However, there is a critical component in this paper, which is modeling the data distribution of the clients given limited access to data. How can we know that if the data distribution is more complex, we can still model the data distribution effectively (and sufficiently, so that it does not affect the performance of the framework) with limited data? This is my concern.
- Moreover, I am also aware of a couple of works that use other (a bit more) complex datasets in federated learning (Tiny ImageNet), for example [1]. It would be nice if experiments on those datasets were incorporated into this work.
Overall, I find the rebuttal not convincing. Though I decided to keep my original rating, I would reduce my confidence score to 1.
References
[1] Dung Thuy Nguyen et al. IBA: Towards Inversible Backdoor Attacks in Federated Learning. NeurIPS'23.
---
Rebuttal 3:
Title: Modeling data distribution in complex dataset
Comment: Thank you for the practical advice.
We would like to first clarify that in addition to generative modeling in pre-training, we also utilize the inverting gradient (IG) method [1] in the online FL stage to infer the clients' data distribution (Appendix C lines 801-804), aiming to bridge the gap in data distribution. Due to space limitations, we moved the discussion of IG to Appendix C lines 783-791. A recent paper [2] shows that IG can successfully reconstruct images from ImageNet-1K from gradient data. While implementing more powerful generative models and advanced inverting gradient methods is beyond the scope of this work, our meta-SG framework can be integrated with them to handle more complex datasets.
We further note that our approach can still work even if the learned distribution deviates slightly from the true distribution. This was observed for RL-based attacks in [Thm 1, 3], but a similar result also holds for RL-based defenses.
[1] Geiping, J., Bauermeister, H., Dröge, H., & Moeller, M. (2020). Inverting gradients-how easy is it to break privacy in federated learning? Advances in neural information processing systems.
[2] Hatamizadeh, Ali, et al. "Gradvit: Gradient inversion of vision transformers." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
[3] Henger Li, Xiaolin Sun, and Zizhan Zheng. Learning to attack federated learning: A model based reinforcement learning attack framework. In Advances in Neural Information Processing Systems, 2022. | Rebuttal 1:
Rebuttal: We extend our heartfelt gratitude to the reviewers for their invaluable questions, insightful comments, and constructive suggestions. We look forward to your inspiring thoughts. While the detailed responses are attached to reviewers' comments, we summarize some key updates and revisions here.
We thank reviewer HbAX for pointing us to the helpful references, which will be discussed in the related works section. We briefly mention that the two works may not be qualified for baselines since they address dynamic switching between defenses, whereas our method focuses on the adaptive combination of defenses. Additionally, as suggested by the reviewer, we report in Table 1 in the attached pdf the actual running time of our meta-SG and compare its execution time with other baselines. We also compare BSE and meta-SE empirically in Figure 1 to highlight the advantage of gradient adaptation.
We thank reviewer a9ia for suggestions on the paper writing. We have added relevant references, remarks, and proof intuition for theoretical results in Appendix F. Meanwhile, we would like to point out that the adaptive white-box attack mentioned by the reviewer is considered in our work, and associated experimental results are in Table 8, Appendix D.
We thank reviewer Ui9w for the close inspection of our theoretical analysis. We have carefully checked Appendix F, corrected typos, and added additional remarks on important claims, results, and assumptions. Even though the FL setup considered in this work forbids a thorough theoretical characterization of meta-SE, we empirically compare the meta-SG framework with the conventional Bayesian Stackelberg game (BSG) model (the equilibrium is defined in line 251). As presented in Figure 1 in the attached pdf, our meta-SG displays greater adaptability than the BSG defense.
We thank reviewer s9YV for the question on the scalability of meta-SG. We have tested a larger-scale experiment with meta-SG against LMP on CIFAR-10, including 1000 clients with 100 clients selected each round (online adaptation only).
Pdf: /pdf/3fa7deebe6f61dc0adf2304024430ce8cb69a105.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper addresses the vulnerabilities of Federated Learning (FL) systems to various adversarial attacks, including model poisoning and backdoor attacks. The proposed solution is a Meta Stackelberg Game (meta-SG) framework designed to offer robust and adaptive defenses against these attacks. The approach formulates adversarial federated learning as a Bayesian Stackelberg Markov game (BSMG) and employs a meta-learning method to find optimal defense strategies. Theoretical analyses show the algorithm's efficiency in convergence, and empirical results demonstrate its effectiveness against different attack types.
Strengths: 1. The proposed framework considers both untargeted and targeted attacks.
2. The paper uses RL to realize online adaptation which is close to the real world.
3. Inspired by meta-learning, the method is robust to unknown attacks.
4. In the pre-training phase, the paper uses generated data to decrease the concern of privacy leakage.
5. The paper provides sufficient experimental results.
Weaknesses: 1. Could you explain more about the necessity of adding the gradient adaptation (from BSE to meta-SE)? Although BSE is ex-ante when knowing the distribution $Q$, the model $\theta$ is changed during training and it could capture emerging information. Could you provide empirical results comparing BSE and meta-SE to show the advantage of meta-SE?
2. Considering adaptive/mixed attacks, the paper misses two relevant frameworks: MixTailor [1] and RobustTailor [2]. They can adjust aggregation methods during training. Especially, RobustTailor also simulates a game between the defender and the attacker, and it proposes a mixed attack. This kind of method could be included in experiments as a baseline.
3. Because the whole method is complicated with 2 stages, comparing computational cost with other baselines is necessary.
[1] Ramezani-Kebrya, Ali, Iman Tabrizian, Fartash Faghri, and Petar Popovski. "Mixtailor: Mixed gradient aggregation for robust learning against tailored attacks." arXiv preprint arXiv:2207.07941, 2022.
[2] Xie, Wanyun, Pethick, Thomas, Ramezani-Kebrya, Ali, and Cevher, Volkan. "Mixed Nash for Robust Federated Learning." Transactions on Machine Learning Research. 2023.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Are both white-box and black-box settings used in the pre-training stage? Adding the proposed methods explicitly in Figure 1 might be more clear.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors mentioned limitations in Section 5. The main one is the privacy concern.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1. meta-SE vs BSE
We compare the BSE policy $\theta_{BSE}$ and the meta-SE $\theta_{meta}$ from an information feedback viewpoint. The BSE policy uses the current global model $s^t=w^t_g$ to determine the defense action: $\pi_\mathcal{D}(a^t_\mathcal{D}|s^t, \theta_{BSE})$. This policy is Markovian and only uses the emerging information of the current global model to output actions targeting an average attack. In contrast, meta-SE uses online trajectory $\tau$ (a sequence of local/global model updates) to first generate policy adaptation $\theta\_{adapted}=\theta\_{meta}+\eta \hat{\nabla}\_\theta J\_\mathcal{D}(\tau)$.
Then, the adapted policy uses the current global model to determine the defense action: $\pi_\mathcal{D}(a^t_\mathcal{D}|s^t, \theta_{adapted})$. Naturally, $\tau$ incorporated into $\theta_{adapted}$ reveals more information about the actual attack than $s^t$; hence, $\theta_{adapted}$ captures more emerging information than $\theta_{BSE}$ and is more tailored to the actual attack.
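The one-step adaptation above can be sketched in a few lines. This is a toy illustration only: the linear softmax policy, the state/action dimensions, and the stand-in for the trajectory-based gradient estimate $\hat{\nabla}_\theta J_\mathcal{D}(\tau)$ are assumptions for the sketch, not the paper's actual architecture.

```python
import numpy as np

def policy_probs(theta, state):
    """Softmax defense policy pi_D(a | s; theta) with one linear score per action."""
    scores = theta @ state
    exp_scores = np.exp(scores - scores.max())  # numerically stable softmax
    return exp_scores / exp_scores.sum()

def adapt(theta_meta, grad_estimate, eta=0.1):
    """theta_adapted = theta_meta + eta * (estimated policy gradient from tau)."""
    return theta_meta + eta * grad_estimate

rng = np.random.default_rng(0)
theta_meta = rng.normal(size=(3, 4))  # 3 defense actions, 4-dim global-model state
grad_tau = rng.normal(size=(3, 4))    # stand-in for the trajectory-based estimate
theta_adapted = adapt(theta_meta, grad_tau)
probs = policy_probs(theta_adapted, rng.normal(size=4))
```

The adapted parameters, not the meta parameters, are then used to sample the defense action at each round.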
We conduct additional experiments comparing the defense performance under BSE and meta-SE; see Figure 1 in the author rebuttal pdf. The pre-training follows the same setup as in the paper (see Figure 2 and associated discussions in the paper). The key message is that the BSE policy (red curve) does not display effective adaptation within the 500 rounds, whereas the meta-SE policy (blue curve) quickly adapts to the LMP attack.
2. two relevant frameworks
We believe these papers, at least in their current form, are not suitable to be included as baselines in our experiments due to the following reasons. 1) The two papers address dynamically switching between existing defenses, which is not particularly useful for addressing our mixed attack problem, where multiple types of attacks (e.g., untargeted model poisoning and backdoor attacks) can occur simultaneously in a single round of FL. The focal point of our work is not about **choosing** a defense but rather how to **combine** them effectively. 2) The major contribution of our meta-SG framework is to use meta learning to address the defender's incomplete information of mixed attacks (similar to [1][2]) and adaptively coordinate a set of defenses (beyond [1][2]).
3. computational cost
Our meta-Stackelberg framework deals with the mixed attacks of unknown and uncertain types, which is beyond the scope of other baselines focusing on specific attacks. Since our problem setup is more complicated, it is not surprising that more computational resources are required. We report the running time of pre-training and online implementation in Table 1 in the author rebuttal pdf. We stress that the execution time of the learned meta-Stackelberg is of the same level as the baselines.
4. white-box and black-box settings
The pre-training stage only considers a white-box setting. In a simulated environment, we deal with a set of known attacks/environment parameters, collected from domain knowledge or historical experience. However, in the online stage, we must adapt to unknown attacks/environment parameters, which may be either previously encountered or entirely new. Please also refer to Table 3 in the Appendix to see the set of attacks and defenses employed during pre-training and online adaptation and their related figures/tables.
Both settings are considered in our experiments: the main paper covers only the white-box setting as our default, while the black-box settings and corresponding experiments are detailed in Appendices C and D (see Figures 10 and 11).
[1] Ramezani-Kebrya, Ali, Iman Tabrizian, Fartash Faghri, and Petar Popovski. "Mixtailor: Mixed gradient aggregation for robust learning against tailored attacks." arXiv preprint arXiv:2207.07941, 2022.
[2] Xie, Wanyun, Pethick, Thomas, Ramezani-Kebrya, Ali, and Cevher, Volkan. "Mixed Nash for Robust Federated Learning." Transactions on Machine Learning Research. 2023.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: Thanks for the authors' reply and explanation.
I have the following question. Can backdoor-defending methods like NeuroClip be incorporated into other aggregation rules like FedAvg or Trimmed Mean? If yes, why can't MixTailor or RobustTailor be used?
Currently, meta-RL with extra pre-training is only compared with some basic single aggregators. It would be more convincing if it were compared with some 'smarter' aggregators.
---
Rebuttal 2:
Title: Thanks for the follow-up question
Comment: Thanks for your valuable time in reviewing and constructive comments. The key to our framework's ability to integrate NeuroClip and Pruning is its provision of a scalable and efficient method (i.e., meta-RL) for tuning the hyperparameters of these defense mechanisms. We have implemented a 'smarter' aggregator, where we manually tune hyperparameters (see Table 7) for defenses and choose the optimal ones to be applied for ClipMed and FLTrust + NC in Table 1. When there are multiple hyperparameters to tune, it becomes impractical. For instance, with norm bounding, Trimmed Mean, and NeuroClip, the search space expands to the range of the norm bound threshold multiplied by the trimmed percentage and the clipping range. This search space is continuous and grows exponentially as the number of hyperparameters increases. We naively tested MixTailor (and could not find an open-source implementation for RobustTailor) to dynamically transition among the existing defenses listed in Table 1. However, this approach yielded worse performance compared to using only FLTrust + NC.
Below we further clarify why the original versions of Mixtailor and RobustTailor are not suitable for comparison in our context due to the different considerations in problem setup, online adaptation, and scalability.
$\textbf{Problem setup:}$ Both Mixtailor and RobustTailor consider defenses against a single white-box attack that is tailored to the honest training gradients. That is, the attack objective is to drag the aggregated gradient away from the honest one as much as possible. To counter such attacks, the two works explore the idea of randomization over aggregation rules, creating information asymmetry on the defense side.
However, our work focuses on defending against a mixture of unknown/uncertain attacks. These attacks may not be tailored to the honest gradients, such as targeted attacks. Our framework uses game-theoretic meta-learning to address information asymmetry on the attack side, requiring the defense to be tailored online to unknown/uncertain attacks.
$\textbf{Online Adaptation:}$ Due to different problem setups, MixTailor and RobustTailor do not consider and are unable to incorporate online adaptation. MixTailor simply picks an aggregation randomly, while RobustTailor approximately solves a minimax problem to get the sampling distribution over the set of aggregation rules. One can view the resulting defense as a worst-case defense without considering the actual attack methods. RobustTailor only uses the current gradient information at each round to determine the aggregation.
In contrast, our proposed method considers online adaptation, which utilizes trajectories of online model updates (gradients) to implicitly identify the attack type (different attacks induce different trajectories). A trajectory contains more informative feedback than gradients at each round. Then, online gradient adaptation is derived using the trajectories.
$\textbf{Scalability:}$ Both Mixtailor and RobustTailor considered randomization over a finite set of aggregation rules. Our experiments consider defense methods (e.g., aggregation rules) parameterized by continuous parameters that are optimized by the proposed meta-RL algorithm using policy gradients, which can efficiently handle continuous action spaces. In contrast, to apply the two randomization methods, we need to first discretize the parameter space. The number of discretized parameters grows exponentially with respect to the dimension (i.e., the number of hyperparameters for all defenses combined).
---
Rebuttal Comment 2.1:
Title: Thanks for the detailed response
Comment: Thanks for the detailed response, and it addressed my concerns about the other two frameworks.
I'll keep my score because I'm not sure whether a little privacy leakage is acceptable in federated learning. | null | null | null | null | null | null |
Geometry Cloak: Preventing TGS-based 3D Reconstruction from Copyrighted Images | Accept (poster) | Summary: This paper proposes a new image cloaking approach, which adds adversarial noise to a single-view image and makes TGS-based 3D reconstruction fail. This can serve as a watermark for protecting copyrighted image assets.
Strengths: The topic is popular and needs more investigation by the community. The paper writing is clear.
Weaknesses: 1. The main weakness is that the scope of this work is too limited. The motivation is to protect copyrighted images from unauthorized 3D reconstruction; however, the work only targets TGS-based reconstruction, which is too limited to have practical effects. I suggest the authors should at least do experiments on other single-image 3D reconstruction works, for example LRM [1], Gamba [2], or LGM [3]. This work shows fair results on TGS, but TGS itself is not representative among 3D reconstruction works and does not yield the best single-image 3D reconstruction results. In practice, we cannot assume which image-to-3D model unauthorized users will use, so the technique only makes sense when the watermarking/cloaking can effectively generalize to shield all those image-to-3D reconstruction algorithms.
2. Some typos.
Line 138, "preventing copyrighted"?
Reference
[1] LRM: Large Reconstruction Model for Single Image to 3D. ICLR 2024
[2] Gamba: Marry Gaussian Splatting with Mamba. https://arxiv.org/abs/2403.18795
[3] LGM: Large Multi-View Gaussian Model for High-Resolution 3D Content Creation. https://arxiv.org/abs/2402.05054
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. How to specify the observation viewing direction? Is it arbitrarily selected? How will the specified viewing direction affect reconstruction effects? (e.g. when the observation viewing direction is not good, will the algorithm fail?)
2. The example images shown in the paper use only one character or number for pre-defined patterns. If the watermark information is more complex and needs multiple characters/numbers, what will the results be like?
3. Regarding weakness, can you discuss how this method can generalize to other single image to 3D reconstruction algorithms? Is there any more generalized way to do watermarking/cloaking ?
4. In the method description, the authors mention they use "a mask version of PGD", but details on how to generate the mask are not given in the text.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 1
Limitations: The author has discussed the social impacts. Overall this is a work for copyright protection, which does not have direct negative social impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer,
Thank you for your valuable feedback and suggestions.
**Response to W1: Extending to other methods**
Our method is designed to utilize the explicit geometry feature in GS-based single-view-to-3D methods, which is fragile and susceptible to disturbances in the reconstruction process. Thus, our method can work on various GS-based single-view-to-3D methods, as the explicit geometry feature is a necessary element of 3DGS. Before our submission, other GS-based single-image-to-3D methods had not been formally peer-reviewed. While we primarily focus on TGS, as it was peer-reviewed during our research, our approach can be extended to other recent methods. To illustrate this, we evaluate our technique on the recently proposed LGM [1], accepted by ECCV 2024. The results are appended in Fig. R1 and Tab. R1 of the rebuttal page. By simply adapting key design elements, our approach shows promising results on LGM as well, underscoring its potential for manipulating other GS-based single-image-to-3D reconstruction methods.
**Response to W2: Typos**
We appreciate you pointing out these typos. The typos in our paper will be carefully corrected.
**Response to Q1: Observation viewing direction**
1. In the paper, we present the results from the top view. In theory, our method can be applied to arbitrarily selected viewing directions. The algorithm will not fail because we are simply mapping the projected point cloud at direction θ, then using this projected point cloud to present the watermark. We present the results of embedding watermarks at different views in Fig. R5 and Tab. R1 (right). The experimental results indicate that our method can work well when embedding watermarks in different directions.
2. Our watermarks are embedded at a certain location, which is sufficient for copyright verification of **3D** models. As shown in the visual results of the watermark from a certain angle in Fig. R5 (left), viewing the 3D model from other, unsuitable perspectives will yield low-quality fragmented information. Besides, to verify copyright, users can verify from any angle, as long as the watermark is identifiable and matches the pre-embedded one. We will incorporate these results in our next version to better clarify this property of the geometry cloak.
| | Front | Side | Top |
| ----------------- | ------ | ------ | ------ |
| PSNR $\downarrow$ | 15.4 | 14.37 | 13.02 |
| SSIM $\downarrow$ | 0.808 | 0.797 | 0.762 |
| LPIPS $\uparrow$ | 0.170 | 0.172 | 0.213 |
| CD $\uparrow$ | 138.76 | 150.43 | 193.74 |
**Response to Q2: Multiple characters**
1. Although the geometry cloak was not specifically designed for multiple characters, our method can still ensure identifiable results when the watermarks are multiple characters. We provide visual results in Fig. R5 (left), which remain promising when there are fewer than four characters. This suggests that the geometry cloak holds promising potential regarding watermark capacity.
2. Besides, compared to previous methods of image cloaking [2,3] against the diffusion model, which can only use artist-style as watermarks, the watermarks embedded via our method are much more identifiable.
**Response to Q3: Generalized way to do watermarking/cloaking**
1. We have discussed how our method can be generalized to other single image to 3D reconstruction algorithms in response to W1. Preliminary experimental results on LGM [1] are provided in Fig. R1 and Tab. R1, indicating our method is promising in manipulating other GS-based single-image-to-3D reconstruction methods.
2. To the best of our knowledge, this is the first paper to reveal a vulnerability in GS-based 3D reconstruction and effectively manipulate the generated results using this vulnerability. The results generated via GS-based methods can be manipulated through adversarial attacks on geometric features, which is an issue worthy of attention as 3DGS is rather popular today. More studies should be conducted to protect image/model owners' copyrights and prevent potential malicious attackers. Our method provides a novel perspective on adversarial perturbations, enabling copyrighted images to resist misuse by GS-based methods, which could potentially encourage related research on enhancing the robustness of 3D reconstruction and addressing AI privacy/safety issues.
**Response to Q4: Details of obtaining masks**
We used SAM to obtain a segmentation of the images to ensure that the perturbations were only added to the objects. We will provide a detailed generation process in the next version of the paper, and the experimental code of the geometry cloak will be released to ensure reproducibility.
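The masking idea can be sketched as a standard L-infinity PGD step whose update is zeroed outside the object mask. This is a minimal illustration only: the step size, perturbation budget, function name, and mask source are assumptions for the sketch, not the paper's exact settings.

```python
import numpy as np

def masked_pgd_step(x_adv, grad, x_orig, mask, alpha=2/255, eps=8/255):
    """One masked PGD step: ascend the sign of the loss gradient only where
    mask == 1 (the object region), then project back into the L_inf ball of
    radius eps around the original image and the valid pixel range [0, 1]."""
    x_adv = x_adv + alpha * np.sign(grad) * mask
    x_adv = np.clip(x_adv, x_orig - eps, x_orig + eps)
    return np.clip(x_adv, 0.0, 1.0)
```

Iterating this step with the gradient of an attack loss leaves the background pixels untouched while keeping the object perturbation within the budget.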
We look forward to discussing with you in the upcoming discussion phase to clarify anything we may have neglected.
[1] LGM: Large Multi-View Gaussian Model for High-Resolution 3D Content Creation. In ECCV 2024.
[2] Adversarial Example Does Good: Preventing Painting Imitation from Diffusion Models via Adversarial Examples. In ICML 2024.
[3] Glaze: Protecting Artists from Style Mimicry by Text-to-Image Models. In USENIX 2023.
---
Rebuttal 2:
Comment: Thanks for your rebuttal, which clarifies my questions regarding viewing direction and multiple characters watermarking pattern. However, I still have significant concerns regarding the generalization capabilities of the current version of the work.
The authors suggest that their method can generalize to LGM by `simply adapting key elements`. However, the rebuttal materials do not provide sufficient details about the specific algorithmic adjustments required for LGM. LGM employs a fundamentally different architecture to achieve single-image-to-3D generation. Specifically, it first utilizes off-the-shelf models like ImageDream or MVDream to synthesize multi-view images, followed by an asymmetric U-Net architecture with cross-view self-attentions to construct Gaussian features. Given these foundational differences, it is unclear how the authors' approach could be seamlessly adapted to LGM [1]. For instance, the method described in the paper relies on a point cloud encoder from TGS and uses the Chamfer distance between two point clouds as a loss to craft adversarial perturbations (c.f. Algorithm 1 and Figure 2). These critical components do not translate directly to LGM [1], as it does not incorporate point cloud encoders. If the authors want to extend their method to other single-image-to-3D models, they would have to fundamentally change their motivation statement and algorithm description, which would require a substantial modification of the current version of the paper.
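For concreteness, the projected-gradient (PGD) mechanic referred to here (Algorithm 1 of the paper) follows the standard pattern of signed gradient-ascent steps projected back into an $\epsilon$-ball. The sketch below is a generic illustration with an analytic toy gradient standing in for autograd through a point cloud encoder; it is not code from the paper.

```python
import numpy as np

def pgd_attack(x0, grad_fn, eps=8/255, alpha=2/255, steps=10):
    """L_inf-bounded PGD: take signed gradient-ascent steps on the loss,
    projecting back into the eps-ball around the clean input x0 each time."""
    x = x0.copy()
    for _ in range(steps):
        x = x + alpha * np.sign(grad_fn(x))
        x = np.clip(x, x0 - eps, x0 + eps)  # project onto the budget
        x = np.clip(x, 0.0, 1.0)            # keep a valid image in [0, 1]
    return x

# Toy differentiable "loss": squared distance from a fixed value, with an
# analytic gradient (hypothetical stand-in for the real reconstruction loss).
target = 0.5
grad = lambda x: 2.0 * (x - target)  # d/dx of (x - target)^2
x0 = np.full((4, 4), 0.4)
x_adv = pgd_attack(x0, grad)
print(np.abs(x_adv - x0).max() <= 8/255 + 1e-9)  # True: within the budget
```

Whatever encoder the method targets, the projection step is what keeps the cloak within the invisibility budget $\epsilon$.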
In the rebuttal, the authors mention that their method utilizes `geometry features in GS-based single views to 3D methods`, about which I have reservations. Their initial claim is that the method is specifically tailored for Triplane Gaussian Splatting. It is important to note that different approaches to single-image-to-3D conversion employ significantly diverse network architectures (e.g., LRM [2] uses transformers, Gamba [3] uses Mamba); as a result, their `geometry feature` spaces are highly distinct. Ensuring image copyright protection while considering generalization across these varied models is indeed challenging, and it may not be accurate to make such a broad generalization claim for their methodology design.
TGS[4] is a recently accepted paper in this field, but it is not a representative technique, and its reconstruction results are not among the best. The likelihood of it being widely used in real-world applications remains highly questionable. The key point here is that applying adversarial attacks to **a specific model** is too limited to be considered a broadly useful copyright protection method. Similarly, designing adversarial attacks tailored to a specific method may be too narrow in scope for a NeurIPS paper.
---
**Reference**
[1] LGM: Large Multi-View Gaussian Model for High-Resolution 3D Content Creation. ECCV 2024.
[2] LRM: Large Reconstruction Model for Single Image to 3D. ICLR 2024
[3] Gamba: Marry Gaussian Splatting with Mamba for Single-View 3D Reconstruction. arxiv 2024
[4] Triplane Meets Gaussian Splatting: Fast and Generalizable Single-View 3D Reconstruction with Transformers. CVPR 2024
---
Rebuttal 3:
Title: Thanks for your feedback 1
Comment: We would like to clarify the misunderstanding of the settings of LGM and point cloud encoder.
LGM [1] and Gamba [3] do have an encoder that produces a point cloud.
### **Settings of LGM [1]**
For LGM, which takes four input views obtained from ImageDream/MVDream, we directly optimize adversarial perturbations on these four input images to verify the effectiveness of our method in undermining the generated 3D results. LGM does have a **point cloud encoder** part that produces the point cloud, as 3DGS needs a point cloud to represent the 3D scene. Specifically, please refer to Line 109 of LGM/core/models.py: https://github.com/3DTopia/LGM/blob/main/core/models.py
```python
109 pos = self.pos_act(x[..., 0:3]) # [B, N, 3]
110 opacity = self.opacity_act(x[..., 3:4])
111 scale = self.scale_act(x[..., 4:7])
112 rotation = self.rot_act(x[..., 7:11])
113 rgbs = self.rgb_act(x[..., 11:])
114 gaussians = torch.cat([pos, opacity, scale, rotation, rgbs], dim=-1) # [B, N, 14]
```
In our rebuttal settings, we directly targeted and perturbed the `pos` property of the Gaussians (Line 109, `pos = self.pos_act(x[..., 0:3])`), which is the center of the 3DGS (i.e., the point cloud).
### **Adapting to LRM [2] and Gamba [3]**
For NeRF-based methods like LRM [2], which do not require explicit geometry features, our method may not be suitable. We did not experiment with Gamba as it has not been formally peer-reviewed; however, it also has a point cloud encoder to estimate the point cloud (Lines 130-131, `position`).
https://github.com/kyegomez/Gamba/blob/main/gamba_torch/main.py
```python
130 # Position, opacity, color
131 position = self.position_layer(features)
132 opacity = self.opacity_layer(features)
133 color = self.color_layer(features)
```
Thus, Gamba is also required to estimate the point cloud, and we do not need to change our motivation statement or algorithm description: any GS-based method must estimate a point cloud (the Gaussian centers) to represent the 3D scene.
[1] LGM: Large Multi-View Gaussian Model for High-Resolution 3D Content Creation. ECCV 2024.
[2] LRM: Large Reconstruction Model for Single Image to 3D. ICLR 2024
[3] Gamba: Marry Gaussian Splatting with Mamba for Single-View 3D Reconstruction. arxiv 2024
---
Rebuttal 4:
Title: Thanks for your feedback 2
Comment: Dear reviewer,
Thanks for your valuable feedback.
Our experimental findings indicate that explicit geometry features can be effectively utilized to protect ownership in GS-based tasks, despite TGS being a recent development. Due to the explicit nature of GS, GS-based image-to-3D methods inherently require similar explicit geometry features. This insight suggests that our approach can be extended to these methods, potentially enhancing security and ownership protection in single-image-to-3D applications and inspiring future research in this area.
We take your concerns about generalization ability seriously. We will incorporate these additional experiments on other GS-based approaches into the final version, following your valuable suggestions. We look forward to addressing any further concerns during this discussion.
Strengths: 1. The motivation of this paper is well-stated and significant, and the paper is well-written. The idea of using geometry cloaks to protect images from unauthorized 3D reconstruction is novel. This paper is an attempt to address the issue of image abuse in 3D reconstruction. Such an issue has not received sufficient attention, but it is very important, especially when 3D reconstruction technologies are becoming more accessible. This paper can also raise the community's awareness of this issue.
2. Besides significantly distorting the 3D reconstruction from the protected images, the idea of embedding identifiable patterns into output renderings is interesting. This allows the image owners to determine whether the generated 3D models have used their copyrighted images. This traceability property can enhance the practicality of the method for digital copyright protection.
3. The paper provides extensive experimental results, validating the effectiveness of the proposed approach across different datasets and perturbation strategies. This thorough evaluation helps in building confidence in the robustness and reliability of the proposed method. The approach is scalable and can be applied to a wide range of images without requiring significant computational resources. This enables artists and content creators to safeguard from being misused illegally.
Weaknesses: 1. The paper lacks quantitative metrics comparing the similarity between protected and unprotected images, such as PSNR, which would provide a more comprehensive evaluation of the method's impact on image quality.
2. The importance of the view-specific PGD angle has not been thoroughly explored. It would be beneficial to investigate whether the method is effective from different angles, not just the top view. Besides, the effectiveness of combined attacks on point cloud and triplane latent features should be explored to determine if they provide better protection.
3. Minor Issues:
a). Add a period at the end of "Quantitative comparison of perturbation strategies" in the Table 1 caption.
b). Replace "no perturbation" with "not perturbed" (L255 P7).
c). Correct the citation format "et al" to "et al." (L95 P3).
Technical Quality: 3
Clarity: 4
Questions for Authors: Please see the weakness.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: The authors have adequately addressed the limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer,
Thank you for your recognition and valuable suggestions.
**Response to W1: Impact on image quality**
Our geometry cloak is designed to be invisible, so legitimate users see results visually consistent with the original image quality. All perturbations are controlled within a certain budget $\epsilon$; hence, we did not initially report the impact on image quality. For a more comprehensive evaluation of the method's impact on image quality, we provide quantitative metrics of the similarity between protected and unprotected images below:
| | $\epsilon$ | PSNR$\uparrow$ | SSIM$\uparrow$ | LPIPS$\downarrow$ |
|-----------------|-----|-------|--------|---------|
|Ours | 2 | 34.28 | 0.9779 | 0.0220 |
| | 4 | 34.14 | 0.9758 | 0.0223 |
| | 8 | 33.57 | 0.9668 | 0.0256 |
|Ours w/o target | 2 | 33.85 | 0.9717 | 0.0337 |
| | 4 | 33.71 | 0.9695 | 0.0352 |
| | 8 | 33.37 | 0.9631 | 0.0337 |
| Adv. image | 2 | 32.32 | 0.9558 | 0.0361 |
| | 4 | 32.25 | 0.9544 | 0.0354 |
| | 8 | 31.86 | 0.9475 | 0.0383 |
Compared to previous methods of adversarial attacks on image features, our proposed geometry cloak ensures higher invisibility while effectively disturbing the reconstruction results. Besides TGS, our method also preserves invisibility for LGM [1], as shown in Tab. R1 of the rebuttal page.
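For reference, the PSNR values in the table compare protected and clean images in the standard way; a minimal numpy version (assuming 8-bit images on a 0-255 scale, with a toy random perturbation standing in for the actual cloak) is:

```python
import numpy as np

def psnr(clean: np.ndarray, other: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two equally shaped images."""
    mse = np.mean((clean.astype(np.float64) - other.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3)).astype(np.float64)
# Toy perturbation clipped to a budget of eps = 2 (on the 0-255 scale).
cloaked = np.clip(img + rng.uniform(-2.0, 2.0, img.shape), 0, 255)
print(psnr(img, cloaked))  # well above 40 dB: the perturbation is near-invisible
```

A perturbation bounded by $\epsilon = 2$ out of 255 necessarily keeps the PSNR high, which is consistent with the 34 dB range reported above for the optimized cloak.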
**Response to W2: Viewing direction and Adv. triplane**
(View-specific PGD) We conduct experiments from other perspectives (side/front). The experimental results indicate that the reconstructed results can still be effectively manipulated from these perspectives, demonstrating the effectiveness of our method. We also provide results of embedding multiple letters to further demonstrate the view-specific PGD performance (Fig. R5 and Tab. R1).
| | Front | Side | Top |
|----------------------------|--------|--------|--------|
| PSNR $\downarrow$ | 15.4 | 14.37 | 13.02 |
| SSIM $\downarrow$ | 0.808 | 0.797 | 0.762 |
| LPIPS $\uparrow$ | 0.170 | 0.172 | 0.213 |
| CD $\uparrow$ | 138.76 | 150.43 | 193.74 |
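The CD row above is the Chamfer distance between the point clouds of the clean and manipulated reconstructions; a minimal brute-force numpy version (symmetric, squared, assuming point clouds as (N, 3) arrays — a generic sketch, not the paper's implementation) is:

```python
import numpy as np

def chamfer_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Symmetric Chamfer distance between point clouds p (N, 3) and q (M, 3):
    mean squared distance from each point to its nearest neighbour in the
    other cloud, summed over both directions."""
    d2 = ((p[:, None, :] - q[None, :, :]) ** 2).sum(axis=-1)  # (N, M) pairwise
    return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())

pts = np.random.default_rng(0).normal(size=(128, 3))
print(chamfer_distance(pts, pts))        # 0.0 for identical clouds
print(chamfer_distance(pts, pts + 0.5))  # grows as the clouds drift apart
```

Higher CD thus indicates a more strongly disturbed geometry, which is why the metric is marked with $\uparrow$ in the table.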
(Effectiveness of point cloud + tri-plane latent features) Following your suggestions, we conducted experiments combining attacks on the point cloud and tri-plane latent features. The results in Fig. R2 (a) indicate that attacking only the tri-plane affects the visual quality of the reconstruction, and there is no significant change after additionally attacking the point cloud. Future work could focus on identifying the components of the 3D reconstruction process that are vulnerable to disturbance.
**Response to W3: Minor Issues**
We appreciate your pointing out these typos; they will be carefully corrected in the next version of our paper.
[1] LGM: Large Multi-View Gaussian Model for High-Resolution 3D Content Creation. In ECCV 2024.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. All of my concerns are addressed. The experiments on adapting to other methods should be incorporated into the main paper. The figures in the rebuttal for multi-character embedding are impressive. Thus, I will keep my original rating.
---
Reply to Comment 1.1.1:
Title: Thanks for your feedback
Comment: Dear Reviewer,
We are very grateful for your recognition. We will incorporate the experimental results into the main paper following your valuable suggestions.
Best regards,
Authors of #1185 | Summary: The paper introduces a novel approach to protect copyrighted images from unauthorized 3D reconstructions using Triplane Gaussian Splatting (TGS). The method involves embedding invisible geometry perturbations, termed "geometry cloaks," into images. These cloaks cause TGS to fail in a specific way, generating a recognizable watermark, thus protecting the original content.
Strengths: 1. The concept of protecting images against unauthorized 3D reconstruction using geometric perturbations is novel.
2. This paper is well-detailed and well-written.
Weaknesses: 1. The approach is tailored specifically to TGS-based 3D reconstruction. Can the geometry cloak technique be adapted or extended to protect against other 3D reconstruction methods beyond TGS?
2. What are the potential impacts on image quality for legitimate uses when these cloaks are applied?
3. The optimization process for generating geometry cloaks might introduce significant computational overhead, which is not thoroughly discussed in the paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: See Weaknesses
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Mentioned in the manuscript.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer,
Thank you for your recognition and valuable suggestions.
**Response to W1: Extending to other methods**
Our method is designed to exploit the explicit geometry features in GS-based single-view-to-3D methods, which are fragile and susceptible to disturbance during the reconstruction process. Thus, our method can work on various GS-based single-view-to-3D methods, as explicit geometry features are a necessary element of 3DGS. Before our submission, other GS-based single-image-to-3D methods had not been formally peer-reviewed. While we primarily focus on TGS, as it was peer-reviewed during our research, our approach can be extended to other recent methods. To illustrate this, we evaluate our technique on the recently proposed LGM [1], accepted by ECCV 2024. The results are appended in Fig. R1 and Tab. R1 of the rebuttal page. By simply adapting key design elements, our approach shows promising results on LGM as well, underscoring its potential for manipulating other GS-based single-image-to-3D reconstruction methods.
**Response to W2: Impact on image quality**
Our geometry cloak is designed to be invisible, so legitimate users see results visually consistent with the original image quality. All perturbations are controlled within a certain budget $\epsilon$; hence, we did not initially report the impact on image quality. For a more comprehensive evaluation of the method's impact on image quality, we provide quantitative metrics of the similarity between protected and unprotected images below:
| | $\epsilon$ | PSNR$\uparrow$ | SSIM$\uparrow$ | LPIPS$\downarrow$ |
|-----------------|-----|-------|--------|---------|
| Ours | 2 | 34.28 | 0.9779 | 0.0220 |
| | 4 | 34.14 | 0.9758 | 0.0223 |
| | 8 | 33.57 | 0.9668 | 0.0256 |
|Ours w/o target | 2 | 33.85 | 0.9717 | 0.0337 |
| | 4 | 33.71 | 0.9695 | 0.0352 |
| | 8 | 33.37 | 0.9631 | 0.0337 |
| Adv. image | 2 | 32.32 | 0.9558 | 0.0361 |
| | 4 | 32.25 | 0.9544 | 0.0354 |
| | 8 | 31.86 | 0.9475 | 0.0383 |
Compared to previous methods of adversarial attacks on image features, our proposed geometry cloak ensures higher invisibility while effectively disturbing the reconstruction results. Besides TGS, our method yields consistently invisible results on LGM [1], demonstrating its generality and concealment (as shown in Tab. R1 of the rebuttal page).
**Response to W3: Computational resources**
Our method only optimizes the invisible cloak, which does not require many computational resources (~8 GB GPU memory). We provide the convergence curve of the loss during optimization on the rebuttal page. With a single V100 GPU, each image can be protected in less than 50 seconds. These computational details will be incorporated into the next version of our paper.
[1] LGM: Large Multi-View Gaussian Model for High-Resolution 3D Content Creation. In ECCV 2024.
---
Rebuttal Comment 1.1:
Comment: Thank you for the thorough response. Your detailed explanations have resolved most of my concerns. Given the method's demonstrated extensibility and invisibility, I will be increasing my score.
One additional suggestion: it may be beneficial to include a pixel-wise difference map between protected and unprotected images.
---
Reply to Comment 1.1.1:
Title: Thanks for your feedback
Comment: Dear Reviewer,
We are very grateful for your recognition. We will provide pixel-wise difference maps for protected and unprotected images in the final version based on your valuable suggestions.
Best regards,
Authors of #1185 | Summary: The paper introduces a novel image protection approach called "Geometry Cloak" to prevent unauthorized 3D model generation from copyrighted images using single-view 3D reconstruction methods like Triplane Gaussian Splatting (TGS). The Geometry Cloak embeds invisible geometry perturbations into images, which are revealed as a customized message when TGS attempts 3D reconstructions, thus acting as a watermark for copyright assertion.
Strengths: 1. This paper raises a novel question, namely how to protect the copyright in the process of image to 3D, which is very meaningful.
2. The presentation of this paper is very clear and easy to understand.
3. A view-specific PGD strategy is proposed to optimize geometry cloak, which is simple but effective.
4. The authors conduct experiments on two 3D datasets and various types of patterns, and verify the effectiveness of the experimental results via sufficient ablation experiments and visualization results.
Weaknesses: 1. Can the proposed method be extended to other image to 3D models, such as LRM [1], LGM [2]? The author could introduce the advantages of using TGS instead of other image to 3D models in terms of generation speed and quality, so that the readers can better understand its task scenario.
[1] Lrm: Large reconstruction model for single image to 3d. In ICLR 2024.
[2] Large multi-view gaussian model. In ECCV 2024.
2. The robustness of the proposed method should be verified. For example, when Gaussian noise and JPEG compression are added to the protected image, can it still resist illegal theft?
Technical Quality: 4
Clarity: 4
Questions for Authors: Please refer to the weakness.
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The authors have clearly presented their limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer,
Thank you for your recognition and valuable suggestions.
**Response to W1: Extending to other methods**
Our method is designed to exploit the explicit geometry features in GS-based single-view-to-3D methods, which are fragile and susceptible to disturbance during the reconstruction process. Thus, our method can work on various GS-based single-view-to-3D methods, as explicit geometry features are a necessary element of 3DGS. Before our submission, other GS-based single-image-to-3D methods had not been formally peer-reviewed. While we primarily focus on TGS, as it was peer-reviewed during our research, our approach can be extended to other recent methods. To illustrate this, we evaluate our technique on the recently proposed LGM [1], accepted by ECCV 2024. The results are appended in Fig. R1 and Tab. R1 of the rebuttal page. By simply adapting key design elements, our approach shows promising results on LGM as well, underscoring its potential for manipulating other GS-based single-image-to-3D reconstruction methods. For NeRF-based methods like LRM [2], our work could inspire more research into the vulnerable layers of these frameworks.
TGS combines explicit geometry features and implicit tri-plane representations to achieve accurate and detailed reconstructions, representing a cutting-edge approach to 3D object reconstruction from **single-view images**. Before our submission, LGM had not been officially peer-reviewed; thus, we chose TGS as our experimental subject to verify the effectiveness of the geometry cloak. However, this does not affect the generality of our approach across GS-based methods. As discussed in the paper, GS-based methods require explicit geometric features to represent 3D models, and the process of obtaining these features is easily manipulated with proper adversarial perturbations. We will extend the experimental results on LGM in our paper to further clarify the effectiveness of the geometry cloak.
**Response to W2: Robustness against image compression**
To verify the robustness of our method, we experimented with several common image operations, including Gaussian noise and JPEG compression. Under mild operations, the geometry cloak cannot be removed, and the geometric features of the generated 3D result remain disturbed. Under stronger operations, the geometry cloak can be affected; however, the quality of the input view image is then severely compromised, leading to poor visual quality in the reconstructed results and making these protected images unusable.
| | No comp. | Noise 5.0 | Noise 10.0 | JPEG 60 | JPEG 90 |
|:---:|:---:|:---:|:---:|:---:|:---:|
| PSNR $\downarrow$ | 11.05 | 10.97 | 11.21 | 20.13 | 14.71 |
| SSIM $\downarrow$ | 0.804 | 0.806 | 0.798 | 0.861 | 0.807 |
| LPIPS $\uparrow$ | 0.194 | 0.194 | 0.197 | 0.111 | 0.158 |
| CD $\uparrow$ | 155.6 | 118.7 | 93.81 | 14.75 | 42.22 |
We present more results about robustness against common image operations in Tab. R1 of the rebuttal page.
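For reproducibility, the Gaussian-noise operation in the table can be sketched generically in numpy on a 0-255 scale (the JPEG step would additionally require an image codec such as Pillow). Treating the levels 5.0 and 10.0 as standard deviations is an assumption, since the exact pipeline is not shown here:

```python
import numpy as np

def add_gaussian_noise(img: np.ndarray, sigma: float, seed: int = 0) -> np.ndarray:
    """Add zero-mean Gaussian noise with std `sigma` (0-255 scale) and clip."""
    rng = np.random.default_rng(seed)
    noisy = img.astype(np.float64) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0.0, 255.0)

img = np.full((32, 32, 3), 128.0)  # flat grey test image
for sigma in (5.0, 10.0):          # the two noise levels in the table
    out = add_gaussian_noise(img, sigma)
    print(sigma, float(np.abs(out - img).mean()))
```

Applying such operations to the protected image, then re-running the reconstruction, is the robustness protocol the table summarizes.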
[1] LGM: Large Multi-View Gaussian Model for High-Resolution 3D Content Creation. In ECCV 2024.
[2] LRM: Large reconstruction model for single image to 3d. In ICLR 2024.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response. My concerns are well-addressed. Specifically, this method exhibits notable effectiveness across various GS-based single-image-to-3D approaches, proving its broader applicability. Besides, from the experimental results in the rebuttal, this paper also demonstrates that complex patterns can be efficiently generated for copyright protection. Considering the rebuttal and these two additional merits, I will increase my rating to 8.
---
Reply to Comment 1.1.1:
Title: Thanks for your feedback
Comment: Dear Reviewer,
We are very grateful for your recognition. We will integrate these results into the next version of our paper based on your valuable feedback.
Best regards,
Authors of #1185 | Rebuttal 1:
Rebuttal: Dear reviewers,
We would like to thank all the reviewers for their time and for writing thoughtful reviews of our work.
In this work, we introduce the geometry cloak, which can effectively manipulate the process of 3DGS-based single-image-to-3D methods by adding invisible perturbations. We reveal that the explicit geometric features are vulnerable components in the reconstruction process. Our geometry cloak can work on various methods such as LGM [1], as 3DGS requires explicit geometry features for 3D representation. By exploiting this vulnerability, we can effectively manipulate the reconstructed 3D results, a noteworthy issue given the popularity of 3DGS today. Our approach offers a fresh viewpoint on adversarial perturbations, preventing copyrighted images from being used by GS-based methods, which could stimulate further research into improving the resilience of 3D reconstruction and addressing AI privacy concerns.
To further clarify our work, we have provided more experimental results on the rebuttal page.
**1. Extending to other methods**
Fig. R1 and Tab. R1 provide the qualitative and quantitative results when implementing our method on LGM [1].
Fig. R1 presents the reconstructed views and point cloud via LGM under different perturbation strategies. The reconstructed 3D model is undermined and manipulated via our geometry cloak.
In Tab. R1, we experiment on three default scenes in LGM and report the reconstructed results when applying Gaussian noise and our method to the input views. The results show that our method can effectively disturb the quality of the reconstructed 3D model.
**2. Combining adv. tri-plane**
Fig. R2 (a) presents the visual results when combining perturbation on the tri-plane feature and geometry feature. Combining the two does not improve the attack performance, as the tri-plane feature is a robust part of TGS that is difficult to disturb. Future work could focus on studying the components in the 3D reconstruction process that are vulnerable to disturbances.
**3. Perturbations with smaller/larger budget**
Fig. R2 (b-c) provide the visual results when employing a smaller/larger budget $\epsilon$. These two figures indicate that our method is insensitive to larger epsilon values, as high-intensity Gaussian noise struggles to disturb the geometric features of the reconstructed results.
**4. Tendency of performance degradation**
Fig. R3 (1-3) illustrate the quality of the reconstructed 3D results under different epsilon intensities. An obvious decrease in reconstruction quality occurs within the 0 to 4 intensity range.
**5. Convergence status under different budget**
Fig. R3 (4) presents the convergence status under different budgets.
**6. Quality of protected image**
Fig. R4 (1-3) shows the quality of the protected image under different perturbation strategies.
**7. Computational resources**
Fig. R4 (4) presents the time required to finish protection via our method.
**8. Viewing direction**
Fig. R5 (left) illustrates the visual results of embedding a watermark at different angles and observing it from different perspectives.
Tab. R1 (right) shows the metric results when embedding a watermark at different angles.
**9. Multi-character as watermarks**
Fig. R5 (right) presents the results when embedding multiple characters in the side view.
**10. Robustness against image compression**
Tab. R1 (right) presents the results when the protected image is modified via common image operations.
[1] LGM: Large Multi-View Gaussian Model for High-Resolution 3D Content Creation. In ECCV 2024.
Pdf: /pdf/ed2115db345b67be2dda429373c3c111720adccd.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper presents a method for copyrights in 3D reconstruction, specifically targeting novel-view synthesis, rather than traditional 2D images. Recently, advancements in 3D reconstruction have been driven by neural radiance fields (NeRFs) and 3D Gaussian splatting (3D GS), both of which maintain 3D consistency. These methods primarily focus on learning RGB values from multiple posed images. Notably, recent developments have shown that single-view 3D reconstruction can be achieved with the support of pre-trained diffusion-based generative models.
Unlike 2D images, privacy preservation in 3D reconstruction has received less attention due to its recent development. This paper introduces a geometry cloak that perturbs 3D point cloud representations instead of 2D images. This perturbation prevents 3D reconstruction from 2D posed images without degrading the visual quality of the 2D images.
The technique is particularly supportive of 3D GS, enabling real-time novel-view synthesis, a capability not supported by existing NeRFs. To the best of our knowledge, this paper is the first to propose an adversarial attack to preserve copyright in 3D GS from a 3D reconstruction perspective.
Strengths: 1. This paper addresses contemporary issues regarding copyright protection in 3D reconstruction. Utilizing the framework of Tri-Plane Gaussian Splatting (TGS), which encodes posed images into tri-plane representations and 3D point clouds before generating 3D Gaussian splatting, the paper proposes a novel perturbation method. This method degrades the quality of 3D geometry without affecting the visual quality of 2D images.
2. This demonstrates that the proposed perturbation method significantly impacts the performance of 3D reconstruction, regardless of the degree of perturbation, unlike simple noise injections. It also shows that this method is robust and not sensitive to hyper-parameter variations.
Weaknesses: 1. This paper heavily relies on the prior work of Tri-Plane Gaussian Splatting (TGS), which explicitly represents 3D point clouds to enhance geometric properties and employs tri-plane representation to encode 3D Gaussians. Without leveraging TGS, this study would not effectively address copyright issues in 3D reconstruction. This dependency indicates that the proposed approach is not applicable to a wide range of novel-view synthesis techniques, highlighting a limitation in its generalizability.
2. While this paper demonstrates that the proposed method is insensitive to the degree of perturbation, it does not adequately explain this phenomenon in the context of privacy preservation. Understanding the tendency of performance degradation is crucial and has typically been well explained through probabilistic analysis of the extent of perturbation. However, the experimental results presented in this paper do not support this concept.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Could you show the slope of performance deviation when $\epsilon$ is less than 2? When $\epsilon = \{0, 1\}$, does the proposed method also exhibit a negative slope of performance in terms of PSNR, SSIM, and LPIPS?
2. Could you show the performance degradation of random noise when $\epsilon$ increases beyond 8? It should demonstrate that the proposed approach is more beneficial under strong perturbation. While $\epsilon=8$ indicates color disturbance, the context and shape in the 2D images do not change.
3. Could you present the ablation study on the influences of PGD?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Please refer to weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer,
We express our gratitude for your recognition and valuable suggestions.
**Response to W1: Extending to other methods**
Our method is designed to exploit the explicit geometry features in GS-based single-view-to-3D methods, which are fragile and susceptible to disturbance during the reconstruction process. Thus, our method can work on various GS-based single-view-to-3D methods, as explicit geometry features are a necessary element of 3DGS. Before our submission, other GS-based single-image-to-3D methods had not been formally peer-reviewed. While we primarily focus on TGS, as it was peer-reviewed during our research, our approach can be extended to other recent methods. To illustrate this, we evaluate our technique on the recently proposed LGM [1], accepted by ECCV 2024. The results are appended in Fig. R1 and Tab. R1 of the rebuttal page. By simply adapting key design elements, our approach shows promising results on LGM as well, underscoring its potential for manipulating other GS-based single-image-to-3D reconstruction methods.
**Response to W2: Tendency of performance degradation**
We are very grateful for you pointing out this phenomenon.
Tab.1 in the main paper aims to demonstrate that even with larger budgets, other perturbation methods (Gaussian noise and adv. image features) still do not effectively undermine the reconstruction results. To understand the tendency of perturbation, we provide more experimental results under a wider range of budget $\epsilon$.
| $\epsilon$ | PSNR$\downarrow$ | SSIM$\downarrow$ | LPIPS$\uparrow$ |
| ---------- | ----- | ------ | ----- |
| 0.5 | 24.00 | 0.9147 | 0.065 |
| 0.8 | 20.14 | 0.8711 | 0.102 |
| 1.0 | 16.64 | 0.8337 | 0.151 |
| 1.5 | 14.89 | 0.8124 | 0.183 |
| 2.0 | 13.80 | 0.7996 | 0.198 |
| 4.0 | 11.71 | 0.7918 | 0.214 |
| 8.0 | 11.23 | 0.7935 | 0.216 |
| 16.0 | 11.20 | 0.7914 | 0.218 |
Besides this, we provide visual results of the perturbations in Fig. R2 (b-c) and curves for different $\epsilon$ in Fig. R3 of the rebuttal page.
The experimental results reveal sensitivity to smaller budget values ($\epsilon < 2$), while larger values ($\epsilon > 2$) show insensitivity because the disruptive effects are already severe. We recognize, and thank you for, the suggestion that understanding the tendency of performance degradation is crucial; these results will be added to the next version of our paper.
**Response to Q1: Slope of performance deviation**
We provide the slope of performance deviation in Fig. R2 of the rebuttal page, which also exhibits a negative performance slope.
**Response to Q2: Results under stronger perturbation**
We provide visual results when random noise ($\epsilon = 16, 32$) is applied in Fig. R2 (c) of the rebuttal page. Even with this larger budget $\epsilon$, the geometry pattern of the reconstructed 3D model shows no obvious change. This indicates that the process of obtaining geometry features through TGS is robust to noise. We will incorporate these results into our paper.
Our method shows unobvious changes in disturbance results after $\epsilon$ > 4 (as discussed in W1). We present the performance slope in Fig. R3 of the rebuttal page.
**Response to Q3: Ablation study of PGD**
We demonstrate the convergence of loss under different values of epsilon in PGD (Fig. R3). We also provide the results of PGD attacks from different perspectives (Fig. R5 and Tab. R1) and multiple characters as watermarks (Fig. R5) on the rebuttal page. We look forward to discussing with you in the upcoming discussion phase to clarify things we may neglect.
[1] LGM: Large Multi-View Gaussian Model for High-Resolution 3D Content Creation. In ECCV 2024.
---
Rebuttal 2:
Title: Response to the author's rebuttal
Comment: I am satisfied with the authors' response and appreciate their effort to address my concerns regarding the effectiveness of noise perturbation depending on the privacy budget.
Additionally, it is impressive that performance decreases as perturbation increases.
While the authors note that noise applied to the Tri-Plane does not produce the expected results, the reason for this phenomenon remains unexplained. Although they have indicated this as future work, understanding the phenomenon seems crucial in 3D reconstruction, since the learned Tri-Plane appears to also contain geometry information. Given these issues, I maintain my original score.
---
Rebuttal Comment 2.1:
Title: Thanks for your feedback
Comment: Dear reviewer kd52,
We are very grateful for your recognition. We will integrate the results of the privacy budget into the final version of our paper based on your valuable feedback.
3DGS-based single-view-to-3D methods are recently proposed mechanisms whose properties are still under exploration. One possible explanation for this phenomenon could be that the Tri-Plane features are high-dimensional and implicit, while the geometric features (point cloud) are lower-dimensional and explicit. Explicit point clouds directly represent the attributes of the Gaussians (their positions). This discrepancy may make it easier to obtain appropriate perturbations when dealing with point clouds. We appreciate your insights, and we will incorporate all your valuable suggestions into our final paper.
Best regards,
Authors of #1185 | null | null | null | null | null | null |
Gradual Domain Adaptation via Manifold-Constrained Distributionally Robust Optimization | Accept (poster) | Summary: This paper introduces a novel approach to gradual domain adaptation using distributionally robust optimization (DRO). The core idea is to adapt models across successive datasets by controlling the Wasserstein distance between distributions and ensuring they lie on a favorable manifold. The authors apply the method theoretically to two examples and provide theoretical guarantees for generalization error. Furthermore, the authors also validate the theoretical findings through a series of experiments.
Strengths: + The theoretical contributions are substantial, providing rigorous guarantees on model adaptation and generalization errors across domains. The algorithm provides a bounded error regardless of $T$. For appropriately constrained distributions, the error can be shown to be linear or even entirely eliminated.
+ This paper extends the theoretical results to a more general class of distributions (referred to as "expandable" distributions) with learnable classifiers.
+ Furthermore, the authors demonstrate the polynomial-time convergence of the algorithm.
+ The paper is well-written and structured, effectively communicating complex theoretical concepts in a clear and organized manner. It employs rigorous mathematical formalism while maintaining readability, making it accessible to readers.
Weaknesses: 1. It seems that the proof of the theoretical results requires overly stringent conditions, e.g., the assumption of expandable distributions, the use of smooth mappings and the requirement of distribution characterized by favorable properties. And it might not generalize well to real-world data distributions encountered in practice.
2. Both Gaussian mixture model data distributions and "expandable" distributions are relatively toy data distributions, which are somewhat different from the data distribution of real tasks.
3. The paper primarily focuses on the error within the domain $P_T$. However, the error of $\theta^*$ in the previous domains $P_1, \cdots, P_{T-1}$ remains uncertain.
4. How to effectively compute the following WDRO problem **under the manifold constraint $\mathcal{G}$**?
$$\Delta_i^*, \theta_i^* \longleftarrow \Big\{\min_{\theta \in \Theta}, \underset{\theta \in \Theta}{\operatorname{argmin}}\Big\} \sup_{P \in \mathcal{B}_{\varepsilon_i}\left(\widehat{P}_i \mid \mathcal{G}\right)} \mathbb{E}_P\left[\ell\left(y, h_\theta(\boldsymbol{X})\right)\right].$$
5. This is related to the theoretical analysis of the self-training method in a semi-supervised scenario, where pseudo-labels are assigned to unlabeled data for training [1]. It seems to extend the transformation from $P_{labeled} \rightarrow P_{unlabel}$ in that scenario to $P_{labeled}^1 \rightarrow P_{unlabel}^2\rightarrow P_{unlabel}^3\rightarrow P_{unlabel}^4 ...\rightarrow P_{unlabel}^T$, and uses WDRO to capture the changes from domain $P^1$ to $P^T$ (in Definitions 4.1-4.3). The authors should discuss the assumptions in more detail.
6. When estimating the WDRO radius $\epsilon_i \longleftarrow \lambda \Delta_{i-1}^*+\eta$, it appears that $\Delta_i^*$ is meaningful only if the loss $\ell$ is the zero-one loss, since the definition of $\mathcal{W}_{p, \lambda}^q(P, Q)$ (Eq. (2) in the paper) involves the zero-one loss $\mathbb{1}\left\{y \neq y^{\prime}\right\}$.
[1] Colin Wei, Kendrick Shen, Yining Chen, and Tengyu Ma. Theoretical analysis of self-training with deep networks on unlabeled data. In *International Conference on Learning Representations*, 2020.
Technical Quality: 3
Clarity: 3
Questions for Authors: + My main concern is that theoretical results rely on relatively toy data distributions and stringent properties, which could lead to limited practical applicability.
+ What are the challenges in extending theoretical results to multi-class data distributions?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: see weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their feedback. The reviewer's main concern is the applicability of our method in real-world tasks which we have addressed in the camera-ready version. Below, we provide detailed responses to each of the reviewer's comments and concerns:
----
**Weaknesses**:
- **The proof of the theoretical results require stringent conditions, such as the assumption of expandability, the use of smooth mappings, and the requirement for distributions characterized by favorable properties. This may limit the generalizability of the results to real-world data distributions.**: These assumptions and toy examples were introduced to facilitate the theoretical analysis of our algorithm. However, the algorithm is robust and can be applied to real-world data without issues. Please refer to the global rebuttal and the attached PDF for more details, including results on real-world datasets commonly used in gradual domain adaptation literature, where our method outperforms competitors.
- **Both GMM and "expandable" distributions are relatively toy examples, which are different from the real data distributions.**: As mentioned in our previous response, we have tested our algorithm on real-world datasets, and the results are available in the global response. These results will be included in the camera-ready version. Furthermore, our notion of expandability is closely related to the one used in previous work [WSCM20], where it has already been validated on datasets such as CIFAR-100.
- **The paper focuses on the error within the domain $P_T$. However, the error of $\theta_T$ in the previous domains $P_1, ...,P_{{T-1}}$ remains uncertain.**: In our approach, we identify a $\theta^{\*}_i$ for each domain $P_i$ ($i \geq 1$) and provide guarantees for the error associated with $\theta^{\*}_i$ within its respective domain. $\theta^{\*}_T$ is specifically optimized for $P_T$, as each prior distribution has its own optimal solution.
- **How to compute the WDRO problem under the manifold constraint of $\mathcal{G}$?**:
We considered two scenarios: In the first, $\mathcal{G}$ is the family of Gaussian distributions, which is parameterizable, meaning we only need to tune the parameters while remaining on the manifold. This simplifies the problem to finding $P = P_\mu$, where $\Vert \mu - \mu_i \Vert_2 \leq \varepsilon_i$. In the second scenario, we assume that distributions can be generated from one another by applying an unknown but sufficiently smooth function. For example, $P_{i+1}$ models the distribution of $f(X)$, where $X \sim P_i$ and $f$ belongs to a general but smooth function family $\mathcal{F}$ (this condition is not overly restrictive in practice). In this case, we replace $P \in \mathcal{B}_{\varepsilon_i}(\widehat{P}_i | \mathcal{G})$
with $P = f_{\#} P_i$ for $f \in \mathcal{F}$. Practically, we consider a parametric family of functions for $\mathcal{F}$ and maximize the loss within this class using gradient ascent, while penalizing the function to ensure it does not deviate significantly from real samples.
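As an illustration of this second scenario, the sketch below restricts the function family to pure translations $f_\delta(x) = x + \delta$ and finds the worst-case shift for a fixed linear classifier by penalized gradient ascent. The data, classifier, learning rate, and penalty weight are all hypothetical choices for illustration, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class sample standing in for the empirical distribution P_i:
# labels in {-1, +1}, with slightly unbalanced classes.
X = np.vstack([rng.normal(-1.0, 0.5, (60, 2)), rng.normal(1.0, 0.5, (40, 2))])
y = np.concatenate([-np.ones(60), np.ones(40)])

theta = np.array([1.0, 1.0])  # fixed linear classifier h_theta(x) = sign(theta @ x)

def avg_loss(delta):
    """Mean logistic loss on the translated sample f_delta(X) = X + delta."""
    margins = y * ((X + delta) @ theta)
    return float(np.mean(np.log1p(np.exp(-margins))))

# Inner maximization: gradient ascent on delta, with an L2 penalty that keeps
# the perturbed distribution close to the empirical one (playing the role of
# the Wasserstein-ball / manifold constraint in the rebuttal's description).
delta, lr, penalty = np.zeros(2), 0.05, 1.0
for _ in range(300):
    margins = y * ((X + delta) @ theta)
    sigma = 1.0 / (1.0 + np.exp(margins))      # equals -d(loss)/d(margin)
    grad_loss = -np.mean(y * sigma) * theta    # d(avg_loss)/d(delta)
    delta += lr * (grad_loss - 2.0 * penalty * delta)

print(avg_loss(np.zeros(2)), avg_loss(delta))
```

In the paper's setting, the family $\mathcal{F}$ would be richer (e.g., a small parametric network) and the outer minimization over $\theta$ would wrap this inner loop, alternating the two as in the WDRO formulation.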
- **Regarding the analysis of Self-Training method in a semi-supervised scenario, where pseudo-labels are assigned to unlabeled data [WSCM20]. It seems to extend the transformation from $P_{l} \to P_{ul}$, in this scenario to $P^1_{l} \to P^2_{ul} \to \ldots \to P^T_{ul} $and uses WDRO to capture the changes in Domain $P^1$ to $P^T$ (in Definition 4.1-4.3). The authors should discuss more about the Assumptions.**: Our work differs from [WSCM20] in several key aspects. In [WSCM20], the authors assume the availability of two potentially distant distributions over the feature space, with an unknown but shared labeling rule. They also assume access to a pseudo-labeler from the first distribution, which can be used to assign labels to unlabeled data from the second distribution. Additionally, it is assumed that both distributions and the unknown ground truth labeler satisfy certain robustness properties. In contrast, our work focuses on gradual domain adaptation, where consecutive distributions are assumed to be close. However, we make no assumptions about the labeling rule or the availability of the true distributions.
- **When estimating the WDRO radius, it appears that $ \Delta^{*}$ is meaningful only if the loss $ \ell $ is the zero-one loss. This is because the definition of distance (as shown in Eq.(2) in the paper) is related to the zero-one loss $\mathbb{1}\\{y \neq y^{\prime}\\}$.**: The reviewer is correct that this result is directly applicable to the zero-one loss. However, it can be generalized to cases where $\ell$, or a scaled version $\alpha \ell$, dominates the (0,1)-loss. In this scenario, the estimation would be modified to $\epsilon_i \longleftarrow \alpha \lambda \Delta^*_{i-1} + \eta$. We will discuss this generalization in the camera-ready version.
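The adaptive radius update $\epsilon_i \longleftarrow \lambda \Delta^*_{i-1} + \eta$ discussed above can be sketched as a loop over domains. The interface below is hypothetical (the `solve_wdro` oracle, parameter values, and return types are illustrative stand-ins, not the paper's implementation):

```python
def gradual_wdro(domains, solve_wdro, lam=0.5, eta=0.1):
    """Sketch of the adaptive-radius gradual adaptation loop.

    `domains` yields unlabeled samples X_1..X_T; `solve_wdro(X, eps)` stands in
    for the manifold-constrained WDRO step on pseudo-labeled data and returns
    (theta, worst_case_loss).
    """
    eps, history = eta, []
    for X in domains:
        theta, delta = solve_wdro(X, eps)  # DRO step for the current domain
        eps = lam * delta + eta            # radius update for the next domain
        history.append((theta, delta, eps))
    return history

# Dummy oracle: a constant worst-case loss of 0.2 makes the radius settle at
# lam * 0.2 + eta = 0.2 for every domain.
history = gradual_wdro([None, None, None], lambda X, eps: ("theta", 0.2))
print([round(h[2], 3) for h in history])  # → [0.2, 0.2, 0.2]
```

With a zero-one loss bounded by 1, the update keeps every radius in $[\eta, \lambda + \eta]$, which is what makes the propagation analysis independent of $T$ possible.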
----
**Questions**:
- **My main concern is that theoretical results rely on relatively toy data distributions and stringent properties, which could lead to limited practical applicability.** To address this concern, we implemented our method on a number of real-world datasets and achieved superior results compared to existing methods. Please refer to the global rebuttal for detailed results.
- **What are the challenges in extending theoretical results to multi-class data distributions?**: As we mentioned in our response to Reviewer FbyN, our method, including Theorem 2.3 and Corollary 2.4, can be extended to multi-class settings. However, extending the examples involving Gaussian distributions and manifold assumptions would require additional statistical analysis, which we plan to address in future work. This will be discussed further in the camera-ready version.
----
We hope these revisions address the majority of the reviewer’s concerns and lead to a favorable reassessment. Once again, thank you for your timely and thorough review of our work.
---
Rebuttal Comment 1.1:
Title: Appreciate
Comment: I appreciate the authors' responses, and I have carefully read the comments from the other reviewers. I agree with most of the reviewers that the paper presents a nice piece of work, so I would like to raise my grade. | Summary: This paper studies the theoretical aspect of gradual domain adaptation (GDA), where the knowledge of labeled source domains is supposed to be transferred to a sequence of target domains. The main results show that the gradual adaptation process can be well characterized by the distributionally robust optimization (DRO) framework, where the domain gaps between the source domain and multiple target domains are gradually captured by the robustness of the model within a pre-set region, i.e., a ball w.r.t. the Wasserstein metric over the probability space. Finally, the distribution shift is guaranteed to be mitigated with the DRO algorithm.
Strengths: + The motivation of employing DRO as a theoretical framework to address gradual domain adaptation is clear and reasonable.
+ The extensive theoretical results seem to be solid.
Weaknesses: - The clarity should be improved: the advantages or improvements w.r.t. related theory for GDA are not discussed, which leads to unclear contributions.
- The presentation should be improved, e.g., the DRO algorithm is provided without justification while the main results closely depend on this algorithm.
- There are many typos and the readability is fair.
Technical Quality: 3
Clarity: 3
Questions for Authors: Q1. As discussed in the previous work section, there are already several theoretical works for GDA, e.g., [WLZ22] and [HWLZ23]. However, the differences between the derived results and these works are not discussed in either *previous work section* or *main result section*. It should be clarified that what new insights are provided in the derived results.
Q2. The basic framework, i.e., DRO, is presented in Algorithm 1 directly, while there are no insights provided for it. Though there are plenty of results derived based on DRO, it is hard to understand the working mechanism of the DRO algorithm.
Q3. The bound in Theorem 2.3 shows that the target error can be dominated by the source risk with the factor $g_\lambda (\cdot)^{\circ T}$. Though Corollary 2.4 provides an analytic bound for $g_\lambda$, it would be more interesting to show the monotonicity of $g$ w.r.t. the composition operator. Furthermore, can the factor $g_\lambda (\cdot)^{\circ T}$ be monotonically reduced by increasing $T$? I.e., can the error factor be reduced by the gradual adaptation process?
Q4. In the literature [WLZ22], the main result (Theorem 1) shows that the target error is bounded by the source risk plus an accumulated error w.r.t. the domain number $T$, which is induced by the gradual adaptation process. Note that the main result in the submission shows the accumulated error as a factor of the source risk, which implies the result could be loose when the factor is large. Some comparison of the tightness of these bounds would be highly appreciated.
Minor: 1) Line 126, reference of Theorem; 2) Line 140, notations $\theta$ and $\Theta$ are used without definitions; 3) Line 186, previous studies [].
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The theoretical analysis for gradual domain adaptation seems to have no potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their feedback. The reviewer's main concern was the lack of sufficient discussion on certain aspects of the paper, which we have addressed in the revised/camera-ready version. Below, we provide detailed responses to each of the reviewer's comments and concerns:
**Weaknesses**:
- **Clarity needs improvement, especially in discussing the advantages or improvements related to GDA theory, leading to unclear contributions.**: We have enhanced the clarity and presentation of our work in the revised version. To provide a brief summary: as noted by other reviewers, our method has a significant advantage over prior GDA methods, in which error propagation grows exponentially with the number of intermediate domains $T$. Recent works [WLZ22, HWLZ23] have achieved linear error propagation, although the rate of growth remains constant even when the initial error is small. In contrast, our method can control error propagation to be independent of $T$, given certain assumptions on the distributions.
- **The presentation should be improved, e.g., the DRO algorithm is provided without justification, despite the main results relying on this algorithm.**: We would like to clarify that DRO is a well-established technique and is not our original contribution. It has been extensively studied (see [1]) and previously applied in domain adaptation [2]. In our work, we propose adaptive robustness radii, and then utilize DRO by constraining the Wasserstein ball to a specific manifold of distributions, and analyze its performance in the context of gradual domain adaptation, both theoretically and experimentally (please refer to the global rebuttal for details on the new experiments).
[1] Sinha et al., (2018). Certifying Some Distributional Robustness with Principled Adversarial Training. In International Conference on Learning Representations.
[2] Lee et al. (2018). Minimax statistical learning with wasserstein distances. Advances in Neural Information Processing Systems, 31.
- **Typos...**: We have carefully revised the paper to address this issue and assure the reviewer that all typos and rushed phrasing have been corrected.
-------------------
**Questions**:
- **There are several theoretical works on GDA, e.g., [WLZ22] and [HWLZ23]. However, the differences between these works and the results in this paper are not discussed. It should be clarified what new insights are provided by the derived results.**: This question is addressed in our response to the first weakness. We will also highlight our contributions and the advantages of our method over previous methods in a dedicated subsection in the camera-ready version.
- **The basic framework, i.e., DRO, is presented in Algorithm 1 directly, with no insights provided for it. Though there are plenty of results derived based on DRO, it is hard to understand the working mechanism of the DRO algorithm.**: Theorem 1 (main theorem) demonstrates that using our DRO-based Algorithm 1, the error propagation can be uniquely and solely determined by a newly introduced complexity measure, $g_{\lambda}(\cdot)$. The rest of the paper is dedicated to obtaining $g(\cdot)$ for various settings and showing how error propagation can be entirely mitigated.
- **Theorem 2.3 shows that the target error can be dominated by a factor of $g_{\lambda}(\cdot)^{\circ T}$. Corollary 2.4 provides an analytic bound for $g_{\lambda}$, but it would be more interesting to show the monotonicity of $g$ with respect to the composition operator. Furthermore, can the factor $g_{\lambda}(\cdot)^{\circ T}$ be monotonically reduced by increasing $T$?**: From Equation (4), we know that $g_{\lambda}$ is an increasing function. It is important to note that $g_{\lambda}$ is not simply composed with itself; as stated in line 166 of Theorem 2.3, it is composed in a specific manner. For example, after one composition, we have $g_{\lambda}(2\lambda g_{\lambda}(\eta) + \eta)$, which is greater than $g_{\lambda}(\eta)$ because $2\lambda g_{\lambda}(\eta) + \eta > \eta$ and $g_{\lambda}$ is increasing. Repeating this process shows that the composition $[g_{\lambda}(2\lambda(\cdot) + \eta)]^{\circ T}$ always increases (or at least remains constant) with respect to $T$. Also, it is not intuitively plausible for the error to reduce as it propagates.
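To spell out this induction: writing $\eta_0 = \eta$ and $\eta_{t+1} = 2\lambda\, g_{\lambda}(\eta_t) + \eta$ for the sequence of effective radii, the monotonicity of the composed factor follows from

```latex
% Base case: \lambda \ge 0 and g_\lambda \ge 0 give
\eta_1 = 2\lambda\, g_{\lambda}(\eta_0) + \eta \;\geq\; \eta_0 .
% Inductive step: if \eta_t \geq \eta_{t-1}, then since g_\lambda is nondecreasing,
\eta_{t+1} = 2\lambda\, g_{\lambda}(\eta_t) + \eta
\;\geq\; 2\lambda\, g_{\lambda}(\eta_{t-1}) + \eta \;=\; \eta_t ,
% hence g_\lambda(\eta_{t+1}) \geq g_\lambda(\eta_t): the T-fold composition
% is nondecreasing in T.
```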
- **In [WLZ22], Theorem 1 shows that the target error is bounded by the source risk and accumulated error w.r.t. domain number $T$, which is induced by gradual adaptation process. Note that the main result in submission show the accumulated error is a factor of source risk, which implies this result could be loose when the factor is large. Some comparison between the tightness of these bounds are highly appreciated.**: The reviewer is correct in noting this, and analyzing the tightness of these bounds and providing a lower bound for the error propagation would indeed be a valuable contribution. However, this analysis is beyond the scope of the current paper. We will address this in the future work section of the camera-ready version. Additionally, it is worth noting that in the opposite scenario, where the initial error is low, our bounds—even under linear error propagation—are still better than those in [WLZ22], as their rate of increase is independent of the initial error.
- **Minor issues**: Thank you for pointing out these minor issues. All have been corrected in the revised version.
------------------
We hope these revisions address the majority of the reviewer’s concerns and lead to a favorable reassessment. Once again, thank you for your timely and thorough review of our work.
---
Rebuttal Comment 1.1:
Title: Concerns are addressed.
Comment: I thank the authors for their detailed responses. In the previous round review, my main concerns are the (theoretical) comparison with existing GDA theory and the tightness of bounds. In rebuttal, I found that 1) the advantages and weaknesses of derived bound over existing works are properly discussed; 2) the implications from derived results, e.g., Thm. 2.3 and Cor. 2.4, are further provided.
Therefore, considering the modifications and improvements above, I'm satisfied with the responses and willing to improve the score.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer C19w
Comment: Once again, we sincerely thank the reviewer for their time and effort in reviewing our paper. We are glad that the reviewer's concerns have been addressed and appreciate their decision to raise their score in favor of our submission. | Summary: This paper proposes an optimization paradigm for gradual domain adaptation by iteratively performing manifold-constrained Wasserstein DRO and pseudo-labeling on the sequence of domains. The error propagation is theoretically investigated by a compatibility measure $g(\eta)$ between the manifold of distributions and the class of classifiers. It is shown that with sub-linear $g(\eta)$, the error propagation will be bounded by Wasserstein distance between adjacent domains, which is independent of the steps of adaptation. The theory is applied to analyze both gaussian generative models and a more general class of distributions characterized by an expansion property.
Strengths: The paper makes a good contribution in obtaining the first error bound for gradual domain adaptation that does not scale with $T$, blocking error propagation. The manifold assumption is natural and prevalent, and through the measure of compatibility the author manages to associate the error propagation rate with the complexity of the manifold structure. An exact threshold for error propagation is also obtained, in the form of a linear compatibility function, and the classic setting of Gaussian mixture models for binary classification is solved without error propagation.
Weaknesses: 1. At the end of the proof of Theorem 3.1, the author says that one can only upper bound $\min \\{ \mathbb E[\ell(x,y)], 1-\mathbb E[\ell(x,y)] \\}$, which is a much weaker result. Should the conclusion of Theorem 3.1 be revised accordingly? Also, can the minimum form be improved?
2. Theorem 3.1 is not tight in the current form for small distances between adjacent domains. Consider $\eta=0$ and $\lambda = 0$: the error should always be that of the Bayes optimal classifier across domains, which is smaller than the current constant upper bound.
3. Theorem 3.2 is not tight for $\eta > 1$, in which case it is actually saying the upper bound of error rate will shrink.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Typos: L126, reference missing. L186, citation missing. L228, $P_i$.
2. Is definition 4.1 correctly presented? The left hand side of the inequality is independent of the Borel set A, while the right hand side is dependent on A.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: The author has not explicitly addressed the limitations of the work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their positive feedback. We have addressed the remaining concerns as follows:
**Weaknesses**:
- **At the end of the proof of Theorem 3.1, the author says that one can only upper bound $\min\\{\mathbb{E}[\ell(x, y)], 1- \mathbb{E}[\ell(x, y)]\\}$, which is a much weaker result. Should the conclusion of Theorem 3.1 be revised accordingly? Also can the minimum form be improved?**: This limitation arises specifically in Theorem 3.1, where we consider the $(0-1)$-loss function. In this case, $1 - \mathbb{E}[\ell(x, y)]$ represents the expected loss of our classifier on $(x, -y)$. Essentially, this indicates that our classifier can separate the two classes very well, though it is possible that the labels are flipped. Such guarantees are not uncommon in the literature, as discussed in [WSCM20]. Additionally, it should be noted that a few labeled data points from $P_T$ can always correct this label flipping.
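The final remark, that a few labeled points from $P_T$ correct any label flipping, can be made concrete with a minimal sketch (the data and classifier here are hypothetical, not from the paper): check whether the classifier's empirical error on the labeled points exceeds $1/2$ and, if so, flip its sign:

```python
import numpy as np

def correct_flip(predict, X_few, y_few):
    """Return a possibly sign-flipped classifier using a few labeled points.

    If the classifier disagrees with the labels on more than half of the
    points, its labels are flipped, so negating its output recovers accuracy.
    """
    err = np.mean(predict(X_few) != y_few)
    return (lambda X: -predict(X)) if err > 0.5 else predict

# Hypothetical example: a classifier that separates perfectly but with
# flipped signs.
flipped = lambda X: -np.sign(X @ np.array([1.0, 1.0]))
X_few = np.array([[1.0, 1.0], [-1.0, -1.0], [2.0, 0.5]])
y_few = np.array([1.0, -1.0, 1.0])

fixed = correct_flip(flipped, X_few, y_few)
print(np.mean(fixed(X_few) == y_few))  # prints 1.0
```

With the $(0-1)$-loss guarantee of Theorem 3.1, the true error is either small or close to 1, so even a handful of labeled samples suffices to pick the right sign with high probability.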
- **Theorem 3.1 is not tight in the current form for small distances between adjacent domains. Consider $\eta = 0, \lambda = 0$, the error should always be that of Bayes optimal across domains, which is smaller than the current constant upper bound.**: Theorem 3.1 was originally presented for cases where $\eta > 0$ and $\lambda \neq 0$. However, as the reviewer suggested, when $\eta = 0$, the error indeed corresponds to the exact Bayes error. To elaborate: as seen from the first line of inequalities in (37), when $\eta=0$, we have:
$$
g_{\lambda}^0(\eta)
\leq \inf_{\theta\in\Theta} \sup_{P_{\mu}: \Vert \mu - \mu_0\Vert \leq 2\eta} \mathbb{E}_{P_{\mu}} \left[\ell\left(y,h_{\theta}(\boldsymbol{X})\right)\right]
= \inf_{\theta\in\Theta} \mathbb{E}_{P_0} \left[\ell\left( y , h_{\theta} (\boldsymbol{X})\right)\right]
= E_{\text{Bayes}},
$$
where $E_{\text{Bayes}}$ denotes the Bayes error. (We apologize for the cumbersome formulation, but this was necessary to work within the constraints of OpenReview). We will ensure that these edge cases are discussed in more detail in the camera-ready version.
- **Theorem 3.2 is not tight for $\eta > 1$, in which case it is actually saying the upper bound of error rate will shrink.**: The reviewer is correct. As seen in the proof of Theorem 3.2, our analysis primarily focuses on cases where $\eta$ is not too large, which aligns with the typical scope of gradual domain adaptation research. When $\lambda$ is chosen moderately, $\eta \geq 1$ can significantly diminish the information of labels. For larger values of $\eta$, which were not the primary focus of this paper, alternative bounds and methodologies may be required to obtain more appropriate results.
---------------------
**Questions**:
- **Typos...**: We have carefully revised the paper to address this issue and assure the reviewer that all typos and rushed phrasing have been corrected.
- **Is definition 4.1 correctly presented? The left hand side of the inequality is independent of the Borel set A, while the right hand side is dependent on A.**: Thank you for highlighting this issue. Our intention was to show that for all Borel sets $A \in \mathcal{A}$, the two conditions hold. We will correct this by removing the $\inf$ and $\sup$, and explicitly stating that the conditions apply for $\forall A$.
------------------
**Limitations: The author has not explicitly addressed the limitations of the work.**: We appreciate the reviewer’s input on this point. We will include a discussion of the limitations of our work, taking the reviewer's comments into careful consideration.
-------------------
We hope these revisions address the majority of the reviewer’s concerns. Once again, thank you for your timely and thorough review of our work.
---
Rebuttal 2:
Comment: I acknowledge and thank the author for their response. Overall, I believe this paper significantly contributes to the existing theoretical analysis of gradual domain adaptation, being the first, as far as I know, to obtain a non-expansive error bound as the number of domains T increases. Despite some limitations in the practical validation of the algorithm, I maintain my support for its acceptance.
I would appreciate it if the author could improve the clarity of the paper by revising the statement of Theorem 3.1 and Definition 4.1, discussing the limitations explicitly, as well as thoroughly checking for any other typos.
---
Rebuttal Comment 2.1:
Title: Response to Reviewer BZqU
Comment: Thank you for your time and favorable assessment of our work. We greatly appreciate your comments and feedback. Based on your and the other reviewers' suggestions, we conducted additional experiments to demonstrate the implementability and applicability of our method in real-world tasks, where it outperforms a number of rival methods. We also corrected all the typos, including those mentioned by the reviewers. | Summary: The paper presents a new approach to gradual domain adaptation using distributionally robust optimization (DRO). This approach provides theoretical guarantees for model adaptation across successive datasets by bounding the Wasserstein distance between consecutive distributions and requiring that these distributions lie on a manifold. The theoretical analysis demonstrates that the proposed approach controls the error propagation and improves generalization across domains. Additionally, the analysis of two specific settings shows that the proposed approach eliminates the error propagation completely.
Strengths: - The paper is well-written.
- The considered problem is timely and important.
- To the best of my knowledge, the proposed approach and the presented theoretical analysis are new.
- The presented theoretical results, especially those on error propagation, are significant and have potential practical implications.
Weaknesses: - Although this is a theoretical paper, the experimental study is very limited and hidden in Appendix D.
- The relationship to existing work, both to the classical theory of domain adaptation (e.g., [*]) and to work on gradual domain adaptation, could be clarified. Specifically, the only work mentioned on gradual DA is [KML20].
- The paper could benefit from thorough proofreading. For example, there are broken links and missing references (e.g., lines 126 and 186).
[*] Ben-David et al. A theory of learning from different domains. Machine Learning, 79(1):151–175, 2010.
Technical Quality: 4
Clarity: 4
Questions for Authors: - Could the authors discuss possible extensions to non-binary settings and settings where the domain shift is applied to the joint distribution (Z) rather than the marginal (X)?
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: - The limitations appear only in the form of the assumptions required for the theoretical results to hold. Discussing the implications of these assumptions would enhance the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their positive feedback. We have addressed the remaining concerns as follows:
**Weaknesses**:
- **Although this is a theoretical paper, the experimental study is very limited**: We have expanded the experimental section significantly (please refer to the global rebuttal and the attached PDF). We tested our method on a real-world dataset used in the [KML20] paper and achieved superior results. These additional experiments and comparisons will be included in the camera-ready version.
- **The relationship to the classical theory of domain adaptation (e.g., Ben-David et al.) and to work on gradual domain adaptation could be clarified. Specifically, the only work mentioned on gradual DA is [KML20].**: In addition to [KML20], we also discussed other relevant works, such as [WLZ22, HWLZ23, WSCM20], among others. To better position our work within the literature, we have expanded our discussion of these methods as well as the classical work of Ben-David et al. (as mentioned by the reviewer). This will be detailed further in the camera-ready version.
- **The paper could benefit from thorough proofreading**: We agree that some phrases were rushed. We have carefully revised the paper to address this issue and assure the reviewer that all typos and rushed phrasing have been corrected.
------------------
**Questions**:
- **Could the authors discuss possible extensions to non-binary settings and settings where the domain shift is applied to the joint distribution (Z) rather than the marginal (X)?**: In our work, as shown in Equation (2), the domain shift is indeed applied to the joint distribution $\mathcal{Z}=\mathcal{X}\times\mathcal{Y}$, not just the marginal distributions, thus addressing the second concern raised by the reviewer. Regarding the extension to non-binary settings, generalizing our method (including Theorem 2.3 and Corollary 2.4) is straightforward. However, examples involving Gaussian distributions and manifold assumptions require further statistical analysis, which we consider a valuable direction for future work. Thank you for this suggestion; we will ensure that this topic is discussed in more detail in the camera-ready version.
- **The limitations appear only in the form of the assumptions required for the theoretical results to hold. Discussing the implications of these assumptions would enhance the paper.**: The assumptions in our paper are, in essence, quite similar to those in [WSCM20] and [KML20]. These references have already discussed and experimentally validated the relevance of such assumptions in real-world datasets. We will elaborate on this issue further in the camera-ready version.
----------------
We hope these revisions address the majority of the reviewer’s concerns. Once again, thank you for your timely and thorough review of our work.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' detailed responses and the additional experiments they conducted. I find them satisfactory and will therefore maintain my score.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer FbyN
Comment: Once again, we sincerely thank the reviewer for their time and effort in reviewing our work, and for their recommendation for acceptance. | Rebuttal 1:
Rebuttal: We would like to express our gratitude to all the reviewers for their thoughtful comments and feedback. As some questions and concerns were raised by multiple reviewers, we have provided a global response.
Some reviewers expressed concerns regarding the implementability of our method and its performance on real-world data, particularly under the manifold assumption. To address this, we have included a schematic in Figure 1 of the attached PDF, illustrating the workings of our method. As depicted, at the $i$th step, we perturb the data samples $(X_j,y_j),~j \in [n_i]$ from $P_i$ using a parametric function class, denoted as $f_P$, and penalize the extent of perturbation using the following term:
$$\frac{\gamma}{n_i}\sum_{j=1}^{n_i}{\Vert f_P(X_j) - X_j\Vert_2}.$$
These perturbed samples are then classified using a classifier. Let $L_C(f_P;X_1,\ldots,X_{n_i})$ represent the cross-entropy loss of the classifier on the perturbed samples. Our objective is to minimize $L_C(f_P;X_1,\ldots,X_{n_i}) - \frac{\gamma}{n_i} \sum_{j=1}^{n_i}\Vert f_P(X_j) - X_j\Vert_2$ with respect to the parameters of the classifier $f_C$, while simultaneously maximizing it with respect to the parameters of $f_P$.
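The penalized objective described above can be illustrated with a minimal numpy sketch (our own illustration, not code from the paper; the function names and toy values are assumptions). It evaluates the perturbation penalty and the resulting min-max objective for a fixed classifier loss:

```python
import numpy as np

def perturbation_penalty(X, X_pert, gamma):
    # (gamma / n_i) * sum_j ||f_P(X_j) - X_j||_2, with a Euclidean norm per sample
    n = X.shape[0]
    return (gamma / n) * np.sum(np.linalg.norm(X_pert - X, axis=1))

def dro_objective(ce_loss, X, X_pert, gamma):
    # L_C(f_P; X_1..X_n) - penalty: minimized over the classifier f_C's
    # parameters, maximized over the perturbation network f_P's parameters
    return ce_loss - perturbation_penalty(X, X_pert, gamma)

X = np.zeros((4, 2))
X_pert = np.ones((4, 2))  # every sample moved by a distance of sqrt(2)
obj = dro_objective(1.0, X, X_pert, gamma=0.5)  # 1 - 0.5*sqrt(2), about 0.293
```

In an actual training loop, `X_pert` would be produced by the perturbation network `f_P` and the two parameter sets updated in alternation.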
Our experimental details are as follows:
- We implemented this method on the 'Rotating MNIST' dataset, similar to [KML20]. In particular, we sampled 6 batches, each with a size of 4200, without replacement from the MNIST dataset, and labeled these batches as $D_0, D_1, \ldots, D_4$, which represent the datasets obtained from $P_0, P_1, \ldots, P_4$. The images in dataset $D_i$ were then rotated by $i \times 15$ degrees, with $D_0$ serving as the source dataset and $D_4$ as the target dataset. We provided the source dataset with labels and left $D_1, D_2, D_3$, and $D_4$ unlabeled for our algorithm. We then tested the accuracy of $\theta^*_0, \ldots, \theta^*_3$—the outputs of our algorithm at each step—on $D_1, D_2, D_3$, and $D_4$, respectively.
- We also implemented the GDA method exactly as described in [KML20]. For our method, we employed a 2-layer CNN with a $7\times 7$ kernel in the first layer and a $5\times 5$ kernel in the second layer. We also utilized an affine grid and grid sample function in PyTorch for $f_P$, following the approach introduced in [1]. For the classifier $f_C$, we used a 3-layer CNN with max pooling and a fully connected layer, applying dropout with a rate of 0.5 in the fully connected layer.
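The gradual-domain construction sketched in the bullets above can be summarized as follows (a simplified sketch under our own naming; it only builds the index/angle bookkeeping for disjoint batches and omits the actual image rotation and CNN training):

```python
import numpy as np

def make_gradual_domains(n_images, n_domains=5, batch_size=4200, deg_step=15,
                         seed=0):
    # Sample disjoint batches without replacement; domain i is to be rotated
    # by i * deg_step degrees, with domain 0 serving as the labeled source.
    idx = np.random.default_rng(seed).permutation(n_images)
    return [{"indices": idx[i * batch_size:(i + 1) * batch_size],
             "angle": i * deg_step}
            for i in range(n_domains)]

domains = make_gradual_domains(60000)
# domains[0] is the labeled source (0 degrees); domains[4] is the target (60 degrees)
```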
We compared our method to the GDA method presented in [KML20] and detailed the results in Figure 2 of the PDF. Additionally, we reported the accuracy of $\theta^*_0$ on $D_0$ as an example of in-domain accuracy. Our results show that our method outperforms GDA by a significant margin of 8 percent in the last domain.
[1] Jaderberg, M., Simonyan, K., & Zisserman, A. (2015). Spatial transformer networks. Advances in Neural Information Processing Systems, 28.
Pdf: /pdf/6eff272ae35324cc0fc3e6f1b75a8e7bf1927c60.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
The Value of Reward Lookahead in Reinforcement Learning | Accept (spotlight) | Summary: The paper investigates theoretically the advantage that RL agents get from reward lookahead. In a tabular MDP setup, the reward is postulated as being a random variable $R_h(s, a)$, whose value is, by default, revealed to the agent after taking the action $a$ in state $s$ at a time-step $h$. Authors calculate exactly/prove bounds on the ratio between return obtained by a no-lookahead agent and an agent that has access to information about the particular values (samples) of the reward $L$ steps ahead. Exact calculation is given in the two cases of (1) fixed rewards' expectations, fixed dynamics (2) worst-case expectations, fixed dynamics, and the (tight) bounds are given the case of (3) worst-case expectations and worst-case dynamics. Authors also discuss the connection of their results to the notion of coverability coefficient, and walk the reader through several examples of MDPs including chain-, grid- and tree-shaped environments.
Strengths: The paper presents the main theorems rigorously. It provides extensive and very detailed proofs for all of its statements in the appendix. I also liked the idea of discussing proof sketches in the main part of the paper. The paper discusses the connections to the existing literature in detail, and carefully spells out the differences to previous approaches.
Weaknesses: The presentation of the paper could be improved in my opinion. The paper briefly mentions several motivating examples in the introduction (trading with known prices, ride-sharing, traffic control), but this did not give me a good understanding of what the problem is - I only understood this once I got to the formal statements in section 2.1. It would be good to have a more clearly and comprehensively presented "central example" that would show exactly what is going on.
The paper also suffers from moving important bits to the appendix - such as Figure 3, which is quite important for understanding Part III of the proof sketch for Thm 1. The treatment of the Part II proof is also quite brief (although the main idea of using the minimax theorem for the policy maximisation/reward minimisation is quite clear).
I did not get a clear take-away from the paper: although the results were mathematically meaningful, I did not get a sense of how much I should care about reward lookahead. Looking at those pathological edge cases of reward distributions seems unavoidable from the technical point of view, but could perhaps be mitigated by, e.g., examples constraining the distributions to some reasonable family. Since the contribution itself relies on a quite straightforward proof idea (c.f. Q1 below), the strength of the paper stems mostly from meticulously working through the details of extending this to a broader setup. It would be good to see an example where the theorem helps to resolve some more-than-toy problem.
Technical Quality: 4
Clarity: 2
Questions for Authors: 1. The central idea of using tree-shaped MDPs plus long-shot reward distributions with vanishing probability $\epsilon$ and then moving $\epsilon \to 0$ is quite straightforward. The value of the paper therefore seems to be in extending this idea to cover the case of $L < H$ and various other edge cases. However, the complications in the formulas in Thm 1/3 seem to stem from the problem that there are "too many"/"too few" states with respect to the time horizon (which is mitigated by adding the waiting action). Wouldn't it simplify a lot if you would just index the states and actions by $h$, which would make the MDP always a complete $A$-ary tree?
2. All of the development is done in a finite-horizon case. However, you also mention using the receding horizon idea - would that make it possible to extend your work to get around this limitation?
3. What would be the value/an application of this work in a more-than-toy setup?
Confidence: 3
Soundness: 4
Presentation: 2
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments and apologize for any clarity issue; we will consider adding a central example to improve the clarity of the introduction.
Sadly, due to the page limit, we had to move to the appendix some parts of the proof that we also find essential - we intend to use all the extra space in the camera-ready version both to address the remarks in the different reviews and to move proof sketches/figures to the main text of the paper. As space permits, we will prioritize the parts suggested by the reviewer.
We thank the reviewer for the opportunity to highlight our contributions. Reward lookahead is a different feedback paradigm than the standard one in RL that covers many realistic settings. This includes the trading/traffic examples in the paper, but also other situations - for example, if the rewards depend on the weather forecast (e.g., in electricity grids). In some situations, it naturally exists, while in others, agents must actively pay to gather this information. The goal of our work is to establish a deep understanding of the effect of this information on the value. This has a direct implication when information has a price, but even when it is free - algorithms that incorporate lookahead are naturally much more complex and are oftentimes tailored to each specific application. It is thus important to ask how much we expect to gain from using this information before deciding whether it is worth the direct and indirect costs.
While we agree that the environment that yields worst-case behavior is somewhat intuitive, we emphasize that our results also give a tight characterization as a function of the dynamics of the environment - we believe that these results are not straightforward (as we elaborate in the answer to question 1). In particular, if we are interested in the CR for a specific environment, the ratio $CR(P,r)$ can easily be calculated using standard planning modules while $CR(P)$ forms a zero-sum game between two Markov policies. Our results also draw a surprising link between reward lookahead and coverability/other concentrability coefficients; in particular, to our knowledge, it is the first instance where coverability appears intrinsically and not as an assumption/constant in upper bounds. As a final note, the definition of the CR is invariant to scaling, and since long-shot distributions are bounded, the same ratios $CR^L(P)$ and $CR^L$ would also be obtained when only considering Bernoulli rewards. Thus, to get different ratios, it is not enough to limit ourselves to ‘standard’ distributions, but we might also need additional regularity conditions.
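As a toy numerical illustration of the competitive ratio discussed above (our own example, not taken from the paper): in a horizon-1 problem with two i.i.d. Bernoulli(0.5) reward arms, a no-lookahead agent earns the best mean (0.5), while a one-step-lookahead agent observes both realizations and takes the maximum:

```python
import itertools

p = 0.5
v_no_lookahead = p  # best achievable without seeing the realizations
# One-step lookahead: observe both Bernoulli draws, then take the larger one.
v_lookahead = sum(
    max(r) * (p if r[0] else 1 - p) * (p if r[1] else 1 - p)
    for r in itertools.product([0, 1], repeat=2)
)  # = 0.75, since at least one arm pays off with probability 3/4
cr = v_no_lookahead / v_lookahead  # = 2/3
```

A ratio below 1 quantifies exactly how much the lookahead information is worth in this instance.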
# Questions:
1. We agree that for delayed trees, longshot rewards are an intuitive choice. However, we would like to emphasize that we prove that longshot rewards are the worst-case for *any* dynamics and/or expected rewards, a result that we find unintuitive. This includes dense environments such as contextual bandits and situations with tradeoffs between navigation and reward collection such as in the chain (prophet)/grid examples. We also believe that the derivation of the closed-form expression of $CR(P)$, though relying on well-known tools such as the minimax theorem, was non-trivial to devise.
*Delayed tree and non-stationary environments:* as a remark, we intentionally chose to present a worst-case environment with stationary dynamics – otherwise, one could legitimately claim that the $H$ dependence might be due to the non-stationarity, and a tighter bound of $\approx SA$ could be the ‘right’ CR in stationary environments. Moreover, even in non-stationary environments, we believe that a similar loop mechanism is still necessary. The idea is to create an environment where only one reward could be collected (to minimize the value $V^0$), but the probability of collecting this reward greatly increases when lookahead information is available. For the first part - the states where a reward could be collected must be transient (so that no-lookahead agents would not be able to go back there), while for the second part, we need to allow the lookahead agent to decide when to collect the reward based on its observed information – so the environment requires some waiting mechanism.
2. While we did not study it in this paper, we believe that some of our techniques could be extended to discounted situations, and maybe (under some regularity conditions) to stochastic shortest paths. We leave this for future work. For infinite-horizon average reward, it is not too hard to prove that the CR can approach zero even for environments with a constant number of states (by controlling the effective horizon via loops), but it is still interesting to analyze the CR as a function of the dynamics.
3. As previously mentioned, one could take the closed-form expressions in the theorems and evaluate them on non-toy applications that have lookahead information. In the paper itself, we aimed to give closed-form results and intuition, so smaller examples were more natural. In particular, the examples studied in the paper give valuable insights into different properties of environments and their effect on the value of lookahead information (reward density, navigation elements, etc.)
We also believe that the connection to coverability/concentrability could lead to additional theoretical applications - in any situation where coverability appears, one could reformulate the problem using the ratio between lookahead values as an analysis tool.
---
Rebuttal Comment 1.1:
Title: Response
Comment: I thank the authors for their response.
I don't think I agree with the un-intuitiveness of long-shot rewards constituting the worst-case wrt dynamics and reward expectations - that is, reading the paper, I expected exactly those distributions to play the central role before getting to Definition 2. (I would recommend remarking on this fact somewhere in the introduction). On the other hand, I do agree that devising the proof seems non-trivial.
I liked the framing of the result as "how much should an agent pay for access to the future information" - I don't think it was present in the paper, and, although logically trivial, it still made me think of the result in a different light - I would recommend working it into the paper.
I decided to keep my current positive rating. | Summary: This paper examines the value of having lookahead information about future rewards in reinforcement learning. Specifically, it analyzes the competitive ratio between the optimal agent under no lookahead versus agents that can see reward realizations for some number of future timesteps. This competitive ratio is defined as the value (expected cumulative reward) of a standard RL agent with no lookahead divided by the value of an agent with L-step lookahead. Using this measure the authors provide tight bounds on this competitive ratio for different lookahead ranges, characterizing the worst-case reward distributions and environments.
Interestingly, they show connections between their results and fundamental quantities in offline reinforcement learning and reward-free exploration. The analysis provides theoretical insights into the value of future reward information, and opens up the roadmap for future works on transition look-ahead and development of approximate planning algorithms.
Strengths: **Strengths**
- The paper provides a rigorous theoretical analysis of the value of future reward information in RL, covering the full spectrum from one-step to full lookahead. It derives tight bounds on the competitive ratio for various lookahead ranges, characterizing worst-case reward distributions and environments. Notably, the competitive ratio is shown to be closely related to concentrability coefficients used in offline RL and reward-free exploration, suggesting a deeper connection between these areas.
- The analysis also includes specific environment types (e.g., chain MDPs, grid MDPs) to provide concrete examples. Notably they introduce "delayed tree" environment that exhibits near-worst-case competitive ratios, offering insights into what makes lookahead challenging to utilize.
- The focus on worst-case scenarios provides robust guarantees and insights that complement average-case analyses common in RL literature. This approach lays crucial groundwork for understanding lookahead in more complex environments and could inform future practical algorithm design, bridging theoretical robustness with potential real-world applications.
Weaknesses: **Weakness**
- The analysis assumes perfect knowledge of future rewards, which may be unrealistic in many practical scenarios where only noisy or partial information might be available.
- The paper focuses on theoretical analysis in a simplified tabular setting, which may not directly translate to more complex real-world RL problems or environments. Experiments on approximate planning for complex environments would complement its theoretical results and could have provided additional insights or validation.
Technical Quality: 3
Clarity: 3
Questions for Authors: -
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: -
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback.
* **Perfect future information:** as stated in the conclusions section, we agree that situations with noisy/imperfect predictions are of great interest and should be further investigated in future work. Nonetheless, we believe that the case of perfect information is still important to study, due to multiple reasons:
1. *Applicability:* in some problems, perfect or near-perfect information is available. For example, consider an inventory management problem, where supplies are bought in a market. The item prices are exactly known before each transaction, even if the market itself is stochastic - so this scenario could be formulated as one-step reward lookahead. Another scenario is ride-sharing: assume we travel between two points but are willing to pick up other travelers on the way. The knowledge of where and when travelers want to be picked up is (approximately) accurate.
2. *A stepping stone towards predictions.* The case of perfect information is the edge case in many different formulations of reward predictions. One formulation is when predictions become increasingly noisy as we look further into the future; our results are the limit case of no-noise up to a certain point, and infinite noise later on. Another situation is when predictions can either be perfect or adversarial, and agents need to learn how to utilize the prediction without losing too much if they are inaccurate (‘consistency-robustness tradeoff’); our paper analyzes the ‘consistent’ case. In both cases, it would be extremely hard to analyze the general case without having tight characteristics at the limit of perfect information.
* **Lookahead in complex environments:** we also think that extending our analysis beyond tabular environments is very interesting and could have numerous practical implications. Yet, we would also like to stress that even in the tabular setting, there are many interesting open questions: planning with multi-step lookahead information, learning when lookahead information is available, tight characterization of transition lookahead and more. We believe that before moving to more complex settings, it would be beneficial to establish a deeper understanding of lookahead in tabular settings. | Summary: This paper aims to quantifiably analyze the the value of future reward lookahead in Reinforcement Learning settings where future reward information is available before-hand. The authors utilize competitive analysis, and characterize the worst-case reward distribution while also deriving exact ratios for the worst case reward expectations between standard RL agents and agents with partial future-reward lookahead information.
Strengths: 1) The paper provides an important theoretical study shedding light on the importance of future reward lookahead in Reinforcement Learning settings. The problem is very relevant to not only simulation-based but also real-world scenarios where future reward information will either be known or can be inferred via exploration.
2) The paper is well-written, easy to follow and provides useful intuitions throughout allowing for readers to gain insight into the problem and the theoretical analysis presented.
Weaknesses: 1) There seems to be an important missing piece, specifically in situating the work with respect to the existing literature. A rich literature exists in the field of Control Theory that talks about the rollout approach and there has been theoretical evidence shedding light on advantages of rollout, which assumes the presence of future rewards (or <state,action> Q-values). How does this work connect to the Rollout approach and/or its variants?
2) It would be helpful to have conclusive statements along with definitions presented in the work. For example, line 121 - what are the implications of the fact that "the competitive ratio is the worst-possible multiplicative loss of the standard (no-lookahead) policy" to policy learning?
3) There seems to be some confusion about how dense rewards have been defined in the paper, line 344 onwards. Do the authors make the assumption that a reward is available in every state? How is the density defined in this case? Also, it is not clear why it is important to consider all states with non-zero rewards. With a sufficient number of trials given sparse rewards, agents could still navigate to rewarding future states. How does this affect the competitive ratio analysis presented later in the paper?
Technical Quality: 3
Clarity: 4
Questions for Authors: Please see weaknesses section for questions.
Confidence: 2
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: While authors include useful examples towards the end of the work, it will be useful to include a section on Broader Impact as it may allow to see how the results from this work can be translated to empirical studies.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful comments.
1. Thanks for the comment. To the best of our knowledge, there are two types of rollout approaches that are applied in control/RL: i) Rollout as a tool to perform planning: in this case, it is a computational scheme, and no future information is revealed. This kind of rollout is less relevant to our work (even though it might later be used as a planning tool for multi-step lookahead). In particular, ‘standard’ Q-values at the leaf of a rollout usually fall under this case. ii) Rollouts with additional information about state-reward realization: to our knowledge, this case is a particular instance of the Model Predictive Control Paradigm. We will try to clarify our discussion to reflect this, and will be happy to hear about any specific reference that the reviewer thinks we should discuss.
2. The no-lookahead agent is the standard agent used throughout the RL literature and serves as the ‘off-the-shelf’ agent. Therefore, the competitive ratio in our paper quantifies the maximal potential gain when moving from the standard RL scenario to agents that utilize future reward information. For example, consider a situation where we might get lookahead information, but obtaining it has a price (either because the information itself is costly or just because the lookahead algorithm is much more complicated). The CR can help determine whether the potential gain is worth the price. Another application is RL with adversarial rewards - as mentioned in Remark 1, our CR is an upper bound on the best achievable CR in adversarial settings. In fact, for full lookahead, one could calculate the policy that optimally covers the space (see, e.g., Al-Marjani et al., 2023), and our results imply that it achieves the optimal CR with adversarial rewards. Finally, our results provide interesting insights on sequential decision-making and MDPs. In particular, our results provide a new definition of the coverability coefficient that goes beyond the mathematical expression: the worst-case CR between no-lookahead and full-lookahead agents given the dynamics. To our knowledge, this is the first result that obtains the coverability coefficient as an intrinsic quantity - previous papers only rely on it for the analysis or obtain upper bounds that depend on it. We will further discuss this in the final version of the paper.
3. We apologize for the confusion. When we say that rewards are dense, we assume that if a reward could be obtained for one action in some state, then the expected rewards of all other actions of this state can be at most $C$ times lower. If all the expected rewards at some state are zero, then the rewards will deterministically be equal to zero (since the rewards are non-negative), and such states do not affect this result. When this assumption holds, the cost of balancing immediate reward collection and future reward collection is bounded - even if we focus on future rewards, we still collect a fraction of the best immediate reward. This is not the case, for example, in the prophet problem - we either collect immediate rewards and move to non-rewarding states or collect a zero immediate reward and move to a state with a positive value. We will rephrase this example to make it clearer in the final version of the paper. We are also open to suggestions on alternative names for this scenario.
4. We completely agree that taking this work to practical settings is an important future work - we will discuss it in the final version of the paper.
---
Rebuttal Comment 1.1:
Title: Response to Author Rebuttal
Comment: Thanks for your response! I will maintain my assessment of the paper. | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Night-to-Day Translation via Illumination Degradation Disentanglement | Reject | Summary: This paper presents an approach, namely N2D3, for night-to-day image translation. Specifically, the proposed pipeline involves two stages: illumination degradation disentanglement and degradation-aware contrastive learning. The first stage decomposes an image into darkness, well-lit areas, light effects, and highlight regions. The second stage applies contrastive learning to these four types of nighttime degradations. Extensive experiments conducted on the BDD100K and Alderley datasets demonstrate that N2D3 outperforms existing methods.
Strengths: - The paper addresses the critical problem of night-to-day image translation in computer vision.
- The authors provide comprehensive experimental validation of the proposed method.
Weaknesses: The reviewer has raised this paper for an ethics review due to a significant omission of a key citation. In Section 3.1, the authors introduce a color invariant term for light effect detection. However, this term was originally derived by Geusebroek et al. in their paper *Color Invariance* [1]. The authors devote an entire page to deriving the invariant term without appropriately citing the original work, which violates academic integrity. The authors should explain why this citation is missing, as it does not seem to be an unintentional oversight. This intentional omission also makes Eqs. (1)-(5) lack logical coherence and renders them hard to follow.
[1] Color Invariance. J. M. Geusebroek, R. van den Boomgaard, A. W. M. Smeulders, H. Geerts. IEEE TPAMI, 2001.
**Note that although the reviewer has raised the ethics review flag, the reviewer’s rating does not take this into account.**
In addition to the missing citation, the reviewer has concerns about the technical soundness of the paper. Specifically, why are four types of degradation considered? Since the disentanglement of well-lit and light effect regions is the paper’s main contribution, ablation studies using only three types of degradation (darkness, well-lit, and highlight) should be provided.
Besides, the paper’s citation style is inconsistent. For instance, citations for the same conference sometimes include the abbreviation and publisher while others do not (e.g., [1], [19], and [26]). Additionally, some citations include the month of the conference while others do not (e.g., [24], [28]), and some contain volume information while others do not (e.g., [22], [23]). Ensuring consistent citation formatting would enhance the paper’s overall presentation quality.
Technical Quality: 2
Clarity: 1
Questions for Authors: Beyond concerns regarding missing citations and technical soundness, the reviewer has the following questions:
- Given the prevalence of large models, why not approach the night-to-day translation task using diffusion models? For instance, a paper [2] addresses day-to-night/fog/snow translation on BDD100K; could this framework be adapted to handle night-to-day tasks? The reviewer does not require quantitative or qualitative results but would appreciate the authors’ insights on this matter.
- What is the rationale behind categorizing degradation into four types? The paper mentions that light effects involve phenomena like flare, glow, and specular reflections. Could degradation be categorized into more types? While a physics-based approach may not apply here, could a segmentation model be trained for this purpose (the reviewer acknowledges the lack of labeled data for training such a model, but there might exist other possible solutions)?
[2] Greenberg et al. S2ST: Image-to-Image Translation in the Seed Space of Latent Diffusion. In CVPR, 2023.
Confidence: 4
Soundness: 2
Presentation: 1
Contribution: 3
Limitations: The authors have adequately discussed the limitations of the work, and this paper does not have any negative social impacts.
Flag For Ethics Review: ['Ethics review needed: Data privacy, copyright, and consent']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ***Q1: Omission of a key citation.***
**R1:**
We apologize for the omission of this key citation, which has caused confusion. We will ensure that the key citation is included in the revised version.
***Q2: Why are four types of degradation considered?/What is the rationale behind categorizing degradation into four types?***
**R2:**
At nighttime, the intensity of illumination is the most important criterion for determining whether patterns are far from each other, which categorizes nighttime images into three non-overlapping regions: high-light, well-lit, and darkness. However, within well-lit regions, colored illumination still results in complex patterns with similar intensity levels, which require further subdivision. Therefore, we derive an invariant to extract features related to colored illumination and propose categorizing and disentangling patterns into darkness, well-lit regions, light effects, and high-light to address these challenges.
We hope our explanation can address your concerns.
***Q3: The paper’s citation style is inconsistent.***
**R3:**
Thank you for your advice. We will revise the citation and ensure the style is consistent in the camera-ready version.
***Q4: why not approach the night-to-day translation task using diffusion models?***
**R4:**
Thank you for your advice. We acknowledge that diffusion models have significant potential for addressing translation-based image tasks. However, **applying diffusion models directly does not effectively solve the night-to-day translation problem.** This task is fundamentally a restoration problem that requires recovering information rather than merely performing style transfer, as seen in day-to-night translation tasks. The lack of paired training data further complicates night-to-day translation, making current supervised diffusion backbones insufficient.
While we acknowledge the value and potential of diffusion models for night-to-day translation, **further research into the mechanisms of nighttime imaging and the development of techniques for extracting significant information from such images are more crucial**. These advancements will substantially influence the effective application of diffusion models in night-to-day translation tasks, particularly in selecting controllable information as conditions in the diffusion process.
***Q5: Could degradation be categorized into more types? Could a segmentation model be trained for this purpose?***
**R5:**
Yes, the degradations can be categorized into more types based on varying levels of illumination density, different types of colored illumination, scattering, and reflective flare. However, apart from degradations with distinct edge structures, such as scattering flares, the efficacy of segmentation models in extracting additional types of degradation is limited.
In our view, research into the mechanisms of imaging and optical systems, combined with physics-informed machine learning, represents a more promising approach for ultimately addressing night-to-day translation and related nighttime imaging tasks. We sincerely appreciate your advice.
---
Rebuttal Comment 1.1:
Title: Response to Author Rebuttal
Comment: I have thoroughly read all the reviewers’ comments and the author's rebuttal, and I thank the authors for their responses. However, I still have the following concerns about this paper:
**Academic Integrity**: As pointed out by all reviewers, this paper misses a key citation [1], which the authors addressed by “apologizing for our oversight.” This response is insufficient to convince me. The authors acknowledged in their rebuttal to reviewer JHwd that “Eq (5) introduces the invariant from [1],” yet in their response to reviewer rJuz, they wrote “we derive an invariant,” and in their paper, they stated, “we observe that the following color invariant.” Furthermore, in Lines 148-149, the authors wrote, “We develop a computation scheme” and then introduced Eq. (9), which is exactly Eq. (31) in [1]. The authors’ rebuttal only reinforces my suspicion that the missing citation was not an unintentional oversight.
[1] Color Invariance. J. M. Geusebroek, R. van den Boomgaard, A. W. M. Smeulders, H. Geerts. IEEE TPAMI, 2001.
**Technical Soundness**: The authors argued, "This task is fundamentally a restoration problem that requires recovering information rather than merely performing style transfer.” However, the N2D3 still employs GAN, a generative network, as its backbone, which may introduce semantic changes beyond information recovery. For instance, in Figure 4, the lamppost in the nighttime image disappears in the translated daytime image; in Figure 5, the traffic light appears to be floating in the air.
Given these issues, I choose to maintain my initial rating.
---
Rebuttal 2:
Title: Response to Academic Integrity and Technical Soundness
Comment: **Response to Academic Integrity**
Thank you for your valuable comments.
First, we want to assure you that academic integrity is paramount to us. We have taken this feedback seriously and have implemented additional checks in our manuscript preparation process to prevent such oversights in the future. **The omission was due to an unfortunate lapse during our final citation review process.** Citation [1] was included in an earlier version of the manuscript but was inadvertently removed during subsequent revisions, and this omission went unnoticed. Additionally, in Sections 3.1 and 3.2, we mistakenly believed the paper was cited and retained simplified statements, which led to difficulties in understanding. We sincerely apologize for this oversight. We have thoroughly revised our manuscript and now explicitly cite [1] in all relevant sections, including Eq. (5) and Eq. (9).
Second, while our work builds on the invariant presented in [1], **there are several key differences**:
+ **The different photometric model.** The invariant in [1] is derived from the photometric model $E(\lambda, x) = e(\lambda)\, i(x)\, R_\infty(\lambda, x)$, which is tailored for colored uneven illumination, whereas our invariant is derived from the photometric model
$$E(\lambda, x) = \begin{cases} e(\lambda, x) & \text{if } x \notin \Omega, \\ e(\lambda, x)\, R(\lambda)\, C(x) & \text{if } x \in \Omega, \end{cases}$$
which is designed to describe the complex illumination in nighttime environments.
+ **The different characteristics.** Lemma 8 in [1] introduces a characteristic that extracts object reflectance, specifically edge-relevant features in images. In contrast, the invariant in our work, described in Eq. (6), demonstrates the ability to extract light effects, which are degradation-related features specific to nighttime conditions.
+ **The different computation process.** Unlike the computation process in [1], which focuses on extracting high-frequency features such as edges, our work employs two additional normalization and activation functions to extract relatively low-frequency light effects. These steps are specially designed for nighttime disentanglement.
These differences uniquely demonstrate our invariant’s ability to extract light effects in nighttime images, both empirically and theoretically. This specific invariant is derived from our nighttime photometric model, which was developed through our original research.
Despite these distinctions, we fully understand the importance of giving proper credit to prior work. We commit to including citation [1] in the revised version and thank you again for your valuable comments.
[1] Color Invariance. J. M. Geusebroek, R. van den Boomgaard, A. W. M. Smeulders, H. Geerts. IEEE TPAMI, 2001.
**Response to Technical Soundness**
Thank you for your comments.
Enhancement-based methods prioritize preserving information from the night domain. However, **night-to-day translation has higher requirements, prioritizing the conversion of images to the daytime domain first, while maintaining semantic consistency.**
We acknowledge that the GAN-based methods are not perfect and may result in the semantic changes. To address this, we developed a degradation-aware contrastive learning approach designed to maintain semantic consistency and ensure successful translation. **Our strategy has proven effective, significantly outperforming earlier GAN-based night-to-day works [1, 2, 3].**
Additionally, our method demonstrates significant benefits for downstream tasks, such as nighttime image localization and semantic segmentation, compared to enhancement-based methods. As shown in Table 1 of the main paper, **our approach achieves a 5.26 mIoU improvement and a 9-point gain in SIFT scores compared to the most advanced enhancement-based methods**, indicating greater potential for enhancing downstream performance.
Thanks for your comments and we wish these explanations can address your concerns.
[1] Night-to-day image translation for retrieval-based localization. A. Anoosheh, T. Sattler, R. Timofte, M. Pollefeys, and L. V. Gool. ICRA, 2019.
[2] Forkgan: Seeing into the rainy night. Z. Zheng, Y. Wu, X. Han, and J. Shi. ECCV, 2020.
[3] Adverse weather image translation with asymmetric and uncertainty-aware gan. J. Kwak, Y. Jin, Y. Li, D. Yoon, D. Kim, and H. Ko. BMVC, 2021.
---
Rebuttal 3:
Title: Further Response to Author Rebuttal
Comment: Thank you to the authors for their further clarification. My concern regarding academic integrity is mostly resolved, though I still find it unusual to use a first-person narrative like "we develop" when referring to results derived from other literature (e.g., Eq. (9)). This phrasing suggests that the person writing the paper may not be the one who proposed the method, conducted the experiments, or fully understood the related work. However, I would now like to focus on new questions that have emerged as I delve deeper into the technical details of this paper.
Firstly, I agree with the authors that their image formation model differs from the one used in [1]. Specifically, [1] assumes a matte, dull surface with Fresnel reflectance set to 1, while this paper employs a model where Fresnel reflectance can either approach 0 or 1. However, I find the derivation from Equation (2) to (3) problematic. Specifically, $R_\infty(\lambda, x)$ cannot be decomposed into two separate functions dependent on $\lambda$ and $x$, respectively.
The explanation following Eq. (2) states that for a given pixel location $x_0$, $R_\infty$ only depends on $\lambda$. While this is obvious since $R_\infty$ depends on two variables $x$ and $\lambda$ only, it does not logically lead to the conclusion that $R_\infty(\lambda, x)$ can be decomposed into $R(\lambda)C(x)$.
For instance, consider a simple model $R(\lambda, x) = \lambda x +1$ (it does not refer to a specific physical model, but it satisfies the statement "for any local pixels, the material reflectivity is determined if the material is given"). If functions $C$ and $R$ exist such that $R_\infty(\lambda, x) = R(\lambda)C(x)$, then setting $x = 0$ would imply $R(\lambda) = \frac{1}{C(0)}$ for all $\lambda$, which is incorrect since $R$ should not be a constant function.
Therefore, further assumptions are necessary for reaching Eq. (3).
Additionally, I have concerns regarding the experiments presented in this paper. Specifically, while the experiments show that using a four-component decomposition is better than a three-component one, they do not demonstrate that this four-component decomposition is optimal or superior to a heuristic approach, such as classifying the pixels into four clusters using a k-nearest algorithm. This ablation is necessary as the qualitative results in Figure (3) seem similar to a simple intensity (RGB) based separation of light-effect and well-lit areas.
Lastly, could the authors clarify what the invariant $L$ represents in Table 2 and Line 278?
Overall, I would like more clarification on Eq. (3) and will reconsider my rating after reviewing the authors’ additional response.
---
Rebuttal 4:
Title: Further Response
Comment: Thank you for your valuable feedback.
**First,** in our photometric model, we assume that the reflectance function $R_\infty(\lambda, x)$ can be decomposed into the product of two functions, $R(\lambda)$ and $C(x)$, which depend on wavelength and location, respectively. This decomposition is grounded in the following assumption and definition.
We assume that **materials are uniform and homogeneous within a local area** under normal conditions. Specifically, the optical properties of a material within a small region are described by the function $R(\lambda)$, which characterizes the material's properties as a function of wavelength and is independent of location. A similar assumption is also used in materials science, optics, and computer vision [1, 2, 3]. Under this assumption, we can simplify the reflectivity function $R_\infty(\lambda, x)$ in a local area to $cR(\lambda)$, where $c$ is a coefficient that describes the material type.
However, this model is limited to describing photometric properties in a local area and does not capture global nighttime conditions. To address this limitation, we introduce the material spatial distribution function $C(x)$, defined as $C: \mathbb{R}^n \to \{c_1, c_2, \ldots, c_m\}$. With $C(x)$, we can model more complex nighttime scenes with diverse material types at macro scales, as detailed in Eq. (4) of our paper.
The function $C(x)$ does not influence the derivation of the subsequent invariant and its properties. In the derivation of our invariant, $C(x)$ appears in both the numerator and denominator of the derivative fraction and cancels out, as shown in Eq. (6) of our paper. Nevertheless, the introduction of $C(x)$ is necessary to ensure that our photometric model accurately represents and describes the nighttime environment.
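To make the cancellation concrete, consider a ratio-type quantity built from spectral derivatives (a hedged illustration only; the exact invariant is the one given in Eq. (6) of the paper). Inside $\Omega$, where $E(\lambda, x) = e(\lambda, x)\, R(\lambda)\, C(x)$, writing $e_\lambda = \partial e / \partial \lambda$ and $R_\lambda = \partial R / \partial \lambda$,

$$\frac{1}{E}\frac{\partial E}{\partial \lambda} = \frac{e_\lambda R\, C + e\, R_\lambda C}{e\, R\, C} = \frac{e_\lambda}{e} + \frac{R_\lambda}{R},$$

so the factor $C(x)$ drops out of the ratio entirely, and any invariant assembled from such normalized spectral derivatives is unaffected by the material spatial distribution.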
We will emphasize these points in the revised version.
[1] Materials science and engineering: an introduction. W. D. Callister and D. G. Rethwisch New York: Wiley, 1999.
[2] Optical properties of solids. E.A. Moore and L.E Smart. CRC Press, 2020.
[3] Color invariance. J. M. Geusebroek, R. van den Boomgaard, A. W. M. Smeulders, H. Geerts. IEEE TPAMI, 2001.
**Second,** we present additional ablation studies on the four components, as detailed in the following tables. The studies reveal that while performance slightly improves with refined classification into four clusters, a more accurate segmentation based on our physical model significantly enhances performance and achieves optimal results. The challenge arises from the similarity in intensity between light effect regions and well-lit areas, making it difficult to differentiate them using a simple KNN. Our physical prior, which extracts features beyond intensity, enables better subdivision and contributes significantly to the final performance.
| | BDD100K | | | Alderley | |
| :---------------------------------------- | :--------: | :------: | :-------: | :---------: | :-------: |
| | FID | LPIPS | FID | LPIPS | SIFT |
| 3 clusters | 49.1 | 0.592 | 62.9 | 0.726 | 9.83 |
| 4 clusters with naïve KNN | 46.8 | 0.529 | 60.5 | 0.721 | 11.41 |
| 4 clusters with physical prior | 31.5 | 0.466 | 50.9 | 0.650 | 16.62 |
**Third,** in our experiments, $L$ refers to the setting where only three clusters—darkness, well-lit, and high light—based on illumination intensity are used for disentangling. In contrast, $N$ in Table 2 indicates that we further incorporate physical priors to extract light effects during disentangling, using four clusters—darkness, well-lit, light effect, and high light. We will provide additional clarification on this aspect to enhance understanding.
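For intuition, the intensity-only three-way split underlying the $L$ setting can be sketched as follows (an illustrative numpy sketch with our own function name and a plain 1-D k-means standing in for the clustering step; the actual implementation in the paper may differ):

```python
import numpy as np

def intensity_clusters(img_rgb, k=3, iters=20):
    """Cluster pixels into k intensity levels (darkness / well-lit / high light).

    img_rgb: H x W x 3 float array in [0, 1]. Illumination is roughly
    estimated as the per-pixel max over the RGB channels, a common
    practice in Retinex-style enhancement work.
    """
    lum = img_rgb.max(axis=-1).ravel()
    # Deterministic quantile initialisation for the 1-D k-means centres.
    centers = np.quantile(lum, np.linspace(0.0, 1.0, k))
    for _ in range(iters):
        labels = np.argmin(np.abs(lum[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = lum[labels == j].mean()
    order = np.argsort(centers)          # label 0 = darkest ... k-1 = brightest
    remap = np.empty(k, dtype=int)
    remap[order] = np.arange(k)
    return remap[labels].reshape(img_rgb.shape[:2])
```

The $N$ setting would then further split the middle (well-lit) cluster using the physics-derived light-effect invariant rather than intensity alone.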
---
Rebuttal 5:
Title: Further Response
Comment: Thank you for clarifying the photometric model and notations and providing the additional experiments.
However, as I mentioned under *Technical Soundness,* I believe maintaining semantic consistency is crucial in night-to-day translation, an area where the proposed method falls short. A more straightforward alternative to the GAN-based approach could involve learning a per-pixel curve [1, 2] for different regions identified by N2D3, rather than applying a single curve across the entire image.
[1] Li et al., Learning to Enhance Low-Light Image via Zero-Reference Deep Curve Estimation, TPAMI, 2021.
[2] Wang et al., Unsupervised Face Detection in the Dark, TPAMI, 2023.
Additionally, the intuition behind the 4-category segmentation is not fully explained. While I acknowledge its superiority over 3/4-cluster KNN-based segmentation, I remain concerned about the scalability of ‘Degradation-Aware Contrastive Learning’ when faced with more degradation types, as each patch may contain pixels from multiple categories, and patches of different categories may not necessarily form a *negative* pair. This limitation raises doubts about the method’s foundation for future work.
Lastly, as acknowledged, the paper lacks essential explanations of the photometric model, notations, and important citations.
Given these points, I believe a thorough revision is necessary before acceptance, and I therefore raise my score to a borderline reject.
---
Rebuttal 6:
Title: Response to Reviewer dsJ3
Comment: Thank you for your valuable feedback.
**First,** we recognize that maintaining semantic consistency is crucial in this field. Our methods successfully preserve global semantic consistency across most scenes, though slight artifacts may appear in finer details. Nonetheless, our approach significantly outperforms previous methods in this regard and shows advantages in downstream tasks.
Moreover, our method is fundamentally different from Zero-DCE [1], as Zero-DCE cannot translate a nighttime image into the daytime domain (in terms of FID in Table 1), even though GANs are utilized. In downstream tasks, our approach also outperforms Zero-DCE, as shown in Table 1.
However, we still acknowledge that introducing per-pixel curves for different degradation types could be a valuable direction for future research. We appreciate your suggestion and will consider it in our ongoing work.
**Second,** for the two misunderstandings that have caused the concerns about the scalability, we would like to make the following clarifications:
Regarding the misunderstanding about multiple categories, **we clarify that each patch in our method originates from a single category, with no overlap between different degradation types.** Categories such as darkness, well-lit, and high light are uniquely labeled based on illumination intensity using KNN. The light effect regions are decomposed from the well-lit regions, as detailed in Eq. (11) of our paper, and the refinement operation $M_n \leftarrow M_n - M_{le}$ in line 158 ensures no overlap.
Regarding the misunderstanding that patches from different categories may not necessarily form negative pairs, **we clarify that negative pairs are formed within the same category, not across different categories.** This ensures that all negative examples are hard negatives, meaning they are sufficiently similar to the query patch but not identical. Mainstream theoretical studies indicate that hard negative example mining can enhance the performance of contrastive learning, which is consistent with our empirical results. Our method demonstrates performance advantages over randomly selecting patches as negative samples (as in CUT [2]), indicating that our approach effectively mines negative samples.
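For intuition, the within-category selection amounts to an InfoNCE loss whose negatives are other patches from the same degradation category as the query (a minimal numpy sketch with our own names; it omits the optimal-transport reweighting of the full method):

```python
import numpy as np

def within_category_infonce(query, positive, negatives, tau=0.07):
    """InfoNCE loss for one query patch embedding.

    query, positive: 1-D embeddings of corresponding patches.
    negatives: (N, D) embeddings of OTHER patches drawn from the SAME
    degradation category as the query, i.e. hard negatives.
    """
    def norm(v):
        return v / np.linalg.norm(v, axis=-1, keepdims=True)
    q, p, n = norm(query), norm(positive), norm(negatives)
    logits = np.concatenate(([q @ p], n @ q)) / tau   # positive first
    logits -= logits.max()                            # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())
```

The loss is small when the query matches its positive and is far from the same-category negatives, which is exactly the hard-negative-mining behaviour described above.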
Moreover, to further mitigate potential disentanglement errors, we introduced a reweight matrix based on optimal transport, which reassesses the weights for the sampled pairs to ensure optimal negative sample mining. The quantitative comparison in Table 2 confirms that our disentanglement approach significantly enhances performance through effective hard negative sample mining, with additional improvements achieved through the reweighting operations.
**Third,** we have provided a detailed explanation of these aspects in our previous comments and appreciate your recognition of our earlier explanations. In the revised version, we will add the key citations in line 114, and include the assumptions and notations between Eq (2) and Eq (3).
[1] Learning to enhance low-light image via zero-reference deep curve estimation. Li et al., TPAMI, 2021.
[2] Contrastive learning for unpaired image-to-image translation. T. Park. A. A. Efros, R. Zhang, J. Y. Zhu. ECCV 2020.
---
Rebuttal Comment 6.1:
Comment: Thank you for your prompt response.
Regarding Zero-DCE, I want to highlight that some recent work [1] has already applied this curve in reverse for the night-to-day translation task. The authors might consider trying this approach.
[1] Luo et al., ‘Similarity Min-Max: Zero-Shot Day-Night Domain Adaptation,’ ICCV 2023.
Regarding scalability, I apologize for the earlier oversight in selecting negative samples. However, I still have concerns about the patch sampling strategy, as segmentation is based on pixels, which could lead to issues with patches along the borders.
I recognize that this paper is borderline after a thorough discussion. However, I am inclined to reject it due to its poor presentation (inconsistent citation style, missing key citations, lack of theoretical assumptions, unclear notations, etc.). While I would recommend a major revision if this were a journal submission, as a conference paper, I have to lean toward a borderline rejection.
Thank you again to the authors for their detailed rebuttal. I believe I have fulfilled my responsibilities as a reviewer, even though I have been on vacation and traveling since last week. I will not oppose the AC if they believe the reasons to accept outweigh those for rejection. | Summary: The paper proposes a new framework N2D3 for solving night to day image translation problem. Their framework consists of a physics-based disentanglement module and a contrastive learning module for preserving semantic consistency. Their method shows improved performance in terms of FID and downstream task performance on BDD100K and Alderley dataset.
Strengths: - Using the Kubelka-Munk theory for different degradation types and applying a patch-based image translation is a novel method.
- The figures are well-made. For instance, the visualization in Fig. 1 and Fig. 2 are intuitive and helpful for understanding the whole architecture.
- Quantitative evaluation results are convincing, showing the effectiveness of the proposed framework in terms of various metrics.
Weaknesses: - Clarity of the method sections can be improved. For instance, including more rigorous definitions or visualizations of what well-lit and different light effects mean and provide a motivation why it is helpful to disentangle those illumination causes separately.
- The authors can also add proper citations to previous work when they mention “by common practice”, for instance in line 107, line 142 and line 196.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Why is illuminance computed by the maximum of RGB channel (line 107)? I think this is indeed not a common practice.
- Could you provide some ablation visualizations with and without applying the disentanglement method in 3.2? For instance, the 3 clusters with initial disentanglement in 3.1 and the four clusters afterwards.
- Since the evaluation metrics are performed on segmentation tasks, can you provide some visual examples of the segmented regions of those previous methods and your method?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: As the authors already mentioned in the appendix, the current physics-aware degradation disentanglement module is designed mostly for illumination related effects and does not handle other types of degradation such as raindrops. I wonder how the authors think the framework could benefit or inspire other types of adverse weather image restoration tasks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ***Q1: Clarity of the method sections can be improved. For instance, including more rigorous definitions or visualizations of what well-lit and different light effects mean and provide a motivation why it is helpful to disentangle those illumination causes separately.***
**R1:** Thanks for the advice.
A key observation is that, when performing Night2Day translation, treating all regions equally leads to significant artifacts by mixing different patterns from various regions, for example mixing light-effect patterns with high-light patterns, as shown in Figure 1 of the main paper. Intuitively, separating these patterns and regularizing the structure helps mitigate these artifacts.
Based on this intuition, we sought a disentanglement strategy to separate these patterns. At nighttime, the intensity of illumination is the most important criterion for determining whether patterns are far from each other, which categorizes nighttime images into three non-overlapping regions: high light, well-lit, and darkness. However, within well-lit regions, colored illumination still results in complex patterns with similar intensity levels, which require further subdivision. Therefore, we derive an invariant to extract features related to colored illumination and propose categorizing and disentangling patterns into darkness, well-lit, light effects, and high-light.
We hope our explanation can address your concerns.
***Q2: The authors can also add proper citations to previous work when they mention “by common practice”, for instance in line 107, line 142 and line 196.***
**R2:** Thanks for the advice. We will revise these problems in the camera-ready version.
***Q3: Why is illuminance computed by the maximum of RGB channel (line 107)? I think this is indeed not a common practice.***
**R3:** This estimation operation is broadly used in low-light image enhancement work for rough illumination estimation, such as in LIME (TIP 2016), URetinex-Net (CVPR 2022), and PairLIE (ICCV 2023). We hope these references address your concern.
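Concretely, the estimate is a per-pixel channel maximum (an illustrative numpy sketch, not our exact code):

```python
import numpy as np

def rough_illumination(img_rgb):
    """Rough illumination map: the per-pixel maximum over the RGB channels,
    as commonly used for initial illumination estimation in Retinex-based
    enhancement methods such as LIME."""
    return np.asarray(img_rgb, dtype=float).max(axis=-1)
```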
***Q4: Could you provide some ablation visualizations with and without applying the disentanglement method in 3.2?***
**R4:** We provide such ablation visualization in the PDF file of the Author Rebuttal. Thanks for your advice again.
***Q5: Can you provide some visual examples of the segmented regions of those previous methods and your method?***
**R5:** We provide visual examples of the segmented regions comparison in the PDF file of the Author Rebuttal. Thanks for your advice again.
***Q6: I wonder how the authors think the framework could benefit or inspire other types of adverse weather image restoration tasks.***
**R6:** The core of this framework is utilizing physical priors to disentangle the mixture of complex patterns in degraded images, allowing the generator to learn these patterns more effectively. From a metric learning perspective, this approach acts as physics-informed hard negative example mining during contrastive learning. We believe that this disentanglement framework can also benefit other types of adverse weather image restoration with translation-based methods, as it leverages related physical priors to extract the corresponding degradation patterns.
---
Rebuttal Comment 1.1:
Title: thank you for the response
Comment: I appreciate the authors' response and the additional visualizations provided during the rebuttal. The proposed disentanglement ideas for addressing the night-to-day image translation task is supported by extensive quantitative experiments. While I remain slightly positive about this work, I believe that the theoretical derivations require further development as suggested by other reviewers. Therefore, I will maintain my original rating.
---
Reply to Comment 1.1.1:
Title: Response to the Theoretical Derivations
Comment: Thank you for your valuable feedback and affirmation of our work. We hope that the following clarification in the theoretical derivation addresses your concerns.
First, we assume that **materials are uniform and homogeneous within a local area** under normal conditions. Specifically, the optical properties of a material within a small region are described by the function $R(\lambda)$, which characterizes the material's properties as a function of wavelength and is independent of location. A similar assumption is also used in materials science, optics, and computer vision [1, 2, 3]. Under this assumption, we can simplify the reflectivity function $R_\infty(\lambda, x)$ in a local area to $cR(\lambda)$, where $c$ is a coefficient that describes the material type.
We then find that this model is limited to describing photometric properties in a local area and does not adequately capture global nighttime conditions. To address this limitation, we introduce the material spatial distribution function $C(x)$, defined as $C: \mathbb{R}^n \to \{c_1, c_2, \ldots, c_m\}$. With $C(x)$, we can model more complex nighttime scenes with diverse material types at macro scales, as detailed in Eq. (4) of our paper.
The function $C(x)$ does not influence the derivation of the subsequent invariant and its properties. In the derivation of our invariant, $C(x)$ appears in both the numerator and denominator of the derivative fraction and cancels out, as shown in Eq. (6) of our paper. Nevertheless, the introduction of $C(x)$ is necessary to ensure that our photometric model accurately represents and describes the nighttime environment.
We will emphasize these points in the revised version for better understanding.
[1] Materials science and engineering: an introduction. W. D. Callister and D. G. Rethwisch New York: Wiley, 1999.
[2] Optical properties of solids. E.A. Moore and L.E Smart. CRC Press, 2020.
[3] Color invariance. J. M. Geusebroek, R. van den Boomgaard, A. W. M. Smeulders, H. Geerts. IEEE TPAMI, 2001. | Summary: This paper presents a comprehensive solution for Night2Day image translation by leveraging physical priors, photometric modeling, and contrastive learning, leading to state-of-the-art performance in visual quality and downstream vision tasks.
Strengths: The authors develop a photometric model based on Kubelka-Munk theory to extract physical priors from nighttime images. This model helps to disentangle different types of illumination degradations by analyzing the illumination distribution.
Overall, the paper presents a novel approach to handling nighttime image translation by considering the unique challenges posed by varying degradations and employing both physical modeling and advanced learning strategies to address these challenges effectively.
Weaknesses: 1. The writing is difficult to understand. The explanations and derivations for Eqs (1) to (5) lack logical coherence and necessary references, making them hard to follow. The derivations for Eqs (7) to (9) also lack supporting references, casting doubt on their validity.
2. The motivation for DAR is unclear. Please explain the motivation behind it.
3. There is no baseline network, making it difficult to determine the performance gain for the specific module.
4. The ablation experiments lack in-depth analysis. For instance, there are no ablation experiments to verify the impact of introducing four regions for disentanglement versus three regions (e.g., excluding the light effects region).
5. There is a need to compare with more recent methods for unpaired image-to-image translation, such as COCO-FUNIT, StegoGAN, GP-UNIT, etc. Please check reference [5], as it does not seem to be published in CVPR.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. The explanations for Eqs (1) to (5) lack logical coherence and necessary references, making them hard to follow. What is the "color invariant response" in Eq (5), and why is it used to extract illuminance? The derivations and calculations for Eqs (7) to (9) also lack supporting references, casting doubt on their validity. Why is there a need to refine Mn?
2. How is the reweighting matrix obtained through optimal transport? Are there any references that have implemented this approach?
3. Contrastive learning has been widely used in image translation. The paper lacks ablation experiments to support the performance gains from the reweighting operation.
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The motivation behind some methodological choices is not clearly explained.
The ablation experiments are insufficient and lack depth.
The paper lacks a discussion on the computational complexity and resource consumption of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ***Q1: The writing is difficult to understand. The explanations and derivations for Eqs (1) to (5) lack logical coherence and necessary references, making them hard to follow. The derivations for Eqs (7) to (9) also lack supporting references, casting doubt on their validity.***
**R1:** We apologize for the omission of a key citation, which has led to poor readability in this part, as explained in the author rebuttal. Specifically, Eq (1) is derived from [1], and Eqs (2)–(4) present the simplified photometric model we propose for nighttime environments. Eq (5) presents the invariant from [1], and we derive new characteristics based on our simplified model in Eq (6). In Eqs (7)–(9), we follow [1] to compute the basic invariant and refine it for disentangling. We hope this explanation addresses your concerns.
[1] Color Invariance. J. M. Geusebroek, R. van den Boomgaard, A. W. M. Smeulders, H. Geerts. IEEE TPAMI, 2001.
***Q2: The motivation is unclear. Please explain the motivation behind it.***
**R2:** A key observation is that, when performing Night2Day translation, treating all regions equally leads to significant artifacts by mixing patterns from different regions, as shown in Figure 1 of the main paper. Intuitively, separating these patterns and regularizing the structure will help mitigate these artifacts. At nighttime, the intensity of illumination categorizes nighttime images into three non-overlapping regions: high-light, well-lit, and darkness. However, within well-lit regions, colored illumination still produces complex patterns with similar intensity levels, which require further subdivision. Therefore, we derive an invariant to extract features related to colored illumination and propose categorizing and disentangling patterns into darkness, well-lit, light effects, and high-light.
We hope this explanation addresses your concerns.
***Q3: There is no baseline network, making it difficult to determine the performance gain for the specific module.***
**R3:** The baseline network is CUT [2], which operates without any physical information guidance in contrastive image translation. This baseline is included in the performance comparison in Table 1: our method improves upon it by 13.8 in FID and 9.84 in SIFT score on the Alderley dataset, and by 24 in FID and 12.25 in mIoU on the BDD100k dataset. Our ablation study in Table 2, Section 4.4, identifies the effectiveness of each module.
We promise to highlight these improvements in the main paper for better understanding.
[2] Contrastive learning for unpaired image-to-image translation. T. Park, A. A. Efros, R. Zhang, J. Y. Zhu. ECCV 2020.
***Q4: For instance, there are no ablation experiments to verify the impact of introducing four regions for disentanglement versus three regions (e.g., excluding the light effects region).***
**R4:** We have provided this analysis in Section 4.4, in the second subtable of Table 2. When only $L$ is activated, we incorporate only darkness, well-lit, and high-light for disentangling; when both $L$ and $N$ are activated, this corresponds to the full method. The full method demonstrates over a 10-point improvement in FID compared to using only three types of disentanglement across both datasets. We promise to clarify this expression for better understanding.
***Q5: There is a need to compare with more recent methods for unpaired image-to-image translation, such as COCO-FUNIT, StegoGAN, GP-UNIT, etc.***
**R5:** We provide a comparison on the two datasets with the most advanced method, StegoGAN (CVPR 2024). It is clear that our method consistently outperforms StegoGAN, demonstrating superior performance. We will include this in the camera-ready version.
| Method | BDD100k FID | BDD100k LPIPS | Alderley FID | Alderley LPIPS |
| :--- | :---: | :---: | :---: | :---: |
| StegoGAN | 89.9 | 0.687 | 82.8 | 0.718 |
| Ours | **31.5** | **0.466** | **50.9** | **0.650** |
***Q6: Please check the reference [5], as it does not seem to be published in CVPR.***
**R6:** This paper was published in the CVPR Workshop 2023. We will revise this in the main paper. Thank you for your reminder.
***Q7: How is the reweighting matrix obtained through optimal transport? Are there any references that have implemented this approach?***
**R7:** The reweighting matrix is obtained by solving the optimization problem in Eq (14). A similar approach is employed in MoNCE (CVPR 2022), which is also compared in our experiments. Unlike MoNCE, which computes the reweighting matrix across entire image patches, our method designs the reweighting matrix specifically for each degradation region. This tailored approach results in over 20 FID points of improvement on the Alderley dataset and nearly 10 on the BDD100k dataset compared to MoNCE, as shown in Table 1.
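To make the MoNCE-style reweighting concrete, here is a minimal sketch of entropic optimal transport (Sinkhorn iterations) over a toy patch-similarity matrix. The cost `1 - similarity`, the uniform marginals, and all names are illustrative assumptions, not the paper's exact Eq (14):

```python
import numpy as np

def sinkhorn_reweight(sim, eps=0.05, n_iter=200):
    """Entropic optimal transport on a patch-similarity matrix.

    Treats (1 - similarity) as transport cost and returns a plan T with
    (approximately) uniform marginals; each row of T can serve as
    weights over negative patches in a contrastive loss.
    """
    n, m = sim.shape
    K = np.exp(-(1.0 - sim) / eps)              # Gibbs kernel
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    v = np.ones(m)
    for _ in range(n_iter):
        u = a / (K @ v)                         # match row marginals
        v = b / (K.T @ u)                       # match column marginals
    u = a / (K @ v)                             # final row correction
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
sim = rng.uniform(size=(4, 6))                  # toy similarity scores in [0, 1]
T = sinkhorn_reweight(sim)
row_weights = T / T.sum(axis=1, keepdims=True)  # per-anchor negative weights
```

A smaller `eps` makes the plan concentrate on the most similar patches; a larger one spreads the weights toward uniform.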
***Q8: Contrastive learning has been widely used in image translation. The paper lacks ablation experiments to support the performance gains from the reweighting operation.***
**R8:** We discuss the performance gains from the reweighting operation in the first subtable of Table 2 in Section 4.4 of the main paper. The notation (b) represents the degradation-aware reweighting operation. This table shows nearly a 5-point FID improvement from the reweighting operation on both datasets.
---
Rebuttal 2:
Comment: Dear Reviewer **rJuz**,
Thank you for taking the time to review our submission and for your constructive comments and favorable recommendation. We would like to confirm whether our responses have adequately addressed your earlier concerns. If you have any additional questions or suggestions, we would be happy to address them to further enhance the quality of our paper.
Best regards,
Authors | Summary: This paper proposes to address the night-to-day translation problem in which its learning basically can be briefly described by two steps: 1) illumination distribution as well as the physic priors built upon the Kubelka-Munk photometric model are firstly adopted to separate/disentangle the image regions into four degradation categories, i.e. darkness, well-lit, light effects, and high-light, in which such illumination degradation disentanglement is the main contribution of the proposed method; 2) the degradation-aware contrastive learning module is applied to maximize the mutual information between patches in the same spatial location from the generated image and the source image, where the anchor and its corresponding negative patches should be from the same degradation category (i.e. degradation-aware sampling) and the weights for each negative patch are determined by similar matrix obtained from the optimal transport computation (i.e. degradation-aware reweighting). Moreover, the GAN-based objective function is employed to bridge the domain gap between (generated) daytime and nighttime images. The translated images (from nighttime to daytime) are shown to have better quantitative and qualitative performance (in terms of FID) for aligning with the real nighttime image distribution.
Strengths: + In addition to providing better translation performance (both quantitative and qualitative), the translated images produced by the proposed method are shown to have better structural similarity with respect to the corresponding daytime images (evaluated on the Alderley dataset) in comparison to the baselines. Moreover, a typical semantic segmentation model (pretrained on a typical daytime dataset, i.e. Cityscapes) applied on the translated images produced by the proposed method leads to better segmentation results than when applied on the images generated by the baselines (i.e. indirect evidence showing that the images generated by the proposed method better follow the daytime image distribution on which the semantic segmentation model is trained).
+ The ablation study does demonstrate the contribution of illumination degradation disentanglement for separating the image patches into four different degradation categories.
Weaknesses: - Although experimentally shown to be effective, the mechanism and the basic ideas behind leveraging the illuminance distribution as well as the physical priors for realizing the disentanglement of four degradation categories (i.e. darkness, well-lit, light effects, and high-light) are not well explained, in which the physical meanings of Eq.1 to Eq.11 are hard to understand and follow. Basically, as such illumination degradation disentanglement is the main contribution of the proposed method, the description should be more self-contained and explanatory.
- As the illumination degradation disentanglement plays a key role in the proposed method, it would be great to have the robustness analysis on such disentanglement if possible (i.e. how accurate is the disentanglement, is there any related dataset we could apply such analysis?) and how would the translation model learning be affected once there are erroneous disentanglement?
Technical Quality: 3
Clarity: 2
Questions for Authors: Though the contribution and the novelty of the proposed illumination degradation disentanglement is clearly recognizable from the quantitative and qualitative results, better description upon its mechanism and basic ideas would be much appreciated. Moreover, the further analysis upon its accuracy/robustness would be also better to have.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: no potential negative societal impact is found.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ***Q1: Although experimentally shown to be effective, the mechanism and the basic ideas behind leveraging the illuminance distribution as well as the physical priors for realizing disentanglement of four degradation categories (i.e. darkness, well-lit, light effects, and high-light) are not well explained, in which the physical meanings for Eq.1 to Eq.11 are hard to understand and follow.***
**R1:** Thanks for your valuable feedback.
First, we explain the basic ideas behind leveraging the illuminance distribution and the physical priors to realize the disentanglement of four degradation categories. At nighttime, the intensity of illumination is the most important criterion for separating patterns, and it categorizes nighttime images into three non-overlapping regions: high-light, well-lit, and darkness. However, within well-lit regions, colored illumination still produces complex patterns with similar intensity levels, which require further subdivision. Therefore, we derive an invariant to extract features related to colored illumination and propose categorizing and disentangling patterns into darkness, well-lit, light effects, and high-light.
Second, we apologize for the omission of a key citation, which made it challenging to understand the derivations from Eq (1) to Eq (11). Specifically:
+ Eq (1) is derived from [1].
+ Eqs (2)–(4) present a simplified photometric model tailored for nighttime environments, proposed by us.
+ Eq (5) introduces the invariant from [1], and we derive new characteristics based on our simplified model in Eq (6).
+ Eqs (7)–(10) follow [1] for computing the basic invariant and refining it for disentangling.
+ Eq (11) employs the estimated light effect to refine the mask and extract light effect regions from the original well-lit regions.
We will include these clarifications and the key citation in the revised version to improve understanding.
[1] Color Invariance. J. M. Geusebroek, R. van den Boomgaard, A. W. M. Smeulders, H. Geerts. IEEE TPAMI, 2001.
***Q2: As the illumination degradation disentanglement plays a key role in the proposed method, it would be great to have the robustness analysis on such disentanglement if possible and how would the translation model learning be affected once there are erroneous disentanglement?***
**R2:** We agree that a robustness analysis of the disentanglement is important. Unfortunately, conducting such research is currently infeasible due to the lack of illumination-related annotations, and creating such a dataset is also challenging given the absence of unified measurement criteria in this field. Despite this, we provide a task-oriented evaluation of different disentanglement strategies by comparing final performance in the second subtable of Table 2 in the main paper. This table demonstrates that the four-type degradation disentanglement leads to significant performance improvements over the initial three types, indicating that the proposed four-type disentanglement is empirically more reasonable.
We are committed to advancing this area and will continue to work on robustness analysis, including proposing measurement standards to make the field more complete.
---
Rebuttal 2:
Comment: Dear Reviewer **JHwd**,
Thank you for taking the time to review our submission and for your constructive comments and favorable recommendation. We would like to confirm whether our responses have adequately addressed your earlier concerns. If you have any additional questions or suggestions, we would be happy to address them to further enhance the quality of our paper.
Best regards,
Authors | Rebuttal 1:
Rebuttal: Thank you to all the reviewers for meticulously evaluating our paper.
First of all, we promise to add the missing citation in the revised version and apologize for our oversight. While we employ the color invariant from [1], we are the first to discuss its characteristics in nighttime scenes and identify its potential as a light effect detector to disentangle illumination degradations, both theoretically and empirically.
Next, we will provide a detailed explanation of the motivation behind the four degradation types to address common concerns.
A key observation is that, when performing Night2Day translation, treating all regions equally leads to significant artifacts by mixing patterns from different regions, for example, mixing light-effects patterns with high-light patterns, as shown in Figure 1 of the main paper. Intuitively, separating these patterns and regularizing the structure will help mitigate these artifacts.
Based on this intuition, we sought a disentanglement strategy to separate these patterns. At nighttime, the intensity of illumination is the most important criterion for determining whether patterns are far from each other, which categorizes nighttime images into three non-overlapping regions: high light, well-lit, and darkness. However, within well-lit regions, colored illumination still results in complex patterns with similar intensity levels, which require further subdivision. Therefore, we derive an invariant to extract features related to colored illumination and propose categorizing and disentangling patterns into darkness, well-lit, light effect, and high light to address these challenges.
We agree that this disentanglement could be more precise. However, achieving this precision solely through physical priors is challenging and may require annotations for these degradation types and well-trained segmentation models.
Additionally, we provide a one-page PDF file showing additional experimental results, including:
+ Ablation visualizations with the initial 3-cluster method and the full disentanglement method.
+ Segmentation visualizations of the proposed methods.
Thanks again to all the reviewers.
[1] Color Invariance. J. M. Geusebroek, R. van den Boomgaard, A. W. M. Smeulders, H. Geerts. IEEE TPAMI, 2001.
Pdf: /pdf/4aa95db7361a80bd453bab26d8bd5db334df7f27.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Cryptographic Hardness of Score Estimation | Accept (poster) | Summary: This paper investigates the difficulty of distinguishing a Gaussian pancake distribution from a Gaussian distribution from the perspective of score estimation. The authors show that, by assuming the $L^2$-error of the score estimation is of the order $\log(d)$, there is a polynomial-time algorithm that solves the Gaussian pancake problem. Moreover, they show that for the cryptographically hard regime, there is a polynomial sample complexity $O(d)$ that ensures a small score estimation error for Gaussian pancake problems.
Strengths: The authors introduce a new perspective on the Gaussian pancake problem by presenting the score estimation problem from diffusion models. Specifically, their Theorem 3.1 demonstrates that if the score is estimated accurately, then the Gaussian pancake problem can be solved in polynomial time. This perspective is novel for the Gaussian pancake problem.
Weaknesses: One main selling point, which has been emphasized repeatedly in the paper and is also the title of this paper, is that the score estimation is cryptographically hard. This is confusing to me. In fact, as I understand it, the main theorem states that the Gaussian pancake problem can be solved under certain assumptions, while the hardness seems to state that the score estimation is hard.
If I understand correctly, the hardness arises from the assumption that $\gamma\sigma = O(1)$ in Theorem 4.2, which is known to be hard in existing literature, while the score estimation has a polynomial upper bound. However, I cannot see the hardness from Theorem 4.2. Even though the Gaussian pancake problem is hard, the score estimation is accurate. If we further combine Theorem 4.2 with Theorem 3.1, I guess the authors want to argue that if the score estimation is easy, then the Gaussian pancake problem can be solved. However, as it is already known that the Gaussian pancake problem is hard, the score estimation cannot be easy. But the score estimation error assumption in Theorem 3.1 does not match Theorem 4.2's conclusion. In fact, the $O(\log d)$ assumptions of Theorem 3.1 on the estimation error are smaller than the polynomial $O(d)$ upper bound in Theorem 4.2, which may make the story incomplete.
Please correct me if my understanding is incorrect.
Technical Quality: 3
Clarity: 2
Questions for Authors: please solve my concern in the weaknesses part
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for clearly communicating points of confusion and giving us the opportunity to clarify our results. Before providing any clarifications, we would like to mention that the reviewer’s summary below accurately and succinctly captures the main contribution of our paper.
> *I guess the authors want to argue that if the score estimation is easy, then the Gaussian pancake problem can be solved. However, as it is already known that the Gaussian pancake problem is hard, the score estimation cannot be easy.*
We now address specific questions raised by the reviewer.
> *… as I understand it, the main theorem states that the Gaussian pancake problem can be solved under certain assumptions, while the hardness seems to state that the score estimation is hard.*
We show that the Gaussian pancakes problem can be solved *if* we can solve L2-accurate score estimation for Gaussian pancakes distributions. More precisely, if an **efficient** algorithm existed for estimating the scores of Gaussian pancakes, it would imply the existence of an **efficient** algorithm for solving the Gaussian pancakes problem. This, as the reviewer correctly observed, is a contradiction since the Gaussian pancakes problem is known to be computationally hard.
The qualifier “efficient” is crucial here. Our score estimation algorithm for Gaussian pancakes (presented in Section 4) is **not** efficient; it involves a brute-force search over the d-dimensional unit sphere which requires exponential-in-d time. The polynomial quantity in Theorem 4.2 refers to the **number of samples**, not the **running time** of the estimator.
The main purpose of our inefficient estimator in Section 4 is to highlight a *gap* between what’s statistically possible, with no limits on computation, and what’s computationally feasible in poly(d) time (hence the name, statistical-to-computational *gap*). If a statistical problem is impossible even with infinite computation, then any claim of its computational hardness is vacuous. Section 4 demonstrates that our computational hardness result from Section 3 is not vacuous, as L2-accurate score estimation for Gaussian pancakes can be achieved with exp(d) computation time (and poly(d) samples).
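To make the object of the hardness result concrete, here is a toy sampler for a Gaussian pancakes distribution: standard Gaussian in every direction except a hidden unit direction $u$, along which the mass concentrates near the lattice $(1/\gamma)\mathbb{Z}$ with noise width $\sigma$. The rounding-based construction and parameter names are illustrative assumptions, not the paper's exact definition:

```python
import numpy as np

def sample_gaussian_pancakes(n, d, gamma, sigma, seed=0):
    """Toy sampler: N(0, I_d) except along a hidden direction u, where
    mass concentrates near the lattice (1/gamma)Z with width sigma."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)                       # hidden unit direction
    x = rng.standard_normal((n, d))              # isotropic Gaussian
    # Approximate a discrete Gaussian by rounding a continuous draw to
    # the lattice, then blur with small Gaussian noise of scale sigma.
    z = rng.standard_normal(n)
    pancake = np.round(z * gamma) / gamma + sigma * rng.standard_normal(n)
    x += np.outer(pancake - x @ u, u)            # overwrite u-component
    return x, u

X, u = sample_gaussian_pancakes(n=2000, d=16, gamma=4.0, sigma=0.01)
proj = X @ u
# Distance to the lattice, measured in lattice units (here gamma = 4):
offsets = np.abs(proj * 4.0 - np.round(proj * 4.0))
```

A distinguisher that knows $u$ can simply inspect `offsets`; the hardness lies in finding $u$, since every one-dimensional marginal in a direction far from $u$ looks nearly standard Gaussian.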
> *But the score estimation error assumption in Theorem 3.1 does not match Theorem 4.2's conclusion. In fact, the $O(\log d)$ assumptions of Theorem 3.1 on the estimation error are smaller than the polynomial $O(d)$ upper bound in Theorem 4.2, which may make the story incomplete.*
To provide context, the $\epsilon = O(1/\sqrt{\log d})$ assumption in Theorem 3.1 refers to an upper bound on the **L2 score estimation error** $\epsilon$. Meanwhile, the $n = \mathrm{poly}(d, \gamma, 1/\eta)$ in Theorem 4.2 refers to the **number of samples** sufficient for estimating the Gaussian pancakes secret direction up to L2-error $\eta$. The magnitudes of these two quantities ($\epsilon$ and $n$) are not meant to be directly compared.
Instead, the two quantities are related as follows. For any score estimation algorithm (whether efficient or not) to satisfy the L2 score estimation error assumption $\epsilon = O(1/\sqrt{\log d})$ in Theorem 3.1, it is *statistically sufficient* to choose $n$ to be some large **polynomial in $d$**. In other words, there exists a score estimation algorithm that achieves the L2-error bound $\epsilon = O(1/\sqrt{\log d})$ using a polynomial number of samples. Our Theorem 4.2 provides a **computationally inefficient** score estimator with such guarantees.
We hope this explanation resolves any confusion and respectfully ask if the reviewer would be willing to reconsider our rating in light of this clarification. Thank you again for your time and consideration. | Summary: This paper shows that $L^2$-accurate score estimation, a crucial primitive in the theory of diffusion models and sampling, is computationally hard in the worst-case. The main theorem is a negative result that provides a statistical-computational gap: if computationally efficient $L^2$-accurate score estimation is possible, then one has a computationally efficient algorithm for solving the *Gaussian pancakes problem*, a hard instance under widely believed hardness assumptions from lattice-based cryptography. This negative result is particularly important in relation to Chen et al. (2023), as they showed that access to an $L^2$-accurate score estimation oracle (with some mild additional assumptions) admits an algorithm for sampling from any arbitrary distribution. The computational hardness of such an oracle suggests future directions for research in making stronger assumptions so such an oracle *is* possible (omitting the worst-case instance of Gaussian pancakes) or weaker criteria for understanding when a sample generated from the DDPM process is "good enough."
The main result, Theorem 3.1, is proven mainly by way of using Theorem 2 of Chen et al. (2023), which states that, for any distribution, the output of the DDPM algorithm provides a certificate for Gaussianity. Importantly, this DDPM algorithm assumes access to an $L^2$-accurate score estimation oracle, so by using this certificate of Gaussianity on the particular choice of the Gaussian pancake distribution, we can distinguish just by using the certificate of Gaussianity as a test statistic. Theorem 4.2 also gives the sample complexity of estimating the Gaussian pancake distribution through a brute force search over the possible hidden directions.
Strengths: This paper is very well-written and clear, and its main result, Theorem 3.1, is an interesting addition to the literature on both score estimation in diffusion model theory and the literature on statistical-computational gaps. I am not an expert in either of these fields, particularly not in diffusion model theory, but I was able to mostly follow along with the proof and main statements in the work. However, because of my lack of previous exposure, I would take my words with a grain of salt.
**Originality:** As far as I know, this work is the only one to evaluate the statistical-computational gap of the $L^2$-accurate score estimation oracle in DDPM. The proof's use of Gaussian pancakes as the hard distribution is certainly original and interesting, and I believe that the result itself is an original contribution to both the literature on diffusion models and the learning theory literature on proving computational hardness results for statistical problems.
**Quality:** The work proves a theorem, Theorem 3.1, that is well-motivated and has ample analysis and interpretation to supplement the main claim. As far as my understanding goes, the proofs seem to be correct, though I may have not understood some details in the introduction of the stochastic differential equation defining the diffusion model process.
**Clarity:** The paper is very well-written and clear. Although I am an outsider to the diffusion model theory literature, I was able to roughly follow along with the arguments and the main theorem seemed very well-motivated after Sections 1 and 2. In particular, the authors do a very good job in sketching the implications of their main result in Section 1.3, and I appreciated the clarity of that section for bringing to light the significance of the result.
**Significance:** I am an outsider to the field of diffusion models, so I cannot confidently speak to the significance, but the result does seem very well-motivated and important, as, to my understanding, the score estimation oracle is central to the diffusion model process. Outside of the diffusion model literature, I believe that the general technique of using the Gaussian panacakes distribution as a hard instance for a statistical-computational gap is worth highlighting as an important technique. For this alone, I believe this paper presents a valuable contribution.
Weaknesses: The paper was well-written and I could not find any typographical or clarity issues. To the best of my checking, the theorem seemed correct, and its proof outline provided insight into the argument. I am an outsider to the literature, so I would not be confident in critiquing the paper in terms of weaknesses. As far as I can see, the paper is clear, the theorem is well-motivated, and the argument is correct.
If anything, one might critique that perhaps the work would be more complete with a positive result towards a distribution-specific narrowing of DDPM that avoids the Gaussian pancakes distributions. However, I have no gauge on how hard/standard this might be in the literature, and it seems that $L^2$-accurate score estimation is central enough a primitive in DDPM that this negative result is important in its own right.
Technical Quality: 3
Clarity: 4
Questions for Authors: I have a couple of main questions, but they may stem from my ignorance of the literature:
1. Because the Gaussian pancakes class is hard, I assume that there have been positive results in the light of this computationally hard instance in other problems that exclude this class. What are examples of such distribution-specific guarantees?
2. How does the "weaker criteria for evaluating sample quality" in Section 1.3 relate to the Theorem 4.2? Is there still a statistical-computational gap if we allowed "distinguish" to be a looser requirement?
Confidence: 2
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors have addressed the limitations of the work in the NeurIPS paper checklist. Because this is mainly a theory paper, I do not see it as having other ethical conflicts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback and the detailed review of our paper. The reviewer's summary effectively and accurately captures the essence of our results. We address elements of the review below, which highlight the strengths of our paper and pose insightful questions that deserve further investigation.
> *The paper is very well-written and clear … the main theorem seemed very well-motivated after Sections 1 and 2. In particular, the authors do a very good job in sketching the implications of their main result in Section 1.3, and I appreciated the clarity of that section for bringing to light the significance of the result.*
We are grateful for the reviewer’s appreciation of the motivation behind our work and the clarity of our exposition.
> *… the work would be more complete with a positive result towards a distribution-specific narrowing of DDPM that avoids the Gaussian pancakes distributions. However, I have no gauge on how hard/standard this might be in the literature, and it seems that L2-accurate score estimation is central enough a primitive in DDPM that this negative result is important in its own right.*
We agree that pairing negative results with positive results might provide a more complete understanding of the computational landscape of L2-accurate score estimation. However, as the reviewer noted, we believe that our negative result is significant in its own right. Moreover, we think that positive results warrant separate and careful treatment.
There are various ways of excluding Gaussian pancakes, each leading to an intriguing class of distributions with its own set of technical challenges. Specific examples will be provided when addressing the reviewer’s **Question 1**. However, it’s important to note that each of these distribution classes has been explored in separate papers, each involving non-trivial analyses. Thus, from the perspective of topic homogeneity, it is not clear which classes should be analyzed within this paper and which might be better suited for separate investigation.
> *1. Because the Gaussian pancakes class is hard, I assume that there have been positive results in the light of this computationally hard instance in other problems that exclude this class. What are examples of such distribution-specific guarantees?*
This is a great question. In the context of computationally efficient score estimation, recent works have analyzed mixtures of Gaussians with either a fixed number of components or "well-conditioned" components. In particular, Shah et al. [SCK23] demonstrated a score estimator for mixtures of spherical Gaussians with well-separated means, while Chen et al. [CKS24] provided a score estimator for $k$-mixtures of arbitrary Gaussians, with a running time that depends *exponentially* on $k$ and a certain "condition number" $\tau$ of the mixture components. Note that the running time of Chen et al.’s score estimator is polynomial in the data dimension $d$ if $k$ and $\tau$ are constants with respect to $d$. These distribution classes exclude Gaussian pancakes since the pancakes have degenerate covariances and the number of components $k$ grows with $d$ via the parameter $\gamma$.
Another class of interesting distributions is motivated by non-Gaussian component analysis (NGCA). A prototypical distribution arising in NGCA consists of a low-dimensional non-Gaussian "signal" embedded in high-dimensional Gaussian noise. More precisely, there exists a hidden $k$-dimensional subspace $V$ such that the projection of the distribution onto $V$ is non-Gaussian, while the projection onto its orthogonal complement $V^\perp$ is Gaussian. In standard NGCA settings where both $k$ and the non-Gaussian signal distribution are kept fixed with respect to the dimension $d$, polynomial time estimators for the hidden subspace $V$ are known [TV18, GS19].
Again, Gaussian pancakes distributions are excluded from this class since their non-Gaussian "signal" distribution depends on $d$ via the parameters $\gamma$ and $\sigma$. In standard NGCA the difficulty stems solely from the signal being embedded in higher dimensions. In Gaussian pancakes, the difficulty is twofold: not only is the signal embedded in higher dimensions, but the 1D discrete Gaussian with spacing $1/\gamma$ also becomes increasingly difficult to distinguish from the 1D standard Gaussian as $\gamma$ grows with $d$.
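For concreteness, the structure just described, a hidden direction along which the marginal is a noisy 1D discrete Gaussian with spacing $1/\gamma$ and "pancake" thickness $\sigma$, while the orthogonal complement is standard Gaussian, can be sketched as follows. This is an illustrative sampler only, not the paper's construction; the function name and parameterization are ours.

```python
import numpy as np

def sample_gaussian_pancakes(n, d, gamma, sigma, rng=None):
    """Illustrative sampler for a 'Gaussian pancakes'-style distribution:
    along a hidden unit direction u, the marginal is a noisy 1D discrete
    Gaussian with spacing 1/gamma (pancake thickness ~ sigma); the
    projection onto the orthogonal complement of u is standard Gaussian."""
    rng = np.random.default_rng(rng)
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)                      # hidden direction
    g = rng.standard_normal((n, d))             # ambient Gaussian draws
    # Snap the component along u to multiples of 1/gamma, then add
    # small Gaussian jitter of scale sigma (the pancake thickness).
    t = g @ u
    t_disc = np.round(t * gamma) / gamma + sigma * rng.standard_normal(n)
    x = g + np.outer(t_disc - t, u)             # replace the u-component
    return x, u
```

As the discussion above notes, distinguishing such samples from pure Gaussians becomes hard precisely when $\gamma$ grows with $d$ and $\sigma$ is small.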
> *2. How does the "weaker criteria for evaluating sample quality" in Section 1.3 relate to the Theorem 4.2? Is there still a statistical-computational gap if we allowed "distinguish" to be a looser requirement?*
Another great question. We do not have clear answers, but it would depend on the specific weaker criterion used. For example, if the weaker criterion were "match the first two moments", then we do not anticipate any statistical-computational gap. An algorithm that simply outputs $\mathcal{N}(\hat{\mu}, \hat{\Sigma})$, where $\hat{\mu}, \hat{\Sigma}$ are the empirical mean and covariance, would suffice. Thus, for this nearly trivial criterion, we can bypass L2-accurate score estimation entirely and directly satisfy the weak criterion.
We believe that understanding the computational complexity of learning a generative model that is "indistinguishable" from the data distribution with respect to interesting metrics, such as integral probability metrics induced by various function classes, is an exciting future direction.
**References**
- [CKS24] Sitan Chen, Vasilis Kontonis, Kulin Shah. Learning general Gaussian mixtures with efficient score matching. arXiv preprint, 2024.
- [SCK23] Kulin Shah, Sitan Chen, Adam Klivans. Learning Mixtures of Gaussians Using the DDPM Objective. *NeurIPS* 2023.
- [TV18] Yan Shuo Tan, Roman Vershynin. Polynomial Time and Sample Complexity for Non-Gaussian Component Analysis: Spectral Methods. *COLT* 2018.
- [GS19] Navin Goyal, Abhishek Shetty. Non-Gaussian Component Analysis using Entropy Methods. *STOC* 2019.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for providing such a comprehensive response to my questions! This was an interesting foray into literature that I was not previously exposed to, and I appreciate the authors for engaging. I remain positive in my evaluation of the work, and best of luck to the authors. Thank you for providing an interesting read. | Summary: Without knowing the data distribution, it is computationally hard to estimate the score function from data samples, as in the reverse step of diffusion models. Prior work shows that L^2-accurate score estimation along the forward process enables efficient sampling from arbitrary data distributions. This work, however, shows that L^2-accurate score estimation is still computationally hard even when the sample complexity is polynomial. The statistical-computational gap is exhibited by a set of hard distributions, the "Gaussian pancakes" distributions. The work concludes that computationally efficient L^2-accurate score estimation must rely on stronger assumptions.
Strengths: 1. This work identifies a set of solid future directions at the intersection of diffusion models and computational complexity theory, and can guide fruitful research topics.
2. This work also bridges score estimation with the cryptographic notion of computational indistinguishability, an important property in cryptography, and interprets the hardness from a lattice-based cryptographic perspective.
Weaknesses: This work lacks empirical evidence, especially evidence related to diffusion models.
Technical Quality: 3
Clarity: 3
Questions for Authors: Can you remove one “between” in Line 32?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No limitations have been detected.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback and rating. We are grateful for the reviewer’s appreciation of the significance of our main result as a bridge between score estimation and the cryptographic notion of computational indistinguishability, as well as the future research directions we have proposed in light of our findings. We also appreciate the reviewer pointing out the typo in our submission; it will be corrected in the revised version.
We respectfully disagree with the view that lack of empirical evidence constitutes a weakness in our paper. While empirical evidence is indeed valuable in many areas of research, no amount of empirical evidence can establish computational hardness. Even if a large class of algorithms fails to solve a given problem, there is no guarantee that a different algorithm will not succeed. Instead, computer scientists and cryptographers begin with a few core hardness assumptions and rely on mathematical proofs to provide guarantees about computational hardness.
Given this context, our work demonstrates such fundamental limits through theoretical analysis. Specifically, our paper provides mathematical proofs that establish the hardness of L2-accurate score estimation under standard cryptographic assumptions.
Please let us know if there are any additional comments or questions. Thank you again for your time and consideration. | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
HairDiffusion: Vivid Multi-Colored Hair Editing via Latent Diffusion | Accept (poster) | Summary: This submission presents an innovative 2D hairstyle editing pipeline leveraging latent diffusion models (LDM). The key component of this method is the Multi-Stage Hairstyle Blend (MHB), which facilitates separate control over hairstyle and hair color. By integrating structural information from non-hair regions such as facial masks and keypoints, the method effectively preserves non-hair attributes in the editing results. Additionally, a hair warping module enables natural transfer of hair color from the source image to the generated target image. Extensive experiments on public datasets like CelebA-HQ demonstrate improved results compared to previous state-of-the-art methods.
Strengths: - Clarity and Writing: The paper is well-written and relatively easy to understand.
- Technical Contributions: The submission presents a comprehensive system with solid and relatively novel technical contributions.
* Hair Warping Module: The inclusion of a warping module that transfers the color pattern from the source hairstyle to the target hairstyle is notable.
* Multi-Stage Hairstyle Blend (MHB) Module: This module is effective in preserving non-hair attributes in the generated image.
- Extensive Experiments:
- The method produces more natural hair editing results compared to previous approaches, maintaining consistent hair color with the input text/image (control source) and better preserving non-hair regions.
- A user study validates the human preference for the presented method over other methods.
- The ablation study includes visualizations demonstrating the unique contributions of each module/control signal to the final results, along with detailed discussions.
Weaknesses: - More discussions on the failure cases: Most results are shown with a near-frontal head pose. Including and discussing more results with non-frontal head poses would enhance the completeness and understanding of the work. Additionally, as mentioned in the limitation section, the presented method might not work well with the color transfer between different hairstyles. It would also be great if more failure cases are included on that end.
- Manuscript Quality: There are some typos and missing text. A thorough proofreading is necessary to refine the manuscript.
Technical Quality: 3
Clarity: 2
Questions for Authors: See weakness.
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your suggestions. We have included additional failure cases in Figure 1 of the PDF, where we comprehensively demonstrate the effects of multi-view hair color transfer and discuss the reasons for poor performance.
In Figure 1 of the PDF, we have added discussions on the limitations of the warping module in extreme cases of hair color transfer, including significant pose differences, complex textures, and large discrepancies in hairstyle regions. To visually illustrate the limitations of the warping module method, the warped results in the figure have not undergone the post-processing mentioned in the paper.
The first and second rows demonstrate cases with significant differences in hair length: while the hairline region aligns well, the hair ends do not. However, the second row on the left, where hair lengths are similar, performs well; this is related to not randomly removing hair-end information when generating the paired hair dataset. The third, fourth, and fifth rows show cases with significant pose differences. The warped result on the right side of the second row shows that the hair orientation aligns well with the facial orientation of the Source Image, indicating some robustness of the model across different poses. However, the right side of the second row and the third row still show misalignment in the hair parting, leading to color inconsistencies in the Output. The left side of the fourth row shows the model discarding the hair ends while supplementing the bangs area of the Source Image, though there is still some deviation in the centerline of the hair color. The last row depicts a complex-texture scenario where the Output hair color does not match the Target Image, due to the image compression required for diffusion input and the bilateral filtering operation removing high-frequency color details.
For cases with missing target region hair color, post-processing with PatchMatch can fill in the blank areas, as shown in Figure 4. The effectiveness is demonstrated in the ablation experiments in Table 1.
We will correct typos and missing text in the future version. Thank you for your assistance!
---
Rebuttal 2:
Comment: Due to the upcoming deadline for the discussion, I would like to confirm if you have any further questions or if there are areas where you need further clarification. If there are no more issues with my submission, I hope to receive your feedback and proceed.
Thank you very much for taking the time to review my work amidst your busy schedule. I greatly value your opinions and hope to improve my research under your guidance.
Thank you for your assistance, and I look forward to your reply. | Summary: This paper presents a new framework for hair editing tasks, which includes editing hair color and hairstyle using text descriptions, reference images, and stroke maps. The proposed approach leverages Latent Diffusion Models (LDMs) and introduces the Multi-stage Hairstyle Blend (MHB) technique to effectively separate the control of hair color and hairstyle. Additionally, the method incorporates a warping module to align hair color with the target region, enhancing multi-color hairstyle editing. The approach is evaluated through extensive experiments and user studies, demonstrating its superiority in editing multi-color hairstyles while preserving facial attributes.
Strengths: The proposed method demonstrates impressive performance in multi-colored hair editing, showcasing the ability to handle complex hair color structures while preserving facial attributes effectively. The integration of Latent Diffusion Models (LDMs) and the Multi-stage Hairstyle Blend (MHB) technique provides a novel approach to decoupling hair color and hairstyle, enhancing the quality of the edited images.
Weaknesses: 1. The overall pipeline primarily consists of two stages:
Altering the Hairstyle: This involves using a combination of ControlNet and diffusion models with a hair-agnostic mask, 2D body pose keypoints, prompts, and a reference image as conditions. These modules are commonly found in previous methods.
Editing the Hair Color: This stage blends information between the hair mask and source image, along with the Canny image of the stylized image and a prompt, to achieve the final result. The main contribution lies in the Multi-stage Hairstyle Blend (MHB) method, which warps the reference hair to the source image to better maintain the hairstyle. However, warping modules are also commonly used in previous methods [4, 21].
The overall technical contribution does not meet the bar for this conference.
2. The hair structure of the source image is not well-maintained during color editing. As demonstrated in Fig. 5, fourth row, the strand direction of the hairline in the generated image is different from the input image. The results of HairClipv2 better preserve the curliness structure of the original input hair. Additionally, the generated image in the third row of Fig. 5 shows noticeable brightness differences and a distinct color discrepancy compared to the input image.
3. The paper's writing lacks clarity. It would be beneficial to specify the difference between \( I_c \) and \( I_i \) in Fig. 2, within the caption of this figure. Additionally, using \( I_i \) to denote the style proxy creates confusion.
Technical Quality: 3
Clarity: 2
Questions for Authors: Please check Weaknesses.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['Ethics review needed: Research involving human subjects']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you to the reviewers for pointing out the weaknesses.
**W1:** This study is the first to propose a diffusion-based pipeline in the hair editing field, addressing the issue of **multi-color hair structure** in **text2img** and **img2img** scenarios. Previously, no methods specifically focused on the transfer of multi-color hairstyles, and our visual effects, whether in text2img or img2img, are among the most effective to date. We introduced the Warping module into the hair editing domain to align hair colors, resulting in a Color Proxy that enables the model to faithfully align the target hair color while maintaining the hairstyle structure. To achieve color alignment through the warping module, we undertook the following work:
1. Addressing **the lack of paired datasets**: we used semantic segmentation and data augmentation to obtain the hair region deviating from the original face.
2. **Preserving hairstyle structure** by removing high-frequency details: bilateral filtering was employed to smooth away high-frequency color details, enhancing the quality of hairstyle generation. Overall, the Warping Module can provide guidance for future researchers applying it in the hair editing field.
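For readers unfamiliar with the operation, bilateral filtering smooths fine color detail while preserving edges, because each pixel's weights fall off with both spatial distance and intensity difference. A minimal single-channel sketch (illustrative only, not the warping module's actual implementation):

```python
import numpy as np

def bilateral_filter_1ch(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Minimal single-channel bilateral filter: each output pixel is a
    weighted mean of its neighborhood, with weights that decay with both
    spatial distance (sigma_s) and intensity difference (sigma_r), so
    smooth regions are averaged while sharp edges are preserved."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rang = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma_r**2))
            wgt = spatial * rang
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out
```

A step edge survives the filter almost unchanged, while low-amplitude texture on either side is averaged out.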
Our proposed hair-agnostic mask is also crucial for enabling hair editing tasks using diffusion methods, as it maximizes the retention of hairstyle-independent attributes while preserving the editability of the hairstyle region. Although our pipeline incorporates components from existing technologies, we focus on integrating and optimizing these techniques to address specific issues in practical applications. For instance, how to blend hair color and hairstyle to achieve balance while reducing artifacts, filling blank areas without a hairstyle, and improving the quality of generated hairstyles. Additionally, how to achieve more flexible editing by decoupling hair color and hairstyle through multiple steps are all considerations for practical application.
**W2:** **1. Color discrepancy in facial images**
The issue of color discrepancy outside the inpainting region is a common challenge in diffusion-based inpainting tasks. For example, in the ControlNet-Inpainting model used in our paper, the initial convolution layer of the UNet architecture merges the noisy latent, masked image latent, and mask, all of which are collectively influenced by the text embedding. Consequently, subsequent layers in the UNet model struggle to obtain pure masked image features due to the text's influence. The paper "BrushNet: A Plug-and-Play Image Inpainting Model with Decomposed Dual-Branch Diffusion" proposed a blending operation to address this issue. It presents a simple pixel space solution by first blurring the mask and then performing copy-and-paste using the blurred mask. It is important to emphasize that this is a common challenge for diffusion inpainting and is beyond the scope of this paper.
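The blurred-mask copy-and-paste described above can be sketched in pixel space as follows. This is a minimal illustration, not BrushNet's actual code; a simple box blur stands in for the Gaussian blur, and all names are ours.

```python
import numpy as np

def box_blur(mask, radius=2):
    """Simple box blur used to soften a binary mask (a stand-in for the
    Gaussian blur in the blending trick described above)."""
    pad = np.pad(mask.astype(float), radius, mode="edge")
    k = 2 * radius + 1
    out = np.zeros(mask.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out / (k * k)

def blend_with_blurred_mask(generated, source, mask, radius=2):
    """Pixel-space copy-and-paste: keep the generated image inside the
    (softened) edit region and the source image outside it."""
    m = box_blur(mask, radius)[..., None]   # soft mask in [0, 1]
    return m * generated + (1.0 - m) * source
```

Pixels deep inside or far outside the mask are copied verbatim; only a thin band around the mask boundary is actually mixed, which hides color seams.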
**2. Preserving hairstyle details**
There are indeed slight changes in the generated hairstyle structure compared to the original, but the overall structural control is good. HairCLIPv2 does not faithfully reproduce the target image's hair color at all. This paper primarily focuses on multi-color transfer. ControlNet, as a sparse control matrix, relies on canny maps for hairstyle structure retention. However, Canny maps cannot achieve pixel-level control, resulting in minor differences between the generated and original hairstyles. These differences do not affect the overall hairstyle structure. Training a more refined canny map with a hairstyle dataset could potentially improve the quality of hairstyle retention, which is a direction for future improvements. As shown in Figure 3 and Table 2 of the supplementary PDF, we supplemented qualitative and quantitative experiments to demonstrate our method's ability to preserve hairstyle details.
**W3:** When the target hairstyle is the one generated by the Hairstyle Editing Stage, I_c is P^s; when the original hairstyle is retained, I_c is I_i. This aspect was not clearly explained. The style proxy is intended to guide images with the target hairstyle, as mentioned in the introduction. Therefore, when only changing the hair color while retaining the original hairstyle, P^s in Figure 2 is replaced by I_i. We will clarify this in future versions. Thank you for your help!
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal and the detailed explanations provided. While I appreciate the clarifications, I still have concerns regarding the overall novelty of the work. Specifically, the points raised about 'Addressing the lack of paired datasets' and 'Preserving hairstyle structure' do not seem to significantly advance the state of the art.
Moreover, in your rebuttal, you mentioned that 'color discrepancy outside the inpainting region is a common challenge in diffusion-based inpainting tasks.' However, in Figure 4, the claim that 'Our approach shows better preservation of irrelevant attributes' appears somewhat overstated given the context. This discrepancy raises questions about the extent of improvement your method offers.
Given these concerns, I have decided to maintain my original rating. That said, I am open to reconsideration if other reviewers strongly advocate for the acceptance of this paper.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for taking the time to review my work amidst your busy schedule.
**Innovativeness and Effectiveness** : Our approach has achieved notable results in the novel task of multi-color hair transfer. Previously, no method applied the warp model to hair color transfer. We believe that if existing methods in the hairstyle domain had utilized the warp module, it would be necessary to compare our approach with them and demonstrate that our method surpasses them in metrics. However, as this is our first introduction of this module, we have validated the effectiveness of our method through the following:
- Ablation Study (pdf Table 1): We have conducted ablation studies to verify the effectiveness of each step and the integration of the warp module.
- Evaluation Metrics (pdf Table 2): As shown in Table 2 of the supplementary materials, our method, based on a general hair editing self-transfer approach, outperforms previous methods in the task of multi-color hair transfer. While our metrics in the task of swapping monochrome hairstyles are not the best, they are comparable to earlier methods. Considering our focus on the multi-color task, we believe this is justified.
**Preservation of Facial Details and Irrelevant Attributes:** We have not overstated our effectiveness in preserving irrelevant attributes. In fact, in most cases, our method excels in maintaining facial details, accessories, clothing, and background. Although there are some changes in skin tone on the left side of Figure 4, the retention of facial details and the background is excellent. For example, in Figure 5 of the paper, despite changes in skin tone, the clothes of the child behind are preserved much better than in the text2img comparison method. Therefore, from the overall quantitative metrics and the preservation of irrelevant attributes, our method is superior to the comparison methods. We are willing to provide additional image examples to further demonstrate our preservation of details in the background, clothing, and other aspects.
I greatly value your opinions and hope to improve my research under your guidance.
Thank you for your assistance.
---
Rebuttal 2:
Comment: Due to the upcoming deadline for the discussion, I would like to confirm if you have any further questions or if there are areas where you need further clarification. If there are no more issues with my submission, I hope to receive your feedback and proceed.
Thank you very much for taking the time to review my work amidst your busy schedule. I greatly value your opinions and hope to improve my research under your guidance.
Thank you for your assistance, and I look forward to your reply. | Summary: The paper introduces an approach for hair editing using Latent Diffusion Models (LDMs). A warping module ensures precise alignment with the target hair mask and enables hair color structure editing using reference images. The proposed Multi-stage Hairstyle Blend (MHB) method within LDMs decouples hair color and hairstyle. The authors demonstrate the performance of their method against GAN-based approaches through qualitative and quantitative evaluations.
Strengths: The paper introduces a warping module ensuring precise alignment of the target hair mask. This helps in handling some small mismatches between the images.
The method separates hair color from hairstyle. This allows flexibility and control over the hair transfer task.
The paper also shows results of text-based hairstyle editing, reference image-based hair color editing, and claims to preserve facial attributes.
Weaknesses: 1) Most of the examples shown in the results are front-facing subjects in the case of the hairstyle transfer. In real-world applications, there are cases where the reference and source images can have diverse/different poses. The lack of such results raises the question if this is a limitation of the method. Such problems are addressed in the paper HairNet[1].
2) A limitation of some GAN-based approaches compared in the paper is that to achieve high-fidelity reconstruction, the images are overfitted into the networks. As such most of these methods do not support further edits, for example shortening the transferred hair, making it wavy or curly or changing the pose of the subject to view the subject from a different angle. The current method does not seem to solve this problem either. What are the advantages of these GAN frameworks? Some of these methods support further editing as well. (check video associated with HairNet). https://www.youtube.com/watch?v=WBB43cgCFZM&t=153s
3) Some of the GAN based approaches can also control the degree of hairstyle transfer (Barbershop). Does this method control this efficiently?
[1] Zhu, Peihao, et al. "Hairnet: Hairstyle transfer with pose changes." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1) Most examples in the results are front-facing subjects. How does the method perform when the reference and source images have diverse or significantly different poses?
2) GAN-based approaches often overfit images for high-fidelity reconstruction, limiting further edits. How does the current method handle this issue?
3) Can your method support additional edits post-hairstyle transfer, such as shortening hair, making it wavy or curly, or changing the subject's pose to view from different angles?
4) What specific advantages does the current method offer over existing GAN frameworks, especially regarding editability and handling diverse poses?
Also, check the weakness section.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have discussed the limitations.
Flag For Ethics Review: ['Ethics review needed: Research involving human subjects']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you to the reviewers for pointing out the weaknesses.
1. We have reviewed the HairNet video and paper. Transferring hair across diverse/different poses is not the problem addressed in our paper. HairNet can effectively control different facial angles, which would be highly beneficial for our method in achieving multi-angle, multi-color hair transfer in the future; we will cite this work in our revised version. However, it is challenging for GAN-based methods to excel in both editability and retention of original features. In Figure 5 of our paper, we compare various space-mapping methods based on StyleGAN (including Barbershop, based on FS space), and it is evident that they struggle to reconstruct niche features such as multi-color hairstyles. In the global response, we present examples of our Warping module handling cases with significantly different poses.
2. **Advantages of GAN-based methods**:
1. The speed of image generation is very fast. Benefiting from the decoupled facial features in StyleGAN, it is almost possible to change certain feature vectors and quickly generate the corresponding images.
2. They retain editability, allowing operations in the latent space, where feature vectors in the StyleGAN latent space can be edited to control different features such as hair length and face orientation.
3. Barbershop cannot control multi-color hair transfer. Although the optimization-based FS space can retain facial feature details, it is difficult to decouple hair color from hair color structure, because these two features are highly coupled (Figure 5 in the paper compares against Barbershop).
Below are the responses to the questions:
**A1:** The global response section G1 discussion on the Warping module's ability to transfer hair color under extreme conditions, including significant posture differences, complex textures, and large differences in hair regions.
**A2:** Diffusion models generate images through a multi-step denoising process, from pure noise to clear images. This allows the model to capture image details and semantic information at different levels during generation, rather than producing the entire image at once. This gradual generation helps avoid overfitting, as each step refines and adjusts a different level of the image, and it makes diffusion models more flexible and controllable across editing tasks.

Our paper adopts several common diffusion inpainting techniques to ensure the masked region remains unchanged. For example, we input the Hair-Agnostic Mask and Source Image together into the model, allowing it to distinguish between known and unknown areas and fill in only the unknown areas. During generation, noise can be injected only into the masked areas while the unmasked areas are left untouched, so the unmasked areas always retain their original pixel values during the diffusion process.

For the hairstyle region to be edited, we remove information from the original image other than the face, use sparse ControlNet signals (OpenPose) and textual information to control the generation of the StyleProxy, and then obtain the hairstyle-region mask of the StyleProxy. As described above, we input these together into the model, thereby preserving both the editability of the hairstyle and the high fidelity of the unrelated regions.
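The known-region preservation scheme sketched above (overwriting pixels outside the edit mask with the appropriately re-noised source at every denoising step, as in RePaint-style inpainting) can be illustrated schematically. This is a toy, not the paper's pipeline: `denoise_step` is a hypothetical stand-in for the diffusion model's one-step denoiser, and `noise_level` is an assumed per-step noise schedule whose final entry is zero.

```python
import numpy as np

def masked_denoise(x_T, source, mask, denoise_step, noise_level, rng=None):
    """Schematic inpainting loop: after every denoising step, pixels
    outside the edit mask (mask == 0) are overwritten with the source
    image re-noised to the current noise level, so known regions are
    never altered by generation. `denoise_step(x, t)` is a stand-in for
    the model's one-step denoiser."""
    rng = np.random.default_rng(rng)
    x = x_T
    T = len(noise_level)
    for t in range(T - 1, -1, -1):
        x = denoise_step(x, t)
        noised_src = source + noise_level[t] * rng.standard_normal(source.shape)
        x = mask * x + (1.0 - mask) * noised_src   # keep the known region
    return x
```

Because `noise_level[0]` is zero, the final overwrite restores the known region to the source pixels exactly, while the masked region retains only generated content.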
**A3:** Our method does not support additional edits post-hairstyle transfer. Using StyleGAN, direct operations in the latent space can achieve quick and direct feature editing. However, the process of generating images via Diffusion is a multi-step gradual denoising process, where the latent space is not directly editable but involves a series of denoising steps to achieve image generation. Therefore, direct editing in the latent space is more challenging. The generation process of diffusion models requires multiple iterative steps, making real-time editing as in StyleGAN's HairNet impractical. Although some degree of editing can be achieved through conditional generation, such editing usually needs pre-set conditions before generation, making it difficult to achieve real-time, dynamic feature adjustments.
**A4:**
- **Editability**:
1. For text-based hairstyle editing, compared to the current HairCLIP and HairCLIPv2 based on the combination of CLIP and StyleGAN, their editing capabilities are limited by the dimensions of the StyleGAN latent space and the GAN training process. They cannot capture niche and complex facial features or information outside the face, such as background and arms. In contrast, diffusion models can combine text conditions multiple times during the generation process, ensuring that each step of image generation aligns with the text description. This method better captures the details and complexity in the text description.
2. Our method can integrate multiple conditions for additional control, such as stroke maps to control hair color, ControlNet to introduce Canny maps to control hair structure, and DensePose maps to ensure the generated hairstyle aligns with facial posture. These capabilities significantly enhance the editability of our method.
- **Handling pose diversity**: Compared to methods specifically addressing multi-pose hair transfer, our paper focuses more on color transfer while maintaining the original hairstyle. The warping module that aligns the target hair color and region with different facial poses is a conditional GAN method, as it is currently challenging to decouple hairstyle and hair color directly in the latent space of diffusion models. There are also related articles exploring multi-view generation in diffusion models, such as DiffPortrait3D, which achieves multi-view facial effects.

---
Summary: This paper presents a novel approach called HairDiffusion for editing hair in images using latent diffusion models. The main contributions of the work are:
1. Introduction of the Multi-stage Hairstyle Blend (MHB) method for effectively separating control over hair color and hairstyle in the latent space of the diffusion model. MHB divides the diffusion process into two stages, allowing for precise guidance of hair color generation and context-aware generation of the rest of the image.
2. Development of a warping module to align hair color with the target area. This module adapts the HR-VITON architecture, using DensePose and segmentation maps to account for facial poses.
3. Utilization of hair-agnostic masks to transform the hair editing task into an inpainting task. The authors developed two types of masks for different editing stages, which effectively preserve necessary information and remove unnecessary information.
4. Fine-tuning of the CLIP model on a dataset with multi-colored hairstyles to improve editing of complex hair color structures. The authors applied data augmentation to increase pattern diversity.
The authors demonstrate that their method outperforms existing approaches in hairstyle and hair color editing tasks, especially for complex multi-colored styles. The method allows for editing hairstyle and hair color separately or together, using textual descriptions or reference images. Experiments show that HairDiffusion better preserves relevant image attributes (e.g., facial features, background) compared to existing methods.
The integration of these components into a single pipeline enables HairDiffusion to work effectively with both simple and complex multi-colored hairstyles while preserving other image attributes.
Strengths: A major strength of this method is the quality of complex hair color transfer, as well as the preservation of hair texture, which other methods typically don't pay much attention to. The approach demonstrates exceptional ability in handling multi-colored hairstyles. Additionally, the method's capacity to preserve relevant attributes such as facial features and background elements sets it apart from existing techniques. The flexibility to edit hairstyle and color separately or together, using either text or reference images, adds to its versatility and practical applicability.
Weaknesses: The paper exhibits several weaknesses. Firstly, it employs confusing notations, some of which are not properly introduced in the text. Secondly, the ablation study lacks clarity and sufficient detail. There is also a notable absence of comprehensive information regarding the method's limitations. For instance, the paper fails to provide examples demonstrating how the style transfer functions when transferring from long to short hair, how it handles cases with significant pose differences, or how it manages complex textures and hairstyles. Moreover, there is insufficient information about the limitations of the warping module. All presented images showcasing this module display identical hair shapes, suggesting that the warping module's functionality is not truly tested. This makes it impossible to accurately evaluate the quality of its performance.
Furthermore, the paper lacks sufficient information for its reproduction, as it wasn't explained how exactly and on what data the CLIP model was fine-tuned. In general, there is very little information throughout the work about additional datasets, where and how the data was scraped, and about hyperparameters.
The user study is very poorly described; there is no information anywhere about what exactly the questions looked like and in which domains these experiments were conducted. There are also questions about the statistical significance of the results.
The scientific novelty of this work is questionable. As demonstrated in Figure 3, the Control-SD method produces hairstyles that are virtually identical to those generated by HairDiffusion. This similarity suggests that the main contribution of the work is specifically in the transfer of hair color rather than the creation of the hairstyle. The color transfer component primarily consists of two stages: the warping module (which is based on HR-VITON, published in 2022) and the Multi-stage Hairstyle Blend (MHB) method (which utilizes ControlNet and simple blending techniques). Given that these core components are largely derived from existing methods, the overall originality of the approach is limited.
Overall, the paper suffers from a lack of comprehensive evaluation. It presents a limited range of visual comparisons and metrics, which hinders a comprehensive evaluation of the method's performance. Specifically, it lacks crucial metrics such as realism after editing (FID), realism with pose differences (FID), and full-image reconstruction metrics (PSNR, SSIM, LPIPS, FID, RMSE) as used in [1]. Moreover, the authors have not included comparisons with recent relevant works like StyleGANSalon [1] and HairFastGAN [2]. This omission of key comparisons and metrics significantly limits the reader's ability to fully assess the performance of the proposed method in relation to the current state-of-the-art approaches.
[1] Sasikarn Khwanmuang, Pakkapon Phongthawee, Patsorn Sangkloy, Supasorn Suwajanakorn. StyleGAN Salon: Multi-View Latent Optimization for Pose-Invariant Hairstyle Transfer. arXiv preprint arXiv:2304.02744, 2023.
[2] Maxim Nikolaev, Mikhail Kuznetsov, Dmitry Vetrov, Aibek Alanov. HairFastGAN: Realistic and Robust Hair Transfer with a Fast Encoder-Based Approach. arXiv preprint arXiv:2404.01094, 2024.
Technical Quality: 1
Clarity: 2
Questions for Authors: 1. As a strong point of your method, you indicate excellent quality in preserving facial details, but you compare it with HairCLIP v2, which works in a relatively low-dimensional FS space in StyleGAN. Could you provide more examples of facial detail preservation on very complex images in comparison with methods like StyleGANSalon and HairFastGAN?
2. Could you provide more examples of the limitations of your method? We would like to see how the method works with large pose differences, complex textures, and very different hair shapes in the image domain. With these images, we would like to understand how well the concept preservation works in the CLIP space, how well the warp module works, and how well the entire method functions overall.
3. Could you provide information on how you collected the additional dataset with hair colors?
Confidence: 5
Soundness: 1
Presentation: 2
Contribution: 1
Limitations: The paper does not adequately address the limitations of the proposed method. The authors dedicate only two brief paragraphs to limitations, and these are relegated to the supplementary materials. This treatment is insufficient for a comprehensive understanding of the method's constraints. A more appropriate approach would be to include a detailed discussion of limitations in the main body of the paper. Furthermore, the authors should present visual examples demonstrating scenarios where each module, as well as the overall method, underperforms. These examples should be accompanied by in-depth analysis to provide insights into the reasons for these limitations and potential avenues for future improvements. Such transparency would significantly enhance the paper's scientific value and reproducibility.
Flag For Ethics Review: ['Ethics review needed: Data privacy, copyright, and consent']
Rating: 4
Code Of Conduct: Yes

---
Rebuttal 1:
Rebuttal: Thank you for your valuable feedback.
1. We will correct the typos and missing text, and supplement the fine-tuning hyperparameter settings in a future version.
2. We have shown the limitations of our method's dependency on masks in the supplementary materials. We have also supplemented the discussion of limitations and added ablation experiments demonstrating the effectiveness of the warping module (Figure 1, Table 2).
3. The results of the ablation experiments in our paper clearly demonstrate the considerations at each step of our pipeline, and the reasons have been analyzed.
4. Our user study strictly follows the protocol of HairCLIPv2.
5. We have supplemented the experiments with HairFastGAN metrics and detailed visual comparisons with HairFastGAN and StyleGANSalon in the global response sections G3, G4 and G5. The following outlines reasons for not making certain comparisons:
- Quantitative comparisons with StyleGANSalon: StyleGANSalon is based on EG3D pre-trained on the FFHQ dataset for pose transfer, and EG3D does not provide pre-trained models on CelebA-HQ. Hence, we do not conduct quantitative metric comparisons with StyleGANSalon. Moreover, StyleGANSalon performs comparisons on two subsets of the FFHQ dataset without disclosing the partitioning method.
- Quantitative self-transfer comparisons with StyleGANSalon: Unlike HairFastGAN and our method, which can decouple and transfer hair color and hairstyle, StyleGANSalon transfers the entire hairstyle. Thus, we believe that comparisons would be unfair.
- Qualitative color transfer comparisons with StyleGANSalon: StyleGANSalon transfers the entire hairstyle and cannot decouple and transfer hair color independently, making hair color transfer visual comparisons infeasible.
Regarding the concern that "the Control-SD method produces hairstyles that are virtually identical to those generated by HairDiffusion": The advantage of our method over diffusion-based text2img lies in multi-color text control and the maintenance of the original hair color when only using text to control the hairstyle.
Our work is the first to introduce a warping module for color alignment in the field of hairstyle editing, differing from previous StyleGAN-based methods. This approach faithfully aligns with the target hair color without being constrained by the limitations of the StyleGAN latent space. To achieve color alignment through the warping module, we undertook the following steps:
1. Addressing the lack of paired datasets: We used semantic segmentation and data augmentation to obtain hair regions deviating from the original face.
2. Preserving hairstyle details: We employed bilateral filtering to eliminate low-frequency details, improving the quality of hairstyle generation.
Overall, our approach can guide future researchers in applying the warping module to hairstyle editing.
The hair-agnostic masks we propose are crucial for enabling hairstyle editing tasks with diffusion methods. They maximize the preservation of hairstyle-independent attributes while retaining the editability of the hairstyle region. Although our pipeline borrows certain components from existing techniques, we focus on integrating and optimizing these techniques to address specific issues in practical applications. For example, achieving a balance between hair color and hairstyle blending while minimizing artifacts, filling in blank regions without hair, improving the quality of generated hairstyles, and achieving more flexible multi-condition editing through the multi-step decoupling of hairstyle and hair color are all practical considerations.
Responses to specific questions are as follows:
**A1:** In the global response sections G3, G4 and G5, we have supplemented experiments with HairFastGAN and StyleGANSalon, including detailed visual comparisons in self-transfer methods with HairFastGAN and StyleGANSalon.
As shown in Figure 3, our method demonstrates better visual effects in preserving multi-color hairstyles and hairstyle details under facial detail attributes and occlusion conditions. Since whole hair transfer is not the focus of our work and StyleGANSalon is a pose transfer model pre-trained on EG3D with the FFHQ dataset, we do not conduct quantitative metric comparisons with StyleGANSalon.
**A2:**
- 1. **Limitations of the Warping Module**: We have supplemented discussions on the limitations of the warping module in extreme cases of hair color transfer in Figure 1 of the supplementary PDF, including significant pose differences, complex textures, and large differences in hairstyle regions.
- 2. **Effectiveness of the Warping Module**: We have added an ablation experiment where 2000 images from the CelebA-HQ validation set are divided into source images and target hair color images for mutual color transfer experiments. The results are shown in Table 1 of the supplementary PDF.
As for Effectiveness of CLIP Fine-tuning, the results are shown in section G6 of the supplementary PDF.
**A3:**
Dataset details:
- Number of images: 4,625
- Average size: 224.93 × 264.25
- Categories:
  - Hair colors: purple, red, green, blue, brown, silver, blonde, pink, burgundy
  - Color structures: ombre, dip dyed, streaked, half and half, split color
Data acquisition: We crawled tens of thousands of multi-color hair images using keywords, cleaned the data, and obtained approximately six thousand multi-color hair images. These images were input into GPT with predefined hair color and structure categories to generate text annotations like "a photo of {colors}, {color structure} hairstyle." We then had 10 professional annotators rate the annotations in multiple rounds, discarding low-rated images, resulting in 4,625 multi-color text-image pairs. If the paper is accepted, we will make this dataset publicly available.
---
Rebuttal Comment 1.1:
Comment: Thank you for your responses. However, there are still some concerns regarding the limitations of your method that were not fully addressed in the rebuttal.
Regarding your answer A1, you demonstrate an improved reconstruction by referencing Table 2 and Figure 3. While this shows that your method outperforms in reconstruction tasks, it was already evident that Stable Diffusion's reconstruction is superior to StyleGAN's. However, Table 2 indicates that your method underperforms significantly compared to StyleGAN-based methods in single color transfer. It would be beneficial to explain why this occurs and provide examples to illustrate this point. Additionally, the impact of such transfers on facial details remains unclear and should be elaborated upon.
Your answer A2 did not directly address my original question. I inquired about how your method performs when transferring hairstyles from the image domain and its general limitations. Instead, you focused on the limitations of your method when recoloring hair.
In summary, the current presentation of the work lacks transparency, and there are several issues that need to be addressed. If the final paper is permitted to incorporate these important responses from the rebuttal, I believe it would be suitable for publication. Additionally, given the recent publication of the Stable-Hair paper [1], it would be valuable to include a comparison with this work in the future version of your paper.
[1] Yuxuan Zhang, Qing Zhang, Yiren Song, Jiaming Liu, Stable-Hair: Real-World Hair Transfer via Diffusion Model, arXiv:2407.14078
---
Reply to Comment 1.1.1:
Title: A discussion of the challenges and limitations of the current method in hairstyle transfer, color alignment, and retention of facial details, with suggested future improvements and comparisons with other approaches.
Comment: Thank you for your suggestions, which have been very helpful in improving our paper.
Discussion on the Performance Compared to StyleGAN-Based Methods
1. Reasons for Subpar Performance Compared to StyleGAN-Based Methods:
- In cases where hairstyles differ significantly, our color alignment method, which is designed to generate colors more consistent with adjacent hues, may produce colors that deviate from the original hairstyle color. This discrepancy can lead to inconsistencies in the generated results.
- The sparse matrix control provided by ControlNet's Canny map does not offer pixel-to-pixel control over hairstyle features. Alternatively, we could attempt to use a step-by-step approach that mixes multiple ControlNet models, allowing the model to focus on additional information.
- When editing hairstyles based on reference images while retaining Diffusion text editing, setting the text to "hair" by default may introduce additional information that degrades the quality of the generated results. We could try separating the control of the text and ControlNet to balance the control over hairstyle structure and color in the future.
We will include examples of these issues in future versions.
Impact on Facial Details
2. Transfers and Facial Details:
- As illustrated in Figure 3, StyleGAN-based methods show suboptimal retention of accessories (G, M, O) and unique facial features (I) when the training dataset contains few or no samples of these details. The preservation of multi-color hair (N, Q) is compromised because HairFastGAN's approach of decoupling hair color and hairstyle in the feature space makes it challenging to maintain the color structure. StyleGAN Salon employs post-processing enhancements that improve results for multi-color hair to some extent, but retaining hairstyle details remains challenging. Hairstyle preservation (A, B, C, E, T, S, R, N) suffers from the lack of masks to retain irrelevant information. Additionally, maintaining the background (P) and hand preservation (F, K, L, W) in cases where the hand obscures the hairstyle is also difficult without optimization. StyleGAN-based methods struggle to preserve detail for accessories or hands, particularly when the training dataset contains few examples of such features.
- Conversely, the ControlNet Canny map, which provides line-based control, shows better performance in preserving hairstyle flow and individual hair strands.
Discussion on Hairstyle Transfer Limitations
3. Limitations of Hairstyle Transfer:
- Hairstyle transfer is not the primary focus of our work, but it is important to address the method's shortcomings. We use a straightforward approach of converting hairstyles into text vectors, which often fails to capture the complete details and local features of hairstyles, leading to noticeable discrepancies between the generated and original hairstyles. Recent studies suggest using encoders trained on face datasets to better capture facial details, which is a promising direction for future work. We will include additional figures and textual explanations to discuss these limitations in future versions.
- We have also noted the paper on Stable-Hair, which differs from our approach by not decoupling hairstyle features, thereby preserving both hair color and overall hairstyle during the transfer, and not focusing on text-based editing of hairstyles. This paper has not yet released code or demos; we will conduct a comparison once these resources are available.
---
Reply to Comment 1.1.2:
Comment: Due to the upcoming deadline for the discussion, I would like to confirm if you have any further questions or if there are areas where you need further clarification. If there are no more issues with my submission, I hope to receive your feedback and proceed.
Thank you very much for taking the time to review my work amidst your busy schedule. I greatly value your opinions and hope to improve my research under your guidance.
Thank you for your assistance, and I look forward to your reply.

---
Rebuttal 1:
Rebuttal: **G1: More Examples of the Limitations:** In Figure 1 of the PDF, we have added discussions on the limitations of the warping module in extreme cases of hair color transfer, including significant pose differences, complex textures, and large discrepancies in hairstyle regions. To visually illustrate the limitations of the warping module method, the warped results have not undergone the post-processing mentioned in the paper.
The first and second rows demonstrate cases with significant differences in hair length: while the hairline region aligns well, the hair ends do not. The second row on the left, where the hair lengths are similar, performs well. This is due to the consistent inclusion of hair-end information when generating the paired hair dataset used to train the warping module. The third, fourth, and fifth rows show cases with significant pose differences. The warped result on the right side of the second row shows that the hair orientation aligns well with the facial orientation of the Source Image, indicating some robustness of the model across different poses. However, the right side of the second row and the third row still show misalignment in hair parting, leading to color inconsistencies in the Output. The left side of the fourth row shows the model discarding the hair ends while supplementing the bangs area of the Source Image, though there is still some deviation in the hair color's centerline. The last row depicts complex-texture scenarios where the Output hair color does not match the Target Image, due to the image compression required for diffusion input and the bilateral filtering operation removing high-frequency color details.
For cases with missing target region hair color, post-processing with PatchMatch can fill in the blank areas, as shown in Figure 4. The effectiveness is demonstrated in the ablation experiments in Table 1.
**G2: More Examples of Reconstruction:** In Figure 3, we have supplemented self-transfer experiments with state-of-the-art methods HairFastGAN and StyleGAN Salon. The comparison illustrates the retention of accessories (G, M, O); multi-color hair preservation (N, Q); hand preservation (F, K, L, W); unique facial features retention (I); hairstyle preservation (A, B, C, E, T, S, R, N); background retention (P). Our method also has limitations, such as color discrepancies in attributes other than the hairstyle in some cases, like hair color in the second column and skin color in the seventh column. This is due to merging the noisy latent, masked image latent, stroke map latent, and mask at the initial convolution layer of the UNet architecture, where they are collectively influenced by the text embedding. Consequently, subsequent layers in the UNet model struggle to obtain pure masked image features due to the influence of the text and stroke map.
**G3: Hair Color Transfer Metrics:** In Table 2 (left), we referenced HairFastGAN for **single color transfer** metrics testing, showing that our method performs comparably to GAN-based methods on the CelebA-HQ dataset, which predominantly features single-colored hair. The multi-colored hairstyle images we crawled are of sufficient quality to support fine-tuning of the CLIP model, but most do not reach the resolution of CelebA-HQ (1024×1024), and the data volume is insufficient for quantitative experiments. Thus, we only provide visual comparisons in the paper.
**G4: Hair Reconstruction Metrics:** In Table 2 (right), we have referenced HairFastGAN to include more comprehensive metrics for reconstruction quality, demonstrating the superiority of our method in overall image quality.
**G5: Comparison to HairFastGAN:** In Figure 2, our method outperforms HairFastGAN in both facial preservation and hair color transfer.
**G6: Effectiveness of CLIP Fine-tuning:** In Figure 5, we conducted a simple text-to-image alignment experiment. It demonstrates that our fine-tuned CLIP model performs better on hair color and multi-color structure than the original model. Some colors show overfitting, which we plan to address by balancing color proportions in the fine-tuning dataset in future work.
Pdf: /pdf/d0fedc0c3449c90c3ce61f5fa819ba13eae15dc1.pdf
Source: NeurIPS 2024 submissions
---
Title: Efficiency for Free: Ideal Data Are Transportable Representations
Paper Decision: Accept (poster)
Summary: This paper considers dataset distillation into a more concise form for purposes of representational efficiency, partly as an attempt to consider which forms of deployment might benefit from such representations, and more importantly seeking to offer a universal translator to extract such compact representations from diverse datasets and to study their resulting influence on training efficiency. It includes what appears to be an attempt at a nearly comprehensive survey of related literature, which is itself a nice part of the paper.
Strengths: - Originality
I didn't get a strong sense of significant originality, although that was sort of a vague sense, and I welcome further input from the authors. The five points were clearly articulated in Section 1; I wonder if you could color these with their specific relations to prior work?
- Quality
The paper was not a standout, but for those with interest in this less common application it might be useful.
- Clarity
Table 1 was awkwardly placed; it distracted from the flow of the paper and would possibly be better served as paragraphs instead of a table.
- Significance
I had trouble interpreting the significance of the ReLA-D model for generic dataset distillation, and I ask the authors for further input below. Otherwise, a contribution of the work appears to be the demonstration of the impact of distillation on training efficiency, which is sort of intuitive and probably more useful as a validation of their tool than as a significant result.
Weaknesses: The way the paper is written, it almost appears that the authors approach the considerations of dataset distillation (e.g., in Table 1) as if such matters were unexplored in the literature, which I would find surprising. I think the paper would have been better structured by leading with the ReLA-D model convention for dataset distillation as a more prominent part of the writeup, including more detail on how it relates to prior work.
Technical Quality: 2
Clarity: 2
Questions for Authors: I am partly fascinated by scope not addressed by the paper: where might these forms of distilled datasets be of added benefit due to their less noisy representations?
More of a hypothetical question, no need to answer: Are there any known applications where data augmentation is known to interfere with implementations? Perhaps that might be a starting point of where valuable sources of impact of this form of distillation could be considered.
Can you talk further about the extent of novelty and related work for the ReLA-D tool?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: I don't know if there is any strong interest in dataset distillation in practice, where it may be used in production systems. If there is, perhaps you could highlight it further, as it may make the importance of the paper more clear.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes

---
Rebuttal 1:
Rebuttal: > [W1] The way the paper is written it almost appears that authors are approaching the concepts of dataset distillation considerations (eg in Table 1) as if such matters are unexplored in literature, which I would sort of find surprising. I think the paper would have been better structured as leading with the ReLA-D model convention for data set distillation as a more prominent part of the writeup, including more detail as to how that relates to prior work.
>
[R1] Thank you for your valuable feedback. We will revise the manuscript to explicitly situate the ReLA-D model within the context of existing literature on dataset distillation. Additionally, we will enhance the discussion to clarify how our work builds upon and differentiates from prior research, as outlined in our [R4] below.
> [Q1] I am partly fascinated by scope not addressed by the paper as to where these forms of distilled data sets might be of added benefit due to their less noisy representations.
>
[R2] Thank you for your feedback. We have conducted additional experiments and further explorations, as detailed in the attached PDF and in our response to Reviewer zbwg.
In the attached PDF, we have included additional experiments covering: (a) the higher-resolution CelebA-HQ dataset (1024 × 1024); (b) a continual learning task; (c) comparisons with more state-of-the-art baselines; (d) reports on computation time and peak GPU memory usage; (e) a segmentation task on the VOC 2012 dataset; and (f) the larger ImageNet-21K dataset.
> [Q2] More of a hypothetical question, no need to answer: Are there any known applications where data augmentation is known to interfere with implementations? Perhaps that might be a starting point of where valuable sources of impact of this form of distillation could be considered.
>
[R3] To the best of our knowledge, no such applications currently exist.
> [Q3] Can you talk further about the extent of novelty and related work for the ReLA-D tool?
>
[R4] Additional discussions include the following:
1. As detailed in Appendix K, the SOTA dataset distillation methods generally demand a higher computational budget compared to training on the full dataset. For instance, to distill ImageNet-1K, the SOTA efficient methods [a,b,c] require a model that is well-trained on ImageNet-1K. In contrast, our ReLA can distill a dataset using less than the budget required for training a single epoch (refer to our analysis in Appendix K). The technical solution is detailed in Section 4.
2. Our ReLA is capable of distilling large datasets for both self-supervised and supervised learning tasks, whereas conventional methods are typically limited to distilling small datasets or are exclusively applicable to supervised learning.
[a] Sun, Peng, et al. "On the diversity and realism of distilled dataset: An efficient dataset distillation paradigm." CVPR 2024.
[b] Yin, Zeyuan, Eric Xing, and Zhiqiang Shen. "Squeeze, recover and relabel: Dataset condensation at ImageNet scale from a new perspective." NeurIPS 2023 Spotlight.
[c] Shao, Shitong, et al. "Generalized large-scale data condensation via various backbone and statistical matching." CVPR 2024 Highlight.
---
Rebuttal Comment 1.1:
Title: Update to score
Comment: Thank you for your responses, author(s). Based on your comments, and particularly [R4], I have increased the contribution score from 2 to 3 and the overall score from 4 to 6.

---
Summary: This paper proposes to accelerate the training of self-supervised learning with a pre-trained teacher model, using the teacher's predictions as targets. The method is motivated by a toy example in which training convergence is dominated by the variance of the random variable. The authors then conjecture that maximizing the mutual information between samples and targets can accelerate convergence, and based on this they propose adding a cosine similarity term between the outputs of the student model and the teacher to the base self-supervised learning loss. Experiments demonstrate that the proposed technique enhances performance when only part of the data is available.
Strengths: 1. The experiments are sufficient and satisfactory to demonstrate the effectiveness of the proposed method.
2. Most of the claims are backed up with theoretical analysis.
Weaknesses: 1. I do not think the writing style of the article is proper for a technical paper. Overall, it is over-packaged from the motivation through to the final solution. The authors try to introduce a big picture in the abstract and introduction. However, readers can hardly know what they actually do until reading the method section. I recommend introducing the technical solution first and then discussing the underlying motivations.
2. Technically, my major concern lies in the motivation of the proposed method. In Conjecture 1, the authors speculate that maximizing the mutual information between samples and targets can speed up training. This conjecture is based on an oversimplified example that conducts classification for a Gaussian mixture model with two Gaussian components. It is true that diminishing the variance can result in faster training in this simple case, but it is far from real-world cases, which are much more complex. In this way, although the article involves extensive theoretical analysis, the core part, which connects the motivation and the final solution, is not well supported.
3. In fact, this conjecture, as well as the final solution that minimizes the cosine loss between sample predictions and targets generated by a well-trained teacher, contradicts many recent studies. For example, in RDED [a], hard patches are selected to form the synthetic samples; after all, hard patches should have a larger classification loss, corresponding to samples closer to the decision boundary in the toy case. Recent studies such as [b] and [c] also demonstrate that weak trajectories or teacher models may benefit learning more. Given these studies, the conjecture of this article does not seem plausible without substantial evidence.
4. Based on the above analysis, I am inclined to believe the acceleration comes from the additional supervision signals, which transform a self-supervised learning problem into an almost fully supervised one, since a well-trained teacher is involved. If that is the case, the conclusion and technique proposed in the article would appear trivial, since similar conclusions and techniques were well studied even several years ago [d,e,f,g].
5. The authors analyze *the optimal properties of distilled data*. An assumption specifying the distillation method is missing here: different methods result in different distilled data, and not all of them satisfy the proposed properties.
6. I do not think the introduction of dynamic dataset distillation is necessary. The authors randomly sample a portion of the real data to form the training dataset and use different data in each epoch, which can be viewed as mini-batch training on a subset and is not closely related to dataset distillation. In fact, I think the settings in DiM [h] and [i] adhere more closely to this concept. In other words, the setting in this article is not actually coupled with dataset distillation; that using only a subset of the data achieves satisfactory performance is the result of introducing a pre-trained teacher. The method is not well aligned with dataset distillation, which serves as its motivation. In this case, the results may not appear surprising, since the method transforms a self-supervised learning problem into an almost fully supervised one.
[a] On the Diversity and Realism of Distilled Dataset: An Efficient Dataset Distillation Paradigm (Peng Sun et al., CVPR 2024)
[b] Dataset Quantization (Daquan Zhou & Kai Wang & Jianyang Gu et al., ICCV 2023)
[c] SelMatch: Effectively Scaling Up Dataset Distillation via Selection-Based Initialization and Partial Updates by Trajectory Matching (Yongmin Lee et al., ICML 2024)
[d] Self-Supervised Dataset Distillation for Transfer Learning (Dong Bok Lee & Seanie Lee et al., ICLR 2024)
[e] Boosting Self-Supervised Learning via Knowledge Transfer (Noroozi et al., CVPR 2018)
[f] Knowledge Distillation Meets Self-Supervision (Xu et al., ECCV 2020)
[g] SEED: Self-supervised Distillation For Visual Representation (Fang et al., ICLR 2021)
[h] DiM: Distilling Dataset into Generative Model (Kai Wang & Jianyang Gu et al., 2023)
[i] Data Distillation Can Be Like Vodka: Distilling More Times For Better Quality (Xuxi Chen & Yu Yang et al., ICLR 2024)
Technical Quality: 2
Clarity: 2
Questions for Authors: Please refer to the weakness part above.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The authors have mentioned that Conjecture 1 remains speculative. However, I do think this is an important limitation that serves as the core of the whole method and article.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > [W1] […] I recommend introducing the technical solution first and then discussing the underlying motivations.
>
[R1] Thank you for your feedback. We will revise the paper to present the technical solution earlier, enhancing clarity and focus.
> [W2] Technically, my major concern lies in the motivation of the proposed method. In Conjecture 1, […] This conjecture is based on an over simplified example that conducts classification for a Gaussian Mixture Model with 2 Gaussian distributions. […]
>
[R2] We would like to clarify the following points:
1. Prior research efforts [n,o], including the Outstanding Paper at NeurIPS 2022, have utilized simplified mixtures of Gaussian distributions to study data efficiency and other properties, acknowledging the complexity of analyzing real-world data distributions. These studies typically extend theoretical insights to more complex datasets through empirical validation.
2. Our work provides theoretical insights into data properties relevant to efficient training. These insights do not suggest a reduction in the variance of real-world data but rather support our Conjecture 1, which forms the basis of our proposed technical solution. The validity of our solution is demonstrated in Section 5, where we present extensive experimental results on real-world datasets.
> [W3] In fact, this conjecture, as well as the final solution that minimizes the cosine loss with sample predictions and targets generated by a well-trained teacher, is contradictory to many recent studies. For example, in RDED [a], hard patches are selected to form the synthetic samples. After all, hard patches should have larger classification loss, corresponding to samples closer to the decision boundary in the toy case. Recent studies like [b] and [c] also demonstrate that weak trajectory or teacher models may benefit learning more. […]
>
[R3] We would like to clarify the following points:
1. In fact, RDED [a] promotes the selection of easy patches (those with smaller classification loss, as indicated in their Equation (8)) and utilizes well-trained teacher models for target generation, as specified in their Algorithm 1. This methodology is consistent with our theoretical and empirical analyses and aligns with the recommendations found in several recent studies [e,f,g,j,k,l].
2. The papers the reviewer referenced [b, c] utilize weak teachers for trajectory matching or patch selection in image generation. However, none of these studies advocates the use of weak teachers in target generation.
> [W4] Based on above analysis, I would like to believe the acceleration comes from additional supervision signals, which transforms a self-supervised learning problem to an almost fully supervised one, since a well-trained teacher is involved. […]
>
[R4] We would like to clarify the following points:
1. Unlike prior studies [a,b,e,f,g,j,k,l] that directly utilize a well-trained model specific to the dataset, our approach, ReLA, leverages any publicly available model to generate supervision signals. For instance, ReLA can use a model pre-trained on CIFAR-10 to generate supervision signals that accelerate training for BYOL on a 10% subset of ImageNet-1K, resulting in a 7.7% increase in accuracy compared to the original BYOL (see Sections 4 and 5).
2. Our aim is to propose a simple, effective, plug-and-play, and theoretically supported method to accelerate representation learning. In addition to empirical validation in our paper and similar evidence in prior studies [d,e,f,g,j,k,l], we also provide theoretical insights into how these generated supervision signals enhance the convergence rate of learning (see Section 3.2).
3. We have not transformed a self-supervised learning problem into an almost fully supervised one. The additional supervision loss is integrated with the original loss term and modulated by a dynamic coefficient (see Section 4.2). Furthermore, our ablation study in Section 5.2 illustrates that using only the supervision loss can result in catastrophic performance.
4. Moreover, our method can also be used to accelerate supervised learning, as demonstrated in our experiments and analysis in Appendix L.
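For illustration only, the loss combination described in point 3 above might be sketched as follows; the cosine term and the decaying coefficient come from the discussion, but the linear decay schedule and all function names are hypothetical, not the actual ReLA implementation:

```python
import numpy as np

def cosine_loss(student_out, teacher_out):
    # 1 - cosine similarity between student and teacher output vectors.
    s = student_out / np.linalg.norm(student_out)
    t = teacher_out / np.linalg.norm(teacher_out)
    return 1.0 - float(np.dot(s, t))

def combined_loss(ssl_loss, student_out, teacher_out, epoch, total_epochs):
    # Hypothetical linearly decaying coefficient: the teacher-based term
    # dominates early ("rapid guidance") and vanishes by the end of training,
    # so training never reduces to purely supervised imitation.
    alpha = max(0.0, 1.0 - epoch / total_epochs)
    return ssl_loss + alpha * cosine_loss(student_out, teacher_out)
```

With this shape, at the start of training (`epoch=0`) the supervision term is fully weighted, and at the end (`epoch=total_epochs`) only the original self-supervised loss remains.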
> [W5] The authors analyze *the optimal properties of distilled data*. […], different methods would result in different distill data. Not all of them satisfy the proposed properties.
>
[R5] We would like to clarify the following points:
1. Our analysis tries to identify the ideal properties of efficient data in order to offer insights that support our proposed framework. Most existing dataset distillation methods fail to fully satisfy these properties. The relationship between our analyzed properties and previous studies is discussed in Section 3.2 and Appendix H.
2. However, our theoretical results are consistent with the empirical results in many previous studies [e,f,g,j,k,l,m], which indicate that: (i) selecting/synthesizing samples that are easy or close to the class data mean, and (ii) using well-trained models to relabel samples, can achieve superior performance with fewer training steps.
> [W6] I do not think the introduction of dynamic dataset distillation is necessary. […], the results may not appear surprising since the method transforms a self-supervised learning problem to an almost fully supervised one.
>
[R6] We would like to clarify the following points:
1. Our framework ReLA is defined under the concept of dynamic dataset distillation because ReLA involves dynamically editing samples and labels during training.
2. Our approach does not rely solely on the introduction of a pre-trained teacher, as detailed in our analysis in [R4]. Furthermore, our experiments in Section 5 show that introducing weak teachers also contributes positively to our framework.
3. In addition to self-supervised learning tasks, we demonstrate that our ReLA framework outperforms state-of-the-art dataset distillation methods in conventional supervised learning tasks (see Appendix L).
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for the detailed response. However, my main concerns are still not addressed. I understand that some papers have conducted theoretical analysis under the GMM setting; still, there remains unaddressed controversy with some works, and the technical contribution of the article still appears trivial to me.
1. In R3, the authors have clarified the points in RDED, DQ, and SelMatch. However, there are indeed some works advocating the use of weak teachers in target generation, such as Fig. 2 in [Qin et al. 2024]. I perfectly understand that it appeared on arXiv after the NeurIPS submission deadline. Nevertheless, their conclusions seem more convincing because they are tested on real-world cases instead of merely analyzed through simple GMMs. Especially when there are only a small number of training samples, which is the main focus of this article for efficient training, weak teachers can be more useful. Taking this into consideration, I do think there exists a significant gap between the theoretical analysis and the practical cases.
2. I also perfectly understand that merely using the supervised loss is insufficient. However, the experimental enhancement is not surprising to me at all. Compared with the baseline, the method in this article introduces a teacher model pre-trained on large-scale data, whose representations are definitely useful, especially in self-supervised problems. What this article actually does is introduce a knowledge-distillation-like loss term to enhance performance, which is already a popular technique in practice.
Taking these factors into consideration, I tend to maintain my original score.
A Label is Worth a Thousand Images in Dataset Distillation, Qin et al., 2024
---
Reply to Comment 1.1.1:
Comment: > [Q1] In R3, the authors have clarified the points in RDED, DQ, and SelMatch. However, there are indeed some works advocating the use of weak teachers in target generation, such as Fig. 2 in [Qin et al. 2024]. I perfectly understand that it appeared on arXiv after the NeurIPS submission deadline. Nevertheless, their conclusions seem more convincing because they are tested on real-world cases instead of merely analyzed through simple GMMs. Especially when there are only a small number of training samples, which is the main focus of this article for efficient training, weak teachers can be more useful. Taking this into consideration, I do think there exists a significant gap between the theoretical analysis and the practical cases.
>
[R1] We respectfully disagree with the reviewer and would like to clarify that there is no significant gap between the theoretical analysis and the empirical cases in our paper:
1. We drew insights from the theoretical analysis, then designed an effective and efficient technical ReLA framework, and empirically validated our proposed framework ReLA on extensive real-world datasets and downstream tasks.
   1. Our insights align with the conclusions presented in Sections 4 and 6 of the referenced paper [1], which are derived from their experimental analysis. Specifically, we establish the insights articulated in Conjecture 1 and Definition 5 through our Theorems 1, 2, and 4. Furthermore, our insights emphasize the importance of employing labelers to provide ***informative*** targets for efficient learning, which corroborates the conclusion of paper [1] asserting the significance of incorporating “structured information.”
   2. Moreover, our Theorem 4 indicates that using data consisting of both randomly selected samples and high-quality targets can lead to superior performance. This insight aligns with the empirical results presented in Table 1 of paper [1], where randomly selected samples, when labeled by a well-trained teacher, surpass the state-of-the-art SRe$^2$L method [2]. However, paper [1] does not provide a theoretical explanation for this observation.
   3. The experimental results presented in Figure 2 of paper [1] do not sufficiently demonstrate the inapplicability of our theoretical findings and insights. Although paper [1] investigates the teacher model's role in generating distilled soft labels, as depicted in Figure 2, it fails to provide essential details regarding the methodology for producing the corresponding distilled samples or images. The omission of critical experimental parameters—such as the optimizer, learning rate, and the number of training epochs for the student model—makes it difficult to compare their experimental setup with the conditions assumed in our theoretical analysis.
   4. Moreover, the study presented in paper [1] lacks additional experiments to validate the phenomena observed in Figure 2, such as evaluations with alternative optimization algorithms or different distillation methods; their current analysis is limited to a single illustrative case. It is also crucial to note that paper [1] is a preprint submitted to arXiv after the NeurIPS deadline, and its experimental details and methodologies have not yet undergone peer review.
2. The significant empirical gains we provide align well with our theoretical findings and insights. In fact, the primary objective of our theoretical analysis is to identify the ideal properties of data-efficient training in order to offer insights that support our Conjecture 1, Definition 5, and thereby our proposed technical framework ReLA. We included extensive empirical results (Section 5) across various datasets, neural architectures, and learning algorithms to support our finding that either a weak or strong prior model can create efficient data to significantly accelerate training.
   1. Small-scale CIFAR models improve ImageNet training: ReLA can use a model pre-trained on CIFAR-10 to generate efficient data that accelerates training for BYOL on a 10% subset of ImageNet-1K, resulting in a 7.7% increase in accuracy compared to the original BYOL (see Table 2).
   2. Using CLIP as a prior model, ReLA-aided BYOL enables training a ResNet-50 from scratch on 50% of the ImageNet-1K dataset, achieving performance that exceeds models trained on the full dataset.
To the best of our knowledge, the presented empirical results and the acceleration provided to the community are novel and significant.
---
Reply to Comment 1.1.2:
Comment: > [Q2] I also perfectly understand that merely using the supervised loss is insufficient. However, the experimental enhancement is not surprising to me at all. Compared with the baseline, the method in this article introduces a teacher model pre-trained on large-scale data, whose representations are definitely useful, especially in self-supervised problems. What this article actually does is introduce a knowledge-distillation-like loss term to enhance performance, which is already a popular technique in practice.
>
[R2] We would like to clarify the following points:
1. Our method does not depend on a teacher model pre-trained on large-scale data. As demonstrated in Table 2, even a weak prior model can generate data that significantly enhances training on large datasets, such as ImageNet-1K. Specifically:
   1. ReLA can leverage a model pre-trained on CIFAR-10—a relatively small dataset—to produce data that accelerates BYOL training on a 10% subset of ImageNet-1K, yielding a 7.7% improvement in accuracy over the original BYOL.
   2. Even a randomly initialized, untrained model can provide sufficient acceleration. For example, when employing such a model as the prior model in our ReLA framework to generate efficient data, it enhances BYOL training on a 10% subset of ImageNet-1K, resulting in a 6.1% increase in accuracy compared to the original BYOL. We believe this finding presents a novel contribution to the field.
2. Our proposed method ReLA is fundamentally different from knowledge distillation:
   1. Knowledge distillation [3] typically employs a well-trained large model as the teacher, with the aim of transferring its knowledge to a smaller model that retains similar performance on the corresponding dataset. In contrast, our proposed method, ReLA, is a simple, effective, plug-and-play approach that leverages publicly available models on the internet as prior models to generate efficient data, which, combined with the ReLA loss, accelerates downstream learning tasks and achieves much better performance (see Sections 4.1 and 4.2).
      - ReLA imposes no restrictions on the architecture, scale, or pre-training datasets of the prior models used for generating efficient data. To validate the feasibility of this approach, we tested the effectiveness of various prior models with different representational capacities, knowledge, and architectures, as shown in Tables 1 and 2.
      - In response to the reviewers' feedback, we conducted additional evaluations on various downstream tasks (refer to the supplementary PDF).
   2. In ReLA, the dynamic coefficient used in the loss function during downstream task model training is critical. This coefficient decays over the course of training, particularly for the ReLA loss term related to efficient data. The rationale behind this decay is that the primary function of the efficient data is to provide 'rapid guidance' during the early stages of model training (see Section 4.2). Our ablation study in Figure 4(b) further demonstrates the necessity of this dynamic decay, especially when the prior model used to generate the efficient data is weak. This approach is fundamentally different from the primary objective of knowledge distillation, which focuses on transferring the teacher model's capabilities to the student model.
   3. In the knowledge distillation process [3], each augmented view of the data typically requires corresponding soft labels from the teacher model, enabling the student to emulate the teacher's behavior across various transformations of the input data. This results in a computational cost that grows linearly with the number of training epochs: if the number of training epochs is \(N\), the teacher-side overhead scales to \(N\) times the cost of one inference pass over the entire dataset. In contrast, as detailed in Appendix K, ReLA requires only a single inference pass of the prior model over the entire dataset to generate efficient data. Further supporting evidence is provided in the attached PDF.
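To make the teacher-cost comparison above concrete, here is a minimal back-of-the-envelope sketch; the function name and the per-epoch-relabeling framing are illustrative assumptions, not the paper's implementation:

```python
def teacher_forward_passes(dataset_size, epochs, relabel_every_epoch):
    # Standard distillation relabels each (augmented) sample every epoch,
    # so the teacher cost scales as epochs * |D|; a one-shot labeling pass,
    # as described for ReLA, costs a single sweep over the dataset.
    return dataset_size * (epochs if relabel_every_epoch else 1)

# ImageNet-1K has 1,281,167 training images; with 100 epochs the per-epoch
# scheme needs ~128M teacher forward passes versus ~1.28M for a single pass.
kd_cost = teacher_forward_passes(1_281_167, 100, True)
rela_cost = teacher_forward_passes(1_281_167, 100, False)
```

The 100x gap here is exactly the \(N\)-fold factor described above, with \(N = 100\) epochs.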
[1] Qin, Tian, Zhiwei Deng, and David Alvarez-Melis. "A Label is Worth a Thousand Images in Dataset Distillation." 2024.
[2] Yin, Zeyuan, Eric Xing, and Zhiqiang Shen. "Squeeze, recover and relabel: Dataset condensation at imagenet scale from a new perspective." NeurIPS 2023 Spotlight.
[3] Hinton, Geoffrey, Oriol Vinyals, and Jeff Dean. "Distilling the knowledge in a neural network." *2015*.
---
Reply to Comment 1.1.3:
Comment: We express our sincere appreciation for your insightful feedback and thorough review of our paper. Your comments have been instrumental in enhancing our work and refining the proposed ReLA method. In response to the specific questions ([Q1] and [Q2]) you raised, we have provided detailed explanations to address each concern comprehensively. Below, we summarize our key responses:
- **[R1] We drew insights from the theoretical analysis, designed an effective and efficient ReLA framework, and empirically validated it on extensive real-world datasets and downstream tasks. Our theoretical analysis and observed empirical results are well-aligned, motivating our ReLA design, the key contribution of the manuscript:**
- Our theoretical insights, specifically detailed in Conjecture 1 and Definition 5, which indicate the importance of ***informative*** targets, are consistent with the conclusions drawn from the experimental analyses in Sections 4 and 6 of the referenced paper [1].
- Furthermore, Theorem 4 suggests that using a combination of randomly selected samples and high-quality targets enhances performance, a finding corroborated by the empirical results in Table 1 of paper [1].
- The experimental results shown in Figure 2 of paper [1] lack crucial experimental parameters and a description of the image distillation method. Hence, it is difficult to dismiss our theoretical results based solely on the empirical outcomes in Figure 2. Additionally, their analysis is confined to a single illustrative case, without further experiments to validate the observed phenomena.
- Our substantial empirical gains align well with our theoretical predictions and insights. We provided extensive empirical results (Section 5) across various datasets, neural architectures, and learning algorithms, demonstrating that both weak and strong prior models can generate efficient data, significantly accelerating training through our ReLA.
- Note that the empirical-understanding preprint [1] appeared after the NeurIPS 2024 submission deadline. Although our theoretical analysis and [1]'s empirical study share some overlapping observations, this overlap only strengthens our core ReLA contribution. We will incorporate discussion of the findings presented in paper [1] into our manuscript.
- **[R2] Our method is independent of a pre-trained teacher model on large-scale data and fundamentally differs from knowledge distillation:**
- Knowledge distillation [2] typically uses a well-trained, large model as the teacher to transfer knowledge to a smaller model, aiming to achieve comparable performance on the same dataset.
- In contrast, our ReLA approach offers 'rapid guidance' by leveraging efficient data during the initial stages of model training (see Section 4.2). ReLA allows various prior models, including weak models or even randomly initialized and untrained models, to generate efficient data that significantly enhance training on large datasets like ImageNet-1K.
In response to the positive feedback from reviewers 7uZz and zbwg, who have acknowledged the merits of ReLA and subsequently increased their scores, we respectfully seek your final assessment. If our rebuttal has adequately addressed your concerns and highlighted the significance of our proposed method, we kindly request that you consider revising your score accordingly. An increased score is critically important to our work at this stage.
We remain committed to addressing any remaining concerns and are open to further discussions or clarifications. Your valuable feedback has been instrumental in refining our research, and we appreciate the opportunity to enhance our work based on your input. Thank you for your time and efforts throughout the review process. We look forward to your further feedback.
[1] Qin, Tian, Zhiwei Deng, and David Alvarez-Melis. "A Label is Worth a Thousand Images in Dataset Distillation." 2024.
[2] Hinton, Geoffrey, Oriol Vinyals, and Jeff Dean. "Distilling the knowledge in a neural network." *2015*.
---
Rebuttal 2:
Title: Reference
Comment: [a] Sun, Peng, et al. "On the diversity and realism of distilled dataset: An efficient dataset distillation paradigm." CVPR 2024.
[b] Zhou, Daquan, et al. "Dataset quantization." ICCV 2023.
[c] Lee, Yongmin, and Hye Won Chung. "SelMatch: Effectively scaling up dataset distillation via selection-based initialization and partial updates by trajectory matching." ICML 2024.
[d] Lee, Dong Bok, et al. "Self-supervised dataset distillation for transfer learning." ICLR 2024.
[e] Noroozi, Mehdi, et al. "Boosting self-supervised learning via knowledge transfer." CVPR 2018.
[f] Xu, Guodong, et al. "Knowledge distillation meets self-supervision.” ECCV 2020.
[g] Fang, Zhiyuan, et al. "Seed: Self-supervised distillation for visual representation.” ICLR 2021.
[h] Wang, Kai, et al. "Dim: Distilling dataset into generative model." Preprint 2023.
[i] Chen, Xuxi, et al. "Data distillation can be like vodka: Distilling more times for better quality." ICLR 2024
[j] Yin, Zeyuan, Eric Xing, and Zhiqiang Shen. "Squeeze, recover and relabel: Dataset condensation at imagenet scale from a new perspective." NeurIPS 2023 Spotlight.
[k] Shao, Shitong, et al. "Generalized large-scale data condensation via various backbone and statistical matching." CVPR 2024 Highlight.
[l] Shao, Shitong, et al. "Elucidating the Design Space of Dataset Condensation.” Preprint 2024.
[m] Zhao, Bo, and Hakan Bilen. "Dataset condensation with distribution matching." WACV 2023.
[n] Sorscher, Ben, et al. "Beyond neural scaling laws: beating power law scaling via data pruning." NeurIPS 2022 Outstanding Paper.
[o] Loureiro, Bruno, et al. "Learning gaussian mixtures with generalized linear models: Precise asymptotics in high-dimensions." NeurIPS 2021. | Summary: The paper addresses the scalability constraints in representation learning by proposing a novel Representation Learning Accelerator (RELA). Current paradigms, focusing separately on self-supervised learning and dataset distillation, overlook the potential of intermediate acceleration. The authors define ideal data properties for optimization and generalization, enabling effective transport of model-generated representations. RELA leverages a task- and architecture-agnostic public model to form a dynamic data subset, enhancing (self-)supervised learning. Empirical results show that using CLIP ViT B/16 as a prior model, RELA-aided BYOL can train a ResNet-50 from scratch with 50% of ImageNet-1K, surpassing full dataset performance. This approach improves representation learning efficiency, offering impactful implications for reducing data requirements and computational resources in model training.
Strengths: [S1] The paper tackles critical issues in efficient training and dataset distillation (DD).
[S2] RELA introduces a novel approach in dataset distillation, effectively bridging representation learning with data-efficient methods.
[S3] The paper is clearly and effectively written.
[S4] Comprehensive theoretical analysis supports the proposed claims. The paper explores analytical concepts in dataset distillation, employing information theory principles.
[S5] Extensive experimental and ablation studies are conducted in both the main paper and appendix.
Weaknesses: [W1] The paper lacks generalization ability to high-resolution datasets, such as 1Kx1K, which are common in practical datasets like clinical and aerial images.
[W2] The practical applications of dataset distillation are not discussed. Demonstrating applicability in neural architecture search (NAS), continual learning, federated learning, and privacy preservation would be valuable for both the community and real-world scenarios.
[W3] Comparisons with some state-of-the-art methods, including DREAM [a], DataDAM [b], and SeqMatch [c], are missing from Table 6. Including these comparisons would strengthen the evaluation of the proposed method.
[W4] An analysis of the computational costs is absent. It is crucial to examine the training costs, including both memory and training time, for the proposed process and state-of-the-art methods to ensure data-efficient training algorithms.
[W5] The theoretical analysis does not generalize to complicated data distributions, such as two arbitrary mixtures of Gaussian distributions or a Mixture of Generalized Gaussian distributions (MGG).
Upon addressing these weaknesses, I will consider changing my initial rating to strong accept.
-------------
References:
[a] Liu, Yanqing, et al. "Dream: Efficient dataset distillation by representative matching." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.
[b] Sajedi, Ahmad, et al. "Datadam: Efficient dataset distillation with attention matching." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.
[c] Du, Jiawei, et al. "Sequential subset matching for dataset distillation." Advances in Neural Information Processing Systems 36 (2024).
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. Can RELA be extended to ImageNet-21K? If so, please provide some experimental results.
2. How does the proposed framework perform on other downstream tasks, such as object detection or segmentation?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: The limitations are briefly discussed in the appendix. For more limitations, refer to the Weaknesses and Questions sections.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > [W1] The paper lacks generalization ability to high-resolution datasets, such as 1Kx1K, which are common in practical datasets like clinical and aerial images.
>
[R1] Thank you for your feedback. We have conducted additional experiments using the CelebA-HQ dataset (1024 $\times$ 1024). The results, presented in Table 1 of the attached PDF, indicate that our ReLA effectively accelerates self-supervised learning on high-resolution datasets.
> [W2] The practical applications of dataset distillation are not discussed. Demonstrating applicability in neural architecture search (NAS), continual learning, federated learning, and privacy preservation would be valuable for both the community and real-world scenarios.
>
[R2] We would like to emphasize that the primary application of our ReLA method is to accelerate representation learning. This extends the scope of conventional dataset distillation, as traditional methods require more computational resources in the distillation process than training on the full dataset does, making them unsuitable for accelerating learning (refer to our detailed analysis in [R4]).
Furthermore, we have applied ReLA to continual learning, with the experimental results presented in Table 2 of the attached PDF. These results demonstrate that ReLA outperforms the baseline in this context.
> [W3] Comparisons with some state-of-the-art methods, including DREAM [a], DataDAM [b], and SeqMatch [c], are missing from Table 6. Including these comparisons would strengthen the evaluation of the proposed method.
>
[R3] We excluded comparisons with methods [a,b,c] because they are inefficient on large-scale datasets, such as ImageNet-1K, particularly with large backbone architectures like ResNet-18, as demonstrated in previous studies [d,e,f].
Instead, we evaluated these methods [a,b,c] against our proposed ReLA on four datasets with smaller backbone networks. The results of these comparisons are presented in Table 3 of the attached PDF, demonstrating that ReLA consistently outperforms these methods.
We will add these baselines into our manuscript.
> [W4] An analysis of the computational costs is absent. It is crucial to examine the training costs, including both memory and training time, for the proposed process and state-of-the-art methods to ensure data-efficient training algorithms.
>
[R4] As detailed in Appendix K, the SOTA dataset distillation methods generally incur higher computational costs than training on the entire dataset. For example, the SOTA efficient method RDED [d] necessitates a fully trained model on ImageNet-1K for distillation. Additionally, several other SOTA methods [a,b,c] also demand higher computational budgets compared to training on the full dataset, as documented in their respective studies.
In contrast, our proposed ReLA method can distill a dataset using less than the computational budget required for a single epoch of full dataset training (see Appendix K for a detailed analysis).
The computational costs of ReLA, along with comparisons to baseline methods, are presented in Table 4 of the attached PDF. The experimental results indicate that while ReLA slightly increases computational time and peak GPU memory usage when using a partial dataset for training, it significantly enhances accuracy, surpassing the performance achieved by training on the entire dataset.
We will add these comparisons into our manuscript.
> [W5] The theoretical analysis does not generalize to complicated data distributions, such as two arbitrary mixtures of Gaussian distributions or a Mixture of Generalized Gaussian distributions (MGG).
>
[R5] The theoretical analysis presented in this paper can indeed be generalized to arbitrary mixtures of Gaussian distributions. Although Section 3 exemplifies this with a specific mixture of Gaussian distributions, the proofs and theoretical framework detailed in Appendices B and C provide a generalized approach applicable to any mixture of two Gaussian distributions, regardless of their means and variances.
> [Q1] Can RELA be extended to ImageNet-21K? If so, please provide some experimental results.
>
[R6] The ImageNet-1K dataset used in this study is already a substantial and challenging benchmark in the field of dataset distillation [a,b,c,d,e,f]. We extended our experiments to the ImageNet-21K dataset. As shown in Table 6 in the attached PDF, our ReLA approach also accelerates training for BYOL on ImageNet-21K, achieving higher performance with the same data usage as the original BYOL.
> [Q2] How does the proposed framework perform on other downstream tasks, such as object detection or segmentation?
>
[R7] We evaluate the pre-trained models on a downstream segmentation task, as detailed in Table 5 of the attached PDF. The results demonstrate that models trained with our ReLA framework outperform baseline methods in both classification and segmentation tasks.
[a] Liu, Yanqing, et al. "Dream: Efficient dataset distillation by representative matching." ICCV 2023.
[b] Sajedi, Ahmad, et al. "Datadam: Efficient dataset distillation with attention matching." ICCV 2023.
[c] Du, Jiawei, Qin Shi, and Joey Tianyi Zhou. "Sequential subset matching for dataset distillation." NeurIPS 2023.
[d] Sun, Peng, et al. "On the diversity and realism of distilled dataset: An efficient dataset distillation paradigm." CVPR 2024.
[e] Yin, Zeyuan, Eric Xing, and Zhiqiang Shen. "Squeeze, recover and relabel: Dataset condensation at imagenet scale from a new perspective." NeurIPS 2023 Spotlight.
[f] Shao, Shitong, et al. "Generalized large-scale data condensation via various backbone and statistical matching." CVPR 2024 Highlight.
---
Rebuttal 2:
Comment: Thank you for addressing some of my comments in the rebuttal. After reviewing the appendices again, I still do not see how the theoretical analyses can be generalized to two Mixture of Generalized Gaussian distributions (MGG). This requires further discussion.
However, given that you have extended the work to high-resolution images, various tasks, and complex datasets, I am inclined to increase the contribution rating to 4. If you provide more discussion on the theoretical analysis and address all comments in the final draft, I will consider raising my score to a strong acceptance.
---
Rebuttal Comment 2.1:
Comment: Thank you for your response. We apologize for the oversight regarding MGG. Upon re-evaluating the proofs in our paper, we have determined that our argument does not rely on any specific properties unique to MG compared to MGG. Consequently, transitioning from MG to MGG will mainly involve modifying certain constants and substituting $\Sigma^2$ with $\frac{\alpha^2 \Gamma(3 / \beta)}{\Gamma(1 / \beta)}$ (assuming the probability density function is $\frac{\beta}{2 \alpha \Gamma(1 / \beta)} e^{-(|x-\mu_i| / \alpha)^\beta}$).
Below, we present the revised proof of Theorem 1, now based on MGG. For Theorem 2, the only modification is in the setup, which now aligns with the setup used in the proof of Theorem 1 as outlined below. The core reasoning remains consistent with the original proof based on MG, indicating that the proof process detailed in Appendix A and B is generalizable. As for the other theorems in this paper, no updates are required since their proofs do not depend on assumptions regarding the data distribution.
Should you have any further questions or require additional clarification, please feel free to reach out.
---
Rebuttal 3:
Comment: **Proof of MGG version of Theorem 1:**
## **Setup**
**Notation:** $\textup{N}(\mu,\alpha,\beta)$ denotes the generalized Gaussian distribution with pdf $\frac{\beta}{2 \alpha \Gamma(1 / \beta)} e^{-(|x-\mu| / \alpha)^\beta}$, and $\textup{B}$ denotes the Bernoulli distribution.
We focus on the 1-dim situation. Assume that $\mu_1 < \mu_2$. Define the original data distribution ($\mathcal{N}_0 = \textup{N}(\mu_1, \alpha_0,\beta_0)$ and $\mathcal{N}_1 = \textup{N}(\mu_2, \alpha_0,\beta_0)$):
$$ G := \{(x, y) \mid y \sim 2 \cdot \textup{B}(1,\frac{1}{2}) - 1, x \sim \frac{1-y}{2} \cdot \mathcal{N}_0 + \frac{1+y}{2} \cdot \mathcal{N}_1 \} $$
and the modified one ($\mathcal{N}_0' = \textup{N}(\mu_1, \alpha,\beta)$ and $\mathcal{N}_1' = \textup{N}(\mu_2, \alpha,\beta)$):
$$ G' := \{(x, y) \mid y \sim 2 \cdot \textup{B}(1,\frac{1}{2}) - 1, x \sim \frac{1-y}{2} \cdot \mathcal{N}_0' + \frac{1+y}{2} \cdot \mathcal{N}_1' \} $$
Our task is predicting $y$ given $x$. Note that $y \in \{\pm1\}$, which differs slightly from the definition in Section 3.2. In the 1-dim situation, we need only one parameter for this classification task, so define $f_{\theta}(x) := \textup{sign}(x + \theta)$ to fit the distribution.
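To make the setup concrete, here is a small sketch (ours, not part of the paper) that samples from the mixture $G'$ and evaluates the one-parameter classifier $f_\theta$. It relies on the standard fact that if $W \sim \mathrm{Gamma}(1/\beta, 1)$ and $S$ is a random sign, then $\mu + S\,\alpha W^{1/\beta}$ follows $\textup{N}(\mu, \alpha, \beta)$; all function names are our own.

```python
import numpy as np

def sample_gennorm(rng, mu, alpha, beta, n):
    """Draw n samples from N(mu, alpha, beta) with pdf
    beta / (2 alpha Gamma(1/beta)) * exp(-(|x-mu|/alpha)^beta),
    via the gamma trick: W ~ Gamma(1/beta, 1), x = mu + sign * alpha * W^(1/beta)."""
    w = rng.gamma(shape=1.0 / beta, scale=1.0, size=n)
    s = rng.choice([-1.0, 1.0], size=n)
    return mu + s * alpha * w ** (1.0 / beta)

def sample_G(rng, mu1, mu2, alpha, beta, n):
    """Draw (x, y) pairs from G': y = +/-1 with probability 1/2 each,
    x ~ N(mu1, alpha, beta) if y = -1, else x ~ N(mu2, alpha, beta)."""
    y = rng.choice([-1.0, 1.0], size=n)
    x = sample_gennorm(rng, np.where(y < 0, mu1, mu2), alpha, beta, n)
    return x, y

def f(theta, x):
    """The one-parameter classifier f_theta(x) = sign(x + theta)."""
    return np.sign(x + theta)
```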
We could compute the generalization loss on the original distribution:
$$ \mathcal{L}(f_{\theta}) = \left( \int_{-\theta}^{+\infty} \, dF_- + \int_{-\infty}^{-\theta} \, dF_+ \right) / 2 = \left( 1 - \int_{-\frac{\theta + \mu_2}{\alpha_0}}^{-\frac{\theta + \mu_1}{\alpha_0}} \, dF \right) / 2 $$
Obviously $\theta^\star = -\frac{\mu_1 + \mu_2}{2}$, we have:
$$ \mathcal{L}(f_{\theta}) - \mathcal{L}(f_{\theta^\star}) = \left( \int_{-\frac{\mu_2 - \mu_1}{2 \alpha_0}}^{\frac{\mu_2 - \mu_1}{2 \alpha_0}} \, dF - \int_{-\frac{\theta + \mu_2}{\alpha_0}}^{-\frac{\theta + \mu_1}{\alpha_0}} \, dF \right) / 2 \leq C_1 \cdot (\theta - \theta^\star)^2 \quad (\text{or } C_1' \, |\theta - \theta^\star|) $$
where $C_1$, $C_1'$ are constants, and $F_-$, $F_+$, and $F$ denote the CDFs of $\mathcal{N}_0$, $\mathcal{N}_1$, and $\textup{N}(0,1,\beta_0)$, respectively.
The inequality above follows from the fact that the function $h(x) = \left( \int^{1}_{-1} \, dF - \int_{x-1}^{x+1} \, dF \right) / x^2$ has a finite limit at $x = 0$ and is therefore bounded.
## **Algorithm**
For a dataset $\{(x_i, y_i)\}_{i=1}^{n}$,
set the loss function $L(\theta) = \frac{1}{n} \sum_{i=1}^{n} \ell \left[ y_i (x_i + \theta) \right]$, $\ell(v) = \tfrac{1}{2} (1-v)^2$.
We apply the stochastic gradient descent algorithm and assume the online setting ($n=1$): at step $t$, draw one sample $(x_t, y_t)$ from $G'$ then use the gradient $\nabla L(\theta_t)$ to update $\theta$ ($\eta \in (0,1), t \in \mathbb{N}$):
$$ \theta_{t+1} = \theta_t - \eta \nabla L(\theta_t) $$
$$ \nabla L(\theta_t) = \theta_t + (x_t - y_t) $$
It can be observed that the randomness of $x$ leads to noise on the gradient.
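As a quick numerical sanity check (our sketch, not part of the paper), the online update can be simulated: with $\ell(v)=\tfrac{1}{2}(1-v)^2$ and $y_t^2 = 1$, the per-sample gradient is $\theta_t + (x_t - y_t)$, and $\theta_t$ concentrates around $\theta^\star = -\frac{\mu_1 + \mu_2}{2}$.

```python
import numpy as np

def run_sgd(rng, mu1, mu2, alpha, beta, eta, steps, theta0=0.0):
    """Online SGD for f_theta(x) = sign(x + theta): at each step draw one
    (x_t, y_t) from G' and update theta with the gradient theta + (x_t - y_t),
    which follows from l(v) = 0.5 * (1 - v)^2 and y_t^2 = 1."""
    theta = theta0
    for _ in range(steps):
        y = -1.0 if rng.random() < 0.5 else 1.0
        mu = mu1 if y < 0 else mu2
        # generalized-Gaussian sample via the gamma trick
        w = rng.gamma(1.0 / beta)
        s = 1.0 if rng.random() < 0.5 else -1.0
        x = mu + s * alpha * w ** (1.0 / beta)
        theta -= eta * (theta + (x - y))
    return theta
```

For example, with $\mu_1 = 0$, $\mu_2 = 2$ the optimum is $\theta^\star = -1$, and a smaller scale $\alpha$ (hence lower variance) yields tighter fluctuations around it.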
## **Bounds with Variance**
We prove the proposition that a lower variance of the generalized Gaussian makes convergence faster, i.e., $\mathbb{E}\left[ \mathcal{L}(f_{\theta_t}) - \mathcal{L}(f_{\theta^\star}) \right]$ is bounded by an increasing function of the variance (for fixed $t$).
## **Proof**
From above, we get:
$$ \theta_t = (1 - \eta)^{t} \theta_0 - \eta \left[ (x_{t-1} - y_{t-1}) + (1 - \eta) (x_{t-2} - y_{t-2}) + \dots + (1 - \eta)^{t-1} (x_0 - y_0) \right] $$
and so:
$$ \mathbb{E}\left[ \mathcal{L}(f_{\theta_t}) - \mathcal{L}(f_{\theta^\star}) \right] \leq C_1 \mathbb{E}\left[ (\theta_t - \theta^\star)^2 \right] $$
$$ = C_1 \mathbb{E} \left[ \left( (1 - \eta)^{t} (\theta_0 - \theta^\star) - \eta \sum_{j=1}^{t} (1 - \eta)^{j-1} (x_{t-j} - y_{t-j} + \theta^\star) \right)^2 \right] $$
$$ = C_1 \mathbb{E} \left[ (1 - \eta)^{2t} (\theta_0 - \theta^\star)^2 + \eta^2 \sum_{j=1}^{t} (1 - \eta)^{2(j-1)} (x_{t-j} - y_{t-j} + \theta^\star)^2 \right] $$
$$ = C_1 \left( (1 - \eta)^{2t} (\theta_0 - \theta^\star)^2 + \frac{\eta}{2 - \eta} \left(1-(1 - \eta)^{2t}\right)\left[\frac{\alpha^2 \Gamma(3 / \beta)}{\Gamma(1 / \beta)} + \left( 1 - \frac{\mu_2 - \mu_1}{2} \right)^2\right] \right) $$
The last two equalities are due to the fact that for $(x, y) \sim G'$:
$$ \mathbb{E}\left[ x - y + \theta^\star \right] = 0 $$
$$ \mathbb{E}\left[ (x - y + \theta^\star)^2 \right] = \frac{\alpha^2 \Gamma(3 / \beta)}{\Gamma(1 / \beta)} + \left( 1 - \frac{\mu_2 - \mu_1}{2} \right)^2 $$
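The two moment identities can also be checked by Monte Carlo; the following sketch (ours, with arbitrary parameter values) estimates both moments for samples from $G'$:

```python
import math
import numpy as np

# Monte-Carlo check (our sketch, arbitrary parameters) of the identities
#   E[x - y + theta_star] = 0
#   E[(x - y + theta_star)^2] = alpha^2 Gamma(3/beta)/Gamma(1/beta)
#                               + (1 - (mu2 - mu1)/2)^2
mu1, mu2, alpha, beta = -1.0, 3.0, 0.8, 1.5
theta_star = -(mu1 + mu2) / 2

rng = np.random.default_rng(42)
n = 400_000
y = rng.choice([-1.0, 1.0], size=n)
w = rng.gamma(1.0 / beta, size=n)          # gamma trick for N(mu, alpha, beta)
s = rng.choice([-1.0, 1.0], size=n)
x = np.where(y < 0, mu1, mu2) + s * alpha * w ** (1.0 / beta)

z = x - y + theta_star
predicted = alpha**2 * math.gamma(3 / beta) / math.gamma(1 / beta) \
    + (1 - (mu2 - mu1) / 2) ** 2
```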
---
Rebuttal Comment 3.1:
Comment: Thank you for the detailed proof. I would like to increase my rating to 7.
---
Reply to Comment 3.1.1:
Comment: We appreciate your insightful feedback and thorough review of our paper. Your comments have been pivotal in enhancing our work and refining the proposed framework. We are committed to addressing any remaining concerns and welcome further discussions or clarifications. | Summary: The authors propose theoretically motivated dynamic distilled datasets. Using these transportable representations, the authors show that they can outperform the original dataset and, in some cases, even the complete dataset.
Strengths: - The method has a sound theoretical basis; the proofs presented, although for a narrow setup, are quite interesting.
- The setup for experiments is well defined.
- The results show extensive performance improvement.
Weaknesses: - The writing style could be improved to make the paper more approachable. The notations are often terse and lack details about symbols until later, especially for the proofs in the appendix.
- It might be more prudent to add experiments of larger scale.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weakness
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: The authors did not present a detailed study of where the approach can fail and limitation of their analysis.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > [W1] The writing style could be improved to make the paper more approachable. The notations are often terse and lack details about symbols until later, especially for the proofs in the appendix.
>
[R1] Thank you for your feedback. We will revise the paper to improve its clarity and make it more accessible. Specifically, we will provide more detailed explanations of symbols and notations earlier in the text, particularly in the proofs section of the appendix.
> [W2] It might be more prudent to add experiments of larger scale.
>
[R2] Thank you for your feedback. The ImageNet-1K dataset utilized in this study is already a substantial and challenging benchmark in the field of dataset distillation [a,b,c,d,e,f,g]. We have conducted additional large-scale experiments, as detailed in the attached PDF and our response to Reviewer zbwg.
Our additional experiments detailed in the attached PDF include the following: (1) Utilization of the higher resolution dataset CelebA-HQ (1024 × 1024). (2) Implementation of a continual learning task. (3) Comparisons with more state-of-the-art (SOTA) baselines. (4) Reporting of computation time and peak GPU memory usage. (5) Segmentation task performance on the VOC 2012 dataset. (6) Evaluation on the larger ImageNet-21K dataset.
[a] Sun, Peng, et al. "On the diversity and realism of distilled dataset: An efficient dataset distillation paradigm." CVPR 2024.
[b] Lee, Yongmin, and Hye Won Chung. "SelMatch: Effectively scaling up dataset distillation via selection-based initialization and partial updates by trajectory matching." ICML 2024.
[c] Lee, Dong Bok, et al. "Self-supervised dataset distillation for transfer learning." ICLR 2024.
[d] Chen, Xuxi, et al. "Data distillation can be like vodka: Distilling more times for better quality." ICLR 2024
[e] Yin, Zeyuan, Eric Xing, and Zhiqiang Shen. "Squeeze, recover and relabel: Dataset condensation at imagenet scale from a new perspective." NeurIPS 2023 Spotlight.
[f] Shao, Shitong, et al. "Generalized large-scale data condensation via various backbone and statistical matching." CVPR 2024 Highlight.
[g] Shao, Shitong, et al. "Elucidating the Design Space of Dataset Condensation." Preprint 2024.
---
Rebuttal Comment 1.1:
Title: Thanks
Comment: Thank you reviewers for your comments.
I will keep my original score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your prompt response and for taking the time to review our submission. We greatly appreciate your feedback and respect your decision to maintain your original score. Should you have any further comments or require additional clarification, we remain available to address any concerns. | Rebuttal 1:
Rebuttal: Dear Reviewers,
Thank you again for your constructive comments, which have been very helpful in improving our paper.
During the rebuttal period, we have addressed your concerns in detail in our responses. We have also provided additional experimental results in the attached PDF.
Thank you very much for your precious time and attention.
Best wishes,
Authors of Paper 19675
Pdf: /pdf/6dbe979d678cc700364d2c0e12c942fc5e9d6804.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
TFS-NeRF: Template-Free NeRF for Semantic 3D Reconstruction of Dynamic Scene | Accept (poster) | Summary: The author proposed a novel, time-efficient, template-free NeRF-based method for 3D dynamic scene reconstruction, focusing on capturing detailed explicit geometry for each entity in the scene. Extensive experiments demonstrate the efficiency of the proposed method.
Strengths: 1. The paper is well-organized, and the font size in the figures is appropriate, making it easy for readers to follow.
2. The method section is straightforward, providing a detailed introduction to each part, including sufficient details on the network, training, and loss weights.
Weaknesses: 1. The author mentioned that one of the contributions is time efficiency. Based on the description in lines 183-202, it seems the proposed method reduces computation to accelerate training. However, Figure 6 only shows the convergence speed. It would be more informative if the author could report the time consumed per iteration.
2. The information on how to obtain the semantic masks is missing. For example, in RoDynRF [1], the author uses optical flow and Mask R-CNN to obtain the segmentation masks.
3. The work focuses more on dynamic scenes with human subjects, but the title gives the impression that the method is designed for general dynamic scenes.
4. From Figures 2 and 3, it appears that the proposed method requires a 3D skeleton as part of the inputs. Does this mean that the proposed method does not work for scenes without humans or datasets lacking 3D skeleton information?
5. In the bottom part of Table 3, under the Dist. Acc column, HyperNeRF is the second-best method.
[1] Liu, Yu-Lun, et al. "Robust dynamic radiance fields." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weaknesses.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Time consumed per iteration:** Thank you for bringing this to our attention. Our approach takes around 0.3 sec/frame with INN, compared to 0.9 sec/frame with the Broyden approach used in TAVA. We will include this information in our final paper.
**Information on how we obtain semantic masks:** Please refer to the *Author Rebuttal* section for the response.
**Gives impression of general dynamic scenes, but works on human subjects:** Please refer to the *Author Rebuttal* section for the response.
**Method does not work for scenes without humans or datasets lacking 3D skeleton information:**
We acknowledge that our method relies on 3D skeleton information to constrain the network and produce accurate shapes and geometries of the scene objects. However, this 3D skeleton information can be easily obtained for both rigid and non-rigid objects. For rigid objects, skeleton joints are derived from uniformly sampled 3D points along the x, y, and z axes, passing through the center of the 3D bounding box (obtained from multiview semantic masks) surrounding the object. For non-rigid objects, 3D skeletons can also be easily acquired using available tools (e.g. [3],[6], etc.).
**Table 3, HyperNeRF is second best:** Thank you for pointing out the error. We will rectify the error in the final submission.
[3] Yu Sun, Qian Bao, Wu Liu, Yili Fu, Michael J. Black, and Tao Mei. Monocular, One-stage, Regression of Multiple 3D People. In ICCV, 2021. \
[6] Alexander Mathis, Pranav Mamidanna, Kevin M. Cury, Taiga Abe, Venkatesh N. Murthy, Mackenzie W. Mathis, and Matthias Bethge. Deeplabcut: markerless pose estimation of user-defined body parts with deep learning. Nature Neuroscience, 2018.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed explanation. The authors have addressed all my questions, and I have decided to raise my score to borderline accept.
---
Reply to Comment 1.1.1:
Comment: Thank you for taking the time to review the rebuttal. We're glad to hear that our explanations were helpful and that you've decided to raise your score. | Summary: This paper propose TFS-NeRF to solve the semantic reconstruction of dynamic scenes.
Strengths: Experimental results on multiple entity and deformable entity reconstructions are good.
Weaknesses: It is very hard to follow the storyline of the introduction.
Unclear contributions. In Lines 81 and 91, the paper claims it can reconstruct multiple dynamic entities with complex interactions, but in the experiments the presented results are mostly from one person/animal with a single object, so it is hard to judge whether the proposed model can process multiple dynamic entities with complex interactions. Also, the authors claim the proposed model can address the occlusion challenge in Line 81, but there are no experiments on it. Besides, the paper claims it can reconstruct the semantics of dynamic scenes by using semantic masks. However, such semantic reconstruction has already been addressed by previous works like Semantic Flow [1], which makes the contribution of this paper in this respect hard to judge.
Missing important method details. This paper proposes a semantic-aware ray sampling strategy and states that the points are separated into a deformable object set and a non-deformable object set, but does not present any details about the separation process. After reading the paper, the reviewer guesses it separates the points on the rays based on the semantic labels of the pixels the rays travel through. However, this may not be correct, since different points on the same camera ray may belong to different sets (e.g., a ray traveling through a dynamic person and a static wall behind the person). It is very hard to understand the entire pipeline of semantic reconstruction without such details.
Writing should be significantly improved. There are many informal and unclear expressions in this paper:
- Line 405: Takeaways -> Conclusion
- Figure 2: Overview of the system-
- Line 229: unclear expression of R^{1+256}
- Line 221: why "it helps capture better articulation or deformation and surface reconstruction under motion?"
- Tables 5 and 6: what is the meaning of "->"
- Line 216: Given,
- Lines 98, 112, 136: the usage of : should be the same
Reference:
[1] Tian, Fengrui, Yueqi Duan, Angtian Wang, Jianfei Guo, and Shaoyi Du, Semantic Flow: Learning Semantic Fields of Dynamic Scenes from Monocular Videos, ICLR 2024.
Technical Quality: 1
Clarity: 1
Questions for Authors: See above
Confidence: 4
Soundness: 1
Presentation: 1
Contribution: 1
Limitations: Unclear expressions and limited contributions
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Hard to follow introduction:** Thank you for your feedback regarding the introduction. We apologize if the storyline was challenging to follow. We will carefully review and revise the introduction to ensure it is clearer and more cohesive. Your input is valuable, and we appreciate your efforts to help us improve our work.
**Claim on multiple dynamic entities:** Please refer to the *Author Rebuttal* section.
**Claim on addressing occlusion challenge, but no experiment:**
Thank you for the insightful review. We have included an experiment (**Figure 3 in the attached PDF in Author Rebuttal section**).
Our goal is to generate a 3D reconstruction of the scene without using any template model. Unlike methods that use the T-pose or A-pose of SMPL models as canonical representations to handle self-occlusions, we select a keyframe from the sequence as the canonical pose. Consequently, if there is no frame in the sequence without occlusion, the SDF network struggles to capture the occluded geometry in the canonical space.
To address this occlusion challenge, we use a strategy that disentangles the motions of distinct entities. This involves semantic-aware sampling and distance-based encoding of 3D points on the rays. Additionally, we employ separate SDF prediction networks for each entity, which helps mitigate the issue. Our overall design helps mitigate the occlusion problem.
**Contribution compared to SemanticFlow** Thank you for mentioning this method. SemanticFlow primarily focuses on semantic rendering and scene editing. In contrast, our work is focused on semantic geometry reconstruction. Techniques that excel in rendering may not necessarily perform well in 3D geometry reconstruction ([4], [5], [7]). Therefore, our primary emphasis has been on methods that are strong in geometry reconstruction. However, we are considering evaluating this method to validate its effectiveness and to determine if it could be integrated into our final submission if appropriate.
**Separates the points on the rays based on the semantic labels of the pixels** We apologize for not explaining this approach clearly. We utilize information from both 2D semantic maps and 3D skeletons. While the rays are sampled in image space within 2D bounding boxes around each entity, we also encode every 3D point on the sampled rays based on its distance from the 3D skeletons of the individual entities. As detailed in lines 230-238 of the initial submission, to provide the SDF prediction network with information about which object a point belongs to, we assign a semantic label to each 3D point. This is defined as a weightage, $\omega^j = \exp(-dist^2/\sigma^2)$, based on the distance of the point from the nearest 3D joints in the per-entity canonical skeletons $\mathbf{J^j_{p0}}$.
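As an illustration of this distance-based encoding (our sketch; the function name, per-entity layout, and $\sigma$ value are our assumptions, not the paper's), the weight $\omega^j = \exp(-dist^2/\sigma^2)$ can be computed for each 3D sample point against each entity's canonical skeleton joints:

```python
import numpy as np

def semantic_weights(points, joints_per_entity, sigma=0.1):
    """For each 3D point, compute w^j = exp(-dist^2 / sigma^2), where dist is
    the distance to the nearest canonical-skeleton joint of entity j.
    points: (P, 3) array; joints_per_entity: list of (J_j, 3) arrays.
    Returns a (P, n_entities) array of weights."""
    weights = []
    for joints in joints_per_entity:
        # pairwise distances from every point to every joint of this entity
        d = np.linalg.norm(points[:, None, :] - joints[None, :, :], axis=-1)
        nearest = d.min(axis=1)                 # distance to nearest joint
        weights.append(np.exp(-nearest**2 / sigma**2))
    return np.stack(weights, axis=1)
```

A point lying exactly on one entity's joint gets weight 1 for that entity and a near-zero weight for a distant entity.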
**Formatting issues** Thank you for the detailed reviews and for pointing out these errors in the manuscript. We will rectify all these errors in the final paper.
[4] Huang B, Yu Z, Chen A, Geiger A, Gao S. 2D Gaussian splatting for geometrically accurate radiance fields. In ACM SIGGRAPH 2024 Conference Papers, 2024 (pp. 1-11). \
[5] Guédon A, Lepetit V. SuGaR: Surface-aligned Gaussian splatting for efficient 3D mesh reconstruction and high-quality mesh rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024 (pp. 5354-5363). \
[7] Martin-Brualla R, Radwan N, Sajjadi MS, Barron JT, Dosovitskiy A, Duckworth D. NeRF in the wild: Neural radiance fields for unconstrained photo collections. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021 (pp. 7210-7219).
---
Rebuttal 2:
Comment: Dear Reviewer 3yt5,
As the discussion period is drawing to a close, we would greatly appreciate it if you could let us know if there are any further clarifications or additional information we can provide to assist with any remaining questions.
We have addressed the concerns you raised regarding unclear expressions for multiple dynamic entities, the lack of clarity on semantic-aware ray sampling, the perceived limited contribution compared to the Semantic Flow paper, and the missing experiments related to our claim on occlusion handling. Detailed responses to all these concerns are included in the **Author Rebuttal** and the review-specific rebuttal section. Additionally, we have added new experiments on occlusion handling in the attached PDF.
Thank you once again for your thorough review and valuable suggestions on paper writing. We will ensure these points are addressed in the final version of the paper.
---
Rebuttal Comment 2.1:
Comment: Dear authors,
Thank you for your effort in the rebuttal phrase. Your rebuttal has addressed some of my concerns.
As solving the occlusion problem is one of the contributions mentioned in the paper, although Figure 3 in the rebuttal shows good results, using only one sample is still weak evidence for this contribution.
Although there is no further dataset with multiple dynamic entities and the experiments are conducted on 2 entities, claiming a contribution on generic scenes may be somewhat overstated.
Although Lines 230-238 illustrate the process of semantic label assignment, Lines 175-177 are difficult to understand because they require a forward reference to obtain that information.
Besides I am still confused about the "->" in Tables 5 and 6 and the unclear expression in Line 229.
I feel more positive about this paper, but I still think the paper's contribution is not well-stated and the paper cannot be accepted in its current version due to the writing issues.
Thank you for your effort in the rebuttal again.
---
Rebuttal 3:
Comment: Thank you for your feedback.
- We would like to emphasize the main contributions of our paper:
- We present a time-efficient, template-free NeRF-based 3D reconstruction method for dynamic scenes involving two interacting entities.
- We focus on the semantic reconstruction of dynamic scenes, emphasizing the detailed and explicit geometry of each entity within the scene. \
While our system does address occlusion challenges that typically arise during entity interactions, it's important to note that handling occlusion is not the primary contribution of our work, and we did not claim it as such. Since this is not a main contribution, it should not be a reason for an unfavorable rating.
- We agree that "generic scene" or "multiple entities" terms can be misleading, and we are willing to revise these phrases. However, we believe these are minor adjustments.
- We agree that Lines 175-177 might be difficult to understand as they require forward reference for the details on label assignment. We believe this also needs only a minor revision, and we are happy to make these changes in the revised version.
- Regarding "$\downarrow$ Methods/Metric $\rightarrow$" in Tables 5 and 6: The $\rightarrow$ symbol indicates that the columns represent metrics. "$R^{1+256}$" defines that the SDF prediction network predicts both an SDF value and a global feature representation of the scene (as used in general NeRF). We will clarify these in the revised paper. These are straightforward fixes and are considered minor comments. | Summary: The paper addresses the problem of reconstructing dynamic environments for arbitrary rigid, non-rigid, or deformable entities. The authors propose a template-free 3D semantic NeRF for dynamic scenes, which employs an Invertible Neural Network (INN) for LBS prediction, and optimizing per-entity skinning weights based on semantic masks. The experimental results show high-quality reconstructions of complex interactions with improved training efficiency compared to existing methods.
Strengths: - The proposed method is able to reconstruct arbitrary non-rigid objects by utilizing the TAVA framework.
- Disentanglement of objects improves the reconstruction quality by predicting LBS for each entity.
- The proposed framework outperforms existing template-free NeRF methods on various datasets containing different non-rigid and rigid objects.
- The paper is clearly written and easy to follow.
Weaknesses: - The paper emphasizes semantic-aware ray sampling as a key contribution but lacks details on how the semantic masks are generated.
- The framework heavily relies on existing methods (TAVA, INN), limiting the novelty and distinctiveness of its contribution.
Technical Quality: 3
Clarity: 2
Questions for Authors: - How is the semantic masks calculated? How is objects classified as rigid or non-rigid?
- During the training, are the same number of rays sampled for rigid and non-rigid objects?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: - The method is not scalable for the scenes more than two entities.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **How is the semantic masks calculated:** Please refer to the *Author Rebuttal* section.
**Limiting novelty and distinctiveness w.r.t TAVA, INN:** Our method focuses on the research gap of producing semantic 3D reconstruction of dynamic scenes under multiple object interactions. While our problem setup of reconstructing the scene guided by a 3D skeleton is similar to TAVA, extending this concept for various object interactions is not trivial. Moreover, TAVA relies on an iterative method for solving the forward LBS, which is not time-efficient. In our method, we utilize INN to make the solution more time efficient, which makes it even more challenging under multiple object interactions setup, causing occlusion and different motions.
**Classifier for rigid, non-rigid objects:** Thank you for your question. Based on the semantic masks, it is indeed possible to infer whether an object is rigid or non-rigid. We do not explicitly employ a separate classifier for this purpose. Instead, our approach leverages the information provided by the semantic masks to distinguish between rigid and non-rigid objects.
**Number of rays sampled for rigid and non-rigid objects:** The rays are sampled in a 60:40 ratio for the non-rigid and rigid objects, respectively.
---
Rebuttal Comment 1.1:
Comment: I appreciate the author's comments that helped clarify the paper. I initially did not assume that the authors used ground truth segmentation masks for the experiments. However, given that the method uses ground truth segmentation and 3D pose, it seems that the comparison in Tables 2, 3, and 4 is unfair, as the compared methods do not require segmentation masks or 3D skeleton information. Although the authors provided performance results using YOLO and SAM models in the rebuttal, the improvements over dynamic NeRF algorithms are marginal. Considering that the algorithm uses additional information (3D skeleton, skinning model, etc.), the experimental results diminish the contribution of the paper. Based on this and other reviewers' comments, I am downgrading my rating to borderline accept.
---
Rebuttal 2:
Comment: Sorry if there is any confusion regarding the ground-truth segmentation masks.
- All the methods (SOTA and ours) in Tables 2, 3, and 4 in the initial submission are trained with ground-truth segmentation masks. So, the values for our method given in Table 1 of the attached PDF in the rebuttal are not directly comparable with the SOTA metrics in Tables 2, 3, and 4 in the initial submission.
- We acknowledge that we require 3D pose information, which is not required by the SOTA methods. However, using a 3D skeleton helps achieve much better reconstruction quality (as can be seen in the qualitative results of Figure 5 in the main paper). Without any constraints on the structure of the entities, it is difficult to achieve good reconstruction quality, especially when multiple entities are present and undergoing large motions. It should also be noted that using predicted poses does not affect the quality of the reconstruction (Figure 1 in the rebuttal PDF); the decrease in metric values is due to inaccuracies in the predicted poses.
- We would also like to emphasize another point, that we do not use any off-the-shelf skinning model (like SMPL). Rather we predict the skinning weights (Figure 3 in the main paper) using an MLP and train an end-to-end network for the reconstruction.
---
Rebuttal Comment 2.1:
Comment: Thank you for the clarification. I understand that the baseline methods also utilized ground truth masks. Regarding Tables 2 and 3, were the baseline NeRF methods trained separately on each entity (e.g., human and object), or was a single NeRF model used for the entire scene?
---
Rebuttal 3:
Comment: Thank you for your question. Regarding Tables 2 and 3, a single NeRF model was trained for the entire scene, which includes both the human and the object.
---
Rebuttal Comment 3.1:
Comment: Thank you. We are more than happy to clarify or address any additional questions you may have. If we are able to satisfactorily address the concerns that led to a lower rating, would you kindly consider revisiting the rating? | Summary: This paper proposes TFS-NeRF, a semantic-NeRF framework that requires no prior templates of dynamic scenes for 3D reconstruction. Guided by INN-driven LBS prediction and semantic-aware ray sampling, TFS-NeRF considers deformable and non-deformable parts separately during geometric learning but composites them to learn appearance, benefiting from a self-supervised RGB reconstruction loss. Experiments on human-object and animal-centric dynamic videos demonstrate the advantages of the proposed method against the chosen baselines.
Strengths: -The overall paper is well motivated and easy to follow. The qualitative results and supplementary videos demonstrate the promising reconstruction performance on dynamic scenes with multiple entities.
-The experiments are extensively evaluated on several public benchmarks, and the ablation studies highlight the unique contributions of several design choices.
Weaknesses: My main concerns come from the lack of clarification on several key components:
(1) As a semantic-NeRF framework, what is the influence of the quality of the input masks? For labels from an imperfect 2D predictor, what is the impact on the final results, since the label quality may affect the sampling quality for deformable and non-deformable rays?
(2) Similarly, what is the robustness to pose accuracy, as poses are involved in both ray generation and the pose loss?
Discussion on the above two points could better strengthen the practicality of TFS-NeRF.
(3) As this is a rapidly growing area, the chosen baselines do not include the latest methods such as HexPlane, Tensor4D, or GS-based ones, which would highlight the advantages of using INNs and compositional rendering, as well as the reconstruction quality.
(4) As to LBS prediction, what are the unique advantages of applying an INN over other types of NN modules (e.g., lossy CNN auto-encoders)? What are the efficiency or performance boosts compared to standard CNNs?
(5) How are the initial values of B_i obtained to compute the initialization value of x_c for arbitrary objects?
(6) As the paper claims to be template-free and able to deal with generic scenes, it would be more exciting and convincing to see other types of interactions, for example, common daily objects (e.g., a door or a cabinet opening and closing), to really show the advantages of prior-free deformations of general scenes.
(7) It would be good to also report the appearance quality or view-synthesis performance, though I understand the focus of TFS-NeRF is on 3D reconstruction.
(8) Formatting needs to be improved. For example, it is weird that in Sec. 3.B the authors put so much related-work description. There are also improper indents, e.g., at Lines 282 and 284.
Technical Quality: 3
Clarity: 2
Questions for Authors: Please see the weakness section above on the clarification on the robustness to masks and pose qualities, as well as more in-depth discussions towards recent baselines and more general scenes.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Limitations are properly mentioned in the main paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Robustness to mask accuracy:** Please refer to the *Author Rebuttal* section.
**Robustness to pose accuracy:** We agree that for the practicability of TFS-NeRF it is important to evaluate this aspect, and we apologize for not presenting it in the initial submission. We have added quantitative results for the reconstruction quality of our method using predicted 3D poses (**Table 1 in the attached PDF in the Author Rebuttal section**). We generate these 3D poses using the state-of-the-art 3D pose estimation network [3]. As the results show, our method needs good-quality 3D poses to constrain the shapes and motions of the individual elements. The reconstruction quality itself is not affected; the inaccuracies stem from the predicted poses (**Figure 1 in the attached PDF in the Author Rebuttal section**). We will include this evaluation in the final paper.
**Comparison with current methods (Hexplane, Tensor4D, GS-based methods):** Thank you for highlighting these methods. We primarily compared our approach with techniques that emphasize geometry reconstruction, such as NDR, ResField, D-NeRF, and HyperNeRF, which predict Signed Distance Functions (SDF) for underlying geometries. We agree that including Tensor4D in the comparison is valuable and have conducted additional experiments to assess its performance relative to our method (**Figure 2 and Table 2 in attached pdf in Author Rebuttal section**).
Our experiments show that our method provides superior reconstruction quality compared to Tensor4D. While Tensor4D's key innovation lies in its efficient 4D tensor decomposition, which represents dynamic scenes as a 4D spatiotemporal tensor, it does not account for the relative motions between scene elements. Instead, it captures the scene's overall motion. In contrast, our approach models the motion of individual elements and employs skeleton-guided reconstruction, leading to more accurate geometry.
Additionally, current GS-based methods for dynamic scenes primarily focus on rendering rather than surface reconstruction. Research indicates that methods emphasizing geometry reconstruction often fall short in rendering quality, and vice versa (refer to Table 3 in 2DGS [4] and Table 1 in SuGaR [5]). Similarly, HexPlane, a NeRF-based method, prioritizes image rendering over geometry reconstruction; the same argument applies to the NeRF-based method [7]. Therefore, these methods may not be directly comparable. Hence, given the limited time frame for the rebuttal, we are concentrating on the geometry-based methods. However, we are open to including these methods in the final comparison.
**INN vs Lossy auto-encoder:** We appreciate your feedback and would like to clarify any confusion. We chose INN because we expect to maintain the invertible mapping between the 3D point space and the deformation space, which is the property that INN naturally has. A lossy auto-encoder could also become a choice only if it has the same property. We left the model structure exploration as future work to enhance the semantic reconstruction task further.
**Computation of $B_i$ for arbitrary objects:** We apologize for the lack of clarity in the submission. For arbitrary non-articulated objects, $B_i$ is computed based on the 3D bounding boxes, which are derived from multiview semantic masks. Specifically, for non-articulated objects, $B_i$ represents the rigid transformation between the bounding boxes of the canonical frame (keyframe) and each input frame. We will ensure that this explanation is included in the final version of the paper to provide clearer context.
**Evaluation on more datasets (daily life objects with door close, opening):** Thank you for highlighting this important aspect. Given the limited time frame for rebuttal, we are unable to evaluate additional datasets at this moment. Moreover, we have already evaluated several benchmark datasets (including datasets like BEHAVE containing daily life objects) to showcase the efficacy of our method. We appreciate your understanding and interest in expanding the scope of our evaluation.
**Rendering quality:** We appreciate your feedback regarding the importance of appearance quality. We have compared our rendering quality with Tensor4D and obtained the following results: our method achieves a PSNR of 30.09 and an SSIM of 0.970, whereas Tensor4D achieves a PSNR of 35.78 and an SSIM of 0.985. Methods that show very good geometric reconstruction quality do not necessarily perform well in rendering, and vice versa ([4], [5], [7]); the improvements in geometric detail often reduce rendering quality, as the optimization process becomes more complex and computationally demanding. In our work, we aim for good reconstruction rather than rendering quality.
**Formatting issues:** Thank you for the detailed reviews and feedback. We will rectify all these errors in the final paper.
[3] Yu Sun, Qian Bao, Wu Liu, Yili Fu, Michael J. Black, and Tao Mei. Monocular, One-stage, Regression of Multiple 3D People. In ICCV, 2021.\
[4] Huang B, Yu Z, Chen A, Geiger A, Gao S. 2D Gaussian splatting for geometrically accurate radiance fields. In ACM SIGGRAPH 2024 Conference Papers, 2024, pp. 1-11.\
[5] Guédon A, Lepetit V. SuGaR: Surface-aligned Gaussian splatting for efficient 3D mesh reconstruction and high-quality mesh rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024, pp. 5354-5363.\
[7] Martin-Brualla R, Radwan N, Sajjadi MS, Barron JT, Dosovitskiy A, Duckworth D. NeRF in the wild: Neural radiance fields for unconstrained photo collections. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 7210-7219.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer Rbt4,
As we approach the end of the discussion period, we would like to know if there are any further clarifications or additional information we can provide to address your concerns.
Currently, to address the issue of robustness to input masks and pose accuracy, we have conducted additional experiments. The results are presented in **Table 1 and Figure 1 of the attached PDF**. These experiments analyze the quality of the reconstruction in relation to these factors.
Regarding the concern about comparisons with recent baselines, given the limited time for the rebuttal, we have focused on the most relevant baselines that specifically target 3D reconstruction. These comparisons are presented in **Table 2 and Figure 2 of the attached PDF**. We also provide a discussion in the **rebuttal section** below, explaining why the other baselines are not directly comparable.
Also, we have included all the details (in **rebuttal section**) about the method mentioned in the Weaknesses section.
Thank you once again for your time and would be glad to address any further queries.
---
Rebuttal Comment 1.2:
Comment: I thank the authors for providing a detailed response to most of my concerns during the rebuttal period, and I have read all the comments from all reviewers carefully.
The ablative studies on the quality of input priors are indeed important to this paper, as one main advantage of this submission is to get rid of explicit templates like SMPL. I would encourage the authors to conduct more extensive studies on various scenes to help readers know the performance bound of this paper.
The comparison to Tensor4D in terms of NVS and reconstruction is valuable and reveals performance gaps and tradeoffs. I would also expect the authors to give further motivations for the INN, both qualitatively and quantitatively.
Overall, though there are still minor points to be improved in the revision, I think the current form provides the necessary information and shows the potential for practical usage given noisy priors. I would like to raise my score to a positive rating.
---
Reply to Comment 1.2.1:
Comment: Thank you very much for your thoughtful feedback and for taking the time to carefully review our responses and the comments from all reviewers, as well as for raising your score. We will certainly take your suggestions to conduct more extensive studies across various scenes and perform additional experiments to clarify the motivations for using INN to strengthen these aspects in our revision. | Rebuttal 1:
Rebuttal: We are grateful for the constructive feedback from the reviewers and are pleased that they found our paper "well-motivated, well-organized, and easy to follow" (Rbt4, KrvP, UY9B). They highlighted that our paper includes a "detailed introduction, sufficient details, and extensive evaluation on several public benchmarks" (UY9B, Rbt4). Additionally, they noted that our method produces "promising reconstruction under multiple entities" (Rbt4, 3yt5). We appreciate these insights and will incorporate all the suggestions and additional experiments to enhance the writing and presentation of the paper. In this section, we try to address all the common concerns.
**How masks are generated and the influence of the quality of masks on results [Rbt4, UY9B]:** We thank the reviewers for highlighting this point. Following the baseline method TAVA, we use the semantic masks given in the respective datasets to generate the quantitative results for our method as well as the comparison methods. \
As per the suggestion, to evaluate the robustness of our method with respect to the accuracy of the semantic masks, we have provided an additional experiment with predicted semantic masks (**Table 1 in the attached PDF**). For this purpose, we used a combination of a state-of-the-art object detection network, YOLOv8 [1], to first detect the objects under reconstruction, and then SAM [2] to segment the respective objects within the bounding boxes predicted by YOLOv8. The results show that our method generates reconstruction results similar to those obtained with the *dataset-provided masks*, because in our method the reconstruction quality of the semantically separable geometries is not solely dependent on the quality of the input semantic masks. We utilize information from both the 2D semantic masks and the 3D skeletons: while rays are sampled in image space within 2D bounding boxes around each entity, we also encode every 3D point on the sampled rays based on its distance from the 3D skeletons of the individual elements under reconstruction (please refer to lines 230-238 of the initial submission for details). Hence, our method does not need a very accurate semantic mask as input. We will include this evaluation in the final paper.
**Evaluation dataset (mentioned generic scene but results given only on two entities, mostly focused on the human, or a single entity) [3yt5, UY9B]** We apologize for the lack of clarity and confusion in our presentation. By *multiple dynamic entities* in our paper, we mean more than one entity interacting with each other. Our method considers only two entities as of now. We have discussed this limitation under the *limitations and future directions* section (Lines 398-404 in the submission). The results on single entities (only human or animal subjects) are presented to demonstrate the capability of our reconstruction methods for *arbitrary deformable subjects* without using any template.\
For datasets with more than one entity, we selected two types: 1) human-object and 2) hand-object interaction datasets. To the best of our knowledge, there is no available dataset where an animal interacts with any object.\
We agree with the reviewer that the phrase multiple dynamic entities could be misleading, suggesting that our method can handle more than two entities. We will revise this phrase in the final paper to ensure clarity.
**[1]** Glenn Jocher, Ayush Chaurasia, and Jing Qiu. Ultralytics YOLO, January 2023.\
**[2]** Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao,
Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment anything. In Proceedings of the
IEEE/CVF International Conference on Computer Vision, pages 4015–4026, 2023.
Pdf: /pdf/3f0dffc3f9205933f29d324dfd5f92a69a7e8872.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Unique3D: High-Quality and Efficient 3D Mesh Generation from a Single Image | Accept (poster) | Summary: This paper introduces a novel method to reconstruct 3D from a single image. The method is two-stage: (i) 4 images corresponding to the orthographic views are first generated along with normal maps, (ii) then, the multi-view maps are used to initialize and optimize a mesh using differentiable rendering techniques which are adapted to this sparse-view setting. This second stage contrasts with recent image-to-3D methods which typically train large feed-forward reconstruction models. Extensive comparisons are conducted on (i) samples from original papers for a qualitative evaluation and (ii) on the standard Google Scanned Objects (GSO) dataset for quantitative results. In both cases, the proposed approach performs significantly better than the presented baselines.
Strengths: [S1] **Novelty.** The proposed approach is interesting and novel in different aspects. The first stage aims at generating multi-view images and normal maps from a single image; this idea is not novel per se and was introduced/studied by prior works like Wonder3D. However, it is worth noting that it technically differs a bit from Wonder3D and there is an additional effort to generate high resolution outputs. The second stage about 3D mesh reconstruction using differentiable rendering is elegant, effective and rather novel. In particular, it contrasts with recent image-to-3D methods which typically train large feed-forward reconstruction models tailored for this task.
[S2] **SOTA performances.** For both the qualitative and quantitative evaluations, the proposed approach showcases results that are significantly better than prior works. Visually, the method not only generates more detailed textures but also more accurate geometry.
[S3] **Sound experiments.** The authors conducted extensive experiments that are sound and validate the proposed method. They not only compare the method to an exhaustive set of state-of-the-art competitors both qualitatively and quantitatively, but also conducted ablation studies on some method components.
Weaknesses: [W1] **Unrigorous technical presentation.** While the high-level overview of the approach is clear, some parts of the technical presentation lack clarity and contain incoherences. For example:
- Eq (1) looks wrong: a depth map can be obtained by integrating gradients of normals not the normals themselves
- Eq (5) is incoherent and lacks clarity: is $i$ an integer or an image? If $\mathcal{I}$ is a set of images, $i \in \mathcal{I}$ is an image, but $V_M(v, img)$ is undefined. The rightmost part of the equation, $\sum_i V_M(v, \mathcal{I})$, does not make sense either. This part needs revision: $i$ cannot simultaneously be an integer indexing the ground-truth views $\mathcal{I_m}$, a sample view image from $\mathcal{I}$, and an integer indexing the sample views $\mathcal{I}$. I would define this process for a single target view $I_t$ given the source views $I_s^1, ..., I_s^N$, and then define the loss in Eq (6) using the sum over the pseudo ground-truth target views.
This lack of rigour harms readability and resolving these issues would greatly strengthen the paper quality.
[W2] **Missing ablation studies.** I would expect more ablation studies to better understand the impact of each component that are different from prior works. Currently, the two ablation experiments correspond to two very technical aspects which are a regularization term (called Expansion) and the explicit target technique. Experiments assessing the impact of the following components are missing:
- the image-to-multiview stage (e.g., compare results by replacing it with Wonder3D image-to-multiview stage)
- the super-resolution (e.g., compare results with/without super-resolution network)
- the multiview-to-3D stage (e.g., compare results by replacing it with Wonder3D multiview-to-3D stage)
Technical Quality: 3
Clarity: 3
Questions for Authors: Questions:
- Eq (2): how is the ground-truth mask computed?
- What happens when the input view does not correspond to the frontal orthographic view, e.g. say with elevation=20 and azimuth=45?
- L171 what are the details behind edge collapse/split/flip?
Remarks:
- the terms "wild image", "wild views" do not sound correct, the appropriate term would be "in-the-wild image" but this typically corresponds to random real-world images that one could find on internet or social medias, which is not the case here. I would suggest removing this aspect which is not necessary for the paper storyline
- the term "ExplicitTarget optimization" is not crystal clear out of context, I would suggest finding another term that clearly conveys what is under the hood, e.g. visibility-aware supervision
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes, there is a brief limitation section. Including visuals illustrating failure cases would greatly help the readers better understand the model limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your time and valuable comments, some of which are fundamental and in-depth suggestions that help us greatly improve our paper. To address your concerns, we present the point-to-point response as follows.
**Comment 1: Unrigorous technical presentation**
We appreciate your observation regarding the clarity of the notations and explanations in Equations (1) and (5). In response to your feedback, we will make the revisions in the updated manuscript to enhance understanding.
For Equation (1): here $d(i,j)$ denotes the value at coordinate $(i,j)$ in the depth map, the integration runs along the vertical line $y=j$, $\vec{n}(x)$ denotes the vector of the input normal field at position $x$, and $n_x$ is the component of the normal field $\vec{n}$ along the direction of the x-axis (a scalar function). The confusion might arise because the letter $x$ is used in two different contexts. To clarify, we will make the following adjustment: we will use the formula $d(i,j)=\sum_{t=0}^{i} n_x(t,j)$, where $d$ and $n_x$ are considered as matrices in the discrete version. This change aligns better with how the variables are handled in the code.
For Equation (5) (***ExplicitTarget***). Let $Avg(V, W) = \frac{\sum_i{V_i W_i}}{\sum_i{W_i}}$ represent the weighted average function, and $V_{M}(v, i): (\mathbb{N}^+, \mathbb{N}^+) \rightarrow \{0, 1\}$ represent the visibility of vertex $v$ in mesh $M$ under view $i$. $Col_M(v,i)$ indicates the color of vertex $v$ from viewpoint $i$. We compute the ExplicitTarget $ET$ of each vertex in mesh $M$ as
$$
ET_{M}(v) = \begin{cases}
Avg\left(Col_M(v,i),\; V_{M}(v, i)\, W_M(v, i)^2\right) & \text{if } \sum_{i} V_{M}(v, i) > 0, \\
\mathbf{0} & \text{otherwise,}
\end{cases}
$$
where $W_M(v, i) = -\cos(N_{v}^{(M)}, N_{i}^{(view)})$ is a weighting factor, $N_{v}^{(M)}$ is the vertex normal of $v$ in mesh $M$, and $N_{i}^{(view)}$ is the view direction of view $i$.
$ET_M$ weights each view's contribution by the viewing direction, so that views observing a surface at a grazing angle do not introduce significant errors (predictions for such skewed surfaces are usually inaccurate).
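In array form, this per-vertex weighted average can be sketched as follows (a minimal illustration with assumed array names; visibility and weights are presumed precomputed from rasterization, and the zero-color fallback handles never-visible vertices):

```python
import numpy as np

def explicit_target(colors, visibility, weights):
    """Visibility- and view-weighted per-vertex color target, following
    the description above (array names are illustrative assumptions).
    colors:     (V, K, 3) color of each vertex as seen from each view
    visibility: (V, K)    V_M(v, i), 1 if vertex v is visible in view i
    weights:    (V, K)    W_M(v, i) = -cos(vertex normal, view direction)
    """
    w = visibility * weights ** 2                  # V_M(v, i) * W_M(v, i)^2
    denom = np.maximum(w.sum(axis=1, keepdims=True), 1e-8)
    target = (w[..., None] * colors).sum(axis=1) / denom
    target[visibility.sum(axis=1) == 0] = 0.0      # never-visible: zero fallback
    return target
```

Squaring the weight, as in the equation above, further suppresses grazing-angle views relative to frontal ones.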
**Comment 2: Missing ablation studies**
We appreciate the depth of your feedback. To address your concern, we will enrich our paper with the following **additional ablation studies** to elucidate the contributions of each component within our model:
(a) ISOMER Module Analysis: We will **incorporate a comparative experiment** of the ISOMER module to better demonstrate its superiority over existing reconstruction algorithms.
(b) Explicit Target Algorithm Analysis: We will show **qualitatively how the ExplicitTarget algorithm improves** the final performance.
(c) Robustness analysis: we will **quantitatively study** the performance and differences of our method under non-front inputs.
(d) Resolution Impact Analysis: We will expand our study to **include a qualitative comparison** across various resolutions in order to demonstrate the differences between different resolutions.
**Question 1.** Thanks for your insightful question. The ground truth mask is determined based on the normal map predicted by the model. Predicted pixels with an RGB magnitude far from 1 are considered background.
**Question 2.** Thanks for your insightful question. To resolve your concerns, we **add a new test** with rotated objects in Table 2 to test robustness in non-front-facing views. The test results show that our method still performs well in this case, and even the geometry prediction is more accurate.
**Question 3.** We appreciate your attention to this matter. In response, we will **enhance the clarity and details** of our explanation in the revised version as follows:
- Edge Collapse: This operation is used to avoid and heal defects in the mesh. It involves selecting an edge within a triangle and collapsing it to the other edge, effectively merging the two triangles into a single triangle. This process can help to eliminate narrow triangles that might be causing issues in the mesh, such as those that are too thin to accurately represent the surface they are approximating. It prevents the creation of topological artifacts and maintains mesh quality.
- Edge Split: This is the opposite of edge collapse. In an edge split, an edge that is longer than a specified maximum length is divided in two, creating a new vertex at the midpoint of the edge. This operation is used to refine the mesh, ensuring that the local edge length stays close to the optimal length. It helps to maintain the quality of the mesh by avoiding edges that are too long, which could lead to an inaccurate representation of the surface.
- Edge Flip: Edge flip is an operation that adjusts the connectivity of the mesh to improve its quality. It involves flipping an edge within a triangle to connect two non-adjacent vertices, effectively changing the triangulation of the mesh. This can help to maintain the degree of the vertices close to their optimal value, which is typically six for internal vertices (or four for boundary vertices).
These operations aim to improve mesh quality, avoid defects, and ensure an accurate representation of the target geometry.
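As a toy illustration of one of these passes (our own simplified sketch, not the actual ISOMER implementation), an edge split that subdivides each triangle whose longest edge exceeds a threshold might look like:

```python
import numpy as np

def split_long_edges(verts, faces, max_len):
    """One remeshing pass: split each triangle whose longest edge
    exceeds max_len by inserting that edge's midpoint. Toy version:
    a real remesher would also update adjacency, iterate to a fixed
    point, and interleave collapse/flip passes."""
    verts = [np.asarray(v, dtype=float) for v in verts]
    new_faces = []
    midpoint = {}  # undirected edge (i, j) -> index of its midpoint vertex
    for a, b, c in faces:
        tri = (a, b, c)
        lengths = [np.linalg.norm(verts[tri[k]] - verts[tri[(k + 1) % 3]])
                   for k in range(3)]
        k = int(np.argmax(lengths))
        if lengths[k] <= max_len:
            new_faces.append(tri)            # triangle is fine, keep it
            continue
        i, j, opp = tri[k], tri[(k + 1) % 3], tri[(k + 2) % 3]
        key = (min(i, j), max(i, j))
        if key not in midpoint:              # share the midpoint with the twin face
            midpoint[key] = len(verts)
            verts.append(0.5 * (verts[i] + verts[j]))
        m = midpoint[key]
        new_faces.append((i, m, opp))        # replace the face by its two halves
        new_faces.append((m, j, opp))
    return np.array(verts), new_faces
```

Edge collapse and edge flip would be implemented analogously; a production remesher additionally guards against creating degenerate triangles.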
**Remark 1.** Thanks for pointing out the issue. We will **modify the statement** in our revision.
**Remark 2.** Thanks so much for your great suggestions. We are definitely considering adopting a name that is more straightforward and easily comprehended.
We appreciate your detailed comments. We will thoroughly revise our paper. Thanks again for your time and in-depth suggestions.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: Thanks for the detailed answers. I have carefully read the rebuttal and my concerns were mostly addressed. One missing aspect is a discussion about failure cases and I strongly recommend adding some in the revised paper. In general, such an analysis is easy to build, it provides strong insights about the method performances and it can really drive the next iterations. Overall, I will keep my rating.
---
Reply to Comment 1.1.1:
Title: Thank You
Comment: Dear Reviewer,
We sincerely appreciate your thoughtful feedback and the time you have taken to carefully review our rebuttal. We are pleased to hear that our detailed responses have addressed most of your concerns.
Regarding your suggestion to include a discussion on failure cases, we fully agree with the value such an analysis would bring to the paper. It not only enhances the robustness of our method but also provides crucial insights for future improvements.
In the revised version, we will incorporate a comprehensive analysis of failure cases. This will include:
* Analysis of inconsistent predictions across views.
* Generation of geometric structures in unseen areas.
* Simple colorize algorithm.
* Inaccurate predicted normal maps.
We believe that this detailed analysis will significantly strengthen our paper and contribute to the advancement of the field.
Once again, thank you for your constructive feedback. We look forward to incorporating these improvements and hope to meet your expectations in the revised manuscript.
Best regards,
Authors
---
Rebuttal 2:
Title: Correction to Typo
Comment: Sorry for the messed up Latex in Equation (1) above, here is a corrected version.
> Here $d(i,j)$ denotes the value of coordinate (i,j) in the depth map, while $x$ is the integration process along the vertical line $y=j$, $\vec{n}(x)$ denotes the vector of the input normal field at the position $x$. $n_x$ is the component of the normal field $\vec{n}$ along the direction of the x-axis (which is a scalar function).
> The confusion might arise because the letter $x$ is used in two different contexts. To clarify, we'll make the following adjustments: We will use the formula $d(i,j)=\sum_{t=0}^{i} n_x(t,j)$, where `d` and `n_x` are considered as 2D arrays in the discrete version. This change aligns better with how the variables are handled in the code.
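In code, the discrete version described above amounts to a cumulative sum of the x-component of the normal map along each column. A minimal sketch under that reading (the function name is ours, not from the paper's code):

```python
import numpy as np

def depth_from_normals(normal_map):
    """Coarse depth by integrating the x-component of the normals,
    i.e. d(i, j) = sum_{t=0}^{i} n_x(t, j) in the discrete version.
    normal_map: (H, W, 3) array of normals."""
    n_x = normal_map[..., 0]           # scalar field n_x as a 2D array
    return np.cumsum(n_x, axis=0)      # running sum down each column j
```

In practice one might also integrate along the other axis and reconcile the two estimates, but the cumulative sum captures the stated formula.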
---
Rebuttal Comment 2.1:
Comment: Dear Reviewer inWh:
Thanks for reviewing this work. Would you mind to check authors' feedback and see if it resolves your concerns or you may have further comments?
Best, AC
---
Rebuttal 3:
Title: Thank You for Your Thorough Review and Invitation for Further Discussion
Comment: Dear Reviewer,
Thank you for your patience and thorough review. We have addressed the issues you raised with detailed responses. We welcome you to engage in further discussion with us. Your insights are invaluable, and we are eager to clarify any points or provide additional information as needed.
Looking forward to your feedback and any further questions you may have.
Best regards, Authors | Summary: The paper focuses on single-image-to-3D. Given a single image, it first finetunes Stable Diffusion Image Variations models to generate orthogonal multi-view RGB/normal images and ControlNet-Tile models to enhance resolution. Then, it proposes the instant and consistent mesh reconstruction algorithm (ISOMER). It has three stages: 1) Use the estimated depth (from predicted normal maps) of the front and back views to initialize the geometry. 2) Use differentiable rendering to optimize geometry with mask and normal loss. 3) Optimize geometry and vertex colors to fit the view-weighted color, where each view’s weight is the square of cosine between vertex normal and view direction. The total time required is claimed to be within 30 seconds. It compares with some recent approaches on 30 GSO objects and achieves better performance in terms of both visual and geometry quality.
Strengths: - **High-Resolution Appearance at Input View:**
It demonstrates improvements in the resolution of the generated appearances, especially at the input view. Unlike previous works that produced lower-resolution multi-view outputs, resulting in a lack of clarity and detail, this paper utilizes diffusion models to achieve effective super-resolution. This allows the generated views to maintain high resolution and exhibit more details (reimagined though).
- **Enhanced Geometry:**
The paper showcases impressive geometry, particularly in examples of garage kit figures. Technically, this work better leverages predicted normal map information to optimize geometry and color, surpassing previous models like Wonder3D, which also predicted normal maps. This enhanced utilization of normal maps carves more geometric details.
- **Many Details and Supporting Materials:**
It provides many network parameters, predicted meshes, and rendering videos, which are valuable for assessing the performance and effectiveness of the proposed methods.
Weaknesses: - **Limited Qualitative Results to Frontal View, Worse Side Views:**
The majority of qualitative results presented in the paper are focused on the frontal view, as it is prioritized by the approach. However, for non-frontal views, there are issues such as visible seams and artifacts. For example, the video in supplementary material shows several instances where characters have duplicated ears. Although the frontal view quality is enhanced, this improvement comes at the expense of other views, which suffer from more noticeable artifacts.
- **Potential Bottleneck by the Two-view Initialization of First Step**
Following the previous point, those artifacts are potentially influenced by the algorithm’s initialization process since only front and back views are used. Artifacts like dual-ear, if introduced in the first step of ISOMER, may not be easily corrected via subsequent refinement. What if the input view is not exactly frontal and more ill-posed? The artifacts can be even more severe.
- **Narrow Evaluation Scope, Limited Geometry Complexity:**
The paper predominantly evaluates small cartoon figures and animal models in terms of qualitative evaluation, with limited representation of real-world objects encountered in daily life. Real-world objects may have more complex geometry, where four orthogonal views are not enough to cover. Even for simple objects like a mug, the method may fail to generate a solid bottom and an empty interior given that they are unseen.
- **Pose Sensitivity and Distortion Issues:**
The network outputs exhibit significant distortions when input images are not taken from a frontal view with the elevation equal to zero. For instance, a machine example in the supplementary video shows clear distortion when the input image is taken from an elevated side angle. This indicates a lack of robust data augmentation during training, leading to heavy restrictions on input poses.
- **Oversimplified Texture Handling of Occluded Region:**
The approach to unseen vertex coloring is overly simplistic, using flood-fill colors to interpolate occluded areas. This may result in incorrect color transitions for complex geometries with significant occlusions. Even for simple cases, the bottoms or tops of objects are often unseen from 4 orthogonal views. Vertex colors in those regions are roughly smoothed out.
- **Small Evaluation Set:**
The evaluation set consists of only 30 objects, which is insufficient, especially considering that none of these baselines is purely optimization-based and their per-image inference times are no longer than a few minutes. A few hundred objects would be necessary. For example, GRM [63] uses 250 shapes for single-view reconstruction evaluation, providing more robust and reliable assessments.
- **Unclear Writing:**
The paper contains several sections where the writing is unclear. Please refer to the points listed in the Questions.
- **Lack of Quantitative Ablation Studies:**
It does not include any quantitative evaluation studies.
Technical Quality: 3
Clarity: 1
Questions for Authors: 1. Please properly cite, acknowledge, and compare with previous works if similar ideas are applied. For example, in the Multi-view Image Generation, two key points (L499-502) seem to be the same as ImageDream [15]. And, it also uses IP-Adapter (L515). A reference U-Net is also used in Zero123++ [46]. Please cite related work and explain the differences when introducing the proposed method. This helps readers understand how ideas are inherited and improved.
2. No notations in Eq. 1 are explained. I can understand the point. But it is not reader-friendly. Please delete it or explain it. The formula itself is also over-simplified.
3. Wrong references of figures, and algorithms. L181, L188, L557, etc.
4. In Algo. 1, the variable cnt is useless and not explained. The set ‘colored’ is initialized as empty but assumed to be initialized by C (L13). The authors may consider stating it more clearly: for invisible vertex coloring, it applies the flood-fill algorithm, where each vertex is colored by the average color of its neighboring colored vertices. Use proper terms and make your writing concise.
5. Is Expansion (L181) referring to applying normal-based loss L_normal?
6. L204: the result for vertex v → the predicted color of vertex v. Make it more clear.
7. Time cost of each ISOMER step.
Confidence: 5
Soundness: 3
Presentation: 1
Contribution: 2
Limitations: Many significant limitations of the approach are not discussed. Please refer to the Weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your time and valuable comments, some of which are fundamental and in-depth suggestions that help us greatly improve our paper. To address your concerns, we present the point-to-point response as follows.
**Comment 1: Limited Qualitative Results to Frontal View, Worse Side Views**
Thanks for bringing this comment up, and we acknowledge the presence of this issue. Our investigation indicates that it stems from occasional inconsistencies in the multi-view predictions, which is a challenge inherent in all current multi-view reconstruction solutions. We are actively exploring ways to mitigate these inconsistencies to enhance the robustness and reliability of our reconstruction process.
**Comment 2: Potential Bottleneck by the Two-view Initialization of First Step**
Thanks for your insightful question. As introduced in our main text, the initialization process is specifically designed to guarantee the correct overall topological structure and is unrelated to reconstruction details. The majority of artifacts are primarily derived from inconsistencies in multi-view predictions rather than the reconstruction process. As shown in Figure 9, our method can yield satisfactory results even with a sphere as the initial shape. Furthermore, Figure 5(a) demonstrates that the ISOMER reconstruction step is capable of correcting some of the artifacts introduced by multi-view predictions. This showcases the resilience and corrective capabilities of our approach in refining the final reconstruction output.
**Comment 3: Narrow Evaluation Scope, Limited Geometry Complexity**
Thanks for your insightful question. For a consideration of fairness, the samples we have chosen for qualitative evaluation are largely based on previous methods, thus not incorporating diverse real-world objects. However, in our quantitative evaluation using the GSO dataset, a set of real-world objects scanned by Google, our method significantly outperforms existing methods. Our approach excels in capturing fine details, showcasing our proficiency in reconstructing complex geometries with high fidelity.
Regarding geometry complexity, our method is comparable to existing works such as Wonder3D, OpenLRM, and InstantMesh. It's important to note that, like these methods, we cannot generate unseen features such as "a solid bottom and an empty interior of a mug" when these aspects are not present in the input views. This limitation is inherent to the field and represents a challenge for all methods dealing with incomplete view data.
**Comment 4: Pose Sensitivity and Distortion Issues**
Thanks for pointing out the issue. To resolve your concerns, we **add a new test** with rotated objects in Table 2 to test robustness in non-front-facing views. The test results show that our method still performs well in this case, and even the geometry prediction is more accurate.
**Comment 5: Oversimplified Texture Handling of Occluded Region**
We appreciate your detailed comments. We agree that our current method for handling colors in non-visible areas is quite straightforward. The simplicity of this approach allows for extremely efficient processing times, ensuring the overall efficiency and stability of our workflow. We are committed to exploring more sophisticated coloring techniques in future work to enhance the visual output of our reconstructions.
**Comment 6: Small Evaluation Set**
Thanks for your thorough comment. We chose 30 random objects in alignment with the standards set by previous work (CRM, SyncDreamer, Wonder3D). Given that SyncDreamer takes up to 50 hours to generate 30 objects and results from other papers show that SyncDreamer performs significantly worse than existing methods, we exclude SyncDreamer and **conduct an experiment** on the entire GSO dataset. The results for all methods are provided in Table 1.
**Comment 7: Lack of Quantitative Ablation Studies**
We are grateful for your comprehensive comments. Recognizing the need for a more robust ablation study, we have taken the following steps to enhance our original analysis:
(a) ISOMER Module Analysis: We will incorporate a comparative experiment of the ISOMER module to better demonstrate its superiority over existing reconstruction algorithms.
(b) Explicit Target Algorithm Analysis: We will show qualitatively how the ExplicitTarget algorithm improves the final performance.
(c) Robustness analysis: we will quantitatively study the performance and differences of our method under non-front inputs.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed rebuttal!
Comment 1/2: If the error mainly originated from multi-view inconsistency, why are there more seams and artifacts like dual ears at the sides instead of on the front view?
Comment 3: It was stated that the unseen features pose a challenge for all methods. However, I don't fully agree with this. For example, I think methods like CLAY (Rodin Gen-1) or One-2-3-45++ can address this issue effectively with 3D supervision, so such challenges wouldn't arise in these cases.
Comment 4: Are the input views in the test rendered at elevation=0? Are rotations only horizontal? Currently, this method shows noticeable distortion given non-zero elevation input views, which doesn't occur in works like InstantMesh.
---
Reply to Comment 1.1.1:
Title: Responses to More Questions
Comment: Comment 1/2:
> Thank you for your insightful observation regarding the presence of seams and artifacts. As mentioned, our code implementation was inspired by Wonder3D's code, where we assigned different weights to different views during the ExplicitTarget calculation: 2.0 for the front, 1.0 for the back, and 0.8 for the other sides. This weighting strategy helps mitigate artifacts on the front view, aligning more closely with human preferences by maintaining higher quality in the most visible areas. We believe this approach enhances the overall visual quality, especially in cases where the multi-view images are inconsistent.
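> As an illustration of the weighting just described, here is a minimal sketch (not the authors' actual implementation; all names are hypothetical) combining the stated per-view base weights with the cos² alignment between vertex normal and view direction mentioned in the paper summary:

```python
import numpy as np

# Hypothetical sketch: per-view base weights (front 2.0, back 1.0, sides 0.8)
# are assumed to be multiplied by cos^2 between vertex normal and view direction.
VIEW_WEIGHTS = {"front": 2.0, "back": 1.0, "left": 0.8, "right": 0.8}

def blend_vertex_color(normal, view_dirs, view_colors):
    """Weighted average of projected colors for one vertex.

    normal:      unit vertex normal, shape (3,)
    view_dirs:   dict name -> unit direction from vertex toward that camera
    view_colors: dict name -> RGB color projected from that view, shape (3,)
    """
    total_w, color = 0.0, np.zeros(3)
    for name, d in view_dirs.items():
        cos = max(np.dot(normal, d), 0.0)   # back-facing views contribute nothing
        w = VIEW_WEIGHTS[name] * cos ** 2   # cos^2 falloff, per the summary
        total_w += w
        color += w * view_colors[name]
    return color / max(total_w, 1e-8)
```

A vertex whose normal points straight at the front camera then takes the front view's color almost entirely, which matches the "prioritize the frontal view" behavior discussed above.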
Comment 3:
> Your perspective on the handling of unseen features by methods like CLAY (Rodin Gen-1) is insightful. Indeed, CLAY's explicit 3D representation and explicit 3D supervision effectively address the issue of unseen features, producing robust structures in invisible areas. While CLAY may not ensure high consistency with the input views, it excels in generating detailed unseen regions. Integrating such strengths into our future work is a promising direction for us.
> Regarding One-2-3-45++, its non-open-source nature prevents direct comparison. However, InstantMesh, which, strikingly similarly to One-2-3-45++, employs Zero123++ for six-view generation and subsequent SDF generation supervised by multi-view differentiable rendering, faces challenges such as generating "a solid bottom and an empty interior of a mug". For instance, InstantMesh fails to produce an empty interior of a mug because the interior cannot be observed from the surrounding views. This limitation suggests that One-2-3-45++ might also struggle with similar issues, as it does not inherently circumvent the problem of unobserved regions.
> CLAY is excellent work that cannot be ignored. However, it is important to note that CLAY was published on arXiv a week after our submission deadline, hence it was not included in our current discussion. Our work focuses more on the geometric details and external visual appearance of objects, in contrast with CLAY's emphasis on overall structural generation. We will incorporate a discussion of these related works into the refined version of our paper.
Comment 4:
> We acknowledge your concern about the potential distortion observed in non-zero elevation input views.
In the rebuttal appendix, the additional tests sample object rotations with $\mathrm{azimuth} \in U[-180, 180]$ and $\mathrm{elevation} \in U[-30, 30]$, rather than $\mathrm{elevation}=0$. We have examined the test cases and did not find significant distortion for non-zero-elevation input views. We will supplement the multi-view visualization results under these conditions in the revised version. It is worth noting that if random rotations are not included during model training, the trained model exhibits noticeable distortion for non-zero-elevation input views. Thank you for pointing this out!
Thank you very much for your questions and suggestions, and we welcome further discussion!
---
Rebuttal 2:
Title: Responses to questions
Comment: Question 1:
> We greatly appreciate your insightful feedback. In light of your suggestions, we will revise the introduction in the updated paper to clearly delineate our approach. Similar to 3D generation methods like Wonder3D and InstantMesh, our work draws inspiration from ImageDream and Zero123++ for multiview image generation. However, we employ two denoising models and a super-resolution model to accomplish multi-view generation with notable distinctions as follows:
> (a) Orthogonal View Generation: Our first denoising model, similar to ImageDream, generates four orthogonal views instead of perspective views, which simplifies the subsequent reconstruction process and enhances multi-view consistency. The choice of orthogonal views facilitates a direct geometric correlation between pixels. Moreover, we integrate learnable class embedding to encode view information, enhancing the model's ability to understand and process multi-view inputs.
> (b) IP-Adapter Utilization: Rather than being used in the Multi-view Image Generation step, our method employs an IP-Adapter in the Multi-view Image Upscale step. We incorporate a ControlNet model with the IP-Adapter to enhance multi-view details and achieve the targeted resolution.
> (c) Normal Prediction Setting: In the Normal Prediction module, we adopt a denoising model with a reference U-Net, akin to the concept in Zero123++. However, unlike Zero123++, which shares all weights between the reference U-Net and the main network, we utilize an independent pre-trained reference U-Net. We freeze all parts within the reference U-Net except for self-attention (Lines 544-545) to preserve the generalization ability from the pre-trained model.
Question 2:
> Thanks a lot for pointing this out! Here $d(i,j)$ denotes the value at coordinate $(i,j)$ of the depth map, $x$ parameterizes the integration along the vertical line $y=j$, and $\vec{n}(x)$ denotes the input normal field at position $x$. $n_x$ is the component of the normal field $\vec{n}$ along the x-axis (a scalar function).
> The confusion might arise because the letter 'x' is used in two different contexts. To clarify, we will use the formula $d(i,j)=\sum_{t=0}^{i} n_x(t,j)$, where `d` and `n_x` are treated as 2D arrays in the discrete version. This change aligns better with how the variables are handled in the code.
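> As a minimal illustration of the discrete formula $d(i,j)=\sum_{t=0}^{i} n_x(t,j)$, the depth map is simply a cumulative sum of the normal field's x-component down each column. This is only a sketch of the stated relation, not the paper's code:

```python
import numpy as np

def depth_from_normals(n_x):
    """Discrete d(i, j) = sum_{t=0}^{i} n_x(t, j).

    n_x: 2D array holding the x-component of the normal field.
    Returns the depth map d as a running sum along axis 0.
    """
    return np.cumsum(n_x, axis=0)
```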
Question 3:
> Thanks for pointing out the issue. We will correct these mistakes in the revision.
Question 4:
> We thank the reviewer for raising this question, and we regret that the lack of detailed comments on the appendix's algorithm caused confusion. The variable cnt is crucial as it determines the number of while-loop iterations at Line 11 (L11). It records the number of iterations before reaching Line 21 (i.e., all colors are applied), and afterward the iteration repeats cnt times to guarantee that the color-completion process finishes.
> Besides, we acknowledge the mistake at Line 3 (L3), where the initialization of 'colored' should indeed be based on the Inv. In the invisible vertex coloring, a flood fill operation is applied on the array 'colored' to fill the colors, while an ongoing laplacian smoothing is performed on the array C[i] to achieve a smoother color transition for a more aesthetically pleasing result. We appreciate your correction and will ensure future versions have clearer documentation to prevent misunderstandings.
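> For illustration only, here is a hedged sketch of such a flood-fill coloring pass (the function name and data layout are assumptions, not the paper's Algorithm 1): each uncolored vertex repeatedly takes the average color of its already-colored neighbors until the mesh is covered.

```python
import numpy as np

def flood_fill_colors(colors, colored, neighbors):
    """Sketch of flood-fill vertex coloring.

    colors:    (V, 3) array; rows for uncolored vertices are placeholders.
    colored:   iterable of vertex indices that already have valid colors.
    neighbors: dict vertex -> list of adjacent vertex indices.
    """
    colors = colors.copy()
    colored = set(colored)
    while len(colored) < len(colors):
        newly = []
        for v in range(len(colors)):
            if v in colored:
                continue
            done = [u for u in neighbors[v] if u in colored]
            if done:
                # average the colors of already-colored neighbors
                colors[v] = np.mean([colors[u] for u in done], axis=0)
                newly.append(v)
        if not newly:  # isolated vertices with no colored neighbors
            break
        colored.update(newly)
    return colors
```

The ongoing Laplacian smoothing mentioned above would then be applied on top of this fill to soften the transitions.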
Question 5:
> The "Expansion" in our model serves as a regularization technique directly applied to the parameters, similar to weight decay, rather than functioning as a loss term. At each step, vertices are moved a small distance in the direction of their normals.
Question 6:
> Thanks for pointing out the issue. We will modify this statement in our revision following your suggestion.
Question 7:
> Thanks for your thorough comment. The entire ISOMER process takes approximately 10 seconds. Within this timeframe, the Mesh Initialization step accounts for about 2 seconds, the preliminary Mesh Reconstruction takes around 3 seconds, the Mesh Refinement step consumes approximately another 5 seconds, and the final Mesh Colorization is completed in less than 0.1 seconds.
We appreciate the reviewer for pointing out those issues. We will thoroughly revise our paper. Thanks again for your time and in-depth suggestions.
---
Rebuttal Comment 2.1:
Comment: Dear Reviewer GBd8:
Thanks for reviewing this work. Would you mind checking the authors' feedback to see whether it resolves your concerns, or whether you have further comments?
Best, AC
---
Rebuttal 3:
Title: Thank You for Your Thorough Review and Invitation for Further Discussion
Comment: Dear Reviewer,
Thank you for your patience and thorough review. We have addressed the issues you raised with detailed responses. We welcome you to engage in further discussion with us. Your insights are invaluable, and we are eager to clarify any points or provide additional information as needed.
Looking forward to your feedback and any further questions you may have.
Best regards, Authors | Summary: This paper proposes a novel method for converting a single image to 3D. The method mainly consists of two stages: multi-view RGB and normal generation, and multi-view guided mesh optimization and texturing. The key innovation of the paper is a multi-view-normal-based 3D mesh reconstruction module. Specifically, given a single input image, the method first generates multi-view RGB images by fine-tuning a 2D diffusion model. It then employs a ControlNet-tile model and a super-resolution model to increase the resolution of the multi-view images from 256 to 2048. Additionally, a multi-view normal model is fine-tuned to generate the corresponding multi-view normal maps. Following this, a module called ISOMER is proposed to first convert the multi-view normal maps into a 3D mesh by fusing the normal maps to depth and then applying Poisson surface reconstruction. The mesh geometry is then refined according to a specialized multi-view normal loss. Finally, the mesh vertex color is optimized through a similar approach.
Strengths: 1. The proposed method is well-motivated. Previous methods attempt to directly convert multi-view RGB images to 3D through a feed-forward model, which is harder to train and requires more computing resources. The proposed method leverages the multi-view normal information from a 2D diffusion model to directly optimize 3D geometry, avoiding expensive training.
2. The proposed multi-view normal to 3D module is novel and interesting.
3. The paper is generally well-written and easy to follow.
Weaknesses: 1. While I agree that the proposed method is technically sound and may be a nice supplement to the community, my major concern is that the experimental section is quite lightweight. For instance, the quantitative experiment is only conducted on 30 objects from the GSO dataset. Since the proposed method and the comparing baseline are not expensive to run compared to SDS-based methods, it is not acceptable to base quantitative results on only 30 objects. It's very easy to get biased results, and the conclusions are likely to change if we choose another set of 30 objects. As a result, I strongly urge the authors to follow the conventions of previous papers and rigorously compare the methods on the entire GSO dataset with careful alignment between prediction and ground truth before calculating metrics. Additionally, it is highly suggested to include evaluations on other datasets, especially some real-world object datasets, and user studies. Otherwise, I cannot support the acceptance of the paper without convincing justification.
2. The high-level pipeline of "multi-view normal generation with 2D diffusion models and 3D reconstruction with normal-based optimization" is not first proposed in this paper. For example, Wonder3D shares a very similar high-level pipeline, which greatly limits the contribution or novelty of the paper. While the paper proposes a novel and interesting reconstruction module, ISOMER, it has not been carefully analyzed separately. For instance, I would like to know whether the performance gains come from better multi-view prediction or a better multi-view normal-to-3D reconstruction module. An interesting experiment to include would be a direct comparison between the reconstruction modules of Wonder3D and ISOMER given the same multi-view normal maps. Only after a detailed ablation study can readers choose a better multi-view normal generation module and a better reconstruction module.
3. The multi-view generation part lacks significant novelty, mainly consisting of existing known techniques related to 2D diffusion models.
4. The ablation study is quite lightweight, and many important experiments are missing:
(a) The ISOMER module includes multiple stages. I suggest the authors include both the intermediate and final results of the module. A quantitative evaluation would also be beneficial.
(b) The method utilizes multiple 2D diffusion models to upscale the multi-view RGB (and normal?) images. It would be interesting to see the multi-view results before and after upscaling (resolutions at 256, 512, and 2048) and their impact on the final 3D models.
(c) For the "explicit target," only the geometry results are shown. What is the effect on texture?
5. The paper claims that "generate tens of millions of faces within seconds" as an advantage. However, I don't believe so. First, it's not difficult for existing methods to generate dense meshes efficiently. Also, dense meshes themselves are not required by applications but rather detailed and sharp geometry. In fact, many downstream applications prefer meshes with more compact faces and cannot tolerate tens of millions of faces.
6. Line 170 states, "Finally, the mesh is corrected after iteration through edge collapse, edge split, and edge flip to maintain a uniform face distribution and reasonable edge lengths." This introduction is too brief as it involves many operations but lacks the motivation and implementation details for each operation.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. When converting normal maps to depth maps, how do you handle discontinuity issues? For example, there may be occluded regions, leading to sudden jumps in the normal map.
2. The method uses two models to upscale the resolution, first from 256 to 512 and then to 2048. Do we really need two models? Can we directly upscale from 256 to 1024? Or can we change their order?
3. It seems that the mesh vertex colors are directly computed as a weighted sum of the projected colors. Am I correct? Will this cause any inconsistency issues or other artifacts?
4. Line 133: "we adopt a channel-wise noise offset strategy" is not very clear to me. Could you provide more details?
5. Line 160: "not yield a real normal field which is irrotational. To address this ..." is not clear to me. Could you explain this further?
6. Line 181: What does Figure 3(b) refer to?
7. Line 224: "By examining the epipolar lines corresponding to each horizontal ray, we identify 13k instances of illegitimate data." This is not very clear to me. Could you provide more details?
8. Line 241: What does "second level of training" refer to?
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The authors briefly mention the limitations in the final section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your constructive and thorough comments. Your main suggestions on the experimental setting and ablation studies help us refine our paper. To address your concerns, we present the point-to-point response as follows.
**Comment 1: Narrow Evaluation Scope**
Thanks for your thorough comment. GSO (Google Scanned Objects) is a set of real-world objects scanned by Google using specialized equipment for research. We chose 30 random objects in alignment with the standards set by previous work (CRM, SyncDreamer, Wonder3D). Given that SyncDreamer takes up to 50 hours to generate 30 objects and results from other papers show that SyncDreamer performs significantly worse than existing methods, we exclude SyncDreamer and **conduct an experiment** on the entire GSO dataset. The results for all methods are provided in Table 1. Additionally, following your suggestions, we will add more experiments on other real-world object datasets, such as MVImageNet, and provide a detailed evaluation in the revised paper.
For the user study, as shown in Appendix F, we rendered 360-degree videos of subject-driven 3D models and presented each volunteer with five samples of rendered videos from a random method. Volunteers rated the samples on four aspects: 3D consistency, subject fidelity, prompt fidelity, and overall quality on a scale of 1-10, with higher scores indicating better performance. We collected results from 30 volunteers, as shown in Table 2 in the Appendix. Our findings indicate that users significantly preferred our method across these aspects.
**Comment 2: Insufficient Analysis of ISOMER**
We appreciate your insightful comment. Addressing the concern about the novelty in the multi-view generation aspect, we will provide a unified response in the subsequent question. To resolve your concern regarding ISOMER, we will **revise its introduction** to emphasize the key insight of each component within ISOMER. The first step of ISOMER ensures topological consistency, while the second step efficiently reconstructs rough geometries, and the third step improves surface reconstruction accuracy by handling multi-view inconsistencies with ExplicitTarget.
Our experiments indicate that ISOMER is capable of enhancing consistency as demonstrated in Figure 5. We **include a new comparative evaluation** experiment that contrasts Wonder3D with and without ISOMER to provide a clearer understanding of the benefits in Table 1.
**Comment 3: Lack of Novelty in Multi-view Generation**
We thank the reviewer for raising this concern. The main focus of our paper is on addressing current limitations within current multi-view reconstruction approaches, exemplified by Wonder3D. Existing methods often suffer from low resolution, multi-view inconsistency, and low generation speed. Our methodology is specifically tailored to address each of these issues within the existing framework, rather than proposing an entirely new pipeline.
Here's a concise summary of our approach:
- Novel Reconstruction Algorithm: For the first time in such a pipeline, we introduce a mesh-based reconstruction algorithm, which yields significant improvements in speed and quality.
- Quality Enhancement: We have developed techniques to improve the resolution of reconstructed models, providing higher fidelity and detail.
- Consistency Improvement: Our approach includes mechanisms to reduce inconsistencies across different views, leading to more coherent and reliable 3D reconstructions.
**Comment 4: Lightweight Ablation Studies**
We are grateful for your comprehensive comments. Recognizing the need for a more robust ablation study, we have taken the following steps to enhance our original analysis:
(a) Quantitative Evaluation of the ISOMER Module: We have incorporated a detailed quantitative assessment of the ISOMER module to better demonstrate its contribution to overall performance.
(b) Resolution Impact Analysis: We have expanded our study to qualitative comparisons across various resolutions to see how different resolutions impact the final 3D reconstruction outcomes.
(c) Explicit Target's Effect on Texture: Mirroring the approach used for geometry, we will include additional experimental results that illustrate the impact of the Explicit Target method on texture quality in the revised manuscript. We show the difference in Figure 1.
**Comment 5: Advantage of Generating Dense Meshes**
We thank the reviewer for raising this concern. We believe that detailed and sharp geometry does require a sufficient number of surfaces. As illustrated in Figure 6 of the main text, the intricacies of text engraving cannot be achieved without enough surfaces. For scenarios that prefer more compact meshes, such as in gaming or other graphics-intensive applications, various existing mesh decimation and retopology techniques can be applied to achieve the desired outcome. Since this process is quite engineering-oriented, we did not include it within the scope of our paper.
It is worth noting that while simplifying a high-precision dense mesh model to a more compact one is feasible, the inverse—enhancing a model with fewer details to a high-precision one—is significantly more challenging. This consideration is a primary reason our paper emphasizes the optimization of generation accuracy and surface count, aiming to address the complexities involved in achieving high-fidelity geometric detail.
**Comment 6: Lack of Detailed Explanations**
Thanks for pointing out the issue. We will **modify our statement** in the revision.
---
Rebuttal 2:
Title: More Responses
Comment: We will add more details about edge collapse, edge split, and edge flip in the revision as follows.
>Edge Collapse: This operation is used to avoid and heal defects in the mesh. It involves selecting an edge within a triangle and collapsing it to the other edge, effectively merging the two triangles into a single triangle. This process can help to eliminate narrow triangles that might be causing issues in the mesh, such as those that are too thin to accurately represent the surface they are approximating. Edge collapse can prevent the creation of topological artifacts and maintain the quality of the mesh.
> Edge Split: This is the opposite of edge collapse. In edge split, an edge that is longer than a specified maximum length is divided into two, creating new vertices at the midpoint of the edge. This operation is used to refine the mesh, ensuring that the local edge length is kept close to the optimal length. It helps to maintain the quality of the mesh by avoiding edges that are too long, which could lead to an inaccurate representation of the surface.
> Edge Flip: Edge flip is an operation that adjusts the connectivity of the mesh to improve its quality. It involves flipping an edge within a triangle to connect two non-adjacent vertices, effectively changing the triangulation of the mesh. This can help to maintain the degree of the vertices close to their optimal value, which is typically six for internal vertices (or four for boundary vertices).
> The goal of these operations is to improve the mesh quality while avoiding defects and ensuring that the mesh accurately represents the target geometry.
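To make the edge-split operation above concrete, here is a minimal sketch (our own illustration, not the paper's implementation): the shared edge of two triangles is split at its midpoint, replacing each incident triangle with two. All variable and function names are ours.

```python
import numpy as np

# Toy mesh: two triangles sharing edge (v0, v1).
verts = np.array([[0.0, 0.0, 0.0],    # v0
                  [1.0, 0.0, 0.0],    # v1
                  [0.5, 1.0, 0.0],    # v2, above the shared edge
                  [0.5, -1.0, 0.0]])  # v3, below the shared edge
faces = [(0, 1, 2), (1, 0, 3)]

def split_edge(verts, faces, a, b):
    """Split edge (a, b) at its midpoint; each triangle containing the edge
    is replaced by two triangles. Winding order is preserved because the
    midpoint substitutes an endpoint in place."""
    mid = 0.5 * (verts[a] + verts[b])
    verts = np.vstack([verts, mid])
    m = len(verts) - 1
    out = []
    for f in faces:
        if a in f and b in f:
            out.append(tuple(m if v == a else v for v in f))
            out.append(tuple(m if v == b else v for v in f))
        else:
            out.append(f)
    return verts, out

verts2, faces2 = split_edge(verts, faces, 0, 1)  # 2 triangles become 4
```

A production remesher would additionally trigger splits by edge length and maintain adjacency structures, but the core topological change is the one shown.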
Question 1:
> Thanks for your questions. The edges of sudden jumps will have a steep normal, and integrating over this normal gives a large depth difference. So this does not have an observable negative impact on the algorithm.
Question 2:
> Thanks for your insightful questions. The 256-to-512 model is specifically tuned to integrate information from multiple views, which is crucial for ensuring consistency across different perspectives. The 512-to-2048 model, on the other hand, is optimized to concentrate on the finer details of the reconstruction, enhancing the overall quality of the output. Since the multi-view aware 256-to-512 bears a relatively higher computational load compared to the 512-to-2048 model, it is part of our strategic design to balance computational load with multi-view consistency and accuracy. This dual-model strategy is crafted to enhance efficiency in both training and inference, making our method more practical for real-world applications.
Question 3:
> Yes. Since it is computed as a weighted sum, we rarely encounter inconsistency issues. However, it is indeed a promising direction to explore more advanced coloring methods in future work to further enhance the robustness and quality of our results.
Question 4:
> Thanks for pointing this out, we'll be more detailed in the revised version! For a noisy latent with shape [B, C, H, W], we further add a [1, C, 1, 1] shaped N(0, 0.1) gaussian noise to it, thus enhancing the generalization of the network and avoiding the zero terminal SNR problem.
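A minimal numpy sketch of this augmentation as we understand it (the shapes and the 0.1 standard deviation come from the reply above; function and variable names are ours):

```python
import numpy as np

def augment_latent(latent, std=0.1, rng=None):
    """Add a [1, C, 1, 1]-shaped N(0, std^2) offset to a [B, C, H, W] latent.
    The offset broadcasts over the batch and spatial dimensions, shifting
    each channel by a single random value."""
    rng = np.random.default_rng() if rng is None else rng
    _, c, _, _ = latent.shape
    offset = rng.normal(0.0, std, size=(1, c, 1, 1))
    return latent + offset

latent = np.zeros((2, 4, 8, 8))
out = augment_latent(latent, rng=np.random.default_rng(0))
```

In a real training loop the same operation would be applied to a torch tensor; numpy is used here only to keep the sketch self-contained.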
Question 5:
> We thank the reviewer for raising this concern. For a legitimate normal field, any closed line integral should be zero, which is the meaning of being irrotational. However, predictions from neural networks cannot possibly meet this condition.
Question 6:
> Thanks for pointing out the typo. We will correct it to Figure 5(b).
Question 7:
> Thanks for your valuable comments. We will add more details as follows.
> Because the four views lie in the same plane, there is an obvious pairwise epipolar geometric relationship: any pixel in one view corresponds to a horizontal straight line in the neighboring view. Thus, if a non-null pixel does not have any non-null pixels on the corresponding straight line in the neighboring view, the data is illegal. This problem is usually caused by objects in the data that have no thickness, i.e., they are observable in one view and happen to be invisible in another view.
Question 8:
> It refers to Multi-view Image Upscale, with a detailed description in Appendix B.
We thank the reviewer for pointing out these typos. We will carefully proofread the manuscript and sincerely hope that you will find the revision satisfactory. We appreciate your time and insightful comments.
---
Rebuttal 3:
Title: thank you
Comment: Thank you for the detailed response and additional experiments.
After reviewing your comments, I have the following questions and concerns:
1) It appears that Wonder3D+ISOMER performs slightly better than Wonder3D alone but still falls short (a lot) of Unique3D. Does this imply that the primary improvement in Unique3D comes from the multi-view generation (2D diffusion models) rather than the reconstruction model (ISOMER)? Could you explain the significant performance gap between Wonder3D+ISOMER and Unique3D? Additionally, would combining Unique3D's multi-view prediction with Wonder3D's reconstruction model result in better performance than "Wonder3D+ISOMER"? I'm asking because I want to understand whether the improvement is due to the reconstruction method or the multi-view prediction. Based on the current results, it's difficult to determine.
2) The authors claim that their multi-view module offers higher resolution, better multi-view consistency, and faster generation speed. However, the last two points are not supported by experiments. How do you quantitatively measure multi-view inconsistency?
3) When directly calculating the point color as a weighted sum of projected 2D pixels, why doesn't this method suffer from inconsistency, especially at the boundaries or overlaps between multiple views?
4) I still don't fully get the point. When there is a sudden depth change (e.g., due to occlusion), how can simply integrating the normals yield the correct depth? I don't think there would be steep normals; rather, there should be multiple segments of normals reflecting the surface properties of each region separately.
5) Regarding the generation of dense meshes, I agree that generating sharp features requires a sufficient number of faces. However, the ability to export a large number of triangles doesn't necessarily indicate that the method can generate sharp and detailed geometry. For instance, existing methods can increase their resolution to 512 or 1024 when using the Marching Cubes algorithm, which will produce many more triangles, but the underlying geometry remains unchanged. My point is that you should only claim the generation of sharp details (with verification) as an advantage, not just the generation of a large number of triangles.
6) Where is the "Quantitative Evaluation of the ISOMER Module"? I couldn't locate it in the rebuttal PDF.
7) Regarding the "irrotational normal", what is the motivation behind "introducing a random rotation to the normal map before integration? The process is repeated several times, and the mean value of these integrations is then used to calculate the depth, providing a reliable estimation." How does this solve the problem?
8) In the rebuttal, the qualitative examples provided are limited to one or two instances, which is not very convincing or helpful for understanding. Please avoid this and include more examples in your revision.
---
Rebuttal Comment 3.1:
Title: Response to questions
Comment: We sincerely appreciate your constructive and thorough questions. To address your concerns, we present the point-to-point response as follows.
1. It appears that Wonder3D+ISOMER performs slightly better than Wonder3D alone but still falls short (a lot) of Unique3D. Does this imply that the primary improvement in Unique3D comes from the multi-view generation (2D diffusion models) rather than the reconstruction model (ISOMER)? Could you explain the significant performance gap between Wonder3D+ISOMER and Unique3D? Additionally, would combining Unique3D's multi-view prediction with Wonder3D's reconstruction model result in better performance than "Wonder3D+ISOMER"? I'm asking because I want to understand whether the improvement is due to the reconstruction method or the multi-view prediction. Based on the current results, it's difficult to determine.
> Thank you for your question. We tested the Unique3D multi-view + Wonder3D reconstruction, and the results are as follows:
| Method | PSNR↑ | SSIM↑ | LPIPS↓ | Clip-Sim↑ | Chamfer Dist.↓ | Vol. IoU↑ | F-Score↑ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Wonder3D | 18.0932 | 0.8995 | 0.1536 | 0.8535 | 0.0261 | 0.4663 | 0.6016 |
| Wonder3D+ISOMER | 18.6131 | 0.9026 | 0.1470 | 0.8621 | 0.0244 | 0.4743 | 0.6088 |
| Unique3D+Wonder3D | 19.1688 | 0.9219 | 0.1107 | 0.8732 | 0.0153 | 0.5232 | 0.6576 |
| Unique3D | 20.0611 | 0.9222 | 0.1070 | 0.8787 | 0.0143 | 0.5416 | 0.6696 |
> We analyzed the results:
> 1. Our multi-view training differs from Wonder3D in data filtering, training strategies, and network architecture. For example, about 37% of the data in Wonder3D's data list cannot pass our data filtering (mainly due to issues like zero-thickness surfaces, one-sided visibility, or too-small projected areas), which may have led to different learned preferences.
> 2. We also observed that the main reason for Wonder3D's low scores is the overly flat or thick predictions (often problematic with elongated objects like shoes or flat objects like books), which is a significant factor in Wonder3D's lower scores. Such errors are less observed in our multi-view predictions.
> Based on the above analysis, the results are consistent with your prediction: the main improvement in the quantitative comparisons comes from the accuracy of multi-view prediction. These metrics are not sensitive to geometric details and mainly evaluate basic geometric structure, so ISOMER yields relatively limited improvement under them; however, we have observed that the visual quality of Wonder3D+ISOMER is much better than that of Wonder3D. We are also actively looking for a better metric to evaluate geometric details.
2. The authors claim that their multi-view module offers higher resolution, better multi-view consistency, and faster generation speed. However, the last two points are not supported by experiments. How do you quantitatively measure multi-view inconsistency?
> Thank you for your insightful question. The faster generation speed refers to the comparison between generating multi-view images at 512 resolution directly and generating at 256 resolution followed by upscaling to 512. Theoretically, the computational load of the latter is reduced by 55% compared to the former (since the computational load at 512 is four times that at 256). Regarding multi-view consistency, the comparison is against directly applying super-resolution from 256 to 2048. Since direct super-resolution does not incorporate information from the other views, it introduces more inconsistencies. We will clarify this in the revised version to avoid any ambiguity.
3. When directly calculating the point color as a weighted sum of projected 2D pixels, why doesn't this method suffer from inconsistency, especially at the boundaries or overlaps between multiple views?
> This is why the reconstruction in our method includes both a Reconstruction Stage and a Refine Stage. The Reconstruction Stage, which does not use ExplicitTarget, quickly produces a model that approximates the correct shape but is limited by multi-view consistency. ExplicitTarget then addresses this issue in the Refine Stage. A straightforward understanding can be seen in Fig. 1 of the rebuttal material, where ExplicitTarget is used for direct coloring. If the coloring is replaced with multi-view normal maps, it becomes the optimization target for each step in the Refine Stage. Without ExplicitTarget, the optimization would face inconsistencies across views, whereas ExplicitTarget does not have this problem. At the contour edges of each view, the weights used in the ExplicitTarget calculation are based on the angle between the normals, so these edges typically receive lower weights. Additionally, since adjacent views are orthogonal in a four-view setup, a vertex's color is mostly influenced by only one view, preventing incorrect superposition.
---
Reply to Comment 3.1.1:
Title: Remaining responses
Comment: 4. I still don't fully get the point. When there is a sudden depth change (e.g., due to occlusion), how can simply integrating the normals yield the correct depth? I don't think there would be steep normals; rather, there should be multiple segments of normals reflecting the surface properties of each region separately.
> The training data includes steep edge normals (the multi-view normals in the data are derived from depth maps, since direct surface normals would yield incorrect data because many surfaces in Objaverse have the wrong orientation). Therefore, we observe such normals during generation, although they may not be accurate (as shown in Appendix Fig. 8). However, the accuracy of the initial depth estimation is not crucial for the final result (as demonstrated in Appendix Fig. 9); what matters is the topological holes, not the accuracy, and the topological holes are reflected by the accuracy of the normal maps. We expect to explore more accurate initialization methods to improve the final method's accuracy in the future.
5. Regarding the generation of dense meshes, I agree that generating sharp features requires a sufficient number of faces. However, the ability to export a large number of triangles doesn't necessarily indicate that the method can generate sharp and detailed geometry. For instance, existing methods can increase their resolution to 512 or 1024 when using the Marching Cubes algorithm, which will produce many more triangles, but the underlying geometry remains unchanged. My point is that you should only claim the generation of sharp details (with verification) as an advantage, not just the generation of a large number of triangles.
> We fully agree with your statement. A large number of triangles is merely a prerequisite for sharp details, not indicative of sharp details themselves. As shown in Fig. 3 of the main text, existing methods fail to achieve good sharp details in geometry, largely because they use methods like Marching Cubes to extract meshes. For example, replacing the reconstruction in Wonder3D with ISOMER improves the results. Methods like OpenLRM, CRM, and Instant-Mesh use no more than $384$ resolution Marching Cubes or Flexible Cubes, as these algorithms require evaluating $512^3$ SDF values at $512$ resolution, which needs over 24GB of GPU memory and nearly a few minutes of runtime. A $1024$ resolution Marching Cubes theoretically takes over ten minutes and >100GB of GPU memory. In contrast, ISOMER can achieve sharp detail reconstruction in just a few seconds.
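The cubic scaling argument can be made concrete with a quick back-of-the-envelope calculation. Note this counts only the raw float32 SDF grid, a loose lower bound; a real pipeline also holds network activations and intermediate buffers, which is where the much larger figures quoted above come from.

```python
# Number of SDF samples and raw float32 storage for a cubic grid.
# Doubling the resolution multiplies both by 8.
for res in (256, 384, 512, 1024):
    n = res ** 3
    gib = n * 4 / 2 ** 30  # 4 bytes per float32 sample
    print(f"{res:5d}^3 -> {n:>13,d} samples, {gib:7.3f} GiB raw")
```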
6. Where is the "Quantitative Evaluation of the ISOMER Module"? I couldn't locate it in the rebuttal PDF.
> The "Quantitative Evaluation of the ISOMER Module" refers to the indirect quantitative comparison between Wonder3D and Wonder3D + ISOMER. We will add a direct comparison based on ground-truth multi-view normals in the revised paper. That said, we believe the input to the reconstruction should be multi-views generated from the front view rather than accurate ground-truth multi-views. Therefore, we consider comparing on multi-view normals generated by Wonder3D, which are inconsistent across views, to be more relevant to the actual task of this reconstruction algorithm.
7. Regarding the "irrotational normal", what is the motivation behind "introducing a random rotation to the normal map before integration? The process is repeated several times, and the mean value of these integrations is then used to calculate the depth, providing a reliable estimation." How does this solve the problem?
> "Irrotational normal" implies that the integral values along different paths should be the same, but since the predicted "normal map" is not "irrotational," these integral values actually differ. We choose straight lines as the integration path because it is the most straightforward to operate. Different straight lines yield different integral values, so we take their expectation as the final integral value. This is why random rotations are needed to calculate the mean value. We plan to explore using a direct depth prediction model to accomplish this task instead of calculating depth from normals in the future.
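A toy numpy illustration of the path dependence described above (our own construction, not the paper's code): when a discrete gradient field derived from a predicted normal map has nonzero curl, integrating along different paths yields different depths, and taking the mean over paths gives a more stable estimate.

```python
import numpy as np

# Per-pixel depth gradients (hypothetical values, as if derived from a
# predicted normal map that is not irrotational).
gx = np.array([[0.0, 1.0],
               [0.0, 2.0]])  # d(depth)/dx when stepping right into the pixel
gy = np.array([[0.0, 0.0],
               [1.0, 1.0]])  # d(depth)/dy when stepping down into the pixel

# Depth at pixel (1, 1) relative to (0, 0) along two axis-aligned paths:
d_right_then_down = gx[0, 1] + gy[1, 1]   # -> 2.0
d_down_then_right = gy[1, 0] + gx[1, 1]   # -> 3.0

# Nonzero discrete curl means the field is not irrotational,
# so the two path integrals disagree; the mean is used as the estimate.
curl = (gy[1, 1] - gy[1, 0]) - (gx[1, 1] - gx[0, 1])  # -> -1.0
d_mean = 0.5 * (d_right_then_down + d_down_then_right)  # -> 2.5
```

Rotating the normal map before straight-line integration, as in the rebuttal, amounts to sampling different integration paths before averaging.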
8. In the rebuttal, the qualitative examples provided are limited to one or two instances, which is not very convincing or helpful for understanding. Please avoid this and include more examples in your revision.
> Thank you for the suggestion! We agree with you, but due to the limited time for the rebuttal and the addition of extensive experiments and code, many experiments were not fully completed. We will include at least six representative examples for each qualitative experiment in the revision to enhance the paper.
Thank you very much for your insightful questions and suggestions, and we welcome further discussion! | Summary: This paper introduces Unique3D, a framework aiming to generate 3D meshes from single-view images with high quality and fidelity. Driven by the observation that 2D image pixel and normal priors with higher resolution can be crucial in generating intricate textures and complex geometries, Unique3D integrates a multi-level upscale process to progressively improve the resolution. It also proposes an instant and consistent mesh reconstruction algorithm called ISOMER to lower the computation complexity and improve reconstruction quality.
Strengths: Unique3D addresses the limitations of previous methods by generating high-resolution 2D images and normal maps and optimizing the color mesh according to these 2D guidances. The idea of increasing resolution in 2D is straightforward and powerful as the experiments suggest. The pipeline is efficient and can generate mesh within a short time frame (about 30 seconds per mesh). This makes it practical for many real-world applications. The paper provides solid and detailed results for the effectiveness of its methods and delivers rather convincing results. The proposed ISOMER is novel and could inspire further studies in object color mesh reconstruction. The paper includes a thorough analysis of the ISOMER algorithm which provides good insights into the module.
Weaknesses: 1. This pipeline features an image diffusion model, two super-resolution modules, and a normal diffusion model. All work in a sequential manner without interaction. This may bring significant compounding errors in multi-view/multi-resolution consistency before reconstructing mesh with ISOMER. The author should include a more thorough analysis on error patterns with particular focus on multi-view/multi-resolution consistency.
2. The subsection "ExplicitTarget Optimization for Multi-view Inconsistency and Geometric Refinement" (L183-) is hard to read and understand. Improvements in writing and equations are needed.
3. Based on my understanding of the ISOMER algorithm, it could be prone to inconsistencies, especially around the 2D boundaries of the generated multi-view images. Could you provide more insights into this aspect? Also, how well is the model working with non-front-facing input views? Adding some discussion for these could further improve the coverage of the experiments and help people understand the limitations.
4. 30 random objects (L257) are too few to be a meaningful quantitative evaluation.
Technical Quality: 3
Clarity: 2
Questions for Authors: See weakness 1, 3, 4.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Limitations are discussed very briefly in the conclusion section. Addressing the questions in the above weaknesses could involve further discussions on the limitations. Societal impacts are included in the supplementary materials.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your time and valuable comments, some of which are fundamental and in-depth suggestions that help us greatly improve our paper. To address your concerns, we present the point-to-point response as follows.
**Comment 1: Multi-view Consistency Analysis**
We appreciate the reviewer bringing this comment up. To address your concern: first, our choice of a sequential design is based on the following observations: 1) Integrating the image diffusion model with the two super-resolution models into a single 2048-resolution diffusion model would incur an enormous computational load due to the high resolution. 2) We noticed that methods combining the image and normal diffusion models in one network (e.g., Wonder3D) frequently yield normal maps lacking sufficient detail, and the guidance scale does not effectively control the output. Our experiments on this hybrid architecture were in alignment with these observations. By decoupling the two models, we notably enhance the accuracy of the normals, as the normal diffusion model generates conditioned on clean images rather than noisy image latents. Therefore, we chose to use separate image and normal diffusion models to improve the fidelity of the normals and the subsequent mesh reconstruction.
ISOMER can even be used to improve the consistency of other methods. For example, in Table 1, we replaced Wonder3D's reconstruction method with ISOMER, which is not only faster but also of higher quality.
**Comment 2: Clarity in "ExplicitTarget Optimization" Subsection**
We value your insightful suggestions. To address your concern, we will revise the explanation and equations in "ExplicitTarget Optimization" subsection for better comprehension. Here is our improved version:
(***ExplicitTarget***). Let $Avg(V, W) = \frac{\sum_i{V_i W_i}}{\sum_i{W_i}}$ represent the weighted average function, and let $V_{M}(v, i): (\mathbb{N}^+, \mathbb{N}^+) \rightarrow \{0, 1\}$ represent the visibility of vertex $v$ in mesh $M$ under view $i$. Let $Col_M(v,i)$ denote the color of vertex $v$ under viewpoint $i$. We compute the ExplicitTarget $ET$ of each vertex in mesh $M$ as
$$
ET_{M}(v) = \begin{cases}
Avg\left(Col_M(v,i),\; V_{M}(v, i)\, W_M(v, i)^2\right) & \text{if } \sum_{i} V_{M}(v, i) > 0, \\
\mathbf{0} & \text{otherwise,}
\end{cases}
$$
where $W_M(v, i) = -\cos(N_{v}^{(M)}, N_{i}^{(view)})$ is a weighting factor, $N_{v}^{(M)}$ is the vertex normal of $v$ in mesh $M$, and $N_{i}^{(view)}$ is the view direction of view $i$.
$ET_M$ weights each view's contribution by view direction, so that a view does not introduce significant errors on surfaces it sees at a grazing angle (predictions in these regions are usually inaccurate).
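A numpy sketch of this per-vertex weighted average, under our reading of the definition above (the array layout and all names are ours; the actual implementation may differ):

```python
import numpy as np

def explicit_target(colors, visible, vertex_normals, view_dirs):
    """colors: [V, N, 3] vertex color observed from each of N views;
    visible: [V, N] in {0, 1}; vertex_normals: [V, 3] unit normals;
    view_dirs: [N, 3] unit view directions.
    Per-(vertex, view) weight is visibility times W^2,
    with W = -cos(vertex normal, view direction)."""
    w_cos = -np.einsum('vd,nd->vn', vertex_normals, view_dirs)
    w = visible * w_cos ** 2
    denom = w.sum(axis=1)
    num = np.einsum('vn,vnc->vc', w, colors)
    out = np.zeros_like(num)
    seen = denom > 0
    out[seen] = num[seen] / denom[seen, None]
    return out  # zero vector for vertices visible in no view

# One vertex facing view 0 only: its target equals view 0's color.
colors = np.array([[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]])
visible = np.array([[1.0, 0.0]])
normals = np.array([[0.0, 0.0, 1.0]])
views = np.array([[0.0, 0.0, -1.0], [1.0, 0.0, 0.0]])
target = explicit_target(colors, visible, normals, views)
```

Note how a view whose direction is nearly perpendicular to the vertex normal contributes almost nothing, matching the grazing-angle intuition above.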
**Comment 3: ISOMER Boundary Inconsistencies and Non-front-facing Views**
Thanks for your thorough comment. We agree that the key insight of ISOMER could be better clarified. Following your suggestions, we will **revise our paper** to emphasize that **ISOMER directly handles the case where the global normal of the same vertex is inconsistent across viewpoints.**
As demonstrated in our ablation study (Figure 5), previous reconstruction algorithms often yield poor results with blurriness or wavy patterns when faced with inconsistent inputs, while adopting ISOMER notably improves the reconstruction results. We **include a new evaluation experiment** comparing Wonder3D with and without ISOMER to provide a clearer understanding of the benefits in Table 1.
Additionally, we added **a new test with rotated objects** in Table 2 to test robustness to non-front-facing views. The results show that Unique3D still performs well in this case, and the geometry prediction is even more accurate (because the shape of an object, e.g., a book, is better estimated).
**Comment 4: Insufficient Quantitative Evaluation**
We chose 30 random objects following the standards set by previous work (CRM, SyncDreamer, Wonder3D). Given that SyncDreamer takes up to 50 hours to generate 30 objects and that results from other papers show it performs significantly worse than existing methods, we exclude SyncDreamer and conduct a thorough experiment on the full GSO dataset. During our experiments, we found that GRM's online samples were unavailable and that it lacks open-source code, preventing further testing of GRM's results. The results for all methods are provided in Table 1.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer Bw7b:
Thanks for reviewing this work. Would you mind checking the authors' feedback to see whether it resolves your concerns, or whether you have further comments?
Best,
AC
---
Rebuttal Comment 1.2:
Comment: Thanks for the response. I find most of my concern addressed and will keep my score leaning towards acceptance.
---
Rebuttal 2:
Title: Thank You for Your Thorough Review and Invitation for Further Discussion
Comment: Dear Reviewer,
Thank you for your patience and thorough review. We have addressed the issues you raised with detailed responses. We welcome you to engage in further discussion with us. Your insights are invaluable, and we are eager to clarify any points or provide additional information as needed.
Looking forward to your feedback and any further questions you may have.
Best regards, Authors | Rebuttal 1:
Rebuttal: We thank the reviewers for their patience in reviewing, and we will respond to each of them individually, with additional experiments added in the Appendix PDF.
Pdf: /pdf/336b26b1659a208b93b4965db2a48c5890eeaba3.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Improved learning rates in multi-unit uniform price auctions | Accept (poster) | Summary: This paper considers online learning in repeated multi-unit uniform price auctions, motivated by the strategic participation of electricity producers in the electricity day-ahead market. By introducing a new modeling of the action space to EXP3, the authors propose an algorithm that achieves $\tilde{O}(K^{3/2}\sqrt{T})$, $\tilde{O}(K^{5/2}\sqrt{T})$, and $\tilde{O}(K^{4/3}T^{2/3})$ regret bounds for full information, all-winner, and bandit feedback, respectively. They also provide lower bounds for each case.
Strengths: 1. They provide regret bounds for various feedback settings: full feedback, partial feedback, and bandit feedback.
2. The proposed algorithm improves the regret bound by using a new model of the action space.
Weaknesses: 1. I think that the setting is not clearly written and seems to differ from previous work [1] without justification. In more detail, for the allocation described in (1), for player $i$, any items with a value larger than a price are allocated. However, in [1], the auctioneer allocates the $j$-th unit to the player who submitted the $j$-th highest bid. Additionally, the role of the valuation $v_l$ is not clearly described.
2. The motivation for the allocation policy in this setting may not be well described.
[1]Brânzei, Simina, et al. "Learning and collusion in multi-unit auctions." Advances in Neural Information Processing Systems 36 (2024).
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. Could you provide a motivating example to justify the allocation policy in setting (1)? According to (1), it seems that multiple players may receive the same item, which may not be convincing to me.
2. According to the paper, the valuation $v_l$ is known to the bidder. What is the motivating example?
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The authors have addressed some of the open problems in the conclusion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review.
W1/Q1. It is inexact to say that our setting differs from that of [1]. While the way our allocation policy is described indeed differs from the description in [1], it results in the same allocations as allocating the $j^{th}$ item to the $j^{th}$ highest bid.
Indeed, since all items are identical, it is sufficient to keep count of how many bids from player $i$ are above the price (and hence amongst the $K$ highest). Therefore, strictly allocating the $j^{th}$ item to the $j^{th}$ highest bid and allocating one item for each bid amongst the $K$ highest give the same allocation (as permuting identical items does not change the allocation).
While it is possible for two players $i,k$ to have the same allocation $x_i(\mathbf{b})=x_k(\mathbf{b})$ this only means they get allocated the same number of items.
We take note that this might not be straightforward and will make it clearer in the revision.
W2. Multi-unit auctions with uniform pricing and the associated allocation rule [1] used in this paper are a natural model for the short-term electricity market. We give some references at line 99.
Q2. From both a motivation and technical point of view, it is reasonable to assume some knowledge of the valuations :
- The value is a quantity that represents how useful each item will be to the bidder. In that sense, any bidder trying to acquire items should have some knowledge of the valuation of this item $\textit{for them}$.
- We are studying the adversarial opposing-bid setting, yet, as pointed out in [2], without any assumption on the valuations (i.e., adversarial valuations that can vary across auctions) no non-trivial regret guarantees can be obtained. Since their setting (first-price single-item auction with the valuation observed before each round) is strictly easier, this impossibility result also applies to multi-unit uniform auctions.
We chose to focus on known valuations as this allows for a more concise and understandable analysis. Another possible model would be to assume that valuations are unknown but constant across auctions, and that a noisy version of the valuation is observed whenever an item is won. Adapting our algorithm to this setting would require using estimates of the valuations instead of the real values. We leave this adaptation to future work as we believe the main ideas of our work are easier to grasp in the setting we consider.
[1] Brânzei, Simina, et al. "Learning and collusion in multi-unit auctions." Advances in Neural Information Processing Systems 36 (2024).
[2] Balseiro, S., Golrezaei, N., Mahdian, M., Mirrokni, V., & Schneider, J. (2019). Contextual bandits with cross-learning. Advances in Neural Information Processing Systems, 32.
---
Rebuttal 2:
Title: Thank you for your response
Comment: Thank you for your response. I have some further questions based on your response.
According to your response, the items are identical. Then, what is the reason for having different valuations for each item? Also, based on your motivating example of the electricity market, I'm wondering whether the items are identical or may differ based on their usage.
---
Rebuttal 3:
Title: Response to your questions
Comment: Thank you for these questions.
First, we need to clarify that both our allocation function and valuations are parameters that relate to the number of items attributed to a player (because items are identical, their indices are irrelevant). Furthermore, the valuations are marginal: for bidder $i$ and $j \in [K]$, $v_{i,j}$ quantifies how much more bidder $i$ values getting $j$ items than $j-1$ items.
With this in mind, the reason we have different valuations is to allow for diminishing marginal utility, a common microeconomics assumption introduced in [1], chapter 2. The main intuition is that the more someone has of an item, the less he values getting more of it (drinking water for instance is only very valuable for the first liters of the day).
In the example of electricity markets, while the first MWh and the tenth MWh attributed to a bidder are identical, the bidder will dedicate the first to make critical infrastructure work, while the tenth would be used to activate processes that are more easily shut down and restarted.
In our setting, an item cannot be allocated to multiple bidders at the same time. The description of our setting only specifies the number of items allocated to each player, hence ensuring that the total number of allocated items equals $K$ is sufficient to ensure no item is allocated twice. Because items are identical, it is unnecessary to give them an index and determine which item goes to which bidder.
[1] Greenlaw, Steven A., et al. Principles of Microeconomics 2e. United States, OpenStax, 2017.
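A tiny worked example of diminishing marginal valuations (the numbers are ours, purely illustrative): the utility of winning $j$ identical items at uniform price $p$ is $\sum_{l \le j} v_{i,l} - j\,p$, and decreasing marginals mean the bidder only wants items whose marginal value exceeds the price.

```python
# Hypothetical marginal valuations of a bidder, decreasing in j.
v = [10.0, 6.0, 3.0, 1.0]   # v_{i,1} >= v_{i,2} >= v_{i,3} >= v_{i,4}
p = 2.5                      # uniform clearing price per item

def utility(j):
    """Utility of winning exactly j items at price p."""
    return sum(v[:j]) - j * p

# Utility rises while the marginal value exceeds p, then falls:
# utility(1)=7.5, utility(2)=11.0, utility(3)=11.5, utility(4)=10.0
best = max(range(len(v) + 1), key=utility)  # -> 3 items
```

This mirrors the electricity example: the first MWh (critical infrastructure) is worth far more to the bidder than the tenth (easily shut-down processes).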
---
Rebuttal Comment 3.1:
Title: Thank you for your response.
Comment: Your response addressed my concern regarding the bidding settings. However, in my opinion, the applicability of this model seems to be limited when dealing with identical items. Therefore, I raise my score to 5 while maintaining low confidence. | Summary: This work studies the problem of multi-unit uniform price auctions. By introducing a new modeling of the action space, the paper improves the regret of the online learning problem to $\tilde{O}(K^{4/3}T^{2/3})$ under bandit feedback, and $\Omega(T^{2/3})$ is a regret lower bound under this feedback model. Under the all-winner (partial feedback), the algorithm achieves a regret of $\tilde{O}(K^{5/2} \sqrt{T})$.
Strengths: The main strength of this work is to provide a new modelling of the action space to largely improve the regret of the considered problem. This new finding, together with the improvement, if correct, already makes a solid contribution in my view.
Weaknesses: The manuscript has the following weaknesses in my view:
1. Although the all-winner feedback model could be first raised in the multi-unit auctions, other similar partial feedback models have already been introduced in different settings, e.g., Chen et al., 2024.
2. The paper's method is an improvement over the method of Branzei et al., 2024, as claimed. Yet, the authors do not provide details on their DAG equivalence method, which makes it a bit hard to appreciate all the details of the newly proposed method.
3. The authors could provide more intuitions on some technical details; see Questions.
4. There seems to be an extra "}" in Equation (4); And should Equation (6) be $b_k \geq (j + 1) \epsilon \geq j \epsilon \geq b_{k + 1}$?
[Ref]
Chen et al., Dynamic Budget Throttling in Repeated Second-Price Auctions, AAAI 2024.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Could the authors provide more intuitions and details on the method of Branzei et al., 2024?
2. Could the authors provide some intuitions on Algorithm 2, the weight-pushing sampling? In my sense, this algorithm makes a sequential sampling of all $h$'s in $\mathbf{h}$. Please correct me if I am wrong.
3. In bandit and all-winner feedback models, why do you need to subtract $K$ in the numerator of the estimator? Could you please provide some intuition on this by comparing it with your treatment in the full feedback model where you do not do this step?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors have addressed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review.
W1. The all-winner feedback model is indeed a particular case of partial feedback that is widely studied. However, it has never been studied in multi-unit auctions which is the focus of our study. We will clarify this in the related work section.
W2/Q1. The key idea in [1] is the decomposition of the utility into a sum of terms, each of which depends only on two consecutive bids. They therefore associate a path to every set of bids, and the utility is seen as the sum of the utilities of the edges. In contrast, we are able to decompose the utility into a sum of terms that can be estimated individually (without any dependency on previous terms) and therefore obtain improved guarantees.
W4. Indeed equation (4) has an extra “}“ and equation (6) should be $b_k \geq (j+1) \epsilon > j \epsilon \geq b_{k+1}$.
Q2. Your understanding is correct. Algorithm 2 indeed performs a sequential sampling of the $h$’s in $\mathbf{h}$; this allows both a more efficient sampling and the use of specific weights for each $h$ (see [3] for a more complete overview). One could also directly sample $\mathbf{h}$, ensuring the weight of each $\mathbf{h}$ aggregates the weights of its component $h$’s. Intuitively, this would have a higher complexity cost, as it samples in the product space instead of successively in each space (for integers $N, M$, computing weights and sampling once in $[N]^M$ is exponentially less efficient than computing weights and sampling an element of $[N]$ $M$ times).
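To illustrate the complexity point, here is a minimal, self-contained sketch (our own toy example with made-up weights, and assuming independent per-coordinate weights for simplicity, unlike the general weight-pushing case): sequential sampling touches only $N \times M$ weights, while direct sampling over $[N]^M$ needs a table of exponential size.

```python
import random
from itertools import product

# Hypothetical independent per-coordinate weights: M coordinates, each in [N].
N, M = 4, 3
weights = [[1.0, 2.0, 3.0, 4.0] for _ in range(M)]

def sequential_sample(rng):
    """Draw each coordinate in turn from its own weight vector (O(N*M) work)."""
    return tuple(rng.choices(range(N), weights=w)[0] for w in weights)

def joint_prob(h):
    """Probability of the joint outcome h under the product distribution."""
    p = 1.0
    for m, v in enumerate(h):
        p *= weights[m][v] / sum(weights[m])
    return p

# Sequential sampling needs N*M = 12 weights; direct sampling over the
# product space would need a table of N**M = 64 entries.
assert N * M == 12 and N ** M == 64
# The product distribution is properly normalized, so sampling each
# coordinate in turn is equivalent to sampling the joint outcome at once.
assert abs(sum(joint_prob(h) for h in product(range(N), repeat=M)) - 1.0) < 1e-9
sample = sequential_sample(random.Random(0))
assert len(sample) == M and all(0 <= v < N for v in sample)
```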
Q3. Subtracting $K$ in the numerator is a technical detail which ensures that the values taken by the estimator $\hat{u}^t(\mathbf{h})$ can be upper bounded, which is necessary for the analysis of EXP3.
In the full-information feedback, we directly use the utility $u^t(\mathbf{h})$ (and not an estimator), which is already upper bounded by $K$. The difference between Hedge and EXP3 is discussed in [2], notes 11.5, remark 3.
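A hedged toy sketch of this point (our own illustration, not the paper's exact estimator): with utilities in $[0, K]$ observed with probability $p$, the importance-weighted estimator of the loss $K - u$ stays upper bounded by $K$, while the naive utility estimator $u/p$ does not.

```python
# Toy sketch: utilities live in [0, K]; the played action's utility is
# observed with probability p (bandit-style feedback).
K = 5.0

def naive_estimate(u, observed, p):
    # Unbiased, but unbounded above: can reach u/p >> K.
    return u / p if observed else 0.0

def shifted_estimate(u, observed, p):
    # "Subtract from K" trick: estimate the loss K - u instead.
    # Still unbiased, and never exceeds K -- what EXP3's analysis needs.
    loss = (K - u) / p if observed else 0.0
    return K - loss

u, p = 4.0, 0.1
# Exact expectations over the observation event: both estimators are unbiased.
naive_mean = p * naive_estimate(u, True, p) + (1 - p) * naive_estimate(u, False, p)
shift_mean = p * shifted_estimate(u, True, p) + (1 - p) * shifted_estimate(u, False, p)
assert abs(naive_mean - u) < 1e-9 and abs(shift_mean - u) < 1e-9
assert naive_estimate(u, True, p) == 40.0      # far above K = 5
assert shifted_estimate(u, True, p) <= K       # bounded above by K
assert shifted_estimate(u, False, p) <= K
```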
[1] Brânzei, Simina, et al. "Learning and collusion in multi-unit auctions." Advances in Neural Information Processing Systems 36 (2024).
[2] Lattimore, Tor, and Csaba Szepesvári. Bandit algorithms. Cambridge University Press, 2020.
[3] Takimoto, Eiji, and Manfred K. Warmuth. "Path kernels and multiplicative updates." International Conference on Computational Learning Theory. Springer Berlin Heidelberg, 2002, pp. 74-89.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. | Summary: The paper studies no-regret bidding algorithms in multi-unit uniform-price auctions with adversarial competing bids. In this problem, the bidder has diminishing marginal values for units of the item, and submits one bid for each unit. The top K bids win, and the payment is uniformly the lowest winning bid for each unit of the item. The authors present an algorithm that (1) under bandit feedback, achieves regret $\tilde{O}(T^{2/3})$, and (2) in a slightly more informative model where all winning bids are revealed, achieves regret $\tilde{O}(T^{1/2})$. Both bounds are almost optimal.
Strengths: The paper makes concrete improvement and presents almost optimal bounds for a meaningful problem that has received considerable attention. The techniques appear nontrivial.
Weaknesses: The paper is somewhat hard to navigate. For example, I was lost a few times when new concepts / definitions / constructions are introduced while it's not clear at the moment what purposes they serve. Some explanatory text would help. I'd also appreciate more motivation for the problem setup (e.g., diminishing marginal values). The paper could also use some polishing (there are many more small issues in addition to the ones listed below in detailed comments).
Technical Quality: 4
Clarity: 2
Questions for Authors: (also including detailed comments)
Line 9: "this feedback interpolate ..."
Line 27: "this represent ..."
Line 28: "... others bidding strategies"
Line 41, "... with uniform pricing is strictly harder than with uniform pricing": the latter "uniform" should be "discriminatory"? Also I wouldn't say they "suggest the former *is* strictly harder". Something like "the former *might be* strictly harder" sounds more accurate.
"Auction rules" paragraph: it might help to quickly motivate diminishing marginal values here. I imagine someone unfamiliar with auctions / microeconomics may not immediately see why this makes sense.
Line 59: competing bids being adversarial (and not stochastic) seems like an important modeling choice, and I'd mention this upfront (e.g., in the abstract or earlier in the introduction).
Line 137: the gaps here seems to be $K^{3/2}$ instead of $K$?
Line 183, eq (4): brackets don't match.
Confidence: 3
Soundness: 4
Presentation: 2
Contribution: 3
Limitations: No concerns.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. Thank you for noticing the typos and for suggesting some improvements in the presentation of our approach. We will clarify this part and make the necessary correction in the revision.
Line 41: Indeed you are right, we will reformulate according to your suggestion.
“Auction rules paragraph”: The non-increasing assumption on the values $v_1, v_2, ..., v_K$ results from the law of diminishing returns, a standard assumption in microeconomics (see [2], chapter 2). It is also a standard assumption in multi-unit uniform auctions and is studied in [1] in the non-repeated setting. We will add these comments just after line 61.
Line 59: We agree and will add that competing bids are adversarial in the abstract.
Line 137: The gap with the lower bound is indeed a multiplicative factor of $K^{3/2}$. In that sentence we meant to highlight that under all-winner feedback our algorithm's upper bound is only worse by a factor of $K$ compared to the full-information upper bound ($\mathcal{O}(K^{5/2} \sqrt{T})$ versus $\mathcal{O}(K^{3/2} \sqrt{T})$). We acknowledge that this can be ambiguous and will reformulate it.
Line 183: Thank you for noticing, it should be $h_{k+\frac{1}{2},j} (\mathbf{b}) = \mathbb{1} \left \\{ \exists \boldsymbol{\beta} \in B_{ \setminus \epsilon}, \\{x(\mathbf{b},\boldsymbol{\beta})=k \\} \cap \\{ (j+1)\epsilon > p(\mathbf{b},\boldsymbol{\beta}) > j\epsilon \\} \right \\}$
[1] Ausubel, Lawrence M., Peter Cramton, Marek Pycia, Marzena Rostek, and Marek Weretka. “Demand Reduction and Inefficiency in Multi-Unit Auctions.” The Review of Economic Studies 81, no. 4 (289) (2014): 1366–1400.
[2] Greenlaw, Steven A., et al. Principles of Microeconomics 2e. United States, OpenStax, 2017.
---
Rebuttal Comment 1.1:
Comment: Thank you for your helpful response. I don't have further questions. | Summary: The paper analyzes repeated multi-unit uniform price auctions through the lens of online learning. At each time step, a bidder submits a sequence of bids $(b_1,\dots,b_K)$ to win up to $K$ identical items, for which the bidder holds known valuations that depend only on the number of items won and are the same from one round to the next. Other bidders participate in these multi-unit uniform price auctions, and the $K$ highest bids win the items, each paying the $K$-th or $(K+1)$-th highest price, depending on the model.
This setting was already studied by Branzei et al. (2024), but they obtained suboptimal learning rates for this problem.
In this paper, the authors close the gap in the regret rate and they further analyze and characterize the learning rates arising from a different type of feedback that was not previously considered.
Strengths: The paper provides interesting insights on how to tackle repeated multi-unit uniform price auctions and this understanding translates into improved (and matching in the time horizon) learning rate with respect to the SOTA.
The paper also provides a model for a new type of feedback that could be of interest in concrete applications and also characterize (in the time horizon) the learning rate of this problem.
Overall, the paper provides a valid contribution to the development of online learning in auctions.
Weaknesses: Though this is quite common in this literature, the paper does not explore what can be done in strategic settings where also the other bidders learn, and what kind of dynamic might arise in this case. It would be interesting to model other bidders not as an oblivious environment, but rather as other learning agents.
Typo. Line 41. "suggesting that bidding multi-unit auctions with uniform pricing is strictly harder than with uniform pricing." I assume that the second should be discriminatory pricing.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1) I see that there's still a mismatching rate when it comes to the other parameter $K$. Do you think that this is an artifact of the proof or we need better algorithms / better lower bounds to close this gap?
2) What if also the other bidders learn? I think that in this case we probably need a different definition of the regret. Do you have in mind any way to tackle this problem?
3) What kind of dynamic could arise if also the other bidders utilize the algorithm you proposed? Does the dynamic converge somewhere?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors correctly state the assumptions under which the theorem they proved hold.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review.
Q1. Indeed, our upper and lower bounds do not match in the parameter $K$. We believe the lower bound in the bandit setting is not tight with respect to its dependency on $K$; we expect the tight bound to depend on $K$, the scale of the utility. Regarding upper bounds, we do not see any obvious way to improve them. We therefore leave the question of closing the gap to future work, and in particular the task of showing whether the all-winner feedback is strictly harder than the full-information feedback.
Q2. If the other bidders learn, we need to move from an oblivious adversary to an adaptive one. The notion of regret we should use remains the same: comparing the average utility to the best strategy in hindsight, but we need to allow the opposing bids at time $t$, $\boldsymbol{\beta}^t$, to depend on the history up to time $t-1$.
Our algorithm is a variation of EXP3 for which the regret bounds can be generalized to an adaptive (or $\textit{reactive}$) adversary, as discussed in [3], notes 11.5, point 4; this can be done without any change in the analysis. We are confident that the regret bounds therefore also hold in this setting.
Q3. The dynamics of play that arise when players use no-regret algorithms in a game are an active field of research. The multi-unit uniform auction is a game with a continuous action space, and even in finite games the dynamics are not fully characterized. In finite games, if all players play EXP3, the average play converges to a coarse correlated equilibrium, or Hannan set (see chapter 7.4 of [1]). Convergence of the iterates towards a Nash equilibrium is only shown for finite potential games [2], which do not fit multi-unit uniform auctions. The current theory is therefore not sufficient to characterize the dynamics of play in our framework.
[1] Cesa-Bianchi, Nicolo, and Gábor Lugosi. Prediction, learning, and games. Cambridge university press, 2006.
[2] Heliou, Amélie, Johanne Cohen, and Panayotis Mertikopoulos. "Learning with bandit feedback in potential games." Advances in Neural Information Processing Systems 30 (2017).
[3] Lattimore, Tor, and Csaba Szepesvári. Bandit algorithms. Cambridge University Press, 2020. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Rapid Plug-in Defenders | Accept (poster) | Summary: The paper proposes a method called CeTaD (Considering Pre-trained Transformers as Defenders) for the Rapid Plug-in Defender (RaPiD) problem, which aims to rapidly counter adversarial perturbations without altering the deployed model. The method leverages pre-trained transformer models and fine-tunes only a limited set of parameters using clean and adversarial examples. The paper evaluates the effectiveness of CeTaD in various scenarios and datasets, and compares it with other baselines. The results show that CeTaD achieves superior performance in both clean and adversarial accuracy compared to the baselines. This paper is easy to follow and well-written.
Strengths: * The proposed CeTaD can rapidly counter adversarial perturbations without altering the deployed model, which addresses an important challenge in the field.
* The CeTaD uses the pre-trained transformer models, which have strong generalization capabilities, and fine-tunes only a limited set of parameters, making it computationally efficient.
* Comprehensive evaluation of CeTaD's effectiveness, transferability, and the impact of different components in various scenarios and datasets are validated.
Weaknesses: * The performance of different l_p bound perturbations should be compared.
* Actually, there are some old defense methods that can also be a plug-in defense, like JPEG compression, which also should be compared.
* Some typo errors, like "Baselines" should be "Baselines."
* The placement of Table 4 and Table 5 are too far from the corresponding content.
* The experiment settings of adaptive attack is not right, in this setting, the plug-in defense is all known to the attacker.
Technical Quality: 3
Clarity: 3
Questions for Authors: /NA
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: /NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your careful review and overall positive evaluation. Here are our point-by-point responses to your concerns.
**1. l_p settings**
For the experiments in the current paper, we follow the same attack settings as in the related works. We will compare the performance of different $l_p$ settings in the revised version, as you suggested. Essentially, with a larger $l_p$ bound, the clean sample becomes more distorted, the attack is stronger, and the defense is more challenging.
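For concreteness, here is a minimal sketch of the projection step that distinguishes $l_\infty$- from $l_2$-bounded attacks (our own illustration; the perturbation values and budget $\epsilon$ are made up): with an $l_\infty$ budget every pixel may move by up to $\epsilon$, while an $l_2$ budget constrains the total perturbation energy.

```python
import math

def project_linf(delta, eps):
    """Clip each coordinate of the perturbation into [-eps, eps]."""
    return [max(-eps, min(eps, d)) for d in delta]

def project_l2(delta, eps):
    """Rescale the perturbation onto the l2 ball of radius eps if needed."""
    norm = math.sqrt(sum(d * d for d in delta))
    return delta if norm <= eps else [d * eps / norm for d in delta]

delta = [0.5, -0.5, 0.5, -0.5]   # hypothetical raw perturbation (l2 norm 1.0)
eps = 0.25
linf = project_linf(delta, eps)  # every coordinate clipped to +-0.25
l2 = project_l2(delta, eps)      # whole vector rescaled: norm 1.0 -> 0.25
assert max(abs(d) for d in linf) <= eps
assert abs(math.sqrt(sum(d * d for d in l2)) - eps) < 1e-9
```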
**2. some old defense methods**
Our evaluation includes naive and training-free defense methods such as Random Noise [30] and R&P [43] (Tables 2 & 3). Random Noise adds zero-mean Gaussian noise; R&P employs random resizing and padding, which is similar in spirit to JPEG compression. The results show that Random Noise cannot balance clean and adversarial accuracy, while R&P keeps high clean accuracy but has little effect on adversarial accuracy. JPEG compression also yields poor adversarial accuracy. Under the setting of Table 2, here is the performance of JPEG compression as the quality factor varies:
| quality factor | CA(%) | AA(%) |
| --- | --- | --- |
| 90 | 90.63 | 01.37 |
| 60 | 88.48 | 09.77 |
| 40 | 86.72 | 10.55 |
| 30 | 86.33 | 09.96 |
| 10 | 82.62 | 08.98 |
| 1 | 71.09 | 06.45 |
To conclude, these older methods do not work well in the RaPiD scenario, which is more practical yet challenging.
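Much of JPEG's defensive effect comes from its quantization step. As a loose, self-contained illustration of how input quantization can erase small perturbations (this sketch implements bit-depth reduction, a related input-transformation defense, not actual JPEG; the pixel values are made up):

```python
def reduce_bit_depth(pixels, bits):
    """Quantize [0, 255] pixel values to 2**bits levels (coarser as bits shrink)."""
    step = 256 / (2 ** bits)
    # Map each pixel to the center of its quantization bin.
    return [min(255, int(int(p / step) * step + step / 2)) for p in pixels]

clean = [10, 117, 118, 240]
perturbed = [14, 113, 122, 236]  # a small adversarial-style perturbation
# At 3 bits the small perturbation is absorbed into the quantization bins,
# so both inputs map to the same defended image.
assert reduce_bit_depth(clean, 3) == reduce_bit_depth(perturbed, 3)
# At full 8-bit precision the perturbation survives.
assert reduce_bit_depth(clean, 8) != reduce_bit_depth(perturbed, 8)
```

Like JPEG, this trades clean accuracy for robustness as the quantization gets coarser, mirroring the quality-factor sweep in the table above.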
**3. typo errors & table replacement**
We will thoroughly proofread our paper to address all typos and adjust the table placement for better legibility in the revised version.
**4. settings of adaptive attack**
Unlike related works, our method requires limited tuning on few-shot adversarial samples, which may make continuous Attack & Defense possible; this differs slightly from the usual adaptive attacks. In continuous Attack & Defense (Table 12), the plug-in defense from the last round is known to the attacker. We will clarify and emphasize this point in the revised version.
---
Rebuttal Comment 1.1:
Comment: I am satisfied with the rebuttal and thus keep my rate unchanged.
---
Reply to Comment 1.1.1:
Title: Thanks
Comment: Thank you very much for your support. We will incorporate all of your suggestions into the revised version. | Summary: The paper proposes CeTaD, a rapid plug-in defender (RaPiD) for deep neural networks (DNNs) against adversarial attacks. It leverages pre-trained transformer models and fine-tunes minimal parameters (e.g., layer normalization) using few-shot clean and adversarial examples. CeTaD aims to quickly counter adversarial perturbations without modifying the deployed model, addressing the limitations of existing methods that require heavy training or struggle to adapt to different scenarios. Experiments on various datasets and attack methods demonstrate CeTaD's effectiveness, transferability, and generalization capabilities.
Strengths: The paper addresses a practical problem in deep learning security: the need for rapid defense against adversarial attacks without modifying deployed models.
The proposed CeTaD method is computationally efficient, requiring minimal parameter tuning and few-shot examples.
Weaknesses: The discussion of limitations in the paper seems to have made this quite clear: the effectiveness is still not good enough when we consider stronger attacks. It is somewhat unclear how we should compare this with existing defenses.
Technical Quality: 2
Clarity: 2
Questions for Authors: How does CeTaD compare to state-of-the-art defense methods in terms of both effectiveness and efficiency?
Can CeTaD effectively defend against stronger adaptive attacks where the attacker has access to the defender?
What are the underlying mechanisms that contribute to CeTaD's effectiveness? A more in-depth analysis would be valuable.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and valuable suggestions. Here is our response to your concerns.
**1. effectiveness and efficiency**
This paper compares methods capable of implementing RaPiD (Rapid Plug-in Defender) (see L130-L134). As a very promising line of work, there are indeed existing purification and denoising methods, which we introduce in our paper (e.g., L40-L48). That being said, our paper tackles more challenging scenarios with innovative technical developments. Specifically, we overcome the hurdle that existing defenses are not sufficiently capable of rapidly defending with limited data or adapting to different application scenarios. Our proposed method features the following novel designs: 1) a pre-trained model is applied for robustness; 2) limited tuning with only few-shot adversarial samples is used for scenario adaptation; 3) most parameters are fixed to maintain clean accuracy. These new technical designs yield better robustness and generalization than existing methods. Additionally, our method has the potential to support many extensions for future work, as elaborated in the Discussion (Sec. 5, Page 9).
**2. stronger attacks**
We evaluate our method on different attacks, including PGD, AutoAttack, and StAdvAttack. Besides, in the continuous Attack & Defense evaluation (L354-L363; Table 12), we simulate the situation where the defender is also leaked. The results show that both CA and AA continuously improve, which suggests that life-long learning is a promising direction. Given the limited time of the rebuttal period, we will try to extend the evaluation to black-box attacks and universal perturbations in the final version.
**3. underlying mechanism**
We have made a comprehensive evaluation and analysis of the underlying mechanism of our method.
(L245-251) We conduct ablation studies on the structure. Linear, FFN, and Bottleneck, being more flexible with additional tuned parameters during training, tend to bias strongly towards clean data. Conversely, CeTaD's fixed blocks, with fewer tuned parameters, exhibit greater robustness, resulting in superior performance. Limited tuning avoids completely damaging the original capability while adapting the whole framework to the scenario.
(L290-296; L326-L345) We evaluate different parameter initializations and frozen-parameter choices. Randomly initialized parameters do not work well, and tuning all parameters ruins the clean accuracy. CeTaD-VIT defenders excel in clean accuracy, while CeTaD-BERT defenders perform better in adversarial scenarios. CeTaD-VIT's capability, derived from similar training tasks, renders it vulnerable to adversarial perturbations, whereas CeTaD-BERT's diverse training complicates clean classification. Overall, introducing a fixed and unrelated structure improves robustness.
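A minimal sketch of the "freeze everything except normalization" selection described above (our own illustration; the parameter names and sizes are made up, not taken from the actual CeTaD code):

```python
# Hypothetical parameter table for a small transformer encoder block:
# name -> number of scalar parameters.
params = {
    "embed.weight": 768 * 1000,
    "block0.attn.qkv.weight": 768 * 2304,
    "block0.attn.proj.weight": 768 * 768,
    "block0.mlp.fc1.weight": 768 * 3072,
    "block0.mlp.fc2.weight": 3072 * 768,
    "block0.norm1.weight": 768, "block0.norm1.bias": 768,
    "block0.norm2.weight": 768, "block0.norm2.bias": 768,
}

def is_layernorm(name):
    # Select only normalization parameters for tuning; freeze the rest.
    return ".norm" in name or "layernorm" in name.lower()

tuned = sum(n for k, n in params.items() if is_layernorm(k))
total = sum(params.values())
# Only a tiny fraction of parameters is tuned, which preserves the
# pre-trained capability while adapting the defender to the scenario.
assert tuned == 4 * 768
assert tuned / total < 0.001
```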
---
Rebuttal Comment 1.1:
Comment: Thank you for your time and effort in reviewing our work, and we really appreciate your support.
There is only 1 day left to the rebuttal deadline, and we would like to know whether our responses can successfully address your concerns. Please also let us know if you have other concerns, and we are more than happy to address them.
Best Wishes
Authors
---
Rebuttal Comment 1.2:
Title: New results on Stronger attacks
Comment: We will incorporate all of your suggestions into the revised version.
**1. memory&latency**
We evaluate GPU memory (peak value) and inference time (average value per batch) on the test set under other default settings with the following device configuration:
- CPU: 14 vCPU Intel(R) Xeon(R) Gold 6330 CPU @ 2.00GHz
- GPU: 1 NVIDIA RTX 3090(24GB)
Here is the result:
| method | GPU memory (MB) | time (s/batch) |
| --- | --- | --- |
| No-defender | 2186 | 0.07818 |
| CeTaD-BERT | 2638 | 0.08014 |
| CeTaD-BERT-large | 3460 | 0.08496 |
With larger pre-trained models, memory and latency increase accordingly. This is a trade-off between defense performance and resource consumption.
**2. other attacks**
**black-box attack**
Additionally, we follow your suggestion to evaluate our method with a pure black-box attack, Square Attack [1] (n_queries=5000), under the default settings. Here is the result on CIFAR-10 indicating that our method is still effective against Square Attack:
| Type of attack | method | CA(%) | AA(%) |
| --- | --- | --- | --- |
| SquareAttack | None | 93.75 | 00.00 |
| | CeTaD-VIT | 83.20 | 74.02 |
| | CeTaD-VIT-large | 81.45 | 75.39 |
| | CeTaD-BERT | 82.42 | 79.30 |
| | CeTaD-BERT-large | 84.57 | 83.59 |
| L_inf-PGD | CeTaD-VIT | 82.81 | 30.27 |
| | CeTaD-VIT-large | 71.68 | 44.14 |
| | CeTaD-BERT | 68.75 | 44.34 |
| | CeTaD-BERT-large | 66.02 | 48.83 |
Our method, CeTaD, demonstrates superior defense effectiveness against black-box attacks compared to L_{\infty}-PGD. Our approach fine-tunes CeTaD using adversarial examples generated from attacks to detect attack patterns and perform defense, without requiring any information about the attack strategies. As a result, it is not limited to any specific type of attack.
**semantic attacks & composite attacks**
We also tested CeTaD's performance against semantic attacks and composite attacks. The experimental setup was consistent with [2], using a WideResNet34 trained on CIFAR-10. CeTaD was trained in a 1-adv-1-clean data setting, with significantly fewer samples than the method in [2] and a much shorter training time.
| Type of attack | method | CA(%) | AA(%) |
| --- | --- | --- | --- |
| Semantic attacks | CeTaD-BERT-large | 81.23 | 64.59 |
| Semantic attacks | R&P | 91.2 | 3.84 |

These results show that our method achieves good performance against semantic attacks.
Additionally, we test a composite attack, Composite Adversarial Attack [2] (CAA6: enabled_attack=(0,1,2,3,4,5); order_schedule="scheduled"), under the default settings. The results indicate that our framework can adapt to different scenarios via one-shot adversarial fine-tuning.
| method | CA(%) | AA(%) |
| --- | --- | --- |
| None | 93.75 | 00.00 |
| CeTaD-VIT | 85.74 | 61.52 |
| CeTaD-VIT-large | 69.33 | 52.15 |
| CeTaD-BERT | 78.52 | 65.23 |
| CeTaD-BERT-large | 84.18 | 68.36 |
[1] Square Attack: a query-efficient black-box adversarial attack via random search
[2] Towards Compositional Adversarial Robustness: Generalizing Adversarial Training to Composite Semantic Perturbations | Summary: This submission studies few-shot tuning-based purification for adversarial defense, especially leveraging pre-trained transformers. By only tuning the normalization layers with few training examples, the proposed defense achieves decent accuracy and robustness.
Strengths: 1. The problem setting is novel and of practical value. Existing adversarial defense requires much computing and data and lacks generalization. The RaPiD problem is of value.
2. The proposed method has strong empirical performance. By leveraging existing pre-trained transformers and ViT as the purifier and a few examples to fine-tune the layer norm parameters, the proposed defense has strong accuracy and robustness supported by comprehensive experimental evaluation.
Weaknesses: 1. My major concern is the unclear description in both the method description and experimental evaluation (sometimes a bit messy). To name a few:
(1) How does the decoder in Figure 1 work? In my understanding, the decoder should be able to restore the potentially perturbed image. However, existing ViT and language models seem not to be capable of doing so. How does PixelShuffle for superresolution work as a decoder? In my understanding, it should take low-resolution image input which is not provided by the encoder.
(2) What do CA and AA mean? I infer that they are clean accuracy and attacked accuracy in all tables but no place to verify.
(3) When measuring AA, what is the specific attack used? PGD, AutoAttack, and StAdvAttack are mentioned but which one is actually used? Moreover, when using the attack, does the gradient flow go backward through the plug-in defender? When compared with other baselines, this is the required adaptive attack and when compared with other baselines, this defense-then-attack number should be used for fair comparison.
(4) What is StAdv attack? Any setup detail?
(5) What is the dataset in Table 5?
2. The proposed method introduces a large model as a pre-purifier which adds a non-trivial inference overhead. More elaboration on this overhead should be discussed. In what case this overhead can be neglected?
Given these many unclear points, I lean towards rejection.
Minor:
1. Table 1: better to mention method names rather than just numbered brackets for compared approaches.
2. Line 112: hallucinates -> hallucinate
Technical Quality: 3
Clarity: 2
Questions for Authors: Questions are listed in "Weaknesses" section. The most important one is how the proposed method works given the misaligned encoder and decoder module.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes, limitations are discussed in Section 5.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your careful review and valuable comments. We provide point-to-point clarifications on your concerns below and will incoporate them in the revised version.
**1. framework mechanism**
The decoder does not need to restore the input. Encoder-decoder is a general framework: an encoder maps data from the source space into a latent feature, and a decoder maps the latent feature into the target space. For some feature-extraction methods, the source and target spaces are the same, and the decoder's goal is to restore the input. For others, they differ: in a text translation model, for example, the source space is the original language while the target space is the target language.
In our method, the decoder maps the latent feature into the space where the original image classification model works correctly. For efficiency and robustness, we do not want to train a decoder from scratch. Thus, we take PixelShuffle as a simple decoder to adjust the channel dimension and spatial size. You can consider PixelShuffle a fixed, untrainable decoder; we simply tune the encoder to adapt the framework.
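To show why PixelShuffle can act as a fixed, parameter-free decoder, here is a pure-Python sketch of the rearrangement it performs (a from-scratch illustration of the standard $(C r^2, H, W) \to (C, Hr, Wr)$ channel-to-space mapping, not the actual implementation used in the paper):

```python
def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) nested list into (C, H*r, W*r).

    No learned weights: it only moves channel values into spatial
    positions, which is why it can act as a fixed decoder.
    """
    c_rr, h_in, w_in = len(x), len(x[0]), len(x[0][0])
    c_out = c_rr // (r * r)
    out = [[[0.0] * (w_in * r) for _ in range(h_in * r)] for _ in range(c_out)]
    for c in range(c_out):
        for i in range(r):
            for j in range(r):
                for h in range(h_in):
                    for w in range(w_in):
                        out[c][h * r + i][w * r + j] = x[c * r * r + i * r + j][h][w]
    return out

# 4 channels, 1x1 spatial, upscale r=2 -> 1 channel, 2x2 spatial.
x = [[[1.0]], [[2.0]], [[3.0]], [[4.0]]]
y = pixel_shuffle(x, 2)
assert len(y) == 1 and len(y[0]) == 2 and len(y[0][0]) == 2
assert y == [[[1.0, 2.0], [3.0, 4.0]]]
```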
**2. CA & AA**
CA is the accuracy on clean data and AA is the accuracy on adversarial data, which can be found in Experimental Setup (L210). We will highlight their definitions to make them clearer.
**3. attack settings**
As described in the Experimental Setup (L208), the default attack is PGD. We also evaluate our method on AutoAttack (Table 5) and StAdvAttack (Table 3). In the continuous Attack & Defense evaluation (L354-L363; Table 12), we simulate the situation where the defender is also leaked. The results show that both CA and AA continuously improve, which suggests that life-long learning is a promising direction.
**4. StAdvAttack**
We illustrate all attack methods for evaluation in Experimental Setup (L199-L201). StAdvAttack is a strong attack proposed in [1].
[1] Spatially transformed adversarial examples
**5. the dataset in Table 5**
We state the default settings in the Experimental Setup (L208-210); the default dataset is CIFAR-10.
**6. non-trivial inference overhead**
We agree that this trade-off is important in real-time applications; improved robustness comes at some inference cost. For the RaPiD problem, our method needs much less tuning overhead while outperforming other methods on robustness and generalization. In practice, we can flexibly choose the pre-trained model to balance performance and inference overhead. Besides, many inference speed-up techniques are available, such as quantization, pruning, and distillation.
**7. Minor**
Thank you for your careful proofreading, which greatly helps us improve our paper. We will address the mentioned details in the revised version.
---
Rebuttal Comment 1.1:
Comment: Thank you for your time and effort in reviewing our work, and we really appreciate your support.
There is only 1 day left to the rebuttal deadline, and we would like to know whether our responses can successfully address your concerns. Please also let us know if you have other concerns, and we are more than happy to address them.
Best Wishes
Authors
---
Rebuttal Comment 1.2:
Comment: Thanks for the response! Most of my concerns are resolved. I adjusted the score accordingly. However, here are follow-up questions:
> 1. framework mechanism
Now I understand how the decoder works here. Then, my questions are:
(1) For pure language model encoders, such as BERT and GPT-2, how do they perceive images? If you use an image encoder, what is it and are its weights fixed?
(2) For decoder-only transformers like GPT-2, how to attach a decoder like PixelShuffle next to it? GPT-2 does not generate embeddings directly.
> 6. non-trivial inference overhead
It would be great to quantitatively study the overhead across different datasets. For example, I would expect the overhead is too high for MNIST and may be tolerable for ImageNet.
---
Rebuttal 2:
Comment: I wish to express my sincere appreciation for the invaluable assistance and dedication you have provided during the review process of my paper. Your insightful comments and thoughtful suggestions have significantly contributed to the development and refinement of my work. Your efforts have been of immense help in enhancing the quality and clarity of my research. Thank you for your generous support. It is greatly appreciated. We will incorporate all of your suggestions into the revised version.
**1.framework mechanism**
1) As illustrated in L163-L164 and Figure 1, an image is first processed by the trained embedding layer from ViT, producing a sequence embedding that the encoder model can process.
Besides, as shown in Figure 1, most parameters are fixed for generalization and robustness; only the LayerNorm parameters are tunable. We also conduct an additional experiment on the choice of tunable parameters in Table 10.
2) A decoder-based transformer is similar to an encoder-based one; the difference is the attention mask. Thus, we can apply GPT-2 as well. However, causal attention cannot extract the needed features well, as shown in Table 2.
**2. non-trivial inference overhead**
We evaluate GPU memory (peak value) and inference time (average value per batch) on the test set under otherwise-default settings (CIFAR-10) with the following device configuration:
- CPU: 14 vCPU Intel(R) Xeon(R) Gold 6330 CPU @ 2.00GHz
- GPU: 1 NVIDIA RTX 3090(24GB)
Here is the result:
| method | GPU memory (MB) | time (s/batch) |
| --- | --- | --- |
| No-defender | 2186 | 0.07818 |
| CeTaD-BERT | 2638 | 0.08014 |
| CeTaD-BERT-large | 3460 | 0.08496 |
Overall, the defender does not introduce non-trivial inference overhead. However, with larger pre-trained models, memory and latency increase accordingly; this is a trade-off between defense performance and resource consumption. In practice, developers can choose a suitable pre-trained model for their device and scenario.
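The per-batch latency figures above can be reproduced with a simple timing loop. The sketch below uses a stand-in callable as the model; in the actual experiments the model runs on a GPU, and peak memory would be read separately (e.g., via `torch.cuda.max_memory_allocated`), which is not shown here.

```python
import time

# Sketch of the average per-batch inference-time measurement.
# `model` is a stand-in callable; a real GPU measurement would also
# synchronize the device before/after timing and record peak memory
# separately.

def avg_batch_latency(model, batches):
    total = 0.0
    for batch in batches:
        start = time.perf_counter()
        model(batch)            # one forward pass per batch
        total += time.perf_counter() - start
    return total / len(batches)
```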
**3. other attacks**
**black-box attack**
Additionally, we also evaluate our method with a pure black-box attack, Square Attack [1] (n_queries=5000), under the default settings. Here is the result on CIFAR-10 indicating that our method is still effective against Square Attack:
| Type of attack | method | CA(%) | AA(%) |
| --- | --- | --- | --- |
| SquareAttack | None | 93.75 | 00.00 |
| | CeTaD-VIT | 83.20 | 74.02 |
| | CeTaD-VIT-large | 81.45 | 75.39 |
| | CeTaD-BERT | 82.42 | 79.30 |
| | CeTaD-BERT-large | 84.57 | 83.59 |
| L_inf-PGD | CeTaD-VIT | 82.81 | 30.27 |
| | CeTaD-VIT-large | 71.68 | 44.14 |
| | CeTaD-BERT | 68.75 | 44.34 |
| | CeTaD-BERT-large | 66.02 | 48.83 |
Our method, CeTaD, demonstrates stronger defense effectiveness against this black-box attack than against $L_{\infty}$-PGD. Our approach fine-tunes CeTaD using adversarial examples generated by the attacks to detect attack patterns and perform defense, without requiring any information about the attack strategies. As a result, it is not limited to any specific type of attack.
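For readers unfamiliar with score-based black-box attacks, the toy sketch below captures the spirit of Square Attack's random search: the attacker queries only a scalar loss, proposes random square perturbations inside an L_inf ball, and keeps a proposal only if the loss increases. The loss function, image size, and hyperparameters here are illustrative stand-ins, not the actual evaluation setup.

```python
import random

# Toy score-based random-search attack in the spirit of Square Attack:
# no gradients are used -- only the scalar loss value is queried.

def random_search_attack(x, loss_fn, eps=0.1, n_queries=200, patch=2, seed=0):
    rng = random.Random(seed)
    h = len(x)                       # assumes a square h x h "image"
    adv = [row[:] for row in x]
    best = loss_fn(adv)
    for _ in range(n_queries):
        cand = [row[:] for row in adv]
        r = rng.randrange(h - patch + 1)
        c = rng.randrange(h - patch + 1)
        delta = eps if rng.random() < 0.5 else -eps
        for i in range(r, r + patch):
            for j in range(c, c + patch):
                # stay inside the L_inf ball around the clean image
                cand[i][j] = min(x[i][j] + eps,
                                 max(x[i][j] - eps, cand[i][j] + delta))
        val = loss_fn(cand)          # black-box query
        if val > best:               # keep only loss-increasing proposals
            best, adv = val, cand
    return adv, best
```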
**semantic attacks & composite attacks**
We also tested CeTaD’s performance against semantic attacks and composite attacks. The experimental setup was consistent with [2], using WideResNet34 trained on CIFAR-10. CeTaD was trained using a 1-adv-1-clean data setting, with significantly fewer samples than the method in [2], and with a much shorter training time.
| Type of attack | method | CA(%) | AA(%) |
| --- | --- | --- | --- |
| Semantic attacks | CeTaD-BERT-large | 81.23 | 64.59 |
| Semantic attacks | R&P | 91.2 | 3.84 |

Despite the limited training data, the experimental results show that our method achieves good performance against semantic attacks.
Additionally, we also test a composite attack, Composite Adversarial Attack [2] (CAA6: enabled_attack=(0,1,2,3,4,5); order_schedule="scheduled"), under the default settings. The results indicate that our framework can adapt to different scenarios via one-shot adversarial fine-tuning.
| method | CA(%) | AA(%) |
| --- | --- | --- |
| None | 93.75 | 00.00 |
| CeTaD-VIT | 85.74 | 61.52 |
| CeTaD-VIT-large | 69.33 | 52.15 |
| CeTaD-BERT | 78.52 | 65.23 |
| CeTaD-BERT-large | 84.18 | 68.36 |
[1] Square Attack: a query-efficient black-box adversarial attack via random search
[2] Towards Compositional Adversarial Robustness: Generalizing Adversarial Training to Composite Semantic Perturbations
---
Rebuttal Comment 2.1:
Comment: Thanks for your effort in conducting additional evaluations to support the research!
1. framework mechanism
Got it. Now I understand what vision embeddings are used. But for the decoder-only transformer model, how do you extract the features? Is it the output activation per token before the next-token prediction head? If so, how are the activations concatenated to form a single feature for PixelShuffle?
---
Rebuttal 3:
Comment: 1. framework mechanism
We take the output of the last encoder layer, which has the shape (batch_size, patch/sequence_size, hidden_size). PixelShuffle then rearranges (you can consider it as unfolding channels while increasing spatial resolution) the elements of the last two dimensions into the image shape (batch_size, H, W). We will incorporate this information into the revised version and clarify it. | Summary: This paper introduces an approach for defending machine learning models against adversarial attacks using transformer-based vision models. The proposed method, Continuous Transfer and Defense (CeTaD), leverages pre-trained transformers as defenders to provide rapid and effective adversarial protection. The approach is evaluated on various datasets, including MNIST, CIFAR-10, CIFAR-100, and ImageNet-1K, to demonstrate its adaptability and effectiveness across different scenarios. However, the approach leads to some impact on classification accuracy.
Strengths: 1. This paper provides extensive empirical evaluations across multiple datasets and models, showcasing the proposed method's robustness and versatility.
2. The study on transferability of defenders across different datasets and models is insightful, suggesting potential for practical deployment in diverse applications.
3. The discussion on continuous learning and adaptation against new attacks highlights the method's forward-thinking approach to evolving security challenges.
Weaknesses: 1. There are many works using purification or denoising methods in this field. Thus, the proposed method is not novel enough.
2. The paper acknowledges that end-to-end tuning in CeTaD can impact clean data performance, which might be a significant drawback in certain applications.
3. While the paper evaluates the method on several datasets, the focus remains primarily on image classification tasks. Broader evaluations across other types of tasks and data modalities are needed to fully establish its generalizability.
4. CeTaD's effectiveness relies heavily on the availability and quality of pre-trained models, which might not always be feasible in every scenario.
5. The paper lacks a comprehensive discussion of related works in the field, especially those that use denoising layers and plug-and-play mechanisms.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What are the specific computational requirements for deploying CeTaD in a real-time environment (e.g., memory, latency, efficiency), and how do these requirements scale with the size of the dataset?
2. How does the method handle adversarial examples that are generated using techniques not covered in the evaluation?
3. How does the model perform on black-box attacks or universal perturbations?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: 1. The paper notes a notable performance gap for CeTaD, particularly in terms of clean data accuracy, when applying end-to-end tuning.
2. While the paper discusses the potential for scaling the method to larger datasets, the actual computational efficiency and scalability remain areas for further investigation.
3. The current focus is on a single attack method per experiment, which does not fully reflect the diversity of attack methods (e.g., semantic attacks, composite attacks, multiple-threat attacks [1, 2]) encountered in real-world scenarios.
[1] Hosseini and Poovendran. Semantic adversarial examples.
[2] Hsiung et al. Towards compositional adversarial robustness: Generalizing adversarial training to composite semantic perturbations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your careful review and insightful comments. We provide a point-to-point response to your concerns.
**1. related methods & novelty**
As a very promising line of work, there are indeed existing purification and denoising methods, which we introduce in our paper (e.g., L40-L48). That being said, our paper tackles a more challenging scenario and provides innovative technical developments. Specifically, we overcome the hurdle that existing defenses are not sufficiently capable of rapidly defending with limited data or adapting to different application scenarios. Our proposed method features the following novel designs: 1) a pre-trained model is applied for robustness; 2) limited tuning with only few-shot adversarial samples is utilized for rapid scenario adaptation; 3) most parameters are fixed to maintain clean accuracy. These new technical designs yield better performance in robustness and generalization compared to existing methods. Additionally, our method has the potential to support many extensions in future work, as elaborated in the Discussion (L365-L392). Thus, we believe our proposed method makes a sufficient contribution.
**2. clean data performance**
We would like to clarify that there is an inevitable trade-off between generalization performance and robustness enhancement, as widely acknowledged in the literature, which explains the decreased accuracy on clean samples. Our method actually achieves a better trade-off under a more challenging problem setting: with only limited adversarial data for tuning, our method outperforms other possible methods in balancing clean and adversarial accuracy (Table 2).
Besides, our method shows the capability to gradually recover clean-data performance with more training samples (L346-L353; Table 11) or continuous Attack&Defense (L354-L363; Table 12).
**3. other tasks and data modalities**
Thank you very much for your suggestions.
We comprehensively evaluate and analyze our method on different image classification datasets, ensuring the extensiveness of our empirical validation. Additionally, we choose to focus on attacks against the image classification task because they present one of the most significant threats due to the versatile and advanced capabilities of these attacks nowadays, which have already raised significant challenges for the design of defense mechanisms. Therefore, although we agree that it is a valuable future direction to study defenses against attacks on other tasks and domains, it does not diminish the significance of our current work. We will provide more discussions about the possibilities for large-scale, multi-task, and multi-modality applications in future work.
**4. dependence on high-quality pre-trained models in the specific scenario**
It is true that some methods rely on high-quality pre-trained models for the specific scenario (such as DiffPure; L45-L48). However, our method does not need a pre-trained model customized to the scenario. In fact, we find an interesting phenomenon: applying BERT (a model pre-trained on text) works better than ViT (a model pre-trained on images) for the image classification task (Table 2). We believe that uncustomized parameters matter for robustness here.
**5. requirements for real-time environment**
For our method, the original model is fixed; only few-shot samples and limited training are needed. Compared with other methods, ours is neither time-consuming nor computationally expensive. In a specific scenario, the detailed requirements (e.g., memory, latency, efficiency) depend on the selected pre-trained model, the quantity of tuning samples, the available computational resources, the inference settings, and so on.
**6. attacks not covered in the evaluation**
Our method is a general framework that does not restrict the type of attacks. For different attacks, it only requires a small number of adversarial samples, without any additional operations. Following related methods, such as [1], we evaluate our method against common attacks, including PGD, AutoAttack, and StAdvAttack. These white-box attacks are stronger than black-box attacks or universal perturbations. The results demonstrate the effectiveness and robustness of our method against these white-box attacks. Moreover, due to the time constraints of the rebuttal, we will try to add evaluations on other attacks in the final version.
[1] Diffusion models for adversarial purification, ICML-22.
---
Rebuttal Comment 1.1:
Comment: Thanks to the author for the rebuttal. I carefully read the author's rebuttal as well as other review comments. Nonetheless, it appears that some aspects of the weaknesses and questions identified in the initial review have not been addressed in the rebuttal, leaving some pieces of my concerns unresolved.
First, how the memory and latency scale with the size of the dataset/model remains unclear. Can the authors provide concrete examples to demonstrate this? Second, how does CeTaD handle black-box adversarial examples or other unforeseen attacks (e.g., semantic attacks and composite attacks)? This question also remains unanswered. Although white-box attacks are stronger than black-box attacks or universal perturbations, in terms of defense, white-box attacks are easier to defend against than black-box attacks. The authors' rebuttal does not really convince me.
The paper should conduct more thorough evaluations and discussions on the points mentioned above. Therefore, I decided to keep my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your time and effort in reviewing our work, and we really appreciate your support.
There is only 1 day left to the rebuttal deadline, and we would like to know whether our responses can successfully address your concerns. Please also let us know if you have other concerns, and we are more than happy to address them.
Best Wishes,
Authors
---
Rebuttal 2:
Title: New Experimental Results
Comment: Thanks for your further comments. We provide a point-to-point response to your concerns. We will incorporate all of your suggestions into the revised version.
**1. memory&latency**
We evaluate GPU memory (peak value) and inference time (average value per batch) on the test set under otherwise-default settings with the following device configuration:
- CPU: 14 vCPU Intel(R) Xeon(R) Gold 6330 CPU @ 2.00GHz
- GPU: 1 NVIDIA RTX 3090(24GB)
Here is the result:
| method | GPU memory (MB) | time (s/batch) |
| --- | --- | --- |
| No-defender | 2186 | 0.07818 |
| CeTaD-BERT | 2638 | 0.08014 |
| CeTaD-BERT-large | 3460 | 0.08496 |
With larger pre-trained models, memory and latency increase accordingly; this is a trade-off between defense performance and resource consumption.
**2. other attacks**
**black-box attack**
Additionally, we follow your suggestion to evaluate our method with a pure black-box attack, Square Attack [1] (n_queries=5000), under the default settings. Here is the result on CIFAR-10 indicating that our method is still effective against Square Attack:
| Type of attack | method | CA(%) | AA(%) |
| --- | --- | --- | --- |
| SquareAttack | None | 93.75 | 00.00 |
| | CeTaD-VIT | 83.20 | 74.02 |
| | CeTaD-VIT-large | 81.45 | 75.39 |
| | CeTaD-BERT | 82.42 | 79.30 |
| | CeTaD-BERT-large | 84.57 | 83.59 |
| L_inf-PGD | CeTaD-VIT | 82.81 | 30.27 |
| | CeTaD-VIT-large | 71.68 | 44.14 |
| | CeTaD-BERT | 68.75 | 44.34 |
| | CeTaD-BERT-large | 66.02 | 48.83 |
Our method, CeTaD, demonstrates stronger defense effectiveness against this black-box attack than against $L_{\infty}$-PGD. Our approach fine-tunes CeTaD using adversarial examples generated by the attacks to detect attack patterns and perform defense, without requiring any information about the attack strategies. As a result, it is not limited to any specific type of attack.
**semantic attacks & composite attacks**
We also tested CeTaD’s performance against semantic attacks and composite attacks. The experimental setup was consistent with [2], using WideResNet34 trained on CIFAR-10. CeTaD was trained using a 1-adv-1-clean data setting, with significantly fewer samples than the method in [2], and with a much shorter training time.
| Type of attack | method | CA(%) | AA(%) |
| --- | --- | --- | --- |
| Semantic attacks | CeTaD-BERT-large | 81.23 | 64.59 |
| Semantic attacks | R&P | 91.2 | 3.84 |

Despite the limited training data, the experimental results show that our method achieves good performance against semantic attacks.
Additionally, we also test a composite attack, Composite Adversarial Attack [2] (CAA6: enabled_attack=(0,1,2,3,4,5); order_schedule="scheduled"), under the default settings. The results indicate that our framework can adapt to different scenarios via one-shot adversarial fine-tuning.
| method | CA(%) | AA(%) |
| --- | --- | --- |
| None | 93.75 | 00.00 |
| CeTaD-VIT | 85.74 | 61.52 |
| CeTaD-VIT-large | 69.33 | 52.15 |
| CeTaD-BERT | 78.52 | 65.23 |
| CeTaD-BERT-large | 84.18 | 68.36 |
[1] Square Attack: a query-efficient black-box adversarial attack via random search
[2] Towards Compositional Adversarial Robustness: Generalizing Adversarial Training to Composite Semantic Perturbations
---
Rebuttal Comment 2.1:
Comment: Thanks to the author for the experimental results. My main concerns have been addressed, and I believe these results can further demonstrate the applicability of the proposed approach, especially in defending against unseen attacks. I recommend that the authors incorporate these results into the revision. I have updated my scores accordingly.
---
Reply to Comment 2.1.1:
Comment: I wish to express my sincere appreciation for the invaluable assistance and dedication you have provided during the review process of my paper. Your insightful comments and thoughtful suggestions have significantly contributed to the development and refinement of my work. Your efforts have been of immense help in enhancing the quality and clarity of my research. Thank you for your generous support. It is greatly appreciated. We will incorporate all of your suggestions into the revised version. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Separation and Bias of Deep Equilibrium Models on Expressivity and Learning Dynamics | Accept (poster) | Summary: This paper offers a comparative analysis of DEQ and FNN, examining their differences in terms of structure and performance. By investigating the learning dynamics, the authors provide insights into the implicit bias of DEQ. Their theoretical findings demonstrate the advantages of DEQ over FNN in specific scenarios and quantify key learning properties unique to DEQ.
Strengths: - This paper is well-written and has a clear logic.
- There is relatively little theoretical research on DEQ, and this paper serves as a valuable contribution.
- From the perspective of separation, the authors discuss the relationship between the width of DEQ and the number of layers in FNN. They also use gradient flow and gradient descent to explore DEQ's implicit bias, clearly explaining the properties of DEQ.
Weaknesses: Factors influencing DEQ's separation and bias need further discussion. For instance, is DEQ's implicit bias caused by initialization, the use of implicit differentiation in solving DEQ, or different gradient descent methods? The reasons behind its differences from FNN should be explained in detail.
Besides, the following relevant papers should be cited and discussed:
[1] Deep equilibrium networks are sensitive to initialization statistics, icml 2022
[2] GEQ: Gaussian Kernel Inspired Equilibrium Models, neurips 2023
[3] Wide Neural Networks as Gaussian Processes: Lessons from Deep Equilibrium Models, neurips 2023
[4] Deep Equilibrium Models are Almost Equivalent to Not-so-deep Explicit Models for High-dimensional Gaussian Mixtures, icml 2024
Technical Quality: 3
Clarity: 3
Questions for Authors: - Can the implicit bias of DEQ be understood from a lazy training perspective?
- In Theorem 4.1, how is $\alpha$ defined? For example, if $\alpha$ is large and close to 1, then $m^{1-\alpha}$ would be particularly small, implying that $N_f$ would have almost no width.
- Will the conclusion differ under different circumstances of DEQ? For instance, if the inner structure is a ResNet or Transformer, will the conclusion change? Could the authors provide more experiments testing different backbones?
- Additionally, does the conclusion hold for various downstream tasks of DEQ, including classification, regression, or other tasks?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your careful review and thoughtful questions and comments. Please see our response below.
> Factors influencing DEQ's separation and bias need further discussion. For instance, is DEQ's implicit bias caused by initialization, the use of implicit differentiation in solving DEQ, or different gradient descent methods? The reasons behind its differences from FNN should be explained in detail.
Thanks for your suggestion. Our result on the bias of DEQ is indeed influenced by the optimization algorithm, the initialization, and possibly by the use of implicit differentiation (although we do not consider implicit differentiation in this paper). It would be interesting to study the bias caused by other gradient algorithms such as Adam. We believe that different gradient algorithms may induce different biases in DEQ, as it has been shown empirically in many works that different gradient algorithms cause different implicit biases in FNNs. The initialization is also an important factor, and our theorems likewise require assumptions on the initialization. We consider the gradient descent algorithm and a standard initialization in this paper, since the paper proposes the first analysis of the implicit regularization of DEQ beyond the NTK regime. Different algorithms and initializations will be studied in future work. Regarding the reasons why DEQ has different expressive power and bias from FNN, we believe it is mainly due to the difference in architecture. We will add discussions to elucidate these issues in the final version of the paper.
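As a concrete (toy) illustration of optimization- and initialization-induced bias, gradient descent on an over-parameterized diagonal linear model picks out one particular interpolating solution among infinitely many. This is a generic diagonal linear network, not the paper's DEQ; the data and initialization below are illustrative.

```python
# Gradient descent on f(x) = sum_i (u_i * v_i) x_i for a single
# under-determined example: every beta with beta_1 + beta_2 = 1 interpolates
# the data, but GD from a small symmetric initialization converges to one
# specific solution, illustrating implicit bias. All numbers are illustrative.

def train(u, v, x, y, lr=0.05, steps=2000):
    n = len(u)
    for _ in range(steps):
        pred = sum(u[i] * v[i] * x[i] for i in range(n))
        g = pred - y  # d/d(pred) of the squared loss 0.5 * (pred - y)**2
        u, v = ([u[i] - lr * g * v[i] * x[i] for i in range(n)],
                [v[i] - lr * g * u[i] * x[i] for i in range(n)])
    return [u[i] * v[i] for i in range(n)]  # effective weights beta_i
```

With x = [1, 1], y = 1, and symmetric initialization u = v = [0.1, 0.1], the effective weights converge to an interpolant satisfying beta_1 + beta_2 = 1 selected by the dynamics, not by the data alone.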
> Besides, the following relevant papers should be cited and discussed.
Thanks for your valuable feedback. We will include all papers you mentioned in the final version of the paper.
> Can the implicit bias of DEQ be understood from a lazy training perspective?
We think that our result on the bias of DEQ is beyond the lazy-training regime because the diagonal linear DEQ we consider is essentially a nonlinear model, albeit with some simplifications. Unlike the lazy-training scenario, we do not require the DEQ to be sufficiently wide, nor do we require the updates to stay in a small domain near the initialization. Therefore, our result on the bias of DEQ is beyond the lazy-training perspective. Nevertheless, our proofs of the convergence of DEQ do utilize some techniques similar to those of NTK: we show that the simplified DEQ is convex in a relatively large domain under some assumptions on the initialization, and we use induction to ensure the updates remain within this domain throughout training. Thank you for your valuable question, and we hope our response addresses your concerns.
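For reference, the forward pass of a (general, nonlinear) DEQ computes a fixed point of its layer map. The sketch below solves z = tanh(Wz + Ux) by naive iteration, where the small illustrative weights make the map a contraction so the iteration converges; this is a generic DEQ sketch, not the simplified diagonal linear model analyzed in the paper.

```python
import math

# Minimal sketch of a DEQ forward pass: the output is the fixed point of
# z = tanh(W z + U x), found here by naive iteration. The weights are
# illustrative and chosen small enough that the map is a contraction.

def deq_forward(W, U, x, tol=1e-8, max_iter=1000):
    n = len(W)
    z = [0.0] * n
    for _ in range(max_iter):
        z_new = [
            math.tanh(sum(W[i][j] * z[j] for j in range(n))
                      + sum(U[i][j] * x[j] for j in range(len(x))))
            for i in range(n)
        ]
        if max(abs(a - b) for a, b in zip(z_new, z)) < tol:
            return z_new
        z = z_new
    return z
```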
> In Theorem 4.1, how is α defined? For example, if α is large and close to 1, then m1−α would be particularly small, implying that Nf would have almost no width.
We are sorry for the confusion in the statement of Theorem 4. In our proof, we actually assume that the width $k$ is upper bounded by $2^{m^{1-\alpha}-1}$ instead of $2^{m^{1-\alpha}-2}$ (see the equation before line 424 in the Appendix), so that $k$ can take the value $1$ even when $\alpha$ is large and very close to $1$. Although the original statement is not wrong, as the inapproximability pertains to a shallower FNN with less expressive power, it does raise the issue of how $\alpha$ should be chosen to avoid the FNN having no width. We will address this issue in the final version of the paper. Here, $\alpha$ is a constant in $(0,1)$, making $2^{m^{1-\alpha}-1}$ sub-exponentially large as $m\to \infty$. Thank you very much for your valuable feedback.
> Will the conclusion differ under different circumstances of DEQ? For instance, if the inner structure is a ResNet or Transformer, will the conclusion change? Could the authors provide more experiments testing different backbones?
It is very interesting to consider the expressivity and bias of DEQ when the inner structure is a ResNet or a Transformer. The expressivity and bias of DEQ depend on the specific inner structure, so the conclusions may change. However, this paper aims to provide the first analysis of the implicit regularization of DEQ beyond the NTK regime, so we focus on the fundamental FNN as the inner structure. We will analyze the properties of DEQ with other inner structures in the future. To further investigate this issue, we use Grad-CAM [1] to generate the saliency map of Multiscale DEQ (MDEQ [2], which is a variant of ResNet-DEQ) and compare it with that of ResNet-50 trained for image classification on ImageNet. The saliency map highlights the regions that are crucial for the model's prediction, which can be regarded as the features. It shows that MDEQ allocates attention to more features, such as the fences and trees in the background, indicating that MDEQ may generate dense features. Please see the attached PDF file in the Author Response for details.
***
[1] Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization, ICCV 2017.
[2] Multiscale Deep Equilibrium Models, NeurIPS 2020.
> Additionally, does the conclusion hold for various downstream tasks of DEQ, including classification, regression, or other tasks?
The conclusions for the expressive power of DEQ are not influenced by downstream tasks. However, the bias of DEQ may differ depending on the specific downstream task. For example, in classification tasks, the loss function is typically chosen as the logistic loss or cross-entropy loss, which may lead to different analyses compared to the $L^2$ loss in regression problems. The implicit bias of linear models in classification and regression problems has been studied separately in numerous works. We think it is interesting to further investigate the bias of DEQ in various tasks such as classification or sequence modeling, and we leave this as a future direction.
- The first separation result shows that there exists a width $m$ ReLU-DEQ that cannot be well-approximated by a ReLU-based fully-connected neural network (FNN) with sub-linear depth and sub-exponential (super-polynomial) width.
- The second separation result shows that a rational function, which can be $\epsilon$-approximated by a ReLU-DEQ with $O(\epsilon^{-1})$ width and weights, cannot be well-approximated by a ReLU-FNN with constant depth and sub-exponential width (in dimension) unless its weights are exponentially large (in dimension).
- They also proved a certain implicit bias result for linear DEQ. They claim that the implicit bias implies that GF (or GD) learns a dense feature.
- Under the generalization on the unseen domain (GOTU) setting, they proved that linear DEQ with a bias term achieves a smaller GOTU error than a diagonal linear neural network.
- Overall, they hypothesize that DEQ is beneficial for learning certain high-frequency features.
Strengths: - The paper is well-written and well-organized. If the reader has knowledge about the expressive power of neural networks and the implicit bias of diagonal linear networks, the paper is easy to follow.
- The contribution seems clear if all the proofs are correct. The authors propose novel theoretical results on the properties of DEQ. Also, their result explicitly shows the advantage of DEQ over FNN.
Weaknesses: - W1. It is unclear whether the comparison between DEQ and FNN is fair.
- Of course, in terms of memory consumption, DEQ is comparable to a constant-depth FNN of the same width. However, it might be unfair to compare them if a DEQ requires a lot more computational cost than FNN during training. For example, suppose a DEQ needs exponentially more computational cost than FNN. Then it might be plausible to compare a DEQ with an FNN of exponentially large depth. Therefore, the authors must provide a comparison in terms of both memory consumption and computational cost (during training) to justify the separation results and claim that DEQ is better than FNN.
- W2. Issues in the proof of Theorem 2(B). (Section 4.3)
- The proof repeatedly uses an inequality $| \hat{u}(\hat{v}(z)) - \hat{v}(\hat{v}(z)) | \le | \hat{u}(z) - \hat{v}(z) |$. However, I cannot understand why this is true. If it isn’t wrong, the authors should provide proof of this. For the same reason, I am a bit suspicious about the verity of Lemma 2.
- W3. Critical issues in Section 5 (Implicit bias part).
- If you want to discuss the implicit bias of a given model in Equation 9, the model must be over-parameterized ($N\le d$) at first so that the model can fit all the data points. In other words, if the model is under-parameterized ($N>d$), the model can never fit all the training examples and the training loss cannot even reach zero. However, I cannot find any such discussion. (Please correct me if I am wrong.)
- You can find that the function $q(x)$ is nonconvex and unbounded below unless $x>0$. However, the solution space (defined as $X\beta = y$) may include a vector $\beta$ with negative components. Then can you guarantee that the function $Q(\beta)$ is convex (and bounded below) over the solution space with ease? For this reason, I strongly believe that the validity of Theorem 3 must be reconsidered.
- W4. The main paper must provide pointers to the proofs of the theorems.
- W5. Minor comments on typos & mistakes
- Equation (1): “$1\le i \le L-1$”, “$y = W_L z^{L}$”
- Equation (3): “$A = (0, \cdots, W_L)$”
- Theorem 1 must include that $m\ge 3$; otherwise, it allows the width (of FNN) less than 1. Also, the function $N_f$ must be defined on $[0, 1]^d$.
- Proposition 1 must specify $\sigma=$ReLU.
- Since the paper focuses on linear DEQ from Section 5, the authors must make clear that they consider ReLU-DEQ in Section 4. This must be considered in the explanation in Section 3 (especially in line 112).
- Lemma 1 must include the size of $\mathbf{b} \in \mathbb{R}^q$.
- The last sentence of Lemma 2 is a bit awkward because it has two “for all” phrases. You may replace “Then for any $x \in [0,1]^d$,” with “Then,”.
- Line 197: “$x_1 < 1-\text{poly}(d)^{-1}$”.
- Lines 212-213: it would be better to mention that $\sigma$ is no longer a ReLU.
- Line 219: “initialized”
- Line 221: the function $q(x)$ must have a dependency on the index $i$. For example, write as $Q(\boldsymbol{\beta}) = \sum_{i=1}^d q_i (\beta_i)$.
- Line 225: “$\tfrac{1}{2} \sum_{i=1}^d \tfrac{1}{\beta_i^2}$”
- Line 233: a full stop (’.’) is missing.
- Line 239: “constraint”.
- Line 248, see the line next to Equation (13): it seems that $X$ (next to the expectation symbol) must be a small, bold letter ($\mathbf{x}$).
I will definitely raise my score if all my concerns are properly addressed.
Technical Quality: 2
Clarity: 3
Questions for Authors: - Q1. Can you explain what $f(\emptyset)$ is in Equation 13? Is it an arbitrary number?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The main paper does not clearly provide its limitations. However, as provided in the checklist, the main limitation is the simplified models for theoretical analyses. This barrier is difficult to overcome, which is understandable given the paper’s novelty.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your careful review and thoughtful questions and comments. Please see our response below.
> W1. It is unclear whether the comparison between DEQ and FNN is fair. …. Therefore, the authors must provide a comparison in terms of both memory consumption and computational cost (during training) to justify the separation results and claim that DEQ is better than FNN.
We admit that memory consumption and computational cost are vital in determining the model performance in real applications. However, considering these issues is beyond the scope of the expressivity of DEQ since they are also determined by the optimization and forward and backward algorithms used. The main goal for our comparison is to study the bias and potential advantage of DEQ compared to FFN on expressivity with a comparable number of parameters. This analysis provides intuitions for understanding the model bias of DEQ.
However, to address your concerns, we also provide some comparisons based on the computational cost of the forward propagation of DEQs. In our separation result (Theorem 1), the DEQ needs only one iteration to converge in the forward propagation using fixed-point iteration, making its computational cost comparable to that of a constant-depth FFN with the same width. The DEQ in Theorem 2 needs $\log_2(\varepsilon^{-1}) + \log_2(b-a)$ iterations to achieve an $O(\varepsilon)$-solution using the bisection method starting from an interval $[a,b]$ of constant length, resulting in a cost comparable to that of an $O(\log(\varepsilon^{-1}))$-depth FFN with the same width. As for memory, DEQ's usage is comparable to that of a constant-depth FFN. Since the DEQs we consider in our theorems have fewer parameters, they are more advantageous than the corresponding FFNs in memory consumption. It is interesting to consider the computational and memory cost in a more general setting, and we will explore this in the near future. Additionally, we add an experiment showing that DEQ can achieve better performance with similar FLOPs per iteration compared to various FFNs. Please see our attached PDF file in Author Response for details.
> W2. Issues in the proof of Theorem 2(B). (Section 4.3)
We are sorry for the confusion. Lemma 2 should be stated as: “Then for any $\mathbf{x} \in [0,1]^d$, it holds that $$|z_u - z_v| \leq \min \left\{ (1-L_u)^{-1}, (1-L_v)^{-1} \right\} \cdot \max_{z\in \Omega}|u(z,\mathbf{x}) - v(z, \mathbf{x})|.$$”
The inequality you mention should be $|\hat{u}\circ \hat{v}(z) - \hat{v} \circ \hat{v}(z)| \leq \max_{z\in \Omega}|\hat{u}(z) - \hat{v}(z)|$. This is why we require the ranges of $u(z,\mathbf{x})$ and $v(z,\mathbf{x})$ to lie in $\Omega$ in Lemma 2. This does not affect the proof of Theorem 2, since we construct the approximation in the infinity norm to achieve $\lVert \tilde{h}-\tilde{g}\rVert _{\infty} \leq \text{poly}(d)^{-1}$, where $\tilde{h}$ and $\tilde{g}$ correspond to $u$ and $v$ in Lemma 2. We will carefully revise the statement of Lemma 2 and all related statements in the final version of the paper.
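For completeness, the corrected bound follows from a standard contraction argument. A sketch, assuming $u(\cdot,\mathbf{x})$ and $v(\cdot,\mathbf{x})$ are Lipschitz in $z$ with constants $L_u, L_v < 1$ and writing $z_u = u(z_u,\mathbf{x})$, $z_v = v(z_v,\mathbf{x})$: by the triangle inequality,
$$|z_u - z_v| = |u(z_u,\mathbf{x}) - v(z_v,\mathbf{x})| \leq |u(z_u,\mathbf{x}) - u(z_v,\mathbf{x})| + |u(z_v,\mathbf{x}) - v(z_v,\mathbf{x})| \leq L_u |z_u - z_v| + \max_{z\in \Omega}|u(z,\mathbf{x}) - v(z,\mathbf{x})|,$$
so $|z_u - z_v| \leq (1-L_u)^{-1}\max_{z\in \Omega}|u(z,\mathbf{x}) - v(z,\mathbf{x})|$; exchanging the roles of $u$ and $v$ yields the $(1-L_v)^{-1}$ factor, hence the minimum.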
> Critical issues in Section 5 (Implicit bias part).
> If you want to discuss the implicit bias of a given model in Equation 9, the model must be over-parameterized ($N\le d$) at first so that the model can fit all the data points.
We agree that the model must be over-parameterized to fit all data points. In Section 5, we also consider the over-parameterized setting by taking it as a condition or assumption for the theorems. For example, Theorem 3 states, “Suppose the gradient flow converges to some $\hat{\boldsymbol{\beta}}$ satisfying $\mathbf{X}\hat{\boldsymbol{\beta}} = \mathbf{y}$”, which assumes that the model can fit all training examples. We will add clarifying statements at the beginning of Section 5 to emphasize that we consider over-parameterized DEQs in this section to avoid any confusion.
> Then can you guarantee that the function Q(β) is convex (and bounded below) over the solution space with ease? For this reason, I strongly believe that the validity of Theorem 3 must be reconsidered.
Theorem 3 assumes that the model is initialized with $\beta_i(0)>0$ and that the gradient flow converges to a global minimum $\hat{\boldsymbol{\beta}}$. From our proof, the dynamic of $\boldsymbol{\beta}(t)$ is $\dot{\boldsymbol{\beta}}(t) = -(\mathbf{X}^T \mathbf{r}(t)) \odot \boldsymbol{\beta}(t)^{\odot 4}$. Thus, every entry of $\boldsymbol{\beta}(t)$ remains positive throughout the training process, since the trajectory cannot cross $0$ by the continuity of the gradient flow. On the space $\boldsymbol{\beta}>0$, $Q(\boldsymbol{\beta})$ is convex and has a unique minimum; therefore, we believe that Theorem 3 is valid. As for why Theorem 3 only admits solutions with positive entries, this is because we simplify the model by setting the linear transformation in the last layer of the DEQ to be an all-one vector for tractability and simplicity. To allow solutions with negative entries, we can set the corresponding entry (say entry $i$) of the linear transformation to $-1$, and the corresponding $q_i(x)$ in Theorem 3 will still be $q_i(x) = \frac{1}{2x^2} - \beta_i(0)^{-3}x$ with $\beta_i(0)<0$, which is convex on the space $x<0$. We will include a remark and a discussion on these issues in the final version of the paper.
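As a quick sanity check on the convexity claim, one can verify numerically that $q''(x) = 3x^{-4} > 0$ on $x>0$ (a minimal sketch; the constant $c>0$ below is illustrative, standing in for $\beta_i(0)^{-3}$, and is not a value from the paper):

```python
# Numeric sanity check: q(x) = 1/(2 x^2) - c*x has q''(x) = 3/x^4 > 0,
# so q is convex on x > 0.  Here c > 0 is an illustrative constant
# standing in for beta_i(0)^{-3}.
def q(x, c=1.0):
    return 1.0 / (2.0 * x * x) - c * x

h = 1e-3
xs = [0.1 + 0.05 * k for k in range(200)]        # grid over (0, 10.05]
second_diffs = [q(x - h) + q(x + h) - 2.0 * q(x) for x in xs]
assert all(d > 0 for d in second_diffs)          # discrete convexity on x > 0
```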
> The main paper must provide pointers to the proofs of the theorems.
Thanks for your valuable suggestions. We will provide pointers to the proofs for all theorems in the final version of the paper.
> Minor comments on typos & mistakes.
Thanks for your careful review. We will revise all the typos and confusions in the final version of the paper.
> Q1. Can you explain what f(∅) is in Equation 13? Is it an arbitrary number?
$\hat{f}(\emptyset)$ is the Fourier coefficient of $f$ on the empty set $\emptyset$, defined as $\hat{f}(\emptyset) = \mathbb{E}_{\mathbf{x} \sim \{-1,1\}^d}[f(\mathbf{x})]$, i.e., the expectation of $f$ under the uniform distribution on the Boolean cube. We will add a statement to clarify this in the final version of the paper.
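To make the definition concrete, a tiny enumeration over the Boolean cube (the function $f$ below is a toy example, not the target function from the paper):

```python
from itertools import product

# \hat{f}(\emptyset) = E_{x ~ Unif({-1,1}^d)}[f(x)]: the empty-set
# Fourier coefficient is the mean of f over the Boolean cube.
# Toy example: f(x) = x1*x2 + 1.
def f(x):
    return x[0] * x[1] + 1

d = 3
cube = list(product([-1, 1], repeat=d))
f_hat_empty = sum(f(x) for x in cube) / len(cube)
assert f_hat_empty == 1.0   # E[x1*x2] = 0 under the uniform distribution
```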
---
Rebuttal 2:
Title: Remaining Questions
Comment: Thank you for your time and effort in preparing the author's rebuttal. Still, I have some minor remaining questions/comments.
* W1: I am mostly happy with your response. But could you provide some references or simple proofs for your claims about computational costs?
* W2: This response answers my question.
* W3-1: In fact, after posting the review, I found there was already a remark about "overparameterized regime" in the paper (Line 218). Thus, I want to withdraw my statement "I cannot find any such discussion." Nevertheless, it would be meaningful if the authors clarified that they are only interested in the case where $N\le d$ in order to study the implicit bias of diagonal DEQ. Also, putting a comment about "$\mu_{\min}>0$ can hold when $N\le d$ and the data matrix $\boldsymbol{X}$ is of full rank" would strengthen the writing. Moreover, I recommend putting a remark like "the current implicit bias holds for GD... other optimization algorithms may lead to different implicit biases, which we leave as an interesting future work...". I don't think that not studying other algorithms than GD is a weakness of the paper.
* By the way, can we even train a usual DEQ with GD? If it isn't, it might be a problem because the paper's implicit bias result can be seen as an artificial result just for writing a paper. Nonetheless, I understand it is difficult to study the implicit bias of the true learning algorithm for DEQ and I don't want to decrease my score.
* W3-2. This might be a stupid question, but could you explain why $\beta(t)$ remains positive (for every coordinate) in more detail?
Thank you again for your reply. After getting responses to these additional questions, I will reconsider my score. Also, if there are missing details for the response above due to the space limit, please leave them as a comment here, then I'll happily read them all.
---
Rebuttal Comment 2.1:
Comment: We sincerely thank you for taking the time to review our response. We would like to further respond to your questions and comments:
__Response to W1__:
* We provide the proofs of our claims about the computational costs as follows. For the DEQ in Theorem 1 (see Lines 405-406 in Appendix A.1), we claim that it needs one step to converge using fixed-point iteration. This is achieved by initializing the iteration point $\mathbf{z}^0$ according to Eq. (16). (While it may seem tricky to initialize $\mathbf{z}^0$ at the fixed point, the logic is that we first observe convergence under this initialization and thereby conclude that this $\mathbf{z}^0$ is indeed the fixed point.) Then, from the definitions of $\mathbf{W}, \mathbf{U}$ and $\mathbf{b}$, each entry $t$ of $\mathbf{z}^1$ can be calculated as
$$z^1_t =\sigma\left(\sum_{i=1}^{t-1} w_{ti}z^0_i + u_{t1} x_1 -b_t \right) = \sigma\left(-\sum_{i=1}^{t-1} 2^{t-i+1}z^0_i + 2^t x_1 -1\right) = z^0_t,$$
showing that it converges in one iteration. Additionally, suppose we set $\mathbf{z}^0 = \mathbf{0}$, which may be considered less ad hoc, and denote the fixed point by $\mathbf{z}^*$. Then, using the lower-triangularity of $\mathbf{W}$, we can show by induction that $z^k_t = z^*_t$ for all $1\leq t \leq k$. Thus, convergence is achieved in at most $m$ iterations, which still remains far from exponential.
* For the DEQ in Theorem 2, we claim that it needs $\log_2(\varepsilon^{-1}) + \log_2(b-a)$ iterations to achieve an $O(\varepsilon)$-solution using the bisection method. You can refer to Page 29 of [1] for the convergence rate of the bisection method. The proof can be stated as follows. Given the DEQ, we can derive a revised DEQ, i.e., $z = \mathbf{V}\sigma(\mathbf{W}z +\mathbf{U}\mathbf{x} + \mathbf{b})$, based on our Lemma 1 (see Lines 476-477 for the construction; the revised DEQ can be derived inversely through a rank-one decomposition, and any version of it works). Then, for the function $z - \mathbf{V}\sigma(\mathbf{W}z +\mathbf{U}\mathbf{x} + \mathbf{b})$, we choose some $[a^0, b^0]$ as the initial interval such that this function has opposite signs at $a^0$ and $b^0$. After $k$ iterations, the solution $z^*$ lies within an interval $[a^k, b^k]$. From the definition of the algorithm, we know that $b^{i+1} - a^{i+1} = 2^{-1} (b^i - a^i)$, leading to $b^k - a^k \leq 2^{-k}(b^0-a^0)$. To achieve an $O(\varepsilon)$-solution, i.e., $2^{-k}(b^0-a^0) \leq \varepsilon$, it suffices to take $k \geq \log_2{\varepsilon^{-1}} + \log_2(b^0-a^0)$, which proves our claim.
***
[1] E. Süli and D. Mayers, An Introduction to Numerical Analysis, Cambridge University Press, 2003.
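Both convergence claims can also be checked numerically; below is a minimal pure-Python sketch (the strictly lower-triangular $\mathbf{W}$ and the scalar DEQ used for the bisection part are random/toy instances, not the exact constructions from the proofs):

```python
import random
random.seed(0)

relu = lambda t: max(t, 0.0)

# Claim 1: for strictly lower-triangular W, the iteration
# z^{k+1} = sigma(W z^k + c) started from z^0 = 0 reaches the fixed
# point in at most m steps (entry t settles after t steps, by
# induction on the triangular structure).  W, c are toy instances.
m = 6
W = [[random.uniform(-1.0, 1.0) if j < i else 0.0 for j in range(m)]
     for i in range(m)]
c = [random.uniform(-1.0, 1.0) for _ in range(m)]

def step(z):
    return [relu(sum(W[i][j] * z[j] for j in range(m)) + c[i]) for i in range(m)]

z = [0.0] * m
for _ in range(m):
    z = step(z)
assert z == step(z)   # already a fixed point after m iterations

# Claim 2: bisection on g(z) = z - v*sigma(w z + u x + b) halves the
# bracketing interval each step, so b_k - a_k = 2^{-k} (b_0 - a_0).
def g(z, x=0.5, v=0.8, w=0.9, u=1.0, b=0.1):
    return z - v * relu(w * z + u * x + b)

a, b = 0.0, 8.0
assert g(a) < 0.0 < g(b)          # valid initial bracket
k = 20
for _ in range(k):
    mid = (a + b) / 2.0
    if g(mid) <= 0.0:
        a = mid
    else:
        b = mid
assert b - a == 8.0 * 2.0 ** (-k)  # geometric shrinkage of the interval
```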
__Response to W3-1__: Thank you very much for your suggestions. We will add clarifying statements to emphasize that we only consider $N\leq d$, and we will include the comment and remark you mentioned in Section 5 and the conclusion section in the final version of the paper.
* Regarding your question, we admit that it is challenging to analyze the training of vanilla DEQ with GD except in some special cases, such as the NTK regime. Since this paper aims to analyze the bias of DEQ beyond the NTK regime, we make simplifications to the model architecture to ensure tractability. Although it has some limitations, our model retains the implicit nature of DEQ and is essentially a nonlinear model. Thus, we believe our result provides some insight into the bias of standard DEQ (please refer to the Author Response and our response to Reviewer HjzA on the weakness and Question 5 for more details). As for our focus on GD, it is worth noting that the primary optimization algorithms used for DEQ in real applications are also gradient-based, such as variants of SGD and Adam (e.g., see [2,3] in our references). Since our paper proposes the first study of the bias of DEQ (beyond the NTK regime), we focus on the fundamental deterministic gradient algorithms. It is an interesting question to study the convergence and bias of the usual DEQ in more general settings or with different optimization algorithms, such as stochastic methods. We will include statements in our conclusion section and leave these as future work.
__Response to W3-2__:
According to Eq. (31)-(33) in Appendix A.3, the GF dynamic yields
$$\mathbf{\beta}(t)=\left(3\mathbf{X}^T \mathbf{v}(t)+\mathbf{\beta}(0)^{\odot -3}\right)^{\odot -\frac{1}{3}},$$
where $\mathbf{v}(t) = \int_{0}^t \left(\mathbf{X}\mathbf{\beta}(s) - \mathbf{y}\right) \mathrm{d}s$.
Thus, it is impossible for any entry of $\mathbf{\beta}(t)$ to take the value $0$ at any finite time $\tau$, since no entry of $\mathbf{X}^T \mathbf{v}(\tau)$ can be infinite. Therefore, by the continuity of $\mathbf{\beta}(t)$ and the fact that $\mathbf{\beta}(0)>0$, we conclude that $\mathbf{\beta}(t)$ remains positive throughout the training process.
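The positivity argument can also be illustrated with a simple Euler discretization of the dynamic $\dot{\mathbf{\beta}}(t) = -(\mathbf{X}^T \mathbf{r}(t)) \odot \mathbf{\beta}(t)^{\odot 4}$ on a tiny synthetic instance (the data, initialization, and step size below are illustrative, not taken from the paper's experiments):

```python
# Euler discretization of the gradient-flow dynamic
#   beta'(t) = -(X^T r(t)) * beta(t)^4,   r(t) = X beta(t) - y,
# on a tiny synthetic over-parameterized instance (N = 1 < d = 2).
X = [[1.0, 1.0]]
y = [3.0]
beta = [0.5, 0.5]            # positive initialization beta(0) > 0
lr, steps = 0.01, 5000

for _ in range(steps):
    r = [sum(X[n][i] * beta[i] for i in range(len(beta))) - y[n]
         for n in range(len(y))]
    grad = [sum(X[n][i] * r[n] for n in range(len(y))) for i in range(len(beta))]
    beta = [bi - lr * gi * bi ** 4 for bi, gi in zip(beta, grad)]

assert all(bi > 0.0 for bi in beta)      # every entry stays positive
assert abs(sum(beta) - y[0]) < 1e-3      # the flow fits X beta = y
```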
Thank you again for your valuable comments and questions. We hope our responses address your concerns. Please feel free to reach out if you have any further questions. | Summary: This paper explores the theoretical foundations of Deep Equilibrium Models (DEQs) and their advantages over Fully Connected Neural Networks (FNNs). It demonstrates that DEQs have superior expressive power compared to FNNs of similar size, particularly in generating linear regions and approximating steep functions, and it also analyzes the implicit regularization and bias effects of DEQs in learning dynamics, which can lead to improved generalization, especially in out-of-distribution tasks. The paper supports these theoretical findings with empirical results.
Strengths: The paper addresses a significant gap in the literature by providing a detailed theoretical analysis of the expressive power and learning dynamics of DEQs, which is a relatively new model in the field of machine learning. By providing a deeper understanding of DEQs' expressivity and learning dynamics, the paper opens new avenues for research in neural network architectures and their applications.
Weaknesses: The assumption in Equation (9) that DEQs favor dense features might be misleading. The model only updates the diagonal in the weights while keeping the rest as zeros, effectively constraining the model to be sparse. It is straightforward to claim that the diagonals are dense since that is the only part of the model being utilized.
The paper misses citing some important related works, such as [1] which studies the convergence of linear DEQ, and [2] which explores information propagation in DEQ and shows its equivalence to a Gaussian process in the infinite-width limit.
[1] On the Theory of Implicit Deep Learning: Global Convergence with Implicit Layers, ICLR 2021
[2] Wide Neural Networks as Gaussian Processes: Lessons from Deep Equilibrium Models, NeurIPS 2023
Technical Quality: 3
Clarity: 2
Questions for Authors: - DEQs differ from FNNs in three distinct ways: shared weights, input injection, and infinite depth. While the paper claims that DEQs are superior to FNNs in terms of expressivity due to their infinite depth, can the authors comment on or envision the significance of the other two features (shared weights and input injection)? How do these contribute to the overall performance and expressivity of DEQs?
- How do the authors envision the practical applications of their findings? Are there specific domains where DEQs could significantly outperform FNNs?
- Could the authors elaborate on the potential limitations of DEQs, especially in terms of computational complexity and training stability?
- How do the authors address the sparsity constraint in their model assumptions? Can they provide more justification for why the diagonals should be considered dense?
- Can the authors conduct experiments to show that the learned features are indeed dense? These experiments may include both the diagonal models in Equation (9) and the general models using the entire weight matrices $W$
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: See the weakness and questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your careful review and thoughtful questions and comments. Please see our response below.
> The assumption in Equation (9) that DEQs favor dense features might be misleading. …. It is straightforward to claim that the diagonals are dense since that is the only part of the model being utilized.
While our simplified model in Eq. (9) has some limitations to ensure solvability, we do not agree that our result is a direct consequence of the model assumption. In fact, updating only the diagonal entries does not necessarily lead to dense diagonals. As a counterexample, it is known that for matrix sensing, or for one-hidden-layer neural networks with quadratic activation (see Theorem 1.2 in [37] in our references for details), Gradient Descent converges to a low-rank (diagonally sparse, not dense) solution. Specifically, consider the model $f_U(x) = \mathbf{1}^T q(U^Tx)$ with labels $y= \mathbf{1}^T q((U^*)^T x)$, where $q(\cdot)$ is the quadratic activation, $U$ is diagonal, and the ground truth $U^*$ is sparse diagonal. It is shown that $\lVert UU^T-U^*(U^*)^T\rVert _F$ can be made sufficiently small after some gradient steps by adjusting the initialization scale of $U$. Thus, the diagonal entries of $U$ are sparse even though they are the only part of the model being utilized. We therefore argue that the bias of DEQs in our paper is essentially due to the network architecture, rather than the model assumption in Eq. (9). Unlike FNNs, which often exhibit the so-called simplicity bias (preferring minimum-norm or sparse solutions), DEQ tends to learn some harder problems. We will add more explanations in the revised version.
> The paper misses citing some important related works.
Thanks for your feedback. We will include the papers you mentioned in related works in our final version of the paper.
> Can the authors comment on or envision the significance of the other two features (shared weights and input injection)? How do these contribute to the overall performance and expressivity of DEQs?
Our separation results have leveraged all three distinct properties. Theorem 2 uses both the shared weights and infinite depth so that the feature of DEQ can be characterized as the solution to a fixed-point equation. And without the input injection, the fixed point is not even a function of the network input. As for the separate significance of shared weights and input injection on expressivity, the shared weights enhance the parameter efficiency of the model in approximating certain functions. The input injection can be regarded as a skip connection for the weight-untied model, potentially improving the expressive power, much like how skip connections in ResNet enhance its expressive power. These properties together enable DEQ to be efficient in approximating specific functions such as those as solutions to fixed-point equations or those with relatively large derivatives.
> How do the authors envision the practical applications of their findings? Are there specific domains where DEQs could significantly outperform FNNs?
A high-level conjecture from our analysis is that DEQ has potential advantages in learning certain high-frequency components. In Appendix B.2 of our paper, we conduct an experiment on audio representation. We observe that DEQ outperforms FNN by a noticeable margin, preliminarily validating our findings and hypothesis. Based on our results, we envision that DEQs may outperform FNNs in the audio domain. We will discuss this in the revised version; however, we leave the full study as future work.
> Could the authors elaborate on the potential limitations of DEQs, especially in terms of computational complexity and training stability?
The limitations of DEQs in terms of computational complexity and training stability are not the main consideration of this paper, because the paper mainly studies the bias of DEQ caused by its architecture and learning dynamics. However, we will add more review of these limitations in the related work section. From our understanding, one stability issue in training DEQ is ensuring the existence of a unique fixed point throughout training. Well-posedness may be violated for vanilla DEQ when $\lVert W\rVert _2$ is large, especially when training with a large learning rate. Another stability issue is gradient explosion, similar to that in training RNNs. Specifically, the gradient w.r.t. $W$ involves $(I-W)^{-1}$; if $(I-W)$ is nearly singular, the gradient may explode.
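A small numerical illustration of the second stability issue, in a toy scalar setting where $w$ stands in for $\lVert W\rVert _2$ (the numbers are illustrative):

```python
# For a linear scalar DEQ z = w*z + u*x, the fixed point is
# z(w) = u*x / (1 - w), and the sensitivity dz/dw = u*x / (1 - w)^2
# blows up as w -> 1, mirroring the (I - W)^{-1} factor in the
# gradient of a linear DEQ.
def fixed_point(w, u=1.0, x=1.0):
    return u * x / (1.0 - w)

eps = 1e-6
grads = []
for w in (0.5, 0.9, 0.99):
    # central finite difference of the fixed point w.r.t. w
    g = (fixed_point(w + eps) - fixed_point(w - eps)) / (2.0 * eps)
    grads.append(g)           # analytically u*x/(1-w)^2: 4, 100, 10000

assert grads[0] < grads[1] < grads[2]     # gradient explodes as w -> 1
assert abs(grads[2] - 1.0e4) < 10.0       # matches u*x/(1-w)^2 at w=0.99
```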
> How do the authors address the sparsity constraint in their model assumptions? Can they provide more justification for why the diagonals should be considered dense?
Our model assumption in Eq. (9) is primarily for ensuring the tractability of the model, due to the technical issues arising from the non-commutativity of matrices, especially as we aim to analyze the bias of DEQ beyond the NTK regime. This is a commonly used tool that reduces a matrix problem to a vector problem (e.g., see [29, 37]; these works start by analyzing the vector problem and then consider the matrix version). Again, we emphasize that diagonalization does not necessarily lead to dense diagonals. Therefore, we believe that this simplified model can still reveal the bias of general nonlinear DEQ to an extent. We hope this, along with our response to the weakness above, addresses your concerns.
> Can the authors conduct experiments to show that the learned features are indeed dense?
Thanks for the suggestions. In the revised version, we add experiments to evaluate the feature density of a depth-$3$ FFN, a diagonal DEQ, and a vanilla DEQ for the GOTU task in Eq. (15). We generate heatmaps of the features of both DEQs and of the feature before the last layer of the FFN. Both the heatmap of the diagonal DEQ and the heatmap of the vanilla DEQ are lighter than that of the FFN, indicating that the features are indeed dense. Please see our attached PDF file in Author Response for details. | Summary: This paper studies the expressivity and inductive bias of deep equilibrium models. First, the authors establish a separation result for expressivity between a deep fully connected model and a deep equilibrium model, showing that if the depth of the fully connected model scales as $m^{\alpha}$ for width $m$ and $\alpha \in (0,1)$, then the DEQ model of width $m$ has more linear regions, achieving the full set of $2^m$ linear regions. Second, the authors provide an example of a "steep" target function that depends on a single dimension of the inputs and approximates a step function. For this target function, an FNN would require weights with large $\infty$-norm (scaling exponentially in the dimension), while the DEQ can achieve error $\epsilon$ with weights that scale as $1/\epsilon$. Lastly, the authors study diagonal linear DEQs and characterize the type of min-norm solution obtained by running gradient flow on this model: a norm $q(\beta) \sim 1/\beta^2 + C \beta$ for a constant $C$, which penalizes both small and large parameters. This model is also evaluated on a special OOD task.
Strengths: This paper addresses an interesting architecture, deep equilibrium models, which have received less study than traditional feedforward architectures. The paper is therefore well motivated and timely. The analysis of the inductive bias (norm that is minimized) due to gradient flow training is quite interesting and novel to my knowledge.
Weaknesses: Some aspects of the comparison between deep FFN and DEQs were a bit unclear. Specifically, for the approximation result, I was wondering whether the comparison between depth $L \leq m^\alpha$ FFNs and DEQs was a fair comparison. (See my questions below).
The experiments that the authors provide do show some situations where DEQs outperform FFNs. However, I think it would be more compelling theoretically if the experiments could be used to support the theoretical claims made in the paper (ie varying $d$ in the steep target function and showing infinity norm of FFNs increases but not the DEQs, etc).
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. A DEQ could be viewed as the result of applying an infinite depth network with shared weights. If depth were to be unrestricted, then this hypothesis class should be contained within the set of all infinite depth FFNs. Is it fair to compare finite depth FFNs to DEQs? Would it perhaps be more interesting to compare the solutions identified by gradient flow in the FFN (where weights are not tied) to DEQ (where weights are tied)?
2. In Remark 1, it is stated that an FFN could in principle achieve $2^{ m L }$ linear regions. Supposing this could be achieved, wouldn't we expect a width $m$ and depth $L > 1$ FFN to have *more* linear regions than a DEQ model? Do the authors' results provide evidence that suggests that $2^{mL}$ linear regions cannot be achieved?
3. Should line 255 be $\sum_i \beta_i^{-2}$ instead of $\sum_i \beta_i^{-1}$ ?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The authors mention briefly that they study a diagonal linear DEQ for tractability, but they could include more limitations in the conclusion/discussion sections. They could also point out that some of their results are about approximation of particularly chosen target functions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your careful review and thoughtful questions and comments. Please see our response below.
> I was wondering whether the comparison between depth $L \leq m^{\alpha}$ FFNs and DEQs was a fair comparison. (See my questions below.)
Please see our response for the question Q1 below.
> However, I think it would be more compelling theoretically if the experiments could be used to support the theoretical claims made in the paper (ie varying d in the steep target function and showing infinity norm of FFNs increases but not the DEQs, etc).
Thanks for your feedback. In our experiment in Figure 1 for approximating the steep function in Theorem 2, we vary the input dimension $d$ from $10$ to $20$ by setting $\delta = 2^{-10}$ and $\delta = 2^{-20}$. To further support the theoretical results of Theorem 2, we add experiments varying $d$ from $5$ to $20$ for the steep target function and report the infinity norm of the weights of both networks. Our results show that the infinity norm of FFNs increases as $d$ grows, while the infinity norm of DEQs remains lower than that of FFNs, consistent with Theorem 2. Please see our attached PDF in Author Response for details.
> A DEQ could be viewed as the result of applying an infinite depth network with shared weights. If depth were to be unrestricted, then this hypothesis class should be contained within the set of all infinite depth FFNs. Is it fair to compare finite depth FFNs to DEQs?
We think that our comparison of the expressivity of DEQ and finite-depth FFN is fair.
One primary goal of the comparison is to investigate how the DEQ architecture influences the model capacity given the same number of parameters as an FFN whose weights are not tied. This analysis provides intuition for understanding the model bias of DEQ. Note that one conceptual understanding here is "no free lunch": under the same number of parameters, we study for which kinds of functions DEQ is provably better than a standard FFN. This paper follows the common approach to characterizing expressivity and separations. Our separation results are mainly stated in terms of the actual size of the networks (Theorem 2 also considers the parameter magnitude). Specifically, Theorem 1 compares the expressive power of a DEQ with $O(m^2+md)$ parameters against a class of FFNs (i.e., $L\leq m^{\alpha}$) with sub-exponentially many parameters, providing a separation result showing that DEQ is much more *parameter-efficient* in approximating a certain target function. We will add more discussion to emphasize the fairness and importance of our comparisons in the final version of the paper.
> Would it perhaps be more interesting to compare the solutions identified by gradient flow in the FFN (where weights are not tied) to DEQ (where weights are tied)?
We agree that this is a very interesting problem. However, analyzing an FFN optimized by gradient flow is, to an extent, beyond the scope of model expressivity. The solvability of multilayer FFNs remains widely open except in some special cases, such as the lazy (NTK) regime. In that regime, the model needs to be heavily over-parameterized and essentially approximates a linear model, and the analysis would mainly rely on traditional methods, such as studying the decay rate of kernels. However, in Section 5, we take a further step by studying the dynamic bias (trained by GF and GD) of a simplified DEQ in comparison to that of an FNN. Although the model is simplified, it is nonlinear and lies beyond the NTK regime. We leave the study in the NTK regime as future work. So we do have some results on the dynamic bias (implicit regularization) of DEQ.
> In Remark 1, it is stated that an FFN could in principle achieve $2^{mL}$ linear regions. Supposing this could be achieved, wouldn't we expect a width $m$ and depth $L>1$ FFN to have more linear regions than a DEQ model?
Thanks for the valuable question. As stated in Remark 1, our comparison of the number of linear regions between FFN and DEQ is actually stated in terms of the number of neurons (input neurons are not counted). Note that a DEQ with $mL$ neurons has width $mL$, so it can generate $2^{mL}$ linear regions according to our Proposition 1. Hence an FFN with width $m$ and depth $L>1$, i.e., $mL$ neurons, cannot generate more linear regions than a DEQ with the same number of neurons. Therefore, we claim that DEQ can potentially generate a larger number of linear regions than FFNs with the same number of neurons.
> Do the authors' results provide evidence that suggests that $2^{mL}$ linear regions cannot be achieved?
Lemma 5 in our Appendix A.1 may provide some evidence. It shows that when the input dimension is $1$ and the width of each layer is $m$ for a depth-$L$ ReLU-FFN, the number of linear regions is $O(m^L)$, strictly smaller than $2^{mL}$. Upon further investigation, Theorem 1 in [1] (see the statement on Page 4 of [1]) shows that when the input dimension is $m_0=O(1)$ and the width of each layer is $m$ for a depth-$L$ ReLU-FFN, the asymptotic upper bound is $O(m^{Lm_0})$, which is of a smaller order of magnitude than $2^{mL}$. We will add a statement to include this evidence in the final version.
***
[1] Bounding and Counting Linear Regions of Deep Neural Networks, ICML 2018.
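As a rough numerical illustration of the scaling gap discussed above, one can tabulate the two bounds for small widths and depths. This is our own back-of-the-envelope sketch of the quantities $2^{mL}$ and $m^{Lm_0}$, not code from the paper:

```python
# Illustrative comparison of linear-region bounds (hypothetical values):
# a DEQ-style bound 2^(m*L) vs. the FFN upper bound O(m^(L*m0)) from [1]
# with constant input dimension m0.
def deq_bound(m, L):
    # 2^(mL): exponential in the total neuron count mL
    return 2 ** (m * L)

def ffn_bound(m, L, m0=1):
    # m^(L*m0): polynomial in width m for fixed depth L and input dim m0
    return m ** (L * m0)

for m, L in [(4, 2), (8, 2), (8, 3)]:
    print(m, L, deq_bound(m, L), ffn_bound(m, L))
# the gap widens rapidly as the neuron count mL grows
```

Even at $m=8$, $L=3$ the DEQ-style bound is $2^{24}$ while the FFN bound with $m_0=1$ is only $8^3=512$.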
> Should line 255 be $\sum_{i}\beta_{i}^{−2}$ instead of $\sum_{i}\beta_{i}^{−1}$?
We are sorry for the typo. It should be $\sum_{i=1}^d \beta_i^{-2}$ in line 225 (we believe you are referring to line 225 rather than 255). We will revise it in the final version.
> The authors mention briefly that they study a diagonal linear DEQ for tractability, but they could include more limitations in the conclusion/discussion sections. They could also point out that some of their results are about approximation of particularly chosen target functions.
Thanks for your suggestion. We will add a discussion about our limitations in the conclusion section in the final version.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their answers to my questions and the attached rebuttal experiment. The justification for investigating architectures at fixed parameter counts makes sense, and the polynomial scaling of linear regions with width for FFN versus the exponential scaling for DEQ improves my understanding of the work. I will thus increase my score.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you again for your valuable feedback! We will continue improving our paper. | Rebuttal 1:
Rebuttal: We express our sincere gratitude to all reviewers for the valuable and constructive comments. Many of the suggestions will be incorporated in the final version of the paper.
Some reviewers have raised questions regarding the fairness of our comparison between DEQ and FFN and the validity of our simplification of the diagonal linear DEQ. We emphasize that our comparison is fair in terms of the number of network parameters, which is essential for understanding model capacity and parameter efficiency for certain functions. This analysis provides intuition for understanding the model bias of DEQ. We study for what kinds of functions DEQ is provably better than a standard FFN. Even when considering the extra cost in the forward propagation of DEQ, our comparisons remain valid (see our response to Reviewer j6dr for W1).
Regarding our model simplification for understanding the implicit bias of the gradient descent algorithm, it is mainly for ensuring solvability, because we aim to analyze the bias of DEQ beyond the lazy training regime. Similar simplifications have been commonly considered in studying two-layer neural networks (e.g., see [29, 37] in our References). Our results are not a direct consequence of our model assumptions: the diagonal linear model preserves the implicit nature of DEQ and is essentially a nonlinear model. Thus, our results reveal the bias of general nonlinear DEQs to an extent.
Additionally, in response to the concerns raised, we have conducted additional experiments and included the results in the attached PDF file. It includes:
* Review qjxV: (1) Evaluation of the norm of weights of DEQ and FFN for approximating the steep function with varying $d$; Fig. 1.
* Review HjzA: An experiment on evaluating the density and norm of features; Fig. 2.
* Review j6dr: Comparison of DEQ with various FFNs having similar FLOPs per iteration; Table 1.
* Review QS3W: A saliency map experiment for a variant of ResNet-DEQ trained on image classification tasks; Fig. 3.
We hope that these additional results will address the reviewers’ concerns and strengthen our paper. Please let us know if you have any further questions!
Pdf: /pdf/3fa7eac3cf5ed031ba38db946ae034b7d7566528.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
D-LLM: A Token Adaptive Computing Resource Allocation Strategy for Large Language Models | Accept (poster) | Summary: The paper proposes D-LLMs, a novel dynamic inference framework for large language models (LLMs) that adaptively allocates computing resources based on the importance of individual tokens. By introducing a decision module for each transformer layer, D-LLMs can decide whether to execute or skip specific layers for each token, optimizing computational efficiency. The framework also includes a KV-cache eviction strategy to maintain compatibility with existing acceleration techniques and reduce storage overhead. Experimental results demonstrate that D-LLMs can significantly reduce computational costs and KV-cache storage by up to 50% without compromising performance across various tasks, including Q&A, summarization, and commonsense reasoning.
Strengths: The paper demonstrates significant originality by introducing D-LLMs, a dynamic inference framework that adaptively allocates computational resources in large language models. This novel approach addresses a critical challenge in deploying LLMs on resource-constrained platforms, offering a creative solution that dynamically determines the necessity of executing each transformer layer based on token importance. The quality of the work is evident in the thorough design and implementation of the decision modules and the KV-cache eviction strategy, which ensure compatibility with existing acceleration techniques. The clarity of the paper is commendable, as it systematically explains the motivation, methodology, and experimental setup, making it accessible to a broad audience. The significance of this research lies in its potential to drastically reduce computational costs and storage requirements for LLMs, making high-performance language models more practical for a wider range of applications. This work not only advances the field by improving efficiency but also opens up new avenues for optimizing model deployment in real-world scenarios.
Weaknesses: One key limitation of this work is the lack of detailed discussion on the granularity of tokens. The effectiveness of the D-LLMs framework heavily relies on accurately assessing the importance of each token during the inference process. However, the paper does not elaborate on how tokenization is handled, whether different granularities were considered, or the impact of token granularity on the model's performance and computational efficiency. This omission leaves questions about the generalizability and robustness of the proposed method across different types of tokens and tasks.
Additionally, the conditions under which the decision module determines to skip or execute layers are not clearly defined, leaving some ambiguity about the decision-making process. The paper would benefit from a more detailed explanation of the hyper-parameters used in the experiments, as this information is crucial for reproducing the results and understanding the model's behavior. Furthermore, there are instances of inconsistent terminology, such as the use of "dynamic decision module" versus "execution decision module," which can cause confusion.
Empirical analysis or ablation studies comparing different levels of token granularity, along with more precise definitions of the decision criteria and detailed hyper-parameter settings, would have strengthened the validation of the approach and provided clearer guidance for practical implementation.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. **Token Granularity:**
- How does the framework handle tokenization? Have different levels of granularity (e.g., subwords, words, characters) been considered, and how do they impact the model's performance and efficiency?
- Can you provide empirical results or ablation studies that compare different levels of token granularity to validate the chosen approach?
2. **Decision Module Criteria:**
- What specific criteria or metrics does the decision module use to determine whether a transformer layer should be executed or skipped for a given token?
- Are there any thresholds or heuristics involved in this decision-making process? If so, how are they set and optimized?
3. **Hyper-Parameters:**
- Can you provide more details on the hyper-parameters used in your experiments, such as those related to the dynamic decision module and the KV-cache eviction strategy?
- How sensitive is the performance of D-LLMs to these hyper-parameters, and are there any guidelines for tuning them?
4. **Compatibility with Different LLM Architectures:**
- While the paper demonstrates the framework's effectiveness on certain LLMs (e.g., LLaMA, GPT), how well does the approach generalize to other architectures, especially newer ones?
- Have you tested D-LLMs on any other LLM architectures or benchmarks not mentioned in the paper? If so, can you share the results?
5. **Impact on Latency:**
- Does the dynamic decision-making process introduce any latency during inference? If so, how significant is this latency, and what are its implications for real-time applications?
- Are there any optimizations or future directions planned to minimize this latency?
6. **Comprehensive Evaluation:**
- The experiments focus on specific benchmarks. Have you considered evaluating D-LLMs on a broader set of tasks to better understand its generalizability and robustness?
- Are there any plans to conduct further experiments that encompass a wider variety of NLP tasks and datasets?
7. **Implementation Details:**
- Can you provide more detailed implementation guidelines, including pseudocode or a step-by-step description of the D-LLMs framework?
- Are there any particular challenges or considerations to be aware of when implementing this framework on different hardware or software environments?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have partially addressed the limitations of their work, primarily focusing on the technical aspects of the D-LLMs framework. However, there are several areas where the discussion of limitations could be expanded and more detailed. Additionally, the paper does not explicitly address potential negative societal impacts, which is an important consideration for any work involving large language models.
1. The paper does not discuss the impact of token granularity on the effectiveness of the D-LLMs framework. Given that token granularity is crucial for the dynamic allocation of computational resources, a more thorough examination is needed. The authors should include an analysis of different granularity levels (e.g., subwords, words, characters) and their impact on model performance and efficiency.
2. The evaluation is limited to specific benchmarks. A broader set of tasks and datasets would provide a more comprehensive understanding of the framework's generalizability and robustness.
3. Dynamic inference mechanisms might inadvertently amplify biases present in the training data, as decisions on token importance might favor certain types of content over others. The authors should discuss how they mitigate such biases.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We truly appreciate the reviewer for the valuable feedback. We have carefully considered the comments and suggestions and would like to address each of the concerns raised.
**Q1: Token Granularity**
We follow the tokenization released with the corresponding LLMs. Today's LLMs each employ their own tokenization. For example, Llama 2 [1] employs a Byte-Pair Encoding (BPE) algorithm with a vocabulary of 32k tokens, while Llama 3 [2] uses the tiktoken tokenizer, expanding the vocabulary size to 128k tokens. At the same time, LLMs are trained on their specific tokenizations. This binding between tokenization and LLM means that simply changing the tokenization would destroy the LLM's capabilities. To compare different levels of token granularity, training extra LLMs from scratch would be necessary but unaffordable. Our method has been verified on Llama 2 and Llama 3, which demonstrates that D-LLM is applicable and effective across different tokenizers.
[1] Llama 2: Open Foundation and Fine-Tuned Chat Models
[2] The Llama 3 Herd of Models
**Q2: Decision Module Criteria**
- The decision module outputs a two-dimensional probability distribution, denoted $b=[b_1,b_2]$ with $b_1+b_2=1$. The transformer layer is executed or skipped according to the larger entry of $b$: if $b_1<b_2$, the layer is executed; if $b_1\geq b_2$, the layer is skipped.
- Since decision-making depends only on the larger entry of $b$, we do not need to set or optimize any thresholds.
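A minimal sketch of this argmax decision rule (our own illustration of the behavior described above; the actual decision-module architecture in the paper may differ):

```python
import math

def decide(logits):
    """Softmax over two logits to get b = [b1, b2] with b1 + b2 = 1,
    then pick the larger entry: returns True to execute the transformer
    layer, False to skip it. No threshold is needed."""
    e = [math.exp(x) for x in logits]
    z = sum(e)
    b = [v / z for v in e]
    return b[1] > b[0]   # execute iff b1 < b2; ties (b1 >= b2) mean skip

print(decide([0.2, 1.5]))  # True  -> execute
print(decide([1.5, 0.2]))  # False -> skip
```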
**Q3: Hyper-Parameters**
Hyper-parameters in our method include target acceleration rate $\Omega$, reserved initial tokens $m$, and the weight factor $\alpha$ of $\mathcal{L}\_{rate}$. We provide additional descriptions and details for these hyper-parameters as follows.
- Target acceleration rate $\Omega$ is user-defined for expected computation overheads. We elaborate the performance under different $\Omega$ in Figure 2 (line 198-199). More details are available in Table 4 in A.2 (line 532-540).
- The importance of initial tokens is pointed out in recent works on LLMs' generalization to long contexts [1,2]. Initial tokens encode strong absolute position information and collect significant attention scores, which makes it necessary to keep them when pruning context. Our experiments in Table 3 (line 270-271) demonstrate that reserving initial tokens in the proposed eviction policy leads to better performance.
- We conduct additional analysis on the hyper-parameter $\alpha$ in Eq. 14 (line 213). Details are available in global rebuttal #2.
**Q4: Compatibility with Different LLM Architectures**
- Our proposed D-LLM is designed for decoder-only transformers, which is the mainstream architecture used by current LLMs. With respect to other architectures like encoder-decoder models, our method needs further adaptation, especially the strategy of KV-cache eviction.
- We are currently focused on the decoder-only architecture but are open to adapting our method to other LLM architectures in the future.
**Q5: Impact on Latency**
- We make a detailed analysis of the latency and other overheads introduced by the decision modules in global rebuttal #1. In real-time applications, D-LLM with an actual acceleration rate of 40% raises the generation speed from 32 tokens/s to about 41 tokens/s. The latency is negligible compared to the gain in generation speed.
- We have also considered several optimization directions to minimize this latency, for example: reducing the complexity of the decision module, using a global decision module to predict the routes for all transformer layers, and reducing the frequency of invoking the decision module during generation.
**Q6: Comprehensive Evaluation**
- In terms of benchmark selection, we mainly follow the popular evaluation datasets used by mainstream LLMs [1]. Currently, D-LLM is designed and verified on specific domain tasks, such as commonsense reasoning and math solving. We used a considerable number of benchmarks compared with other research on LLMs [2,3]. We plan to expand the scale of D-LLM's training in the future, both in terms of large-scale common training datasets and model size. We also plan to assess D-LLM's generalizability and robustness across a broader range of tasks, for example, coding and reading comprehension.
[1] Llama 2: Open Foundation and Fine-Tuned Chat Models
[2] DoRA: Weight-Decomposed Low-Rank Adaptation
[3] Not all Layers of LLMs are Necessary during Inference
**Q7: Implementation Details**
- We have released our implementation code and the web address is provided in the checklist (line 690). The framework of our proposed D-LLM can be viewed in the directory `./llama`. We also provide codes for training and inference in the repository.
- There are no particular challenges or restrictions in implementing D-LLM on different hardware or software environments. We provide the list of required libraries for training and inference in the file `./requirements.txt` in the repository.
Strengths: 1. The concept of dynamically adjusting the execution of transformer layers at the token level is both novel and impactful, providing a promising direction for reducing computational costs in LLMs.
2. The paper addresses the practical challenge of integrating dynamic inference with KV-cache methods, which is essential for real-world applications.
3. The authors conducted extensive experiments, demonstrating significant reductions in computational resources without sacrificing model performance across diverse NLP tasks.
Weaknesses: 1. While the experimental results are impressive, the paper lacks a deep theoretical analysis of the dynamic decision module's behavior and its impact on model performance.
2. The computational overhead introduced by the dynamic decision module itself is not sufficiently analyzed.
3. The manuscript requires a deeper examination of the impact of various components of the loss function. Specifically, a thorough analysis is needed to investigate how varying the value of 𝛼 affects the performance of model.
4. Although the paper demonstrates the effectiveness of the overall approach, it would benefit from more detailed ablation studies to isolate the impact of different components of the dynamic decision module and eviction policy.
5. The manuscript necessitates a more elaborate description of the evaluation metrics. For instance, a detailed explanation of the computation methodology for FLOPs is required to provide readers with a comprehensive understanding of the evaluation process.
Technical Quality: 3
Clarity: 3
Questions for Authors: See the above
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See the above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We truly appreciate the reviewer for the valuable feedback. We have carefully considered the comments and suggestions and would like to address each of the concerns raised.
**W1: While the experimental results are impressive, the paper lacks a deep theoretical analysis of the dynamic decision module's behavior and its impact on model performance.**
We understand your expectations of theoretical analysis and would like to make following clarifications:
- Theoretically, dynamic inference is a conditional execution paradigm, similar to mixture-of-experts and sparsity algorithms. The key idea is to reduce network redundancy and customize a network topology for each sample.
- We formulate dynamic inference as a sequential decision problem: each decision module decides whether a transformer block should be executed.
- We provide detailed analysis of the dynamic decision module's behavior with respect to grammatical terms (line 279-293), its relationship with FLOPs in A.1 (line 522-531), and instance complexity in A.5 (line 563-573).
In future work, we will explore more theoretical tools and methods to provide a more in-depth explanation of the model's behavior.
**W2: The computational overhead introduced by the dynamic decision module itself is not sufficiently analyzed.**
We provide computational overhead analysis taken by the decision module of D-LLM in global rebuttal #1.
**W3: The manuscript requires a deeper examination of the impact of various components of the loss function. Specifically, a thorough analysis is needed to investigate how varying the value of 𝛼 affects the performance of model.**
We perform a comprehensive analysis on hyper-parameter $\alpha$ and details are available in global rebuttal #2.
**W4: Although the paper demonstrates the effectiveness of the overall approach, it would benefit from more detailed ablation studies to isolate the impact of different components of the dynamic decision module and eviction policy.**
We would like to kindly remind you that we have performed ablation studies demonstrating the impact of the dynamic decision module and the eviction policy in our paper (line 254-270). We select two baselines for comparison: 'LoRA finetune' only uses LoRA modules to finetune the LLM, and 'D-LLM w/o Evl. Str.' uses the dynamic decision module without the eviction policy. The results show that the decision module effectively achieves acceleration via layer skipping while maintaining comparable performance on benchmarks. With the help of the eviction policy, D-LLM further reduces computation overhead due to fewer KV calculations and cheaper attention operations.
Additionally, we consider that you might be interested in the components of the dynamic decision module at a finer granularity. We use a one-layer network ('Linear & Softmax') as the decision module for an ablation study. The results in the following table demonstrate that the module designed in D-LLM has non-linear fitting capability, which helps the model control the actual acceleration rate more precisely and achieve better performance.
Values in table represent (FLOPs$\downarrow$, Accuracy$\uparrow$).
| | D-LLM(Linear & Softmax) | D-LLM |
|:----- |:----------------------- |:-------------------- |
| MaWPS | (0.58, 0.70) | (**0.56**, **0.74**) |
| OBQA | (0.57, 0.79) | (**0.53**, **0.80**) |
**W5: The manuscript necessitates a more elaborate description of the evaluation metrics. For instance, a detailed explanation of the computation methodology for FLOPs is required to provide readers with a comprehensive understanding of the evaluation process.**
We give a detailed description of all metrics mentioned in our paper as follows:
- PPL [1] measures the divergence between the model's predicted probability distribution and the ground truth. Given a sentence $X=(x_0,x_1,\cdots,x_n)$ and the predicted probability distribution $p_\theta$ from the model, PPL is computed as $\text{PPL}(X)=\exp\{-\frac{1}{n}\sum_{i=1}^{n}\log p_{\theta}(x_i|x_{<i})\}$.
- Accuracy: For math-solving tasks, an answer is marked correct only if the numerical difference between the predicted result and the ground truth is less than $10^{-5}$. For commonsense reasoning tasks, an answer is marked correct only if the output matches the correct option in lowercase.
- FLOPs: FLOPs, short for floating point operations, measures the complexity of algorithms/models [2]. We calculate the FLOPs of the fully connected layers and attention operations in the transformer. For a fully connected layer without bias, $\text{FLOPs}=(2I-1)O$, where $I$ is the input dimensionality and $O$ is the output dimensionality. The FLOPs of attention operations are proportional to the cache size, so we report the average FLOPs over generating the 1st through the 1000th token. The FLOPs of the baselines are normalized to 1 for easier comparison.
We will add the description in the revised version.
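The PPL and fully-connected-layer FLOPs formulas above can be sketched in a few lines of code. This is an illustrative sketch with hypothetical dimensions, not the authors' evaluation script:

```python
import math

def perplexity(log_probs):
    """PPL from per-token log-probabilities log p(x_i | x_<i):
    exp of the negative mean log-likelihood."""
    n = len(log_probs)
    return math.exp(-sum(log_probs) / n)

def fc_flops(I, O):
    """FLOPs of a bias-free fully connected layer: (2I - 1) * O."""
    return (2 * I - 1) * O

# A model uniformly spreading probability over 4 choices has PPL 4.
print(perplexity([math.log(0.25)] * 4))
# Hypothetical Llama-style projection dimensions, for illustration only.
print(fc_flops(4096, 11008))
```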
[1] Perplexity—a measure of the difficulty of speech recognition tasks, 1977
[2] Pruning Convolutional Neural Networks for Resource Efficient Inference, 2017 | Summary: This paper proposes D-LLM, a novel layer-skipping framework for LLM inference to reduce computation costs. It designs and trains (fine-tuning) the additional decision module per transformer layer and implements KV-cache eviction by adjusting self-attention masks upon the execution decision. The framework presents ~50% reduction in computing costs.
Strengths: - The study defines a clear problem to solve, which is important for current trends in model inference
- The consideration of token-level importance and resource allocation, which results in a layer-skipping mechanism, is commendable
- D-LLM carefully contemplates several hyperparameters, such as the number of preserved tokens, to achieve reasonable accuracy compared to the baselines
- Evaluation contains measurements in diverse aspects
Weaknesses: - D-LLM requires additional computation costs/resources to train (fine-tune) the decision modules and adaptors (LoRA modules). Presenting such additional overheads from the decision module training would be interesting.
- Baselines do not include “dynamic inference mechanisms,” which are more closely related to D-LLM
- D-LLM introduces the concept of a “user-defined target acceleration rate;” however, the experimental results do not include its use case or the changing behavior depending on the user-defined rate.
- D-LLM claims that recent research shows preserving initial tokens is crucial, but I recommend the authors add references or clear evidence to support this statement (line 210).
- The paper’s writing needs to be carefully revisited. For example, AdaInfer, MoD, and Shorten-LlaMa (baselines) have two duplicated citations (e.g., 14 and 15), which could be confusing.
Technical Quality: 2
Clarity: 3
Questions for Authors: - What is the difference between 'dynamic pruning' (line 99) and 'dynamic inference mechanism' (line 105)? I understand the explanations in the paper, but the structure of the current paper distinguishes the related work into 1) LLMs acceleration methods and 2) dynamic inference mechanisms. I wonder why the above dynamic pruning belongs to the first.
- The idea of inserting a decision module as proposed by D-LLM seems to be applicable even without a PEFT like LoRA. Discussion on any specific use of LoRA in this study or applying the module in other situations would be interesting.
- If the LoRA modules (low-rank adaptors) are not trained together, will the desired accuracy not be achieved?
- As mentioned in the weaknesses section, additional training is required to achieve similar performance to the baseline. How much overhead is this additional training? Is it small enough that it doesn't need to be shown in the paper?
- If we want to infer on a completely new dataset, does D-LLM need to train further? If we use the decision module and low-rank adaptor already trained on a different dataset without additional training, how will the performance change?
- How much does the target acceleration rate differ from the actual LLM inference acceleration rate (skipped-layers ratio)?
- In equation 11, why is the average acceleration rate calculated with b^tilde rather than using b?
- The evaluation parts measure performance mainly by FLOPS. Can FLOPS be interpreted for various purposes of evaluation? For example, how do FLOPS (instead of actual memory consumption) explain the overhead in KV-cache?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: - Although the authors mentioned that the difficulty has not been modeled, a precise definition of the concept is required to generalize this design to be applied to other contexts
- Currently, D-LLM is only applicable to parameter-efficient fine-tuning workloads that use LoRA (even if LoRA is a widely adopted PEFT algorithm) and requires additional training for new datasets or benchmarks
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We truly appreciate the reviewer for the valuable feedback. We would like to address each of concerns raised as follows.
**W1 & Q4: Additional overheads from the decision module training**
We provide an analysis of overheads taken by decision modules in global rebuttal #1.
**W2: Baselines do not include “dynamic inference mechanisms”**
We would like to address the choice of baselines as follows:
- D-LLM w/o eviction strategy, shown in Table 2 (line 260), can be regarded as a baseline dynamic inference mechanism based on layer skipping. We propose the KV-cache eviction strategy on top of layer skipping for better adaptation to LLMs and prove its effectiveness in our ablation study.
- Dynamic inference mechanisms share similar methods with LLMs acceleration. AdaInfer, one of our baselines, introduces an early-exit mechanism to stop inference at intermediate layers, which is an important approach of dynamic inference mechanisms.
**W3: Add references or clear evidence for preserving initial tokens**
The importance of initial tokens is pointed out in recent works on LLMs' generalization to long contexts [1,2]. Initial tokens encode strong absolute position information and collect significant attention scores, which makes it necessary to keep them when pruning context. We will include the references in the revision.
[1] LM-Infinite: Simple on-the-fly length generalization for large language models.
[2] Efficient streaming language models with attention sinks.
**W4: Writing needs revision. Duplicated citations.**
We will correct the duplicate citations and carefully check over other mistakes in revision.
**Q1: Difference between 'dynamic pruning' (line 99) and 'dynamic inference mechanism' (line 105)**
We discuss related work from other fields, such as computer vision, under 'dynamic inference mechanisms'. 'Dynamic pruning' methods like AdaInfer and MoD are designed specifically for LLMs. Although some dynamic inference mechanisms are shared between computer vision and LLMs, we discuss LLM acceleration methods separately to emphasize the domain of our work. We will make this clearer in the revision.
**Q2-3: Specific use of LoRA**
LoRA primarily helps the LLM adapt to the layer-skipping computation mode and to specific domain tasks. We performed an ablation study on D-LLM with and without LoRA under $\Omega=0.5$. The following table shows that accuracy drops significantly when the LoRA modules are not trained together. The model also exhibits less control over the acceleration rate.
Values in table represent (FLOPs$\downarrow$, Accuracy$\uparrow$).
| | w/o LoRA | D-LLM |
|:----- |:------------ |:-------------------- |
| MaWPS | (0.63, 0.00) | (**0.56**, **0.74**) |
| OBQA | (0.58, 0.25) | (**0.53**, **0.80**) |
**Q5: D-LLM on a completely new dataset**
Currently, we conduct experiments on specific tasks with few-shot training. To infer on a completely new dataset, further training of D-LLM is required to adapt the model to the domain knowledge and task templates. The table below shows that D-LLM maintains stable acceleration rates but degrades in performance on new datasets, a limitation that also exists in PEFT methods (LoRA). We will study the generalization of D-LLM on more common datasets in the future.
Values in table represent (FLOPs$\downarrow$, Accuracy$\uparrow$).
| | D-LLM_PIQA | D-LLM_SIQA | LoRA_PIQA | LoRA_SIQA |
|:---- |:---------------- | ---------------- | ---------------- | ---------------- |
| PIQA | (0.52, **0.84**) | (0.53, 0.68) | (1.00, **0.84**) | (1.00, 0.68) |
| SIQA | (0.51, 0.54) | (0.54, **0.82**) | (1.00, 0.53) | (1.00, **0.81**) |
**Q6: Difference between target and actual acceleration rate**
We compare the actual acceleration rate with the target in the following table. The results show that a larger target rate brings a larger gap. The gap is also influenced by task difficulty and the hyper-parameter $\alpha$ in Eq. 14 (line 213). For example, the actual rate on OBQA is closer to the target because making choices is easier than summarization (SAMSum). Analysis of $\alpha$ is available in global rebuttal #2.
| Target | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 |
|:------ | ---- |:---- | ---- | ---- | ---- | ---- |
| SAMSum | 0.19 | 0.28 | 0.36 | 0.44 | 0.52 | 0.59 |
| OBQA | 0.19 | 0.28 | 0.39 | 0.48 | 0.61 | 0.66 |
**Q7: Why is the average acceleration rate calculated with b^tilde rather than using b? (Eq.11)**
We use $\tilde{b}\_l$ instead of $b_l$ for backward propagation of gradients. Since $b_l$ results from an $\arg\max$ operation, which is not differentiable, using it directly would make $\mathcal{L}\_{rate}$ in Eq. 12 impossible to optimize.
In practical implementation, $\tilde{b}\_l$ can be pushed toward one-hot vectors in two ways:
- Anneal the temperature $\tau$ to 0, so the softmax becomes 'sharp' and $\tilde{b}\_{l,1}$ is numerically close to $b_l$.
- Use a straight-through estimator, $\tilde{b}\_{l,1}\leftarrow(\tilde{b}\_{l,1}-b_l)\text{.detach()}+b_l$, where the $\text{.detach()}$ operation contributes zero gradient in backward propagation.
Both ways are viable in our framework. We will clarify details in revision.
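As a minimal numeric sketch of the first route (temperature annealing): the two-way logits and the helper function below are purely illustrative, not D-LLM's actual decision module.

```python
import math

def soft_decision(logits, tau):
    """Temperature-scaled softmax over a 2-way execute/skip decision.

    As tau -> 0 the distribution sharpens toward the one-hot argmax,
    which is the first approximation route described above.
    """
    scaled = [x / tau for x in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    z = sum(exps)
    return [e / z for e in exps]

logits = [2.0, 1.0]                      # hypothetical (execute, skip) scores
smooth = soft_decision(logits, tau=1.0)  # soft, usable for gradients
sharp = soft_decision(logits, tau=0.05)  # nearly one-hot as tau -> 0

print(smooth[0])  # ~0.731
print(sharp[0])   # ~1.0
```

The second route (the straight-through estimator) keeps the hard one-hot value in the forward pass while routing gradients through the soft value, exactly as in the `.detach()` expression above.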
**Q8: Can FLOPS be interpreted for various purposes of evaluation?**
In addition to measuring computation, FLOPs essentially also track the rate of memory overhead in the KV-cache, which is exactly the same as the executed-layers rate, since only executed layers store KV-cache entries in D-LLM. We provide the exact values on SAMSum in the following table; the slight difference between FLOPs and the other metrics is caused by the small additional computation of the decision modules.
| | executed-layers | KV-cache overhead | FLOPs |
|:-------------------- |:--------------- | ----------------- |:----- |
| D-LLM ($\Omega=0.3$) | 0.72 | 0.72 | 0.73 |
| D-LLM ($\Omega=0.5$) | 0.54 | 0.54 | 0.55 |
| LLM | 1.00 | 1.00 | 1.00 |
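The claim that KV-cache overhead tracks the executed-layers rate can be checked with back-of-the-envelope arithmetic. The llama2-7b shapes below are the standard ones; the sketch itself is ours, not from the paper, and uses the 0.54 executed-layers fraction from the $\Omega=0.5$ row above.

```python
# Standard llama2-7b shapes: 32 layers, hidden size 4096, fp16 (2 bytes).
LAYERS, HIDDEN, BYTES = 32, 4096, 2
CONTEXT = 1024  # 1K tokens

# Each executed layer caches one K and one V vector per token.
full_cache_mb = LAYERS * CONTEXT * 2 * HIDDEN * BYTES / 2**20
print(full_cache_mb)  # 512.0 (MB) for the full model

# If only 54% of layers execute on average, only those layers
# store KV entries:
print(0.54 * full_cache_mb)  # ~276.5 MB
```

This matches the ~276 MB vs 512 MB figures the authors report later for the 1K-context KV-cache.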
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' efforts in addressing the comments, including the comprehensive new measurements. I have changed my decision from borderline reject to borderline accept.
- Q1) I understood the authors' answer. Since the technical depth and level of dynamic pruning and dynamic inference are somewhat different, I suggest organizing them carefully.
- Q8) I agree that FLOPS might reveal the improvement on the memory side by considering the operations involved, but for model practitioners or system engineers, it is also important to understand the actual amount of increase or decrease in memory, especially given the limited memory of GPUs.
---
Reply to Comment 1.1.1:
Comment: Thank you for your kind response and support. We are pleased that we have addressed most of your concerns.
[**Q1)**]
Thanks for your suggestion. We will carefully organize the writing of related work in the revision.
[**Q8)**]
We understand your comments and advice. While the precise memory overhead is closely related to specific implementations, such as operators and quantization techniques, providing data as an example is still enlightening for system engineers to assess the deployability of the model across various GPU memory constraints.
We provide the actual memory overhead in the table below, including the KV-cache (1K context) overhead and running overhead. We use D-LLM based on llama2-7b, with parameters stored as float16. For example, the KV-cache of D-LLM ($\Omega=0.5$) with context length 1K costs about 276 MB, while that of the original LLM costs 512 MB. The running memory when generating each token is only 7.92 GB, since skipped layers do not need to be loaded into the GPU, significantly less than the 13.90 GB needed by the original LLM using all layers.
We will incorporate these details in the revision. Once again, we extend our gratitude for your constructive suggestions.
| | KV-cache memory (MB) | Running memory (GB) |
|:-------------------- | -------------------- |:------------------- |
| D-LLM ($\Omega=0.3$) | 368.64 | 10.26 |
| D-LLM ($\Omega=0.5$) | 276.48 | 7.92 |
| LLM | 512.00 | 13.90 | | Summary: This manuscript introduces a new dynamic inference paradigm for LLMs called D-LLMs, which adaptively allocates computing resources in token processing. With the dynamic decision module, the network unit is decided to be executed or skipped on the fly. The KV-cache eviction policy is proposed to exclude skipped layers from subsequent calculations, reducing storage overhead while maintaining compatibility with KV-cache methods. The experimental results demonstrate the notable computation reduction and KV-cache storage utilization.
Strengths: 1. The assumption that "not every word is equally important ..." is reasonable.
2. The experiments showcase the notable reduction in computation cost.
Weaknesses: 1. Highlighting how D-LLMs differ significantly from or improve upon existing methods like AdaInfer and SkipNet would strengthen the novelty claim. Providing a more detailed comparison with these methods in terms of theoretical underpinnings or empirical performance could clarify the unique contributions of D-LLMs.
2. The ablation on hyper-parameter $\alpha$ in Eq. 14 is not provided. Please discuss its influence on task performance and computation reduction.
3. The font in Fig. 2-4 is too small for visualization.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see weakness part.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We truly appreciate the reviewer for the valuable feedback. We have carefully considered the comments and suggestions and would like to address each of the concerns raised.
**W1: Highlighting how D-LLMs differ significantly from or improve upon existing methods like AdaInfer and SkipNet would strengthen the novelty claim. Providing a more detailed comparison with these methods in terms of theoretical underpinnings or empirical performance could clarify the unique contributions of D-LLMs.**
We make a detailed description of existing comparison methods in our paper as follows.
- ShortenLlaMA prunes unimportant layers based on designed metrics such as Taylor and PPL, and then applies LoRA to finetune the pruned LLM on specific tasks. The pruning is static, which is fundamentally different from our dynamic inference method.
- AdaInfer is an early-exit method to stop the inference at intermediate layers. It underperforms on complex tasks such as Q&A.
- Mixture-of-Depth selects top-k tokens for calculation only at specific layers. Our method provides a more comprehensive dynamic inference mechanism and performs better on various benchmarks.
- Dynamic inference mechanisms in computer vision like SkipNet utilize layer skipping. Building on a similar mechanism, our method introduces the KV-cache eviction strategy for further overhead reduction in memory and computation for LLM inference.
We will incorporate these details in the revision to provide a clearer clarification of our contributions.
**W2: The ablation on hyper-parameter \alpha in Eq. 14 is not provided. Please discuss its influence on task performance and computation reduction.**
We analyze hyper-parameter $\alpha$ and discuss its influence on task performance and computation reduction. Details are available in global rebuttal #2.
**W3: The font in Fig. 2-4 is too small for visualization.**
We have increased the font size in Fig. 2-4 of our paper to enhance readability. The revised figures can be seen in Figure 2-4 of the PDF in the global rebuttal. We will update these figures in the revision.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response, it addresses most of my concerns. I will raise my rating to weak accept.
---
Reply to Comment 1.1.1:
Comment: We are pleased that we have addressed most of your concerns. Thank you for your kind response and support. | Rebuttal 1:
Rebuttal: We are deeply grateful to the reviewers for their valuable feedback. We have carefully read the comments, and these insightful suggestions are crucial for enhancing our research. Given the limited length of individual rebuttals, we have chosen several key questions or concerns of interest to most reviewers for discussion in the global rebuttal.
The experiments and analysis in the rebuttal are performed on L20 GPUs with D-LLM based on llama2-7b.
**#1. Additional costs/overheads/latency for decision modules in D-LLM. (To Reviewer fCWH, 9XGZ, Y7Gp)**
We provide extra overhead taken by decision modules of D-LLM in the table below, including information about training and inference. We give a detailed description as follows.
- **Params/FLOPs overheads**: Decision modules are parameter-efficient, taking only 1.0% of the LLM's parameter count and 0.9% of the FLOPs of the LLM's forward computation.
- **Training memory overheads**: During training, we store parameters to be updated in float32 and others in float16. The running GPU memory usage of training LoRA on LLM is about 26.2 GB and training D-LLM is about 32 GB, which means the extra memory cost taken by decision modules is only 5.8 GB.
- **Inference memory overheads**: During inference, the additional GPU memory cost of the decision modules is only 0.3 GB. Furthermore, D-LLM requires less memory when generating each token, since layers decided to be skipped do not need to be loaded into GPU memory. For example, D-LLM with an acceleration rate of 50% requires only 7.4 GB during inference, significantly less than the 13.9 GB needed by the original LLM using all layers.
- **Latency**: As for inference latency, computing through one decision module costs only 0.1 ms, which is quite lightweight compared with computing through a transformer block (about 1.2 ms). The acceleration gain brought by layer skipping is therefore significant.
We will include the overhead analysis in the revision.
| | Params | FLOPs | Training Memory (GB) | Inference Memory (GB) | Latency Per block (ms) |
|:---------------- |:------ |:----- | -------------------- | --------------------- | ---------------------- |
| Decision Modules | 0.9% | 0.8% | 5.8 | 0.3 | 0.1 |
| LLM | 1 | 1 | 26.2(LoRA) | 13.9 | 1.2 |
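The ~1% parameter overhead is plausible for a small per-layer gating MLP. As a rough sanity check (the 2-layer MLP with hidden width 512 below is a hypothetical architecture of ours, not the paper's actual decision module):

```python
LAYERS, HIDDEN = 32, 4096     # llama2-7b transformer shapes
LLM_PARAMS = 6.7e9            # llama2-7b parameter count

# Hypothetical per-layer decision module: a 2-layer MLP with hidden
# width 512 that emits a 2-way execute/skip score.
mlp_params = HIDDEN * 512 + 512 * 2
total = LAYERS * mlp_params

print(total / LLM_PARAMS)     # ~0.01, i.e. on the order of 1%
```

Any comparably small head gives the same order of magnitude, consistent with the ~1% figure in the table above.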
**#2. Hyper-parameter analysis on $\alpha$ in Eq.14. (To Reviewer RWhA, fCWH, 9XGZ, Y7Gp)**
We use hyper-parameter $\alpha$ as the factor of $\mathcal{L}\_{rate}$ in loss function Eq.14 (line 213) to control the importance of acceleration ratio loss during training. $\alpha$ significantly influences performance on specific tasks and the ability to control the acceleration rate in D-LLM. Under the same training conditions, a higher value of $\alpha$ results in a more precise acceleration rate towards the target rate $\Omega$, while the performance of the model decreases because the cross-entropy loss $\mathcal{L}\_{CE}$ becomes less important.
We perform parameter analysis on MaWPS. In the table below, we show the performance of D-LLM under different settings of $\alpha=0.1/1/5$ and user-defined target acceleration rates $\Omega=0.9/0.8/0.7/0.6/0.5$. The results show that a larger $\alpha$ provides better control over the acceleration rate. For example, with the target acceleration ratio $\Omega$ set to 80%, the model trained with $\alpha=0.1$ achieves only 47%, whereas with $\alpha=5$, D-LLM achieves 72%. In addition, an excessively large $\alpha$ can decrease the performance of D-LLM, even when the trained models share the same computation overhead. For example, comparing the ($\Omega=0.7,\alpha=1$) case with the ($\Omega=0.6,\alpha=5$) case, the trained models both achieve approximately 60% acceleration, yet the accuracy with $\alpha=5$ decreases by 11% compared to the $\alpha=1$ case. We also provide a visualization of the table below as Figure 1 in the PDF in the global rebuttal.
We will include the analysis experiments in the appendix of the revision.
Values in table represent (FLOPs$\downarrow$, Accuracy$\uparrow$).
| | $\alpha$=0.1 | $\alpha$=1 | $\alpha$=5 |
|:------------ |:------------ |:------------ |:------------ |
| $\Omega$=0.5 | (0.63, 0.75) | (0.56, 0.74) | (0.52, 0.62) |
| $\Omega$=0.6 | (0.57, 0.73) | (0.47, 0.72) | (0.42, 0.62) |
| $\Omega$=0.7 | (0.56, 0.75) | (0.40, 0.73) | (0.34, 0.61) |
| $\Omega$=0.8 | (0.53, 0.73) | (0.36, 0.71) | (0.28, 0.57) |
| $\Omega$=0.9 | (0.49, 0.72) | (0.33, 0.70) | (0.22, 0.50) |
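The trade-off governed by $\alpha$ can be sketched as a standard weighted-sum objective. The loss forms below are schematic assumptions of ours (a squared rate penalty), not Eq. 14 itself:

```python
def total_loss(l_ce, actual_rate, target_rate, alpha):
    """Schematic: task loss plus an alpha-weighted penalty for missing
    the target acceleration rate (squared penalty assumed here)."""
    l_rate = (actual_rate - target_rate) ** 2
    return l_ce + alpha * l_rate

# With a small alpha the rate penalty barely moves the objective, so
# training can drift far from the target rate (cf. the alpha=0.1 column
# above); a large alpha makes the penalty dominate the task loss.
print(total_loss(1.0, actual_rate=0.47, target_rate=0.8, alpha=0.1))  # ~1.011
print(total_loss(1.0, actual_rate=0.47, target_rate=0.8, alpha=5.0))  # ~1.545
```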
Pdf: /pdf/fade441172f737d4d60196b7c682cee006ba8177.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Exocentric-to-Egocentric Video Generation | Accept (poster) | Summary: This paper introduces a novel method for generating egocentric videos from exocentric videos of daily-life skilled human activities. This task is very challenging because of the significant viewpoint variations, sparsely posed exo videos, and dynamic environments. The authors propose to use a PixelNeRF-informed diffusion method to address the problem and achieves good visual performance and SOTA accuracy.
Strengths: This paper addresses a challenging problem and achieves great performance. The proposed method is novel and reasonable that effectively leverages the camera poses and multi-view video features. The experiments show that it achieves the SOTA accuracy.
Weaknesses: The proposed method lacks specific designs tailored for the "Exocentric-to-Egocentric" task. "exo" and "ego" perspectives are relative to "the person" in the video, yet the proposed method does not include any design elements focused on the person. The method actually has broad applicability to novel view video synthesis with significant viewpoint variations. One of the goals of the "Exocentric-to-Egocentric" task is to generate dynamic hand motions and object state changes. However, the visualization results in the paper show bad hand-object motions synthesis result. In summary, while the model excels at reconstructing the geometry of the surrounding environment, it lacks focus on hand-object dynamics which is important in egocentric video.
Technical Quality: 3
Clarity: 4
Questions for Authors: Could you provide a failure case analysis? Do you think the fine-grained hand-object interactions causes most of the failure cases?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Limitation included
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Thank you for recognizing our work and the valuable comments.**
# W1: Person-related design.
Thanks for your advice. We totally agree that person-related designs such as human hand pose priors or object state information would be very useful for improving the performance of Exo2Ego video generation. However, our method is the first work to achieve exocentric-to-egocentric video generation for daily-life real-world skilled human activities. For such challenging daily-life activities in real-world environments, such as cooking in a kitchen or playing basketball on a court, **the exocentric cameras have to be placed far from the human** in order to capture the complete human activities. As a result, it is particularly difficult to capture detailed human hand-object regions with the exocentric cameras. **Therefore, it is particularly challenging to obtain human hand pose priors or object state information from exocentric cameras via preprocessing**, and **generating human-object interactions is also one of the particular challenges of our Exo2Ego video generation task.**
Therefore, **our method focuses on general exocentric-to-egocentric video generation and also has broad applicability to novel view video synthesis with significant viewpoint variations, as you suggested.** In addition, **our method achieves much better performance on the human-object regions compared to the baselines, as shown in Fig. 3 of the rebuttal pdf.** Our method can accurately generate the human hands that are performing CPR on the model, while SVD generates much worse results. We agree that adding more person-related designs can further improve the performance and leave this as a promising future direction.
# Q1: Failure case analysis.
Thanks for your advice. **We provide two failure cases in Fig. 7 of the rebuttal pdf.** Yes, most of our failure cases are caused by fine-grained human-object interactions, **due to the significant differences between exocentric and egocentric viewpoints, as well as the far-away placement of exocentric cameras in daily-life environments.** Although we achieve much better performance than the baselines, we believe there is room for improvement, especially based on your advice of person-related designs. We leave this as a promising future direction.
---
Rebuttal 2:
Title: Could you kindly review our response?
Comment: Dear Reviewer,
Thank you once again for your feedback. As the rebuttal period is nearing its conclusion, could you kindly review our response to ensure it addresses your concerns? We appreciate your time and input.
Best regards,
Authors of 10425 | Summary: This paper proposes a novel method for generating egocentric videos from multi-view exocentric videos using diffusion-based techniques. This method addresses the challenges of viewpoint variation and dynamic motion by employing a multi-view exocentric encoder and a view translation prior, along with temporal attention layers to enhance temporal consistency. Experiments results on the Ego-Exo4D dataset verify the effectiveness of the proposed method.
Strengths: 1. The logic of the paper is reasonable.
2. The motivation is interesting.
Weaknesses: 1. The exocentric-to-egocentric view translation prior is very similar to the existing work ReconFusion, and the authors need to clarify the differences.
2. In the egocentric video generation pipeline, why perform spatial attention before temporal attention instead of following the TimeSformer? Are there relevant experiments for further clarification?
3. The reasoning efficiency of the whole process should be evaluated. This determines whether it can be practically applied.
4. How does the number of exocentric videos affect performance? In real scenarios, it is difficult to capture 4 time-synchronized exocentric videos at the same time. Besides, do the 4 exocentric videos have to be evenly distributed around the scene?
5. The evaluation dataset is too homogeneous to validate the generalizability of the method on more diverse data. Furthermore, Ego-Exo4D contains a wide range of human activities; why are only 5 of these categories selected for the experiment?
6. The baselines compared are too few to adequately validate the superiority of the method.
7. Writing needs further improvement, tenses should be correct (Line 185, Page 5) and contextual expressions need to be consistent (Unet or UNet).
Technical Quality: 2
Clarity: 2
Questions for Authors: After reviewing the response letter, some of my concerns have been partially addressed. However, the paper's novelty, primarily inspired by ReconFusion, appears to be a weak technical contribution. Additionally, the evaluation is limited by the dataset used. Consequently, I have adjusted my score to borderline reject.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Thanks for helpful comments.**
# W1: Differences with ReconFusion.
Our Exo2Ego prior is inspired by ReconFusion and based on PixelNeRF (see L199-L200, L204-L206 of the main paper), but it significantly differs from ReconFusion in the following aspects.
1) **The task of our method is significantly different from ReconFusion.** ReconFusion targets 3D static scene reconstruction, and thus it trains a diffusion prior on 3D static datasets and explores how to distill their diffusion prior into a 3D model through regularization losses. In contrast, our method is the first work to approach the challenging exocentric to egocentric dynamic video generation for skilled human activities with significant differences between exocentric and egocentric viewpoints.
2) **The design of our Exo2Ego prior is different from ReconFusion’s diffusion prior.** ReconFusion directly trains a diffusion prior that includes a PixelNeRF and diffusion UNet, and then explores distilling from the pretrained diffusion prior. In contrast, our method first trains a PixelNeRF-based Exo2Ego prior, and then we integrate this Exo2Ego prior into our complete Exo2Ego video generation pipeline and finetune Exo2Ego prior to provide the coarse yet spatially aligned egocentric features for our egocentric video generation. In addition, the PixelNeRF from the diffusion prior of ReconFusion is a 2-layer MLP, while our Exo2Ego prior is a 6-layer MLP.
3) **We further propose the multi-view exocentric encoder to provide fine-grained multi-scale exocentric features for egocentric video generation. Our multi-view exocentric encoder and Exo2Ego prior complement with each other.** Our complete Exo2Ego video generation model is the first video-level model for Exo2Ego video generation, and achieves new SOTA.
# W2: Ablation on temporal-spatial.
We conduct an ablation in which our model performs temporal attention first and then spatial attention. As shown in the following table, the original spatial-temporal model is slightly better than the temporal-spatial model in terms of PSNR and SSIM, and slightly worse in LPIPS. Since previous video generation methods such as AnimateDiff [16] and SVD [5] all follow the spatial-temporal pipeline, our method also follows such spatial-temporal attention. Note that since our temporal layers are finetuned from AnimateDiff [16], which is a spatial-temporal pipeline, our spatial-temporal model performs slightly better than the temporal-spatial pipeline. **We provide qualitative results in Fig. 4 of the rebuttal pdf, where ours generates more accurate human-object regions.**
||Unseen Action|||Unseen Take|||
|-|-|-|-|-|-|-|
|| PSNR↑| SSIM↑| LPIPS↓| PSNR↑| SSIM↑| LPIPS↓|
|Spatial-Temporal| **17.37**| **0.493**|0.408|**17.71**|**0.504**|0.456|
|Temporal-Spatial| 17.00| 0.484|**0.402**|17.29|0.490|**0.443**|
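For reference, the PSNR values reported in these tables follow the standard definition $\mathrm{PSNR} = 10\log_{10}(\mathrm{MAX}^2/\mathrm{MSE})$; a minimal sketch (the MSE value below is made up to land in the table's ballpark, not taken from the paper):

```python
import math

def psnr(mse, max_val=1.0):
    """Peak signal-to-noise ratio in dB for a given mean squared error."""
    return 10 * math.log10(max_val ** 2 / mse)

# An MSE of ~0.0183 between [0, 1]-normalized frames gives roughly
# 17.4 dB, the range reported in the table above.
print(psnr(0.0183))
```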
# W3: Reasoning efficiency.
Please refer to common response 2.
# W4: Number of exocentric videos.
**We conduct additional ablations on the number of exocentric views (4, 3, 2, and 1) in the following table**. Our method's performance drops only slightly as the number of exocentric views is reduced. We provide qualitative comparisons in Fig. 5 of the rebuttal pdf. **Even with only one exocentric view, our performance is still much better than the other baselines, which clearly demonstrates the effectiveness of our approach with various numbers of exocentric views. This avoids the necessity of capturing 4 time-synchronized exocentric videos at the same time, making our method more applicable to real-world applications**.
||Unseen Action|||Unseen Take|||
|-|-|-|-|-|-|-|
||PSNR↑|SSIM↑|LPIPS↓|PSNR↑|SSIM↑|LPIPS↓|
|Ours w/ 4 Views|**17.37** |**0.493**|0.408|**17.71**|**0.504**|0.456|
|Ours w/ 3 Views|17.23|0.486|**0.383**|17.40|0.489|**0.428**|
|Ours w/ 2 Views|16.93|0.474|0.399|17.02|0.479|0.445|
|Ours w/ 1 View|17.02|0.478|0.395|17.20|0.476|0.439|
# W5: Evaluation dataset.
We selected the Ego-Exo4D dataset [15] as our main evaluation dataset due to its significant challenges: the complexity and diversity of daily-life scenarios, large differences between exo and ego viewpoints, complex human-object interactions, and the largest Ego-Exo data scale with 1286 hours of video. **Ego-Exo4D has 8 domains: Cooking, Health, Bike Repair, Music, Basketball, Rock Climbing, Soccer, and Dance. It is different from Ego4D and does not contain a wide range of activities**. Since our task focuses on Exo2Ego video generation, **we selected the 5 categories that significantly emphasize both egocentric and exocentric activities, such as human-object interactions in both Ego and Exo viewpoints (see L235-L238 of the main paper)**. The others, such as Dance, Soccer, and Rock Climbing, are mainly exocentric activities and contain very few humans or human-object interactions in the egocentric view, which weakens the usefulness of generating egocentric views; thus they are not included in our experiments.
**Our method can be generalized to other EgoExo datasets. Please refer to common response 1 for more results on H2O dataset.**
# W6: Baselines.
Our method is the first work to achieve Exo2Ego video generation for daily-life real-world skilled human activities. **This task is particularly challenging, and there are no available baselines to compare with.** Therefore, we design three baselines, including the **SOTA open-sourced video generation model** SVD, the **image generation method** SD, and the **3D-based method** PixelNeRF. We modify the input and condition modules of these baselines and train and evaluate them on the Ego-Exo4D dataset. We selected SVD because it is the SOTA open-sourced video generation model. **Therefore, we believe our baselines are adequate to demonstrate the superiority of our method. We additionally run inference with two more pre-trained image-to-video and video-to-video generation models on the Ego-Exo4D dataset, as shown in Fig. 6 of the rebuttal pdf, where these methods totally fail to generate egocentric videos from exocentric video input**.
# W7: Writing.
We have corrected such typos.
---
Rebuttal 2:
Title: Response to Reviewer qWdm (1/2)
Comment: Dear Reviewer qWdm,
Thank you for updating your comment in the Reviews Questions. We are glad that we have addressed some of your concerns. Here we further respond to your remaining concerns about the paper novelty and evaluation dataset.
# Paper novelty.
**We respectfully disagree with your statement that our paper’s novelty is primarily inspired by ReconFusion.** We provided detailed responses about the differences between our Exo2Ego prior with ReconFusion in our rebuttal to address your concern about weakness 1. **Here we further clarify the overall novelty and technical contributions of our paper in comparison with ReconFusion.**
1. **Our method is the first work to achieve exocentric-to-egocentric video generation for daily-life real-world skilled human activities.** This task is particularly challenging in terms of the significant differences between the exocentric and egocentric viewpoints, as well as complex human-object interactions and real-world environments. In contrast, ReconFusion focuses on traditional 3D static scene reconstruction. **Therefore, our method is fundamentally different from ReconFusion.**
2. **To address the above challenges of Exo2Ego video generation, we propose a new diffusion-based multi-view exocentric encoder to extract fine-grained multi-scale exocentric features, as well as an Exo2Ego view translation prior to render spatially aligned egocentric features.** These two modules complement each other with fine-grained exocentric features and spatially aligned egocentric features to provide conditional information for **our egocentric video generation pipeline.** Our complete Exo2Ego video generation model is **the first video-level model for Exo2Ego video generation, and significantly outperforms previous approaches by a large margin of 35% for LPIPS.**
3. **Therefore, the technical contributions of our method are three-folds: 1) Diffusion-based multi-view exocentric encoder, 2) Exo2Ego view translation prior, 3) The first Exo2Ego video generation pipeline.** We conducted extensive ablation studies to demonstrate the effectiveness of each component in our main paper.
As discussed in our main paper (L199-L200, L204-L206) and rebuttal W1, **only the Exo2Ego view translation prior is partially inspired by ReconFusion — both our Exo2Ego prior and ReconFusion are based on PixelNeRF, but our Exo2Ego view translation prior is significantly different from ReconFusion. Furthermore, our proposed first exocentric-to-egocentric video generation model, our multi-view exocentric encoder, and our challenging Exo2Ego video generation task definition are all fundamentally different from ReconFusion.** **Below we further discuss the detailed technical differences between our Exo2Ego translation prior with ReconFusion.**
1. **The design and functionality of our Exo2Ego prior is different from ReconFusion’s diffusion prior.** Our method first trains a PixelNeRF-based Exo2Ego prior, and then we integrate this Exo2Ego prior into our complete Exo2Ego video generation pipeline and finetune Exo2Ego prior to provide the coarse yet spatially aligned egocentric features for our egocentric video generation. In contrast, ReconFusion directly trains a diffusion prior that includes a PixelNeRF and diffusion UNet, and then explores distilling from the pretrained diffusion prior.
2. **Our Exo2Ego prior is specially designed for rendering egocentric views from exocentric inputs.** To achieve this, we differentiate the egocentric views and exocentric views for our Exo2Ego prior model – it only extracts exocentric views’ ResNet features as condition signals to render the pixel colors and features. **Our exocentric and egocentric viewpoints are sparse and significantly different from each other. In contrast, ReconFusion does not differentiate between views and is trained on dense multi-view 3D static scenes datasets that contain hundreds of views for each 3D static scene.**
3. **Our Exo2Ego prior’s model architecture is different from ReconFusion.** We employ a 6-layer MLP as our Exo2Ego prior base model to handle the complex Exo2Ego translation with large Exo-Ego viewpoint differences, while ReconFusion only adopts a 2-layer MLP.
4. **Our Exo2Ego prior’s training strategy is different from ReconFusion.** We first pretrain our Exo2Ego prior on the Ego-Exo4D dataset. Then, we integrate our Exo2Ego prior into our Exo2Ego video generation pipeline and we alternatively finetune Exo2Ego prior and the complete pipeline during training. This strategy ensures that Exo2Ego prior renders spatially-aligned feature maps necessary for egocentric video generation, as well as keeps the original scene reconstruction capability.
---
Rebuttal 3:
Title: Response to Reviewer qWdm (2/2)
Comment: # Evaluation dataset.
We explained in our **rebuttal W5** that we selected Ego-Exo4D dataset [15] as our main evaluation dataset due to its **significant challenges of the complexity and diversity of daily-life scenarios, large differences between exo and ego viewpoints, complex human-object interactions, and its largest Ego-Exo data scale with 1286 hours of video.** We selected the 5 out of 8 categories from the Ego-Exo4D dataset that significantly emphasize both egocentric and exocentric activities, such as human-object interactions in both Ego and Exo viewpoints.
**We have demonstrated that our method can be generalized to other EgoExo datasets such as the H2O dataset in common response 1**, where our method achieves much better performance than SVD, with a significant 30.3% improvement in terms of LPIPS. **We are glad that Reviewer ANQv, who suggested this H2O dataset evaluation, has responded that our rebuttal has resolved all of their concerns, and we will add these results to provide more insights to readers.**
**Therefore, we believe our dataset evaluation is very challenging and extensive to evaluate the superiority of our proposed method.**
---
Rebuttal 4:
Title: Could you kindly let us know whether we address all your concerns?
Comment: Dear Reviewer,
Thank you once again for your feedback. We provided additional responses towards your remaining concerns. As the rebuttal period is nearing its conclusion, could you kindly review our response to ensure it addresses your concerns? We appreciate your time and input.
Best regards,
Authors of 10425 | Summary: This paper proposes a novel diffusion-based video generation method that translates exocentric views to egocentric view. Overall, I think the idea is novel and the method performs well on multiple daily human activities.
Strengths: 1. The application of view translation using Nerf-based approach seems interesting, despite minor improvement on several metrics.
2. Multiview exocentric encoder and temporal modeling is proven effective in supporting view transfer.
3. The paper is well written and easy to follow.
Weaknesses: 1. In Line 157-159, the authors mentioned that encoding multiview exocentric videos using CLIP image features lacks fine-grained details. However, there is no sufficient evidence in the experiments. For example, comparing the proposed method with global CLIP features or intermediate spatial CLIP features as conditions.
2. Generally, as most of the takes in EgoExo4d contain 4 exocentric views, which are significantly fewer than dozens of views used in 3D reconstruction approaches. The effectiveness of reconstructing a 3D scene using such sparse viewpoints is questionable, not to mention the significant camera pose differences between egocentric and exocentric views.
3. In the experiments, the rendered pixels of PixelNeRF are extremely blurry, and thus it is not very convincing to use the CLIP features of the rendered egocentric image in Eq.(4). To prove the effectiveness of the exo2ego prior, the authors are encouraged to show some visualization results of the rendered egocentric features using PixelNeRF.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The proposed method outperforms the baseline SVD that performs video generation given the first frame as condition. It is not clear to me whether the authors also adopted first frame as the condition while training the model since the first frame is a very strong prior. If not, could the authors explain why the proposed method can outperform SVD without using such strong prior, when both methods are given the same exocentric views as conditions.
2. It is not clear whether the proposed method can be generalized to other egoexo datasets that also contain camera poses, e.g. H2O [1], Assembly101[2].
[1] Kwon et al. H2O: Two Hands Manipulating Objects for First Person Interaction Recognition. ICCV21.
[2] Sener, Fadime, et al. Assembly101: A large-scale multi-view video dataset for understanding procedural activities. CVPR 2022.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Please refer to Weaknesses and Questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Thank you for recognizing our work and the valuable comments.**
# W1: Ablation on Exocentric CLIP features.
Thanks for your advice. We conduct an additional ablation study by replacing our exocentric feature encoders with the CLIP exocentric features. As shown in the following table, **our method achieves much better performance compared to using CLIP features across all metrics on the unseen action and unseen take evaluations.** This clearly demonstrates the superiority of our multiview exocentric encoder in maintaining fine-grained details compared to the CLIP features. We also provide a **qualitative ablation comparison in Fig. 4 of the rebuttal pdf**, where our method with the Exo encoder outperforms the one with CLIP features in many detailed regions such as the left and right human arms and objects. We will add this ablation result to our final paper.
| | Unseen Action | | | Unseen Take | | |
|--------------------|---------------|-------|-------|-------------|-------|-------|
| | PSNR↑ | SSIM↑ | LPIPS↓ | PSNR↑ | SSIM↑ | LPIPS↓ |
| Ours w/ Exo Encoder | **17.37** | **0.493** | **0.408** | **17.71** | **0.504** | **0.456** |
| Ours w/ Exo CLIP | 16.54 | 0.456 | 0.425 | 16.41 | 0.445 | 0.480 |
# W2: Only 4 exocentric views are very challenging.
The 4 sparse views and significant differences between exocentric and egocentric cameras are **exactly the key challenges for our Exo2Ego video generation task**. Therefore, **previous 3D reconstruction methods such as PixelNeRF fail in this task**, as shown in the baseline PixelNeRF results in Fig. 3 of main paper. To tackle these challenges, we propose our complete Exo2Ego-V pipeline with both multi-view exocentric encoder to extract multi-scale exocentric features and Exo2Ego translation prior to render coarse yet spatially aligned egocentric feature maps. **These two designs complement each other: the exocentric features are fine-grained but not spatially aligned with the ego input, while the egocentric features are a bit coarse but they are spatially aligned with the ego input.** Our experiments demonstrate that our Exo2Ego-V pipeline achieves much better performance than previous reconstruction and generation baselines for such a challenging task with large Exo-Ego viewpoint differences.
# W3: Visualization of rendered egocentric features.
As shown in Fig. 6 of the appendix, our Exo2Ego prior renders both the egocentric features and images. We want to clarify that we utilize two types of egocentric features: the egocentric features rendered directly from the Exo2Ego prior, and the CLIP features of the egocentric images rendered from the Exo2Ego prior. The first is spatially aligned with the egocentric input latent and is directly concatenated with the input to provide spatial guidance, and the latter is utilized in the cross-attention computation to maintain the original diffusion architecture.
Following your suggestion, we provide more **visualizations of rendered egocentric images and rendered egocentric features on Fig. 2 of rebuttal pdf**. As shown in the figure, **the rendered feature maps can accurately encode the geometric information such as the desktop shape, and the CPR models**. Our rendered features are spatially aligned with the egocentric inputs and are complemented by the fine-grained exocentric features extracted from our exocentric encoder. Although the rendered feature maps can be coarse for complex scenarios, they are spatially aligned egocentric features (as mentioned in L150-L151 of main paper) that can provide spatial guidance for our egocentric generation pipeline, and our ablation studies also demonstrate the effectiveness of our proposed Exo2Ego prior.
**We conducted an ablation study on our proposed Exo2Ego translation prior in Tab. 2 and Fig. 5 of the main paper, where removing Exo2Ego prior results in worse performance for PSNR and SSIM**. In addition, **through the visualization of Fig. 5, removing Exo2Ego prior (second row, second column) results in the missing of right human arm.** These ablations **demonstrate the effectiveness of our Exo2Ego prior.**
# Q1: SVD training.
**For fair comparisons, we provide the same 4 exocentric videos as inputs for our method and all baseline methods including SVD.** Therefore, **we modify the condition blocks of SVD to take 4 exocentric videos as input and finetune the pretrained SVD model on the Ego-Exo4D dataset.** As such, all methods are given the same exocentric videos as inputs to generate corresponding egocentric videos, and they are all trained and tested on the same dataset to achieve fair comparisons among each other.
# Q2: Generalization on other EgoExo datasets.
Thanks for your advice! Our method can be generalized to other EgoExo datasets. Following your suggestion, **we additionally train and evaluate our method on the H2O dataset, and compare our method with the SOTA baseline - SVD in the following table**. As shown in the table, **our method achieves much better performance than SVD, with a significant 30.3% improvement over SVD in terms of LPIPS**. **We provide qualitative comparisons of our method against SVD on Fig. 1 of rebuttal pdf**, where **our method achieves much more photorealistic and accurate results than SVD**, such as the details of human hands, interacted objects, and the environments. **These results demonstrate that our method can be generalized to other datasets**, and we will add this result to the final paper.
| | PSNR↑ | SSIM↑ | LPIPS↓ |
|------|-----------|-----------|-----------|
| Ours | **18.60** | **0.581** | **0.189** |
| SVD | 16.53 | 0.468 | 0.271 |
---
Rebuttal Comment 1.1:
Title: Official Comment by Reviewer ANQv
Comment: Dear authors,
Thanks for your responses and explanations. Overall, the rebuttal solves all my concerns. With these additional experiments, this paper would provide more insights to readers.
Regarding the response to W3 (visualization of rendered egocentric features), I have a few more questions that do not affect the rating.
The visualization results in Figure 2 show that the rendered feature maps/pixels generally capture the low-frequency components of the visual content (e.g., the table), and these maps/pixels have exact spatial correspondence to the GT ego frame as mentioned by the authors.
Given the GT ego frame/rendered feature map/rendered rgb image, is it possible to map one pixel (e.g. center of the table) back to each of the four exocentric frames given the ego and exo camera poses? Could you briefly introduce the solution if possible?
If the mapping is available, this could provide more fine-grained understandings of how each pixel in the rendered feature map/rgb image leverages the pixel-wise exocentric information from each exo camera. I think this is useful when the field-of-ego-view is under occlusion in one/two exocentric cameras, and we would like to know which exo camera(s) provide valuable visual clues to support generation in the ego view.
---
Rebuttal 2:
Title: Response to Reviewer ANQv
Comment: Dear Reviewer ANQv,
**Thanks for your valuable comments! We are glad that our rebuttal solves all your concerns! We will add these additional experiments to the final paper to provide more insights to readers!**
Regarding your additional question, “Given the GT ego frame/rendered feature map/rendered rgb image, is it possible to map one pixel (e.g. center of the table) back to each of the four exocentric frames given the ego and exo camera poses?”
**Yes, it is possible to map one pixel from the ego frame back to each of the four exocentric frames given the ego and exo camera poses.** To achieve that, we also require egocentric depth information $\mathbf{D}$ which can be conveniently obtained by additionally rendering the sampled points depths with their density values $\sigma$ along each ray using our Exo2Ego prior.
Specifically, we can first backproject one egocentric pixel $\left(u, v\right)$ to the 3D space, denoted as $\mathbf{X}$, using egocentric cameras pose (to provide projection direction) together with the egocentric depth at that pixel (to provide the projection distance) as follows,
$\mathbf{X}=E_{\mathrm{ego}}^{-1}\cdot K_{\mathrm{ego}}^{-1}\cdot\mathbf{D}\left[u, v\right]\cdot\left[u, v, 1\right]^{\mathrm{T}}$,
where $E_{\mathrm{ego}}$ and $K_{\mathrm{ego}}$ are the extrinsics and intrinsics of egocentric camera, and $\mathbf{D}$ is the depth map for egocentric frame.
After that, we can project $\mathbf{X}$ to each of the four exocentric frames using the exocentric camera poses $E_{\mathrm{exo}}$ and $K_{\mathrm{exo}}$.
$\left(u_{\mathrm{exo}}, v_{\mathrm{exo}}, 1\right)=K_{\mathrm{exo}}\cdot E_{\mathrm{exo}}\cdot\mathbf{X}$,
where $\left(u_{\mathrm{exo}}, v_{\mathrm{exo}}\right)$ are the projected exocentric pixels.
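The two equations above (backprojection with depth, then reprojection) can be sketched in a few lines. This is an illustrative implementation, not the authors' code, assuming 3x3 intrinsics $K$, 4x4 world-to-camera extrinsics $E$, and a known egocentric depth at the pixel:

```python
import numpy as np

def ego_pixel_to_exo(u, v, depth, K_ego, E_ego, K_exo, E_exo):
    """Map an egocentric pixel (u, v) with known depth to exocentric
    pixel coordinates. K_*: 3x3 intrinsics, E_*: 4x4 world-to-camera
    extrinsics (so E^{-1} maps camera coords back to world coords)."""
    # Backproject: pixel -> egocentric camera coords -> world coords.
    cam_pt = depth * (np.linalg.inv(K_ego) @ np.array([u, v, 1.0]))
    world_pt = np.linalg.inv(E_ego) @ np.append(cam_pt, 1.0)

    # Reproject the 3D world point into the exocentric camera.
    proj = K_exo @ (E_exo @ world_pt)[:3]
    return proj[0] / proj[2], proj[1] / proj[2]  # (u_exo, v_exo)
```

With identity intrinsics and coincident cameras the mapping reduces to the identity, which gives a quick sanity check of the implementation.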
As you suggested, this could provide more fine-grained understandings of how each pixel in the rendered feature map/rgb image leverages the pixel-wise exocentric information from each exo camera.
Thank you very much for recognizing our work and valuable comments!
Best regards,
Authors of 10425 | Summary: This paper deals with the task of exocentric-to-egocentric video generation. It presents a diffusion-based framework of exo2ego-v to tackle the challenges of the significant variations between exocentric and egocentric viewpoints and high complexity of dynamic motions and real-world daily-life environments. It propose a multi-view exocentric encoder to extract the multi-scale multi-view exocentric features as the appearance conditions for egocentric video generation. It also designs an exocentric-to-egocentric view translation prior based on PixelNeRF to provide coarse yet spatially aligned egocentric features as a concatenation guidance for egocentric video generation. Experiments on a part of Ego-Exo4D dataset show the effectiveness of the proposed method.
Strengths: 1) The paper presents a diffusion-based framework to first address the challenging task of exocentric-to-egocentric video generation for real-world skilled human activities.
2) To tackle the challenges of exocentric-to-egocentric video generation, the paper proposes a new diffusion-based multi-view exocentric encoder and an Exo2Ego view translation prior that can extract dense exocentric features and spatially aligned egocentric features as conditions for the egocentric video diffusion pipeline.
3) Experiments on 5 categories of skilled activities on the Ego-Exo4D dataset show that the proposed framework achieves the best performance in most evaluation cases.
Weaknesses: 1) The presentation of the paper needs further improvement.
1.1 For the method part, the network architecture is not clear for the exocentric encoder and the egocentric video diffusion model. As a result, the feature dimension of many notations such as $F_{exo}$, $Z^{t}_{exo}$ are not known. It is also difficult to understand the procedure within Equation (1) and (4).
1.2 For the experiment part, the qualitative results have taken too much space (e.g., Figure 3). Space should be saved for the presentation of the results on the unseen scenes (which are moved to appendix).
1.3 If the test scenes are not seen during training, Exo2Ego translation prior (i.e., pixelNeRF) need be trained for the new scenes. During testing, egocentric video frames are not available, while it is said in the paper that four exocentric frames and one egocentric frame are used to train the pixelNeRF.
2) As shown by the results on the more challenging unseen scenes (Table 3), the proposed method sometimes gets worse performance than the baseline methods. The authors should give more analysis and discussion for such results.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1) Do you train one Exo2Ego prior model for each video, or for each synchronized timestep of a video? It is not clear to me how regularly the Exo2Ego prior model needs to be retrained.
2) What is the dimension of the encoded features for multi-view exocentric videos, and how are they combined with the camera poses which have different dimensions?
3) How are the egocentric camera poses determined during testing? Do you directly use the poses provided by the dataset?
4) In Figure 5, the same results of EgoGT are shown twice. Please have a check.
5) Current evaluation metrics are averaged over the generated video. I wonder how the performance would change over time.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper can also show the runtime of the video generation framework.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Thank you for recognizing our work and the valuable comments.**
# W1.1: Network architecture.
The base network architecture of the exocentric encoder and the spatial modules of the egocentric video diffusion model are extensions of the Stable Diffusion [44]. Therefore, they consist of **four downsample layers, one middle layer, and four upsample layers, where each layer includes 2D convolution, self-attention, and cross-attention**. Since Stable Diffusion operates in the latent space, the feature dimension of input encoded noisy latents is $\mathbb{R}^{N\cdot F\times C\times\frac{H}{8}\times\frac{W}{8}}$.
The exocentric attention features’ dimensions differ across the downsample, middle, and upsample layers, with the channel dimension varying among [320, 640, 1280], and the spatial dimensions $\frac{H}{8}\times\frac{W}{8}$ are downsampled by 2x2 or upsampled by 2x2 for each downsample and upsample layer, respectively. Eq. (1) and (4) are the **denoising processes of diffusion models** that predict the added noise of input noisy latents, except that in Eq. (1) we extract the exocentric features from each denoising layer. We will add these details to the main paper.
# W1.2: Appendix results.
Thanks for your advice. We will move the unseen scenes results to the main paper and move some qualitative results to the appendix.
# W1.3, Q1: Exo2Ego prior training.
**We only train one single generalizable Exo2Ego prior model for all exocentric and egocentric videos from the training set.** **For each training iteration, we randomly extract 4 synchronized exocentric frames and 1 egocentric frame to supervise our Exo2Ego prior (L197-L200).** During the complete training, each iteration loads different Exo-Ego videos from the training set to train our single generalizable Exo2Ego prior. Therefore, we only need to train one single Exo2Ego prior on all videos from the training set.
During testing, our Exo2Ego translation prior can **generalize to test scenes that are not seen during training without retraining.** Therefore, we do not need to train the Exo2Ego translation prior for novel test scenes. As shown in our experiments, we evaluate our Exo2Ego-V on unseen actions, unseen takes, and unseen scenes without retraining our Exo2Ego prior on any of the test sets, which is very challenging. We will explain it clearer in our final paper.
# W2: Unseen scenes results.
The evaluation of the unseen scenes is particularly challenging for two reasons. 1) **We only train one single Exo2Ego-V model**, including the Exo2Ego prior, from the training dataset and evaluate its performance on the proposed 3 challenging test setups without any retraining or finetuning. 2) **The number of different scenes from Ego-Exo4D is relatively limited, but the geographic diversity of the captured scenes is particularly significant across the world**. Therefore, all methods perform worse on the unseen scene evaluation.
**Despite these challenges, our method still outperforms the baseline methods in most metrics for the unseen scene evaluation.** As shown in Tab. 3 of the main paper, PixelNeRF only achieves the best PSNR metric for the Covid Test category, but it generates very blurry results as shown in Fig. 7, since PSNR favors blurry images [37]. SVD occasionally achieves better metric results but it only generates wrong scenes without any human-object interactions, as shown in Fig. 7 of the main paper. In comparison, **our method is the only method that tries to generate both scenes and human-object interactions, such as the background, human arms, and basketball in Fig. 7**. Although the performance on unseen scenes is not yet optimal, we believe jointly training our method with a more diverse multi-view dataset has the potential to improve performance, and we leave this as future work.
# Q2: Exocentric feature dimension.
The exocentric attention features’ dimensions are different for different downsample, middle, and upsample layers with channel dimension varying at [320, 640, 1280], and the spatial dimensions $\frac{H}{8}\times\frac{W}{8}$ are downsampled by 2x2 or upsampled by 2x2 for downsample and upsample layers.
As explained in L183-L187, to encode the relative camera poses information, we do not directly concatenate the camera poses with exocentric features. Instead, **we first encode the camera pose to 1280 dimension with a 2-layer MLP, and then add it to the denoising timestep embedding as the residuals for the noisy latents** in the following denoising blocks. In these following denoising blocks, there are additional MLP layers to map the channel dimension of the timestep and camera embedding to the same dimension as the corresponding noisy latents, so that these embeddings are added to the noisy latents as residuals.
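The conditioning path described above can be sketched roughly as follows. All dimensions except the 1280-dim embedding, the weights, and the function names are illustrative assumptions, not the actual model code: a flattened relative camera pose passes through a 2-layer MLP and is added to the denoising timestep embedding as a residual.

```python
import numpy as np

rng = np.random.default_rng(0)
POSE_DIM, HIDDEN, EMB_DIM = 16, 256, 1280  # flattened 4x4 pose -> 1280-dim

# Random illustrative weights for the 2-layer pose MLP.
w1 = rng.normal(0, 0.02, (POSE_DIM, HIDDEN)); b1 = np.zeros(HIDDEN)
w2 = rng.normal(0, 0.02, (HIDDEN, EMB_DIM)); b2 = np.zeros(EMB_DIM)

def silu(x):
    # SiLU activation: x * sigmoid(x).
    return x / (1.0 + np.exp(-x))

def pose_embedding(pose_4x4):
    # 2-layer MLP mapping the flattened camera pose to 1280 dims.
    h = silu(pose_4x4.reshape(-1) @ w1 + b1)
    return h @ w2 + b2

def conditioned_timestep_emb(t_emb, pose_4x4):
    # The pose embedding is added to the timestep embedding as a residual.
    return t_emb + pose_embedding(pose_4x4)

t_emb = rng.normal(size=EMB_DIM)
out = conditioned_timestep_emb(t_emb, np.eye(4))
```

In the actual network, further per-block MLPs would remap this 1280-dim embedding to each denoising block's channel width before the residual addition.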
# Q3: Egocentric camera poses.
Yes, we directly use the camera poses provided by the Ego-Exo4D dataset.
# Q4: EgoGT Results.
Thanks for your advice, we will remove the repeated EgoGT which was intended for symmetrical layout.
# Q5: Performance over time.
Following your suggestion, we compute the PSNR for each frame of the generated videos over time, and then average these metrics across all test videos for each time step respectively. We evaluate this on the three test sets of the Cooking category and compute all metrics. We report PSNR due to space limits; SSIM and LPIPS show similar results. As shown in the following table, **the performance remains relatively stable over time, and the intermediate frames perform slightly better than the final frames due to larger motion in the latter.**
*Tab. 1 Quantitative metrics (PSNR) over time for Cooking category.*
||Frame 1|Frame 2|Frame 3|Frame 4|Frame 5|Frame 6|Frame 7|Frame 8|
|-|-|-|-|-|-|-|-|-|
|Unseen Action|17.35|17.42|17.44|17.44|17.43|17.35|17.38|17.27|
|Unseen Take|17.63|17.78|17.81|17.76|17.79|17.86|17.71|17.60|
|Unseen Scene|14.01|14.07|14.07|14.02|13.95|13.88|13.80|13.69|
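The per-timestep evaluation described above can be computed along these lines (a minimal sketch; the function names and array shapes are assumptions for illustration):

```python
import numpy as np

def psnr(a, b, max_val=1.0):
    # Peak signal-to-noise ratio between two same-shaped arrays.
    mse = np.mean((a - b) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def psnr_over_time(gen_videos, gt_videos):
    """Per-frame-index PSNR averaged over all videos.

    gen_videos, gt_videos: lists of arrays shaped (F, H, W, C),
    where F is the number of frames per video.
    """
    n_frames = gen_videos[0].shape[0]
    return [
        float(np.mean([psnr(g[f], t[f]) for g, t in zip(gen_videos, gt_videos)]))
        for f in range(n_frames)
    ]
```

Averaging per frame index rather than per video is what exposes the slight degradation on final frames reported in the table.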
# L1: Inference time.
Please refer to common response 2.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' detailed responses which have addressed my initial concerns. After reading all the responses and comments from other reviewers, I would like to increase the rating to "Weak Accept".
---
Rebuttal 2:
Title: Could you kindly review our response?
Comment: Dear Reviewer,
Thank you once again for your feedback. As the rebuttal period is nearing its conclusion, could you kindly review our response to ensure it addresses your concerns? We appreciate your time and input.
Best regards,
Authors of 10425 | Rebuttal 1:
Rebuttal: We thank all the reviewers for their insightful and valuable comments. We also appreciate that the core contributions and the quality of our results are recognized in the review:
1. **First work** to address the **challenging** task of Exo2Ego video generation. **New** diffusion-based multi-view exocentric encoder and an Exo2Ego view translation prior. **Best performance** on most evaluation cases. (Reviewer W4kZ)
2. The application of view translation using the Nerf-based approach seems **interesting**. Multiview exocentric encoder and temporal modeling is proven **effective** in supporting view transfer. The paper is **well written and easy to follow**. (Reviewer ANQv)
3. The task is **very challenging**. The proposed method is **novel and reasonable** that effectively leverages the camera poses and multi-view video features. Achieves **SOTA accuracy**. (Reviewer aauy)
4. **The motivation is interesting**, and the logic of the paper is **reasonable**. (Reviewer qWdm)
**We have included additional figures in the global rebuttal PDF. Please refer to the PDF for further qualitative results.**
Below we first address the common advice from Reviewer ANQv and Reviewer qWdm to demonstrate the generalization of our method on other EgoExo datasets, as well as the common advice from Reviewer W4kZ and Reviewer qWdm on the inference time comparison.
For other concerns and comments, please refer to individual responses to each reviewer.
# Common 1: Generalization on other EgoExo datasets (Reviewer ANQv, qWdm).
Thanks for your advice. Our method can be generalized to other EgoExo datasets. Following your suggestion, **we additionally train and evaluate our method on the H2O dataset [1], and compare our method with the SOTA baseline - SVD in the following table**. As shown in the table, **our method achieves much better performance than SVD, with a significant 30.3% improvement over SVD in terms of LPIPS**. **We provide qualitative comparisons of our method against SVD on the H2O dataset [1] on Fig. 1 of rebuttal pdf**, where **our method achieves much more photorealistic and accurate results than SVD**, such as the details of human hands, interacted objects, and the environments. **These results demonstrate that our method can be generalized to other datasets**, and we will add this result to the final paper.
*Tab. 1 Quantitative comparison of our method against SVD on H2O dataset [1].*
| | PSNR↑ | SSIM↑ | LPIPS↓ |
|------|-----------|-----------|-----------|
| Ours | **18.60** | **0.581** | **0.189** |
| SVD | 16.53 | 0.468 | 0.271 |
# Common 2: Inference efficiency (Reviewer W4kZ, qWdm).
We provide the inference time comparison in the following table, where the inference time of our method to generate an 8-frame egocentric video is 9.06 seconds, which is comparable with the other baselines. We believe it is feasible to use our model in offline applications to generate egocentric videos from exocentric videos, such as capturing exocentric cooking videos and generating corresponding egocentric videos offline for cooking skills learning. Improving the inference speed towards real-time is very promising and we leave it as future work. We will include the inference time comparison in our final paper.
*Tab. 2 Inference time of our method in comparison with baselines.*
| | Ours | SVD | SD | PixelNeRF |
|-------------------------|------|------|------|-----------|
| Inference Time (second) | 9.06 | 4.26 | 6.91 | 5.65 |
Pdf: /pdf/9da4b144c1e837a32bb30c593426ebd11c453258.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Prioritize Alignment in Dataset Distillation | Reject | Summary: This paper makes the key observation that existing dataset distillation methods often introduce misaligned information during both the extraction and embedding stages, which leads to suboptimal performances. In response to this observation, the authors propose a method called Prioritize Alignment in Dataset Distillation (PAD), which aims to filter out misaligned information through two steps: 1) pruning the target dataset based on sample difficulty according to the compression ratio, and 2) using only deep layers of the agent model during distillation to avoid encoding low-level, redundant information.
Strengths: 1. This paper synthesizes a universal framework for existing dataset distillation methods by abstracting those methods into two steps: 1) information extraction and 2) information embedding. Furthermore, it identifies a common theme of information misalignment in both steps. This observation enhances the understanding of current limitations as well as provides a clear direction for future research.
2. The method presented in this paper effectively combines known conclusions from two distinct areas of research: data selection and representation learning. By leveraging established principles from both domains, they provide improvements to existing dataset distillation methods. More importantly, their analysis builds on developing an understanding of dataset distillation and its underlying mechanisms: 1) small datasets require simple data and 2) large datasets do not benefit from low-level information/features.
Weaknesses: 1. The proposed method’s filtering of information extraction is supported by the experiments shown in Figure 2. However, in practice, the method introduces two sets of hyperparameters: initial ratio and data addition epoch. The sensitivity to these hyperparameters (especially AEE, shown in Table c) relative to the incremental performance gain presents a challenge, as running AEE can be complex and time-consuming (it involves retraining the agent). This sensitivity and the associated tuning complexity could hinder practical adoption on larger-scale datasets.
2. The proposed method is adapted from the DATM framework with modifications aimed at enhancing information alignment. However, the performance improvements over DATM (shown in Table 1 and in the cross-architecture generalization results in Table 3) are not significant. This marginal improvement raises concerns about the practical value of the proposed changes, as they may not justify the added complexity.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How is data selection applied in Figure 2 (b)? In the original work, DM [1] uses untrained networks (randomly initialized networks) to embed features. In your experiment, are you using untrained agent models but applying the data selection score (EL2N) from a differently pre-trained model? Overall, I am confused about how the EL2N score is generated: are you replicating the score from [2] and applying it in all your methods, or are you recomputing it based on different distillation architectures?
2. Can the same analysis not applied to more recent SOTA methods such as SRe2L[3], which also involved a component similar to DM [1]? Would the same information alignments trick works as well?
[1] Bo Zhao and Hakan Bilen. Dataset condensation with distribution matching [arXiv:2110.04181](https://arxiv.org/abs/2110.04181)
[2] Mansheej Paul, Surya Ganguli, and Gintare Karolina Dziugaite. Deep learning on a data diet: Finding important examples early in training. [arXiv:2107.07075](https://arxiv.org/abs/2107.07075)
[3] Zeyuan Yin, Erix Xing, and Zhiqiang Shen. Squeeze, Recover and Relabel: Dataset Condensation at ImageNet Scale From A New Perspective [arXiv:2306.13092](https://arxiv.org/abs/2306.13092)
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes, the authors adequately addressed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer UeUP for valuable feedback. We make responses as follows.
**W1: PAD introduces two new sets of parameters, which add to the complexity of tuning**
Thanks for raising this concern. We would like to make the following clarifications:
- They don't need to be changed according to the target datasets. Through experiments, we find that setting IR=75% and AEE=40 generalizes well across various datasets. In all experiments reported in the paper, we use these settings by default except for changing AEE to 20 on CIFAR-10.
- Generally, 75% ~ 80% for IR and 20 ~ 40 for AEE are good settings as the performance doesn't change too much within these ranges. We show ablation results on the other two benchmarks in the tables below:
**CIFAR-100, IPC10** (notion: initial ratio = IR, addition end epoch = AEE)
| IR | AEE=20 | AEE=30 | AEE=40 |
| ---- | ------ | ------ | -------- |
| 50% | 47.0 | 46.9 | 47.1 |
| 75% | 47.5 | 47.4 | **47.8** |
| 80% | 47.4 | 47.6 | 47.7 |
**Tiny ImageNet, IPC1**
| IR | AEE=20 | AEE=30 | AEE=40 |
| ---- | ------ | ------ | -------- |
| 50% | 16.9 | 17.2 | 17.4 |
| 75% | 17.3 | 17.1 | **17.7** |
| 80% | 17.2 | 17.5 | 17.6 |
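To make the two hyper-parameters concrete, the schedule they describe could look roughly like the following. This is a hypothetical sketch (the function name, linear growth, and rounding are my assumptions, not PAD's released code): training starts from the easiest IR fraction of samples and harder samples are added until epoch AEE, after which the full dataset is available.

```python
def available_indices(scores, epoch, initial_ratio=0.75, addition_end_epoch=40):
    """Indices of samples usable at `epoch`, ordered easy -> hard by a
    difficulty score (e.g. EL2N, lower = easier).

    Starts from the easiest `initial_ratio` fraction at epoch 0 and grows
    linearly until `addition_end_epoch`, after which all samples are used.
    """
    order = sorted(range(len(scores)), key=lambda i: scores[i])  # easy -> hard
    progress = min(epoch, addition_end_epoch) / addition_end_epoch
    frac = initial_ratio + (1.0 - initial_ratio) * progress
    return order[: max(1, round(frac * len(scores)))]
```

Under this reading, the reported insensitivity within IR = 75%~80% and AEE = 20~40 corresponds to small shifts in how quickly the hard tail of the data enters training.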
***
**W2: PAD's improvement may not justify the added complexity**
Thanks for the comment. We would like to clarify that PAD doesn't add complexity to DATM from the following three perspectives.
- **Expert training:** With our difficulty measurer and data scheduler, PAD filters out misaligned information for all IPCs in one set of expert trajectories. This means we only need to train expert trajectory **once**, instead of training experts according to IPCs like in DATM. We list the comparison of expert training costs in the table below:
| IPC | DATM | PAD |
| ----- | -------- | --------------- |
| 1 | ~4 hrs. | ~6 hrs. |
| 10 | ~5 hrs. | 0 hrs. (shared) |
| 500 | ~7 hrs. | 0 hrs. (shared) |
| Total | ~16 hrs. | ~6 hrs. |
- **Trajectory matching:** The matching loss is computed on fewer model parameters since we filter out part of model parameters that introduce misaligned information. In this case, the cost of distillation also doesn't increase. The comparison of running time of PAD with that of DATM is shown below. The distillation costs of these two methods are very close.
| IPC | DATM | PAD |
| ---- | --------- | --------- |
| 50 | 8.3 hrs. | 8.1 hrs. |
| 1000 | 11.1 hrs. | 11.2 hrs. |
- **Hyper-parameter tuning:** PAD introduces new hyper-parameters such as AEE and IR. However, these parameters are not costly to tune as we explained above.
***
**Q1: How is data selection applied in Figure 2 (b)? Are you using untrained agent models but applying data selection score (EL2N) from a differently pre-trained model? Are you replicating the score from the paper and apply it in all your methods or are you recomputing it based on different distillation architectures?**
Thanks for the questions. We would like to elaborate the process as follows:
1. Before the distillation begins, we use an untrained ResNet-18 to compute EL2N scores (this is the default setting of EL2N) to evaluate the difficulty of the data sample.
2. Then, after we filter out misaligned data points according to their scores, only reserved difficulty-matched data are used for the distribution matching.
In other words,
- We didn't modify the workflow of DM except now DM can only select samples from the pruned dataset during the distillation.
- For the EL2N score, we replicate it from the paper and use it for all our methods instead of recomputing it on different architectures.
- The implementation of computing EL2N is based on **DeepCore**.
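For reference, the EL2N score of Paul et al. is the L2 norm of the softmax error vector; a minimal version is below (with illustrative logits standing in for the untrained ResNet-18 outputs, and without DeepCore's batching machinery):

```python
import numpy as np

def el2n_scores(logits, labels, num_classes):
    """EL2N difficulty score: ||softmax(logits) - onehot(label)||_2.
    Higher scores correspond to harder samples."""
    # Numerically stable softmax over the class dimension.
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = exp / exp.sum(axis=1, keepdims=True)
    onehot = np.eye(num_classes)[labels]
    return np.linalg.norm(probs - onehot, axis=1)
```

A confidently correct prediction scores near 0, while a maximally uncertain one scores sqrt(0.5) in the 2-class case, which is the ordering the pruning step relies on.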
***
**Q2: Can the same analysis be applied to more recent SOTA methods such as $SRe^2L$**
Thanks for the question. $SRe^2L$[8] is an excellent work on DD that proposes a "squeeze, recover, relabel" process to decouple the previous bi-level optimization. It achieves great success on large-resolution benchmarks such as ImageNet-1k. To show PAD's compatibility, we apply PAD in the *squeeze* stage of $SRe^2L$. Please see the experiment results and analysis below:
- **Setting:** We carry out data selection to filter the information extracted in the *squeeze* stage. For simplicity, experiments are done on CIFAR-100. All parameters are the same as in the original paper. The model used is ResNet-18.
- **Results:** Please see the table below
| IPC | SRe2L | SRe2L + PAD(FIEX) |
| ---- | ----- | --------------------- |
| 1 | 25.4 | 26.7 ($\uparrow$ 1.3) |
| 10 | 28.2 | 29.3 ($\uparrow$ 1.1) |
| 100 | 57.2 | 57.9 ($\uparrow$ 0.7) |
- **Analysis:** After applying PAD to filter out misaligned information extracted in the *squeeze* stage, the performance of $SRe^2L$ improves on both small and large IPC settings. This further validates our hypothesis that filtering misaligned information is effective.
- **Conclusion:** Filtering misaligned information can improve $SRe^2L$. It further supports that PAD is beneficial to matching-based methods for performance improvement.
**We will add the discussion of $SRe^2L$ in the revision as follows:**
- section 6, paragraph 5, "$SRe^2L$ introduces a "squeeze, recover, relabel" procedure that decouples previous bi-level optimization and achieves success on high-resolution settings with lower computational costs." (short version)
**We will add the comparison with $SRe^2L$ in the revision as follows:**
- section 5.5, a new section to discuss the compatibility of PAD on other DC/DM-based methods, add
"We implement the Filtering Information Extraction (FIEX) on $SRe^2L$ which involves a component similar to DM. As shown in Table 5, compared with the baseline, filtering misaligned information brings remarkable improvements on both small and large IPC settings." (short version)
---
Rebuttal 2:
Title: To save reviewer's time, we put a summary of rebuttal
Comment: Dear Reviewer UeUP,
Thanks so much again for the time and effort spent on our work. Given the limited time available, and to save the reviewer's time, we have summarized our responses here.
**Reviewer UeUP:**
1. \[**New hyper-parameters introduced**\]:
**Response:**
- We explain that these two hyper-parameters don't need to be changed according to different datasets, so they are not hard to tune.
- We provide experimental results on other benchmarks to show that AEE=20~40 and IR=75%~80% are good settings and can be generalized well.
2. \[**PAD adds more complexity**\]:
**Response:**
- We explain why PAD **doesn't** add any complexity from trajectory training, distillation, and hyper-parameter tuning perspectives.
- We provide overhead comparisons with DATM in trajectory training and distillation to further demonstrate that PAD is efficient.
3. \[**Data selection in Figure 2(b)**\]:
**Response:**
- We provide detailed steps on how to apply data selection in DM.
- The data selection is done before the distillation starts. Misaligned samples are pruned and won't be used later.
- We replicate the EL2N score in the paper and use default settings to compute the score.
- The implementation is based on DeepCore.
4. \[**Discussion of SRe2L**\]:
**Response:**
- We summarize the work of SRe2L and discuss the differences between PAD and SRe2L.
- We apply PAD on SRe2L and provide experimental results to show that PAD can bring remarkable improvements to SRe2L.
- We promise to include a discussion of SRe2L and the experiment in the revision.
Since the discussion stage is already halfway through, may I know if our rebuttal addresses the concerns? If there are further concerns or questions, we are more than happy to address them. Thanks again for taking the time to review our work and provide insightful comments.
Best wishes,
Authors
---
Rebuttal Comment 2.1:
Title: Thank you!
Comment: Thank you for the detailed response and follow-up. All my concerns have been addressed. The authors' clarification has demonstrated that the proposed method can be applied to more SOTA methods without introducing complicated hyperparameter tuning or computation overhead. I would like to keep my score of acceptance!
---
Reply to Comment 2.1.1:
Title: We appreciate the support!
Comment: Thanks very much for *supporting* and *accepting* our work. We are happy that we have *addressed* your concern that hyper-parameter tuning *is not complicated*. We are also glad to see that PAD *can be applied* to more recent SOTA methods. Your constructive feedback is very *helpful* for us to improve our work.
If there are any further questions, we are more than willing to provide a detailed explanation. Thank you! | Summary: This paper proposes to study the information misalignment problem in dataset distillation.
It proposes two basic pruning strategies: (1) learn the synthetic data with easy real samples first, and gradually change to harder samples, and (2) only match the deep layers of the network during trajectory matching. The proposed method could enhance current methods in a wide range of dataset settings.
Strengths: - The paper is well-written.
- The motivation is reasonable.
- The idea of using a scheduler to dynamically adjust the real sample difficulty is smart.
Weaknesses: 1. The experimental observations supporting the two strategies (Information Extraction and Information Embedding), involving Figures 2 and 3, are not sufficient. Experiments on a wider range of datasets and more pruning ratios could rationalize the method. And a comparison of discarding deeper-layer parameters is missing.
2. The wide existence of difficulty-aware dataset distillation could **potentially** weaken the contribution. Some discussion is appreciable:
```
[1] Prune Then Distill: Dataset Distillation with Importance Sampling
[2] Distill Gold from Massive Ores: Efficient Dataset Distillation via Critical Samples Selection
[3] (DATM) Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching
[4] On the Diversity and Realism of Distilled Dataset: An Efficient Dataset Distillation Paradigm
```
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Line 125-127: is there any experimental comparison to support the conclusion?
2. Is it possible to combine PAD with DATM, since they do not conflict?
3. The ablation study in Table 3(b) is weak. Conducting the experiments on other datasets may help.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer emvM for the valuable feedback. We respond as follows.
**W1: Experiments to support the two strategies are not sufficient**
Thanks for the comment. We provide more results as follows (***will be included in the revision***):
- **Settings:**
Dataset: CIFAR-10
Arch: ConvNetD3.
IPC 1/10: remove hard samples.
IPC 500: remove easy samples.
Other parameters are the same.
Init: real images.
- **Results: Please refer to Author Rebuttal Tables 1~6**.
- **Analyses:**
- **FIEX:**
- **Small IPCs:**
DC/DM: the performance consistently exceeds the baseline within 50% of removal.
MTT: removing up to 30% hard samples helps improve the performance.
- **Large IPCs:** removing up to 20% easy samples improves the distillation performance in large IPC settings.
- **Percentage Range:** The average range of removal percentages that exceed the baseline is nearly 40%, indicating that misaligned information widely exists during the information extraction stage.
- **FIEM:**
- Both small and large IPCs benefit from discarding shallow-layer parameters.
- Discarding smaller ratios of shallow-layer parameters is more effective on small IPCs.
- Discarding larger ratios of shallow-layer parameters is more effective on large IPCs.
- **Conclusion:** Two strategies are effective in improving performances.
**CIFAR-100 is still running**. We will report the results when it finishes.
---
**W2: The comparison of discarding deeper-layer parameters is missing.**
Thanks for the comment. Here are results of discarding deep-layer parameters (***will be included in the revision***):
- **Settings:**
Dataset: CIFAR-10
Arch: ConvNetD3
Discard ratios: 25%, 50%, 75%.
- **Results:** **Please refer to Author Rebuttal Table 8**
- **Analysis:** Discarding deep-layer parameters **significantly reduces** the distillation performance. This indicates that useful information is mainly distilled by deep-layer parameters.
- **Conclusion:** Deep-layer parameters are more important than shallow-layer parameters.
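The shallow-layer-discarding ablation discussed above can be illustrated with a small sketch. This is a hypothetical NumPy rendering of an MTT-style normalized trajectory-matching loss restricted to deep parameter groups; the `drop_ratio` argument and the shallow-to-deep group ordering are our assumptions, not the authors' exact implementation.

```python
import numpy as np

def masked_matching_loss(student, expert_start, expert_end, drop_ratio):
    # Parameter groups are ordered shallow -> deep; the shallowest
    # `drop_ratio` fraction is excluded from matching, so the loss is
    # computed only on the remaining deep-layer groups.
    start = int(len(student) * drop_ratio)
    num = sum(np.sum((s - e) ** 2)
              for s, e in zip(student[start:], expert_end[start:]))
    den = sum(np.sum((s0 - e) ** 2)
              for s0, e in zip(expert_start[start:], expert_end[start:]))
    return num / den  # normalized squared distance, MTT-style
```

Discarding deep groups instead (i.e., slicing `[:start]`) would correspond to the reviewer's requested comparison, which the results above show degrades performance.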
---
**W3: Discussion of other difficulty-aware DD**
Thanks for the advice. We provide a thorough analysis as follows (***will be put into the revision; please refer to the Author Rebuttal Revision for more details***).
**Prune-then-Distill**[5] proposes to prune the target dataset with data selection methods before distillation. For each IPC, only data with high EL2N scores are used for distillation, since they are recognized as **important**.
- **Differences:**
- We use the EL2N score to evaluate the **difficulty** of training examples, in order to filter out misaligned information. In practice, we propose reserving samples with high EL2N scores in large IPC settings but samples with low EL2N scores in small IPC settings, which turns out to be effective for all matching-based distillation methods.
**BLiP**[4] proposes that part of the data samples in the dataset are redundant. It designs a data utility indicator to evaluate whether samples are **useful** for the distillation given an IPC setting; samples with low utility are then pruned.
- **Differences:**
- We find that a few samples of the target dataset are actually harmful for the distillation since they provide over-hard or over-easy information. So we filter samples according to their difficulty as measured by the EL2N score.
- The data scheduler allows our method to only buffer expert trajectories once for a dataset, which can be utilized by arbitrary IPC settings.
**RDED**[6] proposes to extract image patches directly from the real dataset and rearrange key ones according to their scores, instead of synthesizing new images.
- **Differences:**
- Our method focuses on addressing the information misalignment problem for matching-based distillation methods, which generate new synthetic samples during the distillation.
**DATM**[7] is the first to find that the difficulty of information embedded into the synthetic data should be aligned with the IPC setting.
- **Differences:**
- DATM only aligns the information by controlling the matching range of training trajectories, which is only effective for methods based on matching trajectory.
- We find out misaligned information exists in the *Information Extraction* stage and *Information Embedding* stage. By filtering out misaligned information, our method outperforms DATM on every benchmark.
---
**Q1: Experiments to support Line 125-127**
Thanks for the comment. We report experimental verification as follows (***will be included in the revision***):
- **Setting:**
Dataset: CIFAR-10
IPC: 500
- **Results: Please refer to Author Rebuttal Table 9**
- **Analysis:** Removing easy samples in one operation performs better. This supports our conclusion that after being trained on the full dataset for some epochs, letting the model focus on learning hard information is more effective for the distillation in large IPC cases.
---
**Q2: Can combine PAD with DATM?**
Yes, they can be combined. As DATM is the SOTA trajectory-matching method, to verify the effectiveness of PAD, our implementation is based on DATM. As shown in Table 1, PAD outperforms DATM in every setting.
Moreover, PAD can also be combined with other matching-based algorithms such as DC and DM (please refer to Author Rebuttal Table 1~6).
---
**Q3: More ablation studies in Table 3(b)**
Thanks for your advice. We add ablation on CIFAR-100 in the revision. Here are the results:
- **Settings:**
- Dataset: CIFAR-100
- IPC: 50
- **Results: Please refer to Author Rebuttal Table 10**
- **Analyses:** Two filtering modules both bring improvements. FIEM brings more improvements.
- **Conclusion:** Both modules are effective.
**Tiny-ImageNet is still running**. We will report the results when it finishes.
---
Rebuttal 2:
Title: To save reviewer's time, we put a summary of rebuttal
Comment: Dear Reviewer emvM,
Thanks so much again for the time and effort spent on our work. Given the limited time available, and to save the reviewer's time, we have summarized our responses here.
1. \[**More explanations and experiments on Figure 2 and 3** \]:
**Response:**
- We offer results on more IPCs and test ratios as required in Table 1~6.
- New results further support our conclusion that misaligned information exists in matching-based DD, and our filtering strategy is effective.
2. \[**Discussion of listed works**\]:
**Response:**
- We discuss all mentioned works in details, including *Prune-then-Distill*, *BLiP*, *DATM*, and *RDED*.
- We provide thorough analyses of the differences between PAD and these works.
- We promise to add discussions of these works in the revision.
3. \[**Experimental results for Line 125-127**\]:
**Response:**
- We provide comparisons in Table 9 as required.
- Through experimental comparisons, our conclusion that directly removing easy samples during trajectory training is more effective has been further supported.
4. \[**Combine PAD with DATM**\]:
**Response:**
- We answer the question that PAD and DATM can be combined and we built PAD upon DATM.
- In Table 1 of the submission, PAD consistently improves DATM in all settings, further supporting our conclusion.
5. \[**More ablation studies**\]:
**Response:**
- We provide more ablation experiments on CIFAR-100 in Table 10 as required.
- The results match our conclusion: both filtering information extraction and filtering information embedding improve the performance.
Since the discussion stage is already halfway through, may I know if our rebuttal addresses the concerns? If there are further concerns or questions, we are more than happy to address them. Thanks again for taking the time to review our work and provide insightful comments.
Best Regards,
Authors
---
Rebuttal Comment 2.1:
Comment: Thanks for the response and my concerns are mostly addressed. I appreciate the authors' contribution and would raise my rating to 5 since the revision meets the acceptance bar (but rejection is not that bad).
Besides, I suggest the authors **concisely** and clearly state the novelty according to W3 response in the revision to distinguish it from other work. And there are more contemporary works that the authors could consider discussing in the final version such as [1].
[1] SelMatch: Effectively Scaling Up Dataset Distillation via Selection-Based Initialization and Partial Updates by Trajectory Matching, ICML'24
---
Reply to Comment 2.1.1:
Title: Follow-up Discussion with Reviewer emvM
Comment: Thanks so much for the support. We appreciate the constructive feedback.
Below, we provide a concise summary of our novelty:
- We discover that misaligned information exists in existing matching-based DD methods. Such information comes from the mismatch between IPC capacity and actual information difficulty.
- We propose to filter out misaligned information from two perspectives:
- *Information Extraction*: We conduct data selection to add/remove hard/easy samples during different phases of trajectory training.
- *Information Embedding*: We conduct parameter selection to remove shallow-layer parameters according to different IPCs during trajectory matching.
- PAD can also be applied to other methods based on DC or DM.
We will point out our novelty in the revision more clearly as suggested. Thanks for the reminder.
Thanks for introducing an excellent work, **SelMatch**. We provide a thorough discussion as follows:
- **SelMatch** points out the limitation of MTT being less effective in the large IPC setting. It proposes selection-based initialization and partial update to improve the coverage of hard and diverse patterns. SelMatch achieves remarkable success in scaling MTT on large IPCs.
**Differences:**
- **Data Perspective:** SelMatch uses all samples of the target dataset for trajectory training. PAD finds that a few samples of the dataset are actually harmful for the distillation since they will provide overly hard or overly easy information. So, PAD adds hard and removes easy samples during trajectory training to reduce the overly hard and overly easy information extracted at different training phases.
- **Trajectory Perspective:** SelMatch uses all model parameters for trajectory matching. PAD aligns the difficulty of information distilled by different model parameters with each IPC capacity. It reduces low-level basic information distilled from shallow-layer parameters and improves the distillation performance.
In the revision, we will add the discussion of SelMatch as follows:
- section 6, paragraph 3, "**SelMatch** proposes selection-based initialization for synthetic images to increase the coverage of hard patterns. It also employs partial update that updates part of synthetic images and keeps the rest unchanged to maintain diverse patterns. **SelMatch** successfully scales MTT as the IPC increases."
We hope our response resolves your concerns. We are more than willing to answer any further questions you may have. Your support is appreciated and helps us improve our work. | Summary: The authors claim that existing data distillation methods introduce misaligned information, so they propose Prioritize Alignment in Dataset Distillation (PAD). PAD prunes the target dataset and uses only deep layers of the agent model to perform the distillation, achieving state-of-the-art performance.
Strengths: 1. The paper is somewhat well-written and mostly easy to follow. And the tables/figures are well-demonstrated.
2. The authors analyze the misaligned information from two perspectives and propose a corresponding method.
3. PAD achieves improvements on various benchmarks, achieving state-of-the-art performance.
Weaknesses: The performance gains brought by the method proposed by the authors are subtle and limited, potentially attributable to other explanations. For instance, as mentioned in [1], discarding original data in certain ways, or even randomly, can yield minor performance improvements under different IPC conditions. Or tricks mentioned in [2].
The trend changes in Figure 2 are not pronounced, and there are even instances where the trends contradict the explanations. Could additional test ratios or test IPCs be included to validate the findings?
[1] Distill Gold from Massive Ores: Efficient Dataset Distillation via Critical Samples Selection.
[2] Data Distillation Can Be Like Vodka: Distilling More Times For Better Quality
Technical Quality: 2
Clarity: 3
Questions for Authors: In Figure 2, Figure 3, and Table 5, the baselines do not all align with those in Table 1 or the results presented in the original paper. Is there an explanation I have missed? Please clarify this discrepancy, as I am unable to evaluate the correctness of "Misaligned Information Extracted" without understanding the reason for this variation.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Due to the limitation of computing resources, the authors only validated their method’s effectiveness on DATM, DM, and DC.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer xUdN for the feedback. We respond as follows.
**W1: Performance gains are limited and may have other explanations**
Thanks for the question. In Table 1, we achieve 11 SOTAs out of 12 settings. Moreover, PAD can be generalized to methods based on matching gradients (DC[1]) and distributions (DM[2]), further showing the effectiveness of filtering out misaligned information.
To clarify the differences between PAD and "PDD"/"BLiP", we provide analyses and experimental verification as follows (***will be put into the revision; please refer to the Author Rebuttal Revision for more details***):
- **PDD**[3] progressively synthesizes multiple sets of images, to capture the training dynamics of different stages. It achieves success in enhancing existing DD methods.
**Differences:**
- **Data Perspective:** PDD uses all samples of the target dataset. PAD filters out samples that contain misaligned information according to the IPC setting, and they won't be used during the entire distillation process.
- **Trajectory Perspective:** PAD aligns the difficulty of information distilled by different model parameters with each IPC capacity and improves the distillation performance.
- **BLiP**[4] proposes that it is not necessary to use all samples of the target dataset to perform the distillation since they are redundant. It greatly improves the distillation efficiency by pruning a large percentage of redundant data.
**Differences:**
- **Data Perspective:** BLiP proposes a data utility indicator to evaluate if samples are 'useful' given an IPC setting, then samples with low utility are pruned. We find that a few samples of the target dataset are actually harmful for the distillation since they will provide over-hard or over-easy information. So, PAD filters data samples according to difficulty measured by EL2N scores.
- **Trajectory Preparation Perspective:** For trajectory-matching-based methods, BLiP needs to train expert trajectories case by case due to different pruning ratios. Our data scheduler allows our method to only buffer expert trajectories once, which can be utilized by arbitrary IPC settings.
**Experimental Comparison with BLiP:**
- **Settings:** For a fair comparison, we only compare the improvement brought by our data selection module (FIEX) with BLiP. Experiments are done on CIFAR-10 and we use MTT[5] as the distillation method. The data pruning ratios and IPCs tested are the same as BLiP.
- **Results: Please refer to Author Rebuttal Table 7**
- **Analyses:**
- PAD brings better performance improvements on IPC1/10/50.
- Under the given data-dropping ratios, PAD's improvements over BLiP get larger as the IPC increases.
- **Conclusion:** Difficulty misalignment between IPC and real data used is more harmful. PAD's data selection module is more effective in removing such misaligned information.
***
**W2: Figure 2 needs to be explained**
Thanks for the comment. In Figure 2, we show a group of ablation results to demonstrate that removing misaligned information from the information extraction step helps improve the performance. We make additional clarifications as follows:
- The percentage of easy (hard) samples to remove is not always "the higher the better" for large (small) IPCs. Once the removal ratio crosses a certain value, we lose too much information to achieve good performance. For example, the performance of trajectory matching at IPC 500 drops below the baseline when more than 20% of easy samples are removed. This explains why some points are below the baseline.
- We find the percentage of easy(hard) samples that are not aligned with large(small) IPCs resides in a **large range** (around 40%, please refer to Tables 1,2,3 in the Author Rebuttal). This means misaligned information widely exists in previous DD and our proposed solution is effective.
***
**W3: Ablation under more IPCs and reduction ratios in Figure 2.**
Thanks for the advice. More ablation studies are reported as follows:
- **Settings:**
Dataset: CIFAR-10
Architecture: ConvNetD3.
IPC 1/10: remove various ratios of hard samples.
IPC 500: remove various ratios of easy samples.
Initialization: real images.
Other parameters: same as the default.
- **Results:** Please refer to **Author Rebuttal Table 1~3**.
- **Analyses:**
- **Small IPCs:** Removing part of hard samples helps improve the distillation performance.
- **Large IPCs:** Removing part of easy samples improves the distillation performance.
- **Percentage Range:** The average range of removal percentages that give better performance than the baseline is nearly 40%, indicating that misaligned information is a common issue for existing DD, and our filtering strategy is effective.
- **Conclusion:** Misaligned information indeed exists and filtering it out can alleviate the negative effect.
***
**Q1: Results of DC and DM in Figure 2, 3 don't match Table 1**
We are sorry for the confusion. We elaborate on the reasons as follows:
- In DC and DM, there are two ways to initialize the synthetic images: random Gaussian noise and images from the real dataset. In the current version, we initialize the synthetic data with random noise.
- We used smaller inner-loop and outer-loop steps in previous experiments for convenience.
Thanks for the reminder. To avoid confusion, we adjust our experiments with real image initialization and official hyper-parameter settings. New results are reported as follows:
- **Settings:**
Init: real images
Other parameters: same as the default.
- **Results:** Please refer to **Author Rebuttal Table 1~6**.
- **Analysis:** As can be observed, the trend still matches our previous experiments. Removing difficulty misaligned information distilled by shallow-layer model parameters according to IPCs achieves better performances.
- **Conclusion:** Misaligned information exists in the matching-based methods. Filtering it out brings improvements.
---
Rebuttal 2:
Title: To save reviewer's time, we put a summary of rebuttal
Comment: Dear Reviewer xUdN,
Thanks so much again for the time and effort spent on our work. Given the limited time available, and to save the reviewer's time, we summarize our responses here.
1. \[**Other explanations to performance gains**\]:
**Response**:
- PAD achieves SOTAs on 11 out of 12 settings.
- We provide a detailed discussion of the two mentioned works, BLiP and PDD.
- We also provide experimental comparisons between PAD and BLiP in Table 7: PAD is more effective than BLiP on MTT.
2. \[**More explanations and experiments on Figure 2**\]:
**Response**:
- We offer more clarifications on the trend of Figure 2 and explain why some points could be below the baseline.
- As requested, we add more IPCs and test ratios and list all results in Author Rebuttal Tables 1~3.
- Results still support our conclusion that misaligned information exists in existing matching-based DD, and removing it improves the performance.
3. \[**Figure 2,3 results and Table 1 don't match**\]:
**Response**:
- We explain the reasons why these results don't match: we used noise for synthetic image initialization, and we reduced the inner- and outer-loop steps.
- We reran all experiments with the default hyper-parameter settings and fixed the results in Figures 2 and 3. Updated results are shown in Author Rebuttal Tables 1~6. The trend still matches our previous results.
- We will update all results in the revision.
Since the discussion stage is already halfway through, may I know if our rebuttal addresses the concerns? If there are further concerns or questions, we are more than happy to address them. Thanks again for taking the time to review our work and provide insightful comments.
Best Regards,
Authors
---
Rebuttal Comment 2.1:
Title: Official Comment by xUdN
Comment: Thanks for the authors' efforts and thorough clarifications. The authors have provided detailed supplementary explanations and experiments, and the rebuttal has addressed most of my concerns. I'd like to increase my score.
---
Reply to Comment 2.1.1:
Title: We appreciate the support
Comment: Thanks very much for the support. We are happy to see that we have addressed your concerns. Your valuable feedback is very helpful for us in improving our work.
If you have any further questions, we are more than happy to provide a detailed explanation. Thanks again for accepting our work. | null | null | Rebuttal 1:
Rebuttal: We sincerely thank all reviewers for their valuable feedback, which is very important to further improve our work.
We begin with the results of additional experiments and our planned revisions to the paper; detailed responses to each reviewer follow separately to allow more space.
### Tables
**Table 1: Filtering Information Extraction in Matching Gradients**
| CIFAR-10 | baseline | 5% | 10% | 15% | 20% | 25% | 30% | 50% |
| - | - | - | - | - | - | - | - | - |
| IPC1 | 27.8 | 28.0 | 28.4 | 28.5 | **29.1** | 28.8 | 28.1 | 27.9 |
| IPC10 | 44.7 | 45.2 | 45.5 | 45.7 | 46.1 | **46.3** | 45.3 | 44.7 |
| IPC500 | 70.8 | 71.7 | **71.9** | 71.2 | 71.4 | 70.3 | 69.8 | 67.1 |
**Table 2: Filtering Information Extraction in Matching Distributions**
| CIFAR-10 | baseline | 5% | 10% | 15% | 20% | 25% | 30% | 50% |
| - | - | - | - | - | - | - | - | - |
| IPC1 | 26.4 | 26.5 | 27.1 | 27.3 | 27.9 | 28.2 | 28.5 | **29.2** |
| IPC10 | 48.4 | 48.6 | 48.9 | 49.7 | **50.3** | 49.6 | 49.2 | 48.5 |
| IPC500 | 75.1 | 75.6 | 76.2 | **76.3** | 75.8 | 75.3 | 74.6 | 74.2 |
**Table 3: Filtering Information Extraction in Matching Trajectories**
| CIFAR-10 | baseline | 5% | 10% | 15% | 20% | 25% | 30% | 50% |
| - | - | - | - | - | - | - | - | - |
| IPC1 | 46.4 | 46.9 | 47.1 | 47.3 | **47.6** | 47.2 | 47.0 | 46.7 |
| IPC10 | 66.5 | 66.7 | 67.2 | **67.4** | 67.2 | 67.3 | 66.8 | 65.4 |
| IPC500 | 83.5 | 83.6 | **84.3** | 83.9 | 83.5 | 83.2 | 82.7 | 81.1 |
**Table 4: Filtering Information Embedding in Matching Gradients**
| CIFAR-10 | baseline | 12.5% | 25% | 50% | 62.5% | 75% |
| - | - | - | - | - | - | - |
| IPC10 | 44.6 | 44.8 | **45.2** | 44.7 | 44.1 | 43.8 |
| IPC500 | 72.2 | 72.3 | 72.5 | 72.8 | 73.2 | **73.4** |
**Table 5: Filtering Information Embedding in Matching Distributions**
| CIFAR-10 | baseline | 12.5% | 25% | 50% | 62.5% | 75% |
| - | - | - | - | - | - | - |
| IPC10 | 48.9 | 49.1 | **49.5** | 49.1 | 48.5 | 48.3 |
| IPC500 | 75.1 | 75.2 | 75.5 | 75.9 | 76.2 | **76.3** |
**Table 6: Filtering Information Embedding in Matching Trajectories**
| CIFAR-10 | baseline | 12.5% | 25% | 50% | 62.5% | 75% |
| - | - | - | - | - | - | - |
| IPC10 | 66.8 | 67.1 | **67.2** | 66.9 | 66.2 | 65.5 |
| IPC500 | 83.5 | 83.7 | 83.8 | 84.2 | 84.3 | **84.5** |
***
**Table 7: Comparison between BLiP and PAD on MTT**
Notation: The number on the left in the bracket denotes the improvement over MTT, and the number on the right denotes the percentage of real data used for distillation.
| IPC | PAD (Ours) | BLiP |
| - | - | - |
| 1 | 46.8 (**+0.6**, 80%) | 46.3 (+0.2, 80%) |
| 10 | 66.5 (**+1.1**, 90%) | 65.7 (+0.4, 90%) |
| 50 | 73.0 (**+1.4**, 95%) | 72.0 (+0.4, 95%) |
***
**Table 8: Performances of discarding deep-layer parameters for distillation**
| IPC | PAD (w/o data selection) | 25% | 50% | 75% |
| - | - | - | - | - |
| 1 | 46.9 | 44.1 (-2.8) | 43.2 (-3.7) | 41.8 (-5.1) |
| 10 | 66.9 | 62.2 (-4.7) | 57.7 (-9.2) | 51.1 (-15.8) |
| 50 | 76.1 | 69.2 (-6.9) | 66.5 (-9.6) | 58.3 (-17.8) |
***
**Table 9: Comparison between direct removal and gradual removal of easy samples.**
| Strategy | CIFAR-10 IPC500 | CIFAR-100 IPC50 |
| - | - | - |
| Gradual removal | 84.2 | 55.6 |
| Direct removal | 84.6 (+0.4) | 55.9 (+0.3) |
***
**Table 10: Ablation of modules on CIFAR-100 IPC50**
| FIEX | FIEM | Accuracy |
| - | - | - |
| | | 55.0 |
| ✓ | | 55.2 |
| | ✓ | 55.6 |
| ✓ | ✓ | 55.9 |
### Revision
**We thank the Reviewer xUdN for adding two other related works. We will add the discussion and experiment results in the revision as follows:**
- section 6, paragraph 3, add "**BLiP**\[4\] discovers the issue of data redundancy in previous distillation..."
- section 6, paragraph 3, add "**PDD**\[3\] identifies the change of learned pattern complexity at different training stages..."
- section 5.4, add "We compare FIEX with **BLiP**\[4\] on MTT. As shown in Table 5, FIEX in PAD performs better on IPC1/10/50..."
**We thank Reviewer emvM for introducing three other difficulty-aware distillation works. In the revision, we will add the discussion of these papers in the related work as follows:**
- in section 6, paragraph 3, "**BLiP**\[4\] and **Prune-then-Distil**\[5\] discover the issue of data redundancy..."
- in section 6, paragraph 3, "**PDD**\[3\] identifies the change of learned pattern complexity at different training stages... " ( *will present its results in Table 1* in the revision)
- in section 6, paragraph 5, "**RDED**\[6\] proposes a computationally efficient DD method that doesn't require synthetic image..." (*will present its results in Table 1* in the revision)
- in section 5.4, we add experimental comparisons and analysis between PAD and **BLiP**\[4\].
### **References**
[1] *Dataset Condensation with Gradient Matching*, ICLR 2020
[2] *Dataset Condensation with Distribution Matching*, CVPR 2021
[3] *Data Distillation Can Be Like Vodka: Distilling More Times For Better Quality*, ICLR 2024
[4] *Distill Gold from Massive Ores: Efficient Dataset Distillation via Critical Samples Selection*, ECCV 2024
[5] *Prune Then Distill: Dataset Distillation with Importance Sampling*, ICASSP 2023
[6] *On the Diversity and Realism of Distilled Dataset: An Efficient Dataset Distillation Paradigm*, CVPR 2024
[7] *Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching*, ICLR 2024
[8] *Squeeze, Recover and Relabel: Dataset Condensation at ImageNet Scale From A New Perspective*, NeurIPS 2023 | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
LLM-based Skill Diffusion for Zero-shot Policy Adaptation | Accept (poster) | Summary: This paper presents a novel framework called LLM-based Skill Diffusion (LDuS), designed to enable zero-shot skill-based policy adaptation to various contexts specified in natural language. It leverages LLMs to guide a skill diffusion model, allowing for the generation of adaptable skill trajectories. The framework uses a hierarchical skill learning structure and a loss-guided diffusion process with sequential in-painting techniques. The authors demonstrate the effectiveness of LDuS in adapting to diverse contexts through experiments on robotic manipulation tasks, showing improved performance compared to other methods.
Strengths: * The approach is interesting and tackles an important problem. The method appears to be novel to my knowledge on the topic.
* The method is tested on various challenging tasks and achieves superior performance compared to several baselines, demonstrating its effectiveness.
Weaknesses: ### Major comments:
* The combination of a diffusion model and a VAE to learn the skills from a dataset of trajectories is not well justified. In particular, Equation (5), which is the loss function for learning the skills, informally replaces the reconstruction loss in ELBO of VAE with the noise prediction loss of a diffusion model. It is unclear if this is mathematically and conceptually correct. If this equation is adapted from a paper, it should be cited. Otherwise, more justification for this combination is needed. The authors are encouraged to look into works that combine these two models, such as:
Kingma, Diederik, Tim Salimans, Ben Poole, and Jonathan Ho. "Variational diffusion models." Advances in neural information processing systems 34 (2021): 21696-21707.
* The seeding procedure is unclear and possibly incorrect. In the caption of Table 1, the authors state that the methods are "evaluated" on 3 seeds. This suggests that they trained on a single seed and evaluated on 3 seeds, which is not the correct use of seeding. The proper approach is to train and evaluate for each seed. Additionally, more seeds should be used to ensure robustness. While computation costs are a concern, using at least 5 seeds, and preferably 10, is recommended.
* The contribution of the paper does not appear to be significant. It combines multiple components into a framework. In some cases these combinations are not well-justified either (e.g., VAE+diffusion mentioned above). While I understand the results are promising, I have concerns about the significance of the contributions.
### Minor comments:
* Notation fix: In Equation 1, the action sampling needs to be added to the subscript of the expectation: $a_t \sim \pi_c(\cdot | s_t, g_t)$.
* Given that VAE is one of the main components of the method, its paper should be cited:
Kingma, Diederik P., and Max Welling. "Auto-encoding variational bayes." arXiv preprint arXiv:1312.6114 (2013).
* The cross-reference to Table 3 in the related works section is out of place. Table 3 presents empirical results, and referring to it during the literature review breaks the coherence of the paper and the order of cross-references.
* A bit nit picky, but the abbreviation "LDuS" for "LLM-based skill diffusion" is unclear and should be clarified.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. What is the justification for Equation (5)?
2. Please explain the seeding procedure.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors briefly discuss one limitation of their work in line 320. However, more discussion is needed given the multiple components and heavy reliance on LLMs for tasks such as encoding the goal with CLIP or encoding the context with GPT. Additionally, potentially negative societal impacts are not discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## [Weakness 1 & Question 1] Mathematical Justification for VAE+Diffusion
We thank the reviewer for raising concerns about the mathematical justification of LDuS. We provide the mathematical justification for Equation (5), where the reconstruction loss in the ELBO is replaced with the diffusion loss. Due to length constraints in the rebuttal text, **the complete proof is included in the PDF of the global rebuttal**.
We start from the ELBO formulation for VAEs [1],
$$\mathop{\mathbb{E}}\_{x_T \sim q(x_T|x,z), z \sim q(z|x)} [\log p(x)] \geq \underbrace{\mathop{\mathbb{E}}\_{x_T \sim q(x_T|x,z), z \sim q(z|x)} \left[\log p(x|z)\right]}\_{\text{reconstruction loss}} - \underbrace{D_{\text{KL}}(q(z|x)||p(z))}\_{\text{regularization loss}}$$
Then, the reconstruction error term satisfies the following,
$$\mathop{\mathbb{E}}\_{x_T \sim q(x_T|x,z), z \sim q(z|x)} [\log p(x|z)] \geq \mathop{\mathbb{E}}\_{x_T \sim q(x_T|x,z), z \sim q(z|x)} \left[\log p(x_T)\right] + \underbrace{\mathop{\mathbb{E}}\_{x_T \sim q(x_T|x,z), z \sim q(z|x)} \left[ \log \frac{p(x|x_T,z)}{q(x_T|x,z)}\right]}\_{\text{diffusion loss}}$$
where $T$ is the total denoising timestep. Thus, the reconstruction term can be seamlessly replaced with the diffusion loss. Moreover, on how the diffusion loss term is optimized with the noise reconstruction term, please refer to the original DDPM paper [2]. We will include this theoretical analysis in the final version.
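To make the structure of the resulting objective concrete, the following is a minimal numpy sketch of our own (purely illustrative, not the paper's implementation): the DDPM-style noise-prediction loss stands in for the reconstruction term of the ELBO, and the diagonal-Gaussian KL is the regularizer.

```python
import numpy as np

def kl_diag_gaussian(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ): the ELBO regularization term
    return 0.5 * float(np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar))

def diffusion_recon_loss(eps_true, eps_pred):
    # DDPM-style noise-prediction MSE, standing in for the reconstruction term
    return float(np.mean((eps_true - eps_pred) ** 2))

def combined_loss(eps_true, eps_pred, mu, logvar, beta=1.0):
    # Equation (5)-style objective: diffusion loss + beta * KL regularizer
    return diffusion_recon_loss(eps_true, eps_pred) + beta * kl_diag_gaussian(mu, logvar)
```

Here `beta` is an illustrative weighting knob; the point is only that the two terms are additive, so the reconstruction term can be swapped for the noise-prediction loss without changing the regularizer.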
Moreover, our LDuS framework is structurally different from the approach presented in the "Variational Diffusion Models (VDM)" [3] paper. In VDM, a diffusion model itself functions as a VAE. In contrast, LDuS utilizes an encoder-decoder structure, where an encoder generates skill embeddings from trajectories, and a diffusion-based skill planner produces skill trajectories conditioned on these embeddings. Moreover, using a VAE structure to learn skill embeddings and decode skills is a common practice in skill-based RL [4,5].
[1] Kingma, Diederik P., and Max Welling. "Auto-encoding variational bayes." arXiv 2013
[2] Ho, Jonathan, et al. "Denoising diffusion probabilistic models." NeurIPS 2020
[3] Kingma, Diederik, et al. "Variational diffusion models." NeurIPS 2021
[4] Pertsch, Karl, et al. "Accelerating reinforcement learning with learned skill priors." CoRL 2020
[5] Hakhamaneshi, Kourosh, et al. "Hierarchical few-shot imitation with skill transition models." ICLR 2022
----
## [Weakness 2 & Question 2] Random Seed
We used 3 different seeds to **train** and evaluate our model. To avoid misunderstanding, we will revise the caption in Table 1.
In the table below, we report the performance of LDuS and the baselines on 5 random seeds. As shown, LDuS consistently outperforms the baselines, demonstrating the robustness of our experiments. In the final version, we will conduct experiments using 5 or more random seeds for all cases.
|Method|Without Context (SR)|Language Context (CR)|Language Context (SR)|
|-|-|-|-|
|LCD+Guidance|52.9\%|42.0|49.9\%|
|Diffuser+Guidance|92.2\%|69.8|76.4\%|
|LDuS|97.0\%|87.4|94.6\%|
----
## [Weakness 3] Contribution of LDuS
We believe our work addresses a novel practical problem where the agent adapts to unseen language-specified contexts in a zero-shot manner, even when trained without a context-labeled dataset. Previous works on language-conditioned skill-learning [1,2] primarily focus on learning through direct supervision from language-annotated datasets. However, obtaining datasets annotated with a broad range of contexts is impractical in real-world scenarios due to the open-ended nature of language. Furthermore, in the realm of RL, while diffusion-based policies have been explored primarily for imitating given datasets [3,4], language-guided test-time control for zero-shot adaptation has not been thoroughly investigated. To the best of our knowledge, our LDuS is the first to integrate LLMs with diffusion models for zero-shot adaptation to language-specified contexts in the domain of sequential decision-making.
[1] Garg, Divyansh, et al. "LISA: Learning interpretable skill abstractions from language." NeurIPS 2022
[2] Zhang, Edwin, et al. "Language control diffusion: Efficiently scaling through space, time, and tasks." ICLR 2024
[3] Wang, Zhendong, et al. "Diffusion policies as an expressive policy class for offline reinforcement learning." ICLR 2024
[4] Chen, Chang, et al. "Simple hierarchical planning with diffusion." ICLR 2024
----
## [Weakness 4,5,6] Minor Comments
* Subscript in Equation 1: We will add the subscript for the Equation 1.
* Citation: We will cite the paper "Auto-encoding variational bayes" in Section 3.2.
* Cross-Reference: We will remove this reference to Table3 in the related works, to enhance the coherence of our paper.
----
## [Weakness 7] Abbreviation of LDuS
As the reviewer mentioned, the abbreviation "LDuS" is unclear. Initially, we intended to name our model 'LSDu', based on the order in "LLM-based Skill Diffusion". However, to avoid the association with the acronym 'LSD', which is commonly known for the drug, we rearranged the letters, resulting in the final model name 'LDuS'.
----
## [Limitations] Reliance on LLMs
The LDuS framework relies on LLMs for its performance; however, it maintains robustness through an iterative refinement process, whose effectiveness is detailed in Table 4. For goal description, using RoboCLIP [1], a vision-language model trained on robot manipulation tasks, would improve the representation of goal descriptions. Additionally, the use of LLMs to control the generation process of diffusion models raises concerns about potential negative societal impacts, such as the possibility of malicious prompts being injected and used for harmful purposes. We will discuss these concerns and the potential negative societal impacts in the limitations section.
[1] Sontakke, Sumedh, et al. "Roboclip: One demonstration is enough to learn robot policies." NeurIPS 2023
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their reply. Most of my concerns were addressed and I have increased my score.
---
Rebuttal 2:
Comment: We are glad that your concerns have been addressed. We truly appreciate your valuable feedback, as well as your decision to increase the score.
We fully agree with your perspective that a mathematical justification for the VAE+Diffusion model is required. As mentioned in our rebuttal, the reconstruction loss in the VAE can be seamlessly replaced by the diffusion loss. We will include this justification in the final version of our paper.
We once again deeply thank you for your thoughtful feedback on our paper. | Summary: This paper presents LDuS, a framework that adapts skill diffusion models to unseen contexts in a zero-shot manner. The proposed method first couples VAE with a diffusion planner for hierarchical skill learning. To perform zero-shot policy adaptation to the language-specified context, LDuS translates contexts to loss functions using LLM and leverages loss-guided diffusion for controllable trajectory generation. Furthermore, LDuS adopts LLM to iteratively evaluate and improve the alignment between the generated trajectory and the given context. The method is extensively evaluated in two zero-shot settings of MetaWorld, and achieves superior performance in both success rate and context reward compared to the baselines.
Strengths: - LDuS innovatively integrates the reasoning ability of LLM and loss-guided diffusion for policy adaptation and effectively improves the zero-shot performance given unseen contexts.
- The authors further demonstrate the robustness of LDuS in different context types, and provide more insights on LDuS capabilities through the results of waypoint generation and trajectory coverage.
- The ablation study highlights the importance of several design choices.
- The illustration of motivation and methodology is sound and clear, and the paper is well-written and organized.
Weaknesses: - Tested contexts seem to be mainly limited in speed specifications.
- More seeds should be tested in experiments to draw more convincing and robust conclusions.
- The application of loss guidance may harm the model's capability of completing the goal/task (i.e. success rate).
Technical Quality: 3
Clarity: 3
Questions for Authors: - Would sequential in-painting increase the training cost (e.g. training time) of the learning phase?
- How is the context reward defined specifically for each context?
- In line 228, how is the loss function manually designed for the "Guidance" setting?
- How many times of refinement are usually needed before a generated trajectory is considered contextually aligned? Would that be too time-consuming for long-horizon tasks?
- How sensitive is LDuS to loss guidance strength?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors adequately discussed the limitations of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## [Weakness 1] Scope of Context
As the scope of context is limited in our experiments, we conduct additional experiments on two different types of user requirements including energy and spatial. In the energy context, the agent aims to minimize its energy consumption by reducing its acceleration or deceleration [1]. In the spatial context, the agent is tasked to not cross a specified spatial boundary. In the table below, we conduct tests on a single task in MetaWorld. As shown, LDuS outperforms the baselines in context reward (CR), demonstrating its scalability on diverse context types.
|Method|Energy Context (CR)|Energy Context (SR)|Spatial Context (CR)|Spatial Context (SR)|
|---|---|---|---|---|
|LCD+Guidance|56.3|66.7\%|29.4|33.3\%|
|Diffuser + Guidance|62.3|66.7\%|75.4|100.0\%|
|LDuS|88.8|100.0\%|87.0|100.0\%|
[1] Soori, Mohsen, et al. "Optimization of energy consumption in industrial robots, a review." Cognitive Robotics 2023
----
## [Weakness 2] Random Seed
In the table below, we report additional experiments using 5 different seeds for LDuS and the baselines (Diffuser, LCD). As shown, LDuS consistently outperforms the baselines, demonstrating the robustness of our experiments. Moreover, we use 120 different combinations of task and context to produce the language-context results in our tables. In the final version, we will conduct additional experiments using 5 or more random seeds for all cases.
|Method|Without Context (SR)|Language Context (CR)|Language Context (SR)|
|---|---|---|---|
|LCD+Guidance|52.9\%|42.0|49.9\%|
|Diffuser+Guidance|92.2\%|69.8|76.4\%|
|LDuS|97.0\%|87.4|94.6\%|
----
## [Weakness 3] Loss Guidance
While loss guidance could potentially impair the model's ability to complete tasks, similar gradient-based diffusion control approaches have been extensively studied across various domains such as vision [1,2,3,4], motion generation [5,6], traffic scene generation [7], and reinforcement learning [8,9,10]. These studies consistently demonstrate the robustness of this approach in diverse applications. Moreover, we have shown that our LDuS framework achieves robust performance using loss guidance, further validating its effectiveness.
[1] Ho, Jonathan, et al. "Classifier-free diffusion guidance." NeurIPS 2022
[2] Kwon, Gihyun, et al. "Improving Diffusion-based Image Translation using Asymmetric Gradient Guidance." ICML 2023
[3] Dinh, Anh-Dung, et al. "PixelAsParam: A gradient view on diffusion sampling with guidance." ICML 2023
[4] Liu, Xihui, et al. "More control for free! image synthesis with semantic diffusion guidance." WACV 2023
[5] Song, Jiaming, et al. "Loss-guided diffusion models for plug-and-play controllable generation." ICML 2023
[6] Karunratanakul, Korrawe, et al. "Guided motion diffusion for controllable human motion synthesis." CVPR 2023
[7] Zhong, Ziyuan, et al. "Language-guided traffic simulation via scene-level diffusion." CoRL 2023
[8] Janner, Michael, et al. "Planning with diffusion for flexible behavior synthesis." ICML 2022
[9] Ni, Fei, et al. "Metadiffuser: Diffusion model as conditional planner for offline meta-rl." ICML 2023
[10] Liang, Zhixuan, et al. "Adaptdiffuser: Diffusion models as adaptive self-evolving planners." ICML 2023
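As a concrete illustration of the mechanism these works share, one loss-guided denoising step can be sketched as follows (a toy numpy version of our own, using a finite-difference gradient; real implementations differentiate through the denoiser with autograd):

```python
import numpy as np

def numerical_grad(loss_fn, x, eps=1e-5):
    # finite-difference gradient of a scalar loss; stands in for autograd
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d.flat[i] = eps
        g.flat[i] = (loss_fn(x + d) - loss_fn(x - d)) / (2 * eps)
    return g

def guided_denoise_step(x, denoise_fn, loss_fn, guidance_weight):
    # one reverse-diffusion step, then a gradient step on the context loss
    x = denoise_fn(x)
    return x - guidance_weight * numerical_grad(loss_fn, x)
```

For example, with a quadratic context loss pulling the trajectory toward a target speed, a single guided step strictly decreases that loss while leaving the denoiser itself untouched.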
----
## [Question 1] Training Cost
Since sequential in-painting requires optimization over $m \sim [1,h]$, the training time of LDuS increases during the learning phase. However, since our objective is to enable zero-shot policy adaptation, training-time efficiency is not our primary concern. Nonetheless, we ensured that all baselines received a sufficient amount of training time to achieve convergence.
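As a rough illustration (our own numpy sketch, not the paper's code), sampling $m$ and fixing the first $m$ steps of an $h$-step trajectory during denoising might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def inpaint_mask(h, m):
    # fix (in-paint) the first m of h trajectory steps; the rest are denoised
    mask = np.zeros(h, dtype=bool)
    mask[:m] = True
    return mask

def apply_inpainting(x_noisy, x_known, mask):
    # keep the already-fixed steps, let the denoiser fill in the remainder
    return np.where(mask[:, None], x_known, x_noisy)

# during training, m would be resampled for each update, e.g.:
h = 8
m = int(rng.integers(1, h + 1))  # m ~ [1, h]
mask = inpaint_mask(h, m)
```

The extra cost comes from optimizing over all these mask choices rather than a single fixed one.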
----
## [Question 2] Definition of Context Reward
For **precise contexts**, where specific user requirements such as speed are quantified numerically, we calculate the difference between the specified and actual values. For **abstract contexts**, where user requirements are expressed ambiguously (without explicit numerical values), we assess the context reward by comparing the actual speed with the average speed in the dataset. For **multi-modal contexts**, where the agent should follow the user requirement only while it is in contact with the manipulated object, we compute the context reward as the difference between the specified and actual values while contact is made. Additionally, since the agent should not apply loss guidance without making contact, we impose penalties when the agent attempts to apply loss guidance while not in contact with the object.
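The reward shapes described above might be sketched as follows; note that the scaling, tolerance, and function names are our own illustrative assumptions, not the values or code used in the paper:

```python
def precise_context_reward(actual_speed, target_speed, tolerance=0.1):
    # reward in [0, 100]: full score when actual matches target, linearly
    # decaying to 0 once the gap exceeds `tolerance` (shaping is assumed)
    gap = abs(actual_speed - target_speed)
    return max(0.0, 100.0 * (1.0 - gap / tolerance))

def abstract_context_reward(actual_speed, dataset_mean_speed, faster=True):
    # "move faster/slower" style contexts: reward deviation from the
    # dataset-average speed in the requested direction (again, assumed shaping)
    delta = actual_speed - dataset_mean_speed
    return max(0.0, delta if faster else -delta)
```

A multi-modal variant would apply `precise_context_reward` only on contact timesteps and subtract a penalty otherwise, following the description above.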
----
## [Question 3] Loss Function Design for "Guidance"
The loss functions are manually designed by the domain experts for each context in the "Guidance" setting. To ensure the optimality of the loss functions, we conducted multiple tests and revisions to effectively reflect the given contexts.
----
## [Question 4] Number of Iterative Refinement
The refinement process requires between 0 and 4 iterations, depending on the context and the task. Moreover, our iterative refinements are executed for every skill horizon. For long-horizon tasks, we set the skill horizon at 16 steps, compared to 8 steps for short-horizon tasks. Consequently, the time required for iterative refinement does not increase significantly for long-horizon tasks.
----
## [Question 5] Sensitivity to Guidance Strength
LDuS is not very sensitive to the loss guidance strength, as it employs iterative refinement to control the frequency of guidance application. A small guidance weight is preferable because it allows for more detailed control. However, a small guidance weight may necessitate more iterative refinement steps to meet the specified context, presenting a tradeoff between detailed control and increased inference time.
---
Rebuttal 2:
Title: Rebuttal Acknowledgement
Comment: I want to thank the authors for their efforts in adding experiments and detailed responses to my questions. I suggest that the authors also include the standard deviation in the final paper for better comparison. Most of my concerns have been addressed, I will maintain my positive rating towards the paper.
---
Rebuttal Comment 2.1:
Comment: We deeply appreciate your positive rating of our paper and are glad that your concerns have been addressed. Due to the text limit on the rebuttal, we did not include the standard deviation; however, it will be included in our final paper.
Regarding the scope of the context, we will include experiments on both 'spatial' and 'energy' contexts in our final paper. Additionally, we will address the questions raised by the reviewer, such as the definition of the context reward, the number of iterative refinements required, and the sensitivity to guidance strength, in our final paper.
We thank the reviewer once again for the thoughtful response. | Summary: LDuS is a diffusion-based approach for offline skill learning that adopts several advances to improve performance in goal-driven settings. The main contribution is the adoption of LLM-based guidance, allowing to comply with contextual information/conditions, while achieving the given goal.
Strengths: The work presents advances that improve skill diffusion and allow to comply with additional contextual information. The strengths of the work are:
* **Novelty**: the work presents two mechanisms, sequential in-painting and LLM-guided skill diffusion, which are somehow incremental but contribute in a novel way to the field. Given that these topics are of particular interest, I consider the novelty introduced significant.
* **Presentation**: the work is clearly presented, well-structured and with high-quality figures and tables
* **Evaluation**: the evaluation of the work is extensive, in terms of tasks, including various context types, in terms of baselines, where I found the "+guidance" baselines particularly useful, and in terms of ablation studies performed.
Weaknesses: The work presents some weaknesses, other than the limitations presented by the authors:
* **Problem**: the main problem addressed by this work (how to "stylize behaviours" given context information) is interesting but it's somehow narrow. Nonetheless, the method seems to perform better also in settings where context is not provided
* **Prompting**: as many works employing LLMs, the way the LLM is prompted seems to be crucial for correctly guiding the diffusion process. This means that changing the LLM, a new specific prompt structure would be required
Technical Quality: 3
Clarity: 3
Questions for Authors: * what is the current inference time and how does it compare with the other approaches?
* line 461 multimodal, should be multi-modal to align with the rest of the text
* typo "Planing", in Appendix
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are described in the paper, though the authors may consider adding some of the Weaknesses described above (e.g. prompting).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## [Weakness 1] Problem Definition
We believe our work addresses a novel practical problem where the agent adapts to unseen language-specified contexts in a zero-shot manner, even when trained without a context-labeled dataset. Previous works on language-conditioned skill learning [1,2,3] primarily focus on learning through direct supervision from language-annotated datasets. However, obtaining datasets annotated with a broad range of contexts is impractical in real-world scenarios due to the open-ended nature of language. Furthermore, in the realm of RL, while diffusion-based policies have been explored primarily for imitating given datasets [4,5], language-guided test-time control for zero-shot adaptation has not been thoroughly investigated. To the best of our knowledge, our LDuS is the first to integrate LLMs with diffusion models for zero-shot adaptation to language-specified contexts in the domain of sequential decision-making.
[1] Garg, Divyansh, et al. "LISA: Learning interpretable skill abstractions from language." NeurIPS 2022
[2] Zhang, Edwin, et al. "Language control diffusion: Efficiently scaling through space, time, and tasks." ICLR 2024
[3] Chen, Lili, et al. "Playfusion: Skill acquisition via diffusion from language-annotated play." CoRL 2023
[4] Wang, Zhendong, et al. "Diffusion policies as an expressive policy class for offline reinforcement learning." ICLR 2024
[5] Chen, Chang, et al. "Simple hierarchical planning with diffusion." ICLR 2024
----
## [Weakness 2] LLM Prompting
In order to obtain better responses from LLMs, it is widely known that the prompt should be optimized for each LLM. However, our primary focus is on addressing the challenge of zero-shot skill adaptation to language contexts, not on developing prompting methods for LLMs. Therefore, there remains room for improving our prompting approach.
----
## [Question 1] Inference Time
In the table below, we present the average inference time (in milliseconds) required per timestep for LDuS and the baselines in MetaWorld. For comparison, we also report LDuS-Guidance, which is LDuS without the loss guidance. As shown, LDuS exhibits inference times similar to the baselines when only considering the diffusion sampling time. However, LDuS requires additional time for LLM inference. The LLM inference time can be further reduced by distilling essential knowledge, such as code generation and verification capabilities, into a smaller language model.
|Model|Diffuser|LCD|LDuS-Guidance|Diffuser+Guidance|LCD+Guidance|LDuS|
|---|---|---|---|---|---|---|
|Inference Time|55ms|56ms|56ms|102ms|100ms|108ms (Diffusion Sampling) + 55ms (LLM)|
----
## [Question 2 & 3] Typo
We thank the reviewer for identifying the typos. We will correct these typos in our final version.
---
Rebuttal Comment 1.1:
Comment: I am satisfied with the author's rebuttal.
I would recommend including the inference time comparison in the paper, as this shows one of the current limitations of the approach.
I will keep my score and recommend acceptance of the work.
---
Rebuttal 2:
Comment: We are very encouraged by your satisfaction with our rebuttal. As addressed in the response to question 1, we will include the inference time comparison in our paper. We deeply appreciate your thoughtful feedback, as well as your decision to recommend acceptance of our work. | Summary: This paper presents an LLM-based policy adaptation framework for a language-specified context. Here, a context is a slight variation in how the task is performed. The method has two stages: in the skill-learning phase, a skill-based diffusion policy is learned with an in-painting technique, where a skill is a VAE encoding of a sequence of states and actions. In the adaptation phase, the generation process of the diffusion policy is guided with a loss-guided diffusion (LGD) mechanism, where the loss is generated by an LLM conditioned on the context. Experimental results generally show that their skill-based policy, together with LLM-generated LGD, is better at context following than other baselines.
Strengths: This paper does a good job of combining LGD's usefulness for controlled generation with the capability of LLMs to generate context-following loss functions for robotic manipulation problems. The experiments/ablations are exhaustive, and the comparison with Diffuser validates most key design choices (skill-conditioning, iterative refining, in-painting, etc.). The paper is clear and coherent. Overall, it's a good proof of concept of how knowledge in LLMs can be distilled at test time in a zero-shot manner for controlled generation.
Weaknesses: 1. The scope of "context" is limited to the agent's speed in the experiments. That is, the only type of adaptation/user requirement studied in the paper is speed variation. Generating a loss function for this is an easy task for an LLM, given that we have already seen objective generation on much harder tasks in works like [Eureka] and [Lang-to-Rew]. Perhaps experimenting with other types of context would have offered better insight into the scalability of such approaches.
2. The method has only been shown to work with low-dimensional states and not with images, which raises questions about its real-world applicability. Although for the given contexts the proposed framework might work with image-action data (as actions are sufficient to define the trajectory for loss generation by the LLM), proof of that is missing. Perhaps an ablation where no low-dimensional state is used anywhere except in the skill-encoder and skill-prior, with only actions in the trajectory, could have been performed to study the reliance of performance on state information in both stages.
[Eureka]: https://arxiv.org/abs/2310.12931
[Lang-to-Rew]: https://arxiv.org/abs/2306.08647
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. Line 197-198: How's error in the loss function identified if true loss function is not known? Line 638 mentions "if LDuS detects errors in the loss function", how is this detected? If this step is not robust then how will it be sure if mismatch is due to loss function or due to guidance weight?
2. Line 207: If speed variation is already captured in the training data, then how is the evaluation ensured to be zero-shot? Details about the range of speed values during data collection and evaluation should be mentioned. The t-SNE figure does show OOD generation but does not imply that all evaluation instances were OOD.
3. LDuS's no-iterative-refinement version outperforms Diffuser, suggesting skill-based diffusion contributes a lot to the performance gain. In this regard, [Skill-diffuser] is a more relevant baseline to compare, as it also conditions the action diffuser on a latent skill computed using language and image observations. It would have been a fairer comparison in the skill-based category than LCD, as LCD is diffusion in latent skill space with an RNN action head. Do you agree with this? If not, why? If yes, why wasn't this baseline compared?
[Skill-diffuser]: https://arxiv.org/abs/2312.11598
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 2
Limitations: Image-based policy experiments are important to ensure the real-world applicability of robotic methods, and their absence should have been addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## [Weakness 1] Scope of Context
We thank the reviewer for the thoughtful and constructive feedback. As the scope of context is limited in our experiments, we conduct additional experiments on two different types of user requirements including energy and spatial. In the energy context, the agent aims to minimize its energy consumption by reducing its acceleration or deceleration [1]. In the spatial context, the agent is tasked to not cross a specified spatial boundary. In the table below, we conduct tests on a single task in MetaWorld. As shown, LDuS outperforms the baselines in context reward (CR), demonstrating its scalability on diverse context types.
|Method|Energy Context (CR)|Energy Context (SR)|Spatial Context (CR)|Spatial Context (SR)|
|---|---|---|---|---|
|LCD+Guidance|56.3|66.7\%|29.4|33.3\%|
|Diffuser + Guidance|62.3|66.7\%|75.4|100.0\%|
|LDuS|88.8|100.0\%|87.0|100.0\%|
Moreover, for complex tasks such as dexterous control, interactions with the environment [2] or API functions [3] are necessary, because LLMs are not inherently grounded in these complex environments, making it challenging for them to generate loss functions directly. Therefore, we believe that incorporating such approaches (Eureka and Lang-to-Rew) will enable LDuS to accommodate more complex tasks.
[1] Soori, Mohsen, et al. "Optimization of energy consumption in industrial robots, a review." Cognitive Robotics 2023
[2] Ma, Yecheng Jason, et al. "Eureka: Human-level reward design via coding large language models." ICLR 2024
[3] Yu, Wenhao, et al. "Language to rewards for robotic skill synthesis." CoRL 2023
----
## [Weakness 2 & Limitation] Image-based Experiment
Our LDuS framework can be seamlessly extended to accommodate image inputs, and we conduct an additional experiment in a single task in MetaWorld. To handle image inputs, we first train a ResNet-based autoencoder to obtain the image encoder. Subsequently, we use the image embeddings produced by the encoder as the state for training LDuS, without using any low-dimensional states.
In the table below, LDuS+Img denotes the image-based version of LDuS. As shown, LDuS+Img exhibits only a slight performance drop compared to LDuS, demonstrating its applicability to image-based experimental setups. Additionally, there is certainly room for improvement in LDuS+Img, as we did not have enough time to optimize the hyperparameters or test larger network sizes. We will include these experiments in the final version of our paper.
|Method|Without Context (SR)|Language Context (CR)|Language Context (SR)|
|---|---|---|---|
|LDuS|100.0\%|92.7|100.0\%|
|LDuS+Img|83.3\%|87.5|83.3\%|
----
## [Question 1] Error Detection in Loss Function
For errors due to the loss function, we employ the self-critic mechanism of LLMs. This involves prompting the LLM again with the generated code to verify its alignment with the given context; this verification occurs only at the initial step. For errors due to the guidance weight, the LLM evaluates whether the guided trajectory requires further modification and, if so, increases the frequency of loss guidance application. This verification is executed at every skill horizon. As mentioned in the response to the first weakness, for complex tasks, grounding the LLM in the environment is necessary to ensure robust error detection in the loss function.
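The verification loop described above could be sketched as follows. This is a schematic illustration, not the authors' implementation: `llm` is a placeholder callable (prompt in, text out), and the function name, prompts, and retry structure are assumptions. The per-skill-horizon guidance-weight check is not shown.

```python
def generate_verified_loss(context, llm, max_retries=3):
    """Self-critic sketch: generate loss code, then ask the LLM to verify it.

    `llm` is a hypothetical prompt -> text callable. Verification of the
    generated code happens only at the initial step, as described above.
    """
    code = llm(f"Write a loss function for this context: {context}")
    for _ in range(max_retries):
        verdict = llm(f"Does this code match the context '{context}'? "
                      f"Answer yes/no.\n{code}")
        if verdict.strip().lower().startswith("yes"):
            return code
        # critic rejected the code: ask for a revised version
        code = llm(f"Fix this loss function for context '{context}':\n{code}")
    return code
```

A real system would replace `llm` with an actual model call and ground the prompts in the environment, as noted above.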
----
## [Question 2] Experiment Settings for Zero-shot Adaptation
We set the target contexts for evaluation to those not present in the training dataset. For example, in the button press task, the training dataset comprises speeds of $[0.22, 0.27, 0.31, 0.34, 0.37]$, and we test speeds of $[0.32, 0.33, 0.38, 0.40]$ for zero-shot adaptation. As we set different speed configurations for each task, we will include the details of our experiment settings in our final version.
----
## [Question 3] Skill-diffuser Baseline
Given the similarities in the overall skill-learning structure between Skill-diffuser and LCD, we deemed it sufficient to compare against only one of them as a baseline. Instead, we chose to compare with Diffuser and LangDT, which employ different structural approaches (a diffusion-based planner and a decision transformer, respectively), to highlight the differences between these design choices. Moreover, as Skill-diffuser also focuses on learning skills via direct supervision from the datasets, we conjecture that it may not be capable of zero-shot adaptation to contexts. We will include Skill-diffuser as a baseline in our final version.
---
Rebuttal Comment 1.1:
Comment: I thank authors for running additional experiments, especially the spatial context ones, it definitely builds more confidence into the method. I believe the proposed work is a promising direction towards incorporating various user-contexts and safety constraints. All my concerns have been addressed and I have updated my score.
---
Reply to Comment 1.1.1:
Comment: We are very pleased that your concerns have been addressed and appreciate your decision to increase the score. We are also greatly encouraged by your recognition of our work as promising.
For the final version of our paper, we will include experiments with the 'spatial' and 'energy' contexts, as well as the image-based version of LDuS. Additionally, we will include details of our zero-shot evaluation settings.
Once again, we truly appreciate your thoughtful and insightful feedback. | Rebuttal 1:
Rebuttal: We deeply appreciate the valuable and constructive feedback from the reviewers. We addressed all identified weaknesses and questions to resolve the concerns raised. For those that require additional experiments, we have conducted further studies to provide a more comprehensive understanding. Lastly, we provide mathematical justification for Equation (5) in the attached PDF.
Pdf: /pdf/4d7a0452ee2b4033f3e7d78fabe40d96d9dadff6.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Distribution-Aware Data Expansion with Diffusion Models | Accept (poster) | Summary: This paper proposed a training-free data augmentation method based on the diffusion models. To alleviate the poison phenomenon of the diffusion model, which is that the distribution of generated images will deviate from the natural distribution, this paper proposed a simple yet effective method based on the prototypes to introduce additional constraints to alleviate the deviation. The experimental results show that the proposed method achieves the SOTA performance compared to the baselines in this paper.
Strengths: 1. Leveraging the prototypes as the constraint to guide the generation process is novel. Directly using the diffusion model to enrich the dataset will cause the poison phenomenon that will harm the classifier's performance since the distribution of generated images deviates from the natural image distribution. In this case, this paper leverages the cluster-based method, i.e., the prototype technology is sound and interesting.
2. This paper is easy to follow.
3. The proposed method achieves the SOTA performance compared with various baselines.
Weaknesses: 1. The baselines lack the latest works, such as Brandon et al.[1] and Khawar et al [2]. For Brandon et al., although this method needs to fine-tune the pre-trained diffusion models, the time cost for fine-tuning them is very low and is close to the time cost reported in this paper. Under the closed computation cost, it should be considered the baseline.
[1] Effective Data Augmentation With Diffusion Models. Brandon et al.. ICLR 2024.
[2] DIFFUSEMIX: Label-Preserving Data Augmentation with Diffusion Models. Khawar et al. Arxiv:2405.14881.
2. The datasets used in this paper may not be enough. Since this paper does not only focus on small datasets, ImageNet should be considered the dataset to test the performance of the proposed method, which is similar to DIFFUSEMIX [2]. ImageNette is insufficient to replace ImageNet since it only contains 10 classes.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. How to keep such a low time cost for the proposed method? The motivation for this question is that the author claims: "Stable Diffusion generates per sample in 12.65 seconds on average, while our DistDiff achieves the same in 13.13 seconds." The weakness of the guidance method (i.e., the energy function guidance used in this paper) is that it will dramatically increase the time cost, especially for a stable diffusion model. The reason is that the guidance method needs to calculate the gradient of the diffusion model. Concretely, please see Eq. 6. Eq. 6 needs first to calculate $\nabla_{e}D_{\theta}^{c}(z_{0|t},p_{c})$. $\nabla_{e}D_{\theta}^{c}(z_{0|t},p_{c}) = \frac{\partial D_{\theta}^{c}(z_{0|t},p_{c})}{\partial z_{0|t}}\frac{\partial z_{0|t}}{\partial e}$. By Eq. 5 and Algorithm 1, $\frac{\partial z_{0|t}}{\partial e} = \frac{\partial z_{0|t}}{\partial z_{t}}\frac{\partial z_{t}}{\partial e}$. In this condition, $\frac{\partial z_{0|t}}{\partial z_{t}}$ needs to calculate $\frac{\partial \psi}{\partial z_{t}}$ and $\psi$ is the pre-trained diffusion model. Therefore, the overall process needs to calculate the gradient of the diffusion model at least 50 times, while the time cost only increases by less than 1 second, which is hard to understand.
2. Could the author offer an ablation study for K=1, i.e., the single-group situation? The motivation for this question is that Table 6 shows that increasing K has limited influence on performance, which is confusing. In theory, following the storyline of this paper, more groups should increase the diversity of the generated images, which should have a positive influence. Based on Table 6, it seems that the group strategy is redundant.
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: No additional limitations, including societal impact, should be discussed. All my concerns are listed in Weaknesses and Questions. If the author could clarify these concerns, I'm willing to increase my score.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the time and effort you have dedicated to reviewing our paper. We value your feedback on our method, noting that the **prototype technology is sound and interesting** and that it is **easy to follow.** We also thank you for recognizing that our method **achieves SOTA performance** compared to various baselines.
**Re Weakness #1:**
We apologize for missing this comparison. We have now included the baseline results in _Table 1_ and will update the revised paper accordingly. Our method outperforms both baselines. DA-Fusion modifies images while respecting semantic attributes but lacks fine-grained control and incurs additional time costs (around 14 minutes per class on a 3090 GPU). Our training-free method uses hierarchical guidance for better alignment with the real distribution. DiffuseMix, which uses bespoke prompts and fractal images for augmentation, treats all datasets equally and may not handle distribution shifts well. Our method shows superior performance compared to these approaches.
| Method | Accuracy |
|-------------|----------|
| DA-Fusion | 79.14 |
| DiffuseMix | 82.33 |
| Ours | 83.09 |
*Table 1:* Comparison of accuracy in expanding Caltech-101 by $5\times$.
**Re Weakness #2:**
We follow previous work, such as GIF-SD and DA-Fusion, in conducting experiments on small datasets, as data augmentation strategies are often necessary in data-scarcity scenarios. Taking your advice into account, we have included results on ImageNet with $0.2\times$ expanding ratios, resulting in approximately 256K generated samples. As shown in Table 2, our method demonstrates improvements on large-scale datasets.
| Method | Accuracy |
|----------|----------|
| Original | 69.30 |
| Ours | 69.95 |
*Table 2:* Comparison of accuracy in expanding ImageNet by $0.2\times$. We trained ResNet18 with a $224 × 224$ resolution for 90 epochs.
**Re Question #1:**
You might have some misunderstanding regarding our method's principles. The energy function guidance used in our paper does not dramatically increase the time cost. As illustrated in Lines 310-315 (_More Optimization Steps_) of the manuscript, our method only introduces two additional optimization steps, which means calculating the gradient **2 times rather than 50 times**. Therefore, our method **does not dramatically increase the time cost** as you suggested.
**Re Question #2:**
When $K=1$, the group-level prototype becomes the class-level prototype, leading to results similar to those with class-level prototypes. In Table 6 of our manuscript, performance changes are not apparent, as the differences between $p_c$ and $p_g$ are minimal in the small class variance dataset Caltech-101. Our method shows more benefits with fine-grained datasets with greater class variance, as illustrated in _Table 3_ with StanfordCars. Additionally, having more groups is not always beneficial; an excessive number of groups may cause prototypes to fit noise points or outliers, which could degrade performance.
| Method | Accuracy |
|------------|----------|
| Stable Diffusion Baseline | 88.45 |
| $p_c$ | 89.55 |
| $p_c + p_g$ $(K=1) $ | 89.69 |
| $p_c + p_g$ $(K=2) $ | 90.36 |
| $p_c + p_g$ $(K=3) $ | 90.69 |
| $p_c + p_g$ $(K=4) $ | 90.62 |
*Table 3:* Prototypes comparison of accuracy in expanding StanfordCars by $2\times$. We trained ResNet50 with a 448 × 448 resolution for 128 epochs.
---
Rebuttal Comment 1.1:
Title: Response to Author
Comment: Thanks for the author's rebuttal. I have carefully read all the contents. The new experimental results show the improvement of the author's proposed method. Although the improvement on ImageNet may be modest, I think it is due to the small expansion ratio, and the time limitation of the rebuttal period does not allow trying a (5x) expansion. Most of my concerns have been addressed. Thus, I will consider increasing my score to borderline accept. Additionally, regarding the group count ($K$): based on the rebuttal, it seems there is a trade-off, since $K$ can be neither too small nor too large. Meanwhile, $K$ will be influenced by the dataset. A natural question is raised: is there any way to choose $K$ adaptively?
---
Rebuttal 2:
Title: Response to Reviewer EsYJ
Comment: Thank you for recognizing that **"most of the concerns have been addressed"** and for **considering an increase in the score**. We truly appreciate your positive feedback!
1. Regarding the ImageNet experiment, we further applied the Stable Diffusion (SD) baseline to expand ImageNet by $0.2\times$ and conducted experiments. The baseline method achieved $69.66\%$ accuracy. Our method surpasses the SD baseline by $0.29\%$ and the accuracy of training on the original dataset by $0.65\%$. This confirms that our method has an advantage in data expansion compared to the original SD method. In addition, training accuracy typically increases with a larger expansion ratio, and the performance gap between our method and existing methods tends to grow. This phenomenon has been validated across multiple datasets, as shown in Figure 4 of the manuscript. We are adding more experiments with larger ImageNet expansion ratios and will include these in the final version.
2. Thank you also for your suggestion to choose $K$ adaptively. We have explored this idea using classical adaptive clustering strategies. However, this introduces another parameter to tune, such as the neighborhood radius or cluster distance, which can be more challenging to adjust than $K$. Additionally, this poses challenges for parallel computation since $K$ may vary within each batch. Nevertheless, this is a valuable idea, and we will consider your suggestion for further exploration and optimization of adaptive $K$ selection methods.
Thank you once again for your valuable feedback. If you have any further questions regarding our rebuttal, we would be happy to provide additional clarification.
---
Rebuttal Comment 2.1:
Title: Response to Author
Comment: Thanks for the author's further clarification regarding $K$ and my concerns about the modest improvement on ImageNet. Considering the novelty of this paper, I think the strengths outweigh the weaknesses ($K$ should be chosen based on the dataset case by case), since there are enough ablation studies on $K$ to support the reproducibility of this paper. Therefore, I increased my score to borderline accept.
---
Reply to Comment 2.1.1:
Title: Thank you for your prompt response and feedback.
Comment: Thank you for your recognition and for raising the score. We really appreciate your positive feedback! We will consider your suggestion for further exploration of adaptive $K$ selection methods. | Summary: The authors present DistDiff, a training-free data expansion framework based on a distribution-aware diffusion model. DistDiff constructs hierarchical prototypes to approximate the real data distribution, optimizing latent data points within diffusion models through hierarchical energy guidance. The framework demonstrates its capability to generate distribution-consistent samples, significantly improving data expansion tasks. DistDiff consistently enhances accuracy across a diverse range of datasets compared to models trained solely on original data.
Strengths: 1. The authors claim high efficiency for DistDiff.
2. The framework effectively generates samples that align with the original distribution, markedly enhancing data augmentation tasks.
3. DistDiff consistently improves accuracy across a wide variety of datasets when compared to models trained only on the original data.
Weaknesses: 1. The manuscript introduces the concept of hierarchical prototypes but falls short in sufficiently explaining how these prototypes are selected or generated from the dataset. A more detailed description or examples of the prototype generation process would greatly enhance the reader’s understanding and bolster the credibility of the proposed method.
2. The authors claim high efficiency for DistDiff but do not provide supporting data. It is recommended to add a comparison table detailing DistDiff's computational time, resource usage, and scalability against other methods, to substantiate its efficiency claims.
3. The paper claims that " these two scores reinforce each other and are indispensable mutually," in Table 4, the observed impact appears minimal without P_g. Additionally, the k value for P_g seems to have little effect on the results. Further explanation and clarification are needed to substantiate these claims.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. In Figure 6, it is unclear why the optimal number of prototypes is set at K=3, as the difference is not readily apparent from the figure and the quantitative metrics differ by only 0.01. Could this be further elaborated to justify the selection of K=3 as the optimal prototype count?
2. It is suggested that the captions be further elaborated to explain the entire pipeline process in detail. Currently, the process terms used in the caption do not correspond directly to those labeled in the figure, leading to potential confusion. A clearer alignment between the text and graphical elements would improve comprehension and the overall effectiveness of the figure.
3. It is recommended to revise Figure 1. While it depicts the complex process of DistDiff, it does not provide clear benefits. Including performance comparisons or showcasing significant differences in generated results would enhance the figure's utility and informative value.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: As stated above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the time and effort you invested in reviewing our paper.
We address the raised concerns as follows:
**Re Weakness #1:**
Thank you for your valuable feedback. We appreciate your suggestion to provide a clearer explanation of the hierarchical prototypes.
We recognize that the manuscript would benefit from a more detailed description of the prototype generation process. In Lines 135-136 and Lines 139-141, we describe how prototypes are selected or generated from the dataset. Specifically, our method first extracts feature vectors using a pre-trained image feature extractor. Class-level prototypes are then obtained by averaging feature vectors within each class. To refine these prototypes further, we apply the agglomerative hierarchical clustering algorithm [1] to group samples from the same class into $K$ clusters. The group prototypes are computed by averaging feature vectors within each cluster.
In the final version of the manuscript, we will include a more comprehensive explanation of the prototype generation process and provide illustrative examples to clarify this methodology further.
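For concreteness, the prototype construction described above can be sketched in a few lines of numpy. This is an illustration under assumptions, not the authors' implementation: a simple centroid-distance merge stands in for the agglomerative hierarchical clustering algorithm they cite, the image feature extractor is assumed to have already produced the feature vectors, and `hierarchical_prototypes` is a hypothetical helper name.

```python
import numpy as np

def hierarchical_prototypes(features, K):
    """Build class-level and group-level prototypes for one class.

    features: (N, D) array of feature vectors for a single class.
    Returns (class_prototype, group_prototypes) with shapes (D,) and (K, D).
    """
    # class-level prototype: mean of all feature vectors in the class
    p_class = features.mean(axis=0)

    # agglomerative merge (centroid distance) until K clusters remain
    clusters = [[i] for i in range(len(features))]
    while len(clusters) > K:
        best, best_d = (0, 1), np.inf
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                ca = features[clusters[a]].mean(axis=0)
                cb = features[clusters[b]].mean(axis=0)
                d = np.linalg.norm(ca - cb)
                if d < best_d:
                    best_d, best = d, (a, b)
        a, b = best
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]

    # group-level prototypes: mean feature vector within each cluster
    p_groups = np.stack([features[idx].mean(axis=0) for idx in clusters])
    return p_class, p_groups
```

In practice one would use a library clustering routine rather than this O(N^3) loop; the sketch only makes the two prototype levels explicit.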
**Re Weakness #2:**
The high efficiency of DistDiff stems from its direct guidance on the pre-trained model without the need to retrain the diffusion models. We provide computational time in Lines 334-339 and resource usage in Line 617. Compared to the stable diffusion baseline model, we only introduce an extra 0.48 seconds of inference time per sample, which is only 4% of the original inference time.
We included an efficiency analysis in _Table 1_. Our method incurs minimal additional time costs and generates samples directly, unlike LECF, which needs extra post-processing to filter low-confidence samples. This makes LECF slower in generating the same number of samples. We are incorporating this cost analysis into the paper.
| Method | Inference Time (s) |
|----------|-----------------------|
| SD | 12.65 |
| LECF | 12.73 (17.20 \*) |
| GIF-SD | 13.47 |
| Ours | 13.13 |
*Table 1:* Inference efficiency comparison with existing methods on the Caltech-101 dataset. \* denotes the actual time required by LECF to derive one sample after the filtering post-processing. Evaluations are conducted on a single GeForce RTX 3090 GPU.
**Re Weakness #3 and Question #1:**
As shown in Table 4, both scores significantly improve performance, and further improvements can be achieved by combining them. However, we did not provide a clear analysis of their relationship, and the claim that these two scores are "indispensable mutually" lacks sufficient evidence and may not be appropriate. This is primarily due to the small class variance in the Caltech-101 dataset, where the differences between $p_c$ and $p_g$ are minimal. In contrast, when we applied hierarchical prototypes to a dataset with larger class variance, such as StanfordCars, the differences between hierarchical prototypes were more pronounced. This led to a more significant combined improvement, as shown in _Table 2_.
Additionally, we selected $K=3$ as the optimal prototype count since it shows the best quantitative performance, although changes with the $K$ value are not apparent in Table 6 of the manuscript. The influence of $K$ is more pronounced for the fine-grained StanfordCars dataset, highlighting the importance of $p_g$, as shown in _Table 2_.
| Method | Accuracy |
|------------|----------|
| Stable Diffusion Baseline | 88.45 |
| $p_c$ | 89.55 |
| $p_c + p_g$ $(K=1) $ | 89.69 |
| $p_c + p_g$ $(K=2) $ | 90.36 |
| $p_c + p_g$ $(K=3) $ | 90.69 |
| $p_c + p_g$ $(K=4) $ | 90.62 |
*Table 2:* Prototypes comparison of accuracy in expanding StanfordCars by $2\times$. We trained ResNet50 with a 448 × 448 resolution for 128 epochs.
**Re Question #2:**
Thank you for the valuable suggestion. We will revise the captions to provide a more detailed explanation of the entire pipeline process and ensure that the terms used in the captions align clearly with those labeled in the figure. This will help improve comprehension and the overall effectiveness of the figure.
**Re Question #3:**
We appreciate your insightful recommendation. We will revise Figure 1 to include performance comparisons and highlight significant differences in the generated results. This will enhance the figure's utility and informative value, making the complex process of DistDiff clearer and more impactful.
[1] Hierarchical clustering. Introduction to HPC with MPI for Data Science 2016.
---
Rebuttal 2:
Title: Kindly Request for Your Feedback!
Comment: Dear **Reviewer 4Bhc**,
We appreciate your thoughtful evaluation and the opportunity to clarify and expand upon key aspects of our work. Based on the detailed responses and additional data provided, which directly address the concerns raised:
1. We have provided a detailed explanation of the generation process for our hierarchical prototypes.
2. We have included a comprehensive analysis of time complexity compared to existing methods, substantiating our claims of high efficiency.
3. We have supported the effectiveness of our hierarchical prototypes with additional ablation results on downstream tasks, confirming the efficacy of these prototypes.
We hope that our responses have clarified and addressed your questions satisfactorily. We will carefully revise the manuscript in accordance with the suggestions from all reviewers. If our explanations have resolved your concerns, we would be grateful if you could reconsider your rating. We are eager to make any further improvements necessary to meet the conference's standards.
Given the comprehensive nature of our response, we kindly request that you review our rebuttal and provide your feedback.
If you have any further questions regarding our responses, we would be happy to provide additional clarification.
Thank you for your time and consideration.
---
Rebuttal Comment 2.1:
Title: Hope to hear your response before discussion phase end
Comment: Dear **Reviewer 4Bhc**,
Thank you for your careful review of our paper. With approximately 20 hours remaining in the discussion phase, we sincerely hope our rebuttal has addressed your concerns. If our responses have clarified the issues you raised, we kindly request that you consider raising your score.
We greatly value your feedback and have tried to provide thorough responses to each point. If you have any unresolved questions or need further clarification, please don't hesitate to let us know. We will do our utmost to provide additional information within the remaining time.
Once again, thank you for your valuable time and expert opinion. Your feedback is crucial in improving the quality of our research.
Best regards | Summary: This paper focuses on data augmentation or expansion by generating synthetic data from pre-trained large-scale diffusion models. To ground the samples from these large-scale diffusion models, the paper proposes an energy-based guidance approach where the energy function depends on hierarchical prototypes. In the paper, hierarchical prototypes are essentially feature vectors that define the object classes. The hierarchical prototypes are obtained as follows: first, the features for each class are aggregated to obtain a class-level representation; further, the features of each class are clustered into K clusters to obtain sub-group representations within each class.
The paper shows clear improvement for classification tasks on many standard datasets, when classifiers are trained from scratch on the augmented dataset.
Strengths: The paper addresses a very important problem - How to perform effective data augmentation using synthetic data generation models. The approach does not require any training or fine-tuning of the diffusion models to adapt the generation to the required data distribution. The paper has detailed ablation studies for each design choices. Also, the experimental results suggest considerable improvement over the prior data expansion approaches.
Weaknesses: 1. There seems to be some confusion in the explanation. Section 3.3 (Transform Data Points) seems to suggest that the approach always starts with a sample from the dataset. However, the algorithm in the appendix does not include any sample from the dataset.
2. Is the approach extendable to other supervised learning tasks like segmentation, detection, etc? It looks to me like the augmentation approach is tailored for classification tasks only, with the usage of class-specific hierarchical prototypes. Whereas, traditional augmentation approaches like random cropping, rotation, etc., are generic.
3. One of the main contributions of the paper is the residual multiplicative transformation. The paper does not give a clear answer to "why should I not adjust the latent $z_t$ directly to optimize for the energy function $g_t$?" The paper shows empirical explanations in the ablation, but there is a lack of concrete reasoning as to why this approach works.
Technical Quality: 3
Clarity: 3
Questions for Authors: Can the authors please address the questions 2 and 3?
Additionally, how is the distribution across classes chosen for the synthetic data generation process? Are all classes equally sampled? I couldn't find this information in the paper.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes, the authors have addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our sincere gratitude for the detailed and professional attention you have given to our work during the review process. We greatly appreciate your recognition of the **very important problem** our work addresses, as well as your acknowledgment of our **detailed ablation studies** and the **considerable improvements** made.
Below are our detailed responses to the weaknesses and questions:
**Re: Weakness #1**
In Section 3.3, we introduce our method starting with a sample $x$. However, in the algorithm presented in the appendix, we begin with the latent point $z_T$, which omits the image encoding process from $x$ to $z_T$ for simplicity. Thank you for pointing this out. We recognize that this may cause confusion and will revise it in the final version.
**Re: Weakness #2**
Our method effectively generates classification data and has the potential to be extended to detection and segmentation tasks. This extension requires incorporating more advanced foundational models, such as ControlNet [1], which can use layout maps or segmentation masks as conditions to control the spatial positioning of targets. An intuitive approach to extend our method might involve cropping features from relevant target regions, averaging them to obtain feature vectors, and then constructing prototypes to guide the generation process.
**Re: Weakness #3**
The residual multiplicative transformation applies channel-level transformation to the original latent point. Compared to directly optimizing the latent point, this transformation constrains the optimization space, preventing out-of-control guidance and making the optimization process easier.
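As a rough illustration of this point, the sketch below optimizes only a per-channel residual scale $w$, so the search space has C degrees of freedom instead of C×H×W as in direct latent optimization. The energy function here is a simplified squared distance between channel means and a prototype, a stand-in for the paper's actual prototype-based energy, and `residual_multiplicative_step` is a hypothetical name.

```python
import numpy as np

def residual_multiplicative_step(z, prototype, lr=0.1, steps=2):
    """Channel-level residual multiplicative transform (illustrative).

    z: latent of shape (C, H, W); prototype: target feature of shape (C,).
    Only a per-channel residual scale w is optimized, with the transform
    z' = z * (1 + w), rather than updating every element of z directly.
    """
    C = z.shape[0]
    w = np.zeros(C)
    for _ in range(steps):
        z_prime = z * (1.0 + w)[:, None, None]
        chan_mean = z_prime.mean(axis=(1, 2))      # (C,)
        resid = chan_mean - prototype              # (C,)
        # analytic gradient of ||chan_mean - prototype||^2 w.r.t. w
        grad = 2.0 * resid * z.mean(axis=(1, 2))
        w -= lr * grad
    return z * (1.0 + w)[:, None, None], w
```

Because each step only rescales channels of the original latent, the guided point stays within a constrained neighborhood of the starting latent, which is the intuition behind "preventing out-of-control guidance" above.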
**Re: Question #1**
We apologize for missing this detail. We follow previous data expansion methods by expanding each sample at certain ratios. The sampled category distribution is consistent with the original dataset's category distribution. We will add a description in the final version.
[1] Adding Conditional Control to Text-to-Image Diffusion Models, ICCV2023.
---
Rebuttal Comment 1.1:
Comment: I agree with the author's rebuttal. Though approaches like ControlNet can be used for generating synthetic data augmentations, the energy function used for guidance, built using hierarchical prototypes, looks very much tailored for classification tasks alone. It is very unclear how this approach can be extended to tasks like detection where prototypes have to be constructed not only for object classes but also for object locations, which is not so straightforward.
---
Reply to Comment 1.1.1:
Title: Thanks for your prompt reply!
Comment: Dear **Reviewer ghiL**,
Thank you for your thorough review and for agreeing with our rebuttal. We appreciate your responsible evaluation.
For the extension to detection and segmentation tasks, we need to guide latent points at the instance level. Based on the segmentation mask conditioned ControlNet model, a potential guiding design is as follows:
**(a) Deriving Hierarchical Prototypes:**
Given a sample $x$ with $N$ instances and its annotation mask $M = \{m_1, m_2, m_3, \ldots, m_N\}$, we first derive each instance image $I_i$ by suppressing its background pixels to zero and taking the minimum bounding box of the foreground region. Then, all instance images within a class are resized and fed to the pre-trained image encoder to derive their feature embeddings. We construct hierarchical prototypes for each class using a feature clustering strategy, as mentioned in Section 3.2.
**(b) Guiding in the Denoising Process:**
During the denoising process, we apply instance-level energy guidance as follows: First, we transform each instance in the latent point $z_t$ with a residual multiplicative transformation, similar to Section 3.3. We then predict a clean data point $x_{0|t}$ at step $t$ and derive $N$ instances by suppressing the corresponding background pixels in $x_{0|t}$ using predefined mask conditions. Finally, we calculate the energy score and apply our energy guidance based on these $N$ predicted instances and their corresponding hierarchical prototypes, similar to Equation 6 in our manuscript.
Additionally, considering that guiding $N$ instances during denoising may introduce extra computational load, we further propose a more efficient strategy: **Guide Multiple Instances Once-for-All**. Unlike the previous design, this method directly inputs the predicted $x_{0|t}$ into the image encoder and performs energy guidance at the final layer of the image encoder's feature map, thus forwarding through the image encoder only once. Compared to the first design, this approach is more efficient but may introduce instance feature disturbances due to the convolutional nature of the encoder.
These designs theoretically extend the application of energy guidance to detection and segmentation tasks. We are conducting additional experiments to evaluate this extension for detection and segmentation data augmentation tasks, which will be included in future versions of our work.
Given these clarifications and the positive aspects of our work that you've previously acknowledged, we kindly ask if you would reconsider your evaluation. We believe our research makes a valuable contribution to the field and addresses important challenges in classification data augmentation.
With approximately 18 hours remaining in the discussion phase, we sincerely hope our rebuttal has addressed your concerns. If our responses have clarified the issues you raised, we kindly request that you consider raising your score.
We greatly value your feedback and have tried to provide thorough responses to each point. If you have any unresolved questions or need further clarification, please don't hesitate to let us know. We will do our utmost to provide additional information within the remaining time.
Once again, thank you for your valuable time and expert opinion. Your feedback is crucial in improving the quality of our research.
Best regards
---
Rebuttal 2:
Title: Kindly Request for Your Feedback!
Comment: Dear **Reviewer ghiL**,
Thank you again for your valuable comments. We have tried our best to address your questions (see rebuttal above), and will carefully revise the manuscript by following suggestions from all reviewers. Please kindly let us know if you have any follow-up questions.
Your insights are crucial for enhancing the quality of our paper, and we would greatly appreciate your response to the issues we have discussed.
Thank you for your time and consideration. | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Bridging Inter-task Gap of Continual Self-supervised Learning with External Data | Reject | Summary: This paper focused on continual contrastive self-supervised learning (CCSSL), highlighting that the absence of inter-task data results in sub-optimal discrimination in continual learning. The authors then proposed a method that performed contrastive learning of external data as a bridge between continual learning tasks. The proposed method achieves some improvements in a plug-in manner.
Strengths: 1. This paper is basically well-organized and easy to follow.
2. I appreciate the idea that continual learning should consider inter-task discrimination, which is limited by historical samples and underexplored in the literature. This results in a gap between ideal continual learning performance and joint training performance.
3. The proposed method seems to provide plug-in improvements over continual learning baselines.
Weaknesses: 1. The proposed method is essentially a straightforward extension of contrastive learning with external data, which limits novelty and technical contributions.
2. As acknowledged by the authors, the similarity of external data to the continual learning tasks is highly relevant to the performance improvements. The use of relatively different / OOD data tends to provide less improvements. Compared with the large amount of external data in use, such improvements may not be significant enough.
3. The employed external data is basically public datasets with careful pre-processing. In realistic applications, the external data in the wild (i.e., without such pre-processing) may result in additional differences and thus further limit the applicability of the proposed method.
Technical Quality: 3
Clarity: 3
Questions for Authors: I appreciate the motivation and key argument of this paper. However, my major concerns lie in the technical contribution and the applicability in realistic applications. Please refer to the Weaknesses.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have discussed their limitations and societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments. We address your questions or concerns below.
##### **Q1: Novelty and technical contributions**
> **A1:** Thank you for your comments. However, we would like to clarify that our work is not a simple combination of contrastive learning and external data; it brings many insights to the CCSSL field.
> Our novelty and contribution are mainly:
> 1. We point out the problem of ignored inter-task comparisons in CCSSL and demonstrate through many experiments (Fig 1 Right, Table 7 in Appendix A2.1) that this problem can exacerbate the model's confusion of inter-task classes. As reviewers oaUg and hAsz said, this problem is “significant but often overlooked”, “The finding that existing regularization-based CCSSL methods overlook inter-task discrimination is novel”, we believe that this problem is inspiring and novel. Although this problem has often been overlooked in previous works, it should be considered in any future work in CCSSL.
> 2. We demonstrate the soundness and effectiveness of using external data in CCSSL, which is novel in this field. As reviewers oaUg and hAsz said, “the proposed method leveraging external data is novel”, “BGE offers a creative solution”. For a long time, there has always been a gap between replay-based and regularization-based approaches (i.e., replay-based approaches generally perform better than regularization-based approaches, but have limited applicability due to privacy, safety, and other concerns). Considering the characteristics of contrastive learning (refer to Appendix A.2.4), we propose that using external data can also play a role in improving the performance of existing regularization-based approaches while avoiding the aforementioned potential concerns.
##### **Q2: The performance improvements in realistic applications**
> **A2:** Admittedly, there is a relevance between the performance improvement of BGE and the quality of the external datasets. However, in realistic applications, we do not have to use wild data as external data. Instead, we can choose high-quality general datasets such as ImageNet, which are already capable of handling most of the realistic tasks.
> In our paper's experiments (Table 1, 2, 5, 6), we artificially added a lot of OOD data to the external data, intending to test the ability of our method in more challenging scenarios, but such scenarios are not common in real-world situations, as most domains have established relatively well-developed public datasets nowadays. Therefore, our method can make sense in most real-world scenarios. In addition, even in extreme cases (where the external data we collect contains many OOD data), our OPO sampling algorithm can select external data that is more suitable to be added to the training, and the results in the Table 3 in our paper demonstrate the performance improvement of the sampling algorithm when the external data contains a variety of OOD data.
##### **Q3: More detailed discussion of external data quality**
> **A3:** To clarify the quality requirements for external data, we conduct experiments using various datasets as the external dataset. In Table 6, we show the results using Internet data (CC3M), generated data (GenImage), or fine-grained data (CUB-200) as the external dataset. Although these datasets have different characteristics, they are all still effective in improving the baseline method. One of them, CUB-200, is a dataset containing only various classes of birds; it can still be utilized by BGE even though it is of very low quality for our task (due to its lack of diversity). To simulate using external data in the wild, we use CC3M as the external dataset, whose images are harvested from the Internet with very simple filtering (thus representing a wider variety of styles). Nevertheless, BGE still enhances the baseline method. We believe that even if we truly used wild external data with only light filtering, the results would likely be in line with using CC3M.
> In addition, the sampling algorithm we designed can also filter out OOD data to further improve the quality of external datasets. Therefore, concerns about applicability from the perspective of external data quality are less significant.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their rebuttal. I carefully read all reviewers' comments, the corresponding rebuttal, and the original paper. I understand the idea of using external data to improve inter-task representation in continual learning. However, this idea is not completely novel (a lot of papers have explored this idea). The proposed method is relatively straightforward and only leads to moderate improvements. Therefore, I cannot be more positive at the current stage.
---
Reply to Comment 1.1.1:
Comment: Thank you for taking the time to review our rebuttal and for your detailed assessment of our work. We appreciate your recognition of our approach to using external data to enhance inter-task representation in continual learning.
While we value your feedback, we respectfully disagree with the statement that "a lot of papers have explored this idea." To the best of our knowledge, our work is the first to investigate this approach within the CCSSL context. If there are other papers that have addressed this concept in a similar way, we would greatly appreciate it if you could provide references so we can better understand and compare our work with theirs.
Thank you again for your valuable insights. We remain committed to refining our research based on the feedback provided and hope you will reconsider our work in a positive light. | Summary: This paper finds that existing methods in continual contrastive self-supervised learning (CCSSL)--a class-incremental learning scenario where the data is unlabeled--overlook contrasting data from different tasks, leading to inferior performance compared to the joint training upper bound. The authors propose to sample external data that are similar to each of the learned tasks to augment learning the current task. The self-supervised learning (SSL) objective on the union of the selected external data and the current-task data encourages the model to distinguish the current task and the learned tasks better.
The authors perform experiments with ResNet-18 and (mostly) BarlowTwins on CIFAR-100 and ImageNet-100, with a mix of other datasets as the external data. The authors find that their method, BGE, consistently improves existing CCSSL methods that do not perform inter-task discrimination. Differently, the joint training model does not benefit from external data.
Strengths: 1. The finding that existing regularization-based CCSSL methods overlook inter-task discrimination and the proposed method leveraging external data are novel.
2. The experiments performed in the analysis section (Sec. 4.3) provide insights into why the method works and are interesting to me, especially on whether the benefit of external data comes from positives or negatives.
Weaknesses: 1. The SSL method used is mainly BarlowTwins (except in Table 4 where SimCLR is used), which is not usually considered as a contrastive learning method because it does not contrast the anchor with negatives. I wonder if (a) providing preliminaries in the contrastive sense (Sec. 3.1), (b) including "contrastive" in the setting name (CCSSL), and (c) arguing that OPO enforces diversity because of findings based on contrastive learning (L#191) are misleading. I think there also needs to be some intuitions on how non-contrastive SSL methods like BarlowTwins help distinguish inter-task data since it is the one used in the experiments, and such an analysis could be very interesting.
2. My general feeling about the writing is that, although the main ideas are conveyed clearly, some claims require justification, and can be improved. Besides some big words ("much more meaningful" in L#69, "widely agree" in L#138, "extremely low" in L#167, etc.), please see the questions below for concerns regarding specific reasonings.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. (L#157) Could the authors provide references for privacy concerns? I wonder if the model can still encode information about the (potentially private) training data into its parameters, even without storing data into a memory buffer. External data can also be private, which are stored into a large memory buffer (10K in the experiments) by the proposed method. Since replay-based methods do perform inter-task discrimination by minimizing the loss over the replay buffer, I think there needs to be more justifications regarding why they are not suitable here.
2. (Eq. 3) Is this a weighted sampling? I assume $|D_e^{t-1}|$ is much smaller than $|D_t|$ and the ratio changes as you see more tasks, so a uniform sampling would oversample the current task.
3. (Sec 3.3) Is diversity encouraged here because you sample the current task data uniformly one by one when selecting the closest external data? Also, why is diversity a "proxy for...future task data?" It seems to me that diversity just means that the external data estimates $D_t$ well.
Minor:
4. (Eq. 1) It is a bit weird to say that this equation is the objective of the CCSSL setting. I think the objective is always to minimize the loss when the expectation is taken over the global distribution $D$ (i.e., not over each $D_i$). Maybe a better way is to say that it is an approximate of the objective of existing regularization-based CCSSL methods.
5. (L#138-143) I think inter-task discrimination is not specific to CCSSL. In supervised continual learning (CL), there are works that discuss this problem, e.g., [1, 2]. The writing here suggests that lacking inter-task discrimination stems from the absence of labels, but it seems to me that it stems from CL itself.
6. (L#52) Why is the proposed method "more generalizable and robust to OOD data?" I thought the experiments (e.g., in Table 1 and 2) show that ID data is always the best choice for external data.
7. (L#54) Why does the proposed method "not require extensive external data?"
8. (L#59) "enables" -> "It enables".
9. (Eq. 2) Should there be coefficients before each loss term for the equality to hold?
10. (L#283) Which result is this paragraph referring to? In Table 3, I find some improvement on ImageNet100 also quite small (<1% in some cases).
11. (L#496) SGD with a learning rate of 0.4 seems a bit high to me. How did the authors perform hyperparameter search?
12. [3] also finds that existing CCSSL methods do not perform inter-task discrimination (which they call "cross-task consolidation"). They also propose an optimization objective (Eq. 8) similar to Eq. 2 in this paper. This is a concurrent work, but I wonder if the authors should discuss it due to the similarity.
[1] Learning a unified classifier incrementally via rebalancing. Hou et al. CVPR 2019.
[2] A theoretical study on solving continual learning. Kim et al. NeurIPS 2022.
[3] Integrating present and past in unsupervised continual learning. Zhang et al. CoLLAs 2024.
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors mention that their method uses external data which preserves privacy. One concern is that when the external data is not curated (e.g., scraped from the internet), there is risk that they contain private or harmful information that can be learned by the model.
Another point is that the findings are limited to BarlowTwins (and SimCLR in one experiment) and regularization-based CL methods, and may not generalize.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments. We address your questions or concerns below.
**Q1: Analysis of how BarlowTwins “contrast” during learning**
> **A1:** We treat BarlowTwins here as a generalized contrastive method. Although it does not directly contrast the anchor with negatives, [1] approximately regularizes its loss into keeping positive pairs aligned (alignment) and pushing negative pairs apart (divergence). Specifically, after computing the cross-correlation matrix of the two views' features, BarlowTwins constrains the diagonal elements to be close to 1 (first loss term) and the off-diagonal elements to be close to 0 (second loss term). [1] indicates that the first term can be formulated as an upper bound on alignment, and the second term as an upper bound on keeping class centers apart (approximate divergence). Thus, its loss optimization has an implicit contrastive effect, and BarlowTwins can also be explained by contrastive learning theory, justifying the use of the word "contrastive" in the paper. We will clarify this in future versions to avoid misunderstanding. In addition, we have evaluated some other contrastive methods, including SimCLR (Table 4, **PDF Table R3Q5 (a)**), BYOL (Table 8) and VICReg (**PDF Table R3Q5 (b)**).
>
> [1] Towards the generalization of contrastive self-supervised learning. Huang W, et al. ICLR 2023.
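The BarlowTwins objective described above can be sketched minimally as follows (batch-normalize both views' embeddings, compute their cross-correlation, pull the diagonal toward 1 and the off-diagonal toward 0; `lam` is the standard trade-off weight):

```python
import numpy as np

def barlow_twins_loss(z1: np.ndarray, z2: np.ndarray, lam: float = 5e-3) -> float:
    """BarlowTwins: alignment via the diagonal of the cross-correlation
    matrix, redundancy reduction (approximate divergence) via its
    off-diagonal elements."""
    n = z1.shape[0]
    z1 = (z1 - z1.mean(0)) / z1.std(0)       # batch-normalize each dim
    z2 = (z2 - z2.mean(0)) / z2.std(0)
    c = z1.T @ z2 / n                        # (D, D) cross-correlation
    on_diag = np.sum((np.diag(c) - 1.0) ** 2)
    off_diag = np.sum(c ** 2) - np.sum(np.diag(c) ** 2)
    return float(on_diag + lam * off_diag)
```

When the two views' embeddings are identical, the diagonal of the cross-correlation matrix is exactly 1, so only the small off-diagonal penalty remains.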
**Q2: More explanation about privacy concerns**
> **A2:** Please see the common response.
**Q3: Is this a weighted sampling?**
> **A3:** No, it’s a uniform sampling. We conduct a weighted sampling experiment based on "PFR CIFAR100 4 tasks, with CIFAR10 as external data", which ensures that the data in each epoch is well-balanced from each task. However, the result is 62.69, lower than 64.37 for uniform sampling.
> We believe the reasons may be: 1. $D_e^{t-1}$ has a limited size, thus it cannot provide sufficiently diverse external data as more tasks are seen. 2. Since the current task has not been learned before, learning the in-task data is crucial to the performance of the current task.
**Q4: Is diversity encouraged here because you sample the current task data uniformly one by one when selecting the closest external data?**
> **A4:** Yes, we sample one closest external data by one current task data for diversity purposes.
> External data do not necessarily belong to the same class as in-task data, and their classes may also be similar to future classes. Since the feature distribution learned by contrastive learning is relatively uniform, the distribution of external data obtained by letting each in-task sample select its nearest external sample is also relatively uniform, enhancing diversity. The understanding that "$D_e^t$ estimates $D_t$ well" is correct, but $D_t$'s feature distribution is relatively uniform. If the goal of sampling $D_e^t$ were merely to proxy $D_t$, we should select the data with the most distinctive features of each class in $D_t$ (usually selected via class prototypes in supervised learning). In contrast, the goal of diversity helps select external data that may resemble future tasks, rather than exclusively matching old class features.
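The one-by-one selection described here could be sketched as follows (a minimal sketch of our reading of OPO: each in-task sample, visited in random order, proposes its nearest not-yet-selected external sample; names are illustrative, not from the paper's code):

```python
import numpy as np

def opo_sample(task_feats: np.ndarray, ext_feats: np.ndarray, budget: int):
    """One-Propose-One: every in-task feature proposes its nearest
    remaining external feature, which keeps the selection diverse
    because already-chosen external samples cannot be picked again."""
    selected, available = [], list(range(len(ext_feats)))
    order = np.random.permutation(len(task_feats))
    for i in order:
        if len(selected) == budget or not available:
            break
        d = np.linalg.norm(ext_feats[available] - task_feats[i], axis=1)
        j = available.pop(int(np.argmin(d)))  # global index of nearest sample
        selected.append(j)
    return selected
```

Removing each chosen sample from the pool is what spreads the selection across the (roughly uniform) feature space rather than concentrating it near a few prototypes.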
**Q5: More contrastive learning method paradigm experiments**
> **A5:** We use BarlowTwins for most of our experiments because most of the prior works commonly include it, thus using it is convenient for comparisons. We also show results using another self-supervised learning method, BYOL, in Appendix A.2.2.
> To further exhibit the generalizability of BGE, we extend BGE to SimCLR (ImageNet100 benchmarks, ImageNet-900 as external dataset) and VICReg (CIFAR100 benchmarks, CIFAR10 as external dataset) based on CaSSLe and FT, the experimental results are shown in the **PDF Table R3Q5(a) and R3Q5(b)**.
**Q6: Minor question responses**
> **(Eq. 1, L#59, Eq. 2) Presentation questions:** Thank you for your suggestions, we'll revise them carefully.
>
> **(L#138-143) Presentation of inter-task discrimination:** Inter-task confusion is not specific to CCSSL; it is partly caused by catastrophic forgetting, which is common to all continual learning scenarios. Our point here is that, due to the lack of labels, CCSSL must learn via inter-sample comparisons, and the absence of inter-task comparisons exacerbates inter-task confusion.
>
> **(L#52, L#54) Presentation of our method’s advantages:** In the paragraph at L#52, we compare BGE with prior continual learning methods that use external data, so "more generalizable and robust to OOD data" and "does not require extensive external data" are our advantages over them. Some prior methods use external data in a supervised manner and need to generate pseudo-labels, whose quality degrades with OOD data, hurting performance. Other methods require extensive data to stabilize the feature space during training; compared with them, we require less external data.
>
> **(L#283) OPO sampling algorithm analysis:** This paragraph refers to Table 3. It is true that in some cases the improvement is not obvious.
>
> We apologize for the confusion around the minor questions above; we will revise them in future versions.
>
> **(L#496) Hyperparameter search:** The learning rate in the CaSSLe and PFR code is 0.3. We follow them and search for hyperparameters around those values.
>
> **Discuss the concurrent work:** [1] discovers the lack of inter-task comparisons in existing CCSSL methods concurrently with us, but [1] solves it by simply saving old task data for comparison with new task data. We argue that saving old task data may be impractical due to privacy concerns and inaccessibility, so we propose using external data to compensate for inter-task comparisons. Our work not only resolves inter-task confusion but also provides a new performance improvement scheme for CCSSL without saving old task data based on the properties of contrastive learning (the properties refer to Appendix A.2.4).
>
> [1] Integrating present and past in unsupervised continual learning. Zhang et al. CoLLAs 2024.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors’ rebuttal as well as the reference of Huang, et al.
I have read through the other reviews and the corresponding rebuttals. Most of my concerns are resolved, but I’m still not entirely convinced by the arguments around privacy concerns/motivation. This is mainly due to the lack of a **concrete** practical scenario where (i) we cannot store past training images, (ii) the external dataset is different enough with the training data (and thus not private) and (iii) using this external dataset helps performance. Unless based on a concrete scenario and experiment, I find it hard for people to agree on (ii).
Therefore I tend to keep my score of weak accept.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thorough review and valuable feedback!
Regarding privacy concerns, we can provide several concrete scenarios where our method would be particularly applicable. For instance, in the context of remote sensing or surveillance images, storing past training images could raise significant security and privacy issues. As a result, it would not be feasible to retain these images directly. However, our method allows us to leverage in-task data to sample relevant external data from similar publicly available datasets [1][2] after the current training stage. These external datasets can be stored and used for subsequent training in a privacy-preserving manner. Given their relevance to the tasks at hand, it is reasonable to expect that using this external data can improve performance.
[1] Sun X et al., "FAIR1M: A benchmark dataset for fine-grained object recognition in high-resolution remote sensing imagery," ISPRS 2022.
[2] Li D et al., "A richly annotated pedestrian dataset for person retrieval in real surveillance scenarios," TIP 2018.
We hope this clarification addresses your concerns. Thank you again for your insightful comments. | Summary: The paper introduces BGE, a novel approach to address the challenge of inter-task data comparison in Continual Contrastive Self-Supervised Learning (CCSSL). BGE incorporates external data to bridge the gap between tasks, facilitating implicit comparisons and improving feature discriminability. The paper also presents the One-Propose-One (OPO) sampling algorithm to select relevant and diverse external data efficiently. Experiments demonstrate BGE's effectiveness in enhancing classification results across various datasets and its seamless integration with existing CCSSL methods.
Strengths: 1.BGE offers a creative solution to a significant but often overlooked problem in CCSSL, enhancing the feature learning process through external data.
2.The paper provides extensive experimental results that validate the effectiveness of BGE in improving classification performance across different datasets.
Weaknesses: 1.The introduction of external data may increase the computational cost and training time, which could be a limitation in resource-constrained environments. The authors should provide more analysis of the extra time consumption.
2.While BGE shows promising results, the paper could provide more insight into how the method scales with the size of the external datasets, which is crucial for very large-scale problems.
Technical Quality: 3
Clarity: 3
Questions for Authors: How does BGE handle the potential privacy concerns that may arise from using external data, especially if the data contains sensitive information?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper acknowledges the increased computational cost due to the use of external data. However, it could further discuss the trade-off between performance improvement and time consumption.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments. We address your questions or concerns below.
**Q1: The trade-off between performance improvement and time consumption**
> **A1:** The additional computational consumption of BGE comes mainly from the additional amount of data. Therefore, we report the performance and the corresponding computational cost ratios on PFR with 4 tasks as the external data budget $K$ increases.
>
> | Budget | 0 | 2000 | 5000 | 10000 | 20000 | 30000 |
> | ----------- | :---: | :---: | :---: | :---: | :---: | :---: |
> | Performance | 60.92 | 61.96 | 62.79 | 64.37 | 65.43 | 65.55 |
> | Cost | 1 | 1.12 | 1.3 | 1.6 | 2.2 | 2.8 |
> The cost represents the ratio of the number of iterations to the baseline. As the budget increases, the improvement of BGE to the baseline method becomes more significant until the amount of external data reaches approximately 20,000, at which point it is difficult to improve performance by further increasing the budget. Considering the trade-off between performance improvement and time consumption, we set a budget of 10K in our method.
> At the same time, to validate the effectiveness of BGE under the constraint of computational resources, we control the computational consumption of BGE to be consistent with the baseline method to conduct the following experiments. All experiments are conducted under the “PFR+BGE, CIFAR100 4 tasks, CIFAR10 as external dataset” setting.
> 1. **Train for fewer epochs.** In this case, the model can only be trained for fewer epochs when more external data is used; the results of BGE with different external data budgets are shown below:
>
> | Budget | 0 | 2000 | 5000 | 10000 | 20000 | 30000 |
> | ----------- | :---: | :---: | :---: | :---: | :---: | :---: |
> | Epoch | 500 | 431 | 357 | 277 | 192 | 147 |
> | Performance | 60.92 | 61.47 | 61.99 | 62.87 | 62.59 | 63.13 |
>
> The second row of the table indicates the number of epochs that can be trained at each budget. When the budget is large (>10000), even though fewer epochs can be trained, the performance is still better than the baseline.
> 2. **Incorporating external data via mixup.** In this case, instead of adding external data directly to the training, we mix each in-task sample with a randomly selected external sample during training, keeping the number of data iterations the same as the baseline. The only additional consumption introduced here is the mixup operation, which is negligible.
>
> | Method | PFR | PFR+BGE$_{mixup}$ | PFR+BGE |
> | ----------- | :---: | :---------------: | :-----: |
> | Performance | 60.92 | 62.57 | 64.37 |
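The mixup variant BGE$_{mixup}$ described in point 2 could be sketched as follows (assuming standard mixup with a Beta-distributed mixing coefficient; the exact coefficient scheme is an assumption):

```python
import numpy as np

def mixup_with_external(x_task: np.ndarray, x_ext: np.ndarray,
                        alpha: float = 1.0) -> np.ndarray:
    """Mix each in-task image with a randomly chosen external image,
    so the number of training iterations matches the baseline."""
    idx = np.random.randint(len(x_ext), size=len(x_task))
    # One mixing coefficient per image, broadcast over H, W, C
    lam = np.random.beta(alpha, alpha, size=(len(x_task), 1, 1, 1))
    return lam * x_task + (1.0 - lam) * x_ext[idx]
```

The batch size and iteration count are unchanged, which is why the only overhead is the mixing itself.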
**Q2: How BGE scales with the size of the external datasets?**
> **A2:** As shown in the first table of A1, as the size of the external dataset increases, BGE provides larger improvements over the baseline method. Once the budget reaches about 20,000, further increases in scale no longer improve performance; at that point the budget is slightly larger than the scale of our in-task data. We therefore observe that the scale of external data should roughly match the scale of the in-task data in our method.
**Q3: How does BGE handle the potential privacy concerns that may arise from using external data, especially if the data contains sensitive information?**
> **A3:** We can limit the selection of external data to publicly available datasets on the Internet, and refer to the license and other statements of each dataset to select data that are allowed to be publicly used, and they usually do not have privacy risks.
> If in some special cases, using publicly available datasets as external data cannot harvest the desired results, and therefore we have to collect the data by ourselves, we can introduce some techniques from other domains (e.g. privacy filtering, differential privacy, machine unlearning) that are complementary to our method to address privacy concerns. Since the number of candidate external data is almost infinite, even after filtering, some useful external data can still be retained.
> Another perspective of privacy is data accessibility, where we may not have access to old task data. For example, some existing models on the Internet may only release its checkpoint but not the training data for some privacy reasons. While if we choose to use publicly available external datasets, there are no accessibility concerns. | Summary: The authors:
- argue that an optimal model for continual contrastive self-supervised learning should perform as well as a model trained with contrastive learning on the whole set of data, including negative samples taken between different temporal slices of the dataset, not just within the same temporal slice
- propose a method for using pre-existing external data to augment the temporally constrained dataset
For context on my background, I am very familiar with SSL literature, only loosely familiar with continual learning, and have never heard of continual contrastive learning before.
Strengths: Researchers measure the performance of their technique on top of several existing techniques for preventing catastrophic forgetting, showing the performance of the combination of methods. It is valuable to know this.
The authors make comparison against some baselines - not just training on the joint data from scratch (non-continual learning paradigm); but also with external data added. They also demonstrate the performance with random sampling of the external dataset vs smart sampling with their algorithm, and investigate some ablations. These results help to inform where their approach provides value.
The discrepancy between the performance of a model trained with negative samples between subdatasets and with negative samples only taken within subdatasets appears to be a noteworthy observation and one which should be discussed within the continual contrastive learning community. To my understanding, the joint task as the authors suggest sounds appropriate. However, this change could be construed as changing the task that is posed to the model and moving the goal-posts (defining an easier task than is used in the literature at present), indeed as could incorporating external data. The question is in some sense what the goal of CCSSL truly is. In standard continual learning, the goal is to retain performance on previous tasks in the face of training on new tasks. This is often simulated by having classes within a dataset arrive in staggered batches. However, in contrastive continual learning there appears to be only one task (contrastive learning) and the data on which that one task is trained is merely staggered. Does the authors' approach break down an artificial barrier that was in place to simulate a harder task? Or a barrier that should not have been present in the first place and is a vestigial barrier inherited from continuous learning? This is unclear to me.
The work is generally well presented.
Weaknesses: My understanding of continual learning is that one experimental paradigm in use is to retain previously presented subdatasets/subtasks and to prevent catastrophic forgetting by including the old tasks in the mix while introducing a new task (e.g. [Robins, 2001](https://www.tandfonline.com/doi/abs/10.1080/09540099550039318), [Aljundi, 2019](https://proceedings.neurips.cc/paper_files/paper/2019/file/e562cd9c0768d5464b64cf61da7fc6bb-Paper.pdf)). This setup is not considered in the paper, but it seems it would address a significant fraction of the issues the method is attempting to address with regard to the joint vs intra-only training configurations. I imagine results may still be improved by incorporating external data in the "imaginative" capacity before the full dataset has "arrived", even in this scenario. It is unclear why the authors retain this barrier (refusing to continue training on datasets $D_{0, ... i-1}$) even whilst changing the task from a series of isolated contrastive learning tasks to a joint contrastive learning task where data arrives at staggered intervals.
The paper is missing comparison to some additional baselines which would be useful to see:
- What is the performance for a model trained solely on external data, without using the continual learning dataset?
- The Joint+ED configuration uses a static subset of the external data. One could also consider using Joint + a subset of size $K$ of the external data that changes every epoch, so the model potentially sees all samples from ED eventually rather than just a fixed subset.
- What would the performance be if instead of finding external data proxies for the existing data, you simply retained the previous D_i datasets from previous tasks without discarding them?
### Statistical significance
There is no evaluation of whether the differences in results are statistically significant. This could be done over repeated runs with different seeds; experiments appear to have been performed with only a single random seed. Alternatively, since the authors have repeated runs across different experimental paradigms, these could be combined into a test for difference without performing experiments with multiple seeds; however, there may be correlated randomness between the experiments. (Ideally, experiments would not all be performed with the same seed, nor the same ordering of tasks; these should be held constant between comparators and varied between runs to eliminate the effect of these hidden variables on the findings.)
### Figures
Fig 1: tSNE has parameters that need to be tuned correctly (perplexity in particular) in accordance with the scale of the features, whereas the more recent technique PaCMAP doesn't and typically produces better results without tuning. The lack of tuning of tSNE may impact the distribution seen in the figure, resulting in one method appearing better than the other by chance where a different choice of perplexity may have resulted in different findings. It is not clear whether the classes were cherry-picked to give favourable results for the authors' method and bad for existing methods. (I am not asserting that they were cherry-picked, but it is not indicated how the classes were selected in the paper so it is not possible to know whether they were or if these results are representative.) These points are not so important as the figure is more illustrative than quantitative anyway.
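(For reference, perplexity is an explicit parameter in common t-SNE implementations; a minimal illustrative sketch with scikit-learn, using hypothetical toy features in place of the paper's learned embeddings:)

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Toy stand-in for learned features: two separated Gaussian clusters.
X = np.vstack([rng.normal(0, 1, (50, 16)), rng.normal(4, 1, (50, 16))])

# Perplexity roughly sets the effective neighbourhood size; different values
# can produce visibly different layouts for the same features.
embeddings = {
    p: TSNE(n_components=2, perplexity=p, random_state=0).fit_transform(X)
    for p in (5, 30)
}
```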
Fig 2: Font size is too small; to maintain legibility, figure fonts should be no smaller than ~70% the font size of the main text.
### Tables
Table 5: Not clear why this experiment was performed with PFR only. The experiment does not necessarily need to be run with FT and CaSSLe too, but the authors should say on what basis PFR was selected (i.e. it performs better than FT and CaSSLe).
Table 6: Not specified which method was used (FT, CaSSLe, PFR)
Table captions should indicate what the initialisms (CP, CPI, IN, INP, IND) stand for, so readers don't have to look in a distant part of the text to find out. In general, these initialisms are not intuitive - the characters are all run together and the number of characters coming from a dataset in the group is sometimes 1, 2, or 5; "I" and "P" cannot be intuitive when there are multiple datasets being used that start with these characters - and this makes it hard to follow the results. The table headings could be restructured to make this clearer, e.g. instead of CIFAR, CP, CPI, use as headings C-10, +P365, +IN-R, which are immediately readable and convey the differences between the columns succinctly.
Tables would be more readable if you used `\cmidrule` to indicate the groupings that the headings apply to, instead of having a rule across the whole table.
### Typographical
- L59 Missing word "with them. [This] enables the"
- L68 "performance doesn't improve even sometimes decreases."
- L89 "Since no labeling requirement, incorporating"
- L239 sentence is not written correctly
### Citations
Casing of initialisms is wrong on numerous citations, e.g.
- [2] vit
- [15] Pathnet
- [40] icarl
- [47] t-sne
- [48] caltech-ucsd birds
Some citations provide no location at which the paper being cited can be found, e.g.
- [48]
Some citations cite arXiv versions of papers instead of peer-reviewed versions, e.g.
- [13] https://openreview.net/forum?id=YicbFdNTTy
Technical Quality: 3
Clarity: 3
Questions for Authors: What is the anticipated scenario where this experimental paradigm would be utilized? Without an understanding of the intended usage, it is unclear whether linear probing is a sufficient evaluation, or if others such as fine-tuning (advocated by e.g. [MAE](https://arxiv.org/abs/2111.06377)), kNN (advocated by e.g. [DINOv2](https://arxiv.org/abs/2304.07193)), or clustering (advocated by e.g. [ZSC](https://arxiv.org/abs/2406.02465)) should be considered in addition to LP.
The size of the external dataset being added, $K$, does not appear to be indicated in the paper. What is this value and how was it selected? What is the impact of the choice of $K$?
Are the groupings of classes the same or randomized between experiments?
The sweep to add new data to the externally derived dataset includes a loop in a loop, comparing every sample in the current subdataset against every sample in the external dataset(s). How expensive is this step? There are presumably trade-offs that could be considered in practice, such as how large the pool of external data should be, whether to prune it in advance in order to minimize the cost of the external data selection sweep, and how often the external data can be swept over (in this paper the sweeps are locked to the data arrival/departure times, but in a more general continual learning framework it would be important to know how often to update the subset of the external dataset being used for training).
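(For scale intuition, the "loop in a loop" the question refers to can be vectorised into a single similarity-matrix computation; a hedged sketch, with hypothetical function and variable names, not the authors' implementation:)

```python
import numpy as np

def nearest_external_proxies(task_emb, ext_emb, budget):
    """For each in-task embedding, find its most similar external sample,
    then keep the `budget` highest-similarity matches (hypothetical sketch)."""
    t = task_emb / np.linalg.norm(task_emb, axis=1, keepdims=True)
    e = ext_emb / np.linalg.norm(ext_emb, axis=1, keepdims=True)
    sims = t @ e.T                      # (n_task, n_ext) cosine similarities
    best_ext = sims.argmax(axis=1)      # nearest external sample per in-task one
    best_sim = sims.max(axis=1)
    keep = np.argsort(-best_sim)[:budget]
    return best_ext[keep]               # indices into the external pool
```

The dominant cost is the (n_task × n_ext) similarity matrix, which grows linearly with the external pool size, hence the pruning trade-off raised above.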
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The motivation for the method is a niche of a niche. I cannot see the union of these restrictions being a scenario encountered in practice. The requirements for the paradigm are:
- A large repository of unlabelled training data for this task does not yet exist to train the model on.
- A continual stream of training data for the task will become available over the course of the period of time where the model is trained (and the model subsequently refined as more data becomes available).
- A very large repository of publicly available data that is near-OOD to the domain of the task does exist.
- There is domain-shift in the continual stream of incoming data that is of a magnitude comparable to the domain shift between the stream of data and the pre-existing external data.
- Although it is fine to train our model on the continual stream of data when it arrives, for privacy reasons we want to periodically destroy the in-domain data we have collected.
This set of restrictions seems unlikely to occur in practice:
- For modalities other than vision, contrastive learning is often challenging to deploy due to its reliance on a robust, manually-curated, augmentation stack.
- For photographs of objects in the world, large datasets already exist (such as is used in the paper).
- For medical images, large near-OOD datasets are not available; furthermore, if you have the rights to train the model on data in a way that is secure and retains the privacy of that data, you do not lose those rights to access the data, so you can keep training on previously collected data.
- For personal images that are requested to be deleted from the company's database by the owner, models may be _required to forget_ the personal images, in which case catastrophic forgetting is advantageous! These requirements have created the nascent field of machine unlearning [[1]](https://arxiv.org/abs/1912.03817), [[2]](https://arxiv.org/abs/2308.07061), [[3]](https://unlearning-challenge.github.io/).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments. We address your questions or concerns below.
**Q1: The goal and setups of CCSSL**
> **A1:** In self-supervised learning, we expect the model to learn discriminative representations from the unlabelled data it has seen. In Continual Self-Supervised Learning (CSSL), the data to be seen is set as a non-stationary data stream with task intervals, and we expect the model to accumulate discriminative representations of all seen data as tasks arrive. Continual Contrastive Self-Supervised Learning (CCSSL) focuses on contrastive self-supervised methods in continual learning. In CCSSL, the concept of “a task” represents "a stationary data distribution", and by learning this task, the model can discriminately represent the data from this distribution. In our concrete experimental setups, we divide all the classes of the dataset equally across tasks, and the classes in each task do not overlap.
> CCSSL is crucial because it allows the model to retain the discriminative representation for old tasks while enhancing the representation for new tasks in continual training without training from scratch, enhancing the training efficiency in practical applications. Meanwhile, self-supervised learning offers more generalized representations than supervised learning, so SSL also has the advantage of less forgetting in continual learning research.
**Q2: Additional baseline: compare with a model trained solely on external data**
> **A2:** We appreciate your comments about useful baseline settings. We conduct experiments training the model using solely external data; the results are shown in the **PDF Table R1Q2**. Although the entire external dataset (CIFAR10, or a \~431K-image subset of CC3M) is used jointly to train the models, their performance is worse than that of our method.
**Q3: Additional baseline: change external data every epoch during Joint+ED experiment**
> **A3:** We also update the external data at each epoch in Joint+ED, achieving a result of 68.02, which is still close to Joint without ED (68.09). This indicates that BGE’s performance is not dependent on seeing all samples from ED.
**Q4: Why we can’t simply retain the previous tasks’ datasets**
> **A4:** Please see the common response.
**Q5: The experimental anticipated scenario**
> **A5:** Our evaluation aims to measure the performance of different methods in maintaining stable discriminative representations in continual self-supervised learning, rather than pursuing the best performance on each downstream task. Therefore, we use linear probing as a straightforward tool for the classification task without updating the backbone. As the reviewer suggested, we also evaluate the KNN classification accuracy in the **PDF Table R1Q5**, where incorporating BGE on top of PFR also improves the KNN accuracy. We will explore other evaluation methods in future work.
**Q6: Related information about external data budget $K$**
> **A6:** We report the budget $K$ value in our paper's L228 and L256. We set $K$ to 10K in the main experiments. We also conduct experiments varying $K$, under the setting of CIFAR100 4 tasks, using CIFAR10 as the external dataset (in the **PDF Table R1Q6**).
> As $K$ increases, the improvement of our method becomes more significant. However, it also introduces additional computational costs. Considering the trade-off between performance and efficiency, we choose the $K$ value 10K.
**Q7: Are the groupings of classes the same or randomized between experiments?**
> **A7:** To ensure fairness, groupings of classes were the same for all experiments.
**Q8: The cost, pruning, and sweep interval of external data selection sweep algorithm**
> **A8:** After extracting all data features, sweeping the external dataset with 13,000 in-task samples takes approximately 48 seconds per 100,000 external samples. Compared with training time (typically many hours), this cost is negligible.
> It is certainly possible to use a subset when the external dataset is too huge. In our paper's Table 6, the experiment on CC3M (total \~3M) just used a randomly chosen subset (\~431K). To further validate, we set the external data to ImageNet-1K (\~1.3M) or a randomly selected subset of it (\~100K), and the results are shown in the **PDF Table R1Q8(a)**. These results demonstrate that using a subset of a large external dataset yields comparable results to using the full dataset.
> We also explored the impact of updating external data at different intervals, setting the update interval to 100 epochs and 200 epochs, with results shown in the **PDF Table R1Q8(b)**. The results indicate that more frequent updates do not improve BGE's performance. Considering that the external data must compensate for the inter-task comparison, we update external data at the end of each task's training.
**Q9: Discuss the limitation**
> **A9**: Please see the common response.
**Q10: Statistical significance**
> **A10:** We report the statistical significance of primary experiments in Appendix A.2.6. Experiments in Tables 1 and 2 are all performed with the same seed and the same ordering of tasks, to eliminate the effect of hidden variables.
**Q11: t-SNE Figure**
> **A11:** To ensure fairness, we set all parameters of each t-SNE plot strictly consistent. We show the t-SNE plots to visualize the problem of inter-task confusion and the effect of our solution in an intuitive and illustrative manner, and to achieve complementarity with the quantitative results in the subsequent tables.
**Q12: Writing comments**
> **A12:** We appreciate your valuable suggestions about figures, tables, typography, and citations, and we will revise them carefully.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their response, and for running several initial experiments along the lines of the additional baselines I recommended. These are encouraging for the effectiveness of the proposed method. The additional results showing the effect of changing the external data budget are also useful to see. If the costs are low, as the authors indicate, then these appear to motivate an increase to $K=20000$, in my opinion.
I note that the method proposed by the authors has two names in the paper, BGE and One-Propose-One. I think a more apt name might be "replay by proxy", or "surrogate replay", or something along these lines. When viewing the method as a replay-based method that uses a proxy sample instead of the original sample, the need for an "oracle" baseline that uses the original samples for replay instead is made more stark. Random should be a lower-bound to OPO, whilst a model that replays the original samples should serve as an upper-bound to OPO. This would be especially useful to have in order to indicate how far the performance is between these two bounds - are the proxies just as good as the originals?
---
Reply to Comment 1.1.1:
Comment: Thank you for your valuable feedback!
> Comparison of BGE and the "oracle" baseline
We appreciate your comments about the comparison with the oracle baseline. To validate the effectiveness of our method, we compared the results of BGE using CIFAR10 or ImageNet as external data with the oracle baseline on the "CIFAR100 PFR" setup, and the results are as follows:
CIFAR10 as external dataset:
4 tasks:
| Budget | 0 | 2000 | 5000 | 10000 |
| ------ | ----- | ----- | ----- | ----- |
| BGE | 60.92 | 61.96 | 62.79 | 64.37 |
| Oracle | 60.92 | 62.31 | 64.19 | 65.21 |
10 tasks:
| Budget | 0 | 2000 | 5000 | 10000 |
| ------ | ----- | ----- | ----- | ----- |
| BGE | 55.57 | 58.41 | 59.66 | 61.02 |
| Oracle | 55.57 | 59.16 | 60.66 | 61.75 |
ImageNet as external dataset (4 tasks):
| Budget | 0 | 10000 |
| ------ | ----- | ----- |
| BGE | 60.92 | 64.75 |
| Oracle | 60.92 | 65.21 |
In most cases, the difference in performance improvement between BGE and Oracle is relatively small (<1%), which proves that with adequate external data, BGE is capable of achieving performance improvement close to Oracle without preserving old data.
We have also demonstrated that our method outperforms the random baseline in the initial submission. We appreciate the suggestion that a more apt name might be "replay by proxy", "surrogate replay", or something along these lines. We will incorporate this idea into the final version by explaining our method's connections to both the random and oracle baselines.
Rebuttal: We thank all reviewers for their thoughtful feedback. We have carefully read all the comments and summarize the recognition of our work as follows:
| Reviewer | Aspect | Comments |
|-|-|-|
| hAsz&oaUg&xWKh | Finding's Novelty| a **significant** but often **overlooked** problem in CCSSL; The finding that existing regularization-based CCSSL methods **overlook** inter-task discrimination is **novel**; I **appreciate** the idea... |
| hAsz&oaUg | Method's Novelty| ...a **creative** solution; the proposed method leveraging external data is **novel**. |
| gaeT&xWKh | Method's Description | The work is generally **well presented**; This paper is basically well-organized and **easy to follow**. |
| gaeT&hAsz&oaUg | Evaluation| These results help to inform where their approach provides value; ...**extensive experimental results** that validate the effectiveness of BGE; The experiments performed in the analysis section (Sec. 4.3)...are **interesting** to me. |
Due to the character limit, the table data of the responses to some reviewers **(R-gaeT, R-oaUg)** had to be put into the **PDF**. We apologize for the inconvenience.
To address common questions from multiple reviewers about privacy concerns and external data quality, we offer common responses below:
## Common response to R-gaeT, R-hAsz, R-oaUg: More explanation about privacy concerns
**Q1: Why we can’t simply retain the previous tasks’ datasets?**
> **A1:** Under the constraints of continual learning, we cannot save the entire old task data. Some continual learning methods overcome catastrophic forgetting by saving a small subset of the old task data and replaying it in subsequent tasks (replay-based method). However, even if only a subset of previous data is saved, storing the data may lead to sensitive information privacy concerns.
> The privacy concerns we refer to in the paper arise primarily from the direct storage of in-task data. When data contain sensitive information, we do not want them to be saved and used continuously. However, if a replay-based method saves in-task data that happens to contain sensitive information, it will not achieve the desired result once that data must be deleted.
> Another perspective on privacy concerns lies in the accessibility of the data. We may not have access to previous task data. For example, when we continue to train a model based on other people's open-source checkpoint, its training data may not be released together. At this point, it is not possible to continue training the model using replay-based methods, whereas BGE can still be used.
> In summary, simply saving previous data cannot solve all the problems of continual learning. Studying continual learning methods with the restriction of not saving previous data makes sense, and the barrier of refusing to save any previous data is not maintained only in our work. In recent years, works \[1]\[2]\[3] that do not save previous data have received more attention, turning into a mainstream research paradigm.
> [1] Self-sustaining representation expansion for non-exemplar class-incremental learning. K Zhu et al. CVPR 2022.
>
> [2] Hierarchical decomposition of prompt-based continual learning: Rethinking obscured sub-optimality. L Wang et al. NeurIPS 2023.
>
> [3] Task-Adaptive Saliency Guidance for Exemplar-free Class Incremental Learning. X Liu et al. CVPR 2024.
**Q2: How does BGE handle the potential privacy concerns that may arise from using external data?**
> **A2:** We can limit the selection of external data to publicly available datasets on the Internet, and refer to the license and other statements of each dataset to select data that are allowed to be publicly used, and they usually do not have privacy risks.
> If, in some special cases, using publicly available datasets as external data cannot yield the desired results, and we therefore have to collect the data ourselves, we can introduce techniques from other domains (e.g. privacy filtering, differential privacy, machine unlearning) that are complementary to our method to address privacy concerns. Since the number of candidate external data is abundant, even after filtering, some useful external data can still be retained.
> From the perspective of data accessibility, if we choose to use publicly available external datasets, there are no accessibility concerns.
> Indeed, during learning, the model encodes the learned knowledge into its parameters, which may also pose potential privacy concerns, but this problem arises due to the training process rather than data storage, therefore usually requires approaches from other domains such as machine unlearning to address.
## Common response to R-gaeT, R-xWKh: Discuss the limitations of external datasets
> Thanks for the summarization of BGE's limitations, but we would like to clarify that BGE's limitations on external data are not that strict.
> In terms of scale, external data comparable to the in-task data can yield promising results. In terms of data quality, BGE does not require all external data to be of the same domain as the in-task data, as shown in our ImageNet100 experiments using the DomainNet dataset as the external dataset. When the domains of external data are complicated, the OPO sampling algorithm helps align external data with the in-task domain. For tasks with scarce data (e.g., medical images), the real external datasets are limited, but thanks to the development of image generation, methods for generating scarce data have been widely explored. We also show the compatibility of BGE for generated data in our paper’s Table 6. Finally, since in real-world continual learning, we cannot know the specific target of the next task, even if a large-scale image dataset already exists, training directly on it does not necessarily yield positive results on the task-specific target. Our work precisely helps utilize large-scale existing datasets effectively for continual learning.
Pdf: /pdf/654b77cabf4c61df4b38f8a6ae82eca8a5bb4b95.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
RaVL: Discovering and Mitigating Spurious Correlations in Fine-Tuned Vision-Language Models | Accept (poster) | Summary: This paper introduces RAVL, a method designed to identify and address spurious correlations in Vision-Language Models (VLMs). The authors emphasize two key areas: (a) prioritizing local image features over global image-level features, and (b) concentrating on the fine-tuning phase rather than the pre-training phase. They implement the first idea through the following steps: 1. Utilizing the VLM embedding space to extract candidate image features from the validation set. 2. Filtering these candidate features to pinpoint those that directly contribute to classification errors. 3. Ranking the remaining image features based on the extent of their learned spurious correlations. To mitigate spurious correlations, the authors regularize the dissimilarity between the spurious regions and correlated labels while encouraging high embedding similarity.
Strengths: 1. The paper is well-structured and easy to follow.
2. The proposed framework is sensible and, to the best of my knowledge, novel. I believe its contribution is slightly above the acceptance borderline.
3. The enhancement of performance quantitatively demonstrates the proposed method. It is notable.
Weaknesses: 1. This framework is limited to mitigating geometric spurious correlations by focusing on regional candidates. While some studies such as [1] highlight texture and background information as significant contributors to spurious correlations, it would be more desirable to address all potential sources of spurious correlations.
2. Although they used RoIAlign, there are no qualitative visualization results to show which regions affect which classes. I believe this would be important for understanding the model's behavior from a human vision perspective.
[1] Geirhos, Robert, et al. "ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness." arXiv preprint arXiv:1811.12231 (2018).
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What if this framework, instead of using RoIAlign, considers feature maps as candidates and filters them channel-wise? If this approach is effective, it could provide a more general framework from the perspective of Weakness 1, as feature maps are activated differently, representing various features [1].
2. Why did the authors use K-Medoids clustering? Additionally, I am curious about the authors' choice to use softmax-normalized embeddings. Feature maps might offer more diverse information for comparison during clustering.
3. Is this framework robust on batch size?
[1] Jeon, Myeongho, Myungjoo Kang, and Joonseok Lee. "A Unified Framework for Robustness on Diverse Sampling Errors." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors discussed the limitations and potential negative societal impacts of their work in the Appendix. I fully agree with their assessment.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer ONXK for reviewing our work and providing helpful feedback.
**[Q1] Assumption of region-level spurious correlations.**
We thank the reviewer for raising this point. In line with prior works in vision-only and vision-language settings [1,2,3], RaVL is specifically designed to surface and mitigate fine-grained, region-level spurious features. As we acknowledge in the Appendix (Lines 779-787), we agree with the reviewer that there are some sources of spurious signal that do not manifest in this way (e.g. global, image-level features like texture or background information as mentioned in Geirhos et al.). However, we focus explicitly on region-level spurious features due to the fact that such spurious correlations have been shown to negatively affect model performance in many real-world, practical settings; some notable examples include image-level markings in dermoscopic images [8], medical devices in radiographs [9], and text markers in medical images [10]. As a result, we believe our work effectively complements work done by Geirhos et al. and others by targeting an important yet underresearched category of spurious correlations (namely, region-level features) that have been shown to contribute to significant model performance gaps.
**[Q2] Visualizations.**
Thank you for this suggestion. We refer the reviewer to the PDF provided with the general response. In Rebuttal Figure 2, we demonstrate the utility of our mitigation approach by providing qualitative visualizations using GradCAM. Visualizations show that after the RaVL mitigation procedure, the VLM $\mathcal{M}_{new}$ more precisely relies on core features.
We also note that in order to quantitatively evaluate the extent to which the VLM understands fine-grained region-level information, we introduced the Region Overall and Region Worst Group metrics in Section 4.2. Results in Table 3 show that RaVL contributes to improvements on these metrics, suggesting better understanding of region-level information.
**[Q3] Use of feature maps.**
We agree that feature maps serve as a rich source of signal and can be leveraged for identifying and addressing spurious correlations. Indeed, several works have explored this direction by identifying neural features that influence classification decisions and then using humans in the loop to annotate feature maps as core or spurious [2,11]. Although this approach was shown to work well, a key consideration is the need for human supervision in order to interpret feature maps and distinguish between core and spurious features.
In our work, we used ROIAlign and region-level embeddings rather than feature maps for the following reasons:
- Enable automated analysis: Our approach is designed to operate without the need for human supervision, enabling generalizability to datasets from diverse domains (e.g. medical image data). Using region-level embeddings enables us to perform automated detection and mitigation of spurious correlations via our novel clustering approach and our region-aware loss function. On the other hand, using feature maps will likely require human supervision, as shown by [2] and [11].
- Perform dimensionality reduction: Whereas feature maps may be large in size with many channels, region-level embeddings exhibit low dimensionality while still capturing relevant semantic information. With regards to the reviewer's point about the clustering stage, using softmax-normalized region-level embeddings can enable more efficient and accurate clustering in comparison with using large feature-maps that encompass diverse features.
- Target fine-grained spurious correlations: As discussed in [Q1], our goal is to specifically detect and mitigate fine-grained, region-level spurious correlations. Using local region-level embeddings rather than global feature maps helps us accomplish this goal.
**[Q4] Clustering approach.**
We used K-medoids clustering for the following reasons:
- Robustness to outliers: When compared to a standard K-Means clustering approach, K-Medoids is less sensitive to the presence of outliers due to the fact that cluster centroids are medians rather than means. Robustness to outliers is important in our setting, since region-level embeddings represent visually-diverse features and there is likely to be significant variation across samples.
- Ability to operate with custom distance functions: Whereas K-Means traditionally minimizes the Euclidean distance between the centroid and the samples in a cluster, K-Medoids can be implemented with custom distance functions. Due to the fact that the original VLM $M$ is trained with respect to cosine distance, we implement K-Medoids with a cosine distance metric in this work in order to effectively model the patterns learned by $M$.
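For illustration, a minimal sketch of K-Medoids with a cosine distance function, in the spirit of the two points above (simplified; the deterministic farthest-point initialisation shown here is an assumption, not our exact implementation):

```python
import numpy as np
from scipy.spatial.distance import cdist

def kmedoids_cosine(X, k, n_iter=50):
    """Minimal alternating K-Medoids over a precomputed cosine distance matrix."""
    D = cdist(X, X, metric="cosine")          # pairwise cosine distances
    # Deterministic greedy farthest-point initialisation (illustrative choice).
    medoids = [0]
    while len(medoids) < k:
        medoids.append(int(D[:, medoids].min(axis=1).argmax()))
    medoids = np.array(medoids)
    for _ in range(n_iter):
        labels = D[:, medoids].argmin(axis=1)  # assign points to nearest medoid
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.flatnonzero(labels == c)
            if len(members):
                # Medoid = the member minimising total distance to its cluster.
                new_medoids[c] = members[D[np.ix_(members, members)].sum(axis=0).argmin()]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, D[:, medoids].argmin(axis=1)
```

Because medoids are actual data points selected under the chosen distance, outliers shift them less than they would shift K-Means centroids.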
**[Q5] Batch Size.**
In order to train model $M_{new}$, we utilize a combination of the CLIP loss $L_{CL}$ and our region-aware loss $L_R + L_A$. Both loss functions are contrastive in nature and require a large set of negative samples in a batch in order to learn useful representations. As a result, our framework is sensitive to batch size; this is expected behavior [12] and is consistent with a long, existing line of work on contrastive learning methods (e.g. [7]). We note that for all results reported in Table 3, we keep batch size consistent across methods in order to enable fair comparisons. Below, we provide a brief batch size ablation on a single COCO evaluation setting. As expected, significant reductions in the batch size reduce efficacy of contrastive learning.
| Batch Size | Method | Image Worst-Group |
|----------------------|------------------|------------------|
| 128 | Standard FT | 51.5 |
| 128 | RaVL (Ours) | **62.1** |
| 32 | Standard FT | 39.4 |
| 32 | RaVL (Ours) | **48.3** |
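The batch-size sensitivity discussed above comes from the contrastive objective itself. Here is a hedged numpy sketch of a symmetric CLIP-style InfoNCE loss (an illustrative re-implementation, not the paper's training code): every other sample in the batch acts as a negative, so shrinking the batch shrinks the set of negatives.

```python
import numpy as np

def clip_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch; matched image-text pairs sit
    on the diagonal of the similarity matrix, all other batch entries
    serve as negatives."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature       # (B, B) similarity matrix
    labels = np.arange(len(logits))          # matched pairs on the diagonal

    def xent(l):
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # average the image-to-text and text-to-image directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

With only a handful of rows in `logits`, the softmax denominator contains few negatives, which is the intuition behind the degraded batch-size-32 results in the table above.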
We again thank Reviewer ONXK for their review of our manuscript and their positive overall assessment of our work. We hope that the above responses adequately address all concerns.
---
Rebuttal Comment 1.1:
Title: Response to Author Feedback
Comment: Thank you for your feedback. However, after further consideration, I have become less favorable toward this submission and have decided to downgrade my initial score for the following reasons:
- In response to Q1, the authors suggest that the empirical results are a key justification. However, with only a brief experiment provided in Q5, I am not convinced that RaVL outperforms other baselines across various batch sizes. The impact of batch size is not a trivial factor.
- Thank you for including the additional qualitative experiment. However, I find the statement "The new VLM more precisely relies on core features" unclear. Could you please specify which core features you are referring to?
- (i) Is it not possible to mitigate shortcut correlations (SCs) in an automated manner using feature maps? (ii) This makes sense. (iii) Why do you believe that feature maps represent global features? Given the receptive field, feature maps typically capture local features within the feature space (e.g., the first channel corresponds to the top-left, while the last channel corresponds to the bottom-right).
- The justification provided for the use of K-medoids clustering seems appropriate to me. | Summary: Vision-language models (VLMs) tend to exhibit poor zero-shot performance when compared to task-specific models. However, fine-tuned VLMs may capture spurious correlations in domain-specific datasets which may be small in size. The paper proposes an automated spurious correlation detection and mitigation method for fine-tuned VLMs. The method first discovers spurious correlations by leveraging a region-level clustering approach to identify precise image features contributing to zero-shot classification errors. Then, it mitigates the identified spurious correlation with a region-aware loss function that enables the VLM to focus on relevant regions and ignore spurious relationships during fine-tuning. Experiments demonstrate the effectiveness of this method on general domain and medical-domain VLMs.
Strengths: - The paper is well-written. Key assumptions are clearly stated and the proposed method is easy to follow.
- The paper proposes a useful method called RAVL to discover and mitigate fine-grained spurious correlations that can be easily interpreted by humans.
- Experiments in the controlled and uncontrolled settings demonstrate that RAVL effectively discovers and mitigates spurious correlations between image features and textual attributes.
Weaknesses: There is a lack of computational complexity analysis. To detect fine-grained spurious correlations, RAVL needs to segment images into multiple regions, cluster all the regions per class, and calculate the contributions of each cluster to mispredictions. Therefore, the computational complexity seems high. It would be better to analyze the time cost of the proposed method.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How to obtain region-level labels when constructing the evaluation dataset (Line 220)?
2. How to determine the hyperparameters used in the spurious correlation discovery and mitigation stages?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer ACX1 for reviewing our work and providing helpful feedback.
**[Q1] Computational complexity. “There is a lack of computational complexity analysis. To detect fine-grained spurious correlations, RAVL needs to segment images into multiple regions, cluster all the regions per class, and calculate the contributions of each cluster to mispredictions. Therefore, the computational complexity seems high. It would be better to analyze the time cost of the proposed method.”**
We refer the reviewer to General Response [Q1], where we address this point.
**[Q2] Region-level labels for evaluation. “How to obtain region-level labels when constructing the evaluation dataset (Line 220)?”**
In order to evaluate RaVL, we create 654 evaluation settings from two domains: (1) synthetic data (MNIST and FashionMNIST) and (2) real-world data (COCO). Each evaluation setting includes an *evaluation dataset* with images $I_i$, class labels $y_i$, region bounding boxes $R_i$, and region-level labels $L_i$. We obtain region-level labels as follows:
- For our synthetic data settings, we construct each image $I_i$ to include four equally-sized quadrant regions $\mathcal{R}_i$. We randomly select one quadrant to include the core object (i.e. digit or fashion item) associated with the class label $y_i$. In some images, we randomly select another quadrant to include the pre-defined spurious feature $e^{eval}$, which is a red rectangle in this case. Since these images are synthetically generated, we have a priori knowledge of the features in each quadrant, which we use to generate the region-level label set $L_i$.
- For our real-world settings, the set $\mathcal{R}_i$ consists of the ground-truth bounding boxes annotated in COCO and $L_i$ consists of the associated object labels. We note that recent advances in open-set object detection (e.g. Recognize Anything) can enable this procedure to be extended to other datasets even if ground-truth bounding boxes and labels are not available.
We additionally emphasize here that the RaVL discovery and mitigation approaches do not require region-level labels, and users who wish to use RaVL do not need access to region-level labels. Here, we exclusively use region-level labels within our evaluation framework in order to quantitatively assess performance.
**[Q3] Hyperparameter selection. “How to determine the hyperparameters used in the spurious correlation discovery and mitigation stages?”**
Hyperparameters for the discovery stage (RaVL Stage 1) are selected as follows:
- *Number of clusters*: RaVL includes a clustering step to identify groups of visually-similar regions. We do not manually set this hyperparameter; rather, we leverage an automated approach for selecting the optimal number of clusters. For each evaluation setting, we sweep all cluster sizes ranging between $|\mathcal{Y}| * 2$ and $|\mathcal{Y}| * 5$ where $|\mathcal{Y}|$ represents the size of the class label set; we then select the optimal number of clusters using Silhouette scores. We select these bounds to be larger than the class label set size by several multiples in order to ensure that clusters adequately separate distinct features. Prior works have also utilized overclustering approaches for this objective [5,6]. Users can adjust the bounds based on the composition of their dataset; for instance, complex datasets with diverse features may require a larger range.
- *Threshold for pruning cluster influence scores ($\tau_l$)*: In order to identify candidate image features that directly contribute to classification errors, we prune all clusters with low cluster influence scores below a threshold $\tau_l$. We set $\tau_l$ to 0.25 in this work. This hyperparameter was determined empirically based on experiments on a small set of validation data.
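The cluster-count sweep described above could be sketched as follows (illustrative only; K-Means is used here as a stand-in clusterer, and the bounds mirror the $|\mathcal{Y}| * 2$ to $|\mathcal{Y}| * 5$ range):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def pick_num_clusters(embeddings, num_classes, lo_mult=2, hi_mult=5, seed=0):
    """Sweep cluster counts in [|Y|*lo_mult, |Y|*hi_mult] and keep the
    count with the best Silhouette score."""
    best_k, best_score, best_labels = None, -1.0, None
    for k in range(num_classes * lo_mult, num_classes * hi_mult + 1):
        labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(embeddings)
        score = silhouette_score(embeddings, labels)
        if score > best_score:
            best_k, best_score, best_labels = k, score, labels
    return best_k, best_labels
```

On well-separated data, the Silhouette score peaks at the true number of groups, which is the overclustering selection criterion the response describes.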
Hyperparameters for the mitigation stage (RaVL Stage 2) are selected as follows:
- *Contrastive loss temperature $\tau$*: We set the contrastive loss temperature to a default value of 0.07 based on prior work [7].
- *Loss weighting factor $\lambda$*: The loss weighting factor $\lambda$ balances the tradeoff between the original CLIP loss function $L_{CL}$ and our novel region-aware loss function $L_{R} + L_{A}$. We set $\lambda$ to 0.8 in this work, which upweights the CLIP loss function $L_{CL}$. This weighting factor ensures that the model $\mathcal{M}_{new}$ learns accurate global image-text relationships without capturing fine-grained spurious correlations. We determined this hyperparameter based on experiments on a small set of validation data. We additionally note here that the "Upsampled FT" baseline in Table 3 is an ablation where $\lambda$ is set to 1 (and the region-aware loss function is not utilized); this demonstrates the efficacy of our loss function.
We note that like all hyperparameters, these values could likely be further optimized; however, even without extensive hyperparameter tuning, we achieve competitive results. Additional details on hyperparameters are provided in Appendix B, C, and D.
We again thank Reviewer ACX1 for their review of our manuscript and their positive overall assessment of our work. We hope that the above responses adequately address all concerns.
---
Rebuttal Comment 1.1:
Comment: Thanks for your detailed responses. My concerns are well addressed. | Summary: This paper tackles spurious correlations between image features and textual attributes in fine-tuned VLMs. It proposes an approach to discover and mitigate spurious correlations using local image features (image regions rather than a whole image). Experiments are done in both controlled settings and realistic settings.
Strengths: Discovering and mitigating spurious correlations is an interesting problem. The approach of looking into image regions is sound.
The paper is generally well written.
Weaknesses: 1) Looking into the regional features may make the approach computationally expensive. Computational complexity analysis is missing in the paper.
2) A lot of experiments were done under controlled settings. More evaluations in the wild would make the results stronger.
3) A small error in writing, line 259 says “are designed for unimodal settings” - ref [50] is designed for multi-modal settings.
Technical Quality: 3
Clarity: 3
Questions for Authors: Have you tried the approach on other settings, other than fine-tuned VLM? For example, spurious correlations can happen in other vision classification models as well.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: There are discussions on limitations in the supplementary material.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer LC5I for reviewing our work and providing helpful feedback.
**[Q1] Computational complexity. “Looking into the regional features may make the approach computationally expensive. Computational complexity analysis is missing in the paper.”**
We refer the reviewer to General Response [Q1], where we address this point.
**[Q2] In-the-wild experiments. “A lot of experiments were done under controlled settings. More evaluations in the wild would make the results stronger.”**
Thank you for this suggestion, and we agree about the importance of real-world validation. However, we note that it is challenging to quantitatively evaluate the accuracy of discovery and mitigation approaches, since the ground-truth spurious correlations learned by a VLM are typically unknown. As a result, we opted to use controlled evaluations, which artificially induce spurious correlations and then quantitatively evaluate the extent to which the correlation can be detected and mitigated. Importantly, using controlled evaluations enables us to **perform large-scale, quantitative evaluations of RaVL without using humans in the loop**. We believe that this procedure is essential before scaling to unlabeled real-world datasets.
In addition to these controlled quantitative evaluations, our original manuscript provided qualitative results from several in-the-wild evaluations (Section 3.3 and Figure 3). In response to the reviewer's suggestion for expanding these analyses, we have extended this analysis with additional qualitative evaluations. We refer the reviewer to the PDF provided with the general response above. In Rebuttal Figure 1, we show the following:
- For the OpenCLIP ViT-L/14 model, RaVL surfaces a feature cluster consisting of green plants and fences. We observe a performance gap of 18.3 points between images with class label *"outdoor chicken coop"* that contain the RaVL-identified feature and those that do not contain the feature. This suggests that the OpenCLIP ViT-L/14 model can better classify an *"outdoor chicken coop"* scene when green plants and fences are present.
- For the CLIP ViT-B/32 model, RaVL surfaces a feature cluster consisting of people. We observe a performance gap of 24.3 points between images with class label *"pub (indoor)"* that contain the RaVL-identified feature and those that do not contain the feature. This suggests that the CLIP ViT-B/32 model can better classify a *"pub (indoor)"* scene when people are present.
- For the OpenCLIP ResNet-101 model, RaVL surfaces a feature cluster consisting of chairs. We observe a performance gap of 23.3 points between images with class label *"restaurant patio"* that contain the RaVL-identified feature and those that do not contain the feature. This suggests that the OpenCLIP ResNet-101 model can better classify *"restaurant patio"* scenes when chairs are present.
**[Q3] References. “A small error in writing, line 259 says “are designed for unimodal settings” - ref [50] is designed for multi-modal settings.”**
Thank you for raising this point. In Line 259, the phrase "designed for unimodal settings" is intended to refer solely to Distilling Failures, George, and Domino (referenced in Line 257). We agree with the reviewer that Spurious-Aware Detection (reference [50]) is not unimodal, and we will be sure to improve the clarity of this sentence in the final version of our manuscript.
**[Q4] Extending beyond VLMs. “Have you tried the approach on other settings, other than fine-tuned VLM? For example, spurious correlations can happen in other vision classification models as well.”**
Thank you for this suggestion, and we agree that spurious correlations can occur in many types of models. In this work, we focus specifically on the setting of fine-tuned vision-language models because fine-tuned VLMs (i) are becoming increasingly commonplace, particularly in domain-specific applications like medicine, (ii) are likely to be trained on small domain-specific datasets, preventing models from gaining the robustness benefits associated with web-scale training and increasing the likelihood of learning spurious correlations, and (iii) are an underresearched yet important problem setting. In order to address spurious correlations in the fine-tuned VLM setting, RaVL assumes the existence of paired language; consequently, RaVL utilizes text embeddings for both discovery (RaVL Stage 1) and mitigation (RaVL Stage 2).
In the context of vision-only classification models where the language modality is not present, our discovery approach can be modified by computing the image score distribution vector $s_{I_i}$ and the region score distribution matrix $S_{R_i}$ using softmax-normalized class logits. The mitigation approach can be modified by framing the region-aware contrastive loss function as a cross entropy loss between region-level logits and class labels. We aim to explore these directions in future work.
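A hedged sketch of the vision-only adaptation mentioned above: the class count and logit values below are invented for illustration, and the variables only loosely follow the paper's $s_{I_i}$ / $S_{R_i}$ notation.

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax along the given axis."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical classifier logits for one image (3 classes) and its 4 regions
image_logits = np.array([2.0, 0.5, -1.0])
region_logits = np.array([[1.5, 0.2, -0.5],
                          [0.1, 2.2, -0.3],
                          [0.0, 0.0, 0.0],
                          [-1.0, 0.4, 1.8]])

s_I = softmax(image_logits)            # image score distribution vector
S_R = softmax(region_logits, axis=1)   # region score distribution matrix
```

Each row of `S_R` is a per-region class distribution, which could stand in for the VLM similarity-based scores when no language modality is available.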
We again thank Reviewer LC5I for their review of our manuscript and their positive overall assessment of our work. We hope that the above responses adequately address all concerns.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed rebuttal.
---
Rebuttal 2:
Comment: Thanks again for the rebuttal.
This paper is based on the assumption that, “a model M that has learned a spurious correlation between an image feature e_a and a textual attribute y will demonstrate low zero-shot performance on (i) images in D_V with label y without the feature e_a and (ii) images in D_V with other labels Y \ {y} with the feature e_a. ” (e.g., see lines 124 - 127 in the paper)
I have one more question on the paper. For some “true” image features, when the following are true, they may also appear to have the property (e.g., see lines 124 - 127 in the paper) mentioned above.
1) The “true” image features only appear in a subset of the class y. This can happen because not all images in the same class have the same features.
2) The training data and test data are sampled differently: the “true” features only appear (or appear more often) in the training data with class y, and appear less often in the test data with class y.
How do you distinguish between spurious image features and the “true” image features with different appearing frequencies in training and test data?
---
Rebuttal Comment 2.1:
Title: Response to Reviewer LC5i
Comment: Below, we respond to the points raised by Reviewer LC5i during the discussion period.
**Definition of spurious correlations. “For some “true” image features, when the following are true, they may also appear to have the property (e.g., see lines 124 - 127 in the paper) mentioned above. (1) The “true” image features only appear in a subset of the class y. This can happen because not all images in the same class have the same features. (2) The training data and test data are sampled differently: the “true” features only appear (or appear more often) in the training data with class y, and appear less often in the test data with class y. How do you distinguish between spurious image features and the “true” image features with different appearing frequencies in training and test data?”**
Our definition of spurious correlations is consistent with prior work (e.g. [1,2]). In reference to the specific cases listed by the reviewer above, it is standard practice to consider a "true" image feature as one that appears consistently within the class. This means that “true” features will (1) appear in all images rather than just a subset and (2) will be consistently associated with the class label $y$ at both training and test time. For example, when considering images from the class “bird”, it is reasonable to assume that all images will include the true features of feathers, wings, and a beak, which are defining characteristics of birds. This will be true for all birds, independent of whether they are in a training or testing dataset.
We refer the reviewer to Singla et al. [2], which formally defines true features as those that are "always a part of the" class definition; meanwhile, spurious features are defined as those that "are likely to co-occur" with the true features but are not a part of the class definition. We follow these definitions in our work. We demonstrate with 654 evaluations in both synthetic and real-world settings that our approach can differentiate between spurious features and true features, as shown in Figure 2 and Table 1; we corroborate this further with in-the-wild experiments.
We hope that this clarifies the definitions of true and spurious features.
[1] Eyuboglu et al. “Domino: Discovering Systematic Errors with Cross-Modal Embeddings” ICLR 2022.
[2] Singla, S. et al. Salient ImageNet: How to discover spurious features in Deep Learning? ICLR, 2022.
---
Rebuttal 3:
Comment: Thanks for the response. My concerns have been well addressed. | null | null | Rebuttal 1:
Rebuttal: We thank the reviewers for their thoughtful review of our manuscript. We were encouraged to see that all reviewers rated our work as a "technically solid, moderate-to-high impact paper". Reviewers also found the paper to be "well-written" and "well-structured" (Reviewers LC5I, ACX1, ONXK); the problem setting to be "interesting" (Reviewer LC5I); and our proposed approach to be "sound", "novel", and "easy to follow" with "notable" performance improvements (Reviewers LC5I, ONXK, ACX1).
In response to feedback, we provide a general response here to points raised by multiple reviewers, individual responses below to address each reviewer’s concerns, and an attached one-page PDF with new figures.
**[Q1] Reviewers LC5I and ACX1 asked for additional analysis on the computational complexity of RaVL.**
We designed our approach to be computationally inexpensive; in particular, the discovery stage can be run efficiently on CPU and the mitigation stage adds only a small computational overhead. Although RaVL does involve multiple stages as noted by reviewer ACX1, our procedure delivers the significant advantage of enabling **fully-automated analysis** of spurious correlations; in contrast, several recent works on fine-grained robustness (both in the vision-only and vision-language settings) have leveraged humans-in-the-loop [1,2,3]. Below, we provide an analysis of computational complexity for each stage of RaVL.
*Computational complexity analysis of RaVL Stage 1*: In response to points from Reviewer ACX1 about the discovery stage (RaVL Stage 1), we note that our approach is specifically designed to be run on a labeled validation dataset $\mathcal{D_V}$; in real-world settings, validation datasets are often relatively small in size due to the human effort needed for securing labels, rendering this stage as computationally inexpensive for diverse applications. Even if the validation dataset is large in size, RaVL operates efficiently as follows:
- First, RaVL preprocesses images by decomposing each image into candidate regions; there are a variety of ways in which a user can decompose an image into regions, such as by using equal-sized segments (e.g. quadrants) or running inference with region proposal networks (RPNs). Both methods are inexpensive and only need to be run once in an offline manner. Similar approaches have been applied to large-scale datasets in prior work [4].
- Then, embeddings need to be generated for each region, which can be done by utilizing VLM $\mathcal{M}$ for inference (forward passes only). Across a set of 10 FashionMNIST and COCO evaluation settings, we observe embedding generation to take a mean of 24.5 seconds on a single A100 GPU.
- Finally, given candidate regions and corresponding embeddings, the remainder of the RaVL discovery procedure (clustering and computation of metrics) can be run completely on CPU. Across a set of 10 evaluation settings on COCO and FashionMNIST, we observe that clustering and computation of metrics require a mean of 3.4 seconds to run on a single A100 GPU.
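As a minimal example of the equal-sized region decomposition mentioned above (a sketch, not the paper's preprocessing code):

```python
import numpy as np

def quadrant_regions(image):
    """Split an (H, W, C) image into four equal quadrant regions; a one-pass,
    CPU-only operation consistent with the offline preprocessing described."""
    h, w = image.shape[0] // 2, image.shape[1] // 2
    return [image[:h, :w], image[:h, w:], image[h:, :w], image[h:, w:]]
```

Region-proposal networks would replace this function when non-grid candidate regions are desired.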
*Computational complexity analysis of RaVL Stage 2*: The mitigation stage (RaVL Stage 2) requires finetuning a VLM $M_{new}$. Across a set of 10 evaluation settings on COCO and FashionMNIST, we observe that the inclusion of our fine-grained region-aware loss function at this stage adds an average of 0.15 seconds per training step (on a single A100 GPU) in comparison to the original fine-tuning procedure for $M$.
We will be sure to update our manuscript with this computational complexity analysis.
We would again like to thank all reviewers for their time and feedback, and we hope that our responses adequately address all concerns.
**References:**
[1] Yang, Y., et al. Mitigating Spurious Correlations in Multi-modal Models during Fine-tuning. ICML, 2023.
[2] Singla, S. et al. Salient ImageNet: How to discover spurious features in Deep Learning? ICLR, 2022.
[3] Moayeri, M. et al. Hard ImageNet: Segmentations for Objects with Strong Spurious Cues. NeurIPS, 2022.
[4] Zhong, Y. et al. RegionCLIP: Region-based Language-Image Pretraining. CVPR, 2022.
[5] Eyuboglu et al. “Domino: Discovering Systematic Errors with Cross-Modal Embeddings.” ICLR 2022.
[6] Sohoni et al. “No subclass left behind: Fine-grained robustness in coarse-grained classification problems.” NeurIPS 2020.
[7] Radford et al. “Learning Transferable Visual Models From Natural Language Supervision.” ICML 2021.
[8] Winkler, J. K., et al. Association between surgical skin markings in dermoscopic images and diagnostic performance of a deep learning convolutional neural network for melanoma recognition. JAMA dermatology, 2019.
[9] Oakden-Rayner, L., et al. Hidden stratification causes clinically meaningful failures in machine learning for medical imaging. ACM Conference on Health, Inference, and Learning (CHIL), 2020.
[10] DeGrave et al. AI for radiographic COVID-19 detection selects shortcuts over signal. Nature Machine Intelligence, 2021.
[11] Moayeri et al. Spuriosity Rankings: Sorting Data to Measure and Mitigate Biases. NeurIPS, 2023.
[12] Chen et al. “Why do We Need Large Batchsizes in Contrastive Learning? A Gradient-Bias Perspective.” NeurIPS 2022.
Pdf: /pdf/12c2ff4ca2cff3933b9e096067a3e1511c1de81d.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Frequency-aware Generative Models for Multivariate Time Series Imputation | Accept (poster) | Summary: This paper proposes FGTI that uses frequency-domain information with high-frequency and dominant-frequency filters for accurate imputation of residual, trend, and seasonal components.
Experimental results demonstrate FGTI's effectiveness in improving both data imputation accuracy and downstream applications.
Strengths: 1. The authors leverage high-frequency and dominant-frequency features to enhance time series imputation.
2. They introduce time-domain and frequency-domain representation learning modules to comprehensively capture relevant information.
3. Extensive experiments are conducted to demonstrate the method's effectiveness.
Weaknesses: 1. The formulation and writing of this paper should be improved, with attention to correcting several typos.
2. The authors claim that the residual of a time series mainly comprises high-frequency components. However, in many time series, there may be multiple scales of seasonality and trend features, potentially resulting in a final residual that is too noisy to contain useful information.
3. It appears that the proposed method utilizes frequency features of time series as conditions for the diffusion model, which seems straightforward and may limit its contribution.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. In Table 2, the absence of the High-frequency filter appears to impact performance less on KDD and Guangzhou datasets. However, on PhysioNet, the situation is reversed. What factors might explain this discrepancy?
2. How were the hyper-parameters for the experiments chosen? Could you elaborate on the methodology used for their selection?
3. What is the purpose of Proposition 3.1? It seems unnecessary and almost self-evident.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the thoughtful comments and valuable suggestions. We have tried our best to incorporate the suggestions in the revised version. Below, we provide our response to the questions and concerns.
**1. (W1) The formulation and writing of this paper should be improved**
We appreciate your feedback on improving clarity and grammatical accuracy. We have meticulously revised the manuscript, correcting errors such as "Recently, researchers attempt to utilize large language models" to "Recently, researchers have attempted to utilize large language models" (line 62), and ensuring all sentences are structurally sound and clearly presented.
**2. (W2) Explain how multiple scales of seasonality and trend features in time series affect the residual's usefulness, which may be too noisy**
Thank you for your comments.
Since our method does not explicitly perform STL decomposition, it is not affected by the multiple scales of trends and seasonality.
Our method leverages Fourier transform to analyze both dominant-frequency and high-frequency information, which inherently addresses multiple scales of trend and seasonality.
Our dominant-frequency filter can capture trend and seasonal features at multiple scales.
In addition, even when the high-frequency information that mainly contributes to the residual is noisy, we can still use the dominant-frequency information and adjust the weights of the high-frequency condition by attention layers for accurate imputation.
As shown in Table 1, our method always outperforms other methods in different scenarios.
**3. (W3) Straightforward and may limit its contribution**
Our core insight is that existing methods cannot accurately impute the residual term; we therefore introduce the corresponding frequency-domain information.
**The insight of modelling high-frequency signals for time series imputation has not yet been recognized by existing studies.**
Our proposed filters and representation learning modules are simple yet effective for imputation, and can inform further work.
In addition, our proposed filters and representation learning modules can be flexibly ported to other generative models; they are not limited to diffusion models.
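To make the filter idea concrete, here is a hedged numpy sketch of extracting high-frequency and dominant-frequency components via FFT masks; the cutoff fraction and top-$k$ choice are placeholders, not the paper's exact filter definitions.

```python
import numpy as np

def frequency_filters(x, high_cut=0.3, k_dominant=3):
    """Split a 1-D series into a high-frequency part (the top fraction of
    spectrum bins) and a dominant-frequency part (the k largest-magnitude
    bins), via FFT -> mask -> inverse FFT."""
    spec = np.fft.rfft(x)
    n = len(spec)
    # high-frequency filter: keep only the highest `high_cut` fraction of bins
    hi_mask = np.zeros(n, dtype=bool)
    hi_mask[int(n * (1 - high_cut)):] = True
    # dominant-frequency filter: keep the k bins with the largest magnitude
    dom_mask = np.zeros(n, dtype=bool)
    dom_mask[np.argsort(np.abs(spec))[-k_dominant:]] = True
    high = np.fft.irfft(np.where(hi_mask, spec, 0), n=len(x))
    dominant = np.fft.irfft(np.where(dom_mask, spec, 0), n=len(x))
    return high, dominant
```

Because the dominant-frequency mask is selected by magnitude rather than position, it can pick up trend and seasonal components at several scales at once.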
**4. (Q1) Explain why the absence of the high-frequency filter impacts performance differently on the KDD, Guangzhou, and PhysioNet datasets**
Thank you for your question.
The difference in impact of dominant-frequency and high-frequency information depends on the characteristics of datasets.
For the PhysioNet dataset, the obtained dominant-frequency information may be biased because the dataset contains at least 79.71% missing values.
For the KDD and Guangzhou datasets, the dominant-frequency information will play a larger role because of the lower missing rates.
In these cases, our method can adaptively adjust the weight of high-frequency information and dominant-frequency information to help the imputation model.
**5. (Q2) Hyper-parameter selection for the experiments**
For the two critical hyperparameters of the high-frequency filter and the dominant-frequency filter, we conduct empirical search to identify the optimal hyperparameters, as shown in Fig. 9 and Fig. 10 in Appendix A.4.2.
For other settings related to the diffusion model, we adopt hyperparameters recommended by existing well-established models [23,43]. These models have demonstrated strong performance in similar tasks, and their hyperparameters have been extensively validated in that literature.
To improve the presentation of the hyperparameters, we will add the above description of the parameters for the diffusion model to Appendix A.4.2.
**6. (Q3) Purpose of Proposition 3.1**
The purpose of Proposition 3.1 is to illustrate the mechanism by which the high-frequency and dominant-frequency conditions help generative models impute missing values.
The proposition shows that introducing the two conditions reduces the entropy of the target distribution, narrowing its randomness, so that generative models can impute missing values more accurately.
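The entropy-reduction mechanism can be checked numerically on a toy discrete joint distribution (values invented purely for illustration; this is not the paper's proof):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# toy joint distribution over target X (rows) and a frequency condition C (cols)
joint = np.array([[0.30, 0.05],
                  [0.05, 0.30],
                  [0.15, 0.15]])
p_x, p_c = joint.sum(axis=1), joint.sum(axis=0)

H_x = entropy(p_x)
# conditional entropy H(X|C) = sum_c p(c) * H(X | C=c)
H_x_given_c = sum(p_c[j] * entropy(joint[:, j] / p_c[j]) for j in range(joint.shape[1]))
# conditioning never increases entropy: H(X|C) <= H(X)
```

Here the condition strictly lowers the entropy of the target, mirroring the proposition's claim that the frequency conditions narrow the target distribution.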
To add empirical evidence for this proposition, we also add experiments using CRPS to evaluate the gap between the learned and ground truth distributions for different generative baselines in the following table:
| Dataset | Miss. Rate | MIWAE | GPVAE | TimeCIB | GAIN | CSDI | SSSD | PriSTI | FGTI |
|---------|------------|--------|--------|---------|--------|--------|--------|--------|--------|
| KDD | 10% | 0.524 | 0.443 | 0.466 | 0.709 | 0.224 | 0.352 | 0.232 | **0.158** |
| | 20% | 0.526 | 0.445 | 0.467 | 0.718 | 0.245 | 0.370 | 0.248 | **0.170** |
| | 30% | 0.532 | 0.447 | 0.469 | 0.729 | 0.259 | 0.374 | 0.268 | **0.186** |
| | 40% | 0.530 | 0.457 | 0.471 | 0.746 | 0.278 | 0.401 | 0.301 | **0.216** |
| Guang. | 10% | 0.312 | 0.333 | 0.360 | 0.692 | 0.265 | 0.316 | 0.209 | **0.155** |
| | 20% | 0.312 | 0.330 | 0.357 | 0.694 | 0.277 | 0.299 | 0.244 | **0.168** |
| | 30% | 0.312 | 0.335 | 0.356 | 0.695 | 0.292 | 0.353 | 0.310 | **0.193** |
| | 40% | 0.312 | 0.333 | 0.358 | 0.697 | 0.324 | 0.382 | 0.362 | **0.243** |
| Phy. | 10% | 0.689 | 0.659 | 0.466 | 0.739 | 0.544 | 0.617 | 0.444 | **0.343** |
| | 20% | 0.717 | 0.665 | 0.467 | 0.761 | 0.589 | 0.665 | 0.457 | **0.369** |
| | 30% | 0.750 | 0.674 | 0.469 | 0.787 | 0.627 | 0.630 | 0.467 | **0.389** |
| | 40% | 0.779 | 0.680 | 0.471 | 0.814 | 0.671 | 0.676 | 0.491 | **0.441** |
As shown in the above table and Table 1, our method outperforms the other methods on all metrics due to the introduction of the frequency-domain conditions, echoing the proposition. | Summary: This paper proposes a generative model, Frequency-aware Generative Models for Multivariate Time Series Imputation (FGTI), for multivariate time series imputation. The proposed model is designed to enhance imputation performance by modeling the residual component of time series data. To this end, the paper leverages the observation that the residual components are usually high-frequency. It utilizes two filters in the frequency domain to extract high-frequency and dominant-frequency information from the time series using FFT and IFFT. The frequency information is then input into a denoising network to generate the missing data conditioned on the observed part of the time series. Experiments are performed using three real-world datasets and a number of recent baselines are benchmarked.
Strengths: - The idea of using frequency filters to model the residual component of time-series data is natural and reasonable.
- The empirical evaluation is systematic and the experiment results show consistent improvement compared with existing methods.
Weaknesses: - It is not clear if the modeling of temporal dependency is sufficient in the time-frequency and attribute-frequency representation learning module. From Eq. (8-9), it seems that only two attention layers are used without considering the time stamp.
- The model architecture and specification used for Projectors in the denoising network is not described.
- There is a gap between Sections 3.2 and 3.3. The time-frequency representation and attribute-frequency representation learning are described and the final output $\mathbf{R}^a$ is defined. However, it is not clear how this representation is used in the diffusion model. It seems from Eq. (14) and Appendix A.2.1 that $\mathbf{R}^a$ is not involved in the parameterization $\epsilon_\theta$.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please kindly see and clarify my comments in the Weaknesses section.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations regarding the high demand for computational resources are discussed, with relevant experimental results analyzed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We really appreciate your thoughtful comments and valuable suggestions. We have incorporated the suggestions in the revised paper. Below, we provide our response to the concerns.
**1. (W1) Clarify if temporal dependency modelling is sufficient**
Thank you for your feedback.
We apologize for the inadequate illustration and explanation of the time-frequency and attribute-frequency representation learning modules.
In line 144, we state that Encoder(·) is implemented with a Transformer backbone.
Since this encoder contains a positional encoding layer, the timestamp information can be added to the corresponding Query and Key in the two representation learning modules.
To improve the presentation, **we add a figure illustrating the detailed architecture of the denoising network, which is uploaded in the supplementary pdf file at the top of the page**.
We will add this figure to Section 3.3.
In addition, the setup of modelling temporal dependencies and attribute dependencies with two separate attention layers is common in time series imputation methods [26,43].
Considering that our method outperforms the other imputation methods for different missing rates and different missing mechanisms in Table 1, Fig. 4, Fig. 7 and Fig. 8, the modelling of temporal dependencies is sufficient.
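For concreteness, the way a positional encoding layer makes the Query and Key timestamp-aware can be sketched as follows (a standard sinusoidal Transformer encoding is assumed here for illustration; the exact encoding in our implementation may differ):

```python
import math

def sinusoidal_position_encoding(num_steps, d_model):
    """Standard Transformer sinusoidal encoding: one d_model vector per timestamp."""
    pe = []
    for t in range(num_steps):
        row = []
        for i in range(d_model):
            angle = t / (10000 ** (2 * (i // 2) / d_model))
            row.append(math.sin(angle) if i % 2 == 0 else math.cos(angle))
        pe.append(row)
    return pe

# Adding pe[t] to the token embedding at step t makes Query/Key timestamp-aware,
# so attention scores can depend on temporal position, not just content.
pe = sinusoidal_position_encoding(num_steps=48, d_model=8)
assert len(pe) == 48 and len(pe[0]) == 8
assert pe[0] == [0.0, 1.0] * 4  # at t = 0: sin(0) = 0, cos(0) = 1
```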
**2. (W2) Describe the model architecture and specifications used for Projectors in the denoising network**
Thank you for your suggestions.
We apologize for the inadequate description and explanation of the detailed structure of the denoising network.
As shown in Fig.2, which is uploaded in the supplemental pdf, the input projector is an MLP layer, and the output projector is a 2-layer MLP with a ReLU activation function.
This setup is consistent with other SOTA imputation methods [26,43] and has been extensively validated in that literature.
**3. (W3) Clarify how time-frequency and attribute-frequency representations are used in the diffusion model**
Thank you for your valuable suggestions for boosting our manuscript.
We apologize that the denoising network architecture of the diffusion model was not presented clearly.
Fig. 2 in the supplemental pdf, which illustrates the detailed architecture of the denoising network, will be added to Section 3.3.
To let the frequency-domain information guide the imputation of the generative model, we incorporate the representation learning modules into the generative model architecture.
As shown in the figure, $\mathbf{R}^{a}$ is a hidden vector in the denoising neural network of the diffusion model.
---
Rebuttal Comment 1.1:
Title: Thank you for your response
Comment: I appreciate the response from the authors and the additional figure to clarify the model architecture. My concerns have been addressed. Specifically,
- The time information can indeed be captured by Transformer models. I would suggest authors consider spelling this out by revising Eq. (7) from $\text{Encoder}(\cdot)$ to $\text{Transformer}(\cdot)$ to enhance clarity.
- The architecture and specifications for the Projectors are clarified. Please add the relevant descriptions in the main text.
- The use of $\mathbf{R}^a$ and $\mathbf{R}^t$ are clarified with the revised architecture figure.
Given that my concerns are addressed, I increase my overall rating from 4 to 5.
---
Reply to Comment 1.1.1:
Title: Thanks for the response
Comment: Thank you for your thoughtful evaluation and the feedback provided in your review. We are pleased to hear that the additional figure and our responses have helped in addressing your concerns.
- We will certainly revise Equation (7) and Fig.3 to replace Encoder() with Transformer() to more explicitly indicate that time information can be captured via Transformer models, as you've suggested.
- Additionally, we will incorporate detailed descriptions of the input projector and output projector in the main text, as detailed in our response to W2, to avoid any ambiguities regarding their architecture and specifications.
- We also acknowledge your satisfaction with the clarification provided through the revised figure regarding the use of $\mathbf{R}^a$ and $\mathbf{R}^t$. We will ensure that similar clarity is maintained throughout the manuscript to aid the understanding of our model components.
Thank you again for your invaluable feedback and guidance. If you have any questions, please send them to us, we look forward to discussing with you to further improve our work.
Best regards,
Authors | Summary: The paper proposes a new model, called FGTI, to address the issue of missing data in multivariate time series extracting frequency-domain information. The authors argue that existing methods, in general, neglect the residual term, which is the most significant contributor to imputation errors. FGTI incorporates frequency-domain information using high-frequency and dominant-frequency filters to improve the imputation of the residual, trend, and seasonal terms. The model is evaluated on three real-world datasets and demonstrates superior performance in both imputation accuracy and downstream tasks compared to existing methods.
Strengths: - The paper is well-written and well-motivated.
- The authors tackle a very important problem of time-series imputation, focusing on fully utilizing frequency-domain information, which has not been considered enough.
- The experiments are thorough, including ablation studies and different missing scenarios. The method is compared with cutting-edge baselines.
Weaknesses: - The experiments are thorough and include multiple state-of-the-art imputation methods. However, the paper misses two important time-series imputation methods: one with a VAE-based approach [A] and one applying trend-seasonality decomposition [B]. While it's understandable that the authors might not have had enough time to include these papers since they are from ICLR 2024, technical comparisons with these methods would be necessary before acceptance.
- While the authors provided experiments with various missing scenarios and missing rates, it is not clear why the proposed method outperforms the conventional methods and under what circumstances. For instance, extracting high-frequency information could be significantly distorted if missingness occurs with a certain frequency (which seems realistic considering periodic noises). The paper would be more convincing if the authors provided experiments that assess the effect of the trend and seasonality components of the datasets (or using synthetic datasets) and missing patterns.
- The experiments would be more appealing if the authors provided a breakdown (seasonality and trend) of the assessments on the imputed time-series.
Minor comment:
- Regarding Figure 1, since the RMSE and MAE comparison essentially conveys similar information about the population-level importance of the trend-seasonality decomposition, replacing one with specific time-series examples where the trend and seasonality of the missing values are correctly imputed would better illustrate the motivation. Also, information about how Figure 1 is reported is missing.
References:
[A] Choi and Lee, "Conditional Information Bottleneck Approach for Time Series Imputation, ICLR 2024.
[B] Liu et al., "Multivariate Time-series Imputation with Disentangled Temporal Representations," ICLR 2024.
Technical Quality: 4
Clarity: 4
Questions for Authors: - Regarding Weakness 2: Please describe why the proposed method works better than conventional methods and under what circumstances. Do the seasonality and trend profiles of the given time-series dataset matter, and if so, how?
- It seems that how to mask the input (to learn the underlying diffusion model) is very important. How does the model's performance change depending on the masking ratio and/or masking patterns? This is crucial as masking may distort the trend and seasonality components of the given time-series.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for the constructive comments and the suggestions for further experiments. We will do our best to incorporate the suggestions in the revised version.
Below, we provide our response to the questions and concerns.
**1. (W1) Compared with the most recent baselines**
Thank you for the valuable suggestions.
TimeCIB [a] is a very competitive VAE-based imputation method, so we incorporate TimeCIB into the baselines for comparative experiments.
In addition, we have already considered TIDER [27] in our experiments.
We first report part of the updated Table 1 for the added baseline:
| | Miss. Rate | Metric | FreTS | FGTI |
|--------|------------|--------|:------:|:----------:|
| KDD | 10% | RMSE | 0.630 | **0.406** |
| | | MAE | 0.412 | **0.149** |
| | 20% | RMSE | 0.741 | **0.451** |
| | | MAE | 0.489 | **0.161** |
| | 30% | RMSE | 0.796 | **0.448** |
| | | MAE | 0.546 | **0.176** |
| | 40% | RMSE | 0.850 | **0.478** |
| | | MAE | 0.591 | **0.205** |
| Guang. | 10% | RMSE | 0.456 | **0.230** |
| | | MAE | 0.340 | **0.170** |
| | 20% | RMSE | 0.602 | **0.258** |
| | | MAE | 0.460 | **0.176** |
| | 30% | RMSE | 0.709 | **0.291** |
| | | MAE | 0.547 | **0.202** |
| | 40% | RMSE | 0.787 | **0.356** |
| | | MAE | 0.611 | **0.254** |
| Phy. | 10% | RMSE | 0.804 | **0.580** |
| | | MAE | 0.540 | **0.286** |
| | 20% | RMSE | 0.825 | **0.577** |
| | | MAE | 0.576 | **0.309** |
| | 30% | RMSE | 0.861 | **0.624** |
| | | MAE | 0.603 | **0.336** |
| | 40% | RMSE | 0.883 | **0.669** |
| | | MAE | 0.626 | **0.376** |
We then report the imputation results for varying missing mechanisms with 10% missing values in the following table, and update Fig. 7, Fig. 8 and Fig. 9:
| | Miss. Mech. | Metric | TimeCIB | FGTI |
|--------|-------------|--------|---------|------------|
| KDD | MCAR | RMSE | 0.589 | **0.406** |
| | | MAE | 0.367 | **0.149** |
| | MAR | RMSE | 0.589 | **0.406** |
| | | MAE | 0.367 | **0.149** |
| | MNAR | RMSE | 0.693 | **0.499** |
| | | MAE | 0.408 | **0.174** |
| Guang. | MCAR | RMSE | 0.451 | **0.230** |
| | | MAE | 0.300 | **0.170** |
| | MAR | RMSE | 0.376 | **0.218** |
| | | MAE | 0.245 | **0.150** |
| | MNAR | RMSE | 0.327 | **0.200** |
| | | MAE | 0.230 | **0.140** |
| Phy. | MCAR | RMSE | 0.697 | **0.580** |
| | | MAE | 0.450 | **0.286** |
| | MAR | RMSE | 0.327 | **0.200** |
| | | MAE | 0.230 | **0.140** |
| | MNAR | RMSE | 0.697 | **0.580** |
| | | MAE | 0.450 | **0.286** |
After this, we report the updated resource consumption over the KDD dataset with 10% missing values in the following table, and update Fig.5:
| | GPU Memory Usage (MiB) | Running Time (s) |
|----------|-----------------------------|------------------|
| TimeCIB | 930 | 6.52 |
| FGTI | 9103 | 3874.30 |
Finally, we report the updated downstream application experiments in the following table, and update Fig. 6:
| | Air quality prediction (RMSE) | Mortality forecast (AUC) |
|------------|-------------------------------|--------------------------|
| TimeCIB | 0.63 | 0.82 |
| FGTI | **0.59** | **0.86** |
We find that FGTI still outperforms all baselines due to the guidance of the high-frequency and dominant-frequency information, as well as the generative ability of the diffusion model.
---
Rebuttal 2:
Title: Continued response for W2, Q1
Comment: **2. (W2,Q1) Clarify why your method outperforms conventional ones, considering trend, seasonality, and missing patterns in datasets**
Thank you for your valuable suggestions for boosting our manuscript.
We conduct a case study of FGTI for trend, seasonal, and residual terms over the pre-decomposed KDD dataset.
We first perform STL decomposition of the KDD dataset into Trend, Seasonal and Residual terms.
Then we select 10% of the observations of the original KDD dataset as the mask positions via MCAR, and mask the corresponding positions of the three terms.
To study the role of the high-frequency information, the dominant-frequency information, and the frequency-domain information as a whole, we consider the three ablation scenarios from Section 4.3: (1) w/o Dominant-frequency filter, (2) w/o High-frequency filter, and (3) w/o Frequency condition.
We report the imputation results of the different scenarios over the different terms in the following table:
| Component | Trend | | Seasonal | | Residual | |
|-------------------------------|------------|------------|------------|------------|------------|------------|
| | RMSE | MAE | RMSE | MAE | RMSE | MAE |
| w/o Frequency condition | 0.048 | 0.0155 | 0.0572 | 0.0364 | 0.5132 | 0.2975 |
| w/o Dominant-frequency filter | 0.0482 | 0.0157 | 0.0533 | 0.0334 | **0.4956** | **0.2814** |
| w/o High-frequency filter | **0.0409** | **0.0143** | **0.0485** | **0.0301** | 0.5129 | 0.2912 |
| FGTI | 0.0448 | 0.0159 | 0.0523 | 0.0325 | 0.5068 | 0.2885 |
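The decompose-then-mask protocol described above can be sketched as follows (an illustrative stand-in: a simple moving-average decomposition replaces the STL implementation, and the series is synthetic):

```python
import math
import random

def moving_average(x, window):
    """Centered moving average; a crude stand-in for STL's trend extraction."""
    half = window // 2
    out = []
    for i in range(len(x)):
        seg = x[max(0, i - half): i + half + 1]
        out.append(sum(seg) / len(seg))
    return out

# Synthetic hourly series: linear trend + daily seasonality + noise.
random.seed(0)
n, period = 240, 24
series = [0.01 * t + math.sin(2 * math.pi * t / period) + random.gauss(0, 0.2)
          for t in range(n)]

trend = moving_average(series, window=period)
detrended = [s - tr for s, tr in zip(series, trend)]
seasonal_profile = [sum(detrended[p::period]) / len(detrended[p::period])
                    for p in range(period)]
seasonal = [seasonal_profile[t % period] for t in range(n)]
residual = [d - se for d, se in zip(detrended, seasonal)]

# MCAR: choose 10% of positions once, then mask the SAME positions in all
# three terms, so each component keeps its own ground truth for evaluation.
mask_idx = set(random.sample(range(n), k=n // 10))
masked = {name: [None if t in mask_idx else v for t, v in enumerate(vals)]
          for name, vals in [("trend", trend), ("seasonal", seasonal),
                             ("residual", residual)]}
assert all(sum(v is None for v in vals) == n // 10 for vals in masked.values())
```

Masking the same positions in every term keeps the three evaluations comparable.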
We find that for the Trend term, retaining the dominant-frequency information gives the best results,
while the high-frequency information may interfere with the imputation.
For the Seasonal term, the results are similar to the Trend term,
but the dominant-frequency information contributes less to the imputation of the Seasonal term than to that of the Trend term.
This suggests that the Seasonal term mainly corresponds to the dominant-frequency information but also contains some of the high-frequency information.
In contrast, the experiments on the Residual term show that it mainly corresponds to the high-frequency information.
Since we choose a Transformer as the encoder in Cross-domain Representation Learning and utilize cross-attention as the fusion mechanism for the two kinds of frequency-domain information in Time-frequency Representation Learning and Attribute-frequency Representation Learning, our method can adaptively adjust the weights of the high-frequency and dominant-frequency information for different timestamps.
Thus our method outperforms conventional methods in most circumstances, as illustrated in Table 1.
In addition, our method's pipeline does not perform STL decomposition of the time series but extracts frequency-domain information to guide imputation, so we do not need to know the trend and seasonality of the dataset.
Thanks to the encoder structure and the cross-attention-based fusion mechanism in Cross-domain Representation Learning, we can adaptively adjust the weights of the two kinds of frequency-domain information on datasets dominated by different terms.
Furthermore, we consider three missing mechanisms, MCAR, MAR, and MNAR, in our comparative experiments. MCAR implies that each attribute could be missing at any frequency, while MAR potentially implies that all attributes' missing status depends on the frequency of a particular attribute (e.g., high temperatures could cause sensor failure).
For MNAR, the missing status of all attributes depends on the frequency of that attribute.
As shown in Fig. 4, Fig. 7 and Fig. 8 in the manuscript, our method still outperforms conventional methods under different missing mechanisms (patterns).
---
Rebuttal 3:
Title: Continued response for W3, C1, Q2
Comment: **3. (W3) Provided a breakdown (seasonality and trend) of the assessments**
Thank you for your suggestions.
The pipeline of our method does not perform a direct STL decomposition; instead, it uses the high-frequency and dominant-frequency information to implicitly guide the imputation of the three terms.
Due to the effect of missing values, the decomposition results before and after imputation differ, so the ground truth for each term would be lacking for evaluation.
Therefore, we first perform STL decomposition of the original KDD dataset, then inject the missing values for the three terms respectively to verify the imputation results, and the results are shown in Fig. 1 of Section 1.
In addition, we also add a Case study about the trend, seasonal and residual terms using the pre-decomposed KDD dataset, and the results are shown in the above table.
The results show that the dominant-frequency information mainly helps the imputation of the Trend and Seasonal terms, while the high-frequency information mainly helps the Residual term.
**4. (C1) Replace RMSE or MAE with examples, and clarify Figure 1 reporting details**
Thank you for your valuable suggestions for boosting our manuscript.
Regarding the setting for the study of Fig. 1, we first perform STL decomposition of the KDD dataset into Trend, Seasonal and Residual terms.
Then we select 10% observations of the original KDD dataset as the mask positions by MCAR, and mask the corresponding positions of the three terms.
Finally, we impute the missing values of the three terms by different methods separately.
Following your valuable suggestion, **we add examples of imputations on the trend, seasonal and residual terms in Fig. 1, which is uploaded in the supplementary pdf file at the top of the page**, and add the setup of this study.
The results also indicate that the SOTA time-series imputation methods are inaccurate in imputing the residual term.
**5. (Q2) Explain how masking ratio/patterns affect model performance**
Thank you for your comments. We agree that mask ratio and mask pattern are critical for the learning process.
We randomly mask different ratios of observations as the imputation target at each training step, instead of using a fixed mask ratio, following CSDI [43].
The performance of FGTI with different mask ratios is shown in the following table:
| Mask Ratio | KDD | | Guangzhou | | PhysioNet | |
|------------|---------|---------|-----------|---------|-----------|---------|
| | RMSE | MAE | RMSE | MAE | RMSE | MAE |
| 10% | 0.4372 | 0.1925 | 0.2388 | 0.1647 | 0.5992 | 0.3235 |
| 20% | 0.4143 | 0.1700 | **0.2312** | **0.1576** | 0.5853 | **0.2893** |
| 30% | 0.4257 | 0.1707 | 0.2335 | 0.1600 | 0.5836 | 0.2800 |
| 40% | 0.4185 | 0.1697 | 0.2372 | 0.1634 | 0.6181 | 0.3105 |
| 50% | 0.4183 | 0.1714 | 0.2343 | 0.1578 | 0.6308 | 0.3138 |
| Random | **0.4057** | **0.1489** | 0.2325 | 0.1584 | **0.5801** | 0.2856 |
We find that the random mask strategy increases the learning complexity and enhances the modelling ability of the diffusion model, so our masking strategy achieves optimal or sub-optimal performance in most cases.
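The random masking strategy can be sketched as follows (a simplified 1-D version of the CSDI-style self-supervised masking; function and variable names are illustrative):

```python
import random

def random_training_mask(observed_mask, rng):
    """For one training step: draw a ratio in (0, 1), then hide that fraction
    of the currently observed entries as the imputation target."""
    obs_positions = [i for i, m in enumerate(observed_mask) if m == 1]
    ratio = rng.uniform(0.0, 1.0)          # fresh ratio every step
    k = round(len(obs_positions) * ratio)
    target = set(rng.sample(obs_positions, k))
    # Conditioning mask: observed entries that remain visible to the model.
    cond_mask = [1 if (m == 1 and i not in target) else 0
                 for i, m in enumerate(observed_mask)]
    return cond_mask, sorted(target)

rng = random.Random(42)
observed = [1, 1, 0, 1, 1, 1, 0, 1]        # 0 = truly missing in the data
cond, target = random_training_mask(observed, rng)
# Conditioning entries and training targets partition the observed entries.
assert sum(cond) + len(target) == sum(observed)
```

Because the ratio is resampled every step, the model sees both sparse and dense conditioning during training.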
Then, we explore the performance when using different mask patterns.
Following CSDI [43] and PriSTI [26], we consider the (1) Block missing, (2) Mix missing and (3) Random missing mask patterns; the results are shown in the following table:
| Mask Pattern | KDD | | Guangzhou | | PhysioNet | |
|--------------|---------|---------|-----------|---------|-----------|---------|
| | RMSE | MAE | RMSE | MAE | RMSE | MAE |
| Block | 0.4187 | 0.1778 | 0.2387 | 0.1596 | 0.6224 | 0.3384 |
| Mix | 0.4193 | 0.1792 | **0.2325** | **0.1531** | 0.6034 | 0.3208 |
| Random | **0.4057** | **0.1489** | **0.2325** | 0.1584 | **0.5801** | **0.2856** |
It can be found that Block missing or Mix missing is not comparable to Random missing in most cases, likely because these mask patterns may not correspond to the actual missing scenario.
So, we use the Random missing mask pattern by default.
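For reference, the three mask patterns can be generated roughly as follows (illustrative shapes only; `block_len` and the half-and-half mixing scheme are assumptions, not the exact settings of CSDI/PriSTI):

```python
import random

def make_mask(n, ratio, pattern, rng, block_len=8):
    """Generate an artificial-missing mask (True = masked) covering roughly
    n * ratio entries under one of the three patterns compared above."""
    k = int(n * ratio)
    mask = [False] * n
    if pattern == "random":
        for i in rng.sample(range(n), k):
            mask[i] = True
    elif pattern == "block":
        while sum(mask) < k:                       # drop contiguous blocks
            start = rng.randrange(0, n - block_len)
            for i in range(start, start + block_len):
                mask[i] = True
    elif pattern == "mix":                         # half block, half random
        blk = make_mask(n, ratio / 2, "block", rng, block_len)
        rnd = make_mask(n, ratio / 2, "random", rng)
        mask = [a or b for a, b in zip(blk, rnd)]
    return mask

rng = random.Random(1)
for pattern in ("random", "block", "mix"):
    m = make_mask(200, 0.1, pattern, rng)
    assert 0.05 <= sum(m) / len(m) <= 0.2          # roughly the requested ratio
```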
Reference:
[a] Conditional Information Bottleneck Approach for Time Series Imputation
---
Rebuttal Comment 3.1:
Comment: I have read all the rebuttals and I fully appreciate the experiments the authors made. This is an interesting paper with good motivation and thorough experiments. I tend to keep my initial decision, which is "Weak Accept".
---
Reply to Comment 3.1.1:
Title: Thanks for the response
Comment: Thank you for appreciating our detailed rebuttals and experimental efforts. We are delighted to know you find our paper interesting with solid motivation and thorough experiments.
**Given these positive remarks and to strengthen our position for final acceptance, may we kindly request you to reconsider your score and potentially upgrade it?** This would greatly enhance our chances of contributing our findings to a broader audience.
Best regards,
Authors | Summary: The authors present Frequency-aware Generative Models for Multivariate Time Series Imputation (FGTI), a model which addresses the challenge of missing data in multivariate time series by focusing on the often-overlooked residual term. The paper also incorporates frequency-domain information to enhance imputation performance. Experiments show that this outperforms many existing time series imputation baselines on three real-world datasets.
Strengths: S1. The work proposes an approach for time series imputation, which is an important topic for a wide range of applications.
S2. The authors aim to address the influence of frequency-domain information, w.r.t. both the high-frequency condition and the dominant-frequency condition.
S3. Extensive experiments have been carried out.
Weaknesses: W1. In the Introduction and Section 3.1.2, there is little explanation of why frequency components with large amplitudes could guide both the imputation of trend and seasonal terms. Does it address the intricately entangled trend-seasonal representations, which are highly important in current approaches?
W2. In the Related Work section, I did not find a discussion relating this study to existing frequency-domain research for time series tasks; without it, it is hard to clearly justify the main difference/novelty of this paper compared with current approaches.
W3. In the evaluation, only RMSE and MAE are used as metrics. However, it would be better to include additional metrics such as CRPS.
W4. It would be fairer if the authors compared their approach with a time series model using frequency domain information, which also addresses the trend, seasonal, and residual terms.
W5. Both CSDI and PriSTI address the issue of missing strategies. From the results of Figure 4, 7 and 8, the improvement of MAR and MNAR was much lower than that of MCAR on the Guangzhou and Physio datasets. Was it due to differences in the data distribution, or was it due to the setting of the hyper-parameters?
[1] Learning Latent Seasonal-Trend Representations for Time Series Forecasting.
[2] Frequency-domain MLPs are More Effective Learners in Time Series Forecasting
[3] CSDI: Conditional Score-based Diffusion Models for Probabilistic Time Series Imputation
[4] PriSTI: A Conditional Diffusion Framework for Spatiotemporal Imputation
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see Weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: 1. My main concern with the work is that the contribution of the proposed method might be incremental. Compared with CSDI, frequency domain filters are utilized to incorporate frequency domain information, and further cross-domain representation learning and frequency-aware diffusion framework are natural implementations, which lack additional novelty.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the thoughtful comments and valuable suggestions. Below, we provide our response to the questions and concerns.
**1. (W1) Why large amplitude frequency components guide trend and seasonal term imputation**
Thank you for your comments.
We recognise that our current presentation is insufficient.
In STL decomposition, the dominant frequency components (with large amplitude) typically correspond to the trend and seasonal terms, while the high-frequency information is often associated with the residual term [a].
This is because the trend term captures long-term movements and the seasonal term captures the main periodic patterns.
We also add empirical evidence on the pre-decomposed KDD dataset in the following table:
| Component | Trend | | Seasonal | | Residual | |
|-------------------------------|------------|------------|------------|------------|------------|------------|
| | RMSE | MAE | RMSE | MAE | RMSE | MAE |
| w/o Frequency condition | 0.048 | 0.0155 | 0.0572 | 0.0364 | 0.5132 | 0.2975 |
| w/o Dominant-frequency filter (Preserving high-frequency information) | 0.0482 | 0.0157 | 0.0533 | 0.0334 | **0.4956** | **0.2814** |
| w/o High-frequency filter (Preserving dominant-frequency information) | **0.0409** | **0.0143** | **0.0485** | **0.0301** | 0.5129 | 0.2912 |
| FGTI | 0.0448 | 0.0159 | 0.0523 | 0.0325 | 0.5068 | 0.2885 |
This suggests that the Trend term mainly corresponds to the dominant-frequency information; the Seasonal term also mainly corresponds to the dominant-frequency information but contains some of the high-frequency information; and the Residual term mainly corresponds to the high-frequency information.
Since we choose a Transformer as the encoder and utilize cross-attention as the fusion mechanism for the two kinds of frequency-domain information, our method can adaptively adjust the weights of the high-frequency and dominant-frequency information according to the characteristics of the three terms in different datasets.
We will add the above explanation to Section 1 and Section 3.1.2.
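The two filters can be sketched in the frequency domain as follows (an illustrative numpy version: a cutoff index for the high-frequency filter and a top-k amplitude rule for the dominant-frequency filter are assumptions, not necessarily our exact filter definitions):

```python
import numpy as np

def frequency_filters(x, cutoff_ratio=0.25, top_k=4):
    """Split a series into high-frequency and dominant-frequency parts via
    FFT -> binary mask -> inverse FFT (illustrative thresholds)."""
    spec = np.fft.rfft(x)
    n_freq = spec.shape[0]

    # High-frequency filter: keep the top cutoff_ratio fraction of the bins.
    high_mask = np.zeros(n_freq, dtype=bool)
    high_mask[int(n_freq * (1 - cutoff_ratio)):] = True

    # Dominant-frequency filter: keep the top_k largest-amplitude bins.
    dom_mask = np.zeros(n_freq, dtype=bool)
    dom_mask[np.argsort(np.abs(spec))[-top_k:]] = True

    high = np.fft.irfft(np.where(high_mask, spec, 0), n=len(x))
    dominant = np.fft.irfft(np.where(dom_mask, spec, 0), n=len(x))
    return high, dominant

t = np.arange(256)
# Slow trend + seasonality + a fast oscillation (residual-like component).
x = 0.02 * t + np.sin(2 * np.pi * t / 32) + 0.3 * np.sin(2 * np.pi * 0.45 * t)
high, dominant = frequency_filters(x)
# The trend/seasonal energy lands in `dominant`; the fast oscillation in `high`.
```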
**2. (W2) Discuss existing frequency domain research for time series tasks**
Thank you for your valuable suggestions for improving the quality of the manuscript.
Our current Related Work section mainly focuses on the current state-of-the-art time series imputation methods.
For the time series imputation methods in frequency domain, mvLSWimpute [b] utilizes wavelet transforms to guide imputation, APDNet [c] uses the Fourier Temporal and Fourier Variable Interaction modules to model dependencies.
In addition, the frequency domain time series forecasting methods FEDformer [d], FreTS [e] can also be applied to imputation task.
However, they did not consider how to use frequency-domain information to accurately model the residual term of the missing data, which is critical for boosting the overall imputation performance.
In contrast, our FGTI captures high-frequency information and dominant-frequency information to get a more accurate modeling of the residual term, while assisting in describing trend and seasonal terms.
We will add to the discussion on imputation methods in frequency domain to the related work section.
---
Rebuttal 2:
Comment: **3. (W3) Include additional metrics such as CRPS**
Thank you for the thoughtful comments and valuable suggestions.
Since CRPS is used to evaluate probabilistic imputation baselines, we report the CRPS for all probabilistic imputation baselines according to the experimental settings in CSDI [43] and PriSTI [26].
First, we report the CRPS performance with different missing rates in the following table:
| Dataset | Miss. Rate | MIWAE | GPVAE | TimeCIB | GAIN | CSDI | SSSD | PriSTI | FGTI |
|---------|------------|--------|--------|---------|--------|--------|--------|--------|--------|
| KDD | 10% | 0.524 | 0.443 | 0.466 | 0.709 | 0.224 | 0.352 | 0.232 | **0.158** |
| | 20% | 0.526 | 0.445 | 0.467 | 0.718 | 0.245 | 0.370 | 0.248 | **0.170** |
| | 30% | 0.532 | 0.447 | 0.469 | 0.729 | 0.259 | 0.374 | 0.268 | **0.186** |
| | 40% | 0.530 | 0.457 | 0.471 | 0.746 | 0.278 | 0.401 | 0.301 | **0.216** |
| Guang. | 10% | 0.312 | 0.333 | 0.360 | 0.692 | 0.265 | 0.316 | 0.209 | **0.155** |
| | 20% | 0.312 | 0.330 | 0.357 | 0.694 | 0.277 | 0.299 | 0.244 | **0.168** |
| | 30% | 0.312 | 0.335 | 0.356 | 0.695 | 0.292 | 0.353 | 0.310 | **0.193** |
| | 40% | 0.312 | 0.333 | 0.358 | 0.697 | 0.324 | 0.382 | 0.362 | **0.243** |
| Phy. | 10% | 0.689 | 0.659 | 0.466 | 0.739 | 0.544 | 0.617 | 0.444 | **0.343** |
| | 20% | 0.717 | 0.665 | 0.467 | 0.761 | 0.589 | 0.665 | 0.457 | **0.369** |
| | 30% | 0.750 | 0.674 | 0.469 | 0.787 | 0.627 | 0.630 | 0.467 | **0.389** |
| | 40% | 0.779 | 0.680 | 0.471 | 0.814 | 0.671 | 0.676 | 0.491 | **0.441** |
Then we report the CRPS by varying the missing mechanism with 10% missing values in the following table:
| Dataset | Miss. Mech. | MIWAE | GPVAE | TimeCIB | GAIN | CSDI | SSSD | PriSTI | FGTI |
|---------|-------------|--------|--------|---------|--------|--------|--------|--------|--------|
| KDD | MCAR | 0.524 | 0.443 | 0.466 | 0.709 | 0.224 | 0.352 | 0.232 | **0.158** |
| | MAR | 0.539 | 0.437 | 0.470 | 0.710 | 0.229 | 0.489 | 0.239 | **0.164** |
| | MNAR | 0.615 | 0.435 | 0.490 | 0.715 | 0.244 | 0.456 | 0.252 | **0.174** |
| Guang. | MCAR | 0.312 | 0.333 | 0.360 | 0.692 | 0.265 | 0.316 | 0.209 | **0.155** |
| | MAR | 0.258 | 0.334 | 0.298 | 0.692 | 0.252 | 0.367 | 0.208 | **0.148** |
| | MNAR | 0.241 | 0.340 | 0.294 | 0.693 | 0.251 | 0.267 | 0.210 | **0.144** |
| Phy. | MCAR | 0.689 | 0.659 | 0.466 | 0.739 | 0.544 | 0.617 | 0.444 | **0.343** |
| | MAR | 0.679 | 0.660 | 0.593 | 0.739 | 0.550 | 0.724 | 0.454 | **0.356** |
| | MNAR | 0.726 | 0.655 | 0.608 | 0.743 | 0.566 | 0.715 | 0.476 | **0.366** |
Combining the results in Table 1 and Fig. 4, Fig. 7 and Fig. 8, it can be found that the variation of CRPS across settings is basically consistent with that of RMSE and MAE for each model.
Thank you again for your valuable suggestions. We will add this part of the experiment to the Appendix.
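For completeness, CRPS for a sample-based imputer can be estimated with the standard energy form CRPS ≈ E|X − y| − ½E|X − X′| (a generic estimator shown for illustration; CSDI's implementation uses a quantile-based approximation):

```python
import random

def crps_samples(samples, y):
    """Sample-based CRPS estimate: E|X - y| - 0.5 * E|X - X'|."""
    n = len(samples)
    term1 = sum(abs(x - y) for x in samples) / n
    term2 = sum(abs(a - b) for a in samples for b in samples) / (n * n)
    return term1 - 0.5 * term2

rng = random.Random(0)
y = 0.0
sharp = [rng.gauss(0, 0.1) for _ in range(200)]    # concentrated near the truth
diffuse = [rng.gauss(0, 1.0) for _ in range(200)]  # high-entropy prediction
# A lower-entropy predictive distribution around the truth scores lower CRPS,
# matching the entropy argument of Proposition 3.1.
assert crps_samples(sharp, y) < crps_samples(diffuse, y)
```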
---
Rebuttal 3:
Title: Continued response for W4
Comment: **4. (W4) Compared with the frequency domain method, and the method address the three terms**
Thank you for the valuable suggestions.
We add LaST [f] and FreTS [e] as baselines for comparison.
Since they focus on the time series forecasting task, we adapt them to the imputation task based on the TimesNet [48] setting.
Furthermore, we have considered the TIDER model which takes into account trend, seasonal and residual terms in the MF process.
We first report the part of updated Table 1 for the added baselines:
| | Miss. Rate | Metric | LaST | FreTS | FGTI |
|--------|------------|--------|:------:|:------:|:----------:|
| KDD | 10% | RMSE | 0.473 | 0.630 | **0.406** |
| | | MAE | 0.287 | 0.412 | **0.149** |
| | 20% | RMSE | 0.532 | 0.741 | **0.451** |
| | | MAE | 0.310 | 0.489 | **0.161** |
| | 30% | RMSE | 0.574 | 0.796 | **0.448** |
| | | MAE | 0.350 | 0.546 | **0.176** |
| | 40% | RMSE | 0.634 | 0.850 | **0.478** |
| | | MAE | 0.393 | 0.591 | **0.205** |
| Guang. | 10% | RMSE | 0.347 | 0.456 | **0.230** |
| | | MAE | 0.244 | 0.340 | **0.170** |
| | 20% | RMSE | 0.440 | 0.602 | **0.258** |
| | | MAE | 0.312 | 0.460 | **0.176** |
| | 30% | RMSE | 0.545 | 0.709 | **0.291** |
| | | MAE | 0.388 | 0.547 | **0.202** |
| | 40% | RMSE | 0.637 | 0.787 | **0.356** |
| | | MAE | 0.458 | 0.611 | **0.254** |
| Phy. | 10% | RMSE | 0.768 | 0.804 | **0.580** |
| | | MAE | 0.516 | 0.540 | **0.286** |
| | 20% | RMSE | 0.786 | 0.825 | **0.577** |
| | | MAE | 0.550 | 0.576 | **0.309** |
| | 30% | RMSE | 0.825 | 0.861 | **0.624** |
| | | MAE | 0.578 | 0.603 | **0.336** |
| | 40% | RMSE | 0.850 | 0.883 | **0.669** |
| | | MAE | 0.603 | 0.626 | **0.376** |
We then report the results for varying missing mechanisms with 10% missing values in the following table, and update Fig.7, Fig.8 and Fig.9:
| | Miss. Mech. | Metric | LaST | FreTS | FGTI |
|--------|-------------|--------|--------|--------|------------|
| KDD | MCAR | RMSE | 0.473 | 0.630 | **0.406** |
| | | MAE | 0.287 | 0.412 | **0.149** |
| | MAR | RMSE | 0.473 | 0.630 | **0.406** |
| | | MAE | 0.287 | 0.412 | **0.149** |
| | MNAR | RMSE | 0.619 | 0.809 | **0.499** |
| | | MAE | 0.326 | 0.473 | **0.174** |
| Guang. | MCAR | RMSE | 0.347 | 0.456 | **0.230** |
| | | MAE | 0.244 | 0.340 | **0.170** |
| | MAR | RMSE | 0.327 | 0.465 | **0.218** |
| | | MAE | 0.234 | 0.356 | **0.150** |
| | MNAR | RMSE | 0.309 | 0.441 | **0.200** |
| | | MAE | 0.227 | 0.337 | **0.140** |
| Phy. | MCAR | RMSE | 0.768 | 0.804 | **0.580** |
| | | MAE | 0.516 | 0.540 | **0.286** |
| | MAR | RMSE | 0.309 | 0.441 | **0.200** |
| | | MAE | 0.227 | 0.337 | **0.140** |
| | MNAR | RMSE | 0.768 | 0.804 | **0.580** |
| | | MAE | 0.516 | 0.540 | **0.286** |
Next, we report the resource consumption on the KDD dataset with 10% missing values in the following table,
and update Fig.5:
| | GPU Memory Usage (MiB) | Running Time (s) |
|----------|-----------------------------|------------------|
| LaST | 444 | 4.63 |
| FreTS | 1118 | 9.09 |
Finally, we report the updated downstream application experiments in the following table, and update Fig.6:
| | Air quality prediction (RMSE) | Mortality forecast (AUC) |
|------------|-------------------------------|--------------------------|
| LaST | 0.64 | 0.80 |
| FreTS | 0.65 | 0.80 |
| FGTI | **0.59** | **0.86** |
Our FGTI still outperforms all baselines, owing to the guidance of high-frequency and dominant-frequency information, as well as the generative ability of the diffusion model.
---
Rebuttal 4:
Title: Continued response for W5, L1
Comment: **5. (W5) What causes the lower MAR and MNAR improvements, compared to MCAR, on Guangzhou and PhysioNet**
Thank you for your question.
MCAR, MAR, and MNAR reflect different real-world missingness scenarios.
Under MCAR, missingness does not depend on any attribute.
Under MAR, the probability of missingness depends only on observed information (e.g., missing traffic-flow records may be related to rush hour or to activities in public places).
Under MNAR, missingness depends on the unobserved attribute value itself (e.g., the reliability of recording devices).
How well a model performs under each mechanism depends on the temporal and attribute relations it captures in the dataset.
Under MAR and MNAR, the critical temporal or attribute dependencies are more likely to be missing than under MCAR, so models may not perform as well there.
This also reflects the importance of introducing frequency domain information to guide the imputation.
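To make the three mechanisms concrete, here is a minimal sketch of how such masks can be simulated on a toy series (the covariate `aux` and the 10-20% missing rates are illustrative assumptions, not the paper's experimental setup):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)      # toy series values (unobserved where masked)
aux = rng.normal(size=1000)    # an always-observed covariate (e.g. hour of day)

# MCAR: each entry is missing with a fixed probability, independent of the data.
mcar = rng.random(1000) < 0.1

# MAR: missingness depends only on the observed covariate (here: aux > 0).
mar = rng.random(1000) < np.where(aux > 0, 0.2, 0.0)

# MNAR: missingness depends on the unobserved value itself (here: x > 0).
mnar = rng.random(1000) < np.where(x > 0, 0.2, 0.0)
```

Under MAR and MNAR the mask is correlated with the signal or its covariates, which is why the dependencies a model relies on are more likely to be missing than under MCAR.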
**6. (L1) The contribution of the proposed method might be incremental**
Thank you for your comments.
**As far as we know, this is the first work to recognize the importance of paying special attention to the residual term when imputing time series, and to introduce frequency-domain information to guide generative models.**
We choose to implement FGTI through the diffusion model because diffusion models are currently demonstrating excellent performance in several fields [17,23].
The frequency domain filters and the cross-domain representation learning module can be flexibly ported to other generative models.
We believe this insight can shed light on potential future directions in time series imputation.
Therefore, we argue that our FGTI is not an incremental work.
References:
[a] MSTL: A Seasonal-Trend Decomposition Algorithm for Time Series with Multiple Seasonal Patterns
[b] A wavelet-based approach for imputation in nonstationary multivariate time series
[c] Rethinking general time series analysis from a frequency domain perspective
[d] FEDformer: Frequency Enhanced Decomposed Transformer for Long-term Series Forecasting
[e] Frequency-domain MLPs are More Effective Learners in Time Series Forecasting
[f] Learning Latent Seasonal-Trend Representations for Time Series Forecasting | Rebuttal 1:
Rebuttal: We sincerely thank all the reviewers for their thoughtful and constructive comments. We are greatly encouraged that they found our idea and contributions to be significant (Reviewers WWjA, qcDA, Q2t8 and SUMs) and technically sound (Reviewers WWjA, qcDA, SUMs). We are grateful that they identified our method as effective (Reviewers WWjA, qcDA, Q2t8 and SUMs) and our paper as well-written (Reviewers WWjA, qcDA, Q2t8). However, we believe several questions and concerns raised in the reviews still need to be addressed.
Meanwhile, we also revised the paper and the appendix according to the reviewers' valuable suggestions. The main changes are as follows:
- We add several competitive imputation methods (i.e., LaST, FreTS, TimeCIB) recommended by reviewers WWjA and qcDA in our experiments.
- Thanks to the comments from reviewers WWjA and SUMs, we add a section on experimental results for the CRPS metric.
- We also add a case study to the Appendix exploring the roles of the high-frequency condition and the dominant-frequency condition, in response to the comments of Reviewers WWjA and qcDA.
- We provide the experiment analysis of the mask ratio and mask pattern in the Appendix as recommended by Reviewer qcDA.
- We check the full manuscript thoroughly and improve the presentation and grammar as recommended by Reviewer qcDA and Q2t8.
- As shown in the attached pdf file, we update Fig. 1 according to Reviewer qcDA's suggestion and add a figure presenting the detailed architecture of the denoising network according to Reviewer Q2t8's comment.
Best,
Authors
Pdf: /pdf/421121c23e82f27b329037545c007047cdcccbbc.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Exploring the trade-off between deep-learning and explainable models for brain-machine interfaces | Accept (poster) | Summary: The study provides a rigorous comparison of four neural decoders typically used in BCIs: Kalman Filter, KalmanNet, tcFNN, and LSTM, in both offline and online conditions on an NHP performing a 2-degree-of-freedom dexterous finger task. Authors explore the trade-off between the decoding capabilities of these decoders and their explainability. Authors find that KalmanNet, which incorporates deep-learning techniques by estimating the Kalman Gain through GRUs, can attain performance comparable to fully "black-box" methods using LSTMs, showing that there exists methodology that retains the performance of deep-learning techniques without fully sacrificing the explainability of linear models. Furthermore, authors also analyze the behavior of the GRUs of the KalmanNet by analyzing the corresponding Kalman Gain, and show that the behavior of KalmanNet can be replicated by an explainable heteroskedastic Kalman Filter. Authors also discuss some limitations of these deep learning approaches (including KalmanNet) when dealing with out-of-distribution inputs.
Strengths: - The paper is well-written and easy to follow.
- The paper tackles an important problem of understanding the trade-off between the performance of "black-box" decoders with superior decoding abilities and simpler linear decoders with more explainability in the context of BCI.
- The authors provide rigorous comparison between 4 different decoders with varying levels of "black-box-ness" starting from the conventional linear Kalman Filter, to KalmanNet (which introduces non-linearity by estimating the Kalman Gain using a GRU architecture), and two conventional deep-learning approaches: LSTM & tcFNN on both online and offline data collected from NHPs.
- The authors are able to show that it is possible to adapt traditional linear approaches by incorporating deep learning approaches (i.e., KalmanNet) while still retaining some amount of explainability without sacrificing the superior performance.
- Authors are also able to explain the KalmanNet using the explainable Kalman Filter with heteroscedastic noise models.
- Authors also demonstrate that deep learning approaches (including KalmanNet) are typically unable to handle out-of-distribution inputs (a known problem in deep learning method).
- Authors also show a surprising result that LSTM are more robust to input noise compared to other methodologies tested, which is at the very least quite an intriguing result.
Weaknesses: - Please do not use $p>0.05$ as evidence to conclude that there is no significant difference (lines 221-230). The only scientific conclusion that can be made when $p>0.05$ is that using this test we *cannot conclude* that a significant difference exists not that there is not a significant difference (emphasis on conclude). Please run a proper statistical equivalence test using which it can be concluded that the two quantities being compared are equivalent.
- Please specify the statistical tests used whose p-values have been reported.
- The argument that there is no difference between the MSE in position between KF, KalmanNet, and LSTM based on p-values > 0.05 is misleading. From figure 2, it is clear that KalmanNet struggles to estimate the position.
- The error bars for online metrics are missing in Fig 2.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Why is the variance of MSE in the position so high in Figure 2?
- Just out of my curiosity, the KalmanNet seems similar in spirit to Extended Kalman Filter, where the Kalman gain is also updated using a known non-linearity in the dynamics of the system. I would like to hear authors' opinion on how extended Kalman filters where the non-linearity can be introduced in the dynamics compare against the methodologies tested in this work, particularly KalmanNet which seems to do something similar where the Kalman gain is updated non-linearly through a GRU. It might provide another avenue for understanding KalmanNet by analyzing if the non-linear trust introduced by the KalmanNet can be modeled as a non-linearity introduced in the dynamics of the system.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes authors adequately discuss the limitations of the work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for highlighting the strengths and main contributions of the paper as well as for suggesting specific avenues for improvement. We have addressed the statistical weakness brought up by the reviewer as well as responding to their questions below.
### Weaknesses:
We thank the reviewer for the various comments and suggestions regarding the statistical analyses of the paper, which we will add in greater detail in a revised main text. The p-values shown in the paper were the result of conducting paired-sample t-tests on the mean squared errors and on the Fisher Z-transform of the correlation values. From the reviewer's suggestion, we also added Bayes paired-sample t-tests for each of the comparisons, to give readers more information about the likelihood of the null hypothesis (models have similar performance) versus the alternative hypothesis (models differ in performance). Specifically, we propose adding the following after the last sentence in line 162, both to specify the statistical test we ran, as well as introduce the new Bayes factor analysis:
>To compare offline performance across models, we used a combination of frequentist and Bayesian statistical methods. First, paired-sample t-tests on the MSEs as well as in the Fisher Z-transformed correlation coefficients [1] determined whether the difference was significant ($\alpha =0.05$) under the null hypothesis assumption. Second, we computed the Bayes factor ($B_{01}$) to determine the ratios of likelihoods between the null and alternative hypotheses.
Additionally, we propose changing lines 221 through 230 to the following:
>In terms of correlation with velocity, which controls the visualization when running online, KalmanNet did not significantly differ from the LSTM ($p = 0.64, B_{01}=3.3$) and had significantly higher correlation than the KF ($p < 1E−7, B_{01}=0.048$) and the tcFNN ($p < 0.001, B_{01}=0.062$; Figure 2, B). All approaches had similar correlations with position, other than the tcFNN, which is a velocity-only approach. Regarding velocity MSE, there was also no significant difference between KalmanNet and the LSTM ($p = 0.72, B_{01}=3.4$) or between KalmanNet and the tcFNN ($p = 0.29, B_{01}=2.1$). However, KalmanNet significantly outperformed the KF ($p < 0.01, B_{01}=0.14$). In terms of position, there were no significant differences in correlation (KNet vs. LSTM, $p = 0.32, B_{01}=2.4$; KNet vs KF, $p=0.93, B_{01}=3.5$; LSTM vs. KF, $p=0.08, B_{01}=1.1$) or MSE (KNet vs. LSTM, $p=0.26, B_{01}=2.0$; KNet vs KF, $p=0.77, B_{01}=3.5$; LSTM vs. KF, $p = 0.1, B_{01}=1.1$), although KalmanNet had higher variance in MSE.
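As an aside, the frequentist half of the procedure described above (Fisher Z-transform followed by a paired-sample t-test) can be sketched in a few lines; the per-day correlations below are purely illustrative, not the paper's data:

```python
import numpy as np
from scipy import stats

# Hypothetical per-day velocity correlations for two decoders (illustrative only).
r_a = np.array([0.82, 0.79, 0.85, 0.80, 0.83])
r_b = np.array([0.81, 0.80, 0.84, 0.79, 0.84])

# The Fisher Z-transform (arctanh) makes correlation coefficients approximately
# normally distributed, which justifies a paired t-test on the transformed values.
z_a, z_b = np.arctanh(r_a), np.arctanh(r_b)
t_stat, p_value = stats.ttest_rel(z_a, z_b)
```

A Bayes factor comparing the null and alternative hypotheses can be obtained on top of this with dedicated packages (e.g., `pingouin`'s paired `ttest`, which to our knowledge reports a BF10 alongside the frequentist statistics).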
Finally, regarding the error bars representing the standard error of the mean of online metrics in Figure 3: they are present, but are very small in part due to the large number of trials we conducted. The trial numbers can be found in line 234, but for easier access to the reader we propose adding them to the end of the caption, as follows:
>Tested on monkey N across T=601 (KF), 576 (tcFNN), 2801 (KNet), 393 (LSTM) trials in a total of five days.
### High MSE variance in Figure 2:
We thank the reviewer for their question. The higher variance stems from KalmanNet having a bias in the position prediction on some days, which increases the MSE without decreasing the correlation. This effect most likely stems from bias shifts in the neural data, a known problem in brain-machine interfaces [2], which can greatly affect the bias in the linear observation model used in KalmanNet. However, when using KalmanNet online, the virtual hand is driven mostly by predicted velocity, with the predicted position serving only as a stabilization parameter (see equation in line 479).
### Similarity to EKF:
We thank the reviewer for this insightful question. The extended Kalman filter, as well as other varieties such as the unscented Kalman filter, are indeed similar to KalmanNet in the sense that they allow for non-linearities to be introduced either in the dynamics or in the relationship between sensor information and the tracked state. In a linear Kalman filter, the Kalman gain is computed by propagating the noise variance matrices through the linear system. In an extended Kalman filter, on the other hand, since the system is no longer linear, the Kalman gain is computed by linearizing the system at the predicted state at every time point, and then proceeding as in the linear Kalman filter. This linearization then depends on the state estimate, and the filter can diverge if the system is not linearized around the correct point. In contrast, KalmanNet computes the Kalman gain by implicitly estimating the noise of each information source from the differences between what it observes and what it predicts (see section A.2.1 of the Technical Appendix). Thus, the difference between the extended Kalman filter and KalmanNet lies in how the noise is propagated through the system, which determines how the Kalman gain is computed. This allows KalmanNet to switch much more quickly between which information source to trust, which proved beneficial in this application.
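To make the role of the gain concrete, a minimal scalar Kalman update (an illustrative sketch, not the paper's multivariate implementation) shows how the gain arbitrates between the dynamics prediction and the observation:

```python
def kf_update(x_pred, p_pred, y, r):
    """One scalar Kalman update: blend the dynamics prediction with the observation.

    x_pred, p_pred: predicted state and its variance; y: observation;
    r: observation noise variance. Returns updated state, variance, and gain.
    """
    k = p_pred / (p_pred + r)            # gain: how much to trust the observation
    x_new = x_pred + k * (y - x_pred)    # correct the prediction by the innovation
    p_new = (1.0 - k) * p_pred
    return x_new, p_new, k

# A noisy sensor (large r) drives the gain toward 0 (trust the dynamics);
# a clean sensor (small r) drives it toward 1 (trust the observation).
_, _, k_noisy = kf_update(0.0, 1.0, 5.0, r=100.0)
_, _, k_clean = kf_update(0.0, 1.0, 5.0, r=0.01)
```

In a linear KF, `r` (and hence `k`) is fixed by the assumed noise model; KalmanNet effectively lets a learned network modulate this trade-off at every time step.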
The reviewer also raises an interesting question about whether the non-linear trust introduced by KalmanNet could be modeled as non-linearities introduced in the dynamics. The heteroscedastic Kalman filter (HKF) model presented in the paper presents some evidence towards this: by making the noise on the observations covary with the velocity, the HKF could match the performance with KalmanNet but the HKF’s Kalman gain did not perfectly track the velocity (see Figure 4B). It is not clear whether introducing a non-linearity in the dynamics could make the Kalman gain perfectly track the velocity, but it is definitely an interesting avenue for further understanding KalmanNet and potentially using that to inform model development.
### References:
1. Meng, X. L., & Rubin, D. B. (1992). Biometrika.
2. Degenhart, A. D., …, & Yu, B. M. (2020). Nature BME.
---
Rebuttal 2:
Comment: I have read the review and have elected to keep my original score. I appreciate the extra analysis performed by the authors, but they seem to have missed the point of my comment. I would encourage the authors to read the following statement by the American Society for Statisticians on interpreting p-values, "Wasserstein, Ronald L., and Nicole A. Lazar. "The ASA statement on p-values: context, process, and purpose." The American Statistician 70.2 (2016): 129-133." Particularly, see point 5, "A p-value, or statistical significance, does not measure the size of an effect or the importance of a result." Reporting large p-values does not necessarily indicate that there is no "significant" difference. A large p-value could also result from not having enough samples to run the test with the required precision. The English-language claim that a large p-value implies no significant difference can be misleading and significantly increases the chance of misinterpretation. Let me illustrate through an example. The authors claim that there is no significant difference between the MSE of KalmanNet and LSTM in position, but looking at Fig 2B, it seems more likely that the MSE of the LSTM is much better than the MSE of KalmanNet. The test used by the authors is most likely not able to produce a smaller p-value simply due to a much larger variance of the MSE of KalmanNet (which is also a bad thing).
The misinterpretation of p-values is a significant problem in the scientific literature, leading to heated debates, and as "experts" in machine learning and statistics, we should do our utmost not to propagate bad practices regarding the interpretation of p-values or statistics in general. I would encourage the authors to be very careful when stating conclusions based on p-values and to carefully translate the "math" into "English" statements that cannot be misinterpreted.
---
Rebuttal Comment 2.1:
Comment: We thank the reviewer for their comment and want to note that we completely agree with all the points that they made. We apologize for not being more clear; we originally formulated our results with the underlying assumption of a minimum background of statistics for the average reader, but the reviewer is correct in that explicitly stating the meaning of p-values over the significance level can help the reader interpret our results better. We propose modifying the sentence starting in line 230 to the following:
>Overall, it is important to note that the absence of p-values below the significance level for position and velocity correlations and MSE between the LSTM and KalmanNet should not be interpreted as definitive evidence that the two models are equivalent. Instead, it indicates that the data did not provide strong enough evidence to conclude that the models were different. However, the Bayes factor ($B_{01}$) values between $2$ and $3.4$ suggest that the models may have comparable performance, with the null hypothesis (models are equivalent) being at least twice as likely as the alternative hypothesis (models are different). This suggests a higher likelihood that any observed differences are not substantial, though it remains essential to consider the variability and sample size when interpreting these results. | Summary: This paper addresses the trade-off between performance and explainability in brain-machine interface (BMI) decoders. The authors introduce KalmanNet, a novel decoding algorithm that combines the traditional Kalman filter (KF) with deep learning techniques, specifically recurrent neural networks (RNNs).
Key contributions:
1. Development of KalmanNet: A hybrid model that maintains the interpretable structure of the KF while leveraging the flexibility of RNNs to compute the Kalman gain dynamically.
2. Comprehensive evaluation: The authors conduct both offline and online experiments using neural data from two non-human primates performing a 2-DoF dexterous finger task. They compare KalmanNet against standard KF, tcFNN, and LSTM decoders.
3. Performance and explainability balance: KalmanNet achieves comparable or better performance than state-of-the-art deep learning models while maintaining a degree of explainability.
4. Behavioral analysis: The paper provides insights into KalmanNet's decision-making process, showing how it adjusts trust between the dynamical model and neural observations.
The paper demonstrates a promising direction for developing BMI decoders that balance high performance with interpretability, potentially enabling safer and more effective neural prosthetics. It also provides valuable insights into integrating control theory and deep learning in the context of neural decoding.
Strengths: 1. Originality:
- The paper presents a novel approach (KalmanNet) that creatively combines traditional control theory (Kalman filter) with modern deep learning techniques (RNNs).
- Using RNNs to compute Kalman gain dynamically is innovative and addresses a long-standing challenge in BMI decoder design.
- The introduction of the Heteroscedastic Kalman Filter (HKF) as an analytical tool to understand KalmanNet's behavior is an original contribution.
2. Quality:
- The experimental design is relatively comprehensive, including both offline and online evaluations, which is crucial for BMI applications. Also, the comparison with state-of-the-art methods (KF, tcFNN, LSTM) provides a robust benchmark for the proposed method.
3. Clarity:
- The paper is well-structured and follows a logical flow from problem statement to methodology, experiments, and conclusions.
4. Significance:
- The performance of KalmanNet, matching or exceeding deep learning models while maintaining some interpretability, represents a significant advancement in the field.
Weaknesses: 1. Limited generalization: The model's poor performance in new task contexts is a significant weakness. The authors could explore transfer learning techniques to improve the robustness and generalization capabilities of KalmanNet.
2. Insufficient comparison with state-of-the-art methods: The paper lacks comparison with more recent and advanced approaches, particularly transformer-based models which have shown superior performance in various domains. These models also offer various interpretability techniques that could be relevant to this work.
3. Lack of ablation studies: The paper does not provide a detailed analysis of the contribution of each proposed module. Comprehensive ablation studies would help understand the specific gains from different components of KalmanNet.
4. Inadequate analysis of interpretability: While improved interpretability is claimed as a key advantage of KalmanNet, the paper lacks a detailed analysis and concrete examples demonstrating this interpretability in practical scenarios.
5. Questionable paper structure: The absence of a dedicated related work section hinders reviewers' ability to quickly grasp the context of relevant prior work. Additionally, the discussion section is disproportionately long, and some of this space could be better utilized for more detailed experimental analysis and insights.
These weaknesses, if addressed, could significantly strengthen the paper and provide a more comprehensive evaluation of the proposed method in the context of current BMI research.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. KalmanNet shows higher sensitivity to injected noise compared to LSTM. Could you provide more insight into why this occurs?
2. Could you provide a more detailed analysis of KalmanNet's sensitivity to hyperparameter choices?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: The authors have adequately addressed the limitations in the checklist.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their very thorough review of the paper, for recognizing its main strengths and contributions, and for suggesting specific avenues for improving our work. We have addressed the weaknesses and questions raised by the reviewer below.
### Weaknesses:
First, in terms of a comparison with state-of-the-art models, we acknowledge that transformer-based models have surpassed other models, such as the LSTM, in many applications. However, in previous work [1], with a user in the loop, a transformer that works particularly well for this type of data offline did not perform as well as an LSTM online. This is most likely explained by the transformer overfitting to the offline brain data and having dynamics too complex for the user to control in a closed-loop setting. A recent theoretical study noted that transformers can accurately approximate a Kalman filter offline [2], but in practice a transformer may not properly discover the domain dynamics from training data alone. Another recent study has shown that transformers may be able to replace the GRUs in KalmanNet [3], which is of strong interest to our group for future work.
Regarding the generalization issue raised by the reviewer, note that while KalmanNet shows a large percentage increase in error when tested in a different context, its overall velocity error still falls below that of the Kalman filter and is comparable to that of the other deep-learning models (see Figure 6). None of these models were optimized for generalization, and multiple previous studies have found that generalizing to other tasks in the domain of brain-machine interfaces is a challenging problem [4-5]. In our work, we felt it was important to show that KalmanNet carries over the generalization disadvantages common in non-optimized deep-learning models, but, as the reviewer suggests, techniques such as transfer learning or data augmentation can greatly improve a model's generalization ability, and we plan to explore them in future work.
We thank the reviewer for suggesting ablation studies. The original KalmanNet paper (Ref #25 in main) did some ablation work by modifying the architecture and the state-space models, arriving at the architecture we used in our paper. We also did a high-level ablation study of our own by comparing the Kalman filter, KalmanNet, and the heteroscedastic Kalman filter (HKF). The only difference between the KF and KalmanNet is how the Kalman gain is computed, while the only difference between the HKF and the KF is that the noise model changes over time. With our work, we showed that the flexible modulation of the Kalman gain was the key to the better performance in KalmanNet, and we further proved that by emulating that behavior in a linear model. We have also included a new figure (Supplemental figure 2) in the Technical Appendix showing the sensitivity of KalmanNet to the sequence length used during training, also attached to the rebuttal.
We hope that the comments in the general rebuttal on the benefits of explainability help assuage some of the reviewer’s concerns. Additionally, we want to thank the reviewer for suggesting a section specifically about related work: we will condense the discussion and some of the introduction and introduce more details in relevant prior work. Specifically, some of the references we will include, grouped by topics are: Variations on KFs (e.g., adaptive, extended) for BMI applications [6-9]; deep learning models for BMI [5, 10-11] and Refs #16-19 from the main text; and model-based deep learning [12].
### Higher sensitivity to injected noise:
We thank the reviewer for their question. We have addressed this point in the general rebuttal, but briefly: we exposed the models to extreme, out-of-distribution, and non-Gaussian noise, which falls into the worst case for the Kalman filter-based models. The LSTM showing high robustness to this extreme noise was interesting and unexpected and we plan to explore it further in future work.
### Analysis of KalmanNet's sensitivity to hyperparameter choices:
We thank the reviewer for the suggestion. We propose including a more detailed sensitivity analysis on some key parameters during KalmanNet training: length of each sequence, learning rate, and training time. We have included a new figure in the supplement (Supplemental Figure 2) showing the variation in offline MSE across days for different sequence lengths during training, and we will include the learning rate and training time analyses upon publication. Note, however, that the main objective in BMI experiments is to perform well online with a user in the loop, which does not necessarily follow from offline results (Refs #19 and 35 from main text). Online experiments are difficult to perform, necessitating some use of offline data in parameter optimization. We found that the chosen hyperparameters struck a good balance between overfitting to the training data and performing well online.
### References:
1. Costello, J., ... & Chestek, C. (2024). NeurIPS, 36.
2. Goel, G., & Bartlett, P. (2024). 6th Annual Learning for Dynamics & Control Conference, 1502-1512.
3. Wang, J., Geng, X., & Xu, J. (2024). arXiv preprint arXiv:2404.03915.
4. Mender, M. J., ... & Chestek, C. A. (2023). Elife, 12, e82598.
5. Temmar, H., ... & Chestek, C. A. (2024). bioRxiv, 2024-03.
6. Li, Z., … & Nicolelis, M. A. (2009). PloS one, 4(7), e6243.
7. Dangi, S., …, & Carmena, J. M. (2011). 5th International IEEE/EMBS Conference on Neural Engineering, 609-612.
8. Tsui, C. S. L., Gan, J. Q., & Roberts, S. J. (2009). Medical & biological engineering & computing, 47, 257-265.
9. Malik, W. Q., …, & Hochberg, L. R. (2010). IEEE TNSRE, 19(1), 25-34.
10. Pandarinath, C., ... & Sussillo, D. (2018). Nature methods, 15(10), 805-815.
11. Sussillo, D., …, & Shenoy, K. (2012). Journal of neural engineering, 9(2), 026027.
12. Shlezinger, N., …, & Dimakis, A. G. (2023). Proceedings of the IEEE, 111(5), 465-499.
---
Rebuttal Comment 1.1:
Comment: You have resolved some of the concerns I had, and I am inclined to raise my score. However, there are still some issues that need further improvement in the final version.
---
Rebuttal 2:
Comment: Dear Reviewer JYj7,
The rebuttal stage deadline is coming soon. Please do not forget to engage in the conversation and let the authors know about your take on their rebuttal, and if appropriate update your score. Thanks for supporting NeurIPS.
Best, | Summary: - This approach studies a few approaches that can be used for neural decoding. The baseline approaches are blackbox DNNs and a vanilla Kalman filter. The proposed approach, the KalmanNet, is a hybrid model, in which a DNN is used to control the gain on a Kalman filter.
- Approaches such as Kalman filtering have the benefit of being more interpretable, at the potential expense of performance. Despite this they find comparable results with pure deep learning approaches.
- Other aspects of the trade-off between the "traditional" KF approach and the DNN approaches are explored.
- By examining the KalmanNet's predicted gain, the authors can measure at which time points the KalmanNet relied more on observation and when it relied more on the prior.
Strengths: - The paper is well written; it is of general interest to the BMI community to see the results of exploring this tradeoff
- The benchmarked models are relevant to the ones currently popular in the BMI field
- There are specific takeaways for future BMI decoding models given in the conclusion
- Data for evaluation seems to have been newly collected for this study (is this indeed the case?) which represents significant novelty
Weaknesses: - Overall, I think the technical novelty is a little limited. That is, the application of KalmanNet is a good engineering contribution, but the presented modifications to the KalmanNet seem mainly to adapt an existing model to this particular decoding task.
- The main contribution of this paper seems to be an empirical comparison of existing methods on a specific neural decoding problem. This might have limited significance to the broader machine learning community.
- It's mentioned that the KF has a safer operation (line 268). Can this still be said of KalmanNet, since the contribution of the KF can be zeroed out at any time by the network?
- If I read section 3.3 correctly, it seems that the KalmanNet is not as robust to injected noise as the LSTM, which is unfortunate. I don't think the authors should be penalized for disclosing this result. They should be commended. But it does hurt the significance of the proposed modified KalmanNet approach, which we might have expected to be more robust to noise.
Technical Quality: 2
Clarity: 2
Questions for Authors: - Line 93 mentions dynamics model $A$, but this does not appear in equation 1. Is it supposed to?
- What do online and offline refer to? Simply the presence of ground truth finger measurements?
- A basic question: what is the difference between modifying the Kalman gain using a network and building a black box network that takes the observations directly? Is it a difference in expressive power? Is it a difference in safety guarantees?
- A related question: what are the benefits of explainability in this context? What are some things that we get for knowing that we are relying on the observations?
- line 223: What is $p$? Pearson's $r$ correlation? The significance of the difference between approaches?
- Did any modifications to KalmanNet need to be made to accommodate this domain?
- Figure 5: Show the whole product of duration and magnitude?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Limitations are discussed adequately. Negative societal impacts are not discussed, but I think this is less relevant here.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed comments and suggestions, as well as for recognizing the main strengths of the paper. We have addressed the paper weaknesses brought up by the reviewer in the general rebuttal. Here, we will address each of the reviewer’s questions.
### Equation 1:
We thank the reviewer for noticing this oversight. We propose modifying the referred sentence to the following:
>The trainable parameters for the KF correspond to the linear observation model (C), the linear dynamics model, and the noise covariances of the state and the observations.
### Online and offline experiments:
We thank the reviewer for this question, which allows us to expand on one of the paper's key contributions. We refer to offline experiments as those in which we analyze, after the fact, the brain activity and finger kinematics recorded while the monkey performed the task under hand control. For these, we have the ground-truth finger measurements, as the reviewer stated, which means we can determine the prediction error of each model. Offline analysis of neural data is a domain in which these neural network tools are widely used and achieve very high performance.
In online experiments, there is a user in a control loop reacting moment by moment to how well the output movements match their desired movements. This is the domain for which Kalman filters are much more commonly used than neural networks due to their physically stable, easily controllable dynamics. In our application, we have the monkey control animated fingers with his brain signals in real-time using one of the tested decoders. We feed those signals every 50ms to the model we are testing and use that model to predict the desired finger velocity and then we move the virtual hand based on those predictions. The monkey sees the movement of the virtual hand and reacts to try to acquire targets as fast as possible. Offline performance is not necessarily predictive of online performance (Refs #19, 35 from main). The user’s reactions to movement at each timestep can generate brain activity that looks very different from the training data, consequently affecting the predictions. We saw this effect in this paper with the tcFNN model. Thus, our work included the very essential online experiments, beyond just offline modeling, to validate that the tested models would work in a real-life scenario.
To clarify the difference between these two experiment modalities, and relate them to the descriptions of hand and brain control, we propose adding a sentence at the end of the paragraph starting on line 129:
>Analyses of performance during hand and brain control trials are referred to as offline and online, respectively.
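For concreteness, the closed-loop ("online") update described above can be sketched as follows. This is an illustrative stand-in, not the actual experiment code: `decoder` and its `predict` method are hypothetical placeholders for any of the tested models (KF, KalmanNet, LSTM, tcFNN).

```python
DT = 0.05  # assumed 50 ms neural feature bin, per the rebuttal text

def online_step(decoder, neural_features, finger_pos):
    """One closed-loop ('online') step: decode a velocity from the
    latest feature bin and integrate it to move the virtual hand,
    which the monkey then reacts to on the next bin."""
    vel = decoder.predict(neural_features)  # decoded finger velocity
    return finger_pos + DT * vel            # new displayed hand position
```

The key point is that the user's reaction to each displayed position shapes the next bin of brain activity, which is why offline accuracy need not predict online performance.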
### Kalman gain with a network vs a black box:
We thank the reviewer for this question. The biggest difference between modifying the Kalman gain using a network and building a black box network that directly takes the observations lies in the framework in which KalmanNet operates. In KalmanNet, the network does not predict the velocities directly but rather just determines at every time point how much to trust the linear observation model versus how much to trust the linear dynamical model. This allows KalmanNet to incorporate domain knowledge in the state-space model, a classic advantage of using the Kalman filter in any application. In our case, this domain knowledge is reflected by our dynamics model, which models the physics of finger positions and velocities, and the observation model, which determines the relationship between brain measurements and finger kinematics, both of which have been informed by previous work (Ref #31, 36 in main). Thus, and also for safety, KalmanNet essentially never directly zeroes out the contribution of the Kalman filter, but rather adds additional flexibility to the model to choose whether to trust dynamics or sensor measurements at any time point.
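A minimal sketch of this distinction, under simplifying assumptions: `gain_net` is a stand-in callable for the trained recurrent network (the real KalmanNet computes the gain from several internal filter features), and `A` and `C` are the linear dynamics and observation models.

```python
import numpy as np

def kalmannet_style_step(x, y, A, C, gain_net):
    """One filter step in which a network supplies the Kalman gain.
    Illustrative only: the network never predicts velocities directly;
    it only sets how much to trust observations versus dynamics."""
    x_pred = A @ x                    # prior from the linear dynamics model
    innovation = y - C @ x_pred       # disagreement with the observation model
    K = gain_net(x_pred, innovation)  # learned gain replaces the Riccati recursion
    return x_pred + K @ innovation    # the state-space structure is never discarded
```

A zero gain reduces the step to pure dynamics, and a full gain to pure observation, which is exactly the trust trade-off described above.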
### Benefits of explainability:
Please see the general rebuttal for a thorough response to this question.
### p, Pearson's r correlation:
In line 223, the p represents the p-value of the difference between correlations. The value of 0.64 means we did not find enough evidence to conclude that the two models had different correlations. On the suggestion of another reviewer, we have also added a computation of the Bayes factors, to determine the ratio of likelihoods for the null versus alternative hypothesis in each comparison (please see response to reviewer xWsH for full explanation).
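For readers unfamiliar with this kind of comparison, one common way to obtain such a p-value is Fisher's z-transform test for two independent Pearson correlations; this sketch is illustrative and may differ from the exact procedure used in the paper.

```python
import math

def correlation_difference_test(r1, n1, r2, n2):
    """Two-sided p-value for the difference between two independent
    Pearson correlations via Fisher's z-transform (illustrative)."""
    z1, z2 = math.atanh(r1), math.atanh(r2)
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (z1 - z2) / se
    return math.erfc(abs(z) / math.sqrt(2.0))  # 2 * (1 - Phi(|z|))
```

A large p-value, as in the reported 0.64, means the data do not provide enough evidence that the two models' correlations differ.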
### Domain modifications to KalmanNet:
We thank the reviewer for this question. Our novel modifications to make KalmanNet work in this domain can be separated into two parts: the state-space model and the training. The biggest difference between the original KalmanNet and the one used in this work is that we created a new state-space model (dynamics and observation models) based on the specific domain of application (finger movements and brain sensor measurements). The state-space model used the structure described in section A.1 of the Technical Appendix, but briefly: we used a kinematic model of the physical position dynamics via velocity integration, and learned from the training data the parameters for the velocity dynamics as well as the relationship between observations and kinematics. For model training, we modified the base loss function to account for the different scales for our two predictors, position and velocity, and increased the input sequence length to improve performance and encourage smoothness in the output. To show this, we will include a new figure that shows the change in velocity MSE with different sequence lengths during training to the Technical Appendix, as seen in the attached PDF (Supplemental Figure 2).
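The kinematic part of the state-space model described above (positions integrating velocities) can be sketched as a block state-transition matrix. The scalar `damping` here is a simplifying stand-in for the velocity-dynamics parameters that the paper learns from training data.

```python
import numpy as np

def make_kinematic_dynamics(n_dof, dt=0.05, damping=0.9):
    """Illustrative transition matrix for a [positions; velocities] state:
    positions integrate velocities exactly, while velocities follow a
    linear model (here a scalar damping assumption)."""
    I = np.eye(n_dof)
    Z = np.zeros((n_dof, n_dof))
    return np.block([[I, dt * I],        # p_{t+1} = p_t + dt * v_t
                     [Z, damping * I]])  # v_{t+1} = damping * v_t
```

This kind of physically grounded dynamics block is the domain knowledge that the KalmanNet framework preserves.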
### Figure 5:
We thank the reviewer for the suggestion. We propose adding the values from the product of duration and magnitude as a figure to the Technical Appendix (Supplemental Figure 3), shown in the attached PDF.
---
Rebuttal Comment 1.1:
Title: Response to author rebuttal
Comment: I thank the authors for taking the time to answer my questions. I will keep my score and continue to recommend acceptance. I now better understand the explainability argument: the KalmanNet filter only controls a tuning parameter in the KF. Overall, I think the technical work represented by this contribution is thorough, but possibly of limited significance. The proposed state space model and training scheme seem fairly specific to this application. But I don't think these are strong reasons to reject, since it may be of some interest to the BCI community. | null | null | Rebuttal 1:
Rebuttal: We thank the reviewers for their helpful questions, suggestions, and generally supportive comments. Our work demonstrates the tradeoffs of using KalmanNet, an explainable algorithm that combines deep learning with the Kalman filter (KF), to predict finger movements from brain data. We have addressed the weaknesses and questions raised by the reviewers here and in each review. We believe that with the addition of the proposed modifications this work is greatly improved.
### Novelty and significance:
We appreciate the reviewers recognizing our novel approach and comprehensive evaluation of our decoders in online BMI tasks with newly collected data. We wish to clarify that, yes, all data collected during online tasks are new data specifically for this project and used for comparing the real-time performance of the 4 decoders, 2 of which are new (KalmanNet, HKF). Additionally, 3 of the 13 days used for offline evaluation were new data collected specifically for this project. Upon publication, we will release all data for public usage.
We will revise the explanation of our novel approach in the text to clarify that the primary novelty is in combining a state space model framework and a deep learning architecture for BMI applications. However, we also suggest that these results are significant for any controls application with a user in the loop, interacting with the physical world. KFs are widely used due to safety concerns in these applications, which may limit performance compared to state-of-the-art machine learning techniques. Our investigation of the behavior of KalmanNet during the finger movement task led us to create a new, explainable, and small linear model: a novel and significant advance for models with BMI applications. Thus, we show the benefit of potentially understanding the algorithm's “mechanisms”, which is not usually available for black-box models. Additionally, we demonstrated that explainable models do not necessarily perform worse than their pure ‘black-box’ counterparts with novel online experiments. Finally, we show an intriguing result in which an LSTM substantially outperformed both a normal KF and KalmanNet in the case of extreme noise injection, which was counter to our original intuition.
### Explainability:
Some reviewers raised concerns about the benefits of KalmanNet’s explainability. First, throughout robotics and controls, engineers generally choose “white-box” (explainable) approaches at the cost of performance, due to safety concerns stemming from “black box” approaches. This is important not only for brain machine interfaces which control a prosthesis or muscle stimulation, but for any robotics application interacting with the physical world. Autonomous vehicles make greater use of KFs than deep learning models, despite possible higher performance. Second, explainability is helpful for refining the design of algorithms. After observing KalmanNet’s behavior, we generated a simpler linear decoder (HKF), with only a fraction of parameters, that matched the performance of the deep learning model. Third, this result may even shed light on the mechanisms by which the brain drives motor output. KalmanNet learned to trust the brain for producing high velocities and trust dynamics for stopping. This is consistent with prior work in systems neuroscience recording from the motor cortex [1,2].
### Noise injection:
Reviewer 586k pointed out that we should not be penalized for disclosing the fact that KalmanNet has a weakness to extreme noise. Reviewer xWsH shared our view that this was a very intriguing result. Indeed, we performed this analysis because intuitively, one would expect the KF to do well in the face of noisy inputs, where it can rely on its dynamical model and avoid unsafe movements. The fact that the LSTM outperformed both KalmanNet and a regular KF in this regime is a very interesting result in our opinion.
This occurs because the KF architecture is subject to an underlying assumption of zero mean Gaussian noise and is known to suffer when this assumption is violated [3-4]. With KalmanNet, although we do not explicitly model the noise, the model is subject to the same assumptions. We exposed these models to extreme levels of noise, up to 100x the standard deviation of the neural data. We want to show this data for two reasons: First, neural signals are very small, and 100x errors are possible. Second, we want to highlight the surprising success of the LSTM for this difficult problem.
We have now included another figure (Supplemental Figure 1) that compares the MSE for velocity predictions when the noise is less extreme (up to 1x standard deviation). There, the difference between models is small, suggesting that when operating within the expected noise magnitudes, KalmanNet can match the LSTM performance. We will rephrase (line 273):
>Given the small magnitude of brain signals, noise artifacts can be much larger than the signal features of interest. We modeled those with extreme additions of noise ~100 times the standard deviation of the training data.
Pragmatically, this low robustness to out-of-distribution noise is also a known weakness of the KF and many techniques have been developed to address it. For example, the outlier insensitive KF (OIKF) [5] explicitly models outliers as random variables with unknown variance. The outlier robust KF (ORKF) [4], modifies the noise model to allow for non-Gaussian and heavy-tailed noise. Since KalmanNet works under the same framework, we could apply these techniques to improve KalmanNet’s robustness to noise.
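The noise-injection protocol described in this section can be sketched as follows; `decoder.predict` is a hypothetical stand-in for any of the tested models, and the scale grid is illustrative.

```python
import numpy as np

def mse_under_injected_noise(decoder, features, targets, scales, rng):
    """Corrupt neural features with zero-mean Gaussian noise whose std is
    `scale` times the per-channel std of the data (e.g. scale up to 100
    for the extreme condition, <= 1 for the milder sweep), and measure
    decoding MSE at each scale."""
    std = features.std(axis=0)
    out = []
    for s in scales:
        noisy = features + rng.normal(0.0, 1.0, size=features.shape) * (s * std)
        preds = decoder.predict(noisy)
        out.append(float(np.mean((preds - targets) ** 2)))
    return out
```

Plotting these MSE values against the noise scale reproduces the kind of comparison shown in the noise-robustness figures.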
### References:
1. Saleh, M., …, Hatsopoulos, N. G. (2010). Journal of Neuroscience, 30(50), 17079-17090.
2. Reina, G. A., ..., Schwartz, A. B. (2001). Journal of neurophysiology, 85(6), 2576-2589.
3. Huber, P. J. (1992). Breakthroughs in statistics: Methodology and distribution, 492-518.
4. Agamennoni, ..., Nebot, E. M. (2011). IEEE ICRA, 1551-1558.
5. Truzman, S., ..., Klein, I. (2023). ICASSP, 1-5.
Pdf: /pdf/e7b53ad8cbf864c691d52bbb1bc0a364e7d2acb3.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
3D Gaussian Splatting as Markov Chain Monte Carlo | Accept (spotlight) | Summary: This paper proposes a novel densification strategy of 3D Gaussian Splatting (3DGS) based on the Markov Chain Monte Carlo (MCMC) sampling scheme.
The authors address the ‘heuristic’ densification of standard 3DGS and adopt a distribution-aware resampling pipeline.
Consequently, they achieve higher rendering quality compared to standard 3DGS with a similar number of Gaussian primitives.
Strengths: This paper is well-written and easy to understand.
- The authors tackle the limitation of heuristic densification of standard 3DGS and suggest an MCMC sampling strategy for 3DGS. Also, the detailed analysis effectively supports their theoretical statements.
- It surpasses standard 3DGS in rendering quality while preserving the fast inference time and the same 3DGS representation format. Therefore, it can be used in various applications of 3DGS without any modification.
Weaknesses: - The additional computational costs are required for MCMC resulting in the increase of training time compared to standard 3DGS.
- They only propose a resampling strategy for 3DGS. Therefore, the technical contribution is limited despite the detailed analysis.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Although they have mentioned the computational time in Sec. B of the appendix, I cannot understand how much it takes compared to standard 3DGS. Can you provide the comparison in training time of this method and standard 3DGS for each scene?
- Although it has been described in L184-186, I am confused about the reason why adding noise to other parameters leads to harmful results. Can you describe more details about it?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: It only tackles the cloning strategy without addressing other problems involved in achieving higher rendering quality. Thus, the technical improvement seems limited.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive comments and suggestions.
Here are our responses to your comments:
## Technical contribution
We respectfully disagree, as our method cannot be summarized as a mere resampling strategy. It is a completely new take on the 3D Gaussian Splatting optimization that removes ALL the non-differentiable heuristics (and inherent training instability) that were introduced by the original authors: “opacity reset”, “densification”, "splitting", and “pruning”.
We have further shown that this is non-trivial to achieve, as one must carefully design components in a way that obeys the basic principles of hybrid MCMC. Thanks to our re-formulation, future works could integrate various advancements in perturbed Gradient Descent, as reviewer Cpa7 suggests.
## Additional compute cost
Please see the global response for the training time. In short, considering only the compute time for each Gaussian the added time is negligible. Ours does run slower if trained with a high opacity regularizer due to more transparent Gaussians, but it ultimately gives higher quality results, and it also can be tuned to **train faster while still outperforming 3DGS**.
## Noise to other parameters being harmful
There have been a number of recent papers demonstrating that the locations of the Gaussians suffer heavily from local optima, which leads to their reliance on good initializations [7, 8]. Hence, adding noise to these parameters helps escape these local optima and allows the Gaussians to “explore” the scene. Other degrees of freedom (rotation, scale, opacity) do not suffer as heavily, hence the noise simply slows down convergence unnecessarily. If the reviewer is interested, please see our answer to reviewer Cpa7 for additional experiments on the noise design.
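A minimal sketch of position-only exploration noise, under simplifying assumptions: the linear `1 - opacity` gate and isotropic noise here are illustrative stand-ins for the paper's sharper opacity schedule and covariance-shaped noise.

```python
import numpy as np

def perturb_gaussian_centers(means, opacities, lr, noise_scale, rng):
    """SGLD-flavored exploration noise applied to Gaussian centers only.
    Near-transparent Gaussians wander to explore the scene, while
    well-fit, opaque Gaussians stay put; rotation, scale, and opacity
    receive no noise, matching the design described above."""
    gate = 1.0 - opacities                 # assumed gating, for illustration
    eps = rng.standard_normal(means.shape)
    return means + lr * noise_scale * gate[:, None] * eps
```

Gating the noise by opacity is what keeps converged Gaussians stable while still letting poorly placed ones escape local optima.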
---
Rebuttal Comment 1.1:
Comment: Thank you for the responses. The authors have addressed all of my concerns in the rebuttal. Thus, I have decided to maintain my rating in support of acceptance. | Summary: The paper discusses improvements to 3D Gaussian Splatting in neural rendering. Current methods rely on complex cloning and splitting strategies for placing Gaussians, which often do not generalize well and depend heavily on good initializations. The authors propose rethinking 3D Gaussians as random samples from an underlying probability distribution of the scene, using Markov Chain Monte Carlo (MCMC) sampling. They show that 3D Gaussian updates can be converted into Stochastic Gradient Langevin Dynamics (SGLD) updates by adding noise. This allows for the removal of heuristic densification and pruning strategies, replacing them with a deterministic state transition of MCMC samples. Additionally, the authors introduce an L1-regularizer on Gaussians to encourage efficient usage. Their method improves rendering quality, provides easy control over the number of Gaussians, and is robust to initialization across various standard evaluation scenes.
Strengths: 1) This work is very insightful
2) Experiments are extensive and detailed
Weaknesses: /
Technical Quality: 3
Clarity: 3
Questions for Authors: /
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: /
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive assessment of our paper. Please let us know if you have any concerns or questions and we will try our best to respond.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' sincere effort and detailed experiments. I keep my original score. | Summary: Current 3DGS-based methods require carefully designed strategies such as cloning and splitting to assign a 3D Gaussian at a location. Further, they also require initializing points from SFM to generate high-quality novel views. The proposed work assumes that a set of 3D Gaussians are drawn from an underlying probability distribution, which is representative of the scene. Further, 3DGS updates are converted to SGLD updates by introducing noise.
The main contributions of this work are as follows:
- A fresh perspective that 3D Gaussians are sampled from a distribution and relocation strategy is compatible with MCMC samples.
- Robustness to initialization. 3DGS-MCMC is not dependent on the initialization step in 3DGS.
- The proposed method outperforms other NeRF-based methods and 3DGS on standard datasets.
Strengths: - **Qualitative and Quantitative Results:** The proposed method is evaluated on NeRF synthetic, Tank&temples, Deep Blending, MipNeRF360 and OMMO dataset. Quantitatively, 3DGS-MCMC outperforms NeRF-based methods and 3DGS. Qualitatively, the novel-views from 3DGS-MCMC are sharp compared to the 3DGS method (Fig. 2). This is due to the MCMC formulation proposed in this work, which allows exploration.
- **High performance compared to 3DGS with a limited budget (L278-297):** The authors show an interesting experiment where they limit the budget for the number of Gaussians during optimization. As expected, 3DGS has a significant drop in performance, whereas the performance drop in the proposed method is limited. Notably, there is a difference of 4 dB when the maximum number of Gaussians is set to 100k.
- Unlike 3DGS, the proposed method is not sensitive to the initialization. This robustness to initialization is illustrated in Tab. 2. When a camera extent of $1\times$ is used, 3DGS achieves a PSNR of 22.72 dB, whereas 3DGS-MCMC achieves a PSNR of 29.64 dB. This result shows that the proposed method is robust to the initialization. Also, this substantiates the exploration claim proposed in the paper.
- **Exhaustive ablation for all the key design choices:** In Tab. 3, the authors present an ablation on the regularizers for Gaussians. Interestingly, in this framework, regularization of the Gaussian parameters improves the performance, whereas it is harmful in the 3DGS framework. Further, noise in the update step allows for more exploration. This claim is further substantiated in Fig. 4. Finally, when noise is used for all the parameters, overall performance drops slightly.
Weaknesses: - The proposed method exhibits robustness to the initialization step. However, is there a reduction in training time between the version using SFM points and the version using random initialization?
- **Missing evaluation on Scannet++[A1] dataset:** It is a large-scale dataset for indoor scenes. It will be interesting to see how this method performs on this challenging dataset. It is difficult to perform this experiment in a short duration of time. I leave it to authors to decide if they want to include in their manuscript.
- **Training Time:** The proposed method generates very high-quality novel-views. However, this comes with an added training cost. 3DGS-MCMC takes 90 minutes for 1M Gaussians, whereas 3DGS can be optimized within 30 minutes. The authors mention in L467-468 that a CUDA implementation can accelerate the proposed method.
[A1] Yeshwanth, C., Liu, Y.C., Nießner, M. and Dai, A., 2023. Scannet++: A high-fidelity dataset of 3d indoor scenes. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 12-22).
Technical Quality: 4
Clarity: 4
Questions for Authors: - L40-41: "lead to .... waste compute."Can "poor-quality renderings" be substantiated by some examples? Also, can the authors elaborate why it is a wasted compute? InstantSplat[7] reconstructs an unbounded scene with sparse views in under 40 seconds on a commercial GPU.
- Does this method accurately represent the surface better than 3DGS? Can we extract a high-quality mesh from 3DGS-MCMC? Will it be better than recent works such as 2D Gaussian Splatting[A2]?
[A2] Huang, B., Yu, Z., Chen, A., Geiger, A. and Gao, S., 2024. 2d Gaussian splatting for geometrically accurate radiance fields. arXiv preprint arXiv:2403.17888.
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The authors have discussed the limitations of their method in Appendix D in the supplementary material.
As such, this work has no societal impact. However, some downstream applications can have a societal impact. The authors have discussed this in Appendix E.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for sharing the enthusiasm that we have for our method.
Here is our response to your comments:
## Training time between SfM points version vs Random points version
SfM has faster convergence, but to obtain the best PSNR performances both were run for about the same amount of wall-clock time. However, note our primary objective was not to improve convergence, but to reduce reliance on good initialization and also remove the heuristics. We believe the latter two to be critical in situations where initialization could be difficult, such as 3D generative modeling or dynamic scene reconstruction.
## ScanNet++
Indeed this would be very interesting. We were not able to run these experiments due to the resource crunch during the rebuttal period, but we hope to be able to add these results for the camera ready.
## Training time
Please see the global response for the training time. In short, 90ms was *milliseconds, not minutes* and considering only the compute time for each Gaussian the added time is negligible. Ours does run slower if trained with a high opacity regularizer due to more transparent Gaussians, but it ultimately gives higher quality results, and it also can be tuned to **train faster while still outperforming 3DGS**.
## L40--L41 Suboptimal placements leading to wasted compute and poor quality renderings
Our existing experiments show that existing relocation heuristics place Gaussians at worse locations than ours, leading to lower rendering quality (this is especially true when random initialization is used). Regarding *wasted compute*, we wanted to highlight the fact that Gaussian splitting heuristics become unnecessary, but we realize that this can be misinterpreted and will remove this comment.
## Surfaces
This is a great suggestion. While we were not able to finish this during the short rebuttal period as it involves setting up a new evaluation pipeline, we will investigate this in the future.
---
Rebuttal Comment 1.1:
Comment: I have reviewed the rebuttal and my additional comments are as follows:
- First, I initially misunderstood "ms" as minutes. My mistake. According to the provided table, the proposed method takes 12 minutes longer than 3DGS. However, if fewer Gaussians are used, for example, 300k, the method converges in 21 minutes with a similar PSNR to 3DGS. This demonstrates the high efficiency of the proposed method.
- Secondly, I hope these results, along with any additional experiments, will be included in the supplementary materials of the final camera-ready version.
---
Reply to Comment 1.1.1:
Comment: Thank you for the comments! and thanks again for acknowledging the efficiency of our method.
Regarding your second point, we will for sure include them in the camera ready or the supplementary (if we run out of space)
Thanks,
Authors | Summary: The paper presents a simple and effective method to enhance the training of 3D Gaussian Splatting (3DGS). It offers two main contributions. First, it demonstrates that adding carefully designed noise to the Gaussian centers after each gradient step can boost the performance of 3DGS. This encourages more exploration, which is especially helpful when Gaussian centers are randomly initialized. Second, the method performs densification by replacing low opacity Gaussians with clones of Gaussians sampled through multinomial sampling of the "live" ones based on their opacity values. The parameters of the Gaussians are adjusted to minimize the impact on the rendering outcome. The method is tested on both synthetic and real datasets and shows better performance compared to the 3DGS baseline, regardless of whether the Gaussian centers are randomly or "SFM" initialized.
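The relocation move summarized above can be sketched as follows. This is illustrative only: the full method also adjusts the cloned Gaussians' parameters so the rendered image is approximately unchanged.

```python
import numpy as np

def relocation_targets(opacities, threshold, rng):
    """'Dead' Gaussians (opacity below threshold) are reassigned to
    live ones drawn by multinomial sampling proportional to opacity,
    replacing the heuristic clone/split/prune moves of standard 3DGS."""
    dead = np.flatnonzero(opacities < threshold)
    live = np.flatnonzero(opacities >= threshold)
    probs = opacities[live] / opacities[live].sum()
    targets = rng.choice(live, size=dead.size, p=probs)
    return dead, targets  # each dead Gaussian is moved onto its target
```

Because sampling is weighted by opacity, capacity is steadily reallocated from transparent Gaussians toward those contributing most to the rendering.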
Strengths: - The paper tackles an important problem (optimization, cloning and splitting strategies for placing Gaussians) and introduces relevant concepts and ideas to analyze it.
- The paper proposes a simple and easy-to-implement way to improve the optimization of Gaussian positions by encouraging more exploration.
- The proposed relocation strategy works effectively and pairs nicely with the L1-regularizer on the Gaussians.
- **Evaluation**: The evaluation is performed on various types of scenes (Nerf Synthetic, MipNeRF 360, Tank & Temples, Deep Blending, OMMO) and performance is reported with and without "SFM" initialization. Reporting the average over 3 runs and the corresponding standard deviation is a good practice for the reliability of the results.
- **Performance**: The provided results show that the proposed method improves over the 3DGS baseline. More importantly, the method obtains competitive results without initializing the Gaussians with SFM points.
Weaknesses: - **Relevance of MCMC framework**: Adding noise to the parameters [1] or to the gradients [2][3] is a common practice in optimization to escape from saddle points and find local minima. Since the training relies on momentum-based optimizers, I believe the proposed method is more closely related to Perturbed Gradient Descent methods [4][5] or noise injection methods [1] [2] [3] than to MCMC methods. For a proper momentum-based update, noise would typically be added to the momentum estimate instead of the parameters. This doesn't make the method any less relevant or novel. **However, it does make me question the relevance of the MCMC framework that the paper is built around. In which part of the analysis or the method is this framework necessary?**
- **The design of the noise term**: One important contribution of the paper is the design of the noise term. It would be great to have more explanations and ablations about these design choices. Why is this particular choice important for the method? Additionally, how does it compare to simpler noise schedules such as the one in [2]?
- Concerning the update in equation 9 of the paper, please clarify what assumptions and approximations are needed for this derivation and in which case they are no longer valid.
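For concreteness, the two noise placements contrasted in the first weakness (noise injected into the parameters after the update, vs. noise folded into the momentum estimate as in perturbed-gradient-descent variants) can be sketched as follows; all names and constants are illustrative:

```python
import numpy as np

def step_param_noise(x, m, grad, lr=0.1, beta=0.9, sigma=0.01, rng=None):
    """Heavy-ball step, then noise injected directly into the parameters
    (the placement the reviewed method effectively uses)."""
    rng = rng or np.random.default_rng(0)
    m = beta * m + grad
    return x - lr * m + sigma * rng.normal(size=x.shape), m

def step_momentum_noise(x, m, grad, lr=0.1, beta=0.9, sigma=0.01, rng=None):
    """Same step, but noise folded into the momentum estimate instead."""
    rng = rng or np.random.default_rng(0)
    m = beta * m + grad + sigma * rng.normal(size=x.shape)
    return x - lr * m, m
```

With `sigma=0` the two variants coincide; with noise enabled, the momentum-noise variant low-pass filters the perturbation through the momentum buffer.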
[1] Mark Steijkvers. A recurrent network that performs a context-sensitive prediction task. 1996.
[2] Neelakantan et al., Adding gradient noise improves learning for very deep networks. 2015.
[3] Deng et al., How shrinking gradient noise helps the performance of neural networks. 2021.
[4] Jin et al., How to Escape Saddle Points Efficiently. 2017.
[5] Jin et al., On Nonconvex Optimization for Machine Learning: Gradients, Stochasticity, and Saddle Points. 2021.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Relevance of the MCMC framework (See Weaknesses).
- How is the term designed and how does it affect the performance?
- What assumptions and approximations are needed for the derivation of equation 9?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors adequately addressed the limitations in the Appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive comments and suggestions.
## Relevance of the hybrid MCMC framework
Our work is indeed related to gradient descent methods with noise and perturbations, as suggested. Our motivation for interpreting our framework as a hybrid MCMC method stems from the necessity of “jump” moves, and, admittedly, from our greater familiarity with the MCMC literature than with the works you pointed us to. As you suggest, we will extend our related work with the suggested literature in a revision. However, note that because much of the modeling space (the scene) is empty and Gaussians should only be located near surfaces, it is critical that Gaussians are moved toward these surfaces quickly. Under the MCMC formulation, such moves can be naturally integrated as “jump” or “resampling” moves [A, B], which we leverage.
Regarding noise in the momentum component, this is clearly an interesting direction to explore. We are excited that our reformulation seems to be opening doors to various design choices that were originally not possible due to the heuristics involved in the optimization, such as the abrupt reset of Gaussian opacities or the manual splitting of large Gaussians. These heuristics clearly break any hope of formally studying the convergence characteristics of the method; in our work we remove all of them, which will hopefully lead to more work on this topic.
[A] Lindsey et al., “Ensemble Markov chain Monte Carlo with teleporting walkers”, SIAM/ASA Journal on Uncertainty Quantification. 2022
[B] Green, “Reversible jump Markov chain Monte Carlo computation and Bayesian model determination”, Biometrika, 1995
## Design of the noise term
In Table 3 we already provided a brief comparison with an alternative noise strategy where we add noise also to the other Gaussian parameters. We have further tested without the covariance term and without the opacity term on the *Tanks and Temples* dataset (chosen due to the short amount of time available for the rebuttal). We report the results in the table below:
| | No Covariance | No opacity* | With both |
| ------------ | ------------- | ---------- | --------- |
| PSNR - Train | 21.29 | 21.68 | 22.40 |
| PSNR - Truck | 25.04 | 23.27 | 26.02 |
| PSNR - Avg. | 23.16 | 22.47 | 24.21 |
Having both covariance and opacity is important to achieve the best performance:
- Without the covariance term, the noise is difficult to “undo” with gradients for narrow Gaussians, which causes the Gaussians to random-walk and no longer represent the desired distribution.
- Without the opacity term, opaque Gaussians that are significantly contributing to the reconstruction loss would not be able to converge/stabilize, leading to a loss of high-frequency details.
Further, note that in the table above we had to use $\lambda_{noise}=5\times10^2$ for “No opacity*”, because with the same hyper-parameters as in the paper the model does not train at all (PSNR < 7).
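A minimal sketch of the gating behavior described in the two bullets above: noise shaped by each Gaussian's covariance and gated by its opacity. The gating constants `k` and `t` and the exact functional form are hypothetical, not reproduced from the paper:

```python
import numpy as np

def position_noise(cov, opacity, lr, lam_noise=1.0, k=100.0, t=0.005, rng=None):
    """Positional noise shaped by each Gaussian's covariance (so narrow
    Gaussians take small steps) and gated by opacity (so opaque,
    converged Gaussians are barely perturbed). k and t are hypothetical."""
    rng = rng or np.random.default_rng(0)
    eps = rng.normal(size=(cov.shape[0], cov.shape[-1]))
    gate = 1.0 / (1.0 + np.exp(k * (opacity - t)))  # ~0 once opacity >> t
    return lam_noise * lr * gate[:, None] * np.einsum('nij,nj->ni', cov, eps)
```

Under this sketch, an opaque Gaussian receives vanishingly little noise while a nearly transparent one keeps exploring, matching the intuition given for the two terms.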
## Noise term scheduler
As suggested, we also tried the scheduling from [2], and we additionally tested a simple linear scheduler. Again, we ran our experiment on the *Tanks and Temples* dataset. As shown in the table below, our exponential decay scheduler (the same scheduler as in the original 3DGS paper) works best:
| | Noise Scheduler in our paper | Best Noise scheduler in [2] | Linear scheduler |
| ------------ | ---------------------------- | --------------------------- | ---------------- |
| PSNR - train | 22.40 | 21.86 | 15.37 |
| PSNR - Truck | 26.02 | 23.06 | 19.90 |
| PSNR - Avg. | 24.21 | 22.46 | 17.64 |
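For reference, the exponential decay referred to above can be written as a log-linear interpolation between an initial and a final rate (the kind of schedule used for positions in the original 3DGS code), compared here against a plain linear ramp; the rate values are illustrative:

```python
import numpy as np

def exp_decay(step, total, lr_init=1.6e-4, lr_final=1.6e-6):
    """Log-linear interpolation between an initial and a final rate."""
    t = np.clip(step / total, 0.0, 1.0)
    return float(np.exp((1 - t) * np.log(lr_init) + t * np.log(lr_final)))

def linear_decay(step, total, lr_init=1.6e-4, lr_final=1.6e-6):
    """Plain linear ramp between the same endpoints."""
    t = np.clip(step / total, 0.0, 1.0)
    return float((1 - t) * lr_init + t * lr_final)
```

At any intermediate step the exponential schedule sits below the linear one (geometric vs. arithmetic interpolation), i.e., it anneals the noise much more aggressively early on.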
## Derivation of Eq. 9
The detailed derivation of Eq. 9 is provided in Appendix A. The main approximation concerns the integral of the squared difference between the two distributions: instead of this full integral, we minimize the difference between the individual integrals projected into each view, which is inspired by sliced Wasserstein methods [17]. As the color values that we integrate over are always non-negative, there are no underlying assumptions that could break.
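As a side note on rendering-preserving relocation, one simplified special case can be checked numerically: if a Gaussian of opacity $o$ is replaced by $N$ identical co-located clones, giving each clone opacity $1-(1-o)^{1/N}$ leaves the alpha-composited contribution unchanged. This is only the co-located opacity case, not the paper's full derivation:

```python
import numpy as np

def clone_opacity(o, n):
    """Opacity for each of n co-located clones so that alpha compositing
    reproduces the original Gaussian's contribution o."""
    return 1.0 - (1.0 - o) ** (1.0 / n)

o, n = 0.7, 4
composited = 1.0 - (1.0 - clone_opacity(o, n)) ** n  # contribution of n clones
```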
---
Rebuttal Comment 1.1:
Comment: Thank you for the responses. The authors have addressed my concerns in the rebuttal. I think the noise ablation should be included in the paper.
Concerning the relevance of the hybrid MCMC framework, I understand how the relocation of Gaussians can be seen as “jump” moves [B]. However, I don't see how this makes sure that "Gaussians are moved toward these surfaces quickly", since the “jump” moves happen between "states", which depend on how the relocation is performed.
Overall, I think this is a good contribution and the authors did a good job addressing my concerns. I have decided to maintain my rating in support of acceptance. | Rebuttal 1:
Rebuttal: We are glad to see that all reviewers are positive towards our paper.
Reviewers commend the effectiveness of our method (**Cpa7**, **jFyG**, **cJQR**), especially when random initialization is used, and the thoroughness of our ablations (**jFyG**, **cJQR**).
They also acknowledge the ease of use of our method as a drop-in to existing Gaussian Splatting methods (**jFyG**, **cJQR**).
Please see the individual rebuttals for reviewer-specific responses.
## Training time (**jFyG** and **cJQR**)
First of all, we would like to clarify that what we provided in the Appendix for training time is 90ms (milliseconds) and *not 90 minutes* (**jFyG**).
To provide an exact comparison, we took our Gaussians trained at 1M Gaussian, and re-measured the single optimization iteration time for our method and the original 3DGS. In this case, ours takes 80 milliseconds while 3DGS takes 76 milliseconds. That is, the added time for sampling and noise addition is not substantial, even with our implementation.
Still, to achieve the PSNR reported in the paper, our method does take longer. This is because the configuration of Gaussians (i.e., where they are, their sizes, and their opacities) greatly affects the runtime, as it determines the speed of rasterization (the opacity regularizer affects speed the most). For the “room” scene in the MipNeRF 360 dataset, we find the following timings, all with SfM initialization and at most 1.5M Gaussians per the original 3DGS implementation:
| | opacity regularizer ($\lambda_o$) | PSNR | Total Training Time |
| ------------ | ------------- | ---------- | ---------- |
| 3DGS | -- | 31.7 | 25 minutes |
| Ours (paper) | 0.01 | 32.5 | 42 minutes |
| Ours | 0.001 | 32.4 | 30 minutes |
| Ours (300k Gaussians) | 0.01 | 31.8 | 21 minutes|
Note how our method still **outperforms 3DGS regardless of our choice**. We found $\lambda_o=0.01$ to work well in various scenarios, including when using random initialization (with the lower $\lambda_o=0.001$, random initialization performs slightly worse: 31.7 vs. 32.3), thus we report performance with 0.01. Finally, when using fewer Gaussians, our method **still outperforms 3DGS and trains faster**.
Due to limited compute during the rebuttal period, we were unable to provide the full per-scene compute times, but we will include them all in our camera-ready version. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Federated Graph Learning for Cross-Domain Recommendation | Accept (poster) | Summary: This paper introduces an innovative federated CDR framework with two key modules tailored to privacy preservation and negative transfer. For privacy, it presents a solid theoretical guarantee. For negative transfer, it generates domain attentions via virtual social links and conducts a fine-tuning stage to filter source-domain knowledge. With a comprehensive empirical evaluation, it shows the superiority of the proposed framework.
Strengths: 1. Innovation: Utilizing the HVH structure and Gaussian noise to ensure all-round privacy, GAT with virtual links to generate domain attentions, and a comprehensive objective function combining prediction loss, mapping loss, and social regularization loss seems technically sound.
2. Theoretical analysis: The paper provides a robust theoretical analysis. The detailed explanation of how transferred high-order embeddings are protected from inference attacks significantly strengthens the paper's theoretical framework.
3. Generally, the proposed method is complex but this paper is well-organized and easy to follow.
4. Experimental demonstrations: The authors have conducted extensive experiments to validate the effectiveness of the proposed method, including the performance comparison, the ablation study, the privacy budget study, and an additional dual-domain study. The comparison of overall performance validates the superiority of the proposed model. The ablation study validates the effectiveness of the two key modules, and the privacy budget study demonstrates the balance between privacy preservation and model performance. The dual-domain study proves that the proposed model can also cope with traditional scenarios.
Weaknesses: 1. In Figure 2, the use of the terms "extended" and "expansion" appears to be inconsistent. Consistent use of terminology throughout the paper will aid the reader's comprehension and the overall clarity of the presentation.
2. The authors have conducted extensive experiments on the Amazon dataset as detailed in the experimental section. However, as the authors themselves acknowledge in the limitation section, the generalization performance of their framework has not been sufficiently validated. Given this, and despite the fact that the experiments may not align with the authors' definition of BD-CDR, it is recommended that the authors consider conducting experiments on other datasets (e.g., Douban) to further verify the generalizability of their framework. This additional testing would strengthen the paper's claims regarding the robustness of the proposed approach.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. From the perspective of privacy, DP is widely used in previous method. What are the privacy innovations in this paper? What is the unique technical contribution between this work and existing works on privacy-preserving CDR?
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Please refer to weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to sincerely thank you for your positive evaluation and valuable comments for improvement.
**W1:** The use of the terms "extended" and "expansion" appears to be inconsistent.
**Response:** We agree and **we will use *expand/expansion* uniformly in the paper’s updated version.**
**W2:** The generalization performance of the framework has not been sufficiently validated.
**Response:** We agree that it is important to verify the generalization performance of the model on other datasets. Thus, we conduct new experiments on the Douban dataset to validate the generalization capability of the proposed method. As shown in Table u2 in the **Global Rebuttal** PDF file, our method yields both optimal and sub-optimal results on almost all metrics. **We will add these new results in the updated version**. For details, please refer to the **Global Rebuttal** with its PDF attachment.
**Q1:** What are the privacy innovations in this paper?
**Response:** We provide full-range privacy protection, both intra-domain and inter-domain. In our work, we include two horizontal federation phases and DP (line 177, **Privacy-preserving knowledge extraction**) to fully guarantee intra-domain privacy and inter-domain privacy. In previous work, methods focusing on firm-level privacy tend to protect inter-domain privacy through privacy techniques (DP, projection methods, etc.), while methods focusing on user-level privacy tend to consider only intra-domain privacy (embeddings from different domains are passed in plaintext). In the **Appendix** of the paper's previous version, we provided a theoretical proof of privacy preservation (**Appendix A**) and experimentally verified the model performance under different privacy budgets (**Appendix D.3**). Our approach considers full-range privacy protection, which distinguishes us from previous approaches, and outperforms all baselines. | Summary: This paper presents a novel federated framework for CDR, FedGCDR, addressing privacy preservation and negative transfer between domains. Its key strengths include a solid theoretical foundation analyzing DP-based privacy preservation and a novel, dynamic attention generation method to mitigate negative transfer. By tackling both the privacy concern and the potential negative transfer problem, this work makes a valuable contribution to the field.
Strengths: 1. Tackling two Critical Issues for cross-domain recommendation: The paper addresses crucial challenges for privacy preservation and potential negative transfer. By following a horizontal-vertical-horizontal FL pipeline and adopting the Gaussian mechanism, the paper ensures the privacy of both intra-domain and inter-domain. By the graph expansion, the paper generates domain attention to filter harmful info from multiple source domains. Addressing these two issues is essential for fostering a wide range of participation and maintaining a healthy balance between domain involvement and model performance, ultimately contributing to the overall sustainability and scalability of CDR.
2. Novel Approach: The proposed FedGCDR approach is novel in several aspects. First, the problem formulation itself, which focuses on mitigating negative transfer under the privacy constraint, is a departure from existing methods that transfer directly under the assumption of data sparsity and well-chosen domains. Second, the theoretical analysis of the reliability of the Gaussian mechanism protecting the high-order embeddings provides valuable insights and paves the way for secure transfer between domains. Third, the incorporation of GAT as a tool both for mining domain knowledge and for generating dynamic domain attention to filter potentially harmful information is a novel way to address the negative transfer issue, which has been a long-standing challenge in CDR.
3. Theoretical Guarantees and Practical Considerations: The paper presents a solid theoretical foundation by analyzing the Gaussian mechanism used to protect the high-order embeddings output by the GAT. Furthermore, by combining horizontal FL with vertical FL, knowledge transfer down to edge devices is realistic and avoids the additional overhead required for the vertical process.
4. Comprehensive Empirical Evaluation: The paper's strength lies in its comprehensive empirical evaluation across sixteen widely used domains of the Amazon dataset, spanning sub-datasets with different numbers of domains. Besides the overall model performance comparison, the paper compares the ability to mitigate negative transfer using the defined concepts of ‘soft negative transfer’ and ‘hard negative transfer’. The target-domain settings with different data quality further demonstrate the generalization of the model and the irrationality of the previous methods' direct transmission.
Weaknesses: 1. The paper addresses the broader-domain CDR involving more than three domains. While multi-domain CDR is undoubtedly significant and has been the subject of extensive research, the emphasis on the number "three" is noted. Could the authors clarify the significance of this number in their work? Does this particular emphasis distinguish their research from existing studies? If not central to the research, it is suggested that the authors reconsider the emphasis on this number to avoid potential misunderstandings or unnecessary focus.
2. By introducing multiple social regulation terms, yet the authors have chosen one specific term (as shown in Equation 9). It would be beneficial for the authors to provide an explanation for this selection. Is this choice based on theoretical analysis, empirical results, or comparative study with existing literature? Clearly articulating the basis for this choice will help readers understand its importance and applicability.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. This article mainly addresses the privacy problem from the user's perspective; can the proposed framework be extended to more realistic company-level applications?
2. Negative transfer is an important problem in CDR as well as in transfer learning. However, facing the ‘when to transfer’ problem [1] in CDR, simply stopping the transfer when negative transfer occurs seems to be a potential method. Why can we not adopt this simple method and avoid such a complex attention generation process?
[1] Pan S J, Yang Q. A survey on transfer learning[J]. IEEE Transactions on knowledge and data engineering, 2009, 22(10): 1345-1359.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Please refer to the weak points.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your valuable comments and suggestions. We hope our response addresses your concerns.
**W1:** Why addresses the broader-domain CDR involving more than three domains.
**Response:** Because solving the cross-domain recommendation problem with more than three participating domains is very important for better mining user preferences. In existing work on cross-domain recommendation, researchers tend to focus on only two domains. These works often do not generalize well to multi-domain tasks. Some studies have begun to focus on multi-domain knowledge transfer, but they are also limited to only three domains [1]. We believe that these studies do not fully reflect the nature of multi-domain settings, while paying insufficient attention to the privacy and negative transfer challenges. These two challenges become more severe as the number of domains grows, in line…. We therefore emphasize a number of domains greater than three in the article to distinguish our method from previous works. As shown in Table 1 and Table 2 of the paper’s previous version, our methods (FedGCDR, FedGCDR-DP) outperform all baselines under the settings of 4 domains, 8 domains, and 16 domains.
[1] Liu W, Chen C, Liao X, et al. Federated Probabilistic Preference Distribution Modelling with Compactness Co-Clustering for Privacy-Preserving Multi-Domain Recommendation[C]//IJCAI. 2023: 2206-2214.
**W2:** Why choose the special social term in Soreg.
**Response:** We use the social term in Equation 9 because it better exploits the user's interests. In Soreg, the authors propose two innovative social terms, as follows:
$(1)\sum_{i=1}^m || U_i -\frac{\sum_{f \in \mathcal{F}^+(i) } Sim(i,f) \times U_f}{\sum_{f \in \mathcal{F}^+(i) } Sim(i,f)}||^2_F$
$(2)\sum^m_{i=1} \sum_{f \in \mathcal{F}^+(i)}Sim(i,f)||U_i-U_f||^2_F$
where, according to Soreg, the second formula “*is insensitive to users with different tastes*” [2]. We consider that the same user's behavior on different domains reflects different dimensions of their interests, which is why we combine as many domains as possible to fully explore user interests. Since the same user may thus exhibit very different tastes across domains, we adopt the more appropriate first social term.
[2] Ma H, Zhou D, Liu C, et al. Recommender systems with social regularization[C]//Proceedings of the fourth ACM international conference on Web search and data mining. 2011: 287-296.
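A toy NumPy comparison of the two social terms above (helper names and data are hypothetical) makes the difference concrete: with friends of opposite tastes, the first, average-based term can vanish while the pairwise term stays large:

```python
import numpy as np

def soreg_individual(U, sim, friends):
    """Term (1): pull each user toward the similarity-weighted average
    of their friends' embeddings."""
    loss = 0.0
    for i, f in friends.items():
        w = sim[i, f]
        target = (w[:, None] * U[f]).sum(axis=0) / w.sum()
        loss += np.sum((U[i] - target) ** 2)
    return loss

def soreg_pairwise(U, sim, friends):
    """Term (2): penalize the distance to every friend individually."""
    loss = 0.0
    for i, f in friends.items():
        loss += np.sum(sim[i, f][:, None] * (U[i] - U[f]) ** 2)
    return loss
```

For a user at the origin with two equally similar friends at $+v$ and $-v$, term (1) is zero (the friend average cancels out) while term (2) penalizes both distances, illustrating how the two regularizers treat users with diverse tastes differently.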
**Q1:** Whether the proposed framework can be extended to more realistic company-level applications.
**Response:** Yes, our approach fits real-world scenarios and can be modified to better suit company-level applications. In real applications, there are many similar setups, e.g., between different recommendation scenarios on online platforms, where users partially overlap and the items often differ. Our assumptions are consistent with real scenarios, while some modifications to our approach are needed to accommodate company-level applications. Specifically, the intra-domain GAT training and fine-tuning phase of the horizontal FL setting needs to be changed to a centralized training process undertaken by a domain server (e.g., a company). Knowledge transfer between domains (company to company) is still well protected with DP.
**Q2:** Why cannot we adopt stop-transfer method to avoid such complex attention generation progress?
**Response:** Because it is hard to determine whether to stop the transfer when there are multiple source domains. In simpler cross-domain or transfer learning settings, the approach you propose is indeed a way to avoid negative transfer.
However, during CDR training, especially in our hypothetical BD-CDR scenario (more than three domains, with multiple source domains and one target domain), it is difficult to determine which source domain or domains cause negative transfer. We also need to consider that when a domain joins affects whether it produces negative transfer. For example, suppose the knowledge transfer from domain A to domain B is positive, while re-transferring knowledge from domain A to domain B after aggregating the knowledge of domains C and D is negative. The positive transfer indicates that some information or pattern in domain A benefits domain B; the later negative transfer suggests that after combining the knowledge of domains C and D, the negative knowledge in domain A dominates the transfer process. This shows that relying only on observed positive/negative transfer to decide whether to transfer is very arbitrary; the correct approach is for the target domain to actively filter negative information so as to fully utilize the knowledge of each domain. Our method filters out potentially harmful or conflicting knowledge from source domains and mitigates the issue of negative transfer.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed explanation. I will keep my score. | Summary: The paper proposes a novel federated graph learning framework, FedGCDR, aimed at addressing the challenges of privacy and negative transfer (NT) in Broader-Source Cross-Domain Recommendation (BS-CDR) scenarios. The framework includes two key modules: the positive knowledge transfer module and the positive knowledge activation module. These modules ensure privacy preservation and mitigate NT by employing differential privacy and feature mapping techniques, followed by graph expansion and fine-tuning in the target domain. The framework is validated through extensive experiments on the Amazon dataset, demonstrating its superior performance over existing methods.
Strengths: 1. The paper considers privacy preservation and negative transfer (NT) challenges under a more generic scenario of Broader-Source Cross-Domain Recommendation (BS-CDR).
2. The framework's design is modular, allowing for easy adaptation and potential integration with other recommendation system components.
3. The experiments cover 16 popular domains from the Amazon dataset and demonstrate that FedGCDR outperforms state-of-the-art methods in terms of recommendation accuracy.
Weaknesses: 1. Despite the overall clarity of the writing, there are several clerical mistakes that could easily mislead the reader.
2. While the results on the Amazon dataset are promising, it is unclear how well the model generalizes to other datasets or real-world scenarios with different characteristics.
3. The authors emphasize that the contribution lies in provides high-quality BS-CDR recommendations while safeguarding both user privacy and domain confidentiality. However, the experimental section is completely devoid of any discussion on privacy preservation.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. About the clerical error in writing, such as in line 105, should "privacy of individual users (inter-domain privacy)" be "intra-domain privacy"? The line 166 “we learn learning…”, and the line 584 “FedCT’s HR@10 performance is better on Amazon-4@CDs than on Amazon-8@CDs”. But according to Figure 6, it should be the Amazon-8@CDs that outperform the Amazon-4@CDs for FedCT's HR@10.
2. In Section 3.1, the authors assumed that users are partially overlapping between domains, so how do non-overlapping users perform graph expansion in the target domain?
3. The improvement in recommendation performance of the proposed framework is demonstrated in the experimental section, but why another important contribution on privacy preservation is not discussed?
4. Does FedGCDR-DP in Tables 1 and 2 refer to another variant method? What is the difference with the proposed FedGCDR? This is not explained relevantly in the paper.
5. Are the Amazon-4, Amazon-8 shown in Table 4 randomly divided? Or is there another domain selection strategy?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes. The authors have addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your valuable comments and suggestions. We hope our response addresses your concerns.
**W1&Q1:** Writing errors.
**Response:** Thank you. We agree and **will correct the errors you have raised, scrutinize the paper, and fix any typos during the revision process.**
| Line | Writing error | Revised version |
| --- | --- | --- |
| 105 | privacy of either individual users (inter-domain privacy) | privacy of either individual users (intra-domain privacy) |
| 116 | we learn learning a function to estimate the scores | we learn a function to estimate the scores |
| 584-587 | The slight difference is that FedCT’s HR@10 performance is better on Amazon-4@CDs than on Amazon-8@CDs. | Compared to Figure 6, the slight difference is that FedCT’s HR@10 performance is better on Amazon-8@CDs than on Amazon-4@CDs. |
**W2**: How well does the model generalize to other datasets or real-world scenarios with different characteristics?
**Response**: We agree that it is important to verify the generalization performance of the model on other datasets. Thus, we conduct new experiments on the Douban dataset to validate the generalization capability of the proposed method. As shown in Table u2 in the **Global Rebuttal** PDF file, our method yields both optimal and sub-optimal results on almost all metrics. **We will add these new results in the updated version**. For details, please refer to the **Global Rebuttal** with its PDF attachment.
**W3&Q3:** The experimental section is completely devoid of any discussion on privacy preservation.
**Response:** In the paper's previous version, we did not detail privacy-protection content in the **experimental section** due to space constraints; instead, we added it to the **Appendix**. In accordance with previous work on federated recommendation, we have provided a full analysis and discussion of privacy protection in both theoretical and experimental aspects. For details, please refer to **Appendix A Privacy analysis** for the theoretical proof and **Appendix D.3 Privacy budget** for the experiments on the privacy budget in the paper's previous version.
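For readers unfamiliar with the mechanism, the standard (ε, δ)-DP Gaussian mechanism calibrates its noise scale from the privacy budget as σ = √(2 ln(1.25/δ))·Δ₂/ε. The sketch below is the textbook form, not the paper's specific instantiation:

```python
import numpy as np

def gaussian_mechanism(x, sensitivity, eps, delta, rng=None):
    """Textbook (eps, delta)-DP Gaussian mechanism: add N(0, sigma^2)
    noise with sigma = sqrt(2 ln(1.25/delta)) * sensitivity / eps."""
    rng = rng or np.random.default_rng(0)
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / eps
    return x + rng.normal(0.0, sigma, size=np.shape(x)), sigma
```

A larger privacy budget ε yields a smaller σ, i.e., less noise and higher utility, which is exactly the trade-off a privacy budget study measures.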
**Q2:** How do non-overlapping users perform graph expansion in the target domain?
**Response:** Non-overlapping users perform graph expansion with a set of Gaussian noise matrices. In lines 197 through 198 of the paper's previous version, we explained the case where a user has no corresponding ratings in one of the source domains in the graph expansion strategy: “*It is worth noting that for source domains where a user has no rating, $X_{S_i}$* *is a Gaussian noise matrix.”* Therefore, for non-overlapping users in the target domain, we conduct the graph expansion with $M$ Gaussian noise matrices (assuming $M$ source domains).
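The expansion rule quoted above can be sketched as follows (function and variable names are hypothetical): one feature block per source domain, with a Gaussian noise block standing in wherever the user has no ratings.

```python
import numpy as np

def expand_features(user_id, source_embs, emb_dim, rng=None):
    """Stack one feature block per source domain; domains where the
    user has no ratings contribute a Gaussian noise block instead."""
    rng = rng or np.random.default_rng(0)
    blocks = []
    for dom in source_embs:                           # M source domains
        if user_id in dom:
            blocks.append(dom[user_id])               # transferred embedding
        else:
            blocks.append(rng.normal(size=emb_dim))   # noise placeholder
    return np.stack(blocks)                           # shape: (M, emb_dim)
```

A fully non-overlapping user thus receives $M$ noise blocks, matching the rebuttal's description.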
**Q4:** Does FedGCDR-DP in Tables 1 and 2 refer to another variant method? What is the difference with the proposed FedGCDR?
**Response:** Yes, FedGCDR and FedGCDR-DP are two variants of our proposed method: FedGCDR does not add Gaussian noise in the knowledge transfer module, while FedGCDR-DP is the complete implementation of the method integrating Gaussian noise. Introducing these two variants makes it possible to compare the loss of accuracy due to the addition of inter-domain privacy protection. Thanks for this valuable comment, and **we will add these descriptions to the experimental section in the paper's updated version.**
**Q5:** Domain selection strategy.
**Response:** The basis of our domain selection strategy is the amount of data before performing data filtering. Thus, we sorted the domains contained in the Amazon dataset based on the amount of data in descending order and selected the top 16 domains. Similarly, Amazon-4 and Amazon-8 were selected accordingly. The only exception is that we prioritized the Movie domain, which has a relatively small amount of source data, based on popularity. **We will add this content to the appendix in paper’s updated version.**
---
Rebuttal Comment 1.1:
Comment: Dear Reviewers,
As the deadline for the discussion approaches, we are happy to address any further questions or provide additional clarification if needed. Thank you.
---
Rebuttal Comment 1.2:
Comment: Thank you for your thorough and detailed responses to the reviews. Your explanations have effectively addressed my concerns, particularly regarding the generalization of the model, the domain selection strategy, and the distinctions between the variant methods. I appreciate the additional experiments and clarifications you've included in the updated version of the paper, which have significantly strengthened the overall quality of this work. In light of these improvements, I will be revising my score accordingly.
---
Reply to Comment 1.2.1:
Comment: Thank you. We appreciate your response, and we are glad that our rebuttal is reassuring. | Summary: To solve the privacy issue and negative transfer phenomenon in the cross-domain recommendation, the authors propose a novel framework named FedGCDR. Following the HVH pipeline, two key modules collaboratively transfer positive knowledge and filter the negative interference from source domains. In the experimental part, real world data in sixteen domains validated the model's performance.
Strengths: 1. Novelty. The dynamic attention mechanism in the end-to-end framework ensures an automated domain filtering mechanism. The HVH pipeline makes training in multi-domain scenarios efficient and, coupled with differential privacy, ensures the privacy of the training process.
2. Theory. This paper adopts a Gaussian mechanism guarantee on the intermediate embeddings to defend against inference attacks and proves its validity theoretically.
3. Experiments. Based on the Amazon dataset, the authors carried out detailed experiments and analyzed how well various methods cope with negative transfer from two perspectives, soft negative transfer and hard negative transfer, thus verifying the superiority of the proposed method. Beyond verifying the effectiveness of the two modules, the detailed ablation experiments fully reveal the influence of the two modules on domains of different data quality, which is very convincing.
Weaknesses: 1. The introduction and related works sections of the paper introduce two pivotal concepts: intra-domain privacy and inter-domain privacy. However, the definitions provided for these concepts appear to be inconsistent, which may obscure the paper's motivation regarding privacy concerns. For the paper to effectively convey its contributions and significance, it is crucial that these two concepts are clearly and uniformly defined.
2. The authors have chosen to emphasize the "Broader-Domain CDR" scenario, which involves more than three domains when defining the problem. While this distinction is noted, it is not immediately clear why the term "multi-domain CDR" is not used to introduce the framework, as it seems to be a more commonly accepted term in the literature.
Technical Quality: 3
Clarity: 3
Questions for Authors: In practical application, heterogeneity exists between different domains, and it may not be appropriate to use the same embedding dimension. Is there a possible solution?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Please refer to the weak points.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive review and valuable questions that have helped us improve our work.
**W1:** The definitions provided for the intra-domain privacy and inter-domain privacy appear to be inconsistent.
**Response:** We agree, and we **will give a unified and complete definition in the introduction section of the paper’s updated version.**
**W2:** Why emphasize the concept of ‘Broader-Domain CDR’.
**Response:** Because solving the cross-domain recommendation problem with more than three participating domains is very important for better mining user preferences. Existing work in the area of cross-domain recommendation tends to focus on only two domains. These works often do not generalize well to multi-domain tasks. Some studies have begun to focus on multi-domain knowledge transfer, but they are also limited to only three domains [1]. We believe that these studies do not fully reflect the nature of the multi-domain setting, while paying insufficient attention to the privacy and negative transfer challenges. These two challenges become more severe as the number of domains grows (lines 44 through 55). We therefore emphasize a number of domains greater than three in the article to distinguish our method from previous works. As shown in Table 2 of the paper’s previous version, our methods (FedGCDR, FedGCDR-DP) outperform all baselines under the settings of 4 domains, 8 domains, and 16 domains.
[1] Liu W, Chen C, Liao X, et al. Federated Probabilistic Preference Distribution Modelling with Compactness Co-Clustering for Privacy-Preserving Multi-Domain Recommendation[C]//IJCAI. 2023: 2206-2214.
**Q1:** Heterogeneity of embedding dimension exists between different domains.
**Response:** To address the heterogeneity of embedding dimensions, we see two possible approaches. 1. Aligning the attention layers of different domains. Since our method does not directly transmit the original embedding, each domain can choose the appropriate embedding dimension according to its own characteristics. Intermediate embeddings of the same dimension can be obtained by aligning the weight matrices of the attention layers, thus realizing knowledge transfer. 2. Aligning dimensions by mapping functions. Before knowledge transfer, we first map the knowledge vectors of the current domain to the feature space of the target domain through an MLP. The embedding dimensions of the source and target domains can be aligned by changing the input dimension of the MLP. | Rebuttal 1:
Rebuttal: We sincerely thank all the reviewers for their valuable comments and suggestions, which are crucial for improving our work. Here we carefully respond to your questions point by point; more details of the responses are **in the PDF file** at the bottom of this **Author Rebuttal.**
**1:** How well the model generalizes to other datasets or real-world scenarios with different characteristics.
**Response:** As per your requests, we conducted new experiments on the Douban dataset to validate the generalizability of our approach. The dataset information (Table u1) and experimental results (Table u2) are available in the rebuttal PDF file.
- *Experimental settings:* We filtered users and items in the dataset with a number of interactions less than 5. Since the Douban dataset contains only three domains, we set them as target domains for the experiments in turn.
- *Results:* As shown in Table u2, first, our method outperforms all the baselines in almost all metrics, except NDCG@5 on the Movie domain. For this only exception, the NDCG@5 metric, our method is the second best. Although **Single domain** is better than our methods on NDCG@5, it has poor performance on the other three metrics (HR@5, HR@10, NDCG@10) because it cannot leverage knowledge from source domains. Second, for the CDR scenario, our method demonstrated its superiority compared to other CDR methods. Third, the inclusion of Gaussian noise for the purpose of preserving inter-domain privacy does not always have a side effect on model performance (e.g., in the Book domain, FedGCDR-DP outperforms FedGCDR on the metrics HR@5, NDCG@5, and NDCG@10); we believe that the inclusion of noise benefits the model's attention mechanism, enhancing its ability to filter out noise and negative knowledge. In conclusion, the experimental results demonstrate that FedGCDR significantly outperforms state-of-the-art methods.
Pdf: /pdf/2de6eafc9176f7081a111afac6ba40ce119fa96b.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
MagR: Weight Magnitude Reduction for Enhancing Post-Training Quantization | Accept (poster) | Summary: - The authors propose an optimization-based preprocessing technique called Weight Magnitude Reduction (MagR) to improve the performance of post-training quantization (PTQ) for large language models (LLMs). They motivate their method from previous work, showing that linear transformation of weights can render more quantization-friendly, allowing for sub-4bit quantization. However, such methods normally require a post-processing step during inference to undo that linear transformation, introducing a run-time overhead.
- Instead, they propose a non-linear transformation without needing an additional post-processing step. That transformation aims to minimise the $\ell_{\infty}$ norm, or the maximum value of the weights per channel or column while preserving the layer's output. By making the weight distribution more compact around zero and removing outliers, they are able to quantize weights to 2 or 3 bits with little accuracy drop compared to competing methods.
- They formulate the optimization as an $\ell_{\infty}$ -regularisation and solve it using proximal gradient descent. In fact, the problem reduces to computing the projection of weight vectors onto the $\ell_1$ ball, for which there are several efficient algorithms available.
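For concreteness, the ℓ1-ball projection mentioned above admits an efficient sort-based routine. The sketch below is an illustrative NumPy implementation of that classic algorithm (Duchi et al., 2008), not the authors' released code:

```python
import numpy as np

def project_l1_ball(v, z=1.0):
    """Euclidean projection of v onto the l1-ball {x : ||x||_1 <= z}.

    Sort-based O(n log n) routine; an illustrative sketch, not the
    paper's released implementation.
    """
    if np.abs(v).sum() <= z:
        return v.copy()                  # already inside the ball
    u = np.sort(np.abs(v))[::-1]         # magnitudes, descending
    cssv = np.cumsum(u) - z
    ind = np.arange(1, len(u) + 1)
    rho = ind[u - cssv / ind > 0][-1]    # last index where the KKT condition holds
    theta = cssv[rho - 1] / rho          # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

w = project_l1_ball(np.array([0.5, -1.0, 2.0]), z=1.0)
print(np.abs(w).sum())  # 1.0 -- the projection lands on the boundary of the ball
```

Via the Moreau decomposition, the proximal operator of the ℓ∞-norm is `v` minus this projection, which is what makes the ℓ∞-regularized formulation tractable.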
Strengths: - MagR achieves state-of-the-art weight quantization without introducing any additional computation overhead. This is a very important contribution, as all other competing methods with a linear transformation of the weights, such as QuIP, require a linear transformation of the transformer block input/output. Having said this, the inference runtime comparison to QuIP is missing from the experiments section.
- The authors extensively evaluate the methods, evaluating a large corpus of datasets and benchmarks. They also prove that they outperform strong competing methods, such as OmniQuant, in almost all cases.
- The idea of regularizing the weight's $\ell_{\infty}$ norm is quite novel and very effective. In fact, I am surprised this has not been established before in PTQ literature.
Weaknesses: - The paper is very focused on ppl and accuracy results on a wide range of benchmarks and datasets at the cost of other important ablation studies. For example:
- ablation study on the penalty factor $\alpha$ and its impact on the final accuracy.
- Inference runtime comparison is missing compared to competing methods, such as QuIP. In fact, the main claim of the paper is that it is faster than QuIP, but there are no studies supporting it.
- Results with activation quantization beyond FP16 and runtime improvements vs baselines.
- The connection between the rank-deficient feature matrix argument and $\ell_{\infty}$ regularisation requires further theoretical justification. There must be a theoretical guarantee for the optimality of that optimization. In fact, the method would be stronger if it were formulated more generally as an $\ell_p$-constrained problem, with the authors showing that the $\ell_{\infty}$ constraint is the best choice.
Technical Quality: 3
Clarity: 2
Questions for Authors: - Table 1: It is unclear to me how all these numbers are relevant in supporting the method. One can see from the maximum value that all Llama Models are rank-deficient based on the maximum value being below 100%. How are the rest of the statistics informative?
- Line 124-125: Of all the solutions available, why choose the one with the lowest infinity norm? Why is this optimal? What is missing here is the optimality of this norm for quantization purposes. There is, in fact, a discussion about quantization noise as a perturbation bounded by the $\ell_{\infty}$-norm in a paper by M. Alizadeh et al. called "Gradient $\ell_1$ regularization for quantization robustness"
- Table 3: activation quantization bitwidth missing
- Please provide ablation studies about the penalty factor $\alpha$. What is the effect of it not being tiny? How sensitive are the results?
- To fully support the claims of the paper, it would be nice to see an ablation study on other norms as regularization. Otherwise, a theoretical justification is required about the optimality of the $\ell_{\infty}$ -norm.
- Paragraph starting at line 210: it is unclear how you establish the optimality of the shrinking factor $\beta$. In fact, I cannot see why you would not search for the optimal clipping threshold based on minimizing the MSE per column or group. This is a standard method for weight clipping in LLM quantization literature.
- Section 5.3: the section's title is misleading. Maybe rename it to preprocessing runtime? Most commonly, runtime refers to the inference runtime
- A section on inference runtime comparison to QuIP is required. The main claim of the paper is that the method is faster than QuIP due to the lack of transformation, but no experimental evidence is provided to support it.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The paper lacks a thorough discussion of each limitation. The only reference I found was in lines 245-6 about extending MagR into incoherence processing.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive comments! We'll discuss the reference by Alizadeh et al. in the revised paper.
The primary concerns were regarding:
**Weaknesses:**
- **The use of** $\ell_\infty$-**norm**: MagR aims to reduce the range of the pre-trained weights (not the quantization noise), and the $\ell_\infty$-norm naturally serves this purpose best. So the nature of MagR is different from the $\ell_1$-gradient regularization proposed by Alizadeh et al. Given that the quantization step (or float scalar) is linearly proportional to the range of the pre-trained weights for a fixed bit-width (by the definition of the uniform quantizer), **MagR results in a smaller quantization step. This, in turn, leads to a smaller quantization error**. In fact, we can show that the $\ell_2$ quantization error is $O(\delta)$.
Our additional experiments also demonstrate that **the layer-wise quantization errors indeed are reduced by applying MagR**; see the figure in the attached pdf file. Another notable advantage of $\ell_\infty$-norm over the general $\ell_p$-norm is its closed form of proximal operator, which is the core of the algorithm design for MagR.
- **Activation quantization**: By its nature, MagR is designed to improve the performance of weight quantization without introducing inference overhead. So we evaluated our method for only weight quantization. We intend to investigate activation quantization in our future work.
- **Ablation study on $\alpha$**: $\alpha$ is the parameter balancing the tradeoff between the output discrepancy and the max magnitude of the weights. We show the ablation study on $\alpha$ for channel-wise quantization on LLaMA2-7B as below. Note that **the choice of $\alpha$ does not depend on the bit-width** and $\alpha=0.001$ is the best choice for channel-wise quantization.
| Model | $\alpha$ | W/A | Wiki (PPL) | C4 (PPL) |
|----------|-|---------|-------|--------------|
| LlaMa2-7B | 0.005 | 4/16 | 5.84 | 7.55 |
| | **0.001** | 4/16 | **5.70** | **7.28** |
| | 0.0005 | 4/16 | 5.72 | 7.29 |
| | 0.0001 | 4/16 | 5.78 | 7.35 |
| | 0.00001 | 4/16 | 5.81 | 7.40 |
| | | | | |
| | 0.005 | 3/16 | 6.64 | 8.74 |
| | **0.001** | 3/16 | **6.41** | **8.23** |
| | 0.0005 | 3/16 | 6.49 | 8.38 |
| | 0.0001 | 3/16 | 6.83 | 8.79 |
| | 0.00001 | 3/16 | 7.08 | 9.19 |
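As a side note on the claim above that the quantization step is linear in the weight range: for a fixed bit-width $b$, a symmetric uniform quantizer has step $\delta = 2\|w\|_\infty/(2^b-1)$, so halving the maximum magnitude halves the step. A minimal sketch (a generic symmetric uniform quantizer, not the paper's exact implementation):

```python
import numpy as np

def step_size(w, bits):
    # Symmetric uniform quantizer: the step is the weight range divided by
    # the number of levels, i.e. linear in ||w||_inf.
    return 2 * np.abs(w).max() / (2**bits - 1)

w = np.linspace(-1.0, 1.0, 100)
for bits in (2, 3, 4):
    # Halving the range (as MagR's magnitude reduction aims to do) halves the step.
    print(bits, step_size(w, bits), step_size(0.5 * w, bits))
```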
**Questions:**
- **Fraction ranks in Table 1**: The statistics in Table 1 not only show that all feature matrices are nearly rank-deficient, but also give us an idea of how low-rank they are and the overall distribution of their fraction ranks across the architectures. Intuitively, MagR works better for a low-rank feature matrix because its kernel space is larger (of a higher dimension), and MagR can then potentially produce weights with a smaller $\ell_\infty$-norm.
- **The choice of $\beta$**: Thanks for your suggestion. Since our main focus is on MagR preprocessing to reduce the quantization error, we prioritized simplicity and efficiency of the quantization algorithm by fixing the $\beta$, which has proven to be effective. We agree that optimizing the quantization error with respect to $\beta$ could potentially further improve the performance, and we intend to investigate this in the revision.
- **Preprocessing runtime**: Thanks for your suggestion, we will rename it.
- **Inference time**: MagR essentially replaces the original pre-trained weights with a new set of weights that have smaller magnitudes prior to actual quantization, without sacrificing the original accuracy or altering the architecture. Consequently, our MagR+OPTQ achieves **exactly the same inference efficiency** as OPTQ. This is immediately supported by the widely-used inference kernel from the AutoGPTQ library. In contrast, at inference time, QuIP requires performing a random linear transformation on the activations before multiplying the quantized weights. Since the code for QuIP's inference is not released, we cannot compare them directly. But according to QuIP's own report, QuIP's inference speed is at least **1.5 times slower than OPTQ** for the OPT-66B model (81 ms vs 53 ms).
---
Rebuttal Comment 1.1:
Title: Reviewed score
Comment: Having reviewed the responses, ablation studies and latest results in the conversation with reviewer jujD, I have decided to increase my score to strong accept, provided the additional results and analyses are included in the camera-ready version.
From my initial review, I found the method itself innovative but had concerns about the presentation and certain claims. The authors have been diligent in addressing the points raised by me and the other reviewers.
---
Reply to Comment 1.1.1:
Title: Thanks for the reply
Comment: We sincerely appreciate your valuable feedback and your recognition of the novelty of our work!
Thanks again! | Summary: The paper proposes a novel approach called Weight Magnitude Reduction (MagR) to improve the performance of PTQ for LLM. The MagR reduces the magnitude of the weights in PTQ. The experiments demonstrate the effectiveness of MagR.
Strengths: 1. The idea is clear.
2. The paper is easy to read (except typos/errors).
Weaknesses: 1. There exist some typos/grammatical errors in the paper and should be revised.
2. The presentation is not clear. For example, this paper aims to compress LLM parameters using PTQ; however, the title and abstract do not reflect the contribution to LLM compression. In the abstract, the authors claim MagR can diminish the maximum magnitude of the weights and smooth out outliers; however, the title only mentions Weight Magnitude Reduction.
3. Provide a more thorough discussion of the generalization ability, robustness, and potential applications of the proposed approach.
4. The format of references is not correct, i.e., [1], [3], [9],[14], etc.
5. The main limitation of this paper is that the proposed method lacks theoretical analysis.
6. While the abstract provides an overview of the proposed method, some aspects of the methodology could benefit from further elaboration in the main paper. Providing step-by-step explanations and intuitive visualizations for key components, such as the $\ell_\infty$ norm and the $\ell_\infty \to \ell_1$ reduction, would enhance the reader's understanding.
7. The paper claims that the method outperforms state-of-the-art methods, but a more comprehensive comparative analysis is needed. Detailed comparisons with other existing approaches, along with discussions about the reasons behind the performance differences, would strengthen the argument for the superiority.
8. Exploring the reasons behind the success of these techniques and providing intuitive explanations would contribute to the overall scientific contribution of the work.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The paper contains numerous hyperparameters, such as \(\alpha\). Including more ablation studies would enhance the readers' understanding.
2. In Table 3, why did MagR+OPTQ not achieve better results compared to peer competitors?
3. How can the impact of outliers on PTQ for LLMs be reduced?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NO
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review! We'll add ablation study and fix the typos and incorrect format as suggested, and include the details for deriving Alg. 1.
Other concerns were regarding:
**Weaknesses:**
- **Theoretical analysis**: Our new results establish that:
- The layer-wise $\ell_2$ quantization error is $O(\delta)$, where $\delta$ is the quantization step of the uniform quantizer. This motivates our preprocessing method based on $\ell_\infty$-norm of the weights since minimizing $\ell_\infty$-norm amounts to minimizing $\delta$.
- We can also prove the convergence rate of the proposed MagR algorithm for the $\ell_\infty$-regularization. Specifically, $f(w^k) - f(w^*) = O(1/k) \to 0$ as $k\to \infty$, where $f(w) = \frac{1}{2}|| Xw - X\hat{w}||^2 + \alpha ||w||_{\infty}$ is the objective function, $w^k$ is the $k$-th iterate generated by MagR, and $w^*$ is the true minimizer.
- The layer-wise output $\ell_2$-error after MagR preprocessing obeys $|| Xw^* - X\hat{w}|| = O(\sqrt{\alpha})$, where $\alpha>0$ is the penalty parameter.
- **Why MagR works:**
- MagR effectively reduces the range of the pre-trained weights by employing $\ell_\infty$-minimization, as illustrated by Figure 1. Given that the quantization step (or float scalar) is linearly proportional to the range of the pre-trained weights for a fixed bit-width (by the definition of uniform quantizer), **MagR results in a smaller quantization step. This, in turn, leads to a smaller quantization error**. In fact, given the activations $X\in\mathbb{R}^{m\times n}$, quantizer $\mathcal{Q}$ with quantization step $\delta>0$, pre-trained weights $\hat{w}\in\mathbb{R}^n$, the following analysis shows that **the $\ell_2$ quantization error is linear in $\delta$**:
$$|| Xw_q - X\hat{w}|| \leq ||X \mathcal{Q}(\hat{w}) - X\hat{w}|| \leq \sigma_{\max}(X) ||\mathcal{Q}(\hat{w}) - \hat{w}|| \leq \frac{\sigma_{\max}(X) \sqrt{n}}{2}\delta.$$
Our additional experiments also demonstrate that **the layer-wise quantization errors indeed are reduced by applying MagR on randomly sampled layers; see the figure in the attached pdf file**.
- We also note that MagR can preserve all the layers' outputs (before performing quantization), ensuring that the pre-trained model's performance remains unaffected. **This is crucial in the PTQ setting, as the goal is to search for the quantized model only within the local neighborhood of the pre-trained model**. The following table shows that MagR preprocessing indeed approximately maintains the perplexity (ppl) of the pre-trained model with minor degradation. Since we are minimizing the regularization $ \frac{1}{2}|| Xw - X\hat{w}||^2 + \alpha ||w||_{\infty}$ for preprocessing, for the minimizer $w^*$ it holds that $|| Xw^* - X\hat{w}|| = O(\sqrt{\alpha})\to 0$, as $\alpha \to 0$. When choosing a small $\alpha$ (equivalent to imposing large penalty on the fidelity term), $|| Xw^* - X\hat{w}||$ will be small but not 0. This explains why the model performance degrades slightly after MagR (before quantization).
| Model | Method | Wikitext2 (PPL) | C4 (PPL) |
|----------|-|----------------|--------------|
| LlaMa2-7B | Original | 5.47 | 6.97 |
| | After MagR | 5.52 | 7.04 |
| LlaMa2-13B | Original | 4.88 | 6.46 |
| | After MagR | 4.92 | 6.52 |
| LlaMa2-70B | Original | 3.31 | 5.52 |
| | After MagR | 3.35 | 5.56 |
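The objective and iteration analyzed above can be sketched end to end. The toy NumPy implementation below runs proximal gradient descent on $f(w) = \frac{1}{2}\|Xw - X\hat{w}\|^2 + \alpha\|w\|_\infty$, computing the $\ell_\infty$ proximal operator via the Moreau decomposition; the data, dimensions, and hyperparameters are synthetic stand-ins, not the paper's setup:

```python
import numpy as np

def project_l1_ball(v, z):
    # Sort-based projection onto {x : ||x||_1 <= z} (Duchi et al., 2008).
    if np.abs(v).sum() <= z:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    cssv = np.cumsum(u) - z
    ind = np.arange(1, len(u) + 1)
    rho = ind[u - cssv / ind > 0][-1]
    return np.sign(v) * np.maximum(np.abs(v) - cssv[rho - 1] / rho, 0.0)

def prox_linf(v, lam):
    # Moreau decomposition: prox of lam*||.||_inf is v minus the
    # projection of v onto the l1-ball of radius lam.
    return v - project_l1_ball(v, lam)

def magr_sketch(X, w_hat, alpha=0.1, iters=1000):
    """Proximal gradient descent on 0.5*||Xw - X w_hat||^2 + alpha*||w||_inf."""
    H = X.T @ X
    t = 1.0 / np.linalg.eigvalsh(H).max()   # step = 1/L, L = gradient Lipschitz constant
    w = w_hat.copy()
    for _ in range(iters):
        w = prox_linf(w - t * (H @ (w - w_hat)), t * alpha)
    return w

rng = np.random.default_rng(0)
# Rank-deficient synthetic "features": 64 samples, 32 columns, rank <= 16.
X = rng.standard_normal((64, 16)) @ rng.standard_normal((16, 32)) / 16
w_hat = rng.standard_normal(32)             # stand-in for pre-trained weights
w = magr_sketch(X, w_hat)
print(np.abs(w).max() < np.abs(w_hat).max())        # True: max magnitude strictly reduced
print(np.linalg.norm(X @ w - X @ w_hat) < 2.0)      # True: output discrepancy stays small
```

The run illustrates the two properties discussed above: monotone descent gives $f(w_k) \le f(\hat{w}) = \alpha\|\hat{w}\|_\infty$, so the ℓ∞-norm strictly decreases, while the output discrepancy stays $O(\sqrt{\alpha})$.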
**Questions:**
- **Ablation studies**: $\alpha$ is the parameter balancing the tradeoff between the output discrepancy and the max magnitude of the weights. We show the ablation study on $\alpha$ for channel-wise quantization on LLaMA2-7B as below. Note that **the choice of $\alpha$ does not depend on the bit-width**, and $\alpha=0.001$ works well for all channel-wise quantization.
| Model | $\alpha$ | W/A | Wiki (PPL) | C4 (PPL) |
|----------|-|---------|-------|--------------|
| LlaMa2-7B | 0.005 | 4/16 | 5.84 | 7.55 |
| | **0.001** | 4/16 | **5.70** | **7.28** |
| | 0.0005 | 4/16 | 5.72 | 7.29 |
| | 0.0001 | 4/16 | 5.78 | 7.35 |
| | 0.00001 | 4/16 | 5.81 | 7.40 |
| | | | | |
| | 0.005 | 3/16 | 6.64 | 8.74 |
| | **0.001** | 3/16 | **6.41** | **8.23** |
| | 0.0005 | 3/16 | 6.49 | 8.38 |
| | 0.0001 | 3/16 | 6.83 | 8.79 |
| | 0.00001 | 3/16 | 7.08 | 9.19 |
| Model | $\beta$ | W/A | Wiki (PPL) | C4 (PPL) |
|----------|-|---------|-------|--------------|
| LlaMa2-7B | 1 | 3/16 | 6.43 | 8.33 |
| | **0.9** | 3/16 | 6.41 | **8.23** |
| | 0.85 | 3/16 | 6.48 | 8.39 |
| | 0.8 | 3/16 | 7.12 | 9.46 |
| | | | | |
| | 1 | 2/16 | 16.99 | 24.12 |
| | 0.9 | 2/16 | 20.88 | 31.78 |
| | 0.85 | 2/16 | 16.76 | 24.45 |
| | **0.8** | 2/16 | 16.73 | **23.73** |
- **Why MagR+OPTQ does not achieve better results in Table 3**: The reasoning tasks in Table 3 require diverse knowledge and skills. Quantization may impede the model's ability to excel across all tasks. Therefore, a quantized model with better perplexity does not necessarily imply higher multi-task accuracy, especially when the perplexity values are close.
- **Impact of outliers on PTQ**: Outliers have a much larger magnitude than the other weights, which results in a large quantization step and a large quantization error. MagR addresses this issue by minimizing the $\ell_\infty$-norm to reduce the quantization step.
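The effect described above can be reproduced with a small synthetic example: a single outlier inflates the step of a symmetric uniform quantizer (a generic sketch, not the paper's exact quantizer) and with it the quantization error:

```python
import numpy as np

def uniform_quantize(w, bits=4):
    # Symmetric uniform quantizer: the step is set by the largest magnitude.
    delta = 2 * np.abs(w).max() / (2**bits - 1)
    return delta * np.round(w / delta), delta

rng = np.random.default_rng(0)
w = rng.standard_normal(1024) * 0.1
w_outlier = w.copy()
w_outlier[0] = 5.0                       # inject one large-magnitude outlier

for name, vec in [("no outlier", w), ("with outlier", w_outlier)]:
    q, delta = uniform_quantize(vec)
    rmse = np.sqrt(np.mean((q - vec) ** 2))
    print(f"{name}: step={delta:.4f}, rmse={rmse:.4f}")
```

With the outlier present, most weights fall inside a single quantization bin and are rounded to zero, which is exactly the failure mode that ℓ∞-minimization targets.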
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. After reading it and other reviewer's comments, I will raise my score.
---
Reply to Comment 1.1.1:
Title: Thanks for your reply
Comment: We sincerely appreciate your valuable feedback and thank you for raising your score! | Summary: Authors propose an optimization-based preprocessing technique called MagR to enhance the performance of post-training quantization.
MagR adjusts weights by solving an $\ell_\infty$-regularized optimization problem, reducing the maximum magnitude and smoothing out outliers.
As a nonlinear transformation, MagR eliminates the need for additional post-processing, thus avoiding overhead during inference.
Experiments demonstrate that MagR achieves state-of-the-art performance on the Llama family of models, such as significantly reduced perplexity on Wikitext2 for the LLaMA2-70B model.
Strengths: 1.The experiment results are good.
2. The idea is novel, which is using preprocessing technology before the quantization process.
Weaknesses: In fact, I like the work, but not from the authors' viewpoint. I think this work improves the generalization of the model via the norm. If the paper presented the work from this view, I would be more inclined to accept it.
1. Some values lack explanation, e.g., "the singular values of the feature matrix are less than 0.01 times the maximum singular value."
2. The applicability and the nature of the method should be analyzed and discussed.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. In Section 4.1, the singular values of the feature matrix are less than 0.01 times the maximum singular value. How is 0.01 calculated? If this value is changed, relaxing the constraint, will the results change?
2. If the feature matrix is not actually rank-deficient, could the performance of the MagR method potentially suffer?
3. Proximal gradient descent may require more computational resources.
4. The model performance after preprocessing and before quantization should be presented. I wonder whether the nature of this method is to improve the generalizability of the model before quantization.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The paper lacks a discussion of the nature of the preprocessing, i.e., what the preprocessing does for quantization.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review! The primary concerns were regarding:
**Questions:**
- **Rank deficiency of feature matrix**: Section 4.1, specifically Table 1, demonstrates that the feature matrices across all the layers of the LLaMA family models have very small singular values, less than 0.01 times the maximum singular value. This indicates that the feature matrices are **approximately** rank-deficient, which ensures that we can approximately preserve the original model's performance after applying MagR. It is important to note that **exact** rank-deficiency is not a requirement for MagR to work. Notably, this approximate rank-deficiency is not unique to the LLaMA models; it also applies to other models like OPT, as discussed in the QuIP paper. The threshold value of 0.01 is neither a calculated parameter nor an algorithmic hyperparameter. Changing this threshold will not alter the feature matrices or affect the quantization performance of the proposed method.
- **Efficiency of proximal gradient descent**: The preprocessing times on a single A100 GPU are 15 min for Llama2-7B, 30 min for 13B, and 3.5 hr for 70B. We believe that MagR can be readily applied to larger LLMs with 100+B parameters.
- **Model performance after MagR**: The table at the bottom shows that the perplexity values degrade only slightly after MagR. But since the range of the weights is reduced (Figure 1), we are able to use a smaller quantization step for the quantizer, which gives a smaller quantization error (see the table below) and improves the quantization performance.
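The approximate rank-deficiency invoked in the first response above can be checked numerically. The sketch below builds a synthetic feature matrix with a geometrically decaying spectrum (stand-in data, not actual LLaMA activations) and measures the fraction rank under the 0.01 threshold:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
# Synthetic feature matrix X = U diag(s) V^T with singular values
# decaying geometrically from 1 down to 1e-4 of the maximum.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.geomspace(1.0, 1e-4, n)
X = (U * s) @ V.T

sv = np.linalg.svd(X, compute_uv=False)
# Fraction rank: share of singular values above 0.01 of the maximum.
fraction_rank = np.mean(sv > 0.01 * sv.max())
print(fraction_rank < 1.0)  # True: the matrix is approximately rank-deficient
```

Changing the 0.01 threshold moves the reported fraction rank, but, as the response notes, it does not change the feature matrix itself or the quantization procedure.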
**Limitations**:
- **Why MagR works:**
- MagR effectively reduces the range of the pre-trained weights by employing $\ell_\infty$-minimization, as illustrated by Figure 1. Given that the quantization step (or float scalar) is linearly proportional to the range of the pre-trained weights for a fixed bit-width (by the definition of uniform quantizer), **MagR results in a smaller quantization step. This, in turn, leads to a smaller quantization error**. In fact, given the activations $X\in\mathbb{R}^{m\times n}$, quantizer $\mathcal{Q}$ with quantization step $\delta>0$, pre-trained weights $\hat{w}\in\mathbb{R}^n$, the following analysis shows that **the $\ell_2$ quantization error is linear in $\delta$**:
$$|| Xw_q - X\hat{w}|| \leq ||X \mathcal{Q}(\hat{w}) - X\hat{w}|| \leq \sigma_{\max}(X) ||\mathcal{Q}(\hat{w}) - \hat{w}|| \leq \frac{\sigma_{\max}(X) \sqrt{n}}{2}\delta.$$
Our additional experiments also demonstrate that **the layer-wise quantization errors indeed are reduced by applying MagR on randomly sampled layers; see the figure in the attached pdf file or the table below**.
| Model (4-bit) | Layer | RMSE With MagR | RMSE Without MagR |
|----------|-|----------------|--------------|
| LlaMa2-7B | 14 | 0.1327 | 0.1575 |
| | 65 | 0.1421 | 0.1616 |
| | 121 | 0.1622 | 0.1879 |
| | 184 | 0.2025 | 0.2319 |
| | 217 | 0.2198 | 0.2542 |
| LlaMa2-13B | 14 | 0.1289 | 0.1518 |
| | 88 | 0.1647 | 0.1892 |
| | 145 | 0.1836 | 0.2127 |
| | 215 | 0.2098 | 0.2435 |
| | 271 | 0.2245 | 0.2618 |
- We also note that MagR can preserve all the layers' outputs (before performing quantization), ensuring that the pre-trained model's performance remains unaffected. **This is crucial in the PTQ setting, as the goal is to search for the quantized model only within the local neighborhood of the pre-trained model**. The following table shows that MagR preprocessing indeed approximately maintains the perplexity (ppl) of the pre-trained model with minor degradation. Since we are minimizing the regularization $ \frac{1}{2}|| Xw - X\hat{w}||^2 + \alpha ||w||_{\infty}$ for preprocessing, for the minimizer $w^*$ it holds that $|| Xw^* - X\hat{w}|| = O(\sqrt{\alpha})\to 0$, as $\alpha \to 0$. When choosing a small $\alpha$ (equivalent to imposing large penalty on the fidelity term), $|| Xw^* - X\hat{w}||$ will be small but not 0. This explains why the model performance degrades slightly after MagR (before quantization).
| Model | Method | Wikitext2 (PPL) | C4 (PPL) |
|----------|-|----------------|--------------|
| LlaMa2-7B | Original | 5.47 | 6.97 |
| | After MagR | 5.52 | 7.04 |
| LlaMa2-13B | Original | 4.88 | 6.46 |
| | After MagR | 4.92 | 6.52 |
| LlaMa2-70B | Original | 3.31 | 5.52 |
| | After MagR | 3.35 | 5.56 | | Summary: This paper introduces Weight Magnitude Reduction (MagR), a technique designed to smooth out outliers before LLM quantization. MagR adjusts pre-trained floating-point weights by solving an ℓ∞-regularized optimization problem. This preprocessing step reduces the maximum weight magnitudes, making the LLMs more suitable for quantization and resulting in better perplexity/accuracy results compared to models without preprocessing. Importantly, the MagR technique does not introduce any additional computational overhead during inference.
Strengths: 1) Since MagR is a technique for preprocessing LLM weights before quantization, it can be used in conjunction with other quantization methods (e.g. OPTQ).
2) MagR introduces proximal gradient descent steps, and it addresses these through layer-wise optimization using the Hessian, resulting in a short quantization runtime, similar to OPTQ's.
3) This method does not require additional operations for scaling activation, as needed in AWQ and QuIP, thereby avoiding extra overhead during inference.
Weaknesses: 1) MagR does not show significant improvement in perplexity/accuracy over the previous method, QuIP.
Additionally, DecoupleQ [1] presents strong results for 2-bit quantization. While I understand that DecoupleQ was released recently, allowing insufficient time for a thorough comparison, I recommend the authors review DecoupleQ.
2) The paper emphasizes that MagR has no inference overhead, unlike AWQ and QuIP, which require scaling activations. However, it does not provide any latency information to highlight the significance of removing this additional inference overhead.
3) The runtime cost comparison between MagR and other methods is insufficient. The paper only provides the runtime of MagR, mentioning roughly that it is half the runtime of OmniQuant on page 8, lines 252-254. Since MagR is a preprocessing method, it is unclear if this comparison holds when considering the entire quantization process (MagR + OPTQ). Additionally, the runtime of QuIP is not addressed.
4) Some prior works have regularized weight distribution to make the model more suitable for quantization [2]. However, this paper does not mention or compare the differences between the proposed method and these earlier approaches.
5) The paper does not include any information about the calibration dataset.
[1] Guo, Yi, et al. "decoupleQ: Towards 2-bit Post-Training Uniform Quantization via decoupling Parameters into Integer and Floating Points." arXiv preprint arXiv:2404.12759 (2024).
[2] Kundu, Arnav, et al. "R2 Loss: Range Restriction Loss for Model Compression and Quantization." arXiv preprint arXiv:2303.08253 (2023).
Technical Quality: 2
Clarity: 2
Questions for Authors: 1) What is the inference latency of the proposed method compared to QuIP?
2) What is the runtime for QuIP and OmniQuant?
3) Why did you not provide perplexity results for the 7B/13B models for QuIP in Table 2?
4) Is QuIP incompatible with group-wise quantization, or can it be adapted to group-wise quantization with minor modifications, as done with MagR and OPTQ?
5) For the results in Table 3, do all the methods use channel-wise quantization, or do some use group-wise quantization?
6) What type of calibration dataset did you use, and how many data points were included for calibration?
7) What happens if you apply Omniquant/QuIP after processing LLMs with MagR?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: The proposed MagR is a promising approach, as it regularizes weight values to produce LLMs that are better suited for quantization. However, this paper lacks sufficient evaluation to convincingly demonstrate the superiority of the proposed method compared to state-of-the-art techniques.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review and for pointing out relevant references. We'll add the references [1,2] and discussions.
The primary concerns were regarding:
**Weaknesses:**
- **MagR vs QuIP, DecoupleQ**:
- It is possible to run additional coordinate descent iterations on top of OPTQ to further improve the perplexity, as shown in the updated results for the 2-bit 70B model. We also include the ppl results on 7B/13B models for QuIP. Our method outperforms QuIP in most cases. Regardless, QuIP trades off inference speed for accuracy, whereas MagR does not.
| Model | Method | Wbits |Wiki (PPL) | C4 (PPL) |
|----------|-|-|---------------|--------------|
| LlaMa2-7B | QuIP | 4 |5.94 | 8.01 |
| | MagR+OPTQ | 4 | 5.70 | 7.28 |
| | QuIP | 3 |6.50 | 8.74 |
| | MagR+OPTQ | 3 | 6.41 | 8.23 |
| | QuIP | 2 |27.13 | 31.33 |
| | MagR+OPTQ | 2 | 16.73 | 23.73 |
||||||
| LlaMa2-13B | QuIP | 4 |5.01 | 6.88 |
| | MagR+OPTQ | 4 | 4.97 | 6.63 |
| | QuIP | 3 |5.34 | 7.34 |
| | MagR+OPTQ | 3 | 5.41 | 7.19 |
| | QuIP | 2 |10.09 | 13.13 |
| | MagR+OPTQ | 2 | 11.14 | 14.45 |
||||||
| LlaMa2-70B | QuIP | 2 |6.33 | 8.94 |
| | MagR+OPTQ | 2 | 5.95 | 8.53 |
- DecoupleQ proposes to use block-wise optimization (similar to OmniQuant) on top of GPTQ to refine the solution. The bottom line is that DecoupleQ is still a quantization method (like GPTQ/OmniQuant), so MagR can be used as preprocessing for DecoupleQ to further improve its performance (MagR + DecoupleQ).
- **Quantization runtime comparison**: We reported the runtime of MagR+OPTQ and MagR+RTN in Table 4. We meant that the entire runtime of MagR+OPTQ is roughly half of Omniquant. The following table shows the quantization runtimes of QuIP and OmniQuant. All the methods were tested on a single NVIDIA A100 GPU.
| Method | LlaMa2-7B | LlaMa2-13B |
|-----------|---------------|--------------|
| QuIP | 54 min | 68 min |
| Omniquant | 73 min | 2 hr |
| OPTQ | 22 min | 40 min |
| MagR+OPTQ | 35 min | 70 min |
- **Prior works on weight regularization**: Thanks for pointing out the reference. The goal of R2 Loss is similar to MagR's, in that it encourages a smaller weight range, but the reference uses traditional end-to-end training: the training is from scratch and the regularization term is added to the original cross-entropy loss. The targets are CNN models. This method cannot be used for LLMs since the pre-training phase is too expensive to repeat. Moreover, their minimization is carried out by the conventional (sub)gradient method, whereas we investigate an efficient algorithm that takes advantage of the proximal operator of the $\ell_\infty$-norm.
- **Calibration set**: The calibration dataset consists of 128 randomly selected 2048-token segments (context length) from WikiText2, which follows the routine of prior PTQ works.
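As an illustration only (the function and variable names below are hypothetical, not the authors' code), drawing 128 random contiguous 2048-token windows from a pre-tokenized corpus might look like:

```python
import numpy as np

def sample_calibration(token_ids, n_samples=128, seqlen=2048, seed=0):
    """Draw random contiguous windows of token ids to use as calibration segments."""
    rng = np.random.default_rng(seed)
    starts = rng.integers(0, len(token_ids) - seqlen + 1, size=n_samples)
    return np.stack([token_ids[s:s + seqlen] for s in starts])

# stand-in token stream; in practice this would be tokenized WikiText2
segments = sample_calibration(np.arange(100_000))
```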
**Questions:**
- **Inference efficiency**: MagR essentially replaces the original pre-trained weights with a new set of weights that have smaller magnitudes prior to actual quantization, without sacrificing the original accuracy or altering the architecture. Consequently, our MagR+OPTQ achieves **exactly the same inference efficiency** as OPTQ. This is immediately supported by the widely-used inference kernel from the AutoGPTQ library. In contrast, at inference time, QuIP requires performing a random linear transformation on the activations before multiplying the quantized weights. Since the code for QuIP's inference is not released, we cannot compare them directly. But according to QuIP's own report, QuIP's inference speed is at least **1.5 times slower than OPTQ** for the OPT-66B model (81 ms vs 53 ms), in addition to the power consumption overhead.
- **Is QuIP incompatible with group-wise quantization?**: We think that QuIP should be compatible with group-wise quantization. QuIP first applies random linear transformations to preprocess the weights and activations and to smooth out outliers, then uses OPTQ to perform the actual quantization. Like OPTQ, it should be compatible with group-wise quantization. But they never reported such results. The drawback of QuIP is that it requires random linear transformations on the feature matrix (so-called incoherence processing) not only in the preprocessing stage, but also in the inference phase.
- **Results in Table 3**: All the results in Table 3 are for channel-wise quantization.
- **Omniquant/QuIP with MagR**: We believe that MagR could enhance the performance of Omniquant or QuIP, but the improvement may not be as significant as with OPTQ. For instance, QuIP's incoherence processing also helps reduce weight magnitude, as reported in the paper. Omniquant uses block-wise minimization and learns the quantization step (weight clipping parameter) via SGD. In contrast, OPTQ is a fast, greedy, gradient-free algorithm. MagR+OPTQ achieves both simplicity and effectiveness, without introducing any inference overhead.
---
Rebuttal 2:
Comment: Thank you for your careful and detailed response.
While I find the concept of MagR in regularizing weight values before quantization interesting, the paper seems to lack essential information needed to verify the significance of the proposed work:
1. As you correctly noted, R2 Loss [2] was evaluated on CNNs using a full training approach. Since the loss term designed to regularize the weight distribution can be applied across various model architectures, it is crucial to thoroughly clarify the differences between R2 Loss and MagR, particularly in terms of loss term design or training efficiency. Although the difference between R2 Loss and MagR is briefly discussed in the rebuttal, the explanation provided is not detailed enough to fully clarify the distinction.
2. The proposed MagR+OPTQ does not consistently achieve the best ppl/accuracy results compared to previous works.
3. If the key novelty of MagR lies in pre-processing LLMs before quantization, it should be compatible with various quantization methods to enhance ppl/accuracy. However, the paper only discusses MagR+OPTQ without exploring the results of MagR combined with other quantization methods. Consequently, it is unclear whether MagR should be regarded as a general pre-processing solution or if ‘MagR+OPTQ’ is intended as a new quantization scheme. If MagR is indeed a broadly applicable pre-processing method, it would be highly valuable. If not, I have reservations about the significance of ‘MagR+OPTQ,’ given the limited ppl/accuracy improvement as discussed earlier.
Therefore, I still perceive MagR as a limited contribution and will maintain my stance.
---
Rebuttal 3:
Comment: ---
1. ... it is crucial to thoroughly clarify the differences between R2 Loss and MagR, particularly in terms of loss term design or training efficiency ...
---
As we previously noted, R2 regularization applies an $\ell_\infty$ penalty to the **traditional network loss** during end-to-end model pre-training, optimized using **standard SGD**, which is **practically infeasible for LLMs**. In contrast, MagR, as a concurrent approach, operates directly on the pre-trained model in a layer-by-layer manner, utilizing a **linear least squares loss** to preserve each layer's output. MagR employs a **proximal gradient descent** algorithm specifically tailored for this objective, **enabling efficient processing of LLMs**. This mathematically elegant algorithm represents a main contribution of our work.
---
2. ... MagR+OPTQ does not consistently achieve the best ppl/accuracy results ...
---
While we don’t claim that MagR+OPTQ always achieves the highest accuracy, it does perform at or near the top in most cases. Moreover, its accuracy can be further enhanced by applying techniques like additional coordinate descent iterations or learnable weight clipping as suggested by Reviewer 5GSj. Accuracy is just one of many critical performance metrics. More importantly, we introduce a technique that **incurs no additional overhead at inference time while achieving both training efficiency and accuracy comparable to the state-of-the-art, which typically sacrifices inference speed**. Inference speed is crucial for real-world applications, particularly in resource-limited settings such as edge computing.
---
3. ... the paper only discusses MagR+OPTQ without exploring the results of MagR combined with other quantization methods ...
---
We would like to remind the reviewer that **we also reported the substantial performance gain of MagR over the nearest round method (MagR+RTN vs RTN) in Table 2.**
| Method | Wbits | Wiki 7B | Wiki 13B | Wiki 70B | C4 7B | C4 13B | C4 70B |
|-|-|-|-|-|-|-|-|
| Baseline | FP16 | 5.47 | 4.88 | 3.31 | 6.97 | 6.46 | 5.52 |
| RTN | 4/16 | 6.11 | 5.20 | 3.67 | 7.71 | 6.83 | 5.79 |
| MagR+RTN | 4/16 | 5.91 | 5.17 | 3.58 | 7.52 | 6.81 | 5.72 |
| RTN | 3/16 | 539.48 | 10.68 | 7.52 | 402.35 | 12.51 | 10.02 |
| MagR+RTN | 3/16 | 8.66 | 6.55 | 4.64 | 10.78 | 8.26 | 6.77 |
Here we also demonstrate that MagR can enhance the performance of QuIP:
|Model|Method|Wbits|Wiki (PPL)| C4 (PPL)|
|-|-|-|-|-|
|LlaMa2-7B|QuIP|4|5.94|8.01|
| |MagR+QuIP|4|5.74|7.25|
| |QuIP|3|6.50|8.74|
| |MagR+QuIP|3|6.25|7.88|
| |QuIP|2|27.13|31.33|
| |MagR+QuIP|2|13.31|14.49|
||||||
|LlaMa2-13B|QuIP|4|5.01|6.88|
| |MagR+QuIP|4|4.99|6.63|
| |QuIP|3|5.34|7.34|
| |MagR+QuIP|3|5.29|7.02|
| |QuIP|2|10.09|13.13|
| |MagR+QuIP|2|9.40|11.07|
In summary, MagR preserves the behavior of the pre-trained model, as demonstrated by the new data, while allowing for small quantization steps in the subsequent PTQ process, leading to reduced quantization error. So theoretically, MagR can be combined with other PTQ methods.
---
Rebuttal Comment 3.1:
Comment: Thank you for your response.
After reading your reply, I have decided to raise my score to 7, as the additional experimental results on MagR combined with other quantization methods have convinced me of the effectiveness of the proposed work.
However, I would like to request that the authors properly cite previous works related to weight regularization. While I understand that this paper is not identical to earlier approaches, applying weight regularization to achieve quantization-friendly models is a well-studied technique, and this paper currently lacks citations of these relevant works.
---
Reply to Comment 3.1.1:
Comment: Thank you for your feedback and for raising your score. We appreciate your recognition of the contributions of our work. We will thoroughly review and cite previous works related to weight regularization that are pertinent to our approach. | Rebuttal 1:
Rebuttal: We thank the reviewers for their constructive feedback. We address the common concerns below:
**(1) Why MagR works:**
- MagR effectively reduces the range of the pre-trained weights by employing $\ell_\infty$-minimization, as illustrated by Figure 1. Given that the quantization step (or float scalar) is linearly proportional to the range of the pre-trained weights for a fixed bit-width (by the definition of uniform quantizer), **MagR results in a smaller quantization step. This, in turn, leads to a smaller quantization error**. In fact, given the activations $X\in\mathbb{R}^{m\times n}$, quantizer $\mathcal{Q}$ with quantization step $\delta>0$, pre-trained weights $\hat{w}\in\mathbb{R}^n$, the following analysis shows that **the $\ell_2$ quantization error is linear in $\delta$**:
$$|| Xw_q - X\hat{w}|| \leq ||X \mathcal{Q}(\hat{w}) - X\hat{w}|| \leq \sigma_{\max}(X) ||\mathcal{Q}(\hat{w}) - \hat{w}|| \leq \frac{\sigma_{\max}(X) \sqrt{n}}{2}\delta.$$
Our additional experiments also demonstrate that **the layer-wise quantization errors indeed are reduced by applying MagR on randomly sampled layers; see the figure in the pdf file**.
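The linear dependence of the $\ell_2$ quantization error on the step $\delta$ can also be checked numerically; the following is a hedged sketch with synthetic data and a plain round-to-nearest uniform quantizer (not the OPTQ solver):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 64, 32
X = rng.standard_normal((m, n))
w_hat = rng.standard_normal(n)
sigma_max = np.linalg.svd(X, compute_uv=False)[0]  # largest singular value of X

errors, bounds = [], []
for bits in (2, 3, 4):
    lo, hi = w_hat.min(), w_hat.max()
    delta = (hi - lo) / (2 ** bits - 1)                # step is linear in the weight range
    w_q = lo + delta * np.round((w_hat - lo) / delta)  # round-to-nearest uniform quantizer
    errors.append(np.linalg.norm(X @ w_q - X @ w_hat))
    bounds.append(sigma_max * np.sqrt(n) / 2 * delta)  # sigma_max(X) * sqrt(n) / 2 * delta
```

Each measured error stays below the corresponding bound, and both shrink as the bit-width grows (i.e., as $\delta$ shrinks).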
- We also note that MagR can preserve all the layers' outputs (before performing quantization), ensuring that the pre-trained model's performance remains unaffected. **This is crucial in the PTQ setting, as the goal is to search for the quantized model only within the local neighborhood of the pre-trained model**. The following table shows that MagR preprocessing indeed approximately maintains the perplexity (ppl) of the pre-trained model, with only minor degradation. Since we minimize the regularized objective $ \frac{1}{2}|| Xw - X\hat{w}||^2 + \alpha ||w||_{\infty}$ for preprocessing, the minimizer $w^*$ satisfies $|| Xw^* - X\hat{w}|| = O(\sqrt{\alpha})\to 0$ as $\alpha \to 0$. When choosing a small $\alpha$ (equivalently, placing a large relative weight on the fidelity term), $|| Xw^* - X\hat{w}||$ will be small but not exactly 0. This explains why the model performance degrades slightly after MagR (before quantization).
| Model | Method | Wikitext2 (PPL) | C4 (PPL) |
|----------|-|----------------|--------------|
| LlaMa2-7B | Original | 5.47 | 6.97 |
| | After MagR | 5.52 | 7.04 |
| LlaMa2-13B | Original | 4.88 | 6.46 |
| | After MagR | 4.92 | 6.52 |
| LlaMa2-70B | Original | 3.31 | 5.52 |
| | After MagR | 3.35 | 5.56 |
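For concreteness, the $\ell_\infty$-regularized objective above can be minimized by proximal gradient descent, where the proximal operator of $\alpha\|\cdot\|_\infty$ follows from the Moreau decomposition as the residual of a Euclidean projection onto an $\ell_1$-ball. The following is a hedged sketch on synthetic data, not the authors' implementation:

```python
import numpy as np

def project_l1_ball(v, radius):
    # Euclidean projection onto {u : ||u||_1 <= radius} (sort-based algorithm)
    if np.abs(v).sum() <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    ks = np.arange(1, len(u) + 1)
    rho = ks[u > (css - radius) / ks].max()
    theta = (css[rho - 1] - radius) / rho
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def prox_linf(v, scale):
    # Moreau decomposition: prox of scale*||.||_inf = identity minus l1-ball projection
    return v - project_l1_ball(v, scale)

def magr_preprocess(X, w_hat, alpha=1e-3, iters=300):
    # proximal gradient descent on 0.5*||X w - X w_hat||^2 + alpha*||w||_inf
    H = X.T @ X
    step = 1.0 / np.linalg.eigvalsh(H).max()  # 1 / Lipschitz constant of the gradient
    w = w_hat.copy()
    for _ in range(iters):
        w = prox_linf(w - step * (H @ (w - w_hat)), step * alpha)
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 16))
w_hat = rng.standard_normal(16)
w = magr_preprocess(X, w_hat)
```

Since the objective is non-increasing along the iterates, the output's maximum magnitude never exceeds that of the pre-trained weights, while the layer output $Xw$ stays close to $X\hat{w}$ for small $\alpha$.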
**(2) Inference efficiency:** MagR essentially replaces the original pre-trained weights with a new set of weights that have smaller magnitudes prior to actual quantization, without sacrificing the original accuracy or altering the architecture. Consequently, our MagR+OPTQ achieves **exactly the same inference efficiency** as OPTQ. This is immediately supported by the widely-used inference kernel from the AutoGPTQ library. In contrast, at inference time, QuIP requires performing a random linear transformation on the activations before multiplying the quantized weights. Since the code for QuIP's inference is not released, we cannot compare them directly. But according to QuIP's own report, QuIP's inference speed is at least **1.5 times slower than OPTQ** for the OPT-66B model (81 ms vs 53 ms), in addition to the power consumption overhead.
**(3) Preprocessing time:** We would like to highlight that a major contribution of our work is efficiently addressing the computational challenge of large-scale $\ell_\infty$-regularization with matrix variables. The preprocessing times for MagR (without quantization) on **a single A100 GPU** are modest: **15 min for Llama2-7B, 30 min for 13B, and 3.5 hr for 70B**. We believe that MagR is readily applicable to larger LLMs with 100+B parameters.
**(4) Ablation study on** $\alpha$: $\alpha$ is the parameter balancing the tradeoff between the output discrepancy and the max magnitude of the weights. We show the ablation study on $\alpha$ for channel-wise quantization on LLaMA2-7B as below. Note that **the choice of $\alpha$ does not depend on the bit-width** and $\alpha=0.001$ is the best choice for channel-wise quantization.
| Model | $\alpha$ | W/A | Wiki (PPL) | C4 (PPL) |
|----------|-|---------|-------|--------------|
| LlaMa2-7B | 0.005 | 4/16 | 5.84 | 7.55 |
| | **0.001** | 4/16 | **5.70** | **7.28** |
| | 0.0005 | 4/16 | 5.72 | 7.29 |
| | 0.0001 | 4/16 | 5.78 | 7.35 |
| | 0.00001 | 4/16 | 5.81 | 7.40 |
| | | | |
| | 0.005 | 3/16 | 6.64 | 8.74 |
| | **0.001** | 3/16 | **6.41** | **8.23** |
| | 0.0005 | 3/16 | 6.49 | 8.38 |
| | 0.0001 | 3/16 | 6.83 | 8.79 |
| | 0.00001 | 3/16 | 7.08 | 9.19 |
In light of our responses to the reviewers' concerns, we would be very grateful if you would reconsider your opinion. We believe our work proposes a simple and effective method for quantizing LLMs without introducing any inference overhead.
Pdf: /pdf/9cf7583a883a27e5ef4a9c7fa93be7f848b9369a.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Optimal Rates for Vector-Valued Spectral Regularization Learning Algorithms | Accept (poster) | Summary: The paper considers the vector-valued regression problem. Given $n $ iid samples $\{(x\_i,y\_i)\}\_{i=1}^n$ from a distribution $\mathcal{D}$ on $\mathcal{X} \times \mathcal{Y}$, the goal is to output the estimator $\hat{f}$ such that $\mathbb{E}[||\hat{f}(x)-f^{\star}(x)||\_{\mathcal{Y}}^2]$ is small. Here, $f^{\star}$ is the optimal regressor, that is $\mathbb{E}[y \mid x]$. In this work, the authors consider the case where the estimator $\hat{f} $ is obtained through spectral regularization methods such as Tikhonov, hard-thresholding, and iteration filters.
Strengths: - In the RKHS framework, the ridge estimator obtained by Tikhonov regularization is a canonical estimator. However, in Theorem 3, the authors show that the ridge estimator cannot exploit the higher-order smoothness of the optimal regressor $f^{\star}$ .
- In Theorem 4, the authors show that other spectrally regularized estimators (for example, one based on hard-thresholding) can exploit the higher-order smoothness of the functions.
- The paper also points out some inaccuracies in earlier work. I did not verify the validity of these claims, but if true, this is also an important contribution to the literature.
- Overall, the paper is well-written and is easy to follow.
Weaknesses: The proof of Theorem 4 is fully deferred to the Appendix. Given that the proof is highly technical, it would be helpful for the reader to include a high-level proof sketch in the main text of the paper. For example, some discussion of key challenges in generalizing the proof of scalar-valued to general $\mathcal{Y}$ would be useful.
Technical Quality: 3
Clarity: 3
Questions for Authors: If the proof in [1] were correct or could be readily fixed, couldn't Theorem 3 be inferred immediately from the result in the scalar-valued case? Here is a sketch of the argument:
Let $\\{d\_j\\}\_{j \in \mathbb{N}}$ be the ONB of $\mathcal{Y}$, and $\mathcal{Y}\_1 = \\{ \langle y, d_1 \rangle \quad |\quad y \in \mathcal{Y} \\}$ to be the subspace along the direction $d\_1$. Consider the probability distribution such that $y \mid x$ is only supported over $\mathcal{Y}\_1$.
Suppose $f^{\star}$ is the optimal regressor and $\hat{f}\_{\lambda}$ is the KRR estimator. Then, we have
$$\mathbb{E}[||\hat{f}\_{\lambda}-f^{\star}||\_{\mathcal{Y}}^2] = \int ||\hat{f}\_{\lambda}(x)-f^{\star}(x) ||\_{\mathcal{Y}}^2 p(x, dy) \pi(dx) \geq \int | \langle \hat{f}\_{\lambda}(x)-f^{\star}(x) ,d_1\rangle |^2 p(x, dy) \pi(dx) . $$
Now, this is effectively lower bounding the risk for scalar-valued regression. I believe that this argument can be formalized by defining a one-to-one mapping between $\mathcal{Y}\_1 $ and $\mathbb{R}$. I might be missing something here, so please correct me if I am wrong.
[1] Y. Li, H. Zhang, and Q. Lin. On the saturation effect of kernel ridge regression. In The Eleventh 390 International Conference on Learning Representations, 2023.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful and helpful feedback.
**Weaknesses**
The key challenge in generalising the proof from the scalar-valued case to the general vector-valued case lies in finding the right way to harmonize the technical definition of the vector-valued interpolation space with Fourier series computations (see Section C.1 in the Appendix), so as to control terms in the vector-valued setting by corresponding terms in the scalar-valued setting. Furthermore, it is essential to find the right decomposition of the learning risk. We appreciate the reviewer's suggestion to include a high-level overview of the proof. For the camera-ready version, we will use the extra page allowed to include a proof sketch.
**Questions**
Assuming the proof of [1] was true, we agree with the reviewer that this would be a particularly elegant way to obtain our results through the result in the scalar-valued case. Below, we confirm this by providing the full argument, and discuss a subtle point regarding the noise assumption that we must consider.
Given the vector-valued problem with random variables $(X,Y) \in \mathcal{X} \times \mathcal{Y}$ and Bayes function $F\_* (X) := \mathbb{E}[Y \mid X]$, we consider a modified setting where the target variable is projected along the direction $d\_j$: $\tilde{Y} := \langle Y, d\_j \rangle\_{\mathcal{Y}} \in \mathbb{R}$. We readily see that the Bayes function associated to this setting is $f\_* (X) := \mathbb{E}[\tilde{Y} \mid X] = \langle F\_{\ast}(X),d\_j\rangle\_{\mathcal{Y}}.$ Given our vector-valued estimator $\hat F\_{\lambda}$ (Eq. (7) in the submission), we can verify that $\hat{f}\_{\lambda}(\cdot) := \langle \hat{F}\_{\lambda}(\cdot), d\_j\rangle\_{\mathcal{Y}}$ is the scalar-valued ridge estimator associated to the dataset $\{(x\_i, \tilde{y}\_i)\}\_{i=1}^n \in (\mathcal{X} \times \mathbb{R})^n$ with $\tilde{y}\_i := \langle y\_i, d\_j \rangle\_{\mathcal{Y}}.$ To see this, using that $\hat{F}\_{\lambda}(\cdot) = \hat{C}\_{\lambda}\phi(\cdot)$, with $\hat{C}\_{\lambda}\in S\_2(\mathcal{H},\mathcal{Y})$, we obtain,
\begin{equation*}
\hat{f}\_{\lambda}(\cdot) = \langle \hat{F}\_{\lambda}(\cdot), d\_j\rangle\_{\mathcal{Y}} = \langle \phi(\cdot), \hat{C}\_{\lambda}^{\ast}d\_j\rangle\_{\mathcal{H}}.
\end{equation*}
By Eq. (11) in the current submission, $\hat{C}\_{\lambda} = \frac{1}{n}\sum\_{i=1}^{n}y\_i \otimes \phi(x\_i)\left(\hat{C}\_{XX}+\lambda\right)^{-1}$, therefore,
\begin{equation*}
\hat{C}\_{\lambda}^{\ast}d\_j = \left(\hat{C}\_{XX}+\lambda\right)^{-1}\frac{1}{n}\sum\_{i=1}^{n}\phi(x\_i)\langle y\_i,d\_j\rangle = \left(\hat{C}\_{XX}+\lambda\right)^{-1}\frac{1}{n}\sum\_{i=1}^{n}\phi(x\_i)\tilde{y}\_i,
\end{equation*}
which shows exactly that $\hat{f}\_{\lambda}$ is the desired scalar-valued ridge estimator. A slightly subtle point is that in order to apply the results of [1] to control the following term
\begin{equation*}
\int\left|\left\langle\hat{F}\_\lambda(x)-F\_{\star}(x), d\_j\right\rangle\right|^2 \pi(dx),
\end{equation*}
we must impose the assumption that for $\pi$-almost all $x\in \mathcal{X}$,
\begin{equation*}
\mathbb{E}\_{X,Y}\left[\langle Y - F\_{\ast}(X), d\_j\rangle^2\mid X=x\right] \geq \overline{\sigma}^2,
\end{equation*}
for some $\overline{\sigma}>0$. We refer to it as assumption (1). In contrast, the assumption we currently adopt in our paper states that for $\pi$-almost all $x\in \mathcal{X}$,
\begin{equation*}
\mathbb{E}\_{X,Y}\left[\\| Y - F\_{\ast}(X)\\|\_{\mathcal{Y}}^2\mid X=x\right] \geq \overline{\sigma}^2,
\end{equation*}
We refer to it as assumption (2). It is clear that (1) implies (2). We now provide an example to show that (2) does not imply (1).
*Example.* Let $\mathcal{Y} = \mathbb{R}^2$ and $\mathcal{X} = \\{0,1\\}$. Let $\pi$ denote the uniform distribution over $\mathcal{X}$. We define the joint distribution $p(x,y)$ as
\begin{equation*}
p(x,y) = p(y\mid x)\pi(x)
\end{equation*}
where
\begin{equation*}
p(y = (y_1, y_2) \mid x=0) = \frac{1}{2}\delta\_0(y\_1)1[y\_2\in [-1,1]]
\end{equation*}
and
\begin{equation*}
p(y = (y_1, y_2) \mid x=1) = \frac{1}{2}\delta\_0(y\_2)1[y\_1\in [-1,1]]
\end{equation*}
We note that $\mathbb{E}[Y \mid X=0] = \mathbb{E}[Y \mid X=1] = (0, 0)^{T}$. Thus, $F\_{\ast}(x) = \mathbb{E}[Y\mid X=x] =(0, 0)^{T}$ for $x \\in \\{0,1\\}$. On the other hand, writing $Y = (Y\_1,Y\_2)^{T}$, we have
\begin{align*}
&\mathbb{E}[Y\_1^2\mid X = 0] = 0,\quad \mathbb{E}[Y\_2^2\mid X = 0] = \frac{1}{3}\\
&\mathbb{E}[Y\_1^2\mid X = 1] = \frac{1}{3},\quad \mathbb{E}[Y\_2^2\mid X = 1] = 0
\end{align*}
while for all $x\in \\{0,1\\}$, we have
\begin{equation*}
\mathbb{E}[\\|Y \\|^2\mid X = x] = \frac{1}{3}
\end{equation*}
This provides the desired example where (2) holds but (1) does not.
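A quick Monte Carlo sanity check of this example (reading the conditional laws as uniform on the interval $[-1,1]$, which matches the stated conditional second moment $1/3$):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
# Y | X=0: Y1 = 0, Y2 ~ Uniform[-1, 1];   Y | X=1: Y1 ~ Uniform[-1, 1], Y2 = 0
y_given_0 = np.column_stack([np.zeros(N), rng.uniform(-1, 1, N)])
y_given_1 = np.column_stack([rng.uniform(-1, 1, N), np.zeros(N)])

# assumption (1) fails along d_1 = (1, 0): the projected noise vanishes at x = 0
proj_at_0 = (y_given_0[:, 0] ** 2).mean()          # exactly 0
# assumption (2) holds: the squared norm has conditional mean 1/3 at both x
norm_at_0 = (y_given_0 ** 2).sum(axis=1).mean()    # ~ 1/3
norm_at_1 = (y_given_1 ** 2).sum(axis=1).mean()    # ~ 1/3
```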
---
Rebuttal Comment 1.1:
Comment: Thank you for formalizing my sketch. I will retain my current score. | Summary: This manuscript presents the excess risk upper bound for spectral regularized algorithms whose output might belong to potential infinite-dimensional Hilbert space. Additionally, the saturation effect for a special case of vector-valued spectral algorithms, KRR, is rigorously confirmed.
Strengths: 1. The manuscript is easier to follow as it provides enough detail for moving from scalar-valued RKHS to vector-valued RKHS.
2. The manuscript established a series of well-studied properties/results for spectral algorithms and saturation effect to vector-valued output scenarios, which, to the best of my knowledge, is less concerned and studied in the community.
3. Identify some issues for proving the lower bound in previous work [1] and provide new techniques to handle the bias and variance.
Weaknesses: 1. I find that the saturation effect, or more exactly, the lower bounds for real-valued spectral algorithms, is proved in [2], seemingly a following work of [1]. Is there any specific obstacle the authors are facing to prove the lower bounds for the vector-valued spectral algorithms?
2. The (EVD+) condition looks weird to me. The authors state that the lower bound for the eigenvalue depends not only on $p$ but also on the running index $i$, and this is needed for the lower-bound proof. Can the authors elaborate more on this? I especially note that [1], which considered the real-valued case, does not have such a requirement of dependence on $i$ in the lower bound. Is this a unique challenge presented by the vector-valued setting? Or is this due to your correction of the proof issues in [1]?
[1] Y. Li, H. Zhang, and Q. Lin. On the saturation effect of kernel ridge regression. In The Eleventh International Conference on Learning Representations, 2023.
[2] Li, Yicheng, et al. "Generalization Error Curves for Analytic Spectral Algorithms under Power-law Decay." _arXiv preprint arXiv:2401.01599_ (2024).
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. This is just of independent interest. Consider spectral algorithms with finite qualification (like KRR). Since there is a gap between the upper and lower bounds when the interpolation index $\beta$ is greater than $2\rho$, we can consider the following approach to avoid this gap. Based on your misspecification results, one might consider imposing a kernel whose induced RKHS is much 'smoother' than the RKHS to which the true function belongs. However, without prior knowledge of the true RKHS, it is hard to pick the imposed kernel to make it 'smooth' enough.
So, I'm wondering if there are some general approaches to avoid the saturation effect. In practical applications, when one needs to use a specific general algorithm with finite $\rho$, this seems to be an important issue. (I understand this is a theoretical paper, but this just popped up in my head when I read the theorems).
2. Is the $Id_{\mathcal{Y}}$ in Equation (2) the identity operator on $\mathcal{Y}$? If so, I think you should define it for completeness.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: While the authors claim they discussed the limitation in terms of assumptions in the checklist. I don't clearly see them. But, to my knowledge, these assumptions are almost standard in the literature, except the one I raise in W2.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1.** We thank the reviewer for their encouraging feedback and for providing the reference [Li et al. (2024)]. We were not aware of this work. We think that it may be possible to generalise the techniques in [Li et al. (2024)] to the more challenging vector-valued setting. However, the assumptions and techniques in [Li et al. (2024)] are quite different from our work. In particular, Assumption 2 of [Li et al. (2024)] introduces the notion of a "regular RKHS", which is not a standard assumption in the literature. The examples they provide where this assumption holds are restricted to simple compact covariate spaces such as the d-dimensional torus or the d-dimensional unit ball. Hence, we would need to carefully check whether the results in [Li et al. (2024)] can be generalised to work under our weaker assumptions. If it turns out that we can generalise the results of [Li et al. (2024)] to our setting with some simple modifications, we will implement the changes in the camera-ready version. If it turns out that substantial technical analysis is required to harmonise [Li et al. (2024)] with our manuscript, we will include a citation to the approach of [Li et al. (2024)] in our work, and defer the analysis of the saturation effect of spectral algorithms in the vector-valued setting to future work.
**W2.** We thank the reviewer for carefully proofreading our manuscript and bringing this to our attention. The (EVD+) condition as stated in the manuscript contained a typo. It should read instead: For $D_1,D_2>0$ and $p\in (0,1)$ and for all $i\in I$,
$D_1 i^{-\frac{1}{p}} \leq \mu_i \leq D_2i^{-\frac{1}{p}},$
and this is the version we use in the proofs in the appendix. We will make sure to proofread the manuscript and correct the typos in the camera ready version. After correcting the typo, (EVD+) is a standard assumption used in the literature to obtain lower bounds (see for example [6] or [18]).
**Q1.** We thank the reviewer for their insightful question. In the following discussion we work under strong assumptions that allow us to precisely define what we mean by 'smoothness'. We stress that these assumptions are unlikely to hold in practice, and therefore one needs to be careful when using the concept of 'smoothness'. We consider the setting of [30] (Corollary 2), where 'smoothness' takes a formal meaning in terms of weak derivatives: $\mathcal{X}$ is a compact subset of $\mathbb{R}^d$, and $X$ follows a distribution equivalent to the Lebesgue measure on $\mathcal{X}$. Let $W^{s,2}(\mathcal{X};\mathcal{Y})$ denote the vector-valued Sobolev space (see [30] (Definition 3)). If $F \in W^{s,2}(\mathcal{X};\mathcal{Y})$, then all weak derivatives of $F$ of order $r := (r_1,\ldots, r_d) \in \mathbb{N}^d$ with $\|r\|_1\leq s$ exist and are square-integrable. Thus, the larger $s$, the smoother the function $F$.
We emphasize that the smoothness of the function $F_{\ast}$ as measured by the interpolation index $\beta$ depends on the vRKHS $\mathcal{G}$, whereas the smoothness of $F_{\ast}$ as measured by the vSobolev space to which it belongs simply depends on the smoothness index $s$. It is shown in [30] that both notions of smoothness are linked. Let the vRKHS be a Sobolev space: $\mathcal{G} = W^{m,2}(\mathcal{X};\mathcal{Y})$ (this is a vRKHS if $m > d/2$); and let $F_* \in W^{s,2}(\mathcal{X};\mathcal{Y})$ for $s \geq 0$. Then $F_* \in [\mathcal{G}]^{\beta}$ with $\beta = s/m$. Furthermore, it is also shown in [30] that the parameters of (EVD) and (EMB) are $\alpha = p = d/(2m)$. Finally, since $\beta = s/m$, given a qualification $\rho$, the saturation level is $s = 2m\rho$.
To simplify the discussion, we focus on the $L_2$-error rate ($\gamma=0$). Plugging the values of $\beta$ and $p$ into our rate in Theorem 4, we obtain
$$
\\|\hat F_{\lambda} - F_*\\|_{L_2}^2 = O_P\left(n^{-\frac{\min \\{\beta, 2\rho\\}}{\min\\{\beta, 2\rho\\} + p}}\right) = O_P\left(n^{-\frac{\min\\{s, 2\rho m\\}}{\min\\{s, 2\rho m\\} + d/2}}\right).
$$
This confirms the comment raised by the reviewer: we should impose a kernel whose induced vRKHS is as smooth as possible, as measured by $m$. We note, however, that this reasoning would only be implementable in practice if we could actually control the smoothness of the vRKHS through the kernel. As illustrated above, this is possible if $\mathcal{X}$ is a compact subset of $\mathbb{R}^d$, the input data distribution is equivalent to the Lebesgue measure on $\mathcal{X}$ and for kernels whose induced vRKHS is a Sobolev space, such as the Matérn kernel. Outside this restrictive setting, it is unclear how to interpret 'smoothness' and how we can precisely control the 'smoothness' of the vRKHS through the kernel.
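To make the trade-off concrete, here is a worked instance with illustrative numbers of our own (not taken from the manuscript), using the rate displayed above:

```latex
% Illustrative numbers: d = 1, Tikhonov regularisation (qualification \rho = 1),
% target F_* \in W^{3,2}, i.e. smoothness s = 3.
\[
m = 2:\quad 2\rho m = 4 > s
  \;\Rightarrow\; n^{-\frac{s}{s + d/2}} = n^{-\frac{3}{3.5}} = n^{-6/7},
\qquad
m = 1:\quad 2\rho m = 2 < s
  \;\Rightarrow\; n^{-\frac{\min\{s,\,2\rho m\}}{\min\{s,\,2\rho m\} + d/2}}
   = n^{-\frac{2}{2.5}} = n^{-4/5}.
\]
% The smoother kernel (larger m) avoids saturation and yields the strictly
% faster rate, despite the target smoothness s being the same in both cases.
```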
An alternative approach to avoid the saturation effect when learning with a finite-qualification spectral algorithm is to increase the qualification $\rho$. This can be achieved by using iterated ridge regression [20] (Section 5.4). The qualification of the iterated ridge learner is exactly the number of iterations. As a special case, we recover the fact that vanilla ridge regression has qualification $1$. This approach is popular in the econometrics and inverse problems literature. For example, see:
- Section 3 of S. Darolles, Y. Fan, et al., Nonparametric Instrumental Regression, Econometrica
- Section 9 of Z. Li, H. Lan, et al., Regularized DeepIV with Model Selection, arXiv
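To make the iterated construction concrete, here is a minimal NumPy sketch of iterated ridge regression on a finite-dimensional design (our own toy illustration; the names and setup are ours, not taken from [20]). The number of iterations plays the role of the qualification $\rho$, and one iteration recovers vanilla ridge:

```python
import numpy as np

def iterated_ridge(X, y, lam, n_iter):
    """Iterated Tikhonov (ridge) regression.

    Each step re-regularises towards the previous iterate:
        w_{t+1} = argmin_w ||y - X w||^2 + lam * ||w - w_t||^2,
    with closed form w_{t+1} = (X^T X + lam I)^{-1} (X^T y + lam w_t).
    n_iter = 1 is vanilla ridge; the qualification of the resulting
    spectral filter equals the number of iterations.
    """
    d = X.shape[1]
    A = X.T @ X + lam * np.eye(d)
    b = X.T @ y
    w = np.zeros(d)
    for _ in range(n_iter):
        w = np.linalg.solve(A, b + lam * w)
    return w

# Toy check: one iteration coincides with the standard ridge solution.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = X @ np.ones(5) + 0.1 * rng.normal(size=50)
w_ridge = np.linalg.solve(X.T @ X + np.eye(5), X.T @ y)
assert np.allclose(iterated_ridge(X, y, lam=1.0, n_iter=1), w_ridge)
```

As the number of iterations grows, the iterates converge to the unregularised least-squares solution, which is one way to see that a larger qualification weakens the regularisation bias responsible for saturation.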
**Q2.** We thank the reviewer for their question. $\mathrm{Id}_{\mathcal{Y}}$ is indeed the identity map on $\mathcal{Y}$. We will define all relevant notations in the camera ready version.
**Limitations.** We agree with the reviewer that the assumptions we made are standard assumptions in non-parametric kernel regression. As mentioned previously, the one raised in W2 is a typo and the corrected version is standard.
We have done our best to respond to the reviewer's questions. If we have done so, we would be grateful if the reviewer might consider increasing their score.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed responses.
W1. Thank you for explaining the differences in assumptions between the current manuscript and those in Li et al. (2024). As I am closely following this field, my intention is to understand the technical challenges involved in extending the current results to spectral algorithms, which could be a potential direction for further research.
Q1. After reviewing this, I noticed a recent work [1] that controls the smoothness of the RKHS through the Gaussian kernel, i.e., motivated by the fact that the Gaussian kernel is the limit of the Matérn kernel, which seems aligned with the need to "control the smoothness". I think it would be interesting to check if a vRKHS with Gaussian kernels can be applied to the setting in the current manuscript.
I will retain my score, as I would be happier to see a complete story in the paper, i.e., a proof of the saturation effect for vector-valued spectral algorithms. That being said, the paper is still in good shape, and I think it has made a sufficient contribution to the field. I will also support this paper during the discussion phase with the other reviewers.
[1] H. Lin, and M. Reimherr. "Smoothness Adaptive Hypothesis Transfer Learning." Forty-first International Conference on Machine Learning. | Summary: This paper considers the regression task of learning a mapping where both the input space and the output space can potentially be infinite dimensional. The authors formulate the problem setting by proposing a number of assumptions that can be thought of as the vector-valued counterparts of the standard assumptions in (real-valued) kernel regression. In the well-specified regime, the authors show that a saturation effect exists i.e. the Tikhonov regularized regression estimator is provably suboptimal. The same phenomenon is known to exist in the real-valued setting, but extension to vector-valued output is a novel contribution. Finally, the authors show that for estimators based on a class of filter functions, one can establish error rates that match the best-known upper bounds even in the real-valued setting.
Strengths: 1. The paper is well written. Though the authors consider an extension of previous works where the output space is R, all relevant notions and assumptions in the current setting are rigorously defined.
2. The main result of this paper addresses the optimality (at least for some set of smoothness parameters) of a class of regression estimators based on filter functions, going beyond Tikhonov regularization that most often appears in existing literature.
Weaknesses: 1. While the mathematical parts of this paper look sound, and the results appear to be novel, both the error rates and the proof techniques seem to have no difference with the finite-dimensional output setting. I haven't gone through all the proof details and it might be the case that additional challenges arise in the vector-valued setting. If this is the case, then it would be great if the authors can point to the most challenging parts of the proof with several sentences in the main part of the paper.
2. The main part of the paper ends a little abruptly -- it would be better to write a final section summarizing the contributions of the paper and discussing potential future directions. The problem setting and necessary assumptions may be introduced in a more concise way.
Technical Quality: 3
Clarity: 3
Questions for Authors: Does the lower bound for real-valued output ($n^{-\frac{\max\{\alpha,\beta\}-\gamma}{\max\{\alpha,\beta\}+p}}$) directly imply a lower bound in the vector-valued setting? If this is the case, it would be nice to state it as a theorem after Theorem 4.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their encouraging review of our work. We have done our best to respond to the reviewer's questions. If we have done so, we would be grateful if the reviewer might consider increasing their score.
**Weaknesses**
1. The additional difficulty with respect to the scalar-valued setting is handling the vector-valued interpolation spaces. Under the right decomposition, the vector-valued interpolation spaces allow us to reduce some terms to the scalar-valued case, while other terms require new analysis. We will add a proof sketch in the appendix to highlight the additional difficulties.
2. We fully agree with the reviewer that a conclusion should be added. We will use the extra page given for the camera ready version to add a concluding discussion and perspective on future works.
**Questions**
We believe the lower bound $n^{-\frac{\max\\{\alpha,\beta\\}-\gamma}{\max\\{\alpha,\beta\\}+p}}$ mentioned by the reviewer comes from [18]. We would first like to mention that this lower bound is obtained under the assumption that the target function is **bounded**. It was shown in [55] that this assumption is not needed. The correct information-theoretic lower bound is therefore the following. Given assumption (EVD+) with parameter $p \in (0,1]$, assumption (MOM), assumption (SRC) with parameter $\beta > 0$, and any $\gamma \in [0,\beta)$, any learning algorithm $\hat{F}$ will satisfy
$\\|\hat F - F_*\\|_{\gamma}^2 = \Omega_P \left(n^{-\frac{\beta- \gamma}{\beta + p}}\right).$
This result can be found in Remark 3 of [18] in the scalar-valued setting and in Theorem 5 of [30] in the vector-valued setting. The insight of the reviewer is correct: the vector-valued lower bound can be obtained using a reduction to the scalar-valued setting. We will make sure to include this remark in the camera ready version.
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my questions and concerns. I will maintain my score and stay on the positive side. | Summary: The submission explores a class of spectral learning algorithms for regression within the context of supervised learning using random design. The focus is on high-dimensional and potentially infinite-dimensional output spaces. The problem is framed as minimizing the risk associated with the least squares loss over vector-valued reproducing kernel Hilbert spaces (RKHS).
The authors make several contributions:
1. Saturation Effect of Ridge Regression: The paper rigorously confirms the saturation effect in ridge regression for general Hilbert output spaces.
2. Convergence Rates for Spectral Algorithms: The paper provides upper bounds on the rates of convergence for a broad range of spectral algorithms even in the misspecified learning case where the target function might not be contained within the RKHS. The smoothness of the target function is characterized using interpolation spaces.
Strengths: 1. The paper extends non-parametric regression results involving spectral regularization to a broader setting that includes potentially infinite-dimensional output spaces.
2. The authors explain their contributions mostly clearly, making the advancements in the field accessible.
3. They rigorously introduce the mathematical framework of vector-valued reproducing kernel Hilbert spaces (RKHSs) and regression, ensuring a solid theoretical foundation.
4. One particularly commendable aspect of the approach is the expression of the smoothness of the target function in terms of vector-valued interpolation spaces, which appears to be a natural and effective strategy.
5. Rates of convergence are presented that are shown to be optimal in the well-specified case, effectively closing a gap in the literature.
6. Additionally, the paper demonstrates tight lower bounds for Tikhonov regularization with a Hölder continuous kernel, proving that saturation is an unavoidable phenomenon in this context. The authors extend the results of [28] from real-valued output spaces to infinite-dimensional output spaces using the same bias-variance decomposition, albeit with a simpler approach.
Overall, the research provides valuable insights into spectral learning algorithms and their application in high-dimensional settings.
Weaknesses: 1. While the paper does close a gap in the literature, it follows the usual lines in non-parametric regression over RKHSs. The results are somewhat expected, and the approach is not actually new.
2. The setting seems rather restrictive: The maps that are learned with classical regularization approaches are basically linear operators (Thm. 1) . Perhaps, the authors can state this more clearly to better distinguish between other current streams of operator learning research (in particular non-linear operator learning).
3. The difference between this work and previous work [38], particularly in the well-specified case, could be explained in more detail to highlight the novel contributions.
Technical Quality: 3
Clarity: 3
Questions for Authors: Q1: Is the machinery with introducing vvRKHSs really necessary when functions of Hilbert-Schmidt operators are learned?
Q2: Theorem 2: Can "with high probability" be made more precise?
minor:
* some brackets for references are missing, e.g. p. 1, l.20, 33 and through out the manuscript
* the references [33] and [34] are the same, also [6] and [7]
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their encouraging feedback and their helpful comments. We have done our best to address all the questions. If we have done so, we would be grateful if the reviewer might consider increasing their score.
**Weaknesses:**
1. We agree, we base our investigation on the typical integral operator approach [6], making use of a variety of arguments based on available literature. In that sense, the structure of the rates is not surprising. We argue, however, that *our contribution is precisely the insight that known arguments can be transferred to vector-valued learning in a dimension-free manner* by refining a tensor product trick first used by [38,29]. This was not clear in the initial line of work going back to [6], as a trace condition ruled out infinite-dimensional product kernels. In a practical context, based on the representer theorem in Proposition 1, our work provides the first theoretical foundation for learning the conditional mean embedding with general spectral algorithms. Furthermore, it unifies the theory for kernel-based functional regression [25], which is novel in that degree of generality to the best of our knowledge.
2. Thank you for pointing this out. We agree that in light of the recent interest in operator learning, we should comment on our results in this context. We note that our work can be directly interpreted as regularised (nonlinear) nonparametric operator learning: Bochner-universality shows that the vvRKHS is *sufficient* to learn all Bochner square integrable nonlinear operators. We hypothesise that, analogously to how scalar kernel regression is used to understand generalisation
and early stopping in deep learning (for example via the NTK), our work could be a starting point for a theory of neural operator learning; although there is much theory that is still missing in the bigger picture. This seems to be a very interesting direction for future research.
Our work also contains *linear* operator learning (related to [38]): in particular, when $X$ takes values in a Hilbert space $\mathcal{X}$ we choose the scalar kernel $k(x,x') = \langle x, x'\rangle_\mathcal{X}$, we recover precisely the linear setting of [38] via the vector-valued kernel $K = k \\times \\operatorname{Id}\_{\\mathcal{Y}} $. The assumptions in the linear and nonlinear settings require caution, however. The standard assumption that $k$ is bounded guarantees the crucial embedding property (EMB), which only holds for almost surely bounded $X$ in the linear case. However, with the kernel $k(x,x') = \\langle x,x'\\rangle_{\\mathcal{X}} $ and when $X$ is bounded, *our rates apply in the linear operator learning setting without modification, and appear to be novel for this field.*
3. We note that the authors of [38] work directly in the operator regression setting and do not assume boundedness of the covariate $X$, but subgaussianity. In our work, boundedness of the embedded covariates is assumed implicitly through boundedness of the kernel $k$. In this sense, [38] is more general. On the other hand, in [38], the authors consider exclusively the well-specified setting and show rates up to $O(1/\sqrt{n})$, as no decay of eigenvalues of the integral operator is assumed and rate optimality under these assumptions is not addressed. In contrast, our assumptions allow for optimal rates up to $O(1/n)$ by exploiting the aforementioned eigenvalue decay, in which sense our results are more general.
We will include these discussions in the camera ready version.
**Questions**
1. We see the reviewer's point and argue that the vvRKHS framework is fairly general: it allows both linear and nonlinear operator learning on Hilbert spaces
as highlighted in W2 above, but also contains more general settings where one observes covariates $X$ on a general topological space without linear structure. We agree that generally, details about the machinery of vvRKHSs could be ignored when purely focusing on the operator learning setting. Nonetheless, interpreting HS-operators as vvRKHS functions allows convenient discussions based on the existing vvRKHS literature. One may also argue that our presentation may be practical for readers with an RKHS background and applications such as conditional mean embeddings are formulated in this setting; although this is likely up to personal preference and background.
2. We agree with the reviewer that Theorem 2 was introduced in an informal way. The formal version of the theorem can be found in [30] (Theorem 5). We will update Theorem 2 in that spirit in the camera ready version to make sure that what is hidden behind "with high probability" is correctly specified. For completeness we will add Theorem 5 from [30] in the appendix of our current submission so that the reader can have the theorem in full details.
Finally, we thank the reviewer for spotting typos in the references. We will carefully proofread and correct all typos in the camera ready version. | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for their encouraging and positive feedback. They sparked very interesting discussions that we will use to improve our manuscript in the camera ready version.
As a summary here are the main points that were brought up by the reviewers:
1. As the proof for the upper rates is quite lengthy, it was mentioned that it is not straightforward to understand the technical novelties with respect to previous works in the proof techniques. To address this, we will incorporate a proof sketch at the beginning of the appendix in the camera ready version.
2. Two reviewers wondered if our results could be extended to more general vector-valued kernels. We refer to our answer to reviewers f39z and U1CH.
3. One reviewer noted that a conclusion should be added. We will use the extra page in the camera ready version to add it and address the points raised by the reviewers. | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper focuses on learning vector-valued functions in reproducing kernel Hilbert spaces (RKHS). The kernel in this case is an operator-valued function instead of a scalar-valued function. The paper considers kernel-based vector-valued regression with spectral regularization, which includes ridge regression and kernel PCR. The contribution of the paper is theoretical: learning rates for vector-valued and spectral regularization-based regression are derived. In the case of kernel ridge regression upper and lower rates are provided. In the general case of spectral filter functions only upper rates are given.
Strengths: * The paper provides new theoretical results for vector-valued RKHS-based regression.
* The paper is well written.
Weaknesses: * Contributions compared to previous work should be made more clear.
Technical Quality: 3
Clarity: 3
Questions for Authors: * Section 3 focuses on kernel ridge regression (KRR). Learning rates and results on the saturation effect of vector-valued KRR have been reported in [1]. Could you make clear what are the contribution compared to [1] here? What are the main technical challenges compared to [1]?
[1] Li, Zhu, et al. "Towards Optimal Sobolev Norm Rates for the Vector-Valued Regularized Least-Squares Algorithm." JMLR (2024).
* Excess risk bounds for vector-valued learning with spectral filtering are provided in [2]. How these bounds can be compared to those obtained in this work?
[2] Baldassarre, Luca, et al. "Multi-output learning via spectral filtering." Machine learning (2012).
* Theorem 4 provides upper rates for vector-valued function learning with general spectral function. How optimality is maintained in this case?
* Could the results be extended to other classes of operator-valued kernels (e.g., non-separable kernels)?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: A detailed discussion on limitations is missing.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: First, we thank the reviewer for their encouraging review of our work. We have done our best to respond to the reviewer's questions. If we have done so, we would be grateful if the reviewer might consider increasing their score.
First of all, the reviewer highlighted that a comparison of our results to [1] and [2] should be made more thoroughly.
*Comparison to [1].* [1] only considers Tikhonov regularisation, while the present work handles arbitrary spectral regularisation. The main difficulty is applying the machinery of vector-valued interpolation spaces developed in [1] to arbitrary filter functions. We will add a proof sketch at the beginning of the appendix to highlight the terms in the bound that require new analysis.
*Comparison to [2].* We highlight the differences in the assumptions that are made to obtain excess risk bounds. 1) [2] focuses on vector-valued learning where the output space is $\mathcal{Y} = \mathbb{R}^d$, while we handle the more general case where $\mathcal{Y}$ is a potentially infinite-dimensional Hilbert space. This generalisation is important to study conditional mean embedding learning, a crucial step in nonparametric causal inference. 2) [2] focuses on the well-specified setting where the target function is assumed to belong to the hypothesis space, while we also handle the mis-specified setting. 3) [2] does not consider the effective dimension in their rate which corresponds to setting $p=1$ in assumption (EVD). 4) [2] works with bounded outputs, which is a stronger assumption than our (MOM) assumption.
Overall, their excess risk bounds are obtained under more stringent assumptions than ours, and they focus in most of the paper on applications. In the well-specified setting, when $\mathcal{Y} = \mathbb{R}^d$, with bounded outputs and ignoring the effective dimension ($p=1)$, our rates are identical. Obtaining excess risk bounds under our more general assumptions requires very different proof techniques. We will add this discussion in the camera ready version of the manuscript.
**Question on optimality after Theorem 4:** We realise that the conversation around optimality has been overshadowed by the fact that we did not clearly state what the information-theoretic lower bound is. Given assumption (EVD+) with parameter $p \in (0,1]$, assumption (MOM), assumption (SRC) with parameter $\beta > 0$, and any $\gamma \in [0,\beta)$, any learning algorithm $\hat{F}$ will satisfy (see Theorem 5 of [1])
$||\hat{F} - F_*||_{\gamma}^2 = \Omega_P \left(n^{-\frac{\beta- \gamma}{\beta + p}}\right).$
On the other hand, for an estimator $\hat F_{\lambda}$ with qualification $\rho$ (Eq. (11) in our manuscript), the rates given in Theorem 4 of our submission satisfy the following upper bound (with the correct choice of $\lambda$),
$||\hat F_{\lambda} - F_{*}||_{\gamma}^2 = O_P\left(n^{-\frac{\min(\beta, 2\rho)- \gamma}{\min(\beta, 2\rho) + p}}\right),$
if $\beta > \alpha - p$, and
$||\hat F_{\lambda} - F_*||_{\gamma}^2 = O_P\left(n^{-\frac{\beta- \gamma}{\alpha}}\right),$
if $\beta \leq \alpha - p$. Optimality is therefore maintained for the smoothness parameter $\beta$ in the interval $(\alpha - p, 2\rho]$. In the scalar-valued setting, this recovers the state-of-the-art rates of [3]. We hope this answer clarifies the question around optimality in Theorem 4, and we will make sure to include this detailed discussion in the camera ready version.
**Question on more general vector-valued kernels:** While one could consider operator-valued kernels that do not have the multiplicative structure $K = k \operatorname{Id}$ (with $k$ a scalar-valued kernel), it seems to us that it is currently the only relevant kernel in the infinite-dimensional setting to be found in the literature (see references with applications in the introduction). We hypothesise that this has two main reasons. Firstly, such a kernel allows for the numerical computation and evaluation of the finite-sample estimators via a vector-valued representer theorem analogously to the real-valued setting, as highlighted by Proposition 1 in our manuscript. We are not aware of such results for other types of kernels. Secondly, the available technical investigations of vRKHS induced by such kernels show that critical properties like *universality* are already achieved by this type of kernel (this is addressed in Remark 3 in our manuscript), allowing for universal consistency when used in supervised learning. In theory, one may argue that such kernels are in a sense *sufficient* for vector-valued learning problems.
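As an illustration of this computational convenience, here is a minimal NumPy sketch of vector-valued kernel ridge regression with the separable kernel $K = k \operatorname{Id}$ (our own toy code; the function names, the Gaussian choice of $k$, and the $n\lambda$ scaling convention are illustrative assumptions, not the manuscript's Proposition 1 verbatim). With this kernel, fitting reduces to one scalar Gram system shared across all output coordinates:

```python
import numpy as np

def rbf(X1, X2, gamma=1.0):
    """Scalar Gaussian kernel; any bounded scalar kernel k would do here."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def vv_krr_fit(X, Y, lam, kernel):
    """Vector-valued kernel ridge regression with K(x, x') = k(x, x') * Id_Y.

    The representer theorem gives F(x) = sum_i k(x, x_i) a_i with a
    coefficient matrix A solving (G + n * lam * I) A = Y, where G is the
    scalar Gram matrix: a single shared linear system for all output
    coordinates of the Hilbert-space-valued targets.
    """
    n = X.shape[0]
    G = kernel(X, X)                                   # (n, n) scalar Gram
    A = np.linalg.solve(G + n * lam * np.eye(n), Y)    # (n, dim_Y)
    return lambda X_new: kernel(X_new, X) @ A

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(40, 2))
Y = np.stack([np.sin(X[:, 0]), np.cos(X[:, 1])], axis=1)  # dim_Y = 2
F = vv_krr_fit(X, Y, lam=1e-3, kernel=rbf)
residual = np.abs(F(X) - Y).max()   # small training residual on smooth targets
```

Other spectral regularisation schemes would replace the $(G + n\lambda I)^{-1}$ solve with a different filter applied to the eigenvalues of the same Gram matrix, which is why the separable structure is convenient across the whole family.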
That being said, we believe it is possible to generalise the result to different families of operator-valued kernels. For example, in the context of vector-valued learning with Tikhonov regularisation [4] or with general spectral regularisation [5], the cited works study vector-valued regression with kernel $K:\mathcal{X} \times \mathcal{X} \to \mathcal{L}(\mathcal{Y})$ such that $K_x$ is Hilbert-Schmidt (hence compact). In infinite dimension, this rules out the kernel $K = k \operatorname{Id}$ due to the non compactness of $\operatorname{Id}$. Note that the lower bound they obtain in both works requires $\operatorname{dim}(\mathcal{Y}) < + \infty$ while we do not impose such a restriction.
However, we note that there is a technical caveat to directly transferring our results to more general kernels: we exploit the tensor product trick (the vvRKHS is isomorphic to the space of HS-operators) in order to apply real-valued arguments. For different kernels, such an isomorphism generally does not apply, and it is not entirely clear if and how real-valued arguments can be used straightforwardly. This may require a new approach and additional work.
[3] Z. Haobo, et al. On the Optimality of Misspecified Spectral Algorithms
[4] A. Caponnetto, E. De Vito. Optimal rates for the regularized least-squares algorithm
[5] Rastogi, A, Sampath, S. Optimal rates for the regularized learning algorithms under general source condition
---
Rebuttal Comment 1.1:
Comment: Thank you for the additional information. However, some issues still need clarification.
1. The authors said that "[1] only considers Tikhonov regularisation, while the present work handles arbitrary spectral regularisation". Theorem 4 considers an estimator based on a general spectral filter. To obtain optimality, the authors would like to use a lower bound provided in [1] (Theorem 5 [1]), as mentioned in the response. But [1] considers only the case of L2-regularization and so the result cannot be used for general spectral regularization.
2. This paper does not provide a lower bound in the case of arbitrary spectral regularization.
3. Regarding the question about the main technical challenges compared to [1], the answer is "The main difficulty is applying the machinery of vector-valued interpolation spaces developed in [1] to arbitrary filter functions". Can you be more specific?
---
Reply to Comment 1.1.1:
Title: Answer to points 1 and 2
Comment: Thank you for your additional effort in pursuing the discussion. Below we address your concerns.
**Regarding points 1 and 2**, the reviewer mentioned that "the result (the lower bound from [1]) cannot be used for general spectral regularisation". We agree that [1] focuses on vector-valued regression with Tikhonov regularisation. Their *upper bound* (Theorem 3 [1]) only applies to this setting. However, their *lower bound* (Theorem 5 [1]) is the information-theoretic lower bound, i.e. it applies to **any estimator**, including any of the spectral regularisation methods. Therefore **we do have a lower bound for arbitrary spectral regularisation**, directly inherited from [1].
However, using the lower bound from [1] is not entirely satisfying as it shows that the upper bound for arbitrary spectral regularisation is not tight in the high smoothness regime. Indeed, the lower bound of [1] (Theorem 5) for the squared $\gamma-$norm is in $\Omega(n^{-\frac{\beta- \gamma}{\beta + p}})$ and the upper bound in our current submission (Theorem 4) is in $O(n^{-\frac{\min\\{\beta,2\rho\\}- \gamma}{\min\\{\beta,2\rho\\} + p}})$ (when $\beta + p > \alpha$). It shows that spectral regularization methods do not achieve the optimal rate when $\beta \geq 2 \rho$. In our current submission, we show that this is unavoidable when we employ *Tikhonov regularisation*. Indeed, Theorem 3 provides a lower bound that *applies specifically to Tikhonov regularisation* (for which $\rho = 1$) in $\Omega(n^{-\frac{\min\\{\beta,2\\} - \gamma}{\min\\{\beta,2\\} + p}})$. This demonstrates that the saturation effect for Tikhonov regularisation is unavoidable. What remains to be shown is the following: given a spectral algorithm with qualification $\rho$, can we obtain a lower bound specific to this spectral algorithm in $\Omega(n^{-\frac{\min\\{\beta,2\rho\\} - \gamma}{\min\\{\beta,2\rho\\} + p}})$? This would show that saturation is unavoidable for arbitrary spectral algorithms with qualification $\rho$. This is a challenging topic for future work. | null | null | null | null | null | null |
Meta-DT: Offline Meta-RL as Conditional Sequence Modeling with World Model Disentanglement | Accept (poster) | Summary: A new framework is proposed for offline meta-reinforcement learning (meta-RL) that leverages transformers and world model disentanglement to enhance task generalization without the need for expert demonstrations or domain knowledge. The approach, called Meta Decision Transformer (Meta-DT), utilizes a context-aware world model to encode task-relevant information which guides the transformer in generating task-oriented sequences. Experimental results show that Meta-DT achieves superior few-shot and zero-shot generalization across different benchmarks, indicating its practicality and effectiveness in RL applications.
Strengths: Generalization without Experts: Meta-DT effectively generalizes across unseen tasks without requiring expert demonstrations or domain knowledge, which is a significant advancement in the field of meta-reinforcement learning.
World Model Disentanglement: The introduction of a context-aware world model that disentangles task-relevant information from behavior policies enhances the robustness and accuracy of task representation.
Efficient Use of Transformers: The paper creatively applies the transformer architecture to offline meta-RL, leveraging its strong sequential modeling capabilities to improve policy generation and adaptation.
Empirical Validation: The paper includes comprehensive experimental results showing that Meta-DT outperforms existing baselines in few-shot and zero-shot generalization tasks across multiple benchmarks, demonstrating its practical effectiveness.
Weaknesses: 1. Are there any environments or types of tasks where Meta-DT's performance is notably limited? What are the challenges in extending the model to such environments, and how might these be addressed in future work?
2. Can you elaborate on the process and benefits of world model disentanglement in Meta-DT? How does this process affect the model's ability to generalize across different tasks, especially in dynamically changing environments?
Technical Quality: 3
Clarity: 3
Questions for Authors: Mentioned in the weakness section
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. Are there any environments or types of tasks where Meta-DT's performance is notably limited? What are the challenges in extending the model to such environments, and how might these be addressed in future work?**
A1. Thank you for your insightful questions. We evaluated Meta-DT on seven environments adopted from three classical benchmarks in meta-RL. During rebuttal, we also conducted experiments on new evaluation domains, including Humanoid-Dir and two more Meta-World environments (Figure S2 in Global Response). **Results showed the consistent superiority of Meta-DT in these ten environments.**
On the other hand, the above environments are relatively homogeneous in nature. As stated in the limitations section, our generalist model is trained on relatively lightweight datasets compared to popular large models. The urgent trend that RL practitioners are striving to break through is to deploy on significantly larger datasets with more diversity. As mentioned by Reviewer sbh9, **it would be interesting to see how Meta-DT generalizes across worlds and tasks with more diversity**, like training on $K$ levels of an Atari game and generalizing to the remaining $N-K$ levels of well-suited Atari games.
The **challenges** in extending to such paradigms might include: i) how to collect large-scale datasets with sufficient diversity and high quality, ii) how to tackle heterogeneous task diversity in hard cases, and iii) how to scale RL models to very large network architectures as in large language or visual models. To tackle these potential challenges, we conjecture several **promising solutions**, including: i) leveraging self-supervised learning to facilitate task representation learning at scale, ii) enabling efficient prompt design or prompt tuning to facilitate in-context RL, and iii) incorporating effective network architectures like mixture-of-experts to handle heterogeneous domain diversity. We will include the above insights in our limitations section, and look forward to investigating these promising future directions.
**Q2. Elaborate on the process and benefits of world model disentanglement in Meta-DT, and how it affects the model's generalization ability.**
A2. Existing context-based or prompt-based methods usually infer task representations by feeding subsets of experience into a trajectory encoder. However, in RL regimes, since the offline dataset depends on both the task and the behavior policy, **the task information could be entangled with the features of behavior policies**, thus producing biased task inference at test time due to the shift in behavior policies. Taking 2D navigation as an example, tasks differ in goals and the behavior policy is going towards the goal for each task. The algorithm can easily distinguish tasks based on state-action distributions rather than reward functions, leading to extrapolation errors when the behavior policy shifts to random exploration during testing.
Hence, we attempt to find a stable way to accurately disentangle task-relevant information from behavior policies. For RL paradigms, **the world model completely describes the characteristics of the MDP/task, and remains invariant to behavior policies or collected datasets**. Naturally, it could be a promising alternative for accurately disentangling task beliefs. That is exactly the motivation of our world model disentanglement.
In the meta-learning setting, we assume that the world model shares some common structure across the task distribution. **The principle behind this is that tasks with similar contexts will behave similarly in the world model.** Intuitively, we map tasks into a latent space projected by the world model, and infer task similarity via the world model dynamics. That is, we extrapolate the meta-level knowledge across tasks by the extrapolation ability of the world model, which is more accurate and robust since the world model is intrinsically invariant to behavior policies or collected datasets. Subsequently, we pretrain the world model and then train the meta-policy with a fixed world model. This kind of decoupling ensures a more stable process for learning task representations, since the algorithm can solely focus on capturing task similarity via the world model dynamics.
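The decoupled two-stage pipeline described above can be sketched in miniature. The following toy Python sketch is purely illustrative (the one-dimensional linear dynamics s' = s + a + z and all names are our assumptions, not the paper's implementation): stage one fits a task representation z through the world model, and stage two freezes that model while the meta-policy consumes its context.

```python
# Toy sketch of the decoupled pipeline: pretrain a context-aware world model,
# then train the policy against the *frozen* world model. The 1-D linear
# dynamics (s' = s + a + z) and all names here are illustrative assumptions.

class ContextWorldModel:
    def __init__(self):
        self.frozen = False
        self.z = 0.0  # latent task representation learned in pretraining

    def pretrain(self, transitions):
        """Stage 1: fit z so predicted next states match the data.

        For this linear toy model, the least-squares fit of z is simply
        the mean residual s' - (s + a) over the dataset."""
        residuals = [s_next - (s + a) for (s, a, s_next) in transitions]
        self.z = sum(residuals) / len(residuals)

    def predict(self, s, a):
        return s + a + self.z


def train_meta_policy(world_model, transitions):
    """Stage 2: the world model is frozen; only the policy would be updated.

    A real implementation feeds z into the transformer policy at each
    timestep; here we just return the per-step context the policy sees."""
    world_model.frozen = True
    return [world_model.z for _ in transitions]
```

Note that in this noiseless toy setting, pretraining on transitions collected by two different behavior policies for the same task recovers the same z, which is the invariance the rebuttal appeals to.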
### `Summary Response`
Thank you for your valuable comments, which help us gain more insights into our methodology and motivate promising directions for future work. We are honored to have **your recognition of our work**, and are also grateful that **most reviewers are quite affirmative about our overall contributions**, including the novelty and motivation, the extensiveness of empirical evaluation, and the superior performance.
The corresponding figures of extended experiments can be found in the PDF of Global Response. Please let us know if we have addressed your concerns, and we are more than delighted to have further discussions and improve our manuscript.
---
Rebuttal Comment 1.1:
Title: Response
Comment: All of my concerns are well addressed. Thank you. I will increase my rating to accept
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: Thank you for your time in helping us improve our work. We are more than delighted to see that we could address all your concerns. We sincerely appreciate you raising the score on our work! | Summary: The paper introduces a new architecture for Meta-RL based on Decision Transformers. The new architecture uses a world model responsible for efficiently encoding task information from demonstrations. The model also introduces a prompt encoder which acts as a boosting mechanism to the context encoder to enhance the model's task adaptation abilities.
Strengths: - Strong experimental evaluation covering many different tasks, settings, ablations, and comparisons against several other techniques
- Strong results in this evaluation, with good demonstration that the introduced components make a difference.
- Simple design with good results
- Good figures and explanations, with std error ranges -- which can be critical for RL
Weaknesses: No major flaws with the paper.
The evaluation covers environments which are relatively homogeneous in nature. Performance in Meta-World is much less differentiating compared to MuJoCo. It would be nice to see a more detailed evaluation on non-MuJoCo tasks.
It would be interesting to see how the technique generalizes across worlds and tasks with more diversity in them; MuJoCo worlds are extremely minimal with little variation. Something like training on K levels of an Atari game and generalizing to the remaining N-K levels of well-suited Atari games. It is entirely possible the research in this area simply isn't ready to tackle hard cases such as this, though. Or something like the worlds seen in the Muesli paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: Why do you think the disentanglement works well as an idea? I understand the different perspectives of adaptation the prompt vs context encoder aim to target, but could you not form the context in a manner to do this, for example by running it twice with different subsets of experience? Is it the way things are fed into the prompt vs context encoder (segmenting) and the lack of constraints on the prompt encoder's representation?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations and future work are well addressed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. It would be nice to see more detailed evaluation in non-mujoco tasks.**
A1. Thank you for your advice. We have conducted new evaluations on Humanoid-Dir and two more Meta-World environments of Sweep and Door-Lock. The following table shows the few-shot testing performance of Meta-DT against baselines using Medium datasets (Figure S2 in Global Response). **The results again demonstrate the consistent superiority of Meta-DT over all baselines in these newly evaluated domains**.
| Environment | CMT | Prompt-DT | Generalized DT | CORRO | CSRO | Meta-DT |
| --- | --- | --- | --- | --- | --- | --- |
| Humanoid-Dir | 548.42 $\pm$ 40.98 | 556.43 $\pm$ 15.84 | 538.95 $\pm$ 7.04 | 429.66 $\pm$ 150.58 | 465.58 $\pm$ 21.78 | **661.08** $\pm$ 57.04 |
| Sweep | 493.06 $\pm$ 72.54 | 593.11 $\pm$ 92.10 | 549.17 $\pm$ 79.35 | 272.67 $\pm$ 77.41 | 446.50 $\pm$ 191.46 | **760.18** $\pm$ 4.87 |
| Door-Lock | 2291.82 $\pm$ 76.88 | 2284.95 $\pm$ 341.20 | 2601.24 $\pm$ 44.42 | 1615.66 $\pm$ 248.00 | 2216.67 $\pm$ 116.84 | **3025.92** $\pm$ 23.41 |
**Q2. It would be interesting to see how the technique generalizes across worlds and tasks with more diversity in them, like Atari games.**
A2. Thank you for pointing out this insightful direction. As stated in the limitations section, our generalist model is trained on relatively lightweight datasets compared to popular large models. The urgent trend that RL practitioners are striving to break through is to deploy on significantly large datasets with more diversity, unlocking the scaling law with the transformer architecture. Due to limited time during rebuttal, it is hard for us to conduct systematic empirical evaluation on Atari domains. We will include the above insights into our limitations section, and are looking forward to investigating this promising future direction.
**Q3. Why do you think the disentanglement works well as an idea?**
A3. Thank you for your insightful question. Existing context-based or prompt-based methods usually infer task representations by feeding subsets of experience into a trajectory encoder. The offline dataset depends on both the task and the behavior policy. **The task information could be entangled with the features of behavior policies**, thus producing biased task inference at test time due to the shift in behavior policies.
Hence, we attempt to find a stable way to accurately disentangle task-relevant information from behavior policies. In the RL regime, **the world model completely describes the characteristics of the MDP/task, and remains invariant to behavior policies or collected datasets**. Naturally, it could be a promising alternative for accurately disentangling task beliefs. That is precisely the motivation behind our world model disentanglement.
In the meta-learning setting, we assume that the world model shares some common structure across the task distribution. **The principle behind this is that tasks with similar contexts will behave similarly in the world model.** Intuitively, we map tasks into a latent space projected by the world model, and infer task similarity via the world model dynamics. That is, we extrapolate the meta-level knowledge across tasks by the extrapolation ability of the world model, which is more accurate and robust since the world model is intrinsically invariant to behavior policies or collected datasets. Subsequently, we pretrain the world model and then train the meta-policy with a fixed world model. This kind of decoupling ensures a more stable process for learning task representations, since the algorithm can solely focus on capturing task similarity via the world model dynamics.
### `Summary Response`
Thank you for your valuable comments, which help us enhance our experimental evaluation and gain more insightful directions for future work. We are honored to have **your recognition of our work**, and are also grateful that **most reviewers are quite affirmative about our overall contributions**, including the novelty and motivation, the extensiveness of empirical evaluation, and the superior performance.
The corresponding figures of extended experiments can be found in the PDF of Global Response. Please let us know if we have addressed your concerns, and we are more than delighted to have further discussions and improve our manuscript. | Summary: This work proposes a transformer-based framework for offline meta-reinforcement learning problems. The proposed algorithm utilizes a context-aware world model for task encoding and self-guided prompting. It outperforms existing offline meta-RL baselines.
Strengths: S1. The results in the paper show the performance gain of the proposed method over baselines.
Weaknesses: W1. The work misses the CMT baseline which is cited but not shown as a baseline (https://arxiv.org/abs/2211.08016).
W2. The work lacks novelty as the idea of a world model and Prompt tuning already exists in CMT. Moreover, the transformer architecture is based on a decision transformer.
W3. The run-time is unclear for fair comparison with baselines.
W4. The results for Mujoco benchmark $\texttt{Humanoid-Dir}$ in CSRO are missing. Meta-world results are limited (shown results only for $\texttt{Reach}$), thus hard for a general conclusion.
Technical Quality: 3
Clarity: 2
Questions for Authors: Q1. How do the world modeling and prompt tuning in the proposed method differ from CMT?
Q2. How does the method perform in comparison to CMT?
Q3. How does the method perform in $\texttt{Humanoid-Dir}$ and more meta-world benchmarks such as $\texttt{Hammer}$, $\texttt{Sweep}$, and $\texttt{Door-Lock}$?
Q4. What is the sensitivity with the prompt length $k$?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The limitations section is inadequate.
L1. There seems to be high computational overhead during training and inference.
L2. Meta-World results are drawn from only one task, which may call the generalization of the proposed method into question.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. The work misses the CMT baseline which is cited but not shown as a baseline.**
A1. The reason why we did not include CMT as a baseline is that **its code has not been open-sourced**. We emailed CMT's authors to request the source code, and received the response that the code would be made public at an uncertain time. Following your suggestion, **we have included CMT as a new baseline** using our own implementation (Figure S1 of Global Response). **The results again demonstrate the consistent superiority of Meta-DT over CMT**.
|Method|Point-Robot|Cheetah-Vel|Cheetah-Dir|Ant-Dir|Hopper-Param|Walker-Param|Reach|
|---|---|---|---|---|---|---|---|
|CMT|-12.21$\pm$ 0.51|-131.71$\pm$ 10.51|545.55$\pm$ 7.07|275.67$\pm$ 19.12|325.20$\pm$ 3.62|356.45$\pm$ 5.58|2295.34$\pm$ 121.56|
|Meta-DT|**-10.18**$\pm$ 0.18|**-99.28**$\pm$ 3.96|**608.18**$\pm$ 4.18|**412.00**$\pm$ 11.53|**348.20**$\pm$ 3.21|**405.12**$\pm$ 11.11|**2458.83**$\pm$ 162.10|
**Q2. How do the world modeling and prompt tuning in the proposed method differ from CMT?**
A2. **We do not tune the prompt**. Meta-DT selects the trajectory segment using the world model to construct the prompt, without tuning any model parameters associated with the prompt. While both involve the concepts of world model and prompt, **Meta-DT differs significantly from CMT in many crucial aspects** including:
- `The way of using the world model for task inference is different.` CMT infers task beliefs at the trajectory level, abstracting a task prompt $z_{\tau}$ from a context trajectory $\tau$ and using that prompt to guide policy generation for the entire episode. In contrast, Meta-DT performs task inference at the transition level, abstracting task representation $z_t$ at each timestep $t$. Hence, Meta-DT can provide more fine-grained guidance to realize high-capacity generalization, and can achieve zero-shot generalization via deriving real-time task inference.
- `The prompt design is quite different.` CMT abstracts task representation $z_{\tau}$ from a context trajectory and uses $z_{\tau}$ as the task prompt. In contrast, we directly use a trajectory segment $\tau^*$ as the task prompt, enjoying the power of architecture inductive bias as in Prompt-DT. Moreover, CMT tunes the prompt adaptor layer based on relabeling the offline dataset, while Meta-DT does not involve prompt tuning. Our complementary prompt design is more lightweight and easy to implement. By using the world model to help construct the prompt, the world model (algorithmic perspective) and the complementary prompt (architecture perspective) work as an efficient whole to perform accurate task inference for Meta-DT.
- `The pipeline of training the world model and meta-policy is different.` CMT simultaneously learns the world model and meta-policy. In contrast, Meta-DT decouples the learning process: it pretrains the world model and then trains the meta-policy with a fixed world model. This kind of decoupling ensures a more stable process for learning task representations, since the algorithm can solely focus on capturing task similarity via the world model dynamics.
- `The algorithm paradigm is different.` Our method is based on decision transformer, a return-conditioned supervised learning paradigm that has achieved promising results on offline RL. It feeds the (return-to-go, state, action) tuple sequence to the causal transformer to predict the action element only. In contrast, CMT uses another paradigm that feeds the (state, action, reward) tuple sequence to the causal transformer to perform autoregressive prediction on all elements.
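The return-conditioned sequence format mentioned in the last point can be illustrated with a short sketch. This is a generic decision-transformer-style preprocessing step, not the authors' code; the function names are our own.

```python
def returns_to_go(rewards):
    """Return-to-go at each timestep: the sum of rewards from t to the end."""
    rtg = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running += rewards[t]
        rtg[t] = running
    return rtg


def build_dt_sequence(states, actions, rewards):
    """Interleave (return-to-go, state, action) tokens per timestep.

    A causal transformer consumes this sequence and is trained to predict
    only the action tokens (return-conditioned supervised learning), in
    contrast to autoregressive prediction over all elements as in CMT."""
    tokens = []
    for g, s, a in zip(returns_to_go(rewards), states, actions):
        tokens += [("rtg", g), ("state", s), ("action", a)]
    return tokens
```

For example, a trajectory with rewards [1.0, 2.0, 3.0] has returns-to-go [6.0, 5.0, 3.0], and the resulting token sequence is three (rtg, state, action) triples.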
**Q3. Evaluation on Humanoid-Dir and more Meta-World environments.**
A3. Following your suggestion, **we have conducted new evaluation on Humanoid-Dir and two Meta-World environments of Sweep and Door-Lock**. The following table shows the few-shot testing performance of Meta-DT against extended baselines using Medium datasets (Figure S2 in Global Response). The results again demonstrate the **consistent superiority of Meta-DT** over all baselines in these domains.
|Environment|CMT|Prompt-DT|Generalized DT|CORRO|CSRO|Meta-DT|
|---|---|---|---|---|---|---|
|Humanoid-Dir|548.42$\pm$ 40.98|556.43$\pm$ 15.84|538.95$\pm$ 7.04 |429.66$\pm$ 150.58|465.58$\pm$ 21.78|**661.08**$\pm$ 57.04|
|Sweep|493.06$\pm$ 72.54|593.11$\pm$ 92.10|549.17$\pm$ 79.35|272.67$\pm$ 77.41|446.50$\pm$ 191.46|**760.18**$\pm$ 4.87|
|Door-Lock|2291.82$\pm$ 76.88|2284.95$\pm$ 341.20|2601.24$\pm$ 44.42 |1615.66$\pm$ 248.00|2216.67$\pm$ 116.84|**3025.92**$\pm$ 23.41|
**Q4. The run-time during training and inference is unclear for fair comparison with baselines.**
A4. Please refer to our `A1 to Reviewer iZYy` for the tables of model parameters, training time, and inference time for one episode. Compared to DT-based baselines, our method incurs about a 15\% increase in model parameters, and about a 10\% increase in training and inference time. **Considering the significant performance gain of our method, this lightweight cost is likely acceptable.** Moreover, Meta-DT's computation cost is even lower than other baselines.
**Q5. What is the sensitivity with the prompt length $k$?**
A5. We have conducted new experiments to investigate the influence of prompt length $k$ on Meta-DT’s performance (Figure S3 in Global Response). Generally, **the performance of Meta-DT is not sensitive to the pre-defined value of prompt length**.
|Prompt length $k$|3|5|7|
|---|---|---|---|
|Point-Robot|**-10.18**$\pm$ 0.18|-10.19$\pm$ 0.15|-10.22$\pm$ 0.04|
|Cheetah-Dir|606.69$\pm$ 2.92|**608.18**$\pm$ 4.18|605.09$\pm$ 6.31|
|Ant-Dir|408.97$\pm$ 18.01|412.00$\pm$ 11.53|**413.09**$\pm$ 8.82|
**Q6. The limitations section is inadequate.**
A6. We have discussed three threads of our limitations. During rebuttal, the valuable comments from all reviewers also give us new insights into our method, and we will have a more thorough discussion on limitations based on these comments.
---
Rebuttal 2:
Title: Response Summary by Authors
Comment: Thank you for your valuable review comments, which help us gain more critical insights on the difference from existing works, and further enhance our empirical evaluation. We summarize the three main concerns as
- `The difference from CMT.` While both involve the concepts of world model and prompt, **Meta-DT differs significantly from CMT in many crucial aspects** (refer to our A2). Also, **the other four reviewers** (we refer to iZYy as R1, LRM6 as R2, sbh9 as R4, and BWEr as R5) **are quite affirmative about our novelty and motivation** ("a novel Meta-DT" by R1, "a clear motivation... quite novel and inspiring" by R2, "a new architecture... no major flaws with the paper" by R4, and "a new framework" by R5). We hope that our novelty and distinct contributions have been adequately justified.
- `Empirical comparison to CMT.` We did not include CMT as a baseline since we could not get its source code. Following your advice, **we have added CMT as a new baseline**, and evaluation results again show the superiority of our method.
- `Evaluation on more environments.` We evaluated our method on **seven environments adopted from three classical benchmarks** in meta-RL. Also, **the other four reviewers are quite affirmative about the extensiveness of our empirical evaluation** ("shows the good performance" by R1, "comprehensive experiments on multiple benchmarks" by R2, "strong experimental evaluation over many different tasks" by R4, and "comprehensive experimental results" by R5). Following your advice, **we have conducted new evaluations on Humanoid-Dir and two more Meta-World environments**. Finally, results of **a total of 10 environments from 3 benchmarks** again verify our advantages. We hope that our extended experiments can further verify the generalization of our method.
The corresponding figures of extended experiments can be found in Global Response. Again, we appreciate your time in reviewing our manuscript and your valuable comments. We would like to justify our method to you in more detail, and we are more than delighted to have further discussions to improve our manuscript. If our response has addressed your concerns, we would be grateful if you could re-evaluate our work.
---
Rebuttal Comment 2.1:
Comment: Thank you for the detailed response. As the majority of my concerns have been addressed, I will increase my rating.
---
Reply to Comment 2.1.1:
Title: Thank you!
Comment: Thank you for your time in helping us improve our work. We are happy to see that we could address your main concerns. Thank you sincerely for raising the score on our work. We truly appreciate it! | Summary: This paper proposes a novel meta-RL framework called Meta Decision Transformer (Meta-DT). It leverages robust task representation learning via world model disentanglement to achieve task generalization in offline meta-RL. Firstly, it pretrains a context-aware world model to capture the task-relevant information from the offline dataset. Then, it guides the sequence generation of the decision transformer using the task representation and the self-guided prompt from past trajectories. The authors conduct extensive experiments on multiple benchmarks. Meta-DT demonstrates better few- and zero-shot generalization ability than other baselines.
Strengths: 1. This paper has a clear motivation in disentangling the task specific information. It is quite novel and inspiring to make use of the prompt to provide task-specific context and exploit architecture inductive bias.
2. The authors conduct comprehensive experiments on multiple benchmarks, showing notable performance gains in many tasks of both few-shot and zero-shot scenarios. The ablation studies are also designed properly to justify the algorithm design.
3. The paper is well-organized and easy to understand. The authors also provide detailed pseudo-codes and hyper-parameters in the appendix for reproduction.
Weaknesses: 1. From my perspective, the generalization ability of meta-DT depends heavily on the extrapolation ability of the context encoder. Although it is intuitive for some continuous tasks like Cheetah-Vel, this extrapolation tends to be quite hard for some other discrete tasks like Cheetah-Dir. Despite the good experiment performance, some intuitive explanation and theoretical justification are also desirable.
2. To evaluate the generalization ability more comprehensively, it may be helpful to provide experiments with different numbers of training tasks. It is also interesting to see the limit of Meta-DT's extrapolation ability. For example, what if the training tasks of Ant-Dir are sampled from U[0, 1.5$\pi$] with hold-out tasks from U[1.5$\pi$, 2$\pi$]?
Technical Quality: 3
Clarity: 3
Questions for Authors: Please also consider responding to the Weaknesses.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. Some intuitive explanation on the extrapolation ability are also desirable.**
A1. Thank you for your insightful comments. Meta-DT's generalization comes from the extrapolation ability of the context-aware world model $W(r,s'|s,a; z^i)$. The world model completely describes the characteristics of the MDP/task. In the meta-learning setting, we assume that the world model shares some common structure across the task distribution, with a latent representation $z^i$ to approximate the unknown context of task $i$. **The principle behind this is that tasks with similar contexts will behave similarly in the world model.** Intuitively, we map tasks into a latent space projected by the world model, and infer task similarity via the world model dynamics. That is, we extrapolate the meta-level knowledge across tasks by the extrapolation ability of the world model, which is more accurate and robust since the world model is intrinsically invariant to behavior policies or collected datasets. For discrete tasks like Cheetah-Dir, the world model also shares some common structure across the task distribution, e.g., the kinematics principle or locomotion skills. Hence, **the extrapolation of the world model works for both continuous and discrete tasks**. We will enhance the intuitive explanation of our method in the revised paper.
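One way to make "inferring task similarity via the world model dynamics" concrete is the following hedged sketch: it picks, from a set of candidate latents, the one whose (toy, one-dimensional) world model best explains a context trajectory. The linear dynamics and the candidate-set formulation are our own simplifying assumptions, not the paper's actual mechanism.

```python
def dynamics_error(z, transitions):
    """Squared prediction error of a toy world model s' = s + a + z."""
    return sum((s + a + z - s_next) ** 2 for (s, a, s_next) in transitions)


def infer_task(candidate_latents, transitions):
    """Map a context trajectory to the latent whose dynamics explain it best.

    Similarity is measured through the world model, not through the
    behavior policy that collected the data -- so the inference is
    unchanged if the same task is explored by a different policy."""
    return min(candidate_latents, key=lambda z: dynamics_error(z, transitions))
```

In this toy, any trajectory consistent with dynamics offset z = 2 is mapped to the latent 2.0 regardless of which state-action pairs the behavior policy happened to visit.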
**Q2. It may be helpful to provide the experiments with different numbers of training tasks.**
A2. Following your advice, we have conducted new experiments to evaluate Meta-DT with a different number of training tasks. The following table shows the few-shot test performance of Meta-DT using Medium datasets (Figure S4 in Global Response). **Increasing the number of training tasks can usually improve the generalization ability to test tasks, especially in harder tasks like Ant-Dir**.
The result also matches the intuition of function approximation in machine learning. The "generalization" refers to the question: How can experience with a limited subset of the data space be usefully generalized to produce a good approximation over a much larger subset? In the single-task setting, increasing the valid sample points (before being overfitted) can generally improve the function approximation of the sample space and the generalization to unseen samples at testing. In the meta-learning setting, increasing the valid task points (before being overfitted) can usually boost the function approximation of the task space and the generalization to unseen tasks at testing. We will include this new experiment and the interesting findings in our revised paper.
| Number of training tasks | 15 | 30 | 45 |
| --- | --- | --- | --- |
| Point-Robot | -10.20 $\pm$ 0.28 | -10.45 $\pm$ 0.22 | **-10.18** $\pm$ 0.18 |
| Cheetah-Vel | **-85.42** $\pm$ 1.59 | -95.14 $\pm$ 0.68 | -99.28 $\pm$ 3.96 |
| Ant-Dir | 388.50 $\pm$ 9.02 | 402.71 $\pm$ 5.00 | **412.00** $\pm$ 11.53 |
**Q3. It is also interesting to see the extrapolation ability limit of meta-DT.**
A3. Following your suggestion, we have conducted new experiments on Ant-Dir to evaluate Meta-DT's generalization ability to out-of-distribution (OOD) tasks. The following table shows the few-shot performance of Meta-DT with OOD test tasks (Figure S5 in Global Response). In this case, we sample training tasks with a goal direction of $U[0, 1.5\pi]$, and then test on tasks of $U[1.5\pi, 2\pi]$.
Obviously, **Meta-DT can still obtain better performance than baselines on OOD test tasks**, which again verifies our superiority. As stated in our A1 to Q1, we extrapolate the meta-level knowledge across tasks by the extrapolation ability of the world model, which is more accurate and robust since the world model is intrinsically invariant to behavior policies or collected datasets. For tasks like Ant-Dir, the world model shares some common structure across the task distribution (even for OOD tasks), e.g., the kinematics principle or locomotion skills. Hence, the extrapolation of the world model also works for OOD test tasks in this case. We will include this interesting experiment in the revised paper.
| OOD Testing | CMT | Prompt-DT | Generalized DT | CORRO | CSRO | Meta-DT |
| --- | --- | --- | --- | --- | --- | --- |
| Ant-Dir | 271.74 $\pm$ 102.81 | 281.73 $\pm$ 12.29 | 148.83 $\pm$ 6.06 | 141.79 $\pm$ 94.20 | 208.92 $\pm$ 15.47 | **412.67** $\pm$ 5.15 |
### `Summary Response`
Thank you for your valuable suggestions, which help us gain more critical insights into the extrapolation ability of Meta-DT, and further enhance our experimental evaluation. The corresponding figures of extended experiments can be found in the PDF of Global Response. We are honored to have **your recognition of our method**, and are also grateful that **most reviewers are quite affirmative about our overall contributions**, including the novelty and motivation, the extensiveness of empirical evaluation, and the superior performance.
Please let us know if we have addressed your concerns. We are more than delighted to have further discussions and improve our manuscript.
---
Rebuttal 2:
Comment: Thank you for the detailed response. This has helped to clarify my questions. I recognize the contribution of this work, and I would like to keep my original rating to vote for acceptance.
---
Rebuttal Comment 2.1:
Title: Thank you!
Comment: Thank you for your time in helping us improve our work. We are happy to see that we could address your concerns. We sincerely appreciate your recognition of our contribution and your vote to accept our work! | Rebuttal 1:
Rebuttal: # Revision Summary
We thank the reviewers for their valuable feedback (we refer to iZYy as R1, LRM6 as R2, EjPB as R3, sbh9 as R4, and BWEr as R5). We are grateful that **most reviewers are quite affirmative about our overall contributions**, including **the novelty and motivation** ("a novel Meta-DT" by R1, "a clear motivation... quite novel and inspiring" by R2, "a new architecture... no major flaws with the paper" by R4, and "a new framework" by R5), **the extensiveness of empirical evaluation** ("comprehensive experiments on multiple benchmarks" by R2, "strong experimental evaluation over many different tasks" by R4, and "comprehensive experimental results" by R5), and **the superior performance** ("shows the good performance" by R1, "showing notable performance gains" by R2, "show the performance gain" by R3, "strong results in this evaluation" by R4, and "achieves superior generalization... demonstrating its practical effectiveness" by R5).
We have made **a number of changes** to address all reviewers' suggestions and concerns. A short summary of the modifications is made as
1. We include CMT as a new baseline for experimental evaluation.
2. We conduct new empirical evaluation on Humanoid-Dir and two Meta-World environments of Sweep and Door-Lock.
3. We conduct new hyperparameter analysis experiments to investigate the influence of prompt length $k$ on Meta-DT’s performance.
4. We conduct new experiments to demonstrate Meta-DT's performance with a different number of training tasks.
5. We include new experiments to evaluate Meta-DT's generalization ability to out-of-distribution tasks.
6. We show and analyze the number of model parameters, the training time, and the inference time for Meta-DT and baselines.
7. We further justify our methodology including: i) the significant differences from CMT and corresponding superiorities over CMT, ii) the motivation and benefits of world model disentanglement, and iii) limitations of our method, challenges in extending to harder cases, and promising future directions.
In summary, we have **significantly extended our empirical evaluation** based on the comprehensive experimental results in our original manuscript. Impressively, **the results of massively extended experiments are generally consistent with observations and conclusions from our original manuscript**. The tabular results are given in responses to each reviewer, and corresponding figures can be found in the PDF of Global Response.
Please let us know if we have addressed your concerns. We are more than delighted to have further discussions and improve our manuscript. If our response has addressed your concerns, we would be grateful if you could re-evaluate our work.
Pdf: /pdf/4b8b237b92d89336d795b17dea895afc01b13384.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper proposes a novel Meta-DT method that leverages the task representation from the world model disentanglement. Compared with the previous works, the expert demonstration is not necessary. This method could get the task representation from the trained encoder which is used as the guidance for the autoregressive training. This paper also incorporates an additional prompt as the complementary. The empirical study shows the good performance of the Meta-DT compared with baselines.
Strengths: - This paper proposes a novel Meta-DT that disentangles the task information from the trajectories as the prompt in Meta RL tasks.
- This paper could outperform the baselines in most of the settings, especially on low-quality demonstration sets.
Weaknesses: - This paper needs to train a world model which is used to represent the dynamic information for the task. This method is not as light as the previous work like Prompt-DT.
- This paper shows the sub-performance compared with prompt DT when the demonstration set is expert.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Can you explain more about the training hours and parameters for this method?
- Can you explain the experiments why the method does not outperform the baselines in the expert demonstration?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See weaknesses and questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. Explain more about the training hours and parameters for this method.**
A1. The following tables show the number of model parameters, the training time, and the inference time for one episode. Compared to DT-based baselines, our method introduces a lightweight world model that consists of about 15\% of the parameters of the DT backbone. Thanks to this lightweight design, Meta-DT incurs about a 10\% runtime increase during training and inference. **Considering the significant performance gain of our method, this lightweight cost is likely acceptable.** Moreover, Meta-DT's computation cost is even lower than that of other baselines.
|`Model Size`|CMT|Prompt-DT|Generalized DT|CORRO|CSRO|Meta-DT|
|---|---|---|---|---|---|---|
|Point-Robot|675,989|603,259|600,754|705,961|858,480|702,758|
|Cheetah-Vel|730,891|657,819|629,718|740,273|895,448|755,996|
|Cheetah-Dir|730,891|657,819|629,718|740,273|895,448|755,996|
|Ant-Dir|734,500|661,284|631,240|754,357|910,527|758,965|
|Hopper-Param|726,079|653,199|627,651|756,827|885,014|751,274|
|Walker-Param|729,720|658,872|629,286|735,665|893,808|755,129|
|`Training time (s)`|CMT|Prompt-DT|Generalized DT|CORRO|CSRO|Meta-DT|
|---|---|---|---|---|---|---|
|Point-Robot|874|762|594|1,315|980|824|
|Cheetah-Vel|1,068|758|600|1,609|1,076|1,050|
|Cheetah-Dir|912|790|596|1,599|1,052|877|
|Ant-Dir|5,076|4,605|3,021|9,880|6,179|4,930|
|Hopper-Param|1,305|770|596|2,113|1,073|1,018|
|Walker-Param|1,304|760|595|2,211|1,067|1,027|
|`Inference time (ms)`|CMT|Prompt-DT|Generalized DT|CORRO|CSRO|Meta-DT|
|---|---|---|---|---|---|---|
|Point-Robot|9.52|8.68|46|10|15|9.16|
|Cheetah-Vel|50.44|45.84|468|125|168|48.92|
|Cheetah-Dir|50.40|48.2|467|123|167|48.90|
|Ant-Dir|55.72|50.11|518|162|244|53.24|
|Hopper-Param|51.77|48.64|488|146|210|51.00|
|Walker-Param|54.13|49.60|516|147|231|53.01|
**Q2. Explain the experiments why the method does not outperform the baselines in the expert demonstration.**
A2. In experiments with Expert datasets as shown in Table 4 and Figure 10, Meta-DT outperforms three baselines (Generalized DT, CORRO, and CSRO) to a large extent in all environments.
Compared to Prompt-DT, Meta-DT obtains **significantly better performance** in Point-Robot (-6.90 vs. -7.99), Cheetah-Vel (-52.42 vs. -133.78), and Ant-Dir (961.27 vs. 678.07), and obtains **extremely close performance** in Cheetah-Dir (874.91 vs. 960.32), Hopper-Param (383.51 vs. 393.79), and Walker-Param (437.79 vs. 449.15). **Taken together, it can be empirically verified that Meta-DT still outperforms Prompt-DT when evaluated on Expert datasets.**
With Expert datasets, Prompt-DT can access high-quality expert demonstrations as the task prompt to guide policy generation. Within the expert data, the behavior policy exactly equals the optimal policy and totally aligns with task characteristics, thus it can capture task-relevant information very well. Hence, Prompt-DT can achieve the same level of performance as Meta-DT in several environments. When faced with inferior datasets, the task information could be entangled with the features of behavior policies, producing biased task inference at test time. Hence, the performance of Prompt-DT might drop a lot, while Meta-DT can still obtain satisfactory performance due to accurately capturing task information via world model disentanglement.
**Q3. This method is not as light as the previous work like Prompt-DT.**
A3. We agree with you on this point, as our method introduces a pretrained context-aware world model. In contrast, Prompt-DT uses the same network architecture as DT, with some expert demonstrations prepended to the DT's input.
Our motivation of introducing the additional world model is to disentangle task-relevant information from behavior policies, thus more accurately inferring task beliefs. With this effective disentanglement, our method is verified to be robust to the dataset quality and is more practical with fewer prerequisites in real-world scenarios. This is what Prompt-DT lacks, since it is sensitive to the quality of prompt demonstrations and can achieve satisfactory performance only when expert demonstrations are available at test time.
### `Summary Response`
Thank you for your valuable review comments, which help us gain more insights on comparison to existing works like Prompt-DT, and further enhance our experimental illustration. We are honored to have **your recognition on our method, especially on its novelty** ("a novel Meta-DT method") **and superior performance** ("shows the good performance... outperform the baselines").
Also, we are grateful that **most reviewers** (we refer to LRM6 as R2, EjPB as R3, sbh9 as R4, and BWEr as R5) **are quite affirmative about our overall contributions, including the novelty and motivation** ("a clear motivation... quite novel and inspiring" by R2, "a new architecture... no major flaws with the paper" by R4, and "a new framework" by R5), **the extensiveness of empirical evaluation** ("comprehensive experiments on multiple benchmarks" by R2, "strong experimental evaluation over many different tasks" by R4, and "comprehensive experimental results" by R5), and **the superior performance** ("showing notable performance gains" by R2, "show the performance gain" by R3, "strong results in this evaluation" by R4, and "achieves superior generalization... demonstrating its practical effectiveness" by R5).
Please let us know if we have addressed your concerns. We are more than delighted to have further discussions and improve our manuscript. If our response has addressed your concerns, we would be grateful if you could re-evaluate our work.
---
Rebuttal 2:
Title: Looking forward to further discussions!
Comment: Dear Reviewer,
Thank you for your insightful comments. We were wondering if our response and revision have resolved your concerns. We have addressed your initial questions through our rebuttal and are eager to clarify any further points you might raise. Please feel free to provide additional feedback. We greatly appreciate your continued engagement.
Best regards,
Authors
---
Rebuttal Comment 2.1:
Comment: Thanks for the detailed explanation and additional experiments. I will raise my score.
---
Reply to Comment 2.1.1:
Title: Thank you!
Comment: Thank you very much for your constructive feedback and your time in helping us improve our work. We sincerely appreciate you raising the score on our work! | null | null | null | null | null | null |
An eye for an ear: zero-shot audio description leveraging an image captioner with audio-visual token distribution matching | Accept (poster) | Summary: This paper proposes to use a well-trained visual LLM (specifically Llava-v1.5) to perform audio captioning in a zero-shot fashion. The authors propose a training framework which comprises 2 stages and 5 sub-steps. The authors also propose to use Maximum Mean Discrepancy (MMD) and optimal transport (OT) as the loss function to align audio and visual representation space.
Strengths: 1. The two loss functions are novel directions to explore for alignments between different modalities
2. This paper explores audio-visual LLM application scenarios which is timely.
Weaknesses: 1. The motivation is not clear. The author must specify in which scenario zero-shot audio captioning is needed, with concrete examples that the proposed method will help in practice, such as generating a particular captioning style or targeting a specific type of audio. As far as I can see in the paper, the authors only did experiments on two standard tasks (AudioCaps and Clotho), both datasets have a reasonable amount of labelled data, and the performance is far from the state-of-the-art.
2. The method lacks theoretical grounding or insights. The author says they randomly choose one frame from the 10 frames in a video as the image and train to pull the image representation space to be close to the audio space. The author did not analyse how related the image is to the audio. How often does the image reflect what can be heard, and how often does it not?
3. The experimental setup is questionable.
- The author says clearly in line 167 that they performed p-tuning with audio captioning data "We trained the additional tokens using image-text pairs, with the text describing the sounds associated with the images". This clearly sets this method apart from "zero-shot learning" which refers to having no data at all. If the author means "zero-shot" for a specific audio caption dataset, they need to be clear about it.
- Regarding experimental results: I wonder if the comparison between the proposed method and ImageBind is fair. They also did not say how much data they used for ImageBind p-tuning. If the authors just took the numbers from Shaharabany et al., then putting the numbers in the same table is clearly unfair. I might have missed something here but without a convincing explanation, I do not think the experiments are valid. Why is ImageBind not included in Table 2?
- I would be interested to see whether adding audio to the audio-visual description would give better performance or not. It seems that the audio information is all included in the visual information. Getting back to Weakness 1 - what is the point of doing this task then? What extra did you gain?
A few writing problems:
1. The referencing style makes it difficult to locate various papers.
2. Fig. 2 captions are incomplete.
Technical Quality: 2
Clarity: 1
Questions for Authors: See Weaknesses.
Confidence: 4
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: There is a limitations section but no discussion on potential negative social impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. We’re glad to hear that you found the exploration of our **loss functions as novel directions for alignments between different modalities** and the **timely application scenarios of audio-visual LLM** valuable. Below, please find a point-by-point response to your feedback.
>The motivation is unclear. The author should specify practical scenarios where zero-shot audio captioning is needed.
We posit that zero-shot audio captioning is particularly valuable in scenarios where annotated audio data is scarce or unavailable. Current audio captioners like Audio Flamingo and Pengi, trained on nearly all publicly available audio-text pairs, seem to have reached the limits of the supervised approach. In contrast, the abundance of videos makes unsupervised audiovisual training a viable way to scale further.
One example is domain-specific topics (like wildlife sounds) for which no supervised dataset exists but non-annotated videos are available; this scenario is realistic given that there exist only two "real" academic audio captioning datasets.
>The author randomly selects one frame from a video without analyzing the relevance of the image to the audio.
We indeed chose one random frame from the video to match its distribution with the audio and analyzed the image-audio relatedness. The Datasets paragraph of Section 4, as well as the full Section C of the Appendix, details this process: (lines 579-581) "As AudioSet (...) features audio events that may be unrelated to the visual event depicted (...) we filtered out audiovisual discrepancies". Pairs were chosen "by computing the similarity between the noisy audio labels and an image caption generated by BLIP-2" (lines 269-270). This filtering considers the entire video, averaging similarity scores across frames sampled at one-second intervals.
To address how often the image reflects the audio, we retained the 500k cleanest pairs, which best retain related pairs (line 269: “we train our models on a subset of 500k videos from AudioSet”). Additionally, since the image and audio can contain partial, non-common information, we used attentive optimal transport to learn only the common parts of the distributions (Section 3.2, paragraph 3).
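The attentive optimal transport mentioned above (Section 3.2 of the paper) matches two weighted empirical token distributions. As a loose illustration of the idea, and not the paper's actual implementation, entropic OT between two token sets can be sketched with a few Sinkhorn iterations, where the marginal weights `a` and `b` stand in for attention-derived weights that down-weight tokens lacking a counterpart in the other modality (all names and sizes here are hypothetical):

```python
import numpy as np

def sinkhorn(C, a, b, eps=1.0, iters=300):
    """Entropic OT plan between weighted empirical distributions.

    C: (n, m) cost matrix between audio and image tokens.
    a, b: marginal weights (e.g., attention scores normalized to sum to 1);
          low weight on a token reduces how much mass OT must transport to it.
    """
    K = np.exp(-C / eps)          # Gibbs kernel
    v = np.ones_like(b)
    for _ in range(iters):        # alternating marginal projections
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]  # transport plan with marginals ~a, ~b
```

The key point is that non-uniform `a` and `b` let the alignment concentrate on the "common parts" of the two distributions, rather than forcing every token to be matched equally.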
>The author states that they used image-text pairs for p-tuning, which differs from "zero-shot learning"
We acknowledge your concern about potentially breaking the “zero shot” set-up with our handling of data. To clarify this concern, as we explained in line 53 of the submission, and consistent with Salewski et al., we define zero-shot audio captioning as **the model's ability to generate audio captions without training with manually assembled audio-caption pairs**, as they also used textual-only descriptions (AudioSet tags from a "keyword bank") as opposed to actual audio files and their corresponding captions.
While we did use captions for prefix tuning, we emphasize that we **never** used audio-caption pairs for the targeted audio captioning task, which would indeed compromise the experimental setup. Instead, we used a few image-caption pairs without accessing the corresponding audio files. This use was solely to retask the LLM to create audio-centric captions.
We maintain that our approach effectively addresses the zero-shot audio captioning task, allowing direct comparison with Salewski et al. and other methods.
>Is the comparison with ImageBind fair? Data used for ImageBind p-tuning isn't specified. Why is ImageBind absent from Table 2?
Firstly, we would like to clarify that our work does not compare directly to ImageBind, as it lacks the capability to perform audio captioning. Instead, it is a model that maps multiple modalities to a shared space. Hence, comparing our method directly to ImageBind is not feasible.
We instead compare with Shaharabany et al., which, as described in lines 108-114, is the closest work to ours in terms of backbone training setup: it has been trained using audio-image and image-text pairs but **not** audio-text pairs.
Second, Shaharabany et al. did not perform p-tuning, relying instead on an “audibility score” (part of the training loss, core of their work). In our case, we used p-tuning for the same purpose (using only image-text pairs). Since neither work used audio-text pairs, comparison is feasible and fair.
We acknowledge that, as ImageBind also contains an image encoder, prefix tuning could be applied to it. Unfortunately, as Shaharabany et al.'s code is not available, we could not test this interesting idea.
Finally, the work of Shaharabany et al. is not included in Table 2 as they did not report their results on Clotho. We will clarify this by extending Table 2's caption to "Shaharabany et al. did not report results on Clotho."
>Does adding audio to the audio-visual description improve performance? What was the purpose of this task and what additional value did it provide?
We would like to recall that our method is designed to handle multiple modalities, namely image, audio, AND audio-visual, for the zero-shot audio captioning task. This is achieved by substituting or concatenating modality-specific tokens.
As demonstrated in Table 1, our method performs better when both image and audio are used together (6th row, labeled DALI_OT^att+Image) compared to using the image alone (1st row). This highlights the usefulness of including audio, offering details about sound sources occluded or out of view in the image, e.g., a ringing phone inside a bag.
Our results show that using both modalities leads to more informative audio captions. We also show that using audiovisual captions as pseudo-captions to supervise an audio-only model in a second stage (audiovisual distillation) yields a more robust audio captioner. One core motivation was to leverage a powerful image captioner to enhance audio captioning, creating a virtuous cycle where both modalities mutually benefit. Thus, including audio provides significant value beyond what visual data alone can offer.
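The token substitution/concatenation mechanism described above can be sketched in a few lines (sequence lengths, embedding dimension, and variable names below are illustrative assumptions, not the model's actual configuration):

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical token sequences produced by frozen modality encoders
image_tokens = rng.normal(size=(576, 1024))  # (num_image_tokens, embed_dim)
audio_tokens = rng.normal(size=(32, 1024))   # (num_audio_tokens, embed_dim)

# audio-only input: substitute audio tokens in place of image tokens
audio_input = audio_tokens

# audio-visual input: concatenate both token sequences along the sequence
# axis before feeding them to the language model
av_input = np.concatenate([image_tokens, audio_tokens], axis=0)
```

Because both encoders are aligned to the same token distribution, the downstream language model can consume either sequence, or their concatenation, without architectural changes.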
---
Rebuttal Comment 1.1:
Title: Response to the Authors
Comment: I thank the authors for providing detailed responses. However, I do not think my concerns are adequately addressed for the following reasons:
1. In response to my point 1, the author did not provide examples where zero-shot audio captioning is useful, e.g. categories held out of training so that they could be evaluated in zero-shot category generalization, as pointed out by reviewer rTvm. I still doubt whether this task under the author's specific setup is useful or not.
2. In response to my question about the experimental setup:
- I disagree with the authors (including the cited paper) that having audio captioning (description particularly used for audio) is a practical setup. After all, how would one generate an audio description when looking at an image? I do not see the point of the experiment when you have images and descriptions for audio for training. Why would we have such a weird dataset? This is fundamentally questionable from a practical point of view. To validate this point, please assume there are also no pairs of image and audio-captioning text and re-run the experiments.
- In response to my point on comparison to "ImageBind", the authors clearly say "Shaharabany et al. did not perform p-tuning", whereas, given my concerns about their p-tuning setup, I think the authors should compare their method to Shaharabany et al. without p-tuning.
Given all the above, I am not convinced by the task setup and experiments performed. Therefore, I would keep my score as it is.
---
Reply to Comment 1.1.1:
Comment: Thank you for taking the time to review our rebuttal. Let us address the last concerns which are probably due to a misunderstanding.
On the practical scenarios for zero-shot audio captioning, two points need further clarification here:
- “zero-shot audio captioning” refers to the scenario where the system is trained without having access to audio-text pairs. It is useful in diverse use-cases, notably, audio description for the hearing impaired... where footage is largely available, but audio annotations are sparse.
- While it is standard to evaluate “zero-shot category generalization” as part of classification systems, it is unusual to do so in the captioning application. In general, one would have to first extract “categories” (classes) from the ground-truth captions, which is a problem per se. We agree that the idea is interesting but it is clearly beyond the scope of this work, where the definition of “zero-shot” is different: i.e., no audio-text pairs during training.
On the validity of the experimental setup:
> I disagree with the authors [...] After all, how would one generate an audio description when looking at an image?
We must disagree with the reviewer. It is clearly valuable to use audio description from audiovisual data, since such description correlates to a great extent to both the image and audio modalities. For example, in audiovisual data, a textual description like "a train passing by" not only describes the visual element of a train but also implicitly refers to the expected sounds of a train—such as the clickety-clack of wheels on tracks and the whistle blowing. This approach allows the model to shift focus from descriptions with heavy visual cues to auditory cues, akin to style transfer. To mention a few other examples: think of an image of a person holding a phone with the mouth open (“person speaking on the phone”), or one of an inclined bell (“bell tolling”), or a person operating a jackhammer, etc. We'd like to invite the reviewer to do an image search with these descriptions to see the result...
> I do not see the point [...]
First, please note that this is merely a pre-processing (which only uses 16 image-audio description pairs) and not the central component of the system.
In subsection "Prefix-tuning with image-caption pairs," we explained that LLVMs tend to generate descriptions heavily focused on visual features rather than auditory ones even when asked, "What sounds can the objects in this image generate?" This visual bias negatively impacts audio captioning metrics and calls for a solution to shift descriptions toward auditory content. Importantly, no large image-caption (audio-centric caption) dataset is required, as the audio-centric descriptions **can be easily composed by hand**, given the small number needed. As demonstrated in Appendix A, using more than 16 pairs offers no additional benefit. Without the p-tuning process that reorients the model's descriptive style, the generated captions would continue to exhibit a strong visual bias, undermining the benefits of the audiovisual distribution alignment. This is confirmed by our preliminary experiments.
> In response to my point on comparison to "ImageBind" [...] I think the authors should compare their method to Shaharabany et al. without p-tuning.
We disagree. While Shaharabany et al. rely on an "audibility score" as part of their training loss to steer their models toward generating audio-centric descriptions, our approach employs p-tuning for the same purpose. This difference in methodology is important, as p-tuning effectively reorients the model's descriptive style from visual to auditory, addressing the inherent visual bias of LLVMs. Without p-tuning, our model would continue to produce visually biased descriptions. Therefore, comparing our method to Shaharabany et al. without p-tuning would not only undermine the intent of our approach but also fail to provide a fair comparison, given that both methods are designed to achieve the same goal through different means.
---
Rebuttal 2:
Title: Answer to further questions
Comment: Thanks for the last answer and those new remarks. However, we do not agree with your points:
1. Concerning the difference between image only and audio only, it is important to note that our method targets AUDIO captioning, which consists of generating a caption while having access to an AUDIO recording and not an image. Therefore, there is no point in trying to compare audio and image performance.
2. The prefix tuning does not introduce any bias or imbalance between methods, as the prefixes are ALSO used with the image-only method, because, as previously mentioned, the off-the-shelf visual LLM is not able to produce audio-centric captions.
We would encourage the reviewer to perform a more careful reading of the paper, as they seem to have missed several important details that are clearly stated.
The model is trained on AudioSet and evaluated on AudioCaps and Clotho using standard captioning metrics. While several metrics are reported, the primary one used by the community is SPIDEr, and by this metric on AudioCaps (in-domain to training) the proposed approach when using audio-visual input (0.1946) outperforms a comparison supervised system (0.1830), an image-only system (0.1499), and other variants of the proposed system such as the audio-only version (0.1592). On the out of domain and audio-only Clotho dataset the supervised baseline performs best (0.097) followed by the MMD variant of the proposed system (0.0655). Note that state of the art performance on Clotho was around 0.3 for systems like QwenAudio and AudioFlamingo and around 0.5 on AudioCaps.
Strengths: Significance:
* Cross-modal learning is an important problem, although it is not clear that zero-shot learning is necessary as there is a great deal of audio-visual content available, but if it is possible, then it seems worth exploiting.
* The ability to leverage visual captioning systems to bootstrap audio captioning systems is valuable, as the former have received much more attention than the latter.
* The issue of audio-visual correspondence is key in audio-visual processing and the proposed approach is an interesting way to deal with it.
Clarity:
* The paper is generally well written and easy to follow
* The diagrams are quite helpful in understanding the approach
* The paper includes a real discussion of its limitations
* The caption examples included in the appendices are quite informative. It would be nice if there were examples from all of the systems to compare
Technical correctness:
* The experiments are well organized and well conducted
Weaknesses: Clarity:
* The use of many metrics dilutes the clarity of the results and their interpretation. The metrics should be prioritized and the systems they are comparing discussed accordingly. The discussion of results seems to treat all metrics as somewhat interchangeable and makes broad statements about which systems seem better than others. It would be much clearer and easier to follow if differences in performance on specific metrics were tied to qualitative differences in model behaviors and when there is one main metric (i.e., SPIDEr) to focus on that for overall system comparison.
* Line 96 states that "Audio data often lacks the richness and context provided through the visual modality, resulting in less precise and more subjective descriptions of the audio content." While I agree that audio captioning datasets typically contain more subjective descriptions of their content, I think this is an issue with the data that is selected to be captioned and the question asked of the captioner as opposed to a shortcoming of audio in general. With the right data and right task, audio captioning could be much better defined. I believe this sentence should be modified to focus on existing datasets as opposed to the general modality.
* No mention is made of the size of the various models or model pieces involved or how much data they were trained on. This is especially acute for comparing the proposed system to existing baselines from Shaharabany et al and Salewski et al.
* I can't understand what the ablations are showing, these should be described in the body of the paper, not just shown in tables.
Performance:
* Overall, the reported SPIDEr results (0.1-0.2) are fairly low compared to SOTA approaches for Clotho (0.3) and AudioCaps (0.5). It's clear that this zero-shot approach does not completely solve the problem. It would be useful to discuss this in the paper.
* Additionally, the differences in performance between the proposed variants are not that large, bringing into question the value of including multiple-such approaches.
Minor issues:
* Figure 2: sub-figures (d) and (e) are not mentioned in the caption
* Equation (4): please use \langle and \rangle as opposed to < and > for brackets
* Tables 1 and 2: Order the rows within each block intentionally. I would recommend ordering by SPIDEr.
* Figure 4: It's hard to distinguish between this many colors with this level of transparency. Maybe you could use different symbols instead of different colors.
Technical Quality: 3
Clarity: 3
Questions for Authors: What exactly was done in the ablation studies listed in Tables 1 and 2? What do the rows mean? What was taken away?
Why wasn't the attention weighting used with MMD in addition to OT?
Does the audio-visual alignment treat each instance as a "bag of tokens"? If so, would there be value in considering the sequence of tokens in this alignment as opposed to just their distribution?
Were any categories held out of training so that they could be evaluated in zero-shot category generalization? Or was the focus only on zero-shot modality generalization?
Was the filtering of audiovisual discrepancies described in appendix C applied to both training and test data or just to training data?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper has a section focusing on limitations and does an excellent job discussing them, providing true limitations of the approach.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful review. We’re happy to hear that you found our paper **well written and easy to follow**, appreciated the **helpful diagrams**, and recognized the **value in our approach to addressing audio-visual correspondence and leveraging visual captioning systems**. Below we address your concerns.
> Using too many metrics muddies the results.
We understand that using many metrics complicates the reading. Here's a simplified analysis:
CIDEr emphasizes the precision of n-grams (targeting key words), while SPICE focuses more on the semantic content, making it complementary to CIDEr. SPIDEr, an average of both, is therefore used as the primary metric.
On AudioCaps, all distribution alignment methods outperform the contrastive baseline (SPIDEr: 0.0871) after the first stage. While MMD (0.1385) performs better than standard OT (0.1183), attentive OT improves even further (0.1443).
After distillation, all methods improve except for MMD (0.1360), with attentive OT remaining the best (0.1592) and the contrastive baseline improving significantly (0.1524). These results confirm the intuition that MMD learns a visual bias and therefore does not benefit from the distillation.
Note that on other metrics, and especially ROUGE_l, the contrastive approach remains less effective than the other methods (0.2914 vs. 0.3025 and 0.3106 for MMD alignment and attentive optimal transport, respectively).
On Clotho, MMD’s visual bias is beneficial, outperforming other methods after the first stage (0.0640 vs. 0.0377, 0.0299, and 0.0355 for contrastive, OT, and attentive OT, respectively). Post-distillation, MMD does not improve (0.0655), while other methods do, with attentive OT (0.0625) and MMD remaining slightly ahead of the contrastive approach (0.0620). ROUGE_l shows similar trends to AudioCaps.
This discussion aligns with the paper’s conclusions. We'll revise the manuscript accordingly.
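The metric relationship used throughout the analysis above can be made explicit: SPIDEr is simply the arithmetic mean of CIDEr and SPICE. A one-line helper makes this concrete (the example scores below are made up for illustration, not taken from the paper):

```python
def spider(cider: float, spice: float) -> float:
    # SPIDEr is defined as the arithmetic mean of CIDEr and SPICE,
    # balancing n-gram precision (CIDEr) with semantic content (SPICE)
    return 0.5 * (cider + spice)

# e.g., a hypothetical system with CIDEr 0.40 and SPICE 0.12
score = spider(0.40, 0.12)  # -> 0.26
```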
>Line 96 suggests audio data lacks richness. However, this is a data selection and task issue, not an audio modality shortcoming.
We'll revise Line 96 for clarity: "Current audio captioning datasets often fall short in capturing the richness and context compared to what the vision datasets offer. This limitation arises from both the selection of data for captioning and the nature of the questions posed to captioners, leading to descriptions that are less precise and more subjective."
>No mention is made of the size of the various models or
how much data they were trained on.
We agree mentioning this information is important. We will revise as follows:
- Lines 104-107: “ZerAuCaps uses CLAP (an 80.8M-parameter model trained on 3.4M audio-text pairs) (...) to prompt an LLM (OPT 1.3B)”.
- Lines 109-111: “The authors adopted ImageBind (using an audio encoder of 86M parameters trained on AudioSet) (...) They fine-tuned a GPT-2 model (117M parameters) (...)”
> The reported results are low compared to SOTA. What is the value of including multiple such approaches?
We want to clarify that our approach extends image captioners to audio without compromising their original image performance. While better audio performance might be achievable by sacrificing some image performance (typically by unfreezing the LLM), our focus is on learning from videos for scalability due to their widespread availability. To our knowledge, this is the first attempt at this problem. This is a first step towards large-scale training, and a model trained on 500k videos cannot be compared with others trained on millions of audio-text pairs. Future work could scale this method further to compete with supervised approaches.
For now, exploring alternative distribution alignment methods is valuable, but given the minor differences between methods, future work might focus on a single alignment method.
>What was done in the ablation, what was taken away?
Tables 1 and 2 present ablations for different training stages. The rows show model performance after the first stage, which includes only distribution alignment and prefix tuning. What was taken away is the role of the second stage: audiovisual distillation (as written in the Table captions: “Ablation of audiovisual distillation”). We will clarify this further in the text.
>Why wasn't the attention weighting used with MMD in addition to OT?
In attentive OT, cross-attention is applied in the token space to weight each sample of the empirical distributions. MMD, however, computes the expectation of the distribution in the kernel space, so weighting tokens before averaging would significantly alter the distance formulation, which is why we did not initially explore this.
We acknowledge this is an interesting experiment and have since tested it. The results, shown in Table B, reveal that while it performs well across most metrics, it underperforms in CIDEr. The model generates overly detailed captions, which affects CIDEr (which measures n-grams) but not SPICE.
>Does the audio-visual alignment treat each instance as a "bag of tokens"? Would there be value in considering the sequence of tokens?
Audio-visual alignment does treat each instance as a "bag of tokens". However, our preliminary experiments revealed that token order does not impact LLaVA's performance in audio captioning with images (on AudioCaps).
To support this claim, we shuffled image tokens before feeding them to the LLM and found that the results were similar to those without shuffling (see Table A). This indicates that the sequential order of image tokens is not crucial for this task.
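For concreteness, the shuffling check is just a permutation of the visual token sequence; below is a minimal sketch with a stand-in token array (names and sizes are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
image_tokens = rng.normal(size=(16, 8))  # stand-in for the LLM's visual token sequence

# Permute only the sequence axis before feeding the LLM;
# each token's content is left untouched.
shuffled = image_tokens[rng.permutation(len(image_tokens))]
```

If the model behaves as a "bag of tokens", captioning metrics obtained from `shuffled` and `image_tokens` inputs should be close, which is what Table A reports.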
>Were any categories held out of training to be evaluated in zero-shot category generalization?
We only focused on zero-shot modality generalization, but exploring other aspects is worth pursuing in future work.
>Was the filtering applied to both training and test data?
The filtering was applied only to the training set to keep test performance comparable with other methods. We will clarify this in Line 271.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: I would like to thank the authors for their rebuttal. It has resolved several of my questions and issues around clarity in the paper. Several issues remain, however.
> Using too many metrics muddies the results.
This summarization is useful and improves over what was in the original manuscript, although it is still not entirely easy to follow.
> We want to clarify that our approach extends image captioners to audio without compromising their original image performance.
Is this demonstrated quantitatively in the paper? If it could be, then that would go a long way to resolving this concern and would likely increase the significance of the work.
As it currently stands, I have read the reviews and rebuttals and would still like to keep my rating as is.
---
Reply to Comment 1.1.1:
Comment: Thank you for your positive feedback.
The original image captioner's performance remains **necessarily** intact. Indeed, **all** the parameters of the original system, especially the LLM’s, are kept frozen during the whole process. Hence, by merely re-using the original image backbone (along with its MLP), we recover the original model and indeed its original performance. Therefore, there is no need to perform an additional evaluation, as the original model remains **intact**.
We do hope this clarification resolves your last concern. | Summary: The paper proposes a method for adapting a VLM to perform audio captioning. The pipeline is as follows: 1. perform few-shot prefix tuning on images to caption the audio. 2. Use multi-modal alignment methods, namely MMD or OT, to align the token-space distributions of the audio and visual modalities. 3. Distill an audio-only model from the visually-informed audio captioning model. Experiments show that the proposed method outperforms contrastive-based alignment and also the pretrained CLAP model used for audio captioning.
Strengths: 1. The main novelty of the paper lies in its usage of MMD / OT to perform multi-modal alignments in audio-visual space, though MMD / OT is not new and has been extensively explored in cross-domain matching literature.
2. The paper is well-written and easy to follow.
Weaknesses: 1. There is a lack of theoretical derivation or sufficient insight into why OT and MMD can outperform standard contrastive learning. Figure 4 shows the PCA plots for different methods; however, it is unclear why MMD or OT is able to stay closer to the CLIP encoder. I think more explanations or theoretical justifications are needed.
2. The experiment results seem mixed, in particular Table 2 for Clotho. The contrastive method is better in several metrics without audio-visual distillation, in particular CIDEr and SPIDEr. Also, the SPICE performance is very close to the proposed method's. These two metrics, CIDEr and SPIDEr, are from my point of view also very important since they are designed specifically for the captioning task. Why would contrastive outperform the proposed methods on this dataset? Yet on AudioCaps, the proposed methods beat the contrastive method by a lot. The inconsistent results across these two datasets confuse me.
Technical Quality: 3
Clarity: 2
Questions for Authors: See Weaknesses.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. We’re pleased to hear that you found our paper **well-written and easy to follow** and appreciated the **novelty in our use of MMD/OT for multi-modal alignment in the audio-visual space**. In the following, we'll address your comments in detail.
>There is a lack of insights for showing why the OT and MMD can outperform, more explanations are needed.
First, regarding the performance gap between our method and the contrastive baseline, we explain the need for distribution alignment to perform this task instead of contrastive learning in Section 3.2, lines 200-206: “Llava for instance, makes use of the full token distribution as input, naturally creating the need for a full token distribution alignment, to allow for swapping the pretrained encoder with a new one targeting a new modality. Moreover, contrastive learning faces the so-called “modality gap” problem where each modality is encoded in distinct sub-regions of the embedding space (see Figure 4). For all these reasons, replacing image tokens with audio tokens obtained by alignment through a contrastive learning approach may yield undesirable responses from the language model.”
Concerning the modality gap that does not appear in our method, it is important to note that contrastive learning brings associated (positive) samples close together while pushing away unpaired (negative) samples. The cited paper [A] showed that by pushing away the negative samples in a high-dimensional space, combined with multiple false positive pairs, multimodal contrastive learning encodes each modality in a separate subspace, creating a modality gap. A more recent paper [B] goes further and shows that this phenomenon is purely due to the behavior of the contrastive loss itself in high dimensions and is not related to multimodal learning.
Now, optimal transport and Maximum Mean Discrepancy (MMD) do not suffer from this issue, as they do not rely on negative samples but focus on explicit distribution alignment instead. However, their application in audio-visual alignment has been limited due to the common practice of averaging the final representation into a single token, which does not allow for the computation of distribution distances. Employing these losses for their ability to avoid the modality gap is a core motivation behind our work.
We will modify the beginning of the Discussion paragraph in Section 5 to make clearer our motivations and why our methods do not suffer from such a modality gap. Specifically, we’ll complement it by adding the following paragraph:
"Unlike standard contrastive methods, we do not average tokens, which allows us to employ sample-level distribution alignment methods that do not depend on negative samples. Indeed, both Optimal Transport (OT) and Maximum Mean Discrepancy (MMD) focus on aligning the distributions of the embeddings directly. OT minimizes the cost of transporting one distribution to another, effectively aligning the distributions in a way that does not rely on negative samples. This approach avoids the issue of pushing different modalities into separate subspaces. Similarly, MMD measures the distance between the kernel-projected mean embeddings of the distributions, ensuring that the overall distributions are aligned without the need for negative sampling. By leveraging these methods, we can achieve a more cohesive embedding space where different modalities are aligned more closely, thus avoiding the modality gap problem inherent in contrastive learning approaches."
>Why would Contrastive outperform other methods in this Clotho without audiovisual distillation?
First, on Clotho, before the audio-visual distillation, the alignment through MMD, which is also part of our method, yields significantly better results (sometimes twice as good) than the contrastive baseline (0.0659, 0.0621, 0.0640 against 0.0460, 0.0293, 0.0377 for CIDEr, SPICE and SPIDEr, respectively).
As explained in the paper, the good performance of MMD is likely due to the learned image bias; lines 343-344: “the image bias inherent in DALI_MMD becomes beneficial when confronted with out-of-distribution data.”
It is important to note that Clotho is known to be a more challenging dataset than AudioCaps. Moreover, it contains new audio concepts not seen in AudioSet, representing an out-of-domain scenario for our models (trained only on AudioSet), hence the observed drop in performance, compared to AudioCaps.
CIDEr and SPICE are indeed important metrics for captioning, and these metrics are roughly equivalent between contrastive and attentive OT (0.0302 and 0.0355 for attentive OT against 0.0293 and 0.0377 for the contrastive baseline, in CIDEr and SPICE respectively). As both scores are fairly low, trying to interpret such small differences would be unreliable. However, those models exhibit significantly different performances on other metrics.
In particular, ROUGE_l, which measures the longest common subsequence and takes the order of the words into account (interpreted as the “coherence” of the generated sentence), is significantly higher with our method (0.2101 for attentive OT against 0.177 for the contrastive baseline). A tentative interpretation could be that, while both methods fail to generate accurate enough captions, attentive OT still encodes the audio in a space that can be "comprehended" by the LLM, as it stays in the same region as the image tokens, ending up with a more coherent caption, while the contrastive approach encodes something that cannot be interpreted by the LLM, leading to hallucinations or incoherent captions.
[A] Weixin Liang and Yuhui Zhang and Yongchan Kwon and Serena Yeung and James Zou (2022) Mind the Gap: Understanding the Modality Gap in Multi-modal Contrastive Representation Learning
[B] Abrar Fahim and Alex Murphy and Alona Fyshe (2024) It's Not a Modality Gap: Characterizing and Addressing the Contrastive Gap
---
Rebuttal Comment 1.1:
Comment: I thank the authors for providing a detailed response; it basically solves my concerns about the paper. I would like to raise the score to weak acceptance considering the novelty of the paper and the additional insights brought by the rebuttal response.
---
Reply to Comment 1.1.1:
Comment: Thank you for your positive feedback | Summary: The paper under review looks at the problem of captioning short audio-video clips, leveraging existing multimodal large language models. While many strong image captioners exist due to the abundance of image/caption pairs for training data, for clips of non-speech audio, the amount of data available is comparably much less. To build a performant captioner that can produce accurate captions for clips containing either or both audio and visual modalities, the paper proposes a framework that leverages LVLMs, adapts them to produce more audio-centric descriptions via prefix tuning, and most critically "aligns" the distributions of the audio and visual tokens to match, making the tokenized output from the visual or audio encoder interchangeable. The main novelty is the use of Optimal Transport to align the visual and audio modalities and modifying the loss with cross-attention terms to improve the alignment. While this DALI_OT^Att is not a large improvement over DALI_OT or DALI_MMD, the experimentation with these modality alignment methods and the demonstrated improvement over a contrastive loss is a novel contribution. The analysis and plot of the multimodal embedding space of the various approaches to align the A-V modalities further demonstrate the effectiveness. Overall the paper does demonstrate how LVLMs can be utilized to build a strong A-V captioning system.
Strengths: Goal of the paper, contributions, execution are all clear. There are a good set of experiments, visualizations of the expected improvement in alignment of the visual and audio token distributions, and overall the application of MMD or OT learning improves the alignment as demonstrated by the experimental results over the dominant baseline approach of contrastive learning.
Weaknesses: The paper has framed the problem of A/V captioning in a somewhat narrow way of leveraging LVLMs, with the only comparison to a strong supervised baseline being CLAP, while there are others such as GPT-4o, Gemini 2 or Llama 3.
Section 3.1 is rather confusing. It seems Figure 2 was revamped and the references to the figure in Section 3.1 were not completely updated. Figure 2 should also include a description of steps 2d and 2e in the descriptive caption (the description of 1c needs updating too, I believe). The reference on line 177 for 2-c should probably be 2-b; on line 190, 2-c should be 2-d; and on line 196, 2-d should be changed to 2-e. Overall, it would be clearer to first indicate the stage and then explain it, e.g. "1-a) prefix tuning the prefix tokens" rather than the other way around. Also, each of the bolded titles in Section 3.1 should correspond with a stage in Figure 2 and include the stage number for better clarity.
In Section 3.2, it would be helpful to the reader to provide concrete equations on how to compute the quantities in eqns 1-5. Having these in an appendix would be fine. While it's understood that the code will eventually be released, having the equations that the code is intended to implement is always useful for clarity.
Technical Quality: 3
Clarity: 2
Questions for Authors: Has there been any analysis done in comparison to other large, strong multimodal LLMs such as GPT-4o, Gemini 2 or Llama 3?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thorough review. We’re glad to hear that you found our paper’s **goal, contributions, and execution clear** and appreciated the **good set of experiments and visualizations,** along with the improvement in alignment demonstrated by our **application of MMD or OT learning**. We are now addressing your comments in detail.
> Section 3.1 is rather confusing. It seems Figure 2 was revamped and the references to the figure in Section 3.1 were not completely updated. Figure 2 should also include a description of steps 2d and 2e in the descriptive caption (the description of 1c needs updating too, I believe). The reference on line 177 for 2-c should probably be 2-b; on line 190, 2-c should be 2-d; and on line 196, 2-d should be changed to 2-e. Overall, it would be clearer to first indicate the stage and then explain it, e.g. "1-a) prefix tuning the prefix tokens" rather than the other way around. Also, each of the bolded titles in Section 3.1 should correspond with a stage in Figure 2 and include the stage number for better clarity.
We thank the reviewer for pointing out these corrections. We will ensure to make the following changes:
- Figure 2 caption: Full training pipeline: a prefix tuning is performed using a few image-captions pairs (1-a). In the meantime, the audio backbone is aligned with the image backbone (1-b) through distribution alignment.
Audio captioning can then be performed by switching the image backbone with the audio backbone while adding the prefix tokens (1-c).
Visually-informed audio captions are then generated using both audio, image, and prefix tokens. The MLP that maps the audio encoder to the language model is fine-tuned using these pseudo captions (2-d).
The final inference is performed by forwarding the output of the aligned audio backbone to the trained MLP to obtain the LLM input (2-e).
- References to Figure 2: line 177 “We align the distributions (Figure 2:1-b)”; line 190 “This procedure is illustrated in Figure 2: 2-d”; line 196 “The resulting audio captioner is shown in Figure 2: 2-e”
>In Section 3.2, it would be helpful to the reader to provide concrete equations on how to compute the quantities in eqns 1-5. Having these in an appendix would be fine. While it's understood that the code will eventually be released, having the equations that the code is intended to implement is always useful for clarity.
We agree with those remarks. Since the details of the $\alpha$ and $\beta$ computation are already given in Appendix F, and since the OT does not have a closed-form solution, the title of Appendix F will be changed to “Implementation details”, and the following clarification will be added to ease the understanding of the practical computation of MMD:
The MMD distance between the audio token distribution ($X$) and the image token distribution ($Y$) is computed as follows:
$\text{MMD}(X, Y) = \frac{1}{m^2} \sum_{i=1}^{m} \sum_{j=1}^{m} k(x_i, x_j) + \frac{1}{n^2} \sum_{i=1}^{n} \sum_{j=1}^{n} k(y_i, y_j) - \frac{2}{mn} \sum_{i=1}^{m} \sum_{j=1}^{n} k(x_i, y_j)$
where $k$ is the Gaussian kernel.
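For completeness, a minimal numpy sketch of this estimator (the bandwidth `sigma` below is an arbitrary stand-in, not the value used in our experiments):

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel between two sets of token embeddings.
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * sigma**2))

def mmd(X, Y, sigma=1.0):
    # Biased MMD^2 estimator between audio tokens X (m, d) and image tokens
    # Y (n, d), term by term the formula above.
    m, n = len(X), len(Y)
    return (gaussian_kernel(X, X, sigma).sum() / m**2
            + gaussian_kernel(Y, Y, sigma).sum() / n**2
            - 2 * gaussian_kernel(X, Y, sigma).sum() / (m * n))
```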
>**Question**: Has there been any analysis done in comparison to other large, strong multimodal LLMs such as GPT-4o, Gemini 2 or Llama 3?
It is important to keep in mind that the purpose of our work is to extend vision captioners so they can **also** perform general audio captioning (i.e., describe an audio event—possibly happening outside of the field of view—, such as ‘a dog barking in the street' or ‘a siren heard at a distance').
Large Multimodal models like the suggested ones are trained to perform various tasks, including video captioning. However, they largely rely on the visual modality for that task, and the use of the audio modality is restricted to the speech present in the video.
Therefore, no fair comparison can be made with them as they are not trained to perform the same task as us (general audio captioning).
---
Rebuttal Comment 1.1:
Title: Rebuttal answer
Comment: Dear Reviewer,
I hope this message finds you well. We wanted to kindly follow up to see if our responses have addressed your concerns regarding our work. As the deadline is approaching in an hour, we would greatly appreciate any feedback you can provide before the end of the discussion period. | Rebuttal 1:
Rebuttal: We would like to start by thanking all the reviewers for the time they spent reading carefully our work and their valuable feedback that helped us improve the quality of the submission.
We would like to underline an important contribution of our work: we add audio capability to an LVLM while **keeping its vision-only performances intact**. This is important to keep in mind as this allows for the use of the same model for audio-only, vision-only or audio-visual tasks. This is highlighted in the 4th bullet point of our contributions, line 80-81: “Our method supports both audio and audiovisual inputs, extending the image captioner’s capabilities **without compromising its performance in image analysis tasks**.”
Below, we provide a detailed response to each reviewer's comments.
Pdf: /pdf/e9b067c2392066e018729da4ef5a0e6bb28417e5.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
MambaSCI: Efficient Mamba-UNet for Quad-Bayer Patterned Video Snapshot Compressive Imaging | Accept (poster) | Summary: The paper proposes a method called MambaSCI for efficient reconstruction of quad-Bayer patterned color video snapshot compressive imaging. This method surpasses state-of-the-art algorithms with lower computational and memory costs, providing improved color accuracy and demosaicing.
Strengths: 1) The contributions of this paper are quite novel. Mamba is used for video SCI reconstruction for the first time. Moreover, the paper also introduces Quad-Bayer CFA patterned color video SCI reconstruction, which is also being done for the first time.
2) The quality of the results produced is quite high. The average PSNR and SSIM scores produced with the method proposed in this paper both comfortably beat the state-of-the-art results.
3) The paper is well written and the presentation is very clear. The citations, and comparisons with SOTA are sufficient to build confidence in the proposed method.
4) The contribution of each individual module has been presented, which clarifies the necessity of each.
Weaknesses: 1) The reconstruction seems to be performed on fairly low-resolution images (512 x 512). I would have liked to see the simulations being done at higher resolution.
2) The proposed method is tested only on simulated datasets (since there is no real SCI dataset based on the quad-Bayer pattern). To me, it is unclear how the simulated datasets have been created. Was noise added while creating the simulated datasets? My biggest concern is that the way the noise has been modeled while creating the simulated dataset might not match the actual data (if there was a way to do it). Therefore, currently the efficacy of the provided solution is somewhat hypothetical.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1) The paper describes the computational benefits of the proposed approach over SOTA in terms of fewer parameters and FLOPS, however no concrete numbers have been provided around the performance. Is it possible to add some information around the time it would take to reconstruct each frame?
2) There is no mention on how the masks were generated. Or the conditioning of the masking matrix. This very important information should be included.
3) What is the relationship between T (number of frames used during reconstruction) and the amount of motion between consecutive frames? For instance I would expect you could use more frames(T) if the object was moving slowly within the scene, compared to when it is moving fast. This might help in further improving the computational cost.
4) There is no tiling done during reconstruction. I wonder why this is so? Especially if you want to deal with higher resolution images?
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Limitations have been adequately addressed in the appendix, so nothing to comment upon here.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Response to Reviewer yWFS
Thanks for your valuable comments.
**Q1: Clarification of high resolution images reconstruction.**
**A:** As stated in the main manuscript (lines 233-234), we also evaluated our method on a large-scale simulated dataset. The metrics comparison and visual comparison between our method and the comparison methods for reconstructing 1920x1080 resolution video are presented in Table 4 and Figure 6 in the main manuscript.
**Q2: Noise modeling analysis of simulated datasets**.
**A2**: We have explained how the simulated dataset was created and provided an analysis of our method's effectiveness on a real dataset:
- The middle-scale simulated dataset is created by extracting and cropping frames from high-quality open-source videos available online to obtain the groundtruth (GT), while the large-scale dataset uses videos selected from YouTube as the GT. **Both datasets include noise of real-world scenes.**
- Given that we are tackling a completely new task, our focus is primarily on addressing the degradation caused by SCI compression and developing effective demosaicing techniques to achieve color fidelity and eliminate artifacts for high-quality reconstruction. The aspect of noise modeling has not yet been fully considered.
- Through our studies on various Bayer-array-based methods, their performance on simulated and real datasets is positively correlated, thus we believe our method can still achieve significant advantages on real data.
**Q3: Time required to reconstruct each frame.**
**A:** In the table below, we present a comparison of the time required to reconstruct each frame, along with the PSNR and SSIM metrics for the proposed method and other competing methods. Even though our method takes longer than EfficientSCI to reconstruct each frame, it delivers significantly superior performance.
|Method|Time (s)|PSNR (dB)|SSIM|
|-|-|-|-|
|PnP-FFDnet|0.28|27.86|0.855|
|STFormer|0.20|32.48|0.842|
|EfficientSCI|**0.07**|32.35|0.854|
|MambaSCI|0.16|**35.70**|**0.959**|
**Q4: Clarification of how the masks are generated.**
**A:** In the SCI domain, the **mask typically has a specific physical significance**. The basic principle involves modulating different frames within the video cube with varying weights and then integrating the light into the sensor to create a 2D measurement [1-3]. Specifically, the masks are 0-1 matrices that differ from each other across the T frames. This mask is also customizable, subject to its physical significance, and has no effect on the reconstruction results. During training, we simulate the masks of different SCI systems by randomly generating 0-1 matrices to improve the robustness of the system.
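For illustration, a minimal numpy sketch of this forward model (sizes are arbitrary; a grayscale video cube is used for simplicity):

```python
import numpy as np

rng = np.random.default_rng(0)
T, H, W = 8, 64, 64                         # compression ratio and frame size
video = rng.random((T, H, W))               # T-frame video cube
masks = rng.integers(0, 2, size=(T, H, W))  # random 0-1 modulation masks, one per frame

# Each frame is modulated by its mask; the light is then integrated
# on the sensor into a single 2D snapshot measurement.
measurement = (masks * video).sum(axis=0)
```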
**Q5: Clarification of the relationship between T and the amount of motion between consecutive frames.**
**A:** In SCI tasks, $T$ typically represents the compression ratio ($Cr$) and is not directly related to the motion between consecutive frames. A higher $T$ indicates a higher $Cr$ and increased reconstruction difficulty. As highlighted in **Table 1 in the global rebuttal**, both reconstruction difficulty and the required FLOPS escalate with increasing $T$. Nonetheless, our method delivers high-quality reconstruction with superior computational efficiency.
We greatly appreciate the suggestion to use smaller compression ratios for fast motion scenes and larger ones for slow motion scenes to improve efficiency. This insight has inspired us to consider dynamically adjusting the compression ratio based on object movement speed. Thank you for this suggestion, which opens a new avenue for exploration.
**Q6: Clarification of why there is no tiling done during reconstruction.**
**A:** In the reconstruction process, we first address the SCI compression degradation in the raw domain and then perform demosaicing to convert from the raw domain to RGB. In the raw domain, two common tiling methods involve dividing the image into non-overlapping patches and splitting it into four sub-parts: R, G1, G2, and B. We explain why we did not employ either of these methods:
- **Dividing into Non-overlapping Patches.** Dividing an image into non-overlapping patches leads to localized detail loss. To address this, we employ convolution for shallow feature extraction, which enables us to train effectively on low-resolution images (using a resolution of 128x128 for training and 256x256 for fine-tuning) and process high-resolution images.
- **Dividing into Four Sub-parts.** This method is efficient for Bayer arrays due to their physical arrangement. However, the quad-Bayer's unique layout poses challenges. Splitting the image into R, G1, G2, and B components requires a more intricate process because each color forms a 2x2 square, which must be considered as a whole rather than isolated pixel by pixel. We attempted this approach, but found it significantly increased both training and inference times.
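For concreteness, a sketch of this 2x2-block splitting under one common quad-Bayer arrangement (the actual sensor layout may differ):

```python
import numpy as np

def split_quad_bayer(raw):
    # raw: (H, W) with H, W divisible by 4. Assumed layout of each 4x4 tile:
    #   R R G G
    #   R R G G
    #   G G B B
    #   G G B B
    # Each color is kept as a whole 2x2 block rather than isolated pixels.
    H, W = raw.shape
    tiles = raw.reshape(H // 4, 4, W // 4, 4).transpose(0, 2, 1, 3)  # (H/4, W/4, 4, 4)
    r  = tiles[:, :, 0:2, 0:2]
    g1 = tiles[:, :, 0:2, 2:4]
    g2 = tiles[:, :, 2:4, 0:2]
    b  = tiles[:, :, 2:4, 2:4]
    return r, g1, g2, b
```

The extra bookkeeping relative to a plain Bayer split, where each color is a single-pixel stride, is the overhead referred to above.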
Thank you for your question, and we will continue exploring efficient tiling methods to achieve lightweight reconstruction for higher-resolution videos.
[1] Yuan X, Liu Y, Suo J, et al. "Plug-and-play algorithms for large-scale snapshot compressive imaging." *CVPR* 2020.
[2] Yuan X, Liu Y, Suo J, et al. "Plug-and-play algorithms for video snapshot compressive imaging." *IEEE TPAMI* 2021.
[3] Yuan X, Brady D J, Katsaggelos A K. "Snapshot compressive imaging: Theory, algorithms, and applications." *IEEE SP* 2021. | Summary: This paper investigates the video snapshot compressive imaging reconstruction task, introducing the Quad-Bayer CFA pattern into color video SCI. They design a Residual-Mamba-Block consisting of ST-Mamba, an Edge-Detail-Reconstruction module, and a Channel-wise attention module to enhance reconstruction quality and edge details. Experimental results on many datasets validate its effectiveness.
Strengths: It is interesting to introduce Mamba into a new task.
They investigate long sequence modeling to further validate the effectiveness of ST-Mamba.
They propose a novel framework for the video SCI reconstruction task, which outperforms existing methods not only in effectiveness but also in efficiency.
Weaknesses: The contributions (ii) and (iii) have limited novelty. It seems that the authors simply adapt ST-Mamba to the video SCI reconstruction task without elaborate designs. The proposed Residual-Mamba-Block is an incremental combination of ST-Mamba from Vivim, DWConv, and Channel-wise Attention. The authors should further clarify the novelty.
More description of the EDR module should be given. Why can the EDR module provide more edge details?
Observed from Table 4, the improvements on large-scale simulated color video are not consistent across datasets. More analysis of this phenomenon should be given. While the authors claim the results on Messi and Hummingbird are constrained by parameters and FLOPs, a specific quantitative comparison should be provided.
Technical Quality: 3
Clarity: 3
Questions for Authors: Why do we need Mamba instead of self-attention in this task?
Why is the quad-Bayer pattern better than a single Bayer-patterned measurement?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors are suggested to compare the efficiency of the proposed method and existing methods on long videos to validate the effectiveness on long sequences.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Response to Reviewer xnVL
Thanks for your valuable comments.
**Q1: Clarification of the novelty.**
**A:** We have elaborated on the novelty in lines 160-167 and 289-296 of the manuscript. **Rather than merely adapting ST-Mamba, we introduced the following innovations in video SCI reconstruction:**
- **High-Performance Lightweight Network Design:** Although ST-Mamba is an effective lightweight global attention mechanism, it still struggles with the high demands of SCI video reconstruction, which exceed those of Vivim's medical image segmentation task. Our framework enables joint global-local reconstruction, addressing these challenges.
- **Combined Global-Local Features:** DWConv efficiently extracts local features, avoiding the parameter and complexity issues of common 3D convolutions. **Meanwhile, it is not a straightforward incremental combination with STMamba.** By integrating linear transformations with DWConv features, the network integrates local and global information, enhancing the global features extracted by STMamba. This fusion is crucial for accurately capturing complex edge structures.
- **Enhanced Channel Interaction:** Many Mamba-based frameworks overlook channel interaction limitations, leading to channel isolation and suboptimal information fusion, which adversely affects reconstruction quality. To address this, we introduce a lightweight channel attention mechanism to improve multi-channel perception and feature representation, refining reconstruction quality.
**Q2: In-depth description of EDR module.**
**A:** We have added a detailed analysis of the EDR module's design principles in lines 184-192, explaining how it enhances video reconstruction by improving the representation of depth features. To better illustrate the EDR module's operation, we have redrawn the relevant section of Figure 4 in the manuscript. Additionally, further analysis of the edge detail reconstruction attributed to the EDR module has been added to the ablation experiment section, specifically in lines 297-299. We additionally provide a visual comparison of the edge detail effect with and without the EDR module on the final reconstruction result (**Figure 3** in the PDF).
Specifically, key improvements with EDR include:
- **DWConv:** This technique performs spatial convolution independently on each channel and then applies pointwise (1x1) convolution across channels. It effectively captures localized spatial features, enhancing edge detail perception.
- **GELU:** Compared to ReLU, GELU handles negative inputs more naturally, improving the model's ability to represent fine details.
- **Adaptive Weight Initialization:** Truncated normal initialization helps the model capture details more effectively during the early training stages.
- **Multi-scale Feature Fusion:** Combining linear transformations with DWConv features allows the network to extract global information from local features. This simultaneous processing of global and local information is crucial for understanding complex edge structures.
**Q3: Clarification of the reasons for inconsistent improvements on large-scale datasets.**
**A:** The inconsistencies may be attributed to:
1. **Evaluation in the Raw Domain:** As described in lines 494-500 of the Supplementary Material, we use an additional demosaicing model to show the RGB images reconstructed by GAP-TV, PnP-FFDnet, and PnP-FastDvDnet, which **ultimately reconstruct videos in the raw domain**. In contrast, the three E2E methods reconstruct RGB video directly. For a fair comparison that excludes the influence of demosaicing, we assess performance in the raw domain. Thus, the RGB video from the E2E methods is converted back to the raw domain using a quad-Bayer mask. Figure 6 in the manuscript shows that while iterative methods perform better on metrics, they exhibit more visual artifacts.
2. **Training Resolution:** The model is trained at 128x128 resolution and fine-tuned at 256x256, which leaves a resolution gap relative to the large-scale datasets.
3. **High-Speed Frame Reconstruction:** Reconstruction of high-speed moving frames still needs strengthening under our parameter and computational-complexity budget. The parameters and FLOPs of each model are as follows. Even with limited FLOPs, we still achieve better performance.
|Method|Params|FLOPs|PSNR|SSIM|
|-|-|-|-|-|
|STFormer|**1.23M**|6084.61G|25.11|0.724|
|EfficientSCI|2.21M|11344.367G|24.15|0.710|
|MambaSCI|2.47M|**1957.96G**|**30.80**|**0.896**
**Q4: Clarification of why Mamba and not self-attention.**
**A:** As stated in our response to **Reviewer M9Ao's Q1**, Mamba is preferable from both the technical perspective, including the performance metrics and the Params/FLOPs comparisons shown in Table 2 of the main manuscript, and the pattern-processing perspective: previous Transformer- and CNN-based methods have struggled with artifacts and color distortions when dealing with quad-Bayer arrays. Mamba has proven to be a more suitable choice for addressing the new challenges we face.
**Q5: Clarification of why the quad-Bayer pattern is better than a single Bayer-patterned measurement.**
**A:** As noted in our response to **Reviewer M9Ao's Q2**, exploring quad-Bayer-based SCI tasks is both necessary and advantageous. Due to the quad-Bayer's physical structure, where each color is represented by a 2x2 grid, it offers higher resolution, better light intake in low-light conditions, and supports HDR video through different exposure settings.
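To make the 2x2-per-color structure concrete, here is a small illustrative sketch (our own NumPy example, not code from the paper) that builds the repeating quad-Bayer tile next to a standard Bayer RGGB tile:

```python
import numpy as np

def quad_bayer_pattern(h, w):
    """Quad-Bayer CFA map: each color occupies a 2x2 sub-grid, so the
    repeating tile is 4x4 (R, G1 / G2, B blocks of 2x2 pixels each)."""
    tile = np.array([['R', 'R', 'G', 'G'],
                     ['R', 'R', 'G', 'G'],
                     ['G', 'G', 'B', 'B'],
                     ['G', 'G', 'B', 'B']])
    reps = ((h + 3) // 4, (w + 3) // 4)
    return np.tile(tile, reps)[:h, :w]

def bayer_pattern(h, w):
    """Standard Bayer RGGB tile (2x2) for comparison."""
    tile = np.array([['R', 'G'],
                     ['G', 'B']])
    reps = ((h + 1) // 2, (w + 1) // 2)
    return np.tile(tile, reps)[:h, :w]

qb = quad_bayer_pattern(8, 8)
# In quad-Bayer, a whole 2x2 block shares one color filter, which is what
# enables pixel binning (low-light gain) and per-sub-pixel exposure (HDR).
assert (qb[:2, :2] == 'R').all()
```

The 4x4 tile is the key structural difference from the 2x2 Bayer tile that existing SCI methods assume.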
**Q6: Comparison on long videos.**
**A:** We have conducted new experiments to further validate our method's effectiveness with long sequence videos and also provided corresponding visual comparison images (**Figure 4** on PDF) and a performance comparison table (**Table 1** in global rebuttal). The default $T$ is set to 8, meaning reconstruction is performed after compressing 8 frames of video into 2D observations. To test our method's performance with longer sequences, we also evaluated it with $T = 16$ and $T = 32$.
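For readers unfamiliar with the compression setting, the standard video SCI forward model sums $T$ mask-modulated frames into a single 2D measurement. A minimal sketch under assumed shapes and random masks (an illustration of the general model, not the authors' pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
T, H, W = 8, 4, 4                      # T frames compressed into one snapshot

X = rng.random((T, H, W))              # video frames (values in [0, 1])
M = rng.integers(0, 2, (T, H, W))      # binary modulation masks, one per frame

# Single 2D measurement: element-wise mask each frame, then sum over time.
Y = (M * X).sum(axis=0)
assert Y.shape == (H, W)

# A reconstruction network receives Y (plus the masks) and must recover all
# T frames; larger T means heavier compression and a harder inverse problem.
```

Increasing $T$ to 16 or 32, as in the long-sequence experiments, keeps $Y$ the same size while doubling or quadrupling the information to recover.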
---
Rebuttal Comment 1.1:
Comment: 1. About comparison on long videos, when increasing T to 16 and 32, I cannot see the great advantage in run-time and performance over EfficientSCI. Could you please further explain this point?
2. The author claims the global-local design in the proposed framework compared to STMamba. However, as far as I know, there is also Depth-wise Convolution in ST-Mamba. Thus, this cannot be one of the innovations.
---
Rebuttal 2:
Title: Further Explanation
Comment: Thanks for your valuable comments.
**Q1: Clarification about comparison on long videos.**
**A1:** Regarding your questions about run-time and performance, we'll address them as follows:
- **Performance:** As the model's FLOPs increase with $T$, our method requires only **9.8%** of the FLOPs of EfficientSCI. While achieving comparable PSNR to EfficientSCI, our method significantly outperforms it on the SSIM metric. Additionally, as shown in the visual comparison in **Figure 4** of the submitted PDF, our method provides superior visual quality compared to EfficientSCI.
- **Run-time:** The reason why our method's runtime is not as competitive as EfficientSCI's can be attributed to two key factors:
- **Mamba Characteristics.** Mamba's acceleration is tied to the GPU's hardware characteristics and the underlying CUDA architecture, limiting our ability to enhance its speed directly.
- **Network Framework Optimization.** To achieve high-quality reconstruction, our framework currently uses 3D convolution for channel feature interactions at the end of each Residual-Mamba-Block. However, this process becomes more time-intensive as $T$ increases. We plan to optimize the framework further in the future to develop a lightweight, high-quality, and faster reconstruction algorithm.
**Q2: Clarification of the innovation of DWConv.**
**A2:** The depth-wise convolution used by Vivim [1] is essentially a 3D depth-wise convolutional layer, which focuses solely on spatial feature extraction. **It lacks a pointwise convolution component, meaning it doesn't account for the relationships between channels**. In contrast, the DWConv we designed **integrates both depthwise and pointwise convolution**. First, it processes spatial information through depthwise convolution, then it combines the relationships between channels via pointwise convolution to produce the final output features. This design allows us to effectively extract local detailed features and combinations between channels while reducing computational costs and the number of parameters.
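A minimal NumPy sketch of this depthwise-then-pointwise composition (our own illustration of the general technique, not the paper's implementation):

```python
import numpy as np

def depthwise_separable_conv(x, dw_kernels, pw_weights):
    """x: (C, H, W). dw_kernels: (C, k, k), one spatial kernel per channel.
    pw_weights: (C_out, C), a 1x1 convolution mixing channels.
    Depthwise stage: each channel convolved independently (spatial detail).
    Pointwise stage: 1x1 conv recombines channels (cross-channel relations)."""
    C, H, W = x.shape
    k = dw_kernels.shape[-1]
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    dw = np.zeros_like(x)
    for c in range(C):                 # depthwise: per-channel spatial conv
        for i in range(H):
            for j in range(W):
                dw[c, i, j] = np.sum(xp[c, i:i+k, j:j+k] * dw_kernels[c])
    # pointwise: mix channels at every pixel with a 1x1 convolution
    return np.einsum('oc,chw->ohw', pw_weights, dw)

# Identity depthwise kernel + identity pointwise mixing leaves x unchanged.
C, H, W = 3, 5, 5
x = np.arange(C * H * W, dtype=float).reshape(C, H, W)
ident = np.zeros((C, 3, 3)); ident[:, 1, 1] = 1.0
y = depthwise_separable_conv(x, ident, np.eye(C))
assert np.allclose(y, x)
```

Omitting the pointwise stage, as in a depthwise-only layer, would leave each output channel a function of a single input channel, which is the channel-isolation issue described above.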
Thank you for your insightful feedback. We will update the DWConv section of Figure 4 in the main manuscript accordingly and provide a more detailed explanation of DWConv in lines 186-187. Additionally, we will include further analysis in the ablation study in Section 4.5 to enhance the clarity and robustness of our findings.
[1] Yang Y, Xing Z, Zhu L. Vivim: a video vision mamba for medical video object segmentation[J]. arXiv preprint arXiv:2401.14168, 2024.
---
Rebuttal Comment 2.1:
Comment: Thank you for the rebuttal. Most of my concerns have been addressed. However, I strongly suggest the authors include these additional experiments and analyses in the revised manuscript, and clearly describe the difference against previous Mamba-related works. I tend to maintain my score. | Summary: This manuscript introduces the Mamba model and Quad-Bayer CFA pattern into color video snapshot compressive imaging (SCI) for the first time. Specifically, the proposed MambaSCI adopts a non-symmetric U-shaped encoder-decoder architecture, which includes DWConv, Residual-Mamba-Blocks, and ReConv. The Residual-Mamba-Blocks integrate several modules designed to enhance reconstruction quality and edge details. Experiments demonstrate that MambaSCI outperforms comparison methods in both quantitative and qualitative results, with fewer parameters and FLOPs.
Strengths: 1. This work introduces a method for quad-Bayer patterned color video SCI for the first time. Quad-Bayer sensors offer better hardware performance than Bayer pattern sensors and represent a promising direction worth exploring in SCI.
2. The Mamba model, as a popular module for extracting data causality, aligns well with SCI tasks. Its incorporation into SCI is well-justified.
3. The manuscript is generally well-structured and well-written.
Weaknesses: + The proposed MambaSCI takes X_in as input, which is obtained from the initialization block, so the framework can be seen as a simple video enhancement task. Furthermore, MambaSCI does not consider the characteristics of the Quad-Bayer pattern like previous work (STFormer, EfficientSCI), but instead directly adopts or integrates popular modules. In-depth analysis appears to be lacking for certain modules, specifically within the passages in lines 185-187 and 193-197.
+ The lack of an explanation for the initialization block raises questions about whether special processing was done to address issues introduced by the quad-Bayer pattern. It would be better to add further visualization of the X_in results for comparison.
+ In Figure 6, the results of the proposed method show artifacts and color distortion in the lower left corner of "Swinger". More analysis should be included.
+ The manuscript lacks real experiments on videos captured by quad-Bayer patterns. All the results are obtained on simulated data. Results on real data are expected to evaluate the effectiveness of the proposed method.
Technical Quality: 2
Clarity: 3
Questions for Authors: Please refer to weakness.
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 1
Limitations: Please refer to weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Response to Reviewer zshs
Thanks for your valuable comments.
**Q1: Differences between the proposed method and the video enhancement-based ones.**
**A:** **MambaSCI significantly differs from video enhancement technology in the nature of the task, input differences, and use of prior knowledge**, as detailed below:
- **Nature of the Task:** Video enhancement tasks typically improve RGB video quality by addressing noise or missing frames. In contrast, the MambaSCI network addresses the more complex SCI task, which involves not only overcoming compression degradation but also handling the additional effects of the quad-Bayer array mask. Thus, the MambaSCI network has a dual mission: **accurate reconstruction of compressed data** and **conversion and demosaicing from the raw domain to standard RGB format**, requiring a network that is highly flexible and capable of managing the unique complexities of SCI technology.
- **Huge Gap Between the Inputs of the Two Tasks:** Like previous models such as STFormer and EfficientSCI, our MambaSCI reconstruction network also requires an initialization block for up-sampling, which is a crucial component of the SCI reconstruction task. However, the $\mathbf{x_{in}}$ obtained after initialization differs significantly from traditional video enhancement inputs. We present $\mathbf{x_{in}}$ after initialization in **Figure 1** of the supplemental PDF, where it is evident that the degradation is much more severe than what common video enhancement networks can handle.
- **Utilization of Different Prior Knowledge:** In the reconstruction process, MambaSCI leverages the unique prior knowledge of compression and quad-Bayer arrays, contrasting sharply with video enhancement's traditional focus on smoothness and edge preservation.
**Q2: Clarification of why the quad-Bayer pattern is not characterized in the same way as in previous work.**
**A:** Both STFormer and EfficientSCI were designed as Bayer-array-based video compression reconstruction methods and only considered Bayer pattern features in their initialization blocks. **In contrast, we have developed a specialized initialization block to address the unique characteristics of quad-Bayer physical arrays, and applied it specifically to GAP-TV, PnP-FFDnet, and PnP-FastDVDnet.** Details are as follows:
- GAP-TV, PnP-FFDnet, and PnP-FastDVDnet use iterative optimization by dividing the raw domain into R, G1, G2, and B blocks for independent reconstruction. Similarly, we divided the quad-Bayer array into R, G1, G2, and B sub-blocks as shown in lines 103-107 in the main manuscript. However, due to the quad-Bayer pattern's 2x2 grid representation for each color, processing is significantly more complex than standard Bayer pattern, resulting in an increased processing time from 0.0015s to 7.3312s at 512x512 resolution compared to the SCI generic initialization method.
- Leveraging the powerful modeling capabilities of the E2E model and considering the need for a lightweight design due to the limited computing power of future handsets, we selected the SCI generic initialization block for STFormer, EfficientSCI, and MambaSCI. Our experiments confirmed nearly no performance loss with this choice.
**Figure 1** of our PDF submission provides a visual comparison of $\mathbf{x_{in}}$ obtained using the SCI generic initialization block versus an initialization block designed specifically for the physical characteristics of a quad-Bayer array.
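To illustrate the sub-block division described above, a small sketch (our own example, not the authors' code) that splits a quad-Bayer raw frame into R, G1, G2, and B planes:

```python
import numpy as np

def quad_bayer_subblocks(raw):
    """Split a quad-Bayer raw frame (H, W) into R, G1, G2, B sub-images.
    With 2x2 color blocks arranged in a 4x4 repeating tile, we first reshape
    into tile coordinates, then slice out each color's 2x2 block per tile."""
    H, W = raw.shape
    assert H % 4 == 0 and W % 4 == 0
    # (H/4, W/4, 4, 4): last two axes index within each 4x4 tile
    t = raw.reshape(H // 4, 4, W // 4, 4).transpose(0, 2, 1, 3)
    R  = t[:, :, 0:2, 0:2]   # top-left 2x2 block of every tile
    G1 = t[:, :, 0:2, 2:4]   # top-right
    G2 = t[:, :, 2:4, 0:2]   # bottom-left
    B  = t[:, :, 2:4, 2:4]   # bottom-right
    return R, G1, G2, B
```

Each color plane here carries four samples per tile rather than one, which is why iterating over quad-Bayer sub-blocks is more costly than the standard Bayer R/G1/G2/B split.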
**Q3: Enhancing in-depth analysis of certain modules.**
**A:** We have added a deeper analysis of the EDR module, bottleneck layer, and the decoding layer in the following areas:
- **More Detailed Analysis:** We have thoroughly analyzed the EDR module, detailing its inner workings and its critical role in our framework. Additionally, we have improved the presentation and analysis of the bottleneck and decoding layers.
- **Figure-Text Interaction:** We have redrawn Figures 3 and 4 in the main manuscript and improved their integration with the text to facilitate better understanding and clarity for the reader.
- **Corresponding to the Ablation Experiment:** The module description is linked to the associated ablation experiments, allowing the reader to intuitively grasp the module's important role in the network.
**Q4: Lack of real experiments on videos captured by quad-Bayer patterns.**
**A:** The reasons for the lack of real experiments are as follows:
- Since we are exploring a new task that is not currently being investigated, there are no publicly available datasets for comparison, as stated by reviewer **yWFS (there is no real SCI dataset based on quad-Bayer pattern)**.
- To obtain the real-world datasets, a corresponding optical coding system must be assembled. We are working on building this system, but it will take six months to one year to complete, even with experienced personnel. Additionally, we need to upgrade the existing system by replacing the current modules with four-layer modules.
- To demonstrate the advantages of quad-Bayer over Bayer, we plan to acquire more datasets from extreme scenes and collect HDR data. This will significantly expand the current research field of SCI.
**Q5: Analysis of artifacts and color distortion in the lower left corner of "Swinger".**
**A:** Thanks to your careful observation, we have added the visual comparison in **Figure 2** of the submitted PDF, showing different methods to locally zoom in on the lower left corner of the "swinger." The intricate ropes in this area, as depicted in the GT, present a significant challenge due to their dense packing and rapid movement between frames, leading to ghosting artifacts in the reconstruction.
To address this limitation, we plan to refine our model further. In future work, we will focus on improving the model's ability to handle fast motion between frames, minimizing artifacts and achieving more accurate reconstructions in complex and dynamic scenes.
---
Rebuttal Comment 1.1:
Comment: 1. The quad-Bayer pattern has some advantages, e.g., higher resolution and HDR capability, in conventional camera imaging. However, these advantages are not necessarily inherited by the SCI configuration due to the mask modulation. Moreover, the current simulation of the SCI pipeline is oversimplified without considering optical effects in real scenarios. Therefore, it is crucial to have real-scene evidence to verify the significance of the SCI task with quad-Bayer.
The manuscript lacks real experiments on videos captured by quad-Bayer patterns. All the results are obtained on simulated data. Results on real data are expected to evaluate the effectiveness of the proposed method.
2. Regarding the result of the "Swinger", I feel that the PnP-FastDvDnet method provides better visual quality than the proposed MambaSCI, given that both introduced some artifacts. While PnP-FastDvDnet presents some ghosting artifacts, MambaSCI brings obvious geometrical distortion: line structures are curved locally with also some ringing artifacts.
---
Rebuttal 2:
Title: Further explanation
Comment: Thanks for your valuable comments.
**Q1: Clarification on the lack of real experiments using videos captured by quad-Bayer patterns.**
**A1:** As we are pioneering a **new task** in the field of video SCI, which is still in its **exploratory stage**, no public dataset currently exists for this task. Meanwhile, our main innovations include:
- **New task:** Quad-Bayer offers significant advantages over traditional Bayer sensors, such as higher resolution and HDR capabilities, and is widely used in smartphone cameras. However, there have been no studies on its application in video SCI. We are **the first** to introduce quad-Bayer into video SCI, aiming to expand the field and achieve higher quality reconstruction.
- **New method:** To enable lightweight reconstruction, we are **the first** to utilize an asymmetric U-shaped architecture, employing customized Residual-Mamba-Blocks modules to achieve efficient, high-quality video reconstruction.
Furthermore, constructing the corresponding SCI encoding system requires specialized optical encoding hardware and significant time, which is often beyond the capabilities of researchers focused on decoding algorithms.
**We are working on building and refining this system** to fully leverage the advantages of quad-Bayer over Bayer, such as capturing video in low-light environments and compressing HDR video SCI by adjusting pixel exposure within the 2x2 color grid.
**Q2: Clarification of swinger's visual comparison.**
**A2:** As detailed in lines 494-500 of the Supplementary Material of the main manuscript, we utilized the high-performance BJDD [1] model for joint denoising and demosaicing to display the RGB images reconstructed by PnP-FastDvDnet. Specifically, **PnP-FastDvDnet only handles Raw domain reconstruction**, while BJDD performs further denoising and demosaicing to obtain the final RGB video. In contrast, **our model integrates both Raw domain reconstruction and demosaicing**. As a result, the visual quality of PnP-FastDvDnet may appear superior due to the BJDD network's influence. Therefore, as described in the main text, to ensure a fair comparison, we converted the RGB video output from MambaSCI back to the Raw domain and calculated metrics such as PSNR and SSIM in the Raw domain. The performance metrics of MambaSCI vs. PnP-FastDvDnet on Swinger in the Raw domain are compared as follows:
|Methods|PSNR (dB)|SSIM|
|-|-|-|
|PnP-FastDvDNet|28.60|0.887|
|MambaSCI|**29.78**|**0.920**|
As seen in the table, our method outperforms PnP-FastDVDnet in both PSNR and SSIM in the Raw domain. Additionally, as shown in Figure 6 and Figure 15 of the main manuscript, our approach achieves superior reconstruction in other detailed areas compared to PnP-FastDVDnet.
[1] A Sharif S M, Naqvi R A, Biswas M. "Beyond joint demosaicking and denoising: An image processing pipeline for a pixel-bin image sensor." *CVPR* 2021.
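For reference, the raw-domain PSNR used in such comparisons follows the standard definition; a toy sketch (our own illustration, assuming a peak value of 1.0):

```python
import numpy as np

def psnr(ref, rec, peak=1.0):
    """Peak signal-to-noise ratio in dB between two same-shaped arrays."""
    mse = np.mean((ref - rec) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Toy check: a reconstruction with uniform error 0.01 against a raw frame.
raw_gt = np.zeros((4, 4))
raw_rec = raw_gt + 0.01
val = psnr(raw_gt, raw_rec)   # mse = 1e-4, so 10*log10(1/1e-4) = 40 dB
assert abs(val - 40.0) < 1e-9
```

Converting the RGB output back through the quad-Bayer mask before applying this metric is what keeps the comparison in the raw domain.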
---
Rebuttal Comment 2.1:
Title: Further discussion
Comment: I appreciate the authors' clarification and the new result in the Raw domain. However, I am not sure whether my concerns have been addressed.
**Q1: Real System.**
For the simulation as is, changing the Bayer pattern is a simple modification: one can easily replace the old Bayer mask with the new mask in the data generation and reconstruction. Moreover, the simulation is oversimplified without considering the camera response, optical transmission, and mask discretization ($\mathbf{M} \in \mathbb{R}$ in the current work), which presents a significant gap against practical implementation. Glad to know that the authors are constructing a real system for the proposed pipeline, which I feel would significantly lift the contribution and quality of this research work.
**Q2: Reconstruction Quality.**
Intuitively, if trained properly, the proposed end-to-end reconstruction pipeline has great potential to outperform the cascaded "PnP-FastDvDnet + BJDD" combination. If that is not the case, I feel it would be worthwhile to further improve the reconstruction net.
---
Rebuttal 3:
Title: Further Explanation
Comment: Thanks for your valuable comments.
**Q1: Real System.**
**A1:** Since we are exploring a new task in the SCI field, our focus has been on addressing issues like artifacts and color distortion when dealing with quad-Bayer array video. We appreciate your insight into the potential challenges that might arise in real-world scenarios. We will take your suggestions into account and work on developing a quad-Bayer-based coding system as soon as possible.
**Q2: Reconstruction Quality.**
**A2:** Our proposed E2E network strikes a balance between performance and efficiency. As demonstrated in **Figure 5 of the main manuscript**, at a resolution of 512x512, our method produces superior visual results compared to the cascaded "PnP-FastDvDnet + BJDD" combination. Our approach maintains high color fidelity without introducing additional artifacts, whereas the "PnP-FastDvDnet + BJDD" combination suffers from severe artifacts and shape distortions across multiple datasets. Also, we provide a comparison of the corresponding performance metrics as well as a comparison of the reconstruction times in the table below:
*Table 1: A comparison of performance metrics (PSNR (dB), SSIM) and reconstruction time between PnP-FastDVDnet and MambaSCI.* (**Note: The time for PnP-FastDvDnet does not include the additional time required for the demosaicing process performed by BJDD.**)
|Method|Beauty|Bosphours|Runner|ShakeNDry|Traffic|Jockey|Running time(s)|
|-|-|-|-|-|-|-|-|
|PnP-FastDvDnet|34.29,0.967|33.07,0.947|34.18,0.928|30.11,0.883|23.74,0.811|32.70,0.921|14.60|
|MambaSCI|**36.95,0.979**|**38.62,0.982**|**40.02,0.977**|**34.55,0.950**|**27.52,0.904**|**36.54,0.960**|**5.12**|
Table 1 clearly highlights the superior performance of our approach compared to PnP-FastDVDnet, along with a significant advantage in the time required for reconstruction.
Additionally, we are actively exploring improvements to the pipeline and have identified the following areas for enhancement:
- **Raw Domain Reconstruction Performance:** While our method performs well on middle-scale datasets, it does not achieve superior metrics on large-scale datasets, primarily due to the significant difference in resolution sizes between training (128\*128, and 256\*256 for finetuning) and testing (1920\*1080). We are trying to further investigate multi-scale generalization approaches to address this issue.
- **Improved Demosaicing Algorithms:** Similar to previous Bayer-based reconstruction methods, and for lightweight considerations, our current approach utilizes 3D convolution for the final demosaicing operation. However, the distinct structure of the quad-Bayer pattern makes it more challenging to demosaic compared to traditional Bayer arrays. This complexity can often result in color confusion and artifacts during the demosaicing process. **Recognizing this limitation, we have designed a lightweight demosaicing network that has already shown promising results in preliminary experiments.** We will continue to refine and explore this approach further.
---
Rebuttal Comment 3.1:
Comment: We sincerely appreciate your careful consideration and prompt response to our rebuttal. We are readily prepared to address any further questions or concerns you may have.
---
Rebuttal Comment 3.2:
Comment: Thanks for the explanation, from which I can feel the authors' frank and honest attitude. While this work investigates a topic that might be of interest in the area of SCI, it leaves too many aspects unconsidered, such as camera response, optical transmission, noise modeling (commented on also by Reviewer yWFS), and mask discretization. This suggests that the research is at its early stage, and I would encourage the authors to further investigate these issues. | Summary: This paper presents a method for compressive video imaging using a Mamba-UNet for quad-Bayer sensors.
Strengths: + The work seems to be in a less explored area of research.
Weaknesses: - The work is not making a significant contribution in terms of method. Directly applying Mamba-UNet to this problem looks unnatural. Why not use a transformer or a CNN? I don't think there will be much of a difference.
- The writing needs improvement and section 3 is not that clear.
Technical Quality: 3
Clarity: 3
Questions for Authors: What is the primary motivation to address quad bayer sensors when they are not that prevalent?
What is the need of using Mamba-Unet for this work?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are not addressed well.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### **Response to Reviewer M9Ao**
Thanks for your valuable comments.
**Q1. Clarification of why Mamba and not transformers or CNNs.**
**A:** We have added further discussion of this topic in lines 59-61 of main manuscript, with a more detailed analysis and experimental validation as follows:
Transformer and CNN-based Video Snapshot Compressive Imaging methods have been proposed such as **transformer-based method STFormer** and the **hybrid CNN- and transformer-based method EfficientSCI**. However, existing methods still face two issues:
1. **Technique perspective:**
- The high computational complexity of Transformers and the lack of a global attention mechanism in CNNs hinder their extension into modern, lightweight, and efficient network architectures. Therefore, we focus on exploring multi-scale Mamba-UNet reconstruction for the quad-Bayer pattern to achieve a lightweight design and enable deployment on mobile devices.
- As shown in Table 2 of main manuscript, MambaSCI-B outperforms STFormer and EfficientSCI, using only **31%** of STFormer’s parameters and **4.5%** of its FLOPS, and **69%** of EfficientSCI's parameters and **9.8%** of its FLOPS.
2. **Pattern perspective:** All existing video SCI methods are designed based on the traditional Bayer pattern. When applied to videos captured by quad-Bayer cameras, these methods often result in color distortion and ineffective demosaicing, rendering them impractical for mainstream devices. Thus, we aim to solve these issues with two key contributions: a **new task and a new method**, i.e., efficient, high-quality recovery of quad-Bayer arrays through the asymmetric Mamba-UNet framework. **To the best of our knowledge, we are the first to formulate the quad-Bayer-patterned video snapshot compressive imaging task, and ours is the first algorithm to elaborate a Mamba-UNet design for it.**
**Q2. Clarification of the necessity and advantages of quad-Bayer.**
**A:** Quad-Bayer arrays are prevalent: almost all current flagship smartphone cameras, such as those in the iPhone 14 Pro/Max, vivo X90 Pro+, Xiaomi 13S Ultra, and OPPO Find X6 Pro, utilize quad-Bayer arrays. We have added a description of the main applications where quad-Bayer is used and its advantages over Bayer in lines 35-38 of the main manuscript. Meanwhile, integrating the quad-Bayer array into video SCI is not only necessary but also offers significant advantages over Bayer arrays:
1. **Necessity:**
- **Widespread Use on Smartphones:** Quad-Bayer arrays are common in smartphone cameras [1], which are frequently used for video recording. Existing methods face color distortion and artifacts; our work provides a customized solution for quad-Bayer color video SCI that overcomes these issues while offering space-saving, high-quality reconstruction through compression and a lightweight design for smartphones.
- **New Industrial Demands and Research Trends:** Bayer arrays have been widely used in industry and well-studied in academia due to their long history, performing adequately in bright scenes. However, the imaging capabilities in low-light conditions are limited. Therefore, exploring quad-Bayer arrays is essential to address these deficiencies.
2. **Advantages:** Quad-Bayer arrays have the following advantages over Bayer arrays:
- **Higher Resolution:** Quad-Bayer provides higher resolution than conventional Bayer arrays by realizing higher pixel density on the sensor for better detail reproduction [2].
- **Low-Light Performance Enhancement:** Quad-Bayer collects more light in low-light environments, improving the sensor's signal-to-noise ratio and reducing noise interference.
- **HDR Capability:** By adjusting quad-Bayer sub-pixels and setting different exposure values on different sub-pixels, high dynamic range (HDR) images or videos can be captured to improve the detail in highlights and shadows [3-4].
- **Color Accuracy:** The quad-Bayer pattern captures richer color information by densely sampling red, green, and blue within each 2x2 block. This fine sampling reduces the likelihood of color distortion and improves the color accuracy and realism of the image [5].
**Therefore, we have introduced quad-Bayer arrays to video SCI for the first time, providing a solution for reconstructing higher-quality videos even HDR videos in the future.**
**Q3. Section 3 lacks clarity.**
**A:** To address this, we have made the following changes to the paper:
- **Top-Down Structure:** We use a top-down structure to introduce our framework, providing an overview followed by detailed component breakdowns. This approach ensures a clear understanding of the overall algorithm and the specific functions of each individual part.
- **Clear Delineation:** We clearly delineate Section 3.2 to provide a detailed analysis of each module, including a thorough examination of the internal structure and a clear explanation of each module's function.
- **Figure-Text Interaction:** Figures 3 and 4 in the main manuscript have been revised to align with the content and to present the network framework and internal module details more effectively.
- **Contextual Analysis:** We ensured that the model description is consistent with Section 4's experimental setup and cross-references the Supplementary Material's pseudo-code to aid reproduction.
[1] Madhusudana P C, et al. "Mobile Aware Denoiser Network (MADNet) for Quad Bayer Images" *CVPRW* 2024.
[2] Zheng B, et al. "Quad Bayer Joint Demosaicing and Denoising Based on Dual Encoder Network with Joint Residual Learning" *AAAI* 2024.
[3] Kim J, Kim M H. "Joint demosaicing and deghosting of time-varying exposures for single-shot hdr imaging." *ICCV* 2023.
[4] Wu T, et al. "High Dynamic Range Imaging with Multi-Exposure Binning on Quad Bayer Color Filter Array. " *ICIP* 2023.
[5] Lee H, et al. "Efficient unified demosaicing for bayer and non-bayer patterned image sensors." *ICCV* 2023.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response. My doubts are cleared and such a revision should help. I have increased my rating after also reading other reviews. Good luck!
---
Rebuttal 2:
Title: Thank you for your review!
Comment: We sincerely appreciate your thorough review of our paper, careful consideration of our rebuttals, and raising the score. If you have any further questions or concerns, we are always available to provide additional clarification as necessary. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their time and insightful comments, which have helped improve our paper. We are pleased that the reviewers found our introduction of Mamba into the SCI task to be very novel and well justified, and that our proposed quad-Bayer patterned SCI task is a direction worth exploring. The main criticisms are summarized in the next section, after which we address each reviewer's comments individually.
In response to the reviewers' constructive criticism, we added several new experiments, including an ablation on whether to consider quad-Bayer characteristics in the initialization block, an analysis of the "Swinger" results, an evaluation of the EDR module for edge detail reconstruction, and a comparison of reconstruction in higher-frame-rate scenes. The new analyses are shown in **Figures 1 to 4** of the associated PDF file.
**New comparative experiments and analysis.** To verify the validity and reasonableness of our method, we have conducted additional experiments and analysis accordingly. These experiments include:
- **Comparison of Initialization Methods with Quad-Bayer Characteristics and SCI Generic Blocks.** We have customized a quad-Bayer-specific initialization block that accounts for the characteristics of the quad-Bayer pattern (each color consists of 2x2 pixels) and have applied it to GAP-TV, PnP-FFDnet, and PnP-FastDVDnet. **Figure 1** shows a visual comparison of the $\mathbf{x_{in}}$ obtained with the quad-Bayer-aware initialization block and the $\mathbf{x_{in}}$ obtained with the SCI generic initialization block, demonstrating that the SCI generic initialization block reduces reconstruction time while maintaining performance. For a more detailed analysis, see **Reviewer zshs's Q2**.
- **Zoomed-In Patch Comparison of "Swinger" with Ghosting Artifact Analysis.** **Figure 2** shows a localized zoomed-in patch comparison image of the lower left corner of the "swinger", along with an analysis of the ghosting artifacts that appear.
- **Impact of EDR Module on Edge Detail Reconstruction.** **Figure 3** shows the effect of the EDR module (w/ vs. w/o) on edge detail reconstruction; the EDR module significantly improves the reconstruction of edge details.
- **Reconstruction Effects at Higher Frame Counts and Compression Ratios.** We conducted new experiments with longer frame sequences and larger compression ratios. **Figure 4** compares the reconstruction quality of our method and the comparative methods at higher frame counts T (compression ratios). The related performance comparison is shown in **Table 1**:
*Table 1: Performance analysis at $T$=16 and 32. Our MambaSCI outperforms the comparison methods in PSNR and SSIM while requiring less than **10%** of their FLOPS.*
| T | Methods | Params (M) | FLOPS (G) | PSNR (dB) | SSIM | Time (s) |
|----|---------------|-----------|-------|---------|-----------|----------|
| 16 | PnP-FFDnet | - | - | 24.85 | 0.767 | 4.52 |
| | STFormer | 19.49 | 24311.76 | 25.21 | 0.685 | 3.13 |
| | EfficientSCI | 8.83 | 11406.23| 25.35 | 0.656 | **1.37** |
| | MambaSCI | **6.11** | **1113.78** | **25.39** | **0.817** | 2.67 |
| 32 | PnP-FFDnet | - | - | 1.82 | 0.496 | 9.85 |
| | STFormer | 19.49 | OOM | - | - | - |
| | EfficientSCI | 8.83 | 22825.34 | **23.24** | 0.653 | **2.21** |
| | MambaSCI | **6.11** | **2227.57** | 22.44 | **0.785** | 5.20 |
Pdf: /pdf/f768a2406616daf8ad956cd9185eef8dd9f4caae.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Enhancing Graph Transformers with Hierarchical Distance Structural Encoding | Accept (poster) | Summary: This work proposes a new structural encoding for graphs, which can be applied in e.g. graph transformers. The method relies on graph partitioning / coarsening methods to generate a hierarchical clustering of a graph, and the structural encoding is more informative than shortest-path structural encoding. The paper proves that the new structural encoding has higher expressivity, and empirical results are good on most graph datasets.
Strengths: The method is novel, instead of running a graph transformer on different hierarchical clusters, it makes use of the shortest path on different hierarchies.
Overall the writing is pretty good, the narrative is clear and easily understandable. The illustration is also nice.
The theoretical and empirical results are strong. The experiments are exhaustive.
Weaknesses: The graph partitioning / coarsening preprocessing, including the partitioning algorithm and shortest path computation, can also be complex.
Technical Quality: 3
Clarity: 4
Questions for Authors: According to the definition of edge set $E^{k+1}$ in line 78, wouldn't there be information loss? If there are multiple edges across two clusters, the number of edges is not represented.
In section 3.3, is the partition matrix low rank? Especially if the coarsening ratio is close to 1, I guess the partition matrix would be almost full rank.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: According to authors, for larger graphs, more hierarchies are required.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Thank you for clearly understanding the core of our work and fully recognizing our contributions!**
**W1: Complexity**
>The graph partitioning / coarsening preprocessing, including the partitioning algorithm and shortest path computation, can also be complex.
This is a fair point. We acknowledge that our approach introduces an additional cost. However, we would like to clarify that this cost is manageable. In our configuration, we employ the METIS algorithm for coarsening on large graphs, which is characterized by a linear time complexity of $O(|E|)$, making it efficient for partitioning graphs. For example, as reported in *Appendix C.4* of our manuscript, it takes less than five minutes for coarsening and shortest path computation on the ogbn-products graph with 2 million nodes.
We believe that the enhanced performance of our method, as demonstrated in our empirical evaluations, justifies the additional small computational overhead.
**Q1: Theoretical Details**
> According to the definition of edge set Ek+1 in line 78, wouldn't there be information loss? If there are multiple edges across two clusters, the number of edges is not represented.
Thank you for carefully checking the theoretical details! We adhere to the standard definitions of Graph Hierarchies. Under this framework, some information loss regarding the number of edges between clusters is inevitable. However, our HDSE framework addresses this by retaining the original edge information within $\text{GHD}^0$, the shortest path distance matrix of the input graph.
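The multi-edge collapse mentioned above can be illustrated with a toy sketch (our own hypothetical example of the standard coarse-graph edge-set definition, not the paper's code):

```python
# Coarse edge set E^{k+1}: an edge between two clusters exists iff at least
# one original edge crosses them; the multiplicity is dropped, which is the
# information loss the question points at. Node/cluster assignments here
# are hypothetical.
edges = [(0, 1), (0, 2), (1, 3), (1, 4), (2, 3)]
cluster = {0: 0, 1: 0, 2: 1, 3: 1, 4: 1}   # two clusters: {0,1} and {2,3,4}

coarse_edges = {(min(cluster[u], cluster[v]), max(cluster[u], cluster[v]))
                for u, v in edges
                if cluster[u] != cluster[v]}
print(coarse_edges)   # three crossing edges collapse into one coarse edge
```

Because the original graph's distances are kept in $\text{GHD}^0$, this collapse at higher levels does not erase the fine-grained edge information.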
> In section 3.3, is the partition matrix low rank? Especially if the coarsening ratio is close to 1, I guess the partition matrix would be almost full rank.
Yes, the partition matrix is inherently low-rank. In our settings, the coarsening ratio is much smaller than 1, ensuring that the partition matrix remains low-rank.
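A toy check of this point (our own hypothetical partition, not from the paper): the rank of a hard-partition matrix is at most the number of clusters, so it is low-rank whenever the coarsening ratio is well below 1.

```python
import numpy as np

# Toy hard-partition matrix P for 8 nodes grouped into 3 clusters
# (coarsening ratio 3/8). P[i, c] = 1 iff node i belongs to cluster c.
labels = np.array([0, 0, 1, 1, 1, 2, 2, 2])
P = np.eye(3)[labels]            # shape (8, 3)

# rank(P) is bounded by the number of clusters |V^1|, so P is low-rank
# whenever the coarsening ratio |V^1| / |V| is much smaller than 1.
print(P.shape, np.linalg.matrix_rank(P))   # (8, 3) 3
```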
Once again, we sincerely appreciate your acknowledgment of our work.
---
Rebuttal Comment 1.1:
Comment: Thank you for your feedback!
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer o879,
This final confirmation is highly appreciated!
Best regards,
Authors | Summary: This paper leverages graph coarsening techniques to help the graph transformer capture hierarchical distance information on a graph, improving its performance on both node-level and graph-level tasks. Besides empirical validation, the paper also provides theoretical guarantee about the better expressiveness and generalization of the graph transformer with the proposed HDSE.
Strengths: - This paper is well organized and clearly written.
- The method incorporates coarsening technology, integrating distance information at different hierarchical levels into the attention score. It is easy to understand and implement and is somewhat novel. Additionally, it has a certain theoretical guarantee regarding the expressiveness and generalization of the proposed method.
- The experiments are comprehensive, evaluating the model's performance on homophilic and heterophilic graphs, as well as on large-scale graphs. Efficiency and ablation studies are also conducted, along with visualizations. The improvements are significant on some datasets, such as Peptides-func, Chameleon, and Squirrel.
Weaknesses: - There are some missing results in the experiments (mainly in Tables 2 and 5). For example, the results for models such as ANS-GT, NAGphormer, and LINKX on Actor, Squirrel, and Chameleon are missing. Although implementing all work on all datasets is time-consuming, the missing results reduce the paper's convincingness. Additionally, some results differ from previous reports. For instance, LINKX reported its performance as about 56 on the arxiv-year dataset [1], but in this paper, it is around 53.53. Is this because of different experimental settings?
- Besides, the performance of the model on large-scale heterophilic graphs remains unknown. The few heterophilic graphs shown in Table 5 are not large. To further demonstrate the model's performance on large-scale heterophilic datasets, experiments on datasets such as pokec, snap-patents, and wiki [1] are recommended. This would enhance the quality of the experiments.
[1] Large Scale Learning on Non-Homophilous Graphs: New Benchmarks and Strong Simple Methods
Technical Quality: 3
Clarity: 3
Questions for Authors: Besides the questions mentioned above, there are some additional questions:
**Q1:** In Equation 7 and the illustration below, could you please explain why $\text{GHD}^m\prod \limits_{l=0}^{c-1}P^l$ computes distances from input nodes to clusters at the c-level graph hierarchy? For the input node, why is it not $\text{GHD}^0$? $\text{GHD}^m$ represents the distance at hierarchy $m$. What is the meaning of $\text{GHD}^m$ with the product of projection (coarsening) matrices from level $0$ to $c-1$ (i.e. $\prod \limits_{l=0}^{c-1}P^l$)?
**Q2:** In Table 6, the efficiency of GOAT+HDSE, SGFormer, and NodeFormer was compared. However, since NodeFormer and SGFormer use linear attention, is GOAT+HDSE still competitive in terms of efficiency when the graphs are larger (i.e., ogbn-products and ogbn-papers100m)? As the method is claimed to have scalability, illustrations on such large-scale graphs would be beneficial. Additionally, as HDSE is based on graph coarsening and GHD, the cost of coarsening and computation of GHD at each level should also be considered, especially on graphs with more nodes than those used in Table 17.
**Q3:** In the Experiment Setting, the same hyperparameters are used for the proposed method and the baseline transformers. It is also claimed that the hyperparameters are determined within SGFormer's grid search space. However, the optimal hyperparameters for the proposed method may not be suitable for the baseline models. Could you clarify this?
**Q4:** In Appendix 4, the attention scores are displayed between the selected node and the other nodes. How was the node selected? Would different selected nodes lead to different attention weights and still reflect the capability to capture a multi-level hierarchical structure?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: When graphs are larger, a higher maximal hierarchy level $K$, is needed to obtain multi-level structure information, which may be limited by current coarsening algorithms. This limitation can be addressed with more effective coarsening algorithms.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **We greatly appreciate the very detailed feedback and your recognition of our contributions! We hope our response below will further enhance your confidence in our work.**
**W1: Incompleteness of Experimental Results**
> The results for models such as ANS-GT, NAGphormer, and LINKX on Actor, Squirrel, and Chameleon are missing.
Since we experimented with well-known datasets, **we reported the baseline results from their respective original papers**. We did not run these baselines ourselves but, following your recommendation, have now run all the missing experiments you mentioned. The additional results below further demonstrate the effectiveness of our HDSE. We will include these results in the revised version.
| | Actor↑ | Squirrel↑ | Chameleon↑ |
| --------------- | -------------- | -------------- | -------------- |
| ANS-GT | 35.2 ± 1.3 | 40.8 ± 2.1 | 42.6 ± 2.7 |
| NAGphormer | 34.3 ± 0.9 | 39.7 ± 0.8 | 40.3 ± 1.7 |
| LINKX | 36.1 ± 1.5 | 41.9 ± 1.2 | 43.8 ± 2.9 |
| **GOAT + HDSE** | **38.0 ± 1.5** | **43.2 ± 2.4** | **46.0 ± 3.2** |
> The arxiv-year results differ from previous reports.
Yes, the observed discrepancies on the arxiv-year dataset are due to different experimental settings. We employed the experimental setup used in GOAT to ensure a fair comparison. The results we reported—53.53 for LINKX and 53.57 for GOAT—are consistent with those in the original GOAT paper (Table 2 in GOAT paper).
**W2: Performance on Large-Scale Heterophilic Graphs**
> Experiments on datasets such as pokec, snap-patents, and wiki are recommended.
We appreciate your suggestion.
We ran additional experiments on the pokec and snap-patents datasets. We attempted to run experiments on the wiki dataset, but the download link for the label file 'wiki_views2M' was no longer valid. We contacted the authors of the LINKX paper but haven't received a response yet. We utilized the default splits and features from the LINKX paper and reported the mean accuracy over 5 runs. The results further demonstrate the effectiveness of our HDSE on large-scale heterophilic graphs. We will include the results in the revised version.
| | pokec↑ | snap-patents↑ |
| --------------- | ---------------- | ---------------- |
| LINKX | 82.04 ± 0.07 | 61.95 ± 0.12 |
| GOAT | 84.69 ± 0.18 | 62.43 ± 0.37 |
| **GOAT + HDSE** | **85.88 ± 0.33** | **63.56 ± 0.26** |
**Q1: Explanation of GHD Computation**
> Why $\mathrm{GHD}^m (\prod_{l=0}^{c-1} P^l)$ computes distances from input nodes to clusters at the $c$-level graph hierarchy?
We apologize for the confusion.
As defined in Eq.2, $\mathrm{GHD}^m \in \mathbb{R}^{|V| \times |V|}$ represents **the shortest path distance between any two *input nodes* at the $m$-level graph hierarchy.** $\forall m, \mathrm{GHD}^m$ has the same size as $\mathrm{GHD}^0$ (see illustration in Figure 1).
In Eq.7, our high-level HDSE computes, at each level $c\leq m \leq K$, distances between input nodes and clusters obtained by coarsening (i.e., super nodes at the $c$-level graph hierarchy). This is achieved by multiplying $\mathrm{GHD}^m$ by the projection matrices $\prod_{l=0}^{c-1} P^l$. In effect, it is equivalent to selecting corresponding columns from $\mathrm{GHD}^m$. For instance, referring to Figure 1, $\mathrm{GHD}^1P^0 \in \mathbb{R}^{11 \times 3}$ calculates the distances from input nodes to the super nodes at the $1$-level graph hierarchy, essentially selecting the first, fourth, and tenth columns from $\mathrm{GHD}^1$.
Likewise, $\mathrm{GHD}^m (\prod_{l=0}^{c-1} P^l) \in \mathbb{R}^{|V| \times |V^c|}$ selects $|V^c|$ columns from $\mathrm{GHD}^m$ to represent the distances, at the $m$-level graph hierarchy, between the input nodes and the $c$-level super nodes (i.e., clusters obtained through coarsening).
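For readers who want to verify the column-selection claim numerically, here is a small numpy sketch (our own toy construction, not the paper's code; dividing by the cluster sizes is an assumption we add so that multiplying by the 0/1 partition matrix reduces exactly to column selection):

```python
import numpy as np

# Toy version of the column-selection reading of Eq. 7: 6 input nodes,
# level-1 clusters {0,1,2} and {3,4,5} with representative nodes 0 and 3.
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(6, 2)).astype(float)
# GHD^1 gives the same level-1 distance for all nodes of one cluster,
# so its columns repeat within each cluster.
GHD1 = base[:, [0, 0, 0, 1, 1, 1]]               # shape (6, 6)

# Partition matrix P^0: P[i, c] = 1 iff node i belongs to cluster c.
P0 = np.zeros((6, 2))
P0[[0, 1, 2], 0] = P0[[3, 4, 5], 1] = 1.0

# Because the pooled columns are identical, GHD^1 @ P^0 (divided by the
# cluster sizes) reduces to selecting columns 0 and 3 of GHD^1, i.e., the
# distances from every input node to the two level-1 super nodes.
node_to_cluster = (GHD1 @ P0) / P0.sum(axis=0)   # shape (6, 2)
assert np.allclose(node_to_cluster, GHD1[:, [0, 3]])
```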
**Q2: Efficiency on Large-Scale Graphs**
> Is GOAT+HDSE still competitive in terms of efficiency when the graphs are larger (i.e., ogbn-products and ogbn-papers100M)?
Thank you for the suggestion. As noted in lines 214-218, GOAT also uses linear attention. The integration of HDSE does not increase the complexity of GOAT. We provide additional results on larger graphs below, further demonstrating the competitiveness of GOAT+HDSE in terms of efficiency.
| Training time per epoch | ogbn-products | ogbn-papers100M |
| ----------------------- | ------------- | --------------- |
| NodeFormer | 5.6s | 595.1s |
| SGFormer | **4.8s** | 579.4s |
| **GOAT + HDSE** | 5.3s | **446.5s** |
We have reported the empirical runtime for coarsening and the computation of GHD on ogbn-products (5min using METIS) and ogbn-papers100m (59min using METIS) in *Appendix C.4* of our manuscript. We will add the results in Table 17.
**Q3: Optimal Hyperparameters for Baselines**
> Could you clarify the optimal hyperparameters for the baselines?
We apologize for the confusion.
It's important to note that **we obtained the results for all baselines from their original papers or established leaderboards, where they were optimally tuned**.
Please also note that we adopted two distinct experimental setups:
1. For **Graph-Level Tasks**, we used **Base Model + HDSE**. The base model (e.g., GT, SAT, GraphGPS, or GRIT) was optimally tuned according to the original paper. ***We used the same hyperparameters as the base model to demonstrate the plug-and-play capability of our HDSE***.
2. For **Large-Graph Node Classification**, our model is **GOAT + HDSE**. We tuned the hyperparameters of our model within SGFormer's grid search space.
We will further clarify that in the revised version.
**Q4: Selection of Nodes to Display Attention Scores**
Thank you for your careful reading. Due to space constraints, please refer to the "Global Reply": #G1 item.
---
Rebuttal Comment 1.1:
Comment: Thanks for your detailed feedback, which has addressed some of my concerns. I have raised my scores for Soundness and Rating.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer 4HNX,
Thank you very much for taking the time to review our rebuttal! We highly appreciate your positive and insightful assessment of our work.
In case there are any additional concerns we can address, please let us know.
Best regards,
Authors | Summary: To enhance the effectiveness and scalability of graph transformers, this paper proposes a hierarchical distance structural encoding (HDSE) method to incorporate hierarchical structural information with graph transformers. Theoretical analysis of graph transformer equipped with HDSE shows the improvements of both expressiveness and generalization. Experiments have been conducted to verify the effectiveness and efficiency of the proposed method.
Strengths: 1. It is interesting and reasonable to improve the performance of graph transformer based on the hierarchical structural information of the input graph. Moreover, the proposed method is easy to be applied on existing graph transformer models.
2. Theoretical analysis are provided to guarantee the improvements of expressiveness and generalization of combining HDSE with graph transformer.
3. A technique of speeding up HDSE to large-scale graphs is provided and verified in experiments on large graphs with millions of nodes.
4. The paper is well-organized and describes the motivation and methodology clearly.
Weaknesses: 1. It can be seen from Table 4 that the performance of the proposed method varies with different coarsening algorithms and transformer backbones, which should be discussed in details.
2. The experimental settings are not described clearly. Four model + HDSE methods are used for graph-level tasks but only one GOAT + HDSE used for node classification, which makes the results slightly inconvincible.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Although combining the hierarchical structural information into graph transformer indeed improves the performance of the transformer, the improvements vary with different coarsening algorithms. Thus, can the authors provide theoretical analysis on what is the optimal hierarchical structural information and conduct more experiments on how to select suitable coarsening algorithm in practice?
2. The authors integrate HDSE into GT, SAT, GraphGPS and GRIT for graph classification and regression experiments (in Table 2), but only use GOAT + HDSE in the node classification tests (in Table 5), why? Moreover, the results of node classification on the classical datasets Cora, CiteSeer and PubMed should be reported and discussed in the Evaluation section rather than the Appendix.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Please refer to weakness and questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **We greatly appreciate your comprehensive understanding of our work and recognizing our contributions! We hope our response below will further enhance your confidence in our work.**
**W1 & Q1: Impact of Coarsening Algorithms**
>It can be seen from Table 4 that the performance of the proposed method varies with different coarsening algorithms and transformer backbones, which should be discussed in details.
>Can the authors provide theoretical analysis on what is the optimal hierarchical structural information and conduct more experiments on how to select suitable coarsening algorithm in practice?
Thank you for the insightful comment and question.
Our study on coarsening algorithms in Table 4 focuses on the ZINC dataset, where the size of graphs is typically small (around 20 nodes). The Newman algorithm exhibits optimal performance on these small graphs; however, as delineated in Appendix C.4, its high computational complexity makes it impractical for larger graphs. Therefore, **for coarsening on large graphs, we recommend using a *linear complexity* algorithm, such as METIS or Loukas**.
Following your suggestion, we conducted additional experiments to study the impact of *linear coarsening algorithms* on node classification across three datasets: Cora, CiteSeer, and PubMed. The results, as shown below, demonstrate the advantage of METIS, which is the coarsening algorithm used for node classification in our experiments.
| | Cora↑ | CiteSeer↑ | PubMed↑ |
| -------------------- | -------------- | -------------- | -------------- |
| GOAT | 82.1 ± 0.9 | 71.6 ± 1.3 | 78.9 ± 1.5 |
| GOAT + HDSE (METIS) | **83.9 ± 0.7** | **73.1 ± 0.7** | **80.6 ± 1.0** |
| GOAT + HDSE (Loukas) | 83.5 ± 0.9 | 72.5 ± 0.6 | 79.8 ± 0.9 |
Based on our observations, **we suggest using higher-complexity algorithms like Newman or Louvain for small graphs, and linear-complexity algorithms like METIS for large graphs**. We will include the discussion in the revised manuscript.
Additionally, different coarsening algorithms tend to produce different hierarchical structures, which benefit different types of graph classification tasks. For instance, preserving rings is crucial for certain tasks. Similarly, the Newman algorithm, which calculates edge betweenness and tends to identify bridges, may be particularly well-suited to networks where bridges play a critical role.
Hence, we now treat the choice of coarsening algorithm as a hyperparameter, considering the unique structures inherent to different graphs and the broad scope of various applications. This allows graph experts to tailor their analysis by selecting the most appropriate coarsening algorithm for the structures of specific graphs.
Further theoretical exploration into the optimal hierarchical structural information for different types of graphs is very interesting. We look forward to exploring this in the future.
**W2 & Q2: Clarity of Experimental Settings**
> The experimental settings are not described clearly. Four model + HDSE methods are used for graph-level tasks but only one GOAT + HDSE used for node classification, which makes the results slightly inconvincible.
>The authors integrate HDSE into GT, SAT, GraphGPS and GRIT for graph classification and regression experiments (in Table 2), but only use GOAT + HDSE in the node classification tests (in Table 5), why?
We apologize for the confusion.
Please note that we have two separate experimental settings: one for **graph-level tasks using Conventional Graph Transformers + HDSE**, another for **large-graph node classification using *Linear-attention* Graph Transformer (e.g., GOAT) + HDSE** (high-level HDSE). This distinction arises from the constraints imposed by the quadratic complexity of conventional graph transformers (like GT, SAT, GraphGPS, GRIT), which are not feasible for large-scale graphs due to out-of-memory issues.
Therefore, since **large-graph node classification necessitates the use of Linformer-style linear-attention graph transformers such as GOAT and Gapformer (see line 214)**, we use GOAT as the base model for this task. To further validate the generalizability and effectiveness of our HDSE framework, we also experimented with another linear transformer model, Gapformer, and observed promising results, as reported below.
| | Cora↑ | CiteSeer↑ | PubMed↑ |
| -------------------- | -------------- | -------------- | -------------- |
| Gapformer | 87.3 ± 0.7 | 76.2 ± 1.4 | 88.9 ± 0.4 |
| **Gapformer + HDSE** | **88.4 ± 0.7** | **76.9 ± 0.6** | **89.7 ± 0.5** |
Please note that we followed the supervised split setting (48%/32%/20% training/validation/test sets) used in the Gapformer paper. We are committed to conducting more experiments to further substantiate our findings and will incorporate the results in the revised version.
> The results of node classification on the classical datasets Cora, CiteSeer and PubMed should be reported and discussed in the Evaluation section rather than the Appendix.
We appreciate your suggestion. We placed the experiments on Cora, CiteSeer and PubMed in the Appendix due to space constraints; we will move them to the main paper given the additional space in the final version.
We will incorporate your feedback and the rebuttal discussion into the paper. Thank you once again for helping us improve our work.
---
Rebuttal Comment 1.1:
Comment: Thanks for your feedback. I think my concerns are addressed and I will maintain my positive score.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer p468,
Thank you for reaffirming your rating! We greatly appreciate your positive feedback and insights.
Best regards,
Authors | null | null | Rebuttal 1:
Rebuttal: **We express our gratitude to all reviewers for their invaluable time, effort, and the comprehensive, constructive feedback they have provided!**
--------
**G1 Additional Visualizations**
We attach a one-page PDF that contains additional visualization results as suggested by the Reviewer 4HNX. In Figure 4 of Appendix D, the node used for visualization was selected randomly. We have clearly marked this node in the **attached PDF** (Figure 1 in PDF) and will update the manuscript accordingly. Furthermore, we have conducted additional visualizations on other randomly selected nodes, also included in the **attached PDF** (Figure 1 in PDF). These visualizations confirm that different selected nodes lead to different attention weights and consistently demonstrate our HDSE's capability to capture a multi-level hierarchical structure.
--------
In the following separate responses, we address each weakness and question raised and will incorporate the suggestions and new results into our revised paper. We are happy to provide additional information if needed.
Pdf: /pdf/0384bdaa7c9c38252ea6c268cfd1c641b34bd7c2.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Do causal predictors generalize better to new domains? | Accept (spotlight) | Summary: This work aims to provide an empirical study of how well models trained on causal features generalize across domains compared to models trained on all features. The major result from this study is that, contrary to the existing understanding that using causal features can generalize well in different environments, across 16 prediction tasks, models trained on all features generally perform better than those using only causal features. A similar trend is observed when using arguably causal features. Other experimental results, such as the inconsistency in the identification of causal features using classical discovery algorithms and the accuracy performance comparison between standard ML models and causal ML models, are also provided.
Strengths: (High clarity in terms of experimental Setup and Results) This work provides a detailed exposition of how causal features were chosen for each task. The authors include a diverse set of tasks from different domains for the experiment, and the experimental procedures are clearly described. This ensures the validity of their results and is crucial for reproducibility.
Weaknesses: 1. (Quality in terms of writing):
The style and structure of this work require significant improvement. Specifically, the authors should adopt a more academic writing style and strive for greater specificity in each sentence. For instance, the sentences on lines 20, 24, 62, and 70—"But it’s less clear what to do about it," "The idea may be sound in theory," "To be sure, our findings don’t contradict the theory," and "... also discussed under the terms ‘autonomy’, ‘modularity’ or ‘stability’"—are ambiguous. The authors should clearly describe the specific theories they are referring to in these instances and precisely define what 'autonomy', 'modularity', or 'stability' mean in the context of causal machine learning.
There are many other places with similar writing issues, and I leave it to the authors to identify these and make further improvements.
2. (Quality in terms of validity of the experiment): There are several aspects of the experimental setup that are not entirely convincing or fair, which raises concerns about the reasonableness of the claims made in this work. I will raise these specific issues in the next section.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. On the choice of causal features: While I understand that the authors have tried, to the best of their knowledge, to select features that are causal, it is also very likely that these chosen features are not actually causal. Under such misspecification, it is not surprising that the accuracies, whether in-domain or out-of-domain, will suffer. Consider the case where some of the real causal features are selected together with some spurious features. Training the model on these features is expected to yield sub-optimal prediction performance. In your experiment, this seems to be the case, as the prediction errors drop significantly when you include the arguably causal features.
2. Similar to the above issue, if the domain knowledge used to select causal features was incorrect, it is expected that the features identified by the causal discovery algorithm would not significantly overlap with the hand-selected causal features. While it is understood that causal discovery algorithms are not perfect, assuming these algorithms can accurately identify causal features, any inconsistency would likely be due to incorrect hand-selected features. Based on this reasoning, I suggest that the authors create a synthetic dataset where the ground truth is known to strengthen their claims.
3. Similar to the above issue, the robustness tests seem unconvincing. Specifically, the robustness results are based on selecting a subset of features from the hand-selected causal features. However, if the original causal features are incorrect, any subset chosen from these features is unlikely to improve generalization.
4. It is uncertain whether the comparison of accuracies between standard machine learning models, causal methods, and methods tailored to domain robustness is a fair one. For instance, it is unfair to compare XGBoost, a non-linear model, with IRM, which is a linear model. Such differences in prediction performance might stem not from the methods themselves but from the different hypothesis classes and model complexities. Similarly, including methods tailored to domain robustness in the comparison can be confusing. For example, one could use the same causal features but achieve better domain generalization by employing domain robustness methods such as sharpness-aware minimization (similar in flavor to DRO), which finds flatter minima in the loss landscape to enhance generalization; this is entirely unrelated to the choice of causal features.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Although the authors include a limitations section, they do not directly address how their work might have potential limitations or potential negative societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful feedback. Below, we aim to answer the reviewer’s concerns:
(*Quality in terms of writing*)
We will revise our work and adopt a more precise writing style, as well as reference in greater detail what we are referring to.
(*Hand-selected causal features*)
> it is also very likely the case that these chosen features are not actually causal
We agree with the reviewer that the chosen features are likely misspecified. We even state a similar concern in the discussion section (“We likely made some mistakes in this classification", line 271f.).
Nevertheless, we tried out every conceivable way to obtain causal features (domain knowledge, remove single features from the ones obtained by domain knowledge, causal discovery algorithms, causal machine learning methods). If the reviewer knows another way to obtain the causal features in an empirical study, we would be happy to include it in our analysis.
> assuming these algorithms can accurately identify causal features, any inconsistency would likely be due to incorrect hand-selected features
This is true. That’s why we tried out both approaches in our study: hand-selecting based on domain knowledge and applying causal discovery algorithms. The results don’t change though.
(*Synthetic dataset*)
> I suggest that the authors create a synthetic dataset [...] to strengthen their claims.
Following your suggestion, we conducted synthetic experiments. The setup is depicted in the PDF of the author’s rebuttal. Our code is based on the synthetic study conducted by Montagna, et al., 2023. We refer the reviewer to their results for a detailed performance analysis of the causal discovery methods.
The synthetic experiments confirm our empirical findings. Using all features achieves best out-of-domain prediction accuracy. The one exception is if the distribution shift is exclusively on the anti-causal features and even in this case, a strong shift is needed before causal features achieve best out-of-domain accuracy.
(*Robustness tests*)
> if the original causal features are incorrect, any subset chosen from these features is unlikely to improve generalization
Yes, the robustness test on the causal features merely tests for misclassifying one feature as causal although it is not. To enhance our robustness test, we also tested 500 random subsets for each task. The subsets are randomly drawn from all features. See Appendix C.5. for details. We welcome any suggestions to further test the robustness of our results.
(*Empirical evaluation*)
> It is uncertain whether the comparison of accuracies [..] is a fair comparison. [...] Such differences in prediction performance might come [...] from the different hypothesis classes and model complexities.
We follow the example of preceding empirical studies when comparing standard machine learning methods, causal machine learning methods and domain robustness methods via accuracy (Gulrajani and Lopez-Paz, 2020; Miller, et al., 2021; Gardner, et al., 2023). The objective is applicability in practice. It is of separate interest to explore and disentangle the reasons for the inferior performance of certain methods, e.g., different hypothesis classes and model complexities.
> For example, one could use the same causal features but achieve better domain generalization by employing domain robustness methods
We train all feature sets in the main analysis with the same methods. In particular, we also train the causal features with domain robustness methods (DRO, GroupDRO, and an adversarial label robustness method).
> address how their work might have potential limitations or potential negative societal impacts.
We will improve our limitation section and explicitly state our limitations and potential negative societal impacts.
*References:*\
Montagna, F., Mastakouri, A., Eulig, E., Noceti, N., Rosasco, L., Janzing, D., ... & Locatello, F. (2024). Assumption violations in causal discovery and the robustness of score matching. Advances in Neural Information Processing Systems, 36.\
Gulrajani, I., & Lopez-Paz, D. (2020). In search of lost domain generalization. arXiv preprint arXiv:2007.01434.\
Miller, J. P., Taori, R., Raghunathan, A., Sagawa, S., Koh, P. W., Shankar, V., ... & Schmidt, L. (2021, July). Accuracy on the line: on the strong correlation between out-of-distribution and in-distribution generalization. In International conference on machine learning (pp. 7721-7735). PMLR.\
Gardner, J., Popovic, Z., & Schmidt, L. (2024). Benchmarking distribution shift in tabular data with tableshift. Advances in Neural Information Processing Systems, 36.
---
Rebuttal Comment 1.1:
Comment: Thank you so much for your response. However, although the authors tried every possible approach to select the features, the possibility of mis-specification of features still exists. Additionally, I am still not quite convinced about the fairness of comparing different ML models and training methods. Therefore, I will maintain my original score.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for reading the rebuttal. We appreciate your affirmation that we analyze "every possible approach to select the features", even though our explanations and additional experiments couldn't convince you. | Summary: This paper attempts to test the hypothesis of whether models trained on causal features generalize better across domains.
The authors found that predictors using all available features, both causal and non-causal, have better in-domain and out-of-domain accuracy compared to causal-features-based predictors. The authors also discuss that causal discovery algorithms perform poorly in selecting causal variables. The authors provide empirical analysis to support their claims.
Strengths: The paper is written in a well-presented manner and is easy to follow. The authors provided an extensive experimental analysis considering different real-world scenarios.
Weaknesses: Here I discuss my major concerns about the paper:
* It is unclear what type of distribution shift the authors are considering. For example, for a pair of variables (X, Y), is it (i) target shift: P(Y) changes while P(X|Y) stays fixed, (ii) conditional shift: P(X|Y) changes while P(Y) stays fixed, or (iii) covariate shift: only P(X) changes (see [1], [2] for details).
* It is unclear if the test datasets the authors considered represent sufficient domain shift. The authors should quantify more precisely (e.g., in the terms above) how much the test distributions changed compared to the train distributions.
* One possible reason for causal features not performing well might be that the test distribution is not different enough from the training distribution. For example, if we consider a dataset of birds (the Caltech-UCSD Birds-200-2011 (CUB) dataset) and use all available features including the background (water, land, forest, etc.) for training, we will probably achieve better performance (due to the background shortcut) than with only causal features. However, if we consider a test dataset with a higher domain shift, such as Waterbirds [3], a model trained on all features will perform poorly since the background feature will affect its prediction. In this scenario, causal predictors might perform well [4]. Maybe the irrelevant/non-causal features are helping because the test distributions are not significantly different. I would request the authors to share their perspectives on this possibility.
* The authors should perform an ablation study by removing one non-causal feature at a time and measuring the corresponding performance. This would show which non-causal features are improving the accuracy, and by how much. If these features are not causal, why do they improve model accuracy? This should be discussed in detail.
* If the conclusion drawn by the authors is to utilize all features for prediction, then the prediction would depend heavily on features whose distribution changes with the domain. As a result, the prediction would be highly dependent on the training domain and perform poorly in the test domain. This would prevent the model from generalizing. Please read the introduction sections of [2], [5] for details. How do the authors plan to deal with such domain dependence and achieve generalization if they use all features?
* Are the authors considering the possibility of confounders (shared unobserved parents of observed variables)? Due to the presence of confounders, we might not observe all the causal parents. For example, [5] discusses a medical scenario where the goal is to diagnose a target condition T, say lung cancer, using information about patient chest pain symptoms C and whether or not they take aspirin A. Lung cancer leads to chest pain (L -> C) and aspirin can relieve chest pain (A -> C). Smoking K (unobserved confounder) is a risk factor for lung cancer (T), and aspirin (A) is prescribed to smokers as a result: T <- [K] -> A. In such scenarios, different causal methods such as [2, 5] utilize non-causal features such as children (C) or neighbors (A) for predicting (T), as the causal parent smoking (K) is unobserved. Thus, if we are only using causal parents, it is necessary to make sure that there are no unobserved causal parents.
* The authors performed their experimental analysis on their selected datasets. They should also show their results on the datasets where causal predictors claim to perform well. Did the authors check any such datasets?
[1] Kun Zhang, Bernhard Schölkopf, Krikamol Muandet, and Zhikun Wang. Domain adaptation under target and conditional shift. In International conference on machine learning, pages 819–827. PMLR, 2013.\
[2] Lee, Kenneth, Md Musfiqur Rahman, and Murat Kocaoglu. "Finding invariant predictors efficiently via causal structure." Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence. 2023.\
[3] Sagawa, Shiori, et al. "Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization." arXiv preprint arXiv:1911.08731 (2019).\
[4] Kaur, Jivat Neet, Emre Kiciman, and Amit Sharma. "Modeling the data-generating process is necessary for out-of-distribution generalization." arXiv preprint arXiv:2206.07837 (2022).\
[5] Adarsh Subbaswamy, Peter Schulam, and Suchi Saria. Preventing failures due to dataset shift: Learning predictive models that transport. In The 22nd International Conference on Artificial Intelligence and Statistics, pages 3118–3127. PMLR, 2019.
Technical Quality: 1
Clarity: 3
Questions for Authors: Below I share my questions:
* Why are the non-causal features helping the model to perform well? Are completely irrelevant features helping as well?
* To my knowledge, the PC algorithm performs conditional independence tests to discover the undirected skeleton. How do the authors obtain causal parents from PC (Line 240)?
* Please answer and discuss the questions in the weakness sections.
Confidence: 3
Soundness: 1
Presentation: 3
Contribution: 2
Limitations: The authors clearly discussed their limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful feedback. To answer the reviewer’s concerns and questions:
(*Distribution Shift*)
> unclear what type of distribution shift
We are considering natural distribution shifts. For example, the distribution shift induced by switching between geographic regions or demographic groups. Therefore, we are likely to suffer from all three forms of distribution shift at once. A recent study by Liu, et al., 2023, shows that conditional shifts are prevalent in tabular settings.
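To make these shift types concrete, here is a minimal toy sketch (our own illustration, not part of the study's code) contrasting covariate shift, where only P(X) moves while P(Y|X) stays fixed, with concept shift, where the X-Y relationship itself changes:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sample(n, x_mean=0.0, w=2.0):
    """Toy task with X -> Y: X ~ N(x_mean, 1), Y | X ~ Bernoulli(sigmoid(w * X))."""
    x = rng.normal(x_mean, 1.0, n)
    y = (rng.random(n) < sigmoid(w * x)).astype(int)
    return x, y

x_tr, y_tr = sample(n)                 # training domain
x_cov, y_cov = sample(n, x_mean=1.0)   # covariate shift: only P(X) moves, P(Y|X) fixed
x_con, y_con = sample(n, w=-2.0)       # concept shift: P(Y|X) itself flips sign
# (Target/label shift would instead change P(Y) while keeping P(X|Y) fixed.)

# A Bayes-optimal training-domain rule: predict Y = 1 iff sigmoid(2x) > 0.5, i.e. x > 0.
acc_cov = ((x_cov > 0) == y_cov).mean()  # stays near-optimal under covariate shift
acc_con = ((x_con > 0) == y_con).mean()  # collapses far below chance under concept shift
```

The point of the toy: the same fixed predictor survives a pure covariate shift but fails badly once the conditional P(Y|X) changes, which is why identifying the type of shift matters for interpreting out-of-domain accuracy.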
> precisely numerically mention how much the test distribution changed
Following your suggestion, we added a table with details on the observed distribution shifts in the PDF of the author’s rebuttal. We adapted the metrics for target shift, conditional shift and covariate shift from Gardner, et al., 2023. See Appendix E.2 of their paper for the detailed definitions.
> Maybe the irrelevant/non-causal features are helping because the test distributions are not significantly different
We agree with the reviewer that one explanation of the empirical results is that the test distribution may not differ from the training distribution in a substantial way. In support of this argument is the fact that the training and testing domains are, for instance, different geographic regions within the U.S. bordering each other, or population groups with different education levels or racial identity.
(*Datasets*)
> Please read the introduction section of [2], [5] for details. How do the authors plan to deal with such domain dependence and achieve generalization if they use all features
Lee, et al., 2023, and Subbaswamy, et al., 2019, propose proactive algorithms, anticipating potential sources of unreliability in the data. In that context, our study should be interpreted as testing how unreliable variables in common tabular data are. In other words, do we need to act proactively when dealing with tabular data encountering natural distribution shifts? Are there domain dependencies that limit generalization?
We didn’t find empirical evidence for that within our datasets. All datasets we studied are commonly used in empirical evaluations, and most of them are currently part of a benchmark for out-of-domain generalization. Hence, we conclude that all features should be utilized for prediction on common tabular datasets. Note that our conclusion does not extend to, in that sense, uncommon datasets with known high domain dependence.
> They should also show their results on the datasets where causal predictors claim to perform well
We looked for tabular datasets with (i) public or open credentialized access and (ii) interpretable features that can be classified in ‘causal’ and ‘non-causal’. We haven’t found any dataset matching these criteria where the causal predictors performed well. We’d welcome any suggestions from the reviewer, and are happy to include these datasets in our study.
(*Ablation Study*)
> perform an ablation study by removing one non-causal feature at a time [...] Why are the non-causal features helping the model to perform well?
We conducted an ablation study and provided the results in the PDF of the author’s rebuttal. We remove anti-causal and non-causal features one at a time and measure the corresponding out-of-domain accuracy. In a later comment, we discuss in detail the non-causal features whose removal significantly dropped the out-of-domain performance.
> Are completely irrelevant features helping as well?
Our datasets are carefully curated. They are based on surveys that collect information experts in that field deem relevant. In some cases, features are selected from a multitude of variables, e.g., 23 features from the US Census data for predicting income. In addition, our datasets cover applications dealing with complex social dynamics (health, employment, politics,...). Therefore, it is hard to declare any feature as completely irrelevant.
(*Confounders*)
> Are the authors considering the possibility of confounders?
We strongly believe that there are confounders within our datasets. Their existence is another plausible explanation for the inferior performance of the causal features. We do not model the data generating processes precisely enough to employ methods proposed by Lee, et al, 2023, and Subbaswamy, et al, 2019.
(*PC Algorithm*)
> How do the authors obtain causal parents from PC (Line 240)?
The PC algorithm estimates a completed partially directed acyclic graph (CPDAG), see Figure 33 in Appendix C.4 as an example. We use the implementation from the R package ‘pcalg’.
The edges connecting the target with other features are directed for the tasks ‘Food Stamps’, ‘Income’ and ‘Unemployment’. Therefore, we can readily identify these features as estimated causal parents.
In the task ‘Diabetes’, the target is only connected to high blood pressure via an undirected edge. We treat high blood pressure as a potential causal parent. If deemed necessary, we can also include all skeletons as figures in our appendix.
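The parent-extraction step can be sketched as follows. This is our own illustrative encoding, not the one the study uses (the paper relies on the R package ‘pcalg’, and adjacency-matrix conventions differ between packages, so check the package documentation): here `A[i][j] = 1` encodes an edge mark from `i` to `j`, and an undirected edge is stored symmetrically.

```python
import numpy as np

def parents_from_cpdag(A, target):
    """Split the target's neighbours in a CPDAG into directed parents
    (i -> target) and undirected neighbours (i - target); the latter
    are only *potential* parents."""
    A = np.asarray(A)
    directed = [i for i in range(len(A)) if A[i, target] and not A[target, i]]
    undirected = [i for i in range(len(A)) if A[i, target] and A[target, i]]
    return directed, undirected

# Toy CPDAG over (X0, X1, X2, Y) with the target Y at index 3:
# X0 -> Y, X1 - Y (undirected), X2 not adjacent to Y.
A = [[0, 0, 0, 1],
     [0, 0, 0, 1],
     [0, 0, 0, 0],
     [0, 1, 0, 0]]
print(parents_from_cpdag(A, target=3))  # ([0], [1])
```

This mirrors the treatment described above: X0 is an estimated causal parent, while X1 (like high blood pressure in the ‘Diabetes’ task) is kept as a potential parent because its edge is undirected.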
*References:*\
Liu, J., Wang, T., Cui, P., & Namkoong, H. (2024). On the need for a language describing distribution shifts: Illustrations on tabular datasets. Advances in Neural Information Processing Systems, 36.
---
Rebuttal 2:
Title: Comments on Ablation Study
Comment: We discuss the non-causal features whose removal significantly dropped the out-of-domain performance. We split by task.
*Food Stamps* (food stamp recipiency in past year for households with child across geographic region):
- Relationship to reference person: There could be a stable and informative correlation within the survey of US Census between kind of household members (encoded in relationship to the reference person/head of the household, e.g., multiple generation household vs roommates) and food stamp recipiency. We didn’t classify this variable as causal, as it’s survey related.
*Income* (income level across geographic regions)
- Relationship to reference person: same argument applies.
- Marital status: Marital status and personal income are both intricately linked with socio-economic status, although we haven’t found any research of causally linking them together.
- Insurance through a current or former employer or union / Medicare for people 65 or older, or people with certain disabilities: These insurances are benefits not tied to income, but rather the person’s employer or age and medical condition. They are however indicative of the economic and social environment of the individual, which helps to classify the income level.
- Year: The year, e.g., 2018, encodes information about the economic status, which may be predictive across geographic regions.
*Public Coverage* (public coverage of non-Medicare eligible low-income individuals across disability status):
- State/year: The current state of living and year encode information about the economic status.
*Voting* (voted in the US presidential elections across geographic regions):
- Party preference on specific topics, e.g. pollution
- Opinion on party inclinations, e.g., which party favors stronger government
- Opinion on sensitive topics, e.g., abortion, religion, gun control
The opinions/preferences of an individual may sort them into specific sub-groups of the population, wherein civic duty is or is not prominent. It is conceivable that similar sub-groups form across geographic regions.
*Hypertension* (High blood pressure across BMI categories):
- State: The current state of living encodes information about the socio-economic status, which research linked to hypertension in several studies (Leng, 2015).
*Sepsis* (sepsis across length of stay in ICU):
- Hospital: Hospitals serve different groups of the population, which may be correlated with different risks of developing sepsis.
*References:*\
Leng, B., Jin, Y., Li, G., Chen, L., & Jin, N. (2015). Socioeconomic status and hypertension: a meta-analysis. Journal of hypertension, 33(2), 221-229.
---
Rebuttal 3:
Comment: Thanks to the authors for their efforts in their submitted manuscript and their detailed rebuttal. I apologize for replying late. Below I share my concerns based on the authors’ responses.
1.
>The authors mentioned that their considered datasets are likely to suffer from all three forms of distribution shift at once. They provided covariate shift, concept shift and label shift for all datasets.

However, in my opinion, these shifts should be measured for specific variables (weakness 1) instead of the whole dataset. For example: the conditional distribution of which variable changes the most compared to other variables when a new domain is considered? That would explain the source of the shift and hint at why a predictor performs well or badly. This information is more insightful than a single number for the whole dataset.
2.
>The authors mentioned, “do we need to act proactive when dealing with tabular data encountering natural distribution shifts? Are there domain dependences that limit the generalization?”
I understand the authors’ arguments about causal predictors performing badly. However, my perspective is that if a predictor using all the features does not perform badly in their considered new domain, even in the presence of different types of shifts, should we consider that a new domain at all? Does it contain a significant amount of shift to be considered a new domain?
3.
>Hence, we conclude to utilize all features for prediction in common tabular datasets.
Again, I understand the authors’ claim about the causal predictor performing badly on the considered common datasets, and the fact that using all features gave better accuracy. However, many existing works show how the prediction becomes domain specific or highly dependent on sensitive variables if we condition on all variables. Are we proposing a method with new failure modes?
4.
> We strongly believe that there are confounders within our datasets.
To me, the role of confounders was not explicitly mentioned in the main paper. It should be explicitly mentioned as one of the possible reasons of why causal features do not perform well.
5. A question to the authors:
How does a predictor perform better in new domains if the prediction is dependent on “State/Year/Hospital”? Aren’t these exactly the kind of variables that create new domains? This means that the trained predictor would be domain specific and fail when deployed in a different state/year/hospital.
6. A final message to the authors is that their empirical results are very impressive. However, these results should not mislead readers about using causal features for prediction tasks. Different assumptions and possibilities (e.g., the presence of confounders) should be explicitly mentioned.
Based on authors’ responses and the discussion with other reviewers, I would consider changing my scores.
---
Rebuttal Comment 3.1:
Comment: Thanks for reading the rebuttal. We appreciate that you find our empirical results "very impressive". These are, in fact, our main contribution. We're happy to add additional clarification and discussion along the lines of what you suggest. In particular, we will detail the conditional distribution shift on a variable level. We will emphasize more strongly that we adapt the choice of domains from a recent benchmark and that demonstrating the utility of causal methods likely requires other benchmark datasets than the ones currently available (see line 279f.). We will also explicitly mention confounders as one of the possible reasons why the causal features do not perform well.
Strengths: - Comprehensive empirical analysis with 16 diverse tasks across domains
- Robustness checks confirm the stability of results
- Challenging an existing assumption contributes valuable insights for the field
- Great detail in the appendix.
Weaknesses: - It may have been my mistake, but it took me a while to grasp how the domain split is done. Maybe this can be described more clearly.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Anti-causal improves out-of-domain performance of arguably causal features (ll 221): Do you have an explanation/hypothesis?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: - Feature classification into causal/non-causal is somewhat subjective (the authors do acknowledge this fact).
- The study only provides empirical results. It does so very carefully, but still, this makes it hard to make any more general claims.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their helpful comments and positive feedback! To answer the reviewer’s comments:
> how the domain split is done [...] can be described more clearly
Thank you for drawing our attention to that. We will state how we obtain the domain splits more clearly.
> Anti-causal improves out-of-domain performance of arguably causal features (ll 221): Do you have an explanation/hypothesis?
A reasonable explanation is that the relationship of the anti-causal feature and target does not change strongly between domains. For example, patients with diabetes have a considerably increased risk of cardiovascular disease. Their relationship is partly explained by biomedical mechanisms, e.g., dyslipidemia (abnormal levels of lipids in the bloodstream) is a common feature of diabetes and poses a significant risk factor for cardiovascular diseases (Schofield, et al., 2016). It is conceivable that these mechanisms are stable across domains (population groups with different racial identity).
> The study only provides empirical results [...] hard to make any more general claims.
To address your concerns, we conducted synthetic experiments to test how far our claims generalize. The setup is depicted in the PDF of the author’s rebuttal. Our code is based on the synthetic study conducted by Montagna, et al., 2023.
The synthetic experiments confirm our empirical findings. Using all features achieves best out-of-domain prediction accuracy. The one exception is if the distribution shift is exclusively on the anti-causal features and even in this case, a strong shift is needed before causal features achieve best out-of-domain accuracy.
*References:*\
Schofield, J. D., Liu, Y., Rao-Balakrishna, P., Malik, R. A., & Soran, H. (2016). Diabetes dyslipidemia. Diabetes therapy, 7, 203-219.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response and additional explanations. My vote remains and I hope to see this paper accepted.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for reading the rebuttal, and appreciate their continued support of our paper! | Summary: In this paper, an extensive evaluation of ML methods with different feature sets is performed.
Only tabular data is considered, and different feature sets are compared: all features, arguably causal features, and causal features.
In the experiments, no advantage of using causal features is shown for the domain generalization task.
Strengths: - very useful, inspiring results which will probably spark new research directions and/or rebuttals.
- very well-written, clear and well-structured paper
- code available and well-described experiment
- interesting "negative" results for an interesting question which is often stated as a "strength" of causal machine learning models.
Weaknesses: - maybe the number of different benchmarks/tasks could be higher, but I think that it is sufficient as it is
- maybe simulation experiments? (see questions)
Technical Quality: 3
Clarity: 4
Questions for Authors: - It would be interesting to add an experiment with simulated data, with varying degrees of difference in the out-of-distribution data, so as to probe the theoretical ideas behind the claim that causal features should generalize better to unseen domain (something similar to what is done in anchor-regression literature).
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 9
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their helpful comments and amazing feedback! To answer the reviewer’s question:
> add an experiment with simulated data, with varying degrees of difference in the out-of-distribution data [...] (something similar to what is done in anchor-regression literature)
Following your suggestion, we conducted synthetic experiments. The setup is depicted in the PDF of the author’s rebuttal. Similar to Rothenhäusler, et al., 2021, we vary the degree of difference in the out-of-domain data using shift intervention on target, features and confounders.
The synthetic experiments confirm our empirical findings. Using all features achieves best out-of-domain prediction accuracy. The one exception is if the distribution shift is exclusively on the anti-causal features and even in this case, a strong shift is needed before causal features achieve best out-of-domain accuracy.
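A stripped-down version of such a shift-intervention experiment can be sketched as follows. This is a hypothetical toy, not the study's actual code: one causal and one anti-causal feature, a logistic-regression classifier, and a shift intervention on the anti-causal mechanism only, i.e. exactly the exception case described above where causal features win.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n, shift_anti=0.0):
    """Toy linear SCM Xc -> Y* -> Xa with label Y = 1[Y* > 0].
    `shift_anti` is a shift intervention on the anti-causal mechanism
    in the test domain (in the spirit of Rothenhäusler, et al., 2021)."""
    xc = rng.normal(0.0, 1.0, n)                              # causal feature
    y_star = 0.8 * xc + rng.normal(0.0, 1.0, n)               # latent target
    xa = 0.8 * y_star + shift_anti + rng.normal(0.0, 1.0, n)  # anti-causal feature
    return np.column_stack([xc, xa]), (y_star > 0).astype(int)

X_tr, y_tr = sample(20_000)
X_te, y_te = sample(20_000, shift_anti=5.0)  # strong shift on Xa only

all_feats = LogisticRegression().fit(X_tr, y_tr)
causal_only = LogisticRegression().fit(X_tr[:, :1], y_tr)

acc_all = all_feats.score(X_te, y_te)
acc_causal = causal_only.score(X_te[:, :1], y_te)
# Under a strong shift on the anti-causal feature alone, the causal-only
# predictor stays stable while the all-features model leans on Xa and degrades.
```

With a mild or absent shift on `Xa` the ordering reverses and the all-features model wins, which matches the overall finding of the synthetic study.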
*References:*\
Rothenhäusler, D., Meinshausen, N., Bühlmann, P., & Peters, J. (2021). Anchor regression: Heterogeneous data meet causality. Journal of the Royal Statistical Society Series B: Statistical Methodology, 83(2), 215-246. | Rebuttal 1:
Rebuttal: Dear Reviewers,
Thank you for your thoughtful comments and suggestions!
(*Contribution*) We are encouraged that you found our work has "very useful, inspiring results which will probably spark new research directions and/or rebuttals" (MY3d) and "contributes valuable insights for the field" (bBCb). Our separation of features into causal, arguably causal, anti-causal and non-causal is praised as "reasonable and human-like (i.e. that's how everyone should cluster features before training a predictive model)" (yvyj).
(*Soundness*) We are particularly pleased that our experiments are described as "extensive" (QBUN) and "comprehensive"(bBCb). The reviewers appreciated that we used a "diverse set of tasks from different domains" (6nV6), and concluded that this makes our findings "more applicable to real-world scenarios" (yvyj). Our robustness checks are perceived to "confirm the stability of results" (bBCb) and make our conclusions "seem trustworthy" (yvyj).
(*Presentation*) Our work is praised to have "high clarity in terms of experimental setup and results" (6nV6), be a "well-described experiment" (MY3d) and have "great detail in the appendix" (bBCb). The reviewers highlighted that this "ensures the validity of their results and is crucial for reproducibility" (6nV6).
We have also taken your feedback into account and made the following key changes to improve our paper:
1. (*Synthetic experiments*)\
Following your suggestion (6nV6, bBCb, MY3d), we conducted synthetic experiments. The setup is depicted in the PDF of the author’s rebuttal. The causal mechanisms are modeled as (i) linear with weights randomly drawn in (-1,1) and (ii) based on a neural network with random instantiation. The noise variables are drawn from a standard normal distribution. The task is to classify whether the target is larger than 0.\
\
Similar to Rothenhäusler et al. (2021), we vary the degree of domain shift using shift interventions on the target, features, and confounders, as proposed by (MY3d). We draw 1,000 training samples from the causal mechanism and evaluate the performance on 1,000 testing samples from the intervened causal mechanism, with shift interventions varying from 0 to 10 in steps of 0.1. Our code is based on the synthetic study conducted by Montagna et al. (2024).\
\
The synthetic experiments confirm our empirical findings. Using all features achieves the best out-of-domain prediction accuracy. The one exception arises when the distribution shift is exclusively on the anti-causal features, and even in this case a strong shift is needed before causal features achieve the best out-of-domain accuracy. We highlighted the plots where the exception occurs.
2. (*Strength of distribution shift*)\
To meet your requests (QBUN, yvyj), we added a table with details on the observed distribution shifts in the PDF of the author’s rebuttal. We adapted the metrics for target shift, conditional shift, and covariate shift from Gardner et al. (2024). See Appendix E.2 of their paper for the detailed definitions.
3. (*Ablation study*)\
As you suggested (QBUN), we conducted an ablation study and provided the results in the PDF of the author’s rebuttal. We remove anti-causal and non-causal features one at a time and measure the corresponding out-of-domain accuracy. We will discuss in detail the non-causal features whose removal significantly reduced out-of-domain performance and try to give explanations.
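To make the synthetic setup in point 1 concrete, here is a minimal sketch of the linear variant (our own simplification for illustration; the actual experiments also include the neural-network mechanism and shift interventions on features and confounders, not only on the target):

```python
import numpy as np

rng = np.random.default_rng(0)
# Causal-mechanism weights, drawn once in (-1, 1) and shared across domains.
w_cx, w_cy, w_xy, w_yx = rng.uniform(-1, 1, 4)

def sample_domain(n, target_shift=0.0):
    """Sample n points from a toy linear SCM with a confounder c, a causal
    feature, and an anti-causal feature; `target_shift` is a shift
    intervention on the continuous target before thresholding."""
    c = rng.standard_normal(n)                        # confounder
    x_causal = w_cx * c + rng.standard_normal(n)      # causal feature
    y_cont = w_cy * c + w_xy * x_causal + rng.standard_normal(n) + target_shift
    x_anti = w_yx * y_cont + rng.standard_normal(n)   # anti-causal feature
    y = (y_cont > 0).astype(int)                      # task: is the target > 0?
    return np.column_stack([x_causal, x_anti, c]), y

X_train, y_train = sample_domain(1000)                  # in-domain training data
X_test, y_test = sample_domain(1000, target_shift=2.0)  # shifted test domain
```

Sweeping `target_shift` from 0 to 10 in steps of 0.1 and refitting a classifier on subsets of the columns reproduces the qualitative comparison between causal-only and all-feature predictors.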
We hope these updates address any concerns expressed by the reviewers — we are happy to respond to any additional concerns that might arise during the Author Reviewer Discussion period!
Best regards,\
The authors of #16003
*References:*\
Gardner, J., Popovic, Z., & Schmidt, L. (2024). Benchmarking distribution shift in tabular data with TableShift. Advances in Neural Information Processing Systems, 36.\
Montagna, F., Mastakouri, A., Eulig, E., Noceti, N., Rosasco, L., Janzing, D., ... & Locatello, F. (2024). Assumption violations in causal discovery and the robustness of score matching. Advances in Neural Information Processing Systems, 36.\
Rothenhäusler, D., Meinshausen, N., Bühlmann, P., & Peters, J. (2021). Anchor regression: Heterogeneous data meet causality. Journal of the Royal Statistical Society Series B: Statistical Methodology, 83(2), 215-246.
Pdf: /pdf/a221463f8942d04878c73db236732e8930cba1d4.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The authors provide a thorough benchmark and analysis of machine learning models trained on different features to generalize to unseen domains. They use tabular datasets from various fields, including health, employment, education, etc., and categorize features into groups ranging from causal to anti-causal to test their influence on the model performances. Their key finding is that models utilizing all available features, irrespective of their causal nature, achieve better in-domain and out-of-domain accuracy than those relying solely on causal features across a battery of methods. This work challenges the practical applicability of theoretical causal advantages in real-world tabular data scenarios.
Strengths: - very clear and accurate writing style, intuitive presentation of results in the form of figures, clear summary statements
- the authors use a variety of datasets from different areas like health, employment, and politics, which makes the findings more applicable to real-world scenarios
- the separation of features into the four categories, seems reasonable and human-like (i.e. that's how everyone should cluster features before training a predictive model).
- the large battery of models is great
- the authors conducted thorough checks to ensure their results are robust, making their conclusions seem trustworthy, and try to give explanations why certain results are observed
Weaknesses: - Could you add the sample size to Table 1?
- The explanations about the results seem to be a bit short. It would be great to elaborate a bit more and interpret the results wrt to the strength of the domain shift (e.g. it's obvious that anti-causal features are predictive as long as the shift is small)
Typos:
- Figure 1: Predictors based ON all features
- L63: theortical -> theoretical
- L108: that THE distribution
- L172: Remove "a"
- L214: considerably
- L275: knowlege -> knowledge
Technical Quality: 3
Clarity: 4
Questions for Authors: - I would be curious to see if these findings also translate to genomics, where the feature size is huge, and a subset of genes has been discovered as causal for certain phenotypes. Have you considered this?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors adequately addressed the limitations!
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their helpful comments and positive feedback! We will fix the typos, thanks for pointing them out to us. To answer the reviewer’s questions:
> add the sample size to Table 1?
Yes, we will add the sample sizes to Table 1. Note that they are currently provided in Table 6 in the Appendix.
> elaborate a bit more and interpret the results wrt to the strength of the domain shift
We added a table with details on the strength of the domain shifts in the PDF of the author’s rebuttal. We adapted the metrics for target shift, conditional shift, and covariate shift from Gardner et al. (2024).
We will explain and interpret the results in more detail, taking into account the strengths of the domain shifts.
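For concreteness, one simple proxy in this spirit (our own simplification for illustration, not necessarily the exact definitions from Gardner et al.) measures target shift as the total variation distance between the label marginals of the two domains:

```python
import numpy as np

def target_shift(y_src, y_tgt):
    """Total variation distance between the label marginals of the source
    and target domains: 0 means identical marginals, 1 means disjoint."""
    labels = np.union1d(y_src, y_tgt)
    p = np.array([(y_src == lab).mean() for lab in labels])
    q = np.array([(y_tgt == lab).mean() for lab in labels])
    return 0.5 * float(np.abs(p - q).sum())
```

Conditional and covariate shift are harder to summarize in a single statistic; their versions are defined in Appendix E.2 of the cited paper.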
> these findings also translate to genomics [...] Have you considered this?
No, we have focused our attention on datasets with easily interpretable features so far. If the reviewer has a suggestion on a plausible task based on genomics data and an appropriate natural distribution shift, we’d be happy to include that in our analysis.
---
Rebuttal Comment 1.1:
Title: Reply #1
Comment: Thank you for the updates.
>Yes, we will add the sample sizes to Table 1. Note that they are currently provided in Table 6 in the Appendix.
Sorry, I did not see that.
I'll leave my score as is and hope this paper gets into the conference.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for reading the rebuttal, and appreciate their vote of confidence! | null | null | null | null | null | null |
Interfacing Foundation Models' Embeddings | Accept (poster) | Summary: The paper proposes to construct an interface to connect the embeddings and predictions from different foundation models. With the designed interface, the overall system has a promise to interleave any modality in a flexible manner. To showcase this flexibility, this paper constructs a benchmark, FIND, by leveraging existing COCO labels and text-only GPT-4 model. As a generalist model, the paper shows that FIND generally performs better than existing generalist model and sometimes even specialized model.
Strengths: - The paper idea is novel and promising if extended well.
- The paper conducts an extensive experiments on COCO and compares with valid baselines.
Weaknesses: Method and experiment
- Lack of baseline of instruction-tuned vision-language models, such as LLaVA.
- Lack of out-of-domain evaluation. The advantage of foundation models is that they are more likely to generalize to unseen domains due to their scale. However, it is unclear whether the COCO-trained interface is able to generalize to other domains.
Writing to improve
- $\textbf{sim}$ is first introduced in L109 and is later explained in L149
- Figure 3 is not self-contained. It’s hard to follow what E, O mean. There are too many notations and too little text to understand the tasks and how the task unification works here.
- In Figure 4 (center), the caption says “the shape of different polygons represents different embedding types” I cannot find where in the text specifying different “types of embeddings”.
- Missing citations in L206,207 after each model
- Where is “interleaved segmentation” in Table 3 mentioned in L256
Technical Quality: 3
Clarity: 1
Questions for Authors: Benchmark
- From Table 1, it is not clear to me how the text-only data engine resolves the cases where there are multiple instances of the same class. For example, in `Prompt for GPT4 Engine`, say the image contains ten mugs on a table. How does the data engine resolve the mapping between the bounding box of each mug and the captions?
Method
- How does the embedding sampler work for LLaMA? Specifically, which embedding is used? Is it the last hidden state of the last token? Do the authors have any insights into extracting the embeddings from an auto-regressive model? It is non-trivial to use the embeddings of an auto-regressive model; this paper [1] specifically ablates the design choice.
- Does the approach involve any similarity calculation? In Section 3.2.1 (task unification), the authors use a similarity measurement to connect the source and target domains, while in Fig. 4, the FIND interface appears to rely purely on a transformer network followed by several projection heads. Can the authors clarify the method?
Experiments
- In Table 2
- Why are SEEM, X-Decoder, and BLIP-2 marked as not “jointly trained” (the third column)? Please clarify.
- What does * mean? It’s shown in the first column and the seventh column (mIoU for RefCOCO-g).
- In section 4.1, the authors claim that the less optimal image-text retrieval results are due to the batch size during fine-tuning. In L245-248, I am very confused with the justification. Here are my questions:
- What resolution FIND trained with?
- “Pilot experiments with X-decoder showed … well across tasks.” Is this described in the X-Decoder paper or is it reproduced by the authors? Also, I checked Section 4.1 in the X-Decoder paper; I believe the authors want to say “1024 for segmentation and 224 for image-text data”.
- I am not sure if the authors are claiming that the main issue is the batch size or the resolutions. If it’s the batch size, why not use gradient accumulation?
[1] Vision-Language Models Provide Promptable Representations for Reinforcement Learning
Confidence: 5
Soundness: 3
Presentation: 1
Contribution: 2
Limitations: The limitations include
1. Presentation is not clear enough and has a lot to clarify
2. The paper works around foundation models but it's limited to COCO dataset.
3. The benchmark data engine requires more qualitative examples and justification of the design choices.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **We sincerely appreciate your thorough and comprehensive reading of our paper.** We understand your frustration with the confusing implementation details despite your diligent effort. We apologize for any confusion caused. **We kindly ask that you consider the paper from a higher-level perspective**, as highlighted by the strengths noted by Reviewer-kGcR and Reviewer-iZKM. Meanwhile, we will address these constructive detailed comments in the camera-ready version if any.
[Q1] Lack of Instruction baseline e.g. LLaVA.
In **[Common Question 1]**, we provide a comprehensive discussion on why instruction-based methods are not comparable to our approach, considering both the output type and the specific problem we are addressing. Additionally, in **[Common Question 3]**, we thoroughly explain the importance of multimodal understanding and mapping in the steerable era.
[Q2] Lack of out-of-domain evaluation.
Thank you for pointing this out. We have created a new benchmark on SAMv2, using an improved protocol over the one outlined in Table.1 for interleave grounding (we have illustrated this improved protocol in the rebuttal PDF). This benchmark is compared with SoM + GPT-4o, which serves as a high-standard evaluation protocol; the SoM marks are generated with our strongest SEEM model.
→ SAMv2 Benchmark statistics.
| Dataset | # Images (videos) | # Instances | Avg Text Length |
| --- | --- | --- | --- |
| sav_val | 136 | 532 | 221 |
| sav_test | 116 | 451 | 209 |
→ SAMv2 Experiment results.
| | sav_val | | | sav_test | | | |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Model | mIoU | cIoU | AP50 | mIoU | cIoU | AP50 | Latency |
| FIND_Focal-T_LLaMA_640 | 58.9 | 63.7 | 65.6 | 57.8 | 59.4 | 66.5 | 0.0809 s/iter |
| FIND_Focal-T_UniCL_640 | 58.3 | 65.7 | 65.9 | 60.1 | 60.3 | 68.7 | 0.0528 s/iter |
| FIND_Davit-d5_LLaMA_640 | 62.5 | 69.8 | 71.6 | 60.2 | 64.6 | 71.3 | 0.1978 s/iter |
| FIND_Davit-d5_UniCL_640 | 61.7 | 66.0 | 72.3 | 62.0 | 65.6 | 72.2 | 0.1699 s/iter |
| SoM + GPT4o | 75.3 | 83.6 | 75.3 | 76.7 | 81.2 | 76.7 | ~10 s/iter |
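For reference, the two IoU variants reported above can be computed as follows (a standard sketch of these metrics, not the exact evaluation code): mIoU averages per-instance IoUs, while cIoU accumulates intersections and unions before dividing.

```python
import numpy as np

def miou_ciou(pred_masks, gt_masks):
    """pred_masks/gt_masks: lists of boolean arrays of matching shape.
    Returns (mIoU, cIoU): the mean of per-instance IoUs, and the cumulative
    IoU, i.e. total intersection / total union over all instances."""
    ious, inter_sum, union_sum = [], 0, 0
    for pred, gt in zip(pred_masks, gt_masks):
        inter = np.logical_and(pred, gt).sum()
        union = np.logical_or(pred, gt).sum()
        ious.append(inter / union if union > 0 else 1.0)
        inter_sum += inter
        union_sum += union
    return float(np.mean(ious)), float(inter_sum / union_sum)
```

cIoU weights large instances more heavily, which is why the two numbers can diverge in the table above.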
[Q3] Sim is first introduced in L109 and is later explained in L149.
We apologize for the confusion. We will add a reference the first time we use the term “sim.” Initially, we did not include it because “sim” seemed straightforward to understand.
[Q4] Hard to follow what E, O mean in Fig.3.
We have clearly defined E in L103 and O in L110 of the main paper.
[Q5] Too many notations and too little text to understand the tasks and how the task unification works here.
The mathematical symbols are carefully curated to abstract and unify the tasks, as highlighted by Reviewer-kGcR in strength 3: “The writing and visualizations are clear and well-presented.” We provide a comprehensive textual explanation of task unification in the rebuttal PDF, section xxx.
[Q6] Cannot find where specifying different “types of embeddings” in Fig.4.
The types of prompts and queries are clearly labeled in Fig. 4. You can find the text “Vision, Language, Interleave Prompts” directly above the pentagon (5-edge-polygon), and “Queries” above the gray square.
[Q7] Missing citations in L206-207 after each model.
Thanks so much for pointing this out, we will add the citation in the camera-ready version if any.
[Q8] What is interleaved segmentation in Table.3.
Thanks for pointing this out. “Interleaved segmentation” actually means interleave grounding, as defined in L110. We will fix this in the camera-ready version if any.
[Q9] How does the text-only data engine resolve the cases where there are multiple instances within the same classes?
- In Table. 1, **SI** contains object descriptions generated by LLaVA, together with bounding boxes and annotation IDs, so the GPT4 data engine is able to identify each object’s appearance. We show an example resolving the confusion around the “ten-mug problem” in the section [Qualitative examples on FIND-COCO-Bench] of the **rebuttal PDF**.
- In addition, for the newly created SAMv2 benchmark, we are using SoM [1] + GPT4-V to further reduce object hallucination. We show 7 random examples for FIND-SAMv2-Bench in the **rebuttal PDF**.
[Q10] How does the embedding sampler work for Llama.
For language embedding, we use identity sampling for all language embeddings. Please refer to [Common Question 2] for more details.
[Q11] Any insights into extracting the embeddings from an auto-regressive model?
Yes, this is one of our main contributions, clearly stated in L49-50 and L66. Additionally, Table 5 shows that the features from layer -12 of LLaMA align best with the visual representation in terms of semantic meaning. We have demonstrated this insight in L290-295.
[Q12] Related work “Vision-Language Models Provide Promptable Representations for Reinforcement Learning”.
- The related work uses the last layer embeddings from LLMs as a knowledge prior for policy generation. They ablate the design choices of “No Prompt; No Generation; Change Aux.; Text Oracle Detector” in Table 8, which are very different in content and objective from our approach.
- The referenced paper is a concurrent work with the NeurIPS template, updated in May 2024 on ArXiv.
[Q13] Does the approach involve any similarity calculation?
Task unification is achieved by treating all tasks as similarity mapping problems. This similarity mapping is applied after “the transformer network and several projection heads”; it is used to compute the loss and identify the output. We have provided a use case study in the rebuttal PDF, Sec.xxx, demonstrating how similarity mapping is applied to interleave grounding.
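As a minimal sketch of what “treating tasks as similarity mapping” can look like (illustrative only; the random projection matrices here stand in for FIND’s learned projection heads applied after the transformer), each query embedding is projected into a shared space and matched to the candidate with the highest cosine similarity:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_shared = 64, 32
# Stand-ins for learned projection heads applied after the transformer.
W_query = rng.standard_normal((d_model, d_shared))
W_cand = rng.standard_normal((d_model, d_shared))

def similarity_mapping(queries, candidates):
    """Project queries and candidates into a shared space, L2-normalize,
    and return the best-matching candidate index per query."""
    q = queries @ W_query
    c = candidates @ W_cand
    q = q / np.linalg.norm(q, axis=-1, keepdims=True)
    c = c / np.linalg.norm(c, axis=-1, keepdims=True)
    sim = q @ c.T                # cosine similarities [n_queries, n_candidates]
    return sim.argmax(axis=-1)   # identify the output (e.g., which mask or image)

# At training time the same similarity matrix feeds the loss;
# at inference the argmax identifies the grounded output.
matches = similarity_mapping(rng.standard_normal((3, d_model)),
                             rng.standard_normal((5, d_model)))
```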
[Q14] What is jointly trained?
Joint training indicates whether the model is jointly trained on all the tasks, with the numbers indicated in Table.2. We apologize for the typo; X-Decoder and SEEM should be marked as jointly trained.
[Q15] What does * mean in Table.2 for mIoU?
We apologize for the confusion. * means the model could be evaluated on this metric, but the number was not reported in the original paper, so we did not include it.
---
Rebuttal 2:
Comment: [Q16] What resolution FIND trained with?
We have clearly stated in L245-248:
*“In Table 2, models are either 384x384 with batch size 384 or 1024x1024 with batch size 192 for all tasks. Other tables show results with a 640x640 training resolution and a 192-batch size.”*
[Q17] “Pilot experiments with X-decoder showed … well across tasks.” Is this described in X-decode paper or is it reproduced by the authors?
The experiments are produced by the authors.
[Q18] Presentation is not clear.
We hope you have a better understanding after the rebuttal and will update the camera-ready version if needed. As Reviewer-kGcR noted, “The writing and visualizations are clear and well-presented.”
[Q19] Limitation to COCO dataset.
Thank you for the suggestion. We have addressed this by introducing the new FIND-SAMv2-Bench. Please refer to [Q2] for more details.
[Q20] Need more qualitative examples for the data engine and justifications.
We have provided additional justifications for the design in our response to [Q9], and more qualitative examples are included in the **rebuttal PDF**.
---
Rebuttal 3:
Comment: Thanks for the detailed clarification.
I read through the rebuttal. Overall, I start to appreciate the idea of the proposed framework *in high-level*, but I need some time to put the submission and the clarification in the rebuttal together to refine my understanding.
To facilitate the discussion, I want to start by adding a follow-up question:
1. I appreciate the effort for the new dataset [Q2]. Can the author give some interpretations of the provided table?
---
Rebuttal Comment 3.1:
Comment: We again appreciate that your review **actually helped us rethink our model in a more logical way** than ever, and we hope ***the following content can facilitate your understanding toward the final decision***, because we do believe our work is *novel and inspiring for future work.* We have also aligned your questions with the structure provided below.
> **High-Level Motivation**
>
1. We aim to align embeddings across different granularities (e.g., segments and images for visual understanding; paragraphs and phrases for language) within foundation models like X-Decoder and LLaMA.
2. We seek to align embeddings between foundation models across modalities (e.g., Vision and Language), consistent with the Platonic representation hypothesis.
3. We aim for the aligned embeddings to communicate seamlessly and efficiently.
4. We want to identify the optimal design choices for embedding extraction from different foundation models.
5. We believe all understanding tasks fall under the broader scope of information mapping. → [Q3, Q13]
> **Interface Design aligned with Motivation**
>
1. We design the interface to incorporate queries and prompts that unify granularity within a foundation model. Specifically, queries can attend to an arbitrary number of prompts through content attention, enabling unified granularity embeddings. → [Q6, Q10]
2. Additionally, since queries can attend to prompts across different modalities through content attention, we align these modalities within the same representation space via queries. → [Q6, Q10]
3. The condition attention allows queries that span granularity and modality to communicate efficiently. → [Q4]
4. We utilize object queries for vision embeddings and exhaustively test which layer embeddings are optimal for language models like LLaMA. → [Q11]
5. Projection and similarity mappings enable the unification of all the recognition tasks under a consistent framework. → [Q5, Q13]
> **Experiments proves the Design choice and Motivation**
>
We evaluated generic segmentation, interactive segmentation (Table 2, Main Paper), and text-paragraph grounding (Fig. 1, Second Row, Main Paper) to demonstrate unification across granularity within the foundation model, in both vision and language.
1. We assessed grounded segmentation and image-text retrieval (Table 2, Main Paper) to validate cross-modality alignment (vision and language). Additionally, we confirmed that this alignment is invariant across foundation models trained independently (Table 4, Main Paper).
2. To evaluate the effectiveness of cross-granularity and cross-modality communication, we developed the FIND-Bench for interleave grounding and retrieval. FIND-Bench is compared with other models focused on information fusion in Table 3. → [Q2, Q9]
3. The design choices for vision have been explored in previous works (e.g., Mask2Former, X-Decoder, SAM). We conducted an ablation study on language design choices in Table 5 (Main Paper, Lower Part) to determine how to extract the correct information from language models.
4. In Table 5 (Upper Part), by gradually removing each task in FIND, we demonstrated the effectiveness of similarity mapping on recognition tasks.
> **A Missing Piece for Alignment**
>
We believe that embeddings from different models within the same modality can also effectively communicate. Several concurrent pilot studies, such as [1, 2, 3], are exploring this direction.
> **Future Research**
>
On the road towards the most powerful “GPT4-V”, there are three major contents for the final models:
1. Tokenization across modality.
2. Effective communication and reasoning between modalities.
3. Detokenized to human language.
Our model focuses on the effective communication and reasoning between modalities (**Point 2**), where these components should work together seamlessly. In the multimodal section (7.2) of LLaMA 3.1 [4]. their approach is similar to ours in terms of design choices, but we extend this concept to a finer granularity (pixel-image, phrase-paragraph). **Our approach is novel in this direction, addressing an area that is essential yet underexplored**.
For approaches focusing on **instructional tuning** (e.g., LLaVA), **they primarily work on Point 3**, detokenizing into human language for steerable interaction. ***We emphasize that our exploration is a parallel effort, addressing different aspects of the problem.*** → [Q1]
[1] Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs
[2] MouSi: Poly-Visual-Expert Vision-Language Models
[3] Theia: Distilling Diverse Vision Foundation Models for Robot Learning
[4] The Llama 3 Herd of Models
---
Rebuttal 4:
Comment: Thanks so much for your reply, **feel free to ask any further questions if needed** : )
For the additional table in [Q2], we leverage the newly released SAMv2 [1] dataset, which originally targets video object/part segmentation. We provide a link [2] below for your reference to the dataset explorer. The dataset contains one validation set and one test set; we generate new benchmark labels for those videos. The SAMv2 (a.k.a. SAV) dataset has the following benefits in resolving your concern:
(1) Video datasets are definitely in another domain compared with the COCO dataset; e.g., they contain motion blur, and the scenes usually differ from COCO-style scenes. This better addresses your concern in [Q2].
(2) Benchmarking on the video dataset will unleash more of our model’s potential; we have qualitatively observed that FIND can do interleave video tracking on the SAMv2 dataset. If there is a future version, we will add those results (the rebuttal cannot include figures or links).
-> We create the FIND-SAV-Bench using the following protocol:
(1) We generate instance-level annotations using SEEM-d5 [3] and filter out the annotations with (a) Low confidence. (b) Very small region.
(2) We use Set-of-Mark prompting [4] to annotate the visual images with the generated annotations. This will give better results than purely using language prompts as shown in Table.1 (Main paper).
(3) We prompt GPT4-V with the following exact prompt; this is an improved version of the method shown in Table.1, addressing the lack of fine-grained annotation in SAMv2:
```
1. Can you describe each instance in the [image1] with detailed appearance, in the order of how confident you are on the recognition? \n
2. Can you selecte 3-5 instances in the [image1] that also likely appear in the [image2]? \n
After these steps, generate image captions with grounded entities and attributes with following instructions: \n
1. [image1] is the sampled image to generated grounded caption. \n
2. [image2] is the reference image help to select which entities in [image1] should be included in the generated caption. \n
3. Numbered instances in [image1] are the proposed candidate entities. \n
5. The number in [image1] and [image2] are different, you should use in the number in [image1]. \n
an example output format would be: ##output##:"[entity_id]<A woman> sitting next to [entity_id]<a handsome man>, with their hands holding together under [entity_id]<the blue sky>.", where [entity_id] and <xxx> are associated with the ground truth bounding boxes. \n
generated caption constraints:
1. [entity_id] should be the same as the number in [image1], e,g: [1]. \n
2. Try to find the entities in [image1] that also appear in [image2]. \n
3. The selected entity should be the instance you are very confident on recoginition, selecte around 3-4 entities would be fine. \n
4. The entity description in <> should be accurate and detailed to identify the instance, e.g. <the dog with black and white dot>, <the second bottle from the front>. \n
5. Focus more on instance classes instead of stuff classes. \n
Please generate the grounded caption for [image1] accordingly.\n
```
(4) We do human pruning (approve/decline) to remove any ridiculous examples.
The final examples of FIND-SAMv2-Bench are shown in the **rebuttal PDF** with the section name "Qualitative examples on FIND-SAMv2-Bench". And the statistics are shown in the [SAMv2 Benchmark statistics], with the metrics below:
(1) # Images (videos): the total number of videos/images in the split. We only annotated the first image of each video for benchmarking, as frames within one video look similar; this labeling strategy maximizes evaluation capability with minimum effort.
(2) # Instances: Total number of instances annotated in the split.
(3) Avg Text Length: The average sentence length of the grounding annotation in character.
[1] Ravi, Nikhila, et al. "SAM 2: Segment Anything in Images and Videos." arXiv preprint arXiv:2408.00714 (2024).
[2] https://ai.meta.com/datasets/segment-anything-video/
[3] Zou, Xueyan, et al. "Segment everything everywhere all at once." Advances in Neural Information Processing Systems 36 (2024).
[4] Yang, Jianwei, et al. "Set-of-mark prompting unleashes extraordinary visual grounding in gpt-4v." arXiv preprint arXiv:2310.11441 (2023).
---
Rebuttal 5:
Comment: After the FIND-SAMv2-Bench is prepared, we evaluate interleave grounding with the FIND model. Because there are too few videos/frames in the SAV val and test sets, we do not propose an interleave retrieval evaluation here.
For interleave grounding, we use the following protocol to generate the query: for each entity we want to ground, a probability of 0.5 determines whether we use a visual or a text reference for grounding. An example grounding query is shown below:
```
[1]<A woman> sitting next to [2]<a handsome man>, with their hands holding together under [3]<the blue sky>.
```
Thus, <> is either a text or a visual reference to the instance. The visual reference is the masked scribble part of the original instance in the image; a better option could be the tracked instance in another frame.
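The query-generation step above can be sketched as follows (a hypothetical helper for illustration; the `[visual:id]` placeholder stands in for the masked-scribble visual reference used in practice):

```python
import random

random.seed(0)

def build_interleave_query(template, entities, p_visual=0.5):
    """Fill each '{}' slot in `template` with either a visual-reference
    placeholder or the entity's text description, each chosen with
    probability p_visual / (1 - p_visual)."""
    refs = []
    for ent in entities:
        if random.random() < p_visual:
            refs.append(f"[visual:{ent['id']}]")  # masked scribble of the instance
        else:
            refs.append(ent["text"])              # text reference
    return template.format(*refs)

query = build_interleave_query(
    "[1]<{}> sitting next to [2]<{}>, with their hands holding together under [3]<{}>.",
    [{"id": 1, "text": "A woman"},
     {"id": 2, "text": "a handsome man"},
     {"id": 3, "text": "the blue sky"}],
)
```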
We evaluate the FIND-SAMv2-Bench on our Tiny and Large (d5) models with both the UniCL (CLIP-style) and LLaMA language encoders. As shown in the table [SAMv2 Experiment results], in addition to effective interleave grounding on the out-of-domain dataset, we have also observed that our interface is more effective when the vision backbone is stronger (e.g., DaViT-d5-Florence), with evidence in the comparison between the LLaMA and UniCL language encoders.
To further prove the effectiveness of our FIND approach, we compared it with the strongest multimodal baseline, GPT-4o. As GPT-4o is not able to do interleave grounding from scratch, we use SoM [1] as the adapter to bridge pixel-level visual grounding in GPT-4o. The table [SAMv2 Experiment results] clearly shows that FIND achieves similar results to GPT-4o on the AP50 metric while being 58 times faster.
The exact prompt for evaluating GPT4-o are shown below:
```
You are given a marked image with masks and number in each region.
Given the full sentence {}, and the corresponding viusal reference, you are instructed to select the region that best matches the query entities {}.
The output format should be: ##output##: [Entity_id]: (Region number), [Entity_id]: (Region number), ...
```
[1] Yang, Jianwei, et al. "Set-of-mark prompting unleashes extraordinary visual grounding in gpt-4v." arXiv preprint arXiv:2310.11441 (2023).
---
Rebuttal 6:
Comment: Dear Reviewer AFHH,
There are only four hours left before the discussion deadline, it would be appreciated if the reviewer could take a look at the answers and give the final decision! We again appreciate your reviewing efforts : )
Best,
Authors | Summary: FIND is a generalized interface for aligning foundation models' embeddings using a lightweight transformer without tuning pretrained model weights. It supports various tasks like retrieval and segmentation, is adaptable to new tasks and models, and creates a shared embedding space through multi-task training. FIND-Bench, an extension of the COCO dataset, showcases its effectiveness, achieving state-of-the-art performance in interleave segmentation and retrieval.
Strengths: 1. The starting point and motivation of the paper is good.
2. The experimental results are very good.
Weaknesses: 1. This paper is somewhat difficult to understand. For example, what is the **embedding**? Is it the embedding generated by the tokenizer or the feature generated by the foundation model? And what does **interleaved** mean? My understanding is that different modalities are interleaved. It is not limited to image+text or text+text; it can be image+text+image+text. I am not sure if my understanding is correct. However, it took me many careful readings to figure this out. It would be best to clearly define and emphasize key terms and settings at the very beginning.
2. The method in section 3.2.2 is somewhat simplistic and lacks innovation. It doesn't seem to have much new design. It needs to be clarified why this interleaved approach is challenging (for example, if all the inputs are just treated as a sequence, will the performance be significantly worse? If so, why?). More analysis and intuition about that would be better.
Technical Quality: 2
Clarity: 2
Questions for Authors: Please see Weakness part.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: No Limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **We greatly appreciate your comprehensive comments in the weakness section. We sincerely hope that, after reading our rebuttal, you will have a new perspective.** We believe most of the confusion stems from terminology common within the small multimodal understanding community. It would be beneficial if you could also consider the high-level comments from Reviewer-kGcR and Reviewer-iZKM, as these comments highlight our contributions from a broader perspective. *We believe a paper’s impact should be evaluated from both high-level and detailed viewpoints.*
[Q1] The definition of embedding is confusing.
The term **embeddings** is commonly used in the context of foundation models. It typically refers to “the transformation of raw data into continuous vector representations that capture the semantic relationships and essential features of the data, enabling efficient and meaningful processing by the model [1]”.
- A well-known early work, “word2vec,” introduced the concept of a continuous vector space for natural language. This continuous vector space is commonly referred to as “the embedding space.”
- In addition, a recent well-known work, “The Platonic Representation Hypothesis [3],” presented as a position paper at ICML 2024, uses the term “embeddings” extensively, mentioning it 12 times throughout the paper.
[Q2] Confusion about the definition of “interleave.”
Firstly, thank you for taking the time to understand the term “interleave.” We will provide a more comprehensive explanation here.
- Your understanding “It is not limited to image+text or text+text; it can be image+text+image+text” is correct. This is clearly illustrated in Figure 2 (2) of our main paper.
- The terminology “interleave” was not introduced by us; it first gained attention within the community through the Flamingo paper [4] by DeepMind, which has had a significant impact. Fig. 7 of Flamingo clearly provides an example of an interleaved token sequence: “<BOS>Cute pics of my pets!<EOC><image>My puppy sitting in the grass.<EOC><image> My cat looking very dignified.<EOC>”. We hope this helps clarify the meaning of “interleave.”
[Q3] The method is somewhat simplistic and lacks innovation, not much new design.
We would suggest that simplicity with minimal overhead is best for model design.
- As stated on the first page of LLaMA3 [5] paper, it indicates:
*“We believe there are three key levers in the development of high-quality foundation models: data, scale, and **managing complexity**.”*
Managing complexity is very important to the potential success of scaling up.
- Our main contribution in the methods section is the design of a sophisticated interface that leverages the attention mask for various downstream tasks without any overhead. We have provided a comprehensive unification demonstration of all the proposed tasks under the FIND interface in [Common Question 4]; a preview (with notation defined in the PDF) is given there for fast reference.
[Q4] Why the interleave approach is challenging.
- Interleave retrieval was introduced in the FROMAGe paper [6] without any benchmark, prompting us to propose an interleave retrieval benchmark and use FROMAGe as a baseline for comparison.
- For interleave retrieval, we compared FIND with strong baselines like ImageBIND and BLIPv2, which lack interleave understanding, and FROMAGe, which does. Table.3 demonstrates that FIND achieves the best performance in interleave retrieval.
- We are the first to propose interleave grounding, a challenging task because it requires joint reasoning of vision and language information. As shown in Table 3, separating grounding vision and language tokens (SEEM baseline) results in significantly worse performance on interleave grounding compared to FIND, with differences as high as ~10 points across various scales and datasets.
[Q5] Why not just treat vision and language tokens as a sequence?
- Most interleave methods treat vision and language information sequentially. However, pixel-level information may require multiple tokens (e.g. 512 tokens) for accurate representation. To address this, we separate queries and tokens, allowing us to compress token information into queries for interleave retrieval and segmentation.
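The compression described above can be sketched as a single cross-attention step in which a few queries summarize many tokens. This is a toy illustration of our own with made-up dimensions; the actual FIND interface uses learned transformer layers:

```python
import math

def cross_attend(queries, tokens):
    """Compress n token vectors into len(queries) outputs via softmax attention."""
    out = []
    for q in queries:
        scores = [sum(qi * ti for qi, ti in zip(q, t)) for t in tokens]
        mx = max(scores)
        weights = [math.exp(s - mx) for s in scores]
        z = sum(weights)
        weights = [w / z for w in weights]
        # each output is a convex combination of the token vectors
        dim = len(tokens[0])
        out.append([sum(w * t[d] for w, t in zip(weights, tokens)) for d in range(dim)])
    return out

tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # many tokens (e.g. 512 in practice)
queries = [[1.0, 0.0]]                          # far fewer queries
compressed = cross_attend(queries, tokens)
print(len(compressed), len(compressed[0]))  # 1 2
```

The point of the sketch is only that the output size is fixed by the number of queries, not by the number of tokens, which is what makes query-based compression attractive for pixel-level information.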
[1] GPT4-o with prompt “what does embedding means in the context of foundation model in short”.
[2] Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient estimation of word representations in vector space. ICLR 2013.
[3] Huh, M., Cheung, B., Wang, T., & Isola, P. (2024). The platonic representation hypothesis. ICML 2024.
[4] Alayrac, Jean-Baptiste, et al. "Flamingo: a visual language model for few-shot learning." NeurIPS 2022.
[5] Dubey, Abhimanyu, et al. "The Llama 3 Herd of Models." *arXiv preprint arXiv:2407.21783* (2024)
[6] Koh, Jing Yu, Ruslan Salakhutdinov, and Daniel Fried. "Grounding language models to images for multimodal inputs and outputs." ICML 2023.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer kpHd,
We sincerely appreciate your thorough review of our paper and the valuable feedback you have provided. We would be grateful if you could review our rebuttal and share any additional feedback or questions you might have. If the rebuttal clarifies any points of confusion or if the comments from other reviewers (such as iZKM and AFHH) offer new insights into the paper, we kindly ask you to consider re-evaluating your rating. Thank you once again for your time and efforts!
Best,
Authors.
---
Rebuttal 2:
Comment: Thanks very much for the author's response. Regarding the question about Q3, I think the author misunderstood my point. I agree that the simpler the design of network methods, the better. The academic work that involves additional design for improving performance is just a paper; the industry would never adopt complicated methods, and in this regard, I believe I am on the same view as the author. However, simplicity does not mean a lack of contribution, or perhaps the way the author presents their ideas makes it difficult for me to grasp the key points of contribution.
Figure 4 is particularly difficult to understand.
If the other reviewers find it easy to understand, AC can disregard my opinion. :)
I apologize, but I still prefer to maintain my original score.
---
Rebuttal 3:
Comment: Thanks so much for the reviewer's response : ) We are actually on the same page for the simplicity of the design choice, especially when the reviewer gives the following comments:
*“I agree that the simpler the design of network methods, the better. The academic work that involves additional design for improving performance is just a paper”*
The reviewer gives comments in a neutral and constructive way. But we want to emphasize that our work is not lacking in contributions; **it represents a highly efficient and effective unification of foundation model embeddings across both granularity and modality, enabling seamless communication.** Achieving this unification is a significant, non-trivial effort. We focused on integrating all components with minimal overhead, which may make the individual contributions appear limited. Although we prepared similar explanations for R4-AFHH, who also found the paper challenging to understand, ***we want to re-emphasize the content here to earn the reviewer’s support***.
> **Impact and Novelty**
>
On the road towards the most powerful “GPT4-V”-style models, there are three major components of the final model:
1. Tokenization across modality.
2. Effective communication and reasoning between modalities.
3. Detokenization into human language.
Our model focuses on the effective communication and reasoning between modalities (**Point 2**), where these components should work together seamlessly. In the multimodal section (7.2) of LLaMA 3.1 [1], their approach is similar to ours in terms of design choices, but we extend this concept to a finer granularity (pixel-image, phrase-paragraph). **Our approach is novel in this direction, addressing an area that is essential yet underexplored**.
For approaches focusing on **instructional tuning** (e.g., LLaVA), **they primarily work on Point 3**, detokenizing into human language for steerable interaction. *We emphasize that our exploration is a parallel effort, addressing different aspects of the problem.*
> **High-Level Motivation**
>
1. We aim to align embeddings across different granularities (e.g., segments and images for visual understanding; paragraphs and phrases for language) within foundation models like X-Decoder and LLaMA.
2. We seek to align embeddings between foundation models across modalities (e.g., Vision and Language), consistent with the Platonic representation hypothesis.
3. We aim for the aligned embeddings to communicate seamlessly and efficiently.
4. We want to identify the optimal design choices for embedding extraction from different foundation models.
5. We believe all understanding tasks fall under the broader scope of information mapping.
> **Interface Design aligned with Motivation**
>
1. We design the interface to incorporate queries and prompts that unify granularity within a foundation model. Specifically, queries can attend to an arbitrary number of prompts through content attention, enabling unified granularity embeddings.
2. Additionally, since queries can attend to prompts across different modalities through content attention, we align these modalities within the same representation space via queries.
3. The condition attention allows queries that span granularity and modality to communicate efficiently.
4. We utilize object queries for vision embeddings and exhaustively test which layer embeddings are optimal for language models like LLaMA.
5. Projection and similarity mappings enable the unification of all the recognition tasks under a consistent framework.
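Point 5 above can be illustrated with a toy similarity mapping (our own sketch, not the FIND implementation): each recognition task reduces to scoring one projected embedding against a set of candidate embeddings in the shared space.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def similarity_map(query, candidates):
    """Return the index of the most similar candidate embedding.
    The same primitive serves retrieval (candidates = image embeddings),
    grounding (candidates = phrase embeddings), and mask classification."""
    scores = [cosine(query, c) for c in candidates]
    return max(range(len(candidates)), key=scores.__getitem__)

query = [0.9, 0.1]
candidates = [[0.0, 1.0], [1.0, 0.0], [0.5, 0.5]]
print(similarity_map(query, candidates))  # 1
```

Under this view, changing the task only changes which embeddings play the roles of query and candidates, which is what allows one interface to cover all the recognition tasks.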
> **Experiments prove the Design Choices and Motivation**
>
We evaluated generic segmentation, interactive segmentation (Table 2, Main Paper), and text-paragraph grounding (Fig. 1, Second Row, Main Paper) to demonstrate unification across granularity within the foundation model, in both vision and language.
1. We assessed grounded segmentation and image-text retrieval (Table 2, Main Paper) to validate cross-modality alignment (vision and language). Additionally, we confirmed that this alignment is invariant across foundation models trained independently (Table 4, Main Paper).
2. To evaluate the effectiveness of cross-granularity and cross-modality communication, we developed the FIND-Bench for interleave grounding and retrieval. FIND-Bench is compared with other models focused on information fusion in Table 3.
3. The design choices for vision have been explored in previous works (e.g., Mask2Former, X-Decoder, SAM). We conducted an ablation study on language design choices in Table 5 (Main Paper, Lower Part) to determine how to extract the correct information from language models.
4. In Table 5 (Upper Part), by gradually removing each task in FIND, we demonstrated the effectiveness of similarity mapping on recognition tasks.
[1] The Llama 3 Herd of Models
---
Rebuttal Comment 3.1:
Comment: Thanks for the author's reply, which has given me a better understanding of the paper's motivation.
To be honest, the writing of the paper needs significant improvement (although this shouldn't be the primary criterion for judging whether the paper should be accepted). I hope the author will improve the writing in future versions of the paper.
Additionally, regarding the method in the paper, for example, in Figure 1, there are multiple language encoders—why are multiple language encoders necessary? Isn't one sufficient? Is it a good choice to make the model redundant in order to achieve a slight performance improvement? Having multiple task decoders is fine because the tasks are varied.
After reading the author's rebuttal, I have raised my score. However, I still hope the author will improve the writing in the final version, especially focusing on the motivation and related aspects.
---
Reply to Comment 3.1.1:
Comment: Thanks for the reviewer's response, it is quite encouraging. And we again thank you for your suggestion on the gradient for paper improvement.
[Question] Multiple language encoders in Figure 1.
If you zoom into the figure, there is only **one solid black arrow**, which indicates the language encoder that we are actually using. We merely illustrate that our model has the potential to be integrated with **any one of** the language encoders or vision encoders.
Again, thanks for your efforts in understanding the paper during the rebuttal session; we will improve the camera-ready version if the paper is accepted. | Summary: The paper explores a unified multimodal embedding space across three image-text interleaved tasks, covering different granularity levels from image-level to pixel-level tasks.
Strengths: 1. The work investigates various multimodal tasks under image-text interleaved inputs, including grounding, retrieval, and segmentation, providing rich semantic understanding from image-level to pixel-level. The exploration of a unified embedding space is meaningful.
2. The paper proposes a new benchmark, FIND-Bench, which includes new training and evaluation ground truths for interleaved segmentation and retrieval.
3. The writing and visualizations are clear and well-presented.
Weaknesses: 1. It would be valuable to explore the effectiveness of the proposed method on larger datasets.
2. The authors also mention recent multimodal models such as Llava and BLIP-v2. It would be interesting to discuss how the current approach compares with these recent methods, as they also aim for modality unification. More insights into the differences between these approaches and whether the current method achieves similar results with less data or solves problems that the aforementioned models do not address, such as pixel-level visual tasks, would be helpful.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see the weakness.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate your clear and insightful comments. We have made our best effort to address them carefully below and in [Common Question 1], and [Common Question 3]:
[Q1] Experiment on a Larger dataset.
Thank you so much for your interest in scaling up the training recipe. Unfortunately, we do not have sufficient computational resources at the moment. However, we have proposed a new dataset in FIND-Bench, incorporating the latest SAMv2 dataset for interleaved grounding in video frames, to enable further benchmarking. We hope this new dataset will unlock more capabilities of our FIND interface. We are open to future collaborations on scaling up.
→ SAMv2 Benchmark statistics.
| Dataset | # Images (videos) | # Instances | Avg Text Length |
| --- | --- | --- | --- |
| sav_val | 136 | 532 | 221 |
| sav_test | 116 | 451 | 209 |
→ SAMv2 Experiment results.
| | sav_val | | | sav_test | | | |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Model | mIoU | cIoU | AP50 | mIoU | cIoU | AP50 | Latency |
| FIND_Focal-T_LLaMA_640 | 58.9 | 63.7 | 65.6 | 57.8 | 59.4 | 66.5 | 0.0809 s/iter |
| FIND_Focal-T_UniCL_640 | 58.3 | 65.7 | 65.9 | 60.1 | 60.3 | 68.7 | 0.0528 s/iter |
| FIND_Davit-d5_LLaMA_640 | 62.5 | 69.8 | 71.6 | 60.2 | 64.6 | 71.3 | 0.1978 s/iter |
| FIND_Davit-d5_UniCL_640 | 61.7 | 66.0 | 72.3 | 62.0 | 65.6 | 72.2 | 0.1699 s/iter |
| SoM + GPT4o | 75.3 | 83.6 | 75.3 | 76.7 | 81.2 | 76.7 | ~10 s/iter |
We also compare our model with the currently most capable model, GPT-4o, using SoM-labeled marks; the marks are computed by SEEM with its most capable vision backbone.
[Q2] Difference and benefits in addition to LLaVA and BLIPv2.
This is a very good question that deserves the attention of all reviewers. We have compared our work with LLaVA and BLIPv2 in [Common Question 1] and discussed the importance of mapping-based methods in the era of steerable models in [Common Question 3].
---
Rebuttal Comment 1.1:
Comment: After reading the rebuttal, I will maintain my positive score.
---
Reply to Comment 1.1.1:
Comment: Thanks for your positive comments : ) | Summary: The authors propose a benchmark for evaluation of what they call 'interleave understanding', or tasks which depend on the embeddings which are aligned across both modalities and task granularity, and they call this benchmark FIND-Bench. Find-Bench includes variants of segmentation and retrieval tasks, derived mostly from COCO datasets. The authors then propose a method for lightweight fusion of llm and vision features with a multi-task objective, and find the model to be effective at various tasks.
Strengths: Originality: The idea of making a universal benchmark for grounding, retrieval, and segmentation at various granularities is novel. The solution of aligning existing foundation models with light-weight multi-task tuning seems to both be an effective solution and in line with the underlying motivation for foundation models.
Quality: The model design is reasonable, and the experiments are relatively extensive. The ablations are interesting, especially the feature embedding layer for the LLM.
Clarity: The motivation and findings, in addition to most of the technical details, of the paper are clearly presented. See weaknesses for a few cases where more exposition is needed.
Significance: The ability for a model to become a universal model for vision tasks is important, and the benchmark does test a form of universality. It would be helpful to further strengthen this point with motivation from real-world use-cases. See weaknesses.
Weaknesses: Missing related works: The problem of Composed Image Retrieval e.g. ([1]) is quite related to interleaved retrieval. The authors should discuss the differences between the proposed work and the existing Composed Image Retrieval works.
Missing Method Description: Specifics of the sampler, especially for text, are missing in the main paper. In fact, it is unclear what the 'Embedding Sampler' in Figure 4(b) does.
Missing Baselines: Although the number of baselines is relatively large, it does seem like LLaVA-type models (eg. [2]) can handle interleaved image-text inputs. Is it possible to retrieve with them?
Improved Clarity: In some cases, clarity needs to be improved. In addition to some missing method description, there is also a missing task description of Interactive Segmentation in section 3.1.1.
Motivation: Although the benchmark proposed is interesting, there is a limited connection to what capabilities it has the ability to unlock. Benchmarks, in order to be very impactful, should somehow be connected to an ability models should have.
[1].Saito, Kuniaki, et al. "Pic2word: Mapping pictures to words for zero-shot composed image retrieval." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
[2] Jiang, Dongfu, et al. "Mantis: Interleaved multi-image instruction tuning." arXiv preprint arXiv:2405.01483 (2024).
Technical Quality: 3
Clarity: 3
Questions for Authors: See weaknesses. Also, the FIND pipeline uses unimodal LLM and Vision Encoder, and unifies them with some training. There are many multimodal encoders these days like LLaVA[3], why not use those?
To focus the rebuttal, from my end, additional experiments supporting the weakness of missing baselines are nice-to-have but improved text clarity, related works, discussion of method is more important to me.
[3] Liu, Haotian, et al. "Visual instruction tuning." Advances in neural information processing systems 36 (2024).
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are discussed in minimal way, but I see no major societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your comprehensive reviews. We are motivated to address your concerns in detail below. We hope this clarifies your questions and enables you to have a better impression of our paper.
[Q1] Missing related work with “Pic2word”.
Thanks for pointing out this related work; composed image retrieval is indeed closely related. We have evaluated their method on demo examples shown in the **rebuttal PDF**, in the section “Compared with Pic2Word Baseline.”
- It clearly shows that FIND has better multimodal reasoning capability; Pic2Word tends to favour one of the entities in the query sentence and neglect the reasoning between multiple entities.
- Meanwhile, FROMAGe is concurrent work with Pic2Word that is much closer to our setting; we have compared interleave retrieval with FROMAGe in Table 3, showing favorable results.
- Additionally, we want to mention that our approach has more capabilities than Pic2Word: FIND can perform generic segmentation, interactive segmentation, and grounded interleaved segmentation, none of which is achievable by Pic2Word.
[Q2] Missing details on embedding sampler.
We appreciate your attention to this issue. We address it in the main rebuttal section under **[Common Question 2]**.
[Q3] Missing baselines in comparison with LLaVA or LLaVA types of models (e.g. Mantis).
- We have discussed the relationship with LLaVA in the related work section in L63 - L66. In addition, the evaluation benchmark of LLaVA and FIND is very different because their output type is different. Please refer to **[Common Question 1]** in the main rebuttal for a detailed explanation.
- LLaVA does not support interleave understanding; LLaVA performs multimodal understanding (the order of image and text does not matter). In Figure 2 (2), we clearly compare multimodal and interleave understanding.
- In addition, **LLaVA only supports a single image input, even in LLaVA 1.6**, and therefore cannot support any of our tasks either. As of last month, the LLaVA-Next-Interleave [1] technical report began supporting multiple images as input, which is later than our implementation. However, they still do not support retrieval and segmentation.
- **Mantis is definitely concurrent work**; their first arXiv version was released on May 2, 2024, making it an unsuitable comparison for a NeurIPS submission. Additionally, they focus on language output such as QA or captioning, which is not comparable with FIND, as stated in **[Common Question 1]** for LLaVA-type models.
[Q4] Missing Description on Interactive Segmentation.
Thank you for pointing this out. We have not described what interactive segmentation is in FIND. We will add this to the camera-ready version, if applicable. By default, interactive segmentation involves locating the relevant segment in the image with human reference, such as a point, bounding box, or scribbles. SAM [2] and SEEM [3] are two good references for understanding interactive segmentation.
[Q5] Unclear benchmarking capabilities and lack of strong baselines.
- Our FIND-Bench for interleave retrieval and grounding **benchmarks the capabilities of joint vision and language understanding**. For example, in the scenario “A [Image: Dog] is sitting on the [Text: Bench]”, we want the phrase “is sitting on” to be reasoned across both the image and the text contents. Our evaluation metrics will penalize instances where the content cannot be accurately interpreted (e.g. wrong retrieval/segmentation).
- Actually, at the time of our submission, Grounding-SAM, BLIPv2, and ImageBIND were all very strong baselines, **utilizing industry-level computational resources**. However, we appreciate you mentioning this: we have created a new benchmark on the latest SAMv2 dataset and benchmarked it against the most capable multimodal foundation model, GPT-4o.
→ SAMv2 Benchmark statistics.
| Dataset | # Images (videos) | # Instances | Avg Text Length |
| --- | --- | --- | --- |
| sav_val | 136 | 532 | 221 |
| sav_test | 116 | 451 | 209 |
→ SAMv2 Experiment results.
| | sav_val | | | sav_test | | | |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Model | mIoU | cIoU | AP50 | mIoU | cIoU | AP50 | Latency |
| FIND_Focal-T_LLaMA_640 | 58.9 | 63.7 | 65.6 | 57.8 | 59.4 | 66.5 | 0.0809 s/iter |
| FIND_Focal-T_UniCL_640 | 58.3 | 65.7 | 65.9 | 60.1 | 60.3 | 68.7 | 0.0528 s/iter |
| FIND_Davit-d5_LLaMA_640 | 62.5 | 69.8 | 71.6 | 60.2 | 64.6 | 71.3 | 0.1978 s/iter |
| FIND_Davit-d5_UniCL_640 | 61.7 | 66.0 | 72.3 | 62.0 | 65.6 | 72.2 | 0.1699 s/iter |
| SoM + GPT4o | 75.3 | 83.6 | 75.3 | 76.7 | 81.2 | 76.7 | ~10 s/iter |
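For readers unfamiliar with the metrics in the tables above, here is a minimal sketch under the standard definitions we assume (mIoU averages per-instance IoU, while cIoU divides accumulated intersection by accumulated union); this is our own illustration, not the paper's evaluation code:

```python
def iou(pred, gt):
    """IoU between two binary masks represented as sets of pixel coordinates."""
    union = len(pred | gt)
    return len(pred & gt) / union if union else 1.0

def miou_ciou(pairs):
    """Compute (mIoU, cIoU) over a list of (predicted_mask, gt_mask) pairs."""
    per_instance = [iou(p, g) for p, g in pairs]
    miou = sum(per_instance) / len(per_instance)
    inter = sum(len(p & g) for p, g in pairs)
    union = sum(len(p | g) for p, g in pairs)
    ciou = inter / union if union else 1.0
    return miou, ciou

pairs = [({(0, 0), (0, 1)}, {(0, 1)}),            # per-instance IoU = 1/2
         ({(1, 0)}, {(1, 0), (1, 1), (2, 1)})]    # per-instance IoU = 1/3
print(miou_ciou(pairs))  # mIoU = 5/12, cIoU = 2/5
```

The two metrics diverge when instance sizes differ: cIoU weights large masks more heavily, while mIoU treats every instance equally.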
- Lastly, we want to emphasize to the reviewer that the benchmark is only one aspect of our contributions. Our main contribution lies in proposing the FIND interface, which leverages the raw foundation model embeddings for various downstream tasks.
[Q6] Why not use LLaVA vision encoder?
- LLaVA does not have its own vision encoder; it uses the pretrained CLIP encoder, which can be found in: https://github.com/haotian-liu/LLaVA/blob/c121f0432da27facab705978f83c4ada465e46fd/llava/model/multimodal_encoder/clip_encoder.py#L7. Moreover, they do not fine-tune the weights, as indicated by the use of torch.no_grad() in the forward path of: https://github.com/haotian-liu/LLaVA/blob/c121f0432da27facab705978f83c4ada465e46fd/llava/model/multimodal_encoder/clip_encoder.py#L133.
[Q7] Rebuttal focus: new baseline numbers (Important), improved text clarity, related works, discussion of method (More important).
Thanks for your clarification; we have summarized the rebuttal into the following contents:
- New Results: Q1, Q5.
- Discussions: Q2, Q3, Q4, Q6.
[1] Li, Feng, et al. "LLaVA-NeXT-Interleave: Tackling Multi-image, Video, and 3D in Large Multimodal Models." *arXiv preprint arXiv:2407.07895* (2024).
[2] Kirillov, Alexander, et al. "Segment anything." ICCV 2023
[3] Zou, Xueyan, et al. "Segment everything everywhere all at once." NeurIPS 2023.
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: I thank the authors for their responses. They have adequately addressed most clarity concerns (e.g. comparison to Pic2Word) and discussion (e.g. details on embedding sampler). Therefore, I raise my score by one point. I strongly encourage all clarifications to make it into any final version. Moreover, because the embedding sampler is the identity for the most part, my personal take is that it introduces unneeded complexity into the paper writing. I encourage the authors to consider this perspective in any further version.
---
Rebuttal 2:
Title: Thanks for your feedback : )
Comment: Thanks so much for your prompt reply! We are happy that the rebuttal materials resolved your confusion. As for the embedding sampler, it seems to have created confusion for nearly all the reviewers, so we will correct this in the camera-ready version. Thanks again for increasing the rating! | Rebuttal 1:
Rebuttal: → We thank all the reviewers for their constructive comments with the **following strengths listed**:
**[Novelty (iZKM, kGcR, kpHd, AFHH)]**: The idea of making a universal benchmark for grounding, retrieval, and segmentation at various granularities is novel. The exploration of a unified embedding space is meaningful. The starting point and motivation of the paper is good. The paper idea is novel and promising if extended well.
**[Experiment (iZKM, kpHd, AFHH)]**: The experiments are relatively extensive, and the ablations are interesting. The experimental results are very good. The paper conducts extensive experiments on COCO and compares with valid baselines.
**[Writing (iZKM, kGcR)]**: Paper is clearly presented for technical details and findings. The writing and visualizations are clear and well-presented.
**[Benchmark (iZKM, kGcR)]**: The paper proposes a new benchmark, FIND-Bench, which includes new training and evaluation ground truths for interleaved segmentation and retrieval. The benchmark does test a form of universality of vision problems.
In general, the reviewers regard the paper with good novelty and strong experimental results.
→ We also addressed the common question raised by the reviewers:
**[Common Question 1]**: Clarification of the main idea and the relationship with instruction-based methods (e.g., **LLaVA**) and Q-Former styled methods (e.g., **BLIPv2**).
- Logically, our method is an experimental proof of “The Platonic Representation Hypothesis [1],” which suggests:
*“Neural networks, trained with different objectives on different data and modalities, are converging to a shared statistical model of reality in their representation spaces.”*
Our experiments in Table 4 clearly indicate that although SAM and LLaMA are trained on very different datasets, the embeddings of the two models can be projected into the same embedding space for similar semantic meanings. Additionally, it shows that models trained on very different objectives (segmentation and text generation) can converge to a shared statistical model.
- LLaVA uses a fixed vision model as a tokenizer for LLM to enable generative visual QA. Meanwhile, BLIPv2 aligns the vision output with the language input as well. Both models have tuned the LLM weights, which does not provide clear insight into how vision foundation models and LLMs are connected. Unlike these, FIND aligns the vision backbone features (embeddings) with the intermediate layer features (embeddings) of LLMs, which is distinctly different from LLaVA and BLIPv2 (L62 - L64, Table 5).
- The output modality of LLaVA and BLIPv2 is very different from ours. They use LLMs as generative language decoding tools for QA and captioning, whereas we focus on mapping problems (e.g., retrieval, segmentation, grounding, etc.). **A direct comparison with LLaVA is not appropriate, as it does not support retrieval and segmentation, and we do not support question answering.** BLIPv2 does support retrieval, so we compare the retrieval results in Table 3.
- The training budget of LLaVA and BLIPv2 is much heavier than FIND's. We only tune the vision projector (6 layers of transformers) and the FIND interface (9 layers of transformers) in a single stage. In contrast, both BLIPv2 and LLaVA involve multistage training and LLM tuning, requiring much higher computation.
**[Common Question 2]**: **Embedding sampler** is not well defined.
- The embedding sampler is a variant of the visual sampler presented in “Segment Everything Everywhere All at Once [2].” In L173, we mention that “Technically, the embedding sampler is usually an interpolation or grid sample layer in PyTorch.”
- Except for interactive segmentation and interleave segmentation with visual reference (where it acts as the visual sampler of [2]), the embedding sampler is an identity grid-sampling layer (i.e., it keeps all the embeddings). We retain the potential for future sampling over long language contexts, hence the term “embedding sampler.”
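To make the identity behavior concrete, here is a minimal 1-D sketch of our own (not the paper's actual PyTorch `grid_sample` call): sampling at fractional positions linearly interpolates neighboring embeddings, and sampling at the identity grid returns the input unchanged.

```python
def sample_embeddings(embeddings, positions):
    """Linearly interpolate a sequence of embedding vectors at (fractional) positions."""
    out = []
    last = len(embeddings) - 1
    for p in positions:
        lo = min(int(p), last)
        hi = min(lo + 1, last)
        frac = p - lo
        out.append([(1 - frac) * a + frac * b
                    for a, b in zip(embeddings[lo], embeddings[hi])])
    return out

emb = [[0.0, 0.0], [2.0, 4.0], [4.0, 8.0]]
identity = sample_embeddings(emb, [0, 1, 2])  # identity grid: keeps all embeddings
halfway = sample_embeddings(emb, [0.5])       # blend of the first two embeddings
print(identity == emb, halfway)  # True [[1.0, 2.0]]
```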
**[Common Question 3]**: Why understanding and **mapping is important in the steerable era**.
- Understanding the mapping is crucial for comprehending the embedding properties of foundation models.
- Aligning the mapping serves as an intermediate step toward steerable interaction, where instruction-tuned models decode embeddings into human-understandable language.
- As shown in Table 5, vision embeddings have the best alignment with the intermediate embeddings of LLaMA. This finding supports the design choice in the multimodal Llama3, as discussed in “The Llama 3 Herd of Models [3],” which utilizes cross-attention layers to interact with vision embeddings. We believe this design choice is not ideal for both LLaVA and BLIPv2, as noted in L64-66.
[1] Huh, M., Cheung, B., Wang, T., & Isola, P. (2024). The platonic representation hypothesis. ICML 2024.
[2] Zou, Xueyan, et al. "Segment everything everywhere all at once." NeurIPS 2023.
[3] Dubey, Abhimanyu, et al. "The Llama 3 Herd of Models." *arXiv preprint arXiv:2407.21783* (2024).
Pdf: /pdf/ac0257e25e036a4928a16284faf24660de5fcc9f.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Targeted Sequential Indirect Experiment Design | Accept (poster) | Summary: This paper designs comprehensive experiments that maximize the information gained about the query of interest within a fixed budget of experimentation, including nonlinear, multi-variate, confounded settings.
Strengths: This paper is well-motivated and relevant to the causal inference. Overall, this work is well-written and organized.
Weaknesses: 1. Are the theoretical results practically useful when addressing real-world problems?
2. How does the proposed method handle non-convex optimization, and what are the assumptions on the loss function?
3. Can the authors provide more intuitive theoretical explanations of the difference between convex and non-convex optimization in their settings?
4. How can the effectiveness of the proposed method be guaranteed when the dimension p is significantly larger than the number of training samples?
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Are there any crossing problems when minimizing the optimization problem in Equation (5)? How do we ensure that $Q^{+}(\pi)$ is always larger than $Q^{-}(\pi)$?
2. Theoretically, are there any requirements for the dimensions of $Z$ and $U$? In your experiments, you only consider a low-dimensional setting. Can you add any experiments for high dimensions for $Z$ and $U$?
3. For Theorem 2, the proposed method involves several tuning parameters, such as $\lambda_g, \lambda_f, \lambda_c$. This step can be quite worrying for the practical implementation of the proposed method: 1) it can be time-consuming, and 2) setting the candidate values is likely quite subjective. The authors should conduct more thorough studies and provide more guidelines on how the choice of tuning parameters affects the method's performance. For example, the authors have selected $\lambda_g=0.1, \lambda_f=0.1, \lambda_c=0.1$, and the learning rate $\alpha_t=0.01$. This selection process appears too subjective.
4. The experiments are not sufficient for several reasons: (1) The number of replications used in the experiments is quite small (only 25), which raises concerns about the time efficiency of the proposed method. (2) The paper does not include a real-world data analysis to validate the effectiveness of the proposed algorithm from a practical perspective.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: 1. Theoretical requirements for the dimensions of $Z$ and $U$ are not discussed. Experiments only consider a low-dimensional setting; see questions.
2. The proposed method involves several tuning parameters, which can be problematic for practical implementation due to time consumption and the subjective setting of candidate values; see questions.
3. Insufficient Experiments: see questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their useful comments and address the mentioned concerns/questions one by one.
## Weaknesses
1 **Practical usefulness**: Yes, we do believe our contribution is practically useful. In particular, academia (e.g., the “A-Lab” at the Berkeley lab) and industry (big pharma and biotech startups) alike work feverishly on AI-driven “automated labs” to accelerate drug, material, or molecule design and scientific discoveries. Efficient and targeted adaptive algorithms to inform subsequent experiments remain a major open challenge. Notably, our targeted formulation (aiming at a specific property of the mechanism) and our allowance for non-point-estimable properties (via bounding) are major steps towards this goal.
2+3 **Convexity**: We are not entirely sure which (non-)convexity is being referred to and would appreciate a clarification if we are answering the wrong question.
There are multiple optimizations. First, there is the minimax problem (eq. (4)) required to obtain a single upper/lower bound on $Q[f_0]$ under a fixed policy. Thms 1 and 2 show that both the inner supremum (eq. (7)) and the outer minimization (eq. (9)) can be solved analytically within RKHSs. Hence, we do not see any issues around (non-)convexity for those. The remaining optimization is to update the policy $\pi$ from one round to the next to minimize $\Delta(\pi)$. This is an optimization over the space of distributions on $Z$, with the loss function given by eq. (9) and estimated from finite data in practice. Depending on how we parameterize $\pi$ in practice and what we choose for $Q$, this loss function need not be convex in the optimization parameters, and we optimize it with a local gradient-based method with no guarantee of finding a global optimum. Importantly, the upper/lower bound estimation at each step is consistent, so our method always provides (narrowing) valid bounds. They may just not be as narrow as they could have been. In practice, we do achieve informative bounds throughout.
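To illustrate the kind of local gradient-based optimization described above, here is a generic one-dimensional sketch (the loss is a hypothetical stand-in for $\Delta(\pi_\theta)$, not our actual objective): descent reaches a stationary point but may miss the global optimum.

```python
import numpy as np

def gap(theta):
    # Hypothetical non-convex stand-in for the bound gap Delta(pi_theta).
    return 0.3 * np.sin(3.0 * theta) + 0.1 * (theta - 1.0) ** 2

def num_grad(f, t, eps=1e-5):
    # Central finite-difference gradient.
    return (f(t + eps) - f(t - eps)) / (2.0 * eps)

theta, lr = 0.0, 0.05  # learning rate below the stability threshold
for _ in range(500):
    theta -= lr * num_grad(gap, theta)
# The iterate converges to a nearby stationary point (a local minimum);
# the lower-valued minimum near theta ~ 1.7 is not found from this start.
```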
4 **p > n:** We assume that $p$ refers to $d_x$, the treatment dimension? We rely on kernel ridge regression for the estimation tasks from $X$ to $Y$, i.e., mapping $\mathbb{R}^{d_x} \to \mathbb{R}$. Since this intuitively means that the $d_x$ features are embedded into an infinite-dimensional feature space, regularization is required in any case to deal with the “sparsity” (more features than samples), which is why we use kernel ridge regression. Increasing $d_x$ beyond the sample size $n$ is thus no issue in our framework and is taken into account by the regularization.
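To illustrate why kernel ridge regression is unaffected by $d_x > n$, here is a minimal self-contained sketch (our illustration with hypothetical data, not the paper's code): the dual solution only requires solving an $n \times n$ linear system, with the ridge term providing the regularization.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.05):
    # Gaussian (RBF) kernel between row-sets A and B.
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def krr_fit(X, y, lam=1e-2, gamma=0.05):
    # Dual solution alpha = (K + lam I)^{-1} y: an n x n solve regardless of d_x.
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def krr_predict(alpha, X_train, X_new, gamma=0.05):
    return rbf_kernel(X_new, X_train, gamma) @ alpha

rng = np.random.default_rng(0)
n, d_x = 30, 100  # more features than samples
X = rng.normal(size=(n, d_x))
y = np.sin(X[:, 0]) + 0.01 * rng.normal(size=n)
alpha = krr_fit(X, y)
y_hat = krr_predict(alpha, X, X)
```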
## Questions
1 **Crossing:** The optimization problem in eq. (4) is the Lagrange-multiplier formulation of maximizing/minimizing $Q[f]$ over $\mathcal{F}$ subject to the moment conditions $\mathbb{E}[(Y - f(X))g(Z)] = 0$. Hence $Q^+$ and $Q^-$ are the maximum and minimum over the same set $\mathcal{F}_c$, so $Q^+ \ge Q^-$ (no crossing) is guaranteed. In practice, our estimators (inevitably) have non-zero finite-sample variance, which can lead to crossing only when (a) the gap is already very small (we have essentially identified $Q[f_0]$), or (b) the sample size is very small (diagnosable from large variance in estimates across rounds).
2 **Dimensionality of $U$ and $Z$:** In general, there are no theoretical requirements on the dimensionality of $U$ and $Z$. The IV setting allows for arbitrary confounding $U$ of any dimension between $X$ and $Y$, and we never explicitly need $U$ in our framework (as it is unobserved). Typically, a large number of instruments $Z$ is desirable, as having more instruments helps identification. We primarily focus on the more difficult (yet more realistic) setting where $d_z < d_x$ and thus underspecification is likely. However, our framework equally applies to $d_z \ge d_x$, where identification is generally going to be easier. (The full theoretical conditions for identification are given in Sec. 3.1.)
Following the suggestion, Figs. 1 and 2 in the rebuttal pdf show results for higher-dimensional settings with $d_z = 5, d_x = 20$ and $d_z=20, d_x=20$, demonstrating that our method indeed scales to such settings while remaining informative. In the $d_z=5, d_x=20$ setting, our adaptive approach is the only one that essentially rules out negative values and the causal MSE is smallest both on average as well as in variance for our adaptive procedure.
Fig. 3 in the rebuttal pdf reports runtimes, demonstrating computational feasibility. The number of samples per experiment is the key driver of computational cost, which can easily be reduced by deploying established techniques to handle the inversion of large kernel matrices [1, 2]. We note that computational cost and time are likely negligible compared to the cost and time of physical experiments in real-world applications.
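As a hedged illustration of one such technique, here is a minimal Nyström sketch (generic, with hypothetical sizes; not the implementation used in the paper): a rank-$m$ approximation of an $n \times n$ kernel matrix built from $m \ll n$ landmark points avoids forming and inverting the full matrix.

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    # Gaussian kernel between row-sets A and B.
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

rng = np.random.default_rng(1)
n, m = 400, 60                       # n samples, m << n landmark points
X = rng.uniform(size=(n, 2))
idx = rng.choice(n, size=m, replace=False)
K_nm = rbf(X, X[idx])                # n x m cross-kernel
K_mm = rbf(X[idx], X[idx])           # m x m landmark kernel
# Rank-m Nystrom approximation: K ~= K_nm K_mm^+ K_nm^T
K_approx = K_nm @ np.linalg.pinv(K_mm, rcond=1e-8) @ K_nm.T
K_full = rbf(X, X)
rel_err = np.linalg.norm(K_full - K_approx) / np.linalg.norm(K_full)
```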
3 **Hyperparameter Tuning:** Since the three mentioned hyperparameters (the different $\lambda$s) are relative weights, we can set $\lambda_s := \frac{\lambda_f \lambda_g}{\lambda_c}$ and only tune $\lambda_s$ and $\lambda_c$. Intuitively, $\lambda_s$ regularizes the smoothness of the function spaces and $\lambda_c$ weighs the functional $Q$ relative to the moment conditions. In Fig. 4 in the pdf, we show the dependence of our bounds on these hyperparameters: very low values of $\lambda_s$ lead to conservative bounds, whereas large values of $\lambda_s$ yield narrow bounds throughout, as $\lambda_s$ effectively controls the size of the search space via regularity restrictions. We note that there are substantial ranges of reasonable hyperparameter choices over which our bounds are fairly insensitive to the exact choice.
In practice, all learning rates below a certain value work. More gradient update steps are required for convergence for smaller learning rates and – as in most (stochastic) gradient based optimizations – one chooses a learning rate just below a value that yields reliable convergence.
---
Rebuttal 2:
Comment: 4 **Insufficient experiments:** Our runtime evaluation in Appendix C, Fig. 4 (see also Fig. 3 in the rebuttal pdf), shows that the computational cost of our approach is not a severe limitation. While 25 replications already provide a good assessment of the finite-sample variance, we happily increase this number to 100 for the revised version. For the first finished settings, the results are visually unchanged.
We agree that we would have loved to include a real-world application of the method, but point to our discussion of the fundamental difficulty of assessing our method in a real-world setting in our “limitations” section (l.392 and following).
---
Rebuttal Comment 2.1:
Comment: Thanks for the authors' response, which addressed most of my concerns. I will maintain my score. | Summary: This paper proposes a framework for designing sequential indirect experiments for estimating targeted scientific queries in complex, nonlinear environments with potential unobserved confounding when direct intervention is impractical or impossible. The authors formulate the problem as the sequential instrument design, using instrumental variables and minimax optimization to estimate the upper and lower bounds of targeted causal effects. The proposed method then tightens these bounds iteratively through adaptive strategies. Experiments on simulated data demonstrate the proposed method's efficacy compared to non-adaptive experimental design baselines.
Strengths: 1. This paper focuses on targeted sequential indirect experimental design, which is an important problem in scientific discovery where direct intervention is often impractical or impossible.
2. The proposed method is more flexible as it considers nonlinear, multi-variate, and confounded settings, and formulating the problem as a sequential underspecified instrumental variable estimation is intuitive and sound.
3. The authors develop closed-form estimators for the bounds given targeted queries when the mechanism is in an RKHS.
4. The proposed method outperforms non-adaptive baselines in synthetic experiments.
Weaknesses: 1. Although the proposed method focuses on indirect experimental design, it still requires the causal structure between variables to be known, which is also one of the main challenges in scientific discovery. I am curious about the method's sensitivity to imperfections in the causal structure, assumptions regarding instrumental variables, and the presence of confounders. It would be great if the authors could discuss these further.
2. The authors only conduct synthetic experiments in a simple setting when the number of variables is small. I wonder if the authors could conduct more experiments in a more complex setting with a larger number of nodes, and also discuss the scalability of the proposed method (since kernel-based techniques are used for optimization as mentioned in the paper).
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Please see the questions in the Weaknesses part.
2. It seems that the proposed method is closely related to causal Bayesian Optimization [1]. I wonder if the authors could discuss the connection in detail.
3. Experiments demonstrate that the proposed method performs well for local causal effect queries. However, it is usually unknown whether the target causal effect is local or more global (i.e., long-range) in real-world applications. I wonder if the proposed method could estimate long-range causal effects accurately.
4. What is the difference/connection between the proposed method and targeted indirect experiment design in an active learning setting?
[1] Aglietti, V., Lu, X., Paleyes, A., & González, J. (2020, June). Causal bayesian optimization. In International Conference on Artificial Intelligence and Statistics (pp. 3155-3164). PMLR.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors adequately addressed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their kind assessment and useful comments. We reply to the raised questions one by one.
**Causal Structure and Assumptions:** As long as the instrumental variable assumptions hold, we are agnostic to any confounding $U$ between the treatment variable $X$ and the outcome $Y$. We agree that the IV assumptions are rather strong and “valid instruments are hard to come by” (l.129, l.404-405). However, this issue is less severe in our setting than in other applications, as we control/design the instrument; e.g., the independence assumption $Z \perp U$ holds by simply randomizing the instrument. Moreover, we think it is reasonable that domain experts would not include instruments that may have no effect at all on $X$; e.g., a biologist may know the gene that a CRISPR guide is targeting, or the protein to which a compound binds. The exclusion restriction ($Z \perp Y \mid X, U$) is a potential constraint (e.g., l.125). However, as we consider multi-dimensional treatments, we can make the exclusion restriction more realistic by adding all known factors via which $Z$ may affect $Y$ into $X$. A useful example is orally administered antibiotics in so-called “sub-therapeutic dosages”: with the gut microbiome as $X$, it is reasonable to assume the antibiotics only affect macroscopic health outcomes via changes to the gut microbiome.
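The role of randomization can be pictured with a minimal Wald-ratio IV sketch (our illustration; all coefficients and noise scales are hypothetical): OLS is biased by the unobserved confounder, while the instrument, independent of $U$ by randomization, recovers the causal coefficient.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
z = rng.normal(size=n)   # randomized instrument => Z independent of U by design
u = rng.normal(size=n)   # unobserved confounder
x = z + u                # treatment driven by instrument and confounder
y = 1.5 * x + 2.0 * u + 0.1 * rng.normal(size=n)  # true causal effect: 1.5

ols = np.cov(x, y)[0, 1] / np.var(x)  # biased upward by confounding
# Wald ratio: cov(z, y) / cov(z, x)
iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]
```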
**Sensitivity**: Sensitivity analyses are an important direction for further work. Frameworks (e.g., [1]) analyze the sensitivity of IV estimates to exclusion restriction violations, but are limited to linear settings and do not apply here. Extensions to non-linear systems (e.g., [2]) are limited and, in particular, cannot handle multi-dimensional treatments or underspecified settings. Developing such sensitivity analyses for point estimates and especially for bounds as in our setting would be a significant contribution.
There is also an active area looking into other types of partial identification in the IV setting (e.g., [3], where partial identifiability stems from dropping the additive noise assumption). While additive (measurement) noise may actually be a defendable assumption in the settings we’re aiming at, major technical challenges remain in the multiply nested optimizations that are required to perform sensitivity analysis (in the case of [3], requiring a nested optimization in and of itself) on top of our adaptive procedure.
**Scalability:** We conducted higher-dimensional experiments as suggested, presenting results in the main rebuttal pdf. Fig. 1 and 2 show results for settings with $d_z = 5, d_x = 20$ and $d_z=20, d_x=20$, demonstrating the method's scalability beyond low-dimensional settings. In the $d_z=5, d_x=20$ setting, our adaptive approach effectively rules out the functional being negative or zero. The causal MSE is the smallest for our adaptive procedure on average, with the smallest variance across runs.
We also report runtimes, highlighting that while runtime does increase in higher dimensions, this increase is extremely mild. A key driver of the runtime is still the number of samples per experiment. As this may grow exceptionally large in certain applications, one may leverage existing techniques to handle the inversion of large kernel matrices, e.g., [4, 5]. Importantly, we note that computational cost is mostly a concern in synthetic experiments, as the time and cost to run physical experiments in real-world applications dwarf computational costs, which are at most polynomial in their input size. We will include these clarifications as well as the additional experimental results.
**Causal Bayesian Optimization**: While there are clearly parallels between our work and Causal Bayesian Optimization, we believe that the setup (and thus the resulting methodology) are still distinct:
(a) Their main goal is to maximize a target outcome, i.e., to set variables to certain levels that maximize the expected value of another variable $Y$ in the causal graph. Our goal is to maximally inform a causal mechanism (the function that determines $Y$) with no regard for what value $Y$ takes.
(b) They assume feasible interventions on all graph variables. In contrast, we note that such interventions are often unrealistic, and experimentation amounts to “perturbations” (instrumenting) rather than “intervening”. If interventions on any variable were possible, we would directly intervene on $X$ for a trivial solution. Note that direct intervention allows them to consider general graphs, while our methodology pertains to perturbing/instrumenting a single treatment that directly affects the outcome.
(c) In contrast to their approach, we have a path dependence in our setting, i.e., our objective function depends on the experiments we have performed so far. Due to this difference, our adaptive algorithm is closer to reinforcement learning than to Bayesian optimization or active learning, where the objective function does not change depending on previous trials.
We will add Causal Bayesian Optimization to the related work section and clarify these differences.
**Global causal queries**: To identify global functionals, $h(z)$ with $z \sim \pi$ must put sufficient mass on all regions of the treatment space relevant for $Q$. If this region is large, more samples per experiment may be needed for reliable estimates. In practice, especially when $Z$ is much lower-dimensional than $X$, finding perturbations that cover the relevant regions of $X$ can be difficult. In general, bounding nonlinear global functionals is possible but remains an important direction for future work; in the linear case [6], global properties can be inferred via linear extrapolation.
**Active learning**: See the response to Causal Bayesian Optimization. One additional difference is the assumption of no unobserved confounding in active learning, which we can relax via the use of instrumental variables. We will clarify these differences in the manuscript.
---
Rebuttal 2:
Comment: [1] Cinelli, Carlos, and Chad Hazlett. "An omitted variable bias framework for sensitivity analysis of instrumental variables." Available at SSRN 4217915 (2022).
[2] Vancak, Valentin, and Arvid Sjölander. "Sensitivity analysis of G‐estimators to invalid instrumental variables." Statistics in Medicine 42.23 (2023): 4257-4281.
[3] Kilbertus, Niki, Matt J. Kusner, and Ricardo Silva. "A class of algorithms for general instrumental variable models." Advances in Neural Information Processing Systems 33 (2020)
[4] Drineas, P., Mahoney, M.W. (2005). On the Nystrom Method for Approximating a Gram Matrix for Improved Kernel-Based Learning, 2005, Journal of Machine Learning Research, http://jmlr.org/papers/v6/drineas05a.html
[5] Li, Mu et al. “Large-Scale Nyström Kernel Matrix Approximation Using Randomized SVD.” IEEE Transactions on Neural Networks and Learning Systems 26 (2015): 152-164.
[6] Elisabeth Ailer, Jason Hartford, and Niki Kilbertus. 2023. Sequential underspecified instrument selection for cause-effect estimation. In Proceedings of the 40th International Conference on Machine Learning (ICML'23), Vol. 202. JMLR.org, Article 19, 408–420. | Summary: The authors provide a procedure to use a sequence of encouragement designs to identify target functionals about a particular causal relationship.
Strengths: This is a really cool problem setting. It isn't obvious to me that it's particularly common, but I think the larger idea of trying to think about _which_ experiments to run in order to gain knowledge about the world is a worthy line of inquiry.
In general, I think the approach of getting partial identification on a parameter and then determining a series of actions to take in order to reduce the width of that bound is a fantastic approach. This is a great way to conceptualize a variety of problems, I suspect.
Weaknesses: It isn't clear to me why Equation 4 is a bound on Q[f_0]. Maybe this should be obvious to me, but I think if it isn't obvious to me, it probably won't be obvious to a lot of your readers, as I think I've read more into this literature than most people. This feels like your big contribution: after this bound is set up, the remainder isn't what I would call straightforward, but it feels more like standard machinery that I'm used to. Conditional on the bound, Theorems 1 and 2 and the experiment selection procedures you provide all make a lot of sense. But I just don't see where this bound comes from. I would like to see (i) a clearer explanation of why this expression bounds Q[f_0], and (ii) some clearer examples of the cases in which the bounds are equal and Q[f_0] is identified. I think it might also help to lay out when a non-optimal policy (what you call "non-informative experimentation") doesn't identify Q[f_0]. This is a bit confusing to me, as I'd expect that complete randomization as a policy would provide identification so long as there is positivity across the whole space: one could do something like off-policy evaluation to identify any particular \pi(Z). I believe the problem comes from the restriction discussed at the start of Section 3.2, but I do not see why the solution to ensure that r_0 lies in TT^* is to take the +/- of Q[f] in the objective as in Eq. 3. There's a connection here that you have not clearly spelled out for me.
Maybe it's possible that the problem is that I need to read Bennett et al. 2023 much more carefully to see why this follows, but that isn't a fair ask for your readers who are reading _this_ paper rather than that one.
Unfortunately, it's quite difficult for me to evaluate the overall novelty of this work without understanding this component. I think you have something very cool here, but I can't quite work that out based on the paper as it stands. I'd like to reiterate that I follow what you're doing at a high level (i.e. line 96-106 makes sense in general), but I don't follow when you get into specifics.
Some other miscellaneous thoughts that aren't as important:
- Do you _have_ to call this $(\partial_i f)(x^*)$ a local effect? Between LATEs and everything else, this feels like an absurdly overloaded term.
- The experiments section makes it difficult to see clear quantitative comparisons between methods. I would like to see things like causal mean-squared error. I recognize that's a bit difficult when you're doing partial-ID, but demonstrating that the bounds collapse to the correct Q[f_0] is important. Even just doing something like showing the midpoint of the bounds quantitatively (with confidence interval based on replicates) and the width of the bound would be useful to make sure the process is behaving sensibly.
Technical Quality: 3
Clarity: 3
Questions for Authors: see above
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: see above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their kind assessment and useful comments. We reply to the raised questions one by one.
**Eq. (4) giving valid bounds**: Thanks for pointing out the missing steps here. Indeed, much of the machinery needed to see that eq. (4) yields valid bounds is in the Bennett et al. (2023) paper. Essentially, we follow the steps in their Section 3, replacing $\tfrac{1}{2} \langle h, h\rangle_{L_2}$ with $Q[h]$. (They use $h$ instead of $f$, apologies for that!) The first equation in their Section 3 then makes it more obvious that the constrained optimization yields bounds on $Q[f_0]$, as it reads: “Minimize/maximize $Q[f]$ subject to $f$ satisfying the moment conditions, i.e., being compatible with the IV assumptions and data.” The realizability assumption then ensures that the optima of these optimizations are valid bounds on $Q[f_0]$. From here on, the solution to this problem is shown to be equivalent to the solution of our eq. (4).
We follow Bennett et al. in (their) eq. (3), using the method of Lagrange multipliers to obtain an alternative minimax formulation of the problem. Note that so far they do this in $L_2$ spaces. Analogous to their eq. (4), we can then replace the resulting estimator with a finite-sample version and restrict ourselves to some function classes for $h$ (our $f$) and $g$. They then view the $L_2$ norm term as a penalization term (“so we call our estimator a penalized minimax estimator”), and we do the same for $Q[f]$. Up until this point, this distinction did not matter for the theory. As Bennett et al. note, the key distinction from previous work is the use of the method of Lagrange multipliers. These steps demonstrate that – if we manage to show their eq. (5) for our modified Lagrangian – our minimax problem in (our) eq. (4) indeed yields bounds on $Q[f_0]$. In Section 4, Bennett et al. then show that the problem can indeed be formulated as the minimax problem using the assumptions stated there (which we also had to carry over to our work). The results there (in particular Lemma 3) also directly apply to our modified Lagrangian. The key point at which their proof of the key identification Theorem 1 can fail in our modified setting is where they leverage the strong convexity of their Lagrangian in $h$ (our $f$), which comes from the $\langle h, h \rangle_{L_2}$ term. Hence, as we discuss in “(In)validity of bounds” (specifically l.270 onwards), not every $Q$ will guarantee a unique optimum and valid bounds for eq. (4). First, valid bounds can be recovered for strictly convex functionals $Q$ (following the proof by Bennett et al. 2023). Second, we argue in the same paragraph why even for functionals that are not strictly convex, our method remains highly useful. We’re happy to discuss these arguments further.
We agree that this line of reasoning is not self-contained in our paper, and we will add a section in the appendix cleanly working out the computations for our Lagrangian, explaining that we closely follow the work of Bennett et al. (2023).
**non-informative policies and identification**: Thanks for the suggestion. We will add the following trivial examples in the appendix.
1) *Unidentified:* Consider $Z, Y \in \mathbb{R}$, $X \in \mathbb{R}^2$ with $h(z) = (z, 0)$, $f(x_1, x_2) = 0.5 x_1 + 2 x_2$, and $Q[f] = \partial_2 f (x^*)$. In this fully linear setting, due to the structure of $h$, the instrument can only ever perturb the first component of $X$, regardless of the chosen policy. Hence, it is impossible to identify the derivative of $f$ w.r.t. the second argument, which is constant and equal to 2 (regardless of $x^*$). Even a policy with full support will not (even partially) identify $Q[f_0]$.
2) *Fully identifiable:* Consider a similar setting as above, but with $f(x_1, x_2) = x_1^2 + x_2$ and $Q[f] = \partial_1 f(x^*)$. Let’s say $x^* = (6, 1)$, then $Q[f_0] = 2 \cdot 6 = 12$. Any policy $\pi$ that has positive density in a neighborhood of $z=6$ will be able to fully identify $Q[f_0]$. However, any policy that puts no mass (or in practice, little mass) near $z=6$, will not identify $Q[f_0]$. This would be an uninformative policy.
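The unidentified example can be checked numerically with a minimal simulation sketch (our illustration; the confounding and noise scales are hypothetical): the first-stage regression of each treatment component on a fully randomized instrument shows that the instrument moves $x_1$ but not $x_2$, so no policy over $Z$ can reveal $\partial_2 f$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)   # fully randomized instrument with full support
u = rng.normal(size=n)   # unobserved confounder
x1 = z + 0.5 * u         # h(z) = (z, 0): only the 1st component is perturbed
x2 = 0.8 * u             # varies only through the confounder
y = 0.5 * x1 + 2.0 * x2 + u + 0.1 * rng.normal(size=n)

# First-stage coefficients: cov(z, x_i) / var(z)
b1 = np.cov(z, x1)[0, 1] / np.var(z)  # close to 1: instrument moves x1
b2 = np.cov(z, x2)[0, 1] / np.var(z)  # close to 0: instrument never moves x2
```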
**”local” effect**: Indeed, this is not the best wording in this context. We’ll think of a more distinct alternative (not so easy actually).
**experiments and metrics**: Thanks for this suggestion. We included the causal mean-squared error in the plots in the main rebuttal pdf as well as in the revised version of the paper. As expected, our adaptive method not only achieves the best average causal MSE, but also exhibits the least variance across runs. We are still working on – and plan to also include – the suggested plots with the midpoint between bounds + ranges as intervals to present results more clearly.
[1] Luenberger, David G. Optimization by vector space methods. John Wiley & Sons, 1997.
[2] Elisabeth Ailer, Jason Hartford, and Niki Kilbertus. 2023. Sequential underspecified instrument selection for cause-effect estimation. In Proceedings of the 40th International Conference on Machine Learning (ICML'23), Vol. 202. JMLR.org, Article 19, 408–420.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their clarifications, I think I see more clearly where the bound is coming from. I'll raise my score given this.
To put it a bit more plainly (to me), you're taking the minimum and maximum norm solutions (by changing the sign of Q[f] in Eq. 4) from within the satisfying set of functions $f$ and $g$. Under full identification, those two optimands would coincide, as there is only one satisfying function, but under partial identification that may not be the case.
I also think the comment the authors provided to reviewer GhAo regarding the CRISPR use-case is interesting. Coming from more of a social science perspective, this is not a setting that would come up much, but I suspect this may be different in the cases you're considering. Making it clearer in the text why this setting matters would help the paper's impact.
---
Rebuttal 2:
Comment: We thank the reviewer for their comment and for increasing their score. The intuition about the bounds is very well put. We highly appreciate the suggestion and will include the description of the CRISPR use-case in the revised version. | Summary: The authors' primary goal is to design experiments that maximally inform a query of interest about the underlying causal mechanism, within a fixed experimentation budget. They address this by maintaining upper and lower bounds on the query and sequentially selecting experiments to minimize the gap between these bounds. They show that by treating experiments as instrumental variables, they can estimate these bounds using existing techniques in nonlinear instrumental variable estimation. Their procedure involves a bi-level optimization: an inner optimization estimates the bounds, while an outer optimization seeks to minimize the gap between them. For certain queries, when assuming the underlying function lies within a reproducing kernel Hilbert space (RKHS), they derive closed-form solutions for the inner estimation problem. The authors develop adaptive strategies for the outer optimization to iteratively tighten these bounds and demonstrate empirically that their method robustly selects informative experiments, leading to identification of the query of interest when possible within the allowed experimentation framework.
Strengths: S1. The presentation is generally clear, with a logical structure that guides readers through the problem formulation, methodology, and initial experimental results.
S2. The authors present a novel approach to causal effect estimation in nonlinear systems with potential unobserved confounding. This framework addresses some challenges in scientific settings where direct experimentation is not possible, potentially aiding more targeted scientific discovery.
Weaknesses: W1. The paper only demonstrates results on a low-dimensional synthetic setting (d_x = d_z = 2). While this provides a proof-of-concept, it's unclear how the method scales to higher dimensions or performs on real-world data.
The authors could:
- Extend experiments to higher dimensions (e.g. d_x, d_z = 10 or 20) to show scalability.
- Test on semi-synthetic data by using real covariates with simulated outcomes.
- Discuss computational complexity as dimensionality increases.
W2. The paper makes strong assumptions (e.g. RKHS, valid IVs) without thoroughly discussing their implications. The authors could:
- Provide sensitivity analyses for key assumptions.
- Discuss scenarios where assumptions may not hold in practice.
- Clarify which parts of the method rely on which assumptions.
Technical Quality: 4
Clarity: 3
Questions for Authors: No.
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: No.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their kind assessment and useful comments. We reply to the raised questions one by one.
**Scalability:** Following the suggestion, we have worked on additional higher-dimensional experiments. We show all results in the main pdf of the rebuttal. In Fig. 1 and 2, we present results for higher-dimensional settings with $d_z = 5, d_x = 20$ as well as $d_z=20, d_x=20$. Overall, these demonstrate that the method can indeed scale beyond the proof-of-concept low-dimensional setting. Specifically, in the $d_z=5, d_x=20$ setting, our adaptive approach is the only one that essentially rules out the functional being negative (or zero). Similarly, the causal MSE is not only the smallest for our adaptive procedure on average but also has the smallest variance across runs.
Finally, we report runtimes for these settings as well, highlighting that while runtime does increase in higher dimensions, the increase is extremely mild. A key driver of the practical runtime is still the number of samples per experiment. As this may grow exceptionally large in certain applications, one may leverage existing techniques to efficiently handle the inversion of large kernel matrices, e.g., [1, 2], to speed up computations. Importantly, we note that computational cost is mostly a concern in our synthetic experiments, as we imagine the time and cost to run physical experiments in real-world applications dwarf computational cost and time, which are at most polynomial in their input size (like kernel methods). We will include these clarifications as well as the additional experimental results (in polished figures) in the final paper.
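As a concrete illustration of the kind of speed-up techniques like [1, 2] provide, here is a minimal toy Nyström sketch (our own illustrative example, not the paper's implementation; the landmark count, kernel, and hyperparameters are arbitrary choices, not values from the experiments):

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    # Gaussian (RBF) kernel matrix between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_solve(X, y, lam=0.1, m=60, seed=0):
    """Approximately solve (K + lam*I) alpha = y via a rank-m Nystrom
    factorization K ~ C W^{-1} C^T and the Woodbury identity, so the
    full n x n kernel matrix is never inverted."""
    n = X.shape[0]
    idx = np.random.default_rng(seed).choice(n, size=m, replace=False)
    C = rbf_kernel(X, X[idx])              # n x m cross-kernel
    W = C[idx]                             # m x m landmark kernel
    # Woodbury: (lam*I + C W^{-1} C^T)^{-1} y
    #         = (y - C (lam*W + C^T C)^{-1} C^T y) / lam
    # (a tiny jitter keeps the m x m system well conditioned)
    inner = np.linalg.solve(lam * W + C.T @ C + 1e-8 * np.eye(m), C.T @ y)
    return (y - C @ inner) / lam

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(300, 1))
y = np.sin(2 * X[:, 0])
alpha_nys = nystrom_solve(X, y)
alpha_full = np.linalg.solve(rbf_kernel(X, X) + 0.1 * np.eye(300), y)
```

With $m \ll n$ landmarks, the dominant cost drops from $\mathcal{O}(n^3)$ for the full solve to roughly $\mathcal{O}(nm^2)$, which is exactly the regime that matters when the number of samples per experiment grows large.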
**Strong Assumptions:** We agree that obtaining strong guarantees in causality typically requires strong assumptions and that our methodology may be restricted by those (as mentioned in l.125, 129). We have the following comments:
1) *valid IVs*: Indeed “valid instruments are hard to come by in practice” (l.129, see also l.404-405). At the same time, we believe this problem to be less grave in our setting than it is in other applications, where one hopes to “accidentally find valid instruments naturally”. In our setting, we control/design the instrument. Therefore, the independence assumption $Z \perp U$ actually holds (by simply randomizing the instrument). Similarly, it is reasonable that domain experts would not include instruments in the considered domain that may have no effect at all on $X$, i.e., we believe it is not outrageous to defer to practitioners to only consider fairly strong instruments for which $Z \not \perp X$ holds. For example, a biologist may know the gene that a CRISPR guide is targeting, or the protein to which a compound binds. Off-target effects remain a very real concern, but when IVs are experimentally selected, we can hope to select IVs with the IV assumptions in mind to minimize exclusion violations.
As mentioned in the paper (e.g., l.125), the exclusion restriction ($Z \perp Y \mid X, U$) remains a potentially restricting factor. At the same time, the fact that we may consider multi-dimensional treatments can effectively make the exclusion restriction more realistic, as we may add all known factors via which $Z$ may affect $Y$ into $X$. A useful example is orally administered antibiotics in so-called “sub-therapeutic dosages”. This means that the antibiotics enter the stomach and gut, but cannot be detected in the bloodstream. When using the gut microbiome for $X$, it is then reasonable to assume that the antibiotics only affect macroscopic health outcomes via the changes to the gut microbiome.
2) *Sensitivity*: We agree that sensitivity analyses towards these assumptions are an important direction for further work. In terms of the exclusion restriction, there has been a line of works (recently summarized and extended into a usable framework by [4]) providing tools to analyze the sensitivity of IV estimates to violations of the exclusion restriction. However, these works are limited to the linear setting and thus do not apply to our setting. Rare extensions to non-linear systems (e.g., [5]) only apply to limited settings and in particular cannot handle multi-dimensional treatments or underspecified settings. Providing such sensitivity analyses not only for point estimates, but to the bounds in our underspecified setting with multi-dimensional treatments would likely be a substantial contribution in and of itself.
There is also an active area looking into other types of partial identification in the IV setting (e.g., [6], where partial identifiability stems from dropping the additive noise assumption). While additive (measurement) noise may actually be a defendable assumption in the settings we’re aiming at, major technical challenges remain in the multiply nested optimizations that would be required to perform sensitivity analysis (in the case of [6], requiring a nested optimization in and of itself) on top of our adaptive procedure.
3) *Source condition*: The practical limitations of the source condition are not conclusively resolved or at least still debated. To the best of our knowledge, [3] are the first to relax this assumption. Some of the ideas there may transfer to our setting to relax the source condition, but we consider this a direction for future work that likely requires substantial novel theoretical developments.
4) *RKHS*: We do not fully understand the restrictions the reviewer is pointing towards by assuming RKHSs. Broadly speaking, RKHS assumptions are in our experience generally viewed as reasonable and flexible for many practical purposes. For characteristic/universal kernels (like the RBF we use), they enjoy universal approximation properties and often come with strong consistency and convergence guarantees (see [7, 8] for an overview). Together with the empirical success of kernel-based methods in many domains and the efficient existing algorithms for practical estimation, we believe them to be a reasonable choice.
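To illustrate why we consider this a mild and practical assumption, here is a minimal kernel ridge regression sketch (generic textbook KRR with an RBF kernel, not our inner bound estimator; all hyperparameters and the target function are illustrative choices):

```python
import numpy as np

def rbf(A, B, gamma=2.0):
    # Characteristic Gaussian kernel between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krr_fit(X, y, lam=1e-3, gamma=2.0):
    # Representer theorem gives the closed-form RKHS solution:
    # alpha = (K + n*lam*I)^{-1} y,  f(x) = sum_i alpha_i k(x, x_i).
    n = len(X)
    alpha = np.linalg.solve(rbf(X, X, gamma) + n * lam * np.eye(n), y)
    return lambda Xq: rbf(Xq, X, gamma) @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.normal(size=200)
f_hat = krr_fit(X, y)

# Evaluate against the noiseless ground truth on held-out inputs.
Xq = np.linspace(-0.8, 0.8, 50)[:, None]
mse = np.mean((f_hat(Xq) - np.sin(3 * Xq[:, 0])) ** 2)
```

The closed-form solve is the same structural ingredient that makes our inner estimation problem tractable: no iterative optimization is needed once the kernel matrix is assembled.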
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed explanation, which has addressed my concerns. I ask this question because recent research in social sciences has highlighted that the validity of instrumental variables remains a significant challenge [1]. I have increased my score accordingly.
Also, the CRISPR example piqued my interest. I'm curious about how your work relates to or differs from research in biostatistics centered on Mendelian randomization (if any).
[1] Rain, rain, go away: 194 potential exclusion-restriction violations for studies using weather as an instrumental variable. Jonathan Mellon.
---
Rebuttal 2:
Comment: [1] Drineas, Petros, and Michael W. Mahoney. "On the Nyström Method for Approximating a Gram Matrix for Improved Kernel-Based Learning." Journal of Machine Learning Research 6 (2005). http://jmlr.org/papers/v6/drineas05a.html
[2] Li, Mu, et al. "Large-Scale Nyström Kernel Matrix Approximation Using Randomized SVD." IEEE Transactions on Neural Networks and Learning Systems 26 (2015): 152-164.
[3] Bennett, Andrew, et al. "Minimax Instrumental Variable Regression and $L_2$ Convergence Guarantees without Identification or Closedness." The Thirty Sixth Annual Conference on Learning Theory. PMLR, 2023.
[4] Cinelli, Carlos, and Chad Hazlett. "An Omitted Variable Bias Framework for Sensitivity Analysis of Instrumental Variables." Available at SSRN 4217915 (2022).
[5] Vancak, Valentin, and Arvid Sjölander. "Sensitivity Analysis of G-estimators to Invalid Instrumental Variables." Statistics in Medicine 42.23 (2023): 4257-4281.
[6] Kilbertus, Niki, Matt J. Kusner, and Ricardo Silva. "A Class of Algorithms for General Instrumental Variable Models." Advances in Neural Information Processing Systems 33 (2020).
[7] Sriperumbudur, Bharath K., Kenji Fukumizu, and Gert R. G. Lanckriet. "Universality, Characteristic Kernels and RKHS Embedding of Measures." Journal of Machine Learning Research 12.7 (2011).
[8] Schölkopf, Bernhard, and Alexander J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, 2002.
---
Rebuttal 3:
Comment: We thank the reviewer for the comment and for increasing their score.
The connection to Mendelian Randomization can be drawn via the Instrumental Variable setup we use as experiments. MR is a special case of instrumental variables in which genetic variants are used as instruments. Therefore---in theory---our method would propose the next genetic variant which would inform the estimator the most. In practice, MR relies on existing genetic variation within a population rather than on designing genetic variation. The difference therefore lies in the assumption about the instrument: in our setting we are able to adjust the instrument/experiment, whereas MR looks at existing genetic variation. We will add this aspect to the manuscript as both approaches target similar questions. Thank you for the comment. | Rebuttal 1:
Rebuttal: We thank the reviewers for their valuable feedback. We included the pdf showcasing various additional experiments in response to specific questions and kindly refer to the individual responses. Due to the space restrictions for the individual rebuttals, we had to heavily compress our replies to the point where we could not express everything we planned to comment on and important details or nuance may have gotten lost. We are more than happy to elaborate on any of the answers in more detail.
Pdf: /pdf/606297c432923cca6e5df24fc07d2d3165044b52.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Ordered Momentum for Asynchronous SGD | Accept (poster) | Summary: In distributed learning environments, asynchronous SGD (ASGD) and its variants are often used to deal with computing nodes with uneven computing power. Momentum methods are beneficial for both optimization and generalization in deep model training, but directly applying momentum to ASGD may hinder its convergence. The paper proposes the OrMo method to integrate momentum into ASGD by sorting gradients based on iteration index. The paper provides a theoretical proof of the convergence of OrMo on non-convex problems.
Strengths: 1. A new asynchronous stochastic gradient descent optimization method (OrMo) is proposed, and a theoretical proof of convergence on non-convex problems is provided, which is the first such proof that does not rely on the bounded delay assumption.
2. Experiments show the advantages of the OrMo method in convergence performance.
3. The paper provides detailed algorithm implementation details and open access links to the experimental code, which enhances the reproducibility.
4. The experiment setting is clear, and the wall-clock time shows that the OrMo outperforms baseline methods significantly.
Weaknesses: 1. Although experimental validation was performed on the CIFAR dataset, the generalization ability of the OrMo method may need to be further tested in more types of datasets and different application scenarios.
2. The paper does not discuss in detail the computational resource requirements of the OrMo method on data and models of different scales. Specifically, the ordered momentum needs to be cached on the server, so the extra storage cost should be considered.
3. The proposed re-ordering method is inherently similar to staleness control. The difference is that OrMo uses an exponential term in $\beta$ to control the staleness of the incoming gradients.
Technical Quality: 2
Clarity: 3
Questions for Authors: Figure 3 shows that the curve is very unstable. So, the model from which epoch is used as the final model for testing?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the constructive comments and the support of our work. Below, we respond to the raised concerns and questions point by point.
**Response to Weakness 1:**
Thank you for your valuable feedback and suggestions regarding the generalization ability of the OrMo method. We acknowledge the importance of testing OrMo on a wider range of datasets and in various application scenarios to fully understand its generalization capabilities.
We conduct an additional experiment by training a Vision Transformer [[1](https://github.com/lucidrains/vit-pytorch/blob/main/vit_pytorch/vit_for_small_dataset.py)] model on the CIFAR10 dataset. The experiments are conducted on TITAN XP GPUs. The hyperparameter settings follow those in the submitted paper. Due to current constraints in time and computational resources, the experiments are still ongoing. More experimental results will be uploaded as soon as possible. The following table shows the test accuracy of OrMo and SMEGA$^2$ in the homogeneous setting when there are $32$ workers.
| | SMEGA$^2$ | OrMo |
| -- |--|--|
|K=32 (homogeneous) | 60.67\% | 61.32\% |
We appreciate your understanding and hope that the current results still offer valuable insights into the potential of OrMo.
**Response to Weakness 2:**
Compared with vanilla ASGD, the only additional resource requirement of OrMo is the storage cost for the momentum on the server. As a momentum method, this extra storage cost is inevitable. The total storage complexity for the momentum in OrMo is $\mathcal{O}(d)$, where $d$ is the dimension of the momentum.
Moreover, OrMo only maintains one momentum on the server. In contrast, shifted momentum [A] maintains one momentum on each worker. Therefore, the total storage complexity $\mathcal{O}(d)$ for the momentum in OrMo is much less than the $\mathcal{O}(Kd)$ complexity of shifted momentum [A], where $d$ is the dimension of the momentum and $K$ is the number of workers.
**Response to Weakness 3:**
The update rule in OrMo is motivated by that in SSGDm. The key insight of the update rule of the momentum and parameter in OrMo lies in tracking the sequences $\hat{\bf w}_t$ and $\hat{\bf u}_t$ defined in Subsection 3.3, which are updated in a manner similar to SSGDm. OrMo cannot be simply regarded as a staleness control method. Due to limited space here, please refer to paragraphs 2-5 in our Author Rebuttal for the details.
**Response to Question 1:**
The main reason for the instability of the curves in Figure $3$ is that naive ASGDm occasionally fails to converge, as shown in Table $1$. In contrast, our proposed method OrMo demonstrates stable convergence across all experiments. We use the model obtained after the last epoch (i.e., $160$-th epoch for CIFAR10 and $200$-th epoch for CIFAR100) as the final model for testing.
We hope that we have addressed the reviewer's concerns, and we are always willing to respond to any further concerns. Meanwhile, we would greatly appreciate it if the reviewer could re-evaluate our work based on our response.
[A] Giladi et al., At Stability's Edge: How to Adjust Hyperparameters to Preserve Minima Selection in Asynchronous Training of Neural Networks? ICLR 2020.
---
Rebuttal Comment 1.1:
Title: Thanks for responses
Comment: I have two follow-up comments:
- 1. Why do you train ViT on CIFAR-10, instead of ResNet? A ResNet18 trained with SGD momentum can obtain around 94% test accuracy, which should be the most standard baseline to compare optimizers with SGD momentum.
- 2. While the derivation of how to adjust the momentum is complex, the core of it is still like staleness control and a penalty function.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank you for your timely response. We would like to answer the questions point by point as follows:
1. In the main text of our submission, we have presented the empirical results of training a ResNet20 model on the CIFAR-10 dataset. More experimental results about ResNet20 are given in the response to Reviewer NjVh.
Meanwhile, we promised in our initial responses to add the experimental results of training a Vision Transformer model on CIFAR-10. The number of workers is set as $32$. The additional experiments are now finished, and we present the results below. Each experiment is repeated 3 times.
|Test Accuracy | (hom.) | (het.) |
| -------|--------|---------|
|ASGD | 56.09\% $\pm$ 0.41\% | 56.33\% $\pm$ 0.56\% |
|naive ASGDm | 55.66\% $\pm$ 0.15\% | 55.74\% $\pm$ 0.35\% |
|shifted momentum | 57.43\% $\pm$ 0.28\% | 57.39\% $\pm$ 0.47\% |
|SMEGA$^2$ | 60.54\% $\pm$ 0.05\% | 60.66\% $\pm$ 0.12\% |
|OrMo | **61.21\% $\pm$ 0.17\%** | **61.30\% $\pm$ 0.11\%** |
We have also tested the performance of the OrMo method on the ResNet18 model, as you mentioned, over the past three days. Compared to the above Vision Transformer model which has about $0.5$ million parameters, ResNet18 has $10$ million parameters. OrMo achieves around 94% test accuracy on the ResNet18 model, as shown in the table below. Please note that while shifted momentum performs well in terms of final test accuracy, it converges more slowly than OrMo. Specifically, OrMo requires approximately 30 epochs to reach 85% test accuracy, whereas shifted momentum requires 60 epochs. Due to format constraints in the discussion period, we are unable to include figures in this response. We will provide more details (e.g., training curve figures) in the final version. These additional results are consistent with those in the main text and further support the conclusions of our work.
|Test Accuracy | (hom.) | (het.) |
| -------|--------|---------|
|ASGD | 91.45\% | 91.52\% |
|naive ASGDm | 93.74\% | 93.10\% |
|shifted momentum | 94.02\% | 94.20\% |
|SMEGA$^2$ | 93.72\% | 93.36\% |
|OrMo | **94.32\%** | **94.26\%** |
2. We sincerely thank you for the valuable comment, which made us rethink our method. We have provided a discussion about the differences between OrMo and existing penalty function methods in the general response, and we will add the discussion in the final version. Meanwhile, we would like to briefly summarize the differences between OrMo and existing methods that use a penalty function below.
+ For penalty function methods, the key insight is to reduce the contribution of gradients with larger delays to the parameter update.
For OrMo, the key insight lies in tracking the sequences $\hat{\bf w}_t$ and $\hat{\bf u}_t$ defined in Subsection 3.3, which are updated in a manner similar to SSGDm.
+ In penalty function methods, gradients with larger delays are given smaller weights when updating the parameter.
In OrMo, gradients with larger delays are given smaller weights when updating the momentum, but larger weights when updating the parameter. Both theoretical derivations and empirical ablation studies highlight the importance of giving larger weights to gradients with larger delays when updating the parameter.
For more details, please refer to our general responses.
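As a toy illustration of this weighting, consider the following heavily simplified single-gradient sketch (a hypothetical reconstruction for intuition only, following one plausible reading of the tracking argument in Subsection 3.3 --- not the exact OrMo update; the `ormo_step` helper and its coefficients are our own illustrative choices):

```python
import numpy as np

def ormo_step(w, u, g, delay, eta=0.1, beta=0.9):
    """Illustrative delayed-gradient step with OrMo-style weighting.

    A gradient with delay tau enters the momentum with the *smaller*
    weight beta**tau, while the parameter additionally receives the
    *larger* catch-up weight eta*(1 - beta**tau)/(1 - beta), mimicking
    the contributions the gradient would already have made under SSGDm."""
    u = beta * u + beta ** delay * g                     # decayed momentum update
    w = w - eta * u                                      # usual momentum step
    w = w - eta * (1 - beta ** delay) / (1 - beta) * g   # catch-up correction
    return w, u

# With delay 0 this reduces to a plain heavy-ball step; larger delays
# shift weight from the momentum update toward the parameter update.
w, u = ormo_step(np.zeros(3), np.zeros(3), np.ones(3), delay=0)
```

The sketch makes the contrast with penalty methods concrete: the staleness-dependent coefficient $\beta^{\tau}$ shrinks the momentum contribution as the delay grows, while the parameter-side coefficient $(1-\beta^{\tau})/(1-\beta)$ grows with the delay.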
Thank you again for letting us know your remaining concerns. We are always willing to answer any further questions.
---
Rebuttal 2:
Title: Thanks for explanations
Comment: Thanks for the efforts and explanations. As illustrated by authors,
> In penalty function methods, gradients with larger delays are given smaller weights when updating the parameter. In OrMo, gradients with larger delays are given smaller weights when updating the momentum, but larger weights when updating the parameter.
Thus, the main difference between the penalty methods and OrMo is the object being updated, i.e., parameter vs. momentum. Inherently, the core idea is similar, as both adjust the weights according to the staleness. In light of this, I think the claim that OrMo is not a staleness-control method is not appropriate. Maybe the authors should reconsider this claim. Nevertheless, I appreciate the idea of integrating momentum into ASGD by sorting gradients based on iteration index. Thus, I'd like to keep the positive score.
---
Rebuttal Comment 2.1:
Comment: We greatly appreciate the insightful follow-up comments, which helped us rethink our work. We agree that OrMo can be considered another type of staleness-control method since the weights in OrMo depend on the staleness. We will add the discussion in the final version and sincerely thank you for your support of our work. | Summary: This paper proposed a new ordered momentum for asynchronous SGD based on the delayed update characteristic of asynchronous training, which weights the momentum according to the actual iteration index. The authors demonstrated the convergence of the algorithm both theoretically and empirically.
Strengths: * This paper proposes a new momentum algorithm for ASGD and proves that the algorithm is convergent.
* Experiments are conducted in this paper to verify the effectiveness of the OrMo algorithm.
Weaknesses: * This paper studies asynchronous training algorithms but does not explore the effect of asynchronous delays on convergence.
* The theoretical results in this paper are limited by the number of distributed workers $K$, i.e., the algorithm does not scale well to large-scale distributed training systems.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. The theoretical results in this paper do not contain asynchronous delay information and fail to reveal the effect of asynchronous delay on the convergence of the algorithm. This may be due to rough gradient upper bounds that obscure the delay gradient information. However, the reviewer believes that analyzing the effect of asynchronous delay is essential in a work that studies ASGD algorithms. Also, the experimental section did not test the empirical effect of different delays on the algorithm.
2. Theorem 1 shows that the convergence of the algorithm is linearly dependent on the number of distributed workers $K$, which implies that the algorithm will fail to converge when $K$ is large. In light of this theoretical result, can't the algorithm proposed in this paper be applied to large-scale distributed training systems?
3. The algorithm design and theoretical analysis in this paper relies on a fixed constant learning rate. Can this be extended to adaptive learning rates or decreasing learning rates that are more common in practice?
4. The result of Lemma 4 in the main text is inconsistent with the proof in the Appendix.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Please refer to Weaknesses and Questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the constructive comments and the support of our work. Below, we respond to the raised concerns and questions point by point.
**Response to Question 1 (Weakness 1):**
This is a very good question. In fact, a core contribution of this paper is exactly the theoretical guarantee for OrMo, which doesn't contain the asynchronous delay information $\tau_{max}$ (the maximum delay). In contrast, previous works, such as [B], show that ASGD has a convergence rate of $\mathcal{O}(\frac{\sigma}{\sqrt{T}}+\frac{\tau_{max}}{T})$. However, this convergence result containing the maximum delay $\tau_{max}$ is not consistent with the actual convergence performance, since vanilla ASGD can converge well when $\tau_{max}$ is extremely large in practice.
The most closely related work to ours is [A], which analyzes vanilla ASGD with unbounded gradient delays. We extend the analysis for vanilla ASGD in [A] to OrMo. Our analysis shows that the convergence of OrMo doesn't depend on the maximum delay $\tau_{max}$. Our experimental results offer additional insights into the convergence rate. In the following table, we show the maximum delays of two different settings in CIFAR10 training when $K=64$. In heterogeneous settings, the existence of slow workers results in a maximum delay that exceeds that of the homogeneous setting by about 100 times. Our experiments show that OrMo can achieve comparable performances in these two settings, which aligns with our theoretical analysis results.
| | homogeneous |heterogeneous|
| --- |--| --|
| Maximum Delay | 343 | 30911 |
| Training Loss | 0.15 $\pm$ 0.02 | 0.16 $\pm$ 0.03 |
| Test Accuracy | 88.03% $\pm$ 0.28% | 87.76% $\pm$ 0.57% |
Our analysis provides an upper bound for the convergence rate of OrMo and shows that the maximum delay doesn't affect the convergence rate.
**Response to Question 2 (Weakness 2):**
In our analysis, OrMo achieves a convergence rate of $\mathcal{O} (\frac{\sigma}{\sqrt{T}}+ (\frac{K}{T})^{\frac{2}{3}} + \frac{K}{T})$. Our convergence result aligns with the theoretical results in Theorem $2$ of [A], which are currently the tightest convergence analyses for vanilla ASGD. This convergence rate indicates that OrMo has a comparable convergence rate to SGD for non-convex problems when $T$ is sufficiently large, as the term $\frac{\sigma}{\sqrt{T}}$ becomes dominant and the non-dominant terms $(\frac{K}{T})^{\frac{2}{3}}+\frac{K}{T}$ have a minimal impact on the overall convergence rate. Thus, OrMo can be applied to large-scale distributed training systems.
In existing distributed optimization works (e.g., [C][D][E]), the non-dominant terms in their theoretical results are also constrained by the number of distributed workers. We take the convergence analysis for Gossip-PGA in [E] as an example. According to Subsection $1.1$ of [E], Gossip-PGA has a convergence rate of $\mathcal{O} (\frac{\sigma}{\sqrt{K\tilde{T}}} + \frac{1}{\tilde{T}^{\frac{2}{3}}}+\frac{1}{\tilde{T}})$, where $\tilde{T}$ is the number of iterations. Please note that Gossip-PGA is a synchronous algorithm, which requires $K$ gradients on all workers for one update iteration of the parameter. In contrast, one update iteration of the parameter in OrMo requires only one gradient from a single worker. For fair comparison, we rewrite the convergence rate of Gossip-PGA in terms of the number of the gradient computations: $\mathcal{O} (\frac{\sigma}{\sqrt{C}} + \frac{K^{\frac{2}{3}}}{C^{\frac{2}{3}}}+\frac{K}{C})$, where $C$ is the number of the gradient computations and $C = K\tilde{T}$. The convergence rate of OrMo in terms of the number of the gradient computations can be written as $\mathcal{O} (\frac{\sigma}{\sqrt{C}} + \frac{K^{\frac{2}{3}}}{C^{\frac{2}{3}}}+\frac{K}{C})$. We can observe that the non-dominant terms $\mathcal{O}(\frac{K^{\frac{2}{3}}}{C^{\frac{2}{3}}}+\frac{K}{C})$ in [E] and OrMo are both constrained by the number of the workers.
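The term-by-term rewriting with $C = K\tilde{T}$ above can be sanity-checked numerically (a simple arithmetic check for the reader's convenience, with arbitrary example values for $K$ and $\tilde{T}$; not part of the formal analysis):

```python
import math

K, T_tilde = 8, 1000   # arbitrary worker count and synchronous iteration count
C = K * T_tilde        # total number of gradient computations, C = K * T_tilde

# Each Gossip-PGA rate term expressed in iterations equals the same term
# expressed in gradient computations (the sigma factor is omitted):
assert math.isclose(1 / math.sqrt(K * T_tilde), 1 / math.sqrt(C))
assert math.isclose(1 / T_tilde ** (2 / 3), K ** (2 / 3) / C ** (2 / 3))
assert math.isclose(1 / T_tilde, K / C)
```

Since $C = K\tilde{T}$, we have $1/\sqrt{K\tilde{T}} = 1/\sqrt{C}$, $1/\tilde{T}^{2/3} = K^{2/3}/C^{2/3}$, and $1/\tilde{T} = K/C$, confirming that the non-dominant terms of both methods scale identically in the number of gradient computations.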
**Response to Question 3:**
We sincerely thank you for the constructive comment. We agree that incorporating adaptive learning rates into asynchronous methods is an important research direction. Due to limited space here, please refer to paragraphs 5-6 of our Author Rebuttal for more discussion about the adaptive learning rates. The theoretical analysis for a constant learning rate is a common setting in existing distributed optimization works [C][D][E]. Actually, the stage-wise decreasing learning rate in our experiment can be considered as a combination of constant learning rates and the widely-used restarting technique [F][G].
We promise to add the statements above in the final version.
**Response to Question 4:**
We sincerely thank you for your careful review. There is a typo in Lemma 4 in the main text. The right-hand side of the inequality in Lemma 4 should be $2\frac{\beta^2}{(1-\beta)^4}\eta^2K^2G^2$. We will fix the typo in the final version.
We sincerely thank the reviewers for their valuable time and their support of our work again. Meanwhile, we would greatly appreciate it if the reviewer could re-evaluate our work in light of our response.
[A] Mishchenko et al., Asynchronous SGD Beats Minibatch SGD Under Arbitrary Delays. NeurIPS 2022.
[B] Stich et al., The Error-Feedback framework: SGD with Delayed Gradients. JMLR 2020.
[C] Xie et al., CSER: Communication-efficient SGD with Error Reset. NeurIPS 2020.
[D] Yu et al., On the Linear Speedup Analysis of Communication Efficient Momentum SGD for Distributed Non-Convex Optimization. ICML 2019.
[E] Chen et al., Accelerating Gossip SGD with Periodic Global Averaging. ICML 2021.
[F] Powell et al., Restart procedures for the conjugate gradient method. Mathematical Programming 1977.
[G] Li et al., Restarted Nonconvex Accelerated Gradient Descent: No More Polylogarithmic Factor in the $O(\epsilon^{-7/4})$ Complexity. JMLR 2023.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their responses. I currently have no further questions.
---
Reply to Comment 1.1.1:
Comment: Thank you for the response. We are grateful for your support of our work. | Summary: The paper introduces Ordered Momentum (OrMo), a novel method enhancing the performance of ASGD by systematically incorporating momentum based on the iteration indexes of gradients. The authors provide theoretical proofs demonstrating the convergence of OrMo for non-convex problems, marking an advancement as this is purportedly the first such convergence analysis for ASGD with momentum that does not rely on the bounded delay assumption. Empirical results further validate that OrMo outperforms standard ASGD and other asynchronous variants with momentum in terms of convergence rates.
Strengths: 1. The proposed OrMo algorithm addresses the common problem of momentum integration in asynchronous settings, which is often associated with convergence issues.
2. The paper provides a convergence analysis of OrMo in non-convex settings without the need for a bounded delay assumption.
Weaknesses: 1. The writing and presentation of the paper seems to require substantial improvement. Especially for Section 3, it is hard to fully follow the algorithmic and theoretical details, making it laborious to check the correctness.
2. The paper considers only scenarios with homogeneous data distributions, where the workers access to a shared dataset $\mathcal{D}$.
3. Assumption 1 includes a bounded gradient assumption, which is rather demanding.
4. The experiment setup does not quite match the theoretical analysis. In the experiments, the learning rate is multiplied by a factor after a specific number of epochs, while Theorem 1 is based on a constant learning rate. There is also no discussion of how the multiplication factors and the intervals are tuned.
Technical Quality: 2
Clarity: 1
Questions for Authors: 1. What is the impact of the size of the waiting set $\mathcal{C}$? How to choose the size of $\mathcal{C}$?
2. What are the key insights that the convergence can be guaranteed without bounded delay?
3. What is the complexity and cost in terms of the ordering of gradients?
Confidence: 2
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: The authors have partially addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the constructive comments. Below, we respond to the raised concerns and questions point by point.
**Response to Weakness 1:**
In Section 3, we first propose a new reformulation of SSGDm, which serves as the inspiration for designing OrMo for ASGD. We then present the details of OrMo for ASGD, including the algorithm and convergence analysis. Thanks for your valuable feedback. We will refine the writing and presentation of this section.
**Response to Weakness 2:**
Thank you for the constructive comment. Our analysis can be easily extended to heterogeneous data distributions with modified assumptions:
Assumption 1' : For $\forall {\bf w} \in \mathbb{R}^d, \forall k \in [K]$,
$\mathbb{E}\_{\xi^k}[\nabla f({\bf w}; \xi^k)]=\nabla F\_k({\bf w}),$ $\mathbb{E}\_{\xi^k}\\|\nabla f({\bf w}; \xi^k)-\nabla F\_k({\bf w})\\|^2 \leq \sigma^2,$ and $\mathbb{E}\_{\xi^k}\\|\nabla f({\bf w};\xi^k)\\|^2 \leq G^2.$
Assumption 4 (heterogeneity): There exists $\zeta \geq 0$ such that $\\|\nabla F\_k({\bf w})-\nabla F({\bf w})\\|^2\leq \zeta^2, \forall {\bf w} \in \mathbb{R}^d, \forall k \in [K]$.
OrMo has the following convergence rate for heterogeneous data distributions:
$$\frac{1}{T}\sum\_{t=1}^T \mathbb{E}\\|\nabla F({\bf{w}}\_t)\\|^2 \leq \mathcal{O} (\frac{\sigma}{\sqrt{T}}+ (\frac{K}{T})^{\frac{2}{3}} + \frac{K}{T} + \zeta^2).$$
Compared to the homogeneous setting, the convergence rate in the heterogeneous setting includes an additive term $\zeta^2$, where the dependence on $\zeta^2$ is unavoidable without additional assumptions in asynchronous methods. This result is also consistent with existing works [A].
**Response to Weakness 3:**
Thank you for pointing out the demanding nature of the bounded gradient assumption in Assumption $1$. We agree that it is somewhat stronger than the other assumptions. However, we consider it acceptable since the bounded gradient assumption is widely used in theoretical analyses in existing distributed optimization works (e.g., [B] [C]), and it is also consistent with our empirical experience during model training. We will add the discussion above in the final version, and we thank you again for the valuable comment.
**Response to Weakness 4:**
Thank you for the valuable comments. Actually, the stage-wise decreasing learning rate in our experiment can be considered as a combination of constant learning rates and the widely-used restarting technique [D] [E].
In our experiments, we only tune the initial learning rate while the multiplication factors and the intervals are fixed. In particular, the settings of the multiplication factors and intervals in our experiment are exactly the same as those in previous works ([F] for CIFAR10 and [B] for CIFAR100). These are commonly used configurations for CIFAR training.
We sincerely apologize for not making it clear in the submission and promise to add the statements above in the final version.
**Response to Question 1:**
Thank you for the question. The waiting set $\mathcal{C}$ is the set of worker indexes that are waiting for the server to send the parameter. The size of the waiting set indicates the number of workers that are idle and waiting for the server to send the parameter. In other words, the size of the waiting set $\mathcal{C}$ is not a hyper-parameter and thus does not need to be set manually. This concept is introduced to unify SSGD and ASGD into a single framework in Algorithm $1$.
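To make the role of $\mathcal{C}$ concrete, here is a minimal sketch (our own illustration, not the paper's Algorithm 1) of how a single scheduler covers both SSGD and ASGD: the waiting set simply records which workers are idle, and the only difference between the two regimes is when the server releases them.

```python
def serve_waiting_workers(C, K, synchronous):
    """Return (workers_to_serve, updated_waiting_set).

    C is the set of worker indexes currently waiting for the server to
    send the parameter; its size just reflects how many workers are idle,
    so it is never set manually.
    """
    if synchronous:
        # SSGD-style scheduler: release workers only once all K are waiting.
        if C == set(range(K)):
            return set(range(K)), set()
        return set(), set(C)
    # ASGD-style scheduler: serve any waiting worker immediately.
    return set(C), set()

# ASGD never leaves a worker idle; SSGD holds workers until all K arrive.
```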
**Response to Question 2:**
We use the example in [A] to describe the insights. Suppose that there are two parallel workers: one fast worker that takes only $10^{-6}$ seconds to compute a stochastic gradient and one slow worker that takes $1$ second. If we implement ASGD with these two workers, the delay of the slow worker's gradients will be $1$ million, since during the $1$ second the slow worker spends computing its gradient, the gradients from the fast worker will produce $1$ million updates. Consequently, analyses based on $\tau\_{max}$ degrade by a factor of $10^6$. However, ASGD actually performs very well in this scenario, since $99.9999\\%$ of the updates on the server use stochastic gradients with no delay.
Briefly speaking, slower workers, which typically send heavily delayed gradients, participate in fewer training iterations, so existing analyses of ASGD based on $\tau\_{max}$ are not aligned with its actual performance.
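This example can be checked numerically with a small simulation (our own toy model with scaled-down, hypothetical timings; with `fast_time=1e-6` the maximum delay would reach $\sim 10^6$ in the same way):

```python
def simulate_delays(fast_time, slow_time, horizon):
    """Toy two-worker ASGD run: return (max_delay, zero_delay_fraction).

    Each worker repeatedly computes a gradient on the latest parameter it
    received; a gradient's delay is the number of server updates performed
    since that parameter version was sent out.
    """
    events = []  # (finish_time, worker_name)
    for name, step in (("fast", fast_time), ("slow", slow_time)):
        t = step
        while t <= horizon:
            events.append((t, name))
            t += step
    events.sort()

    version = {"fast": 0, "slow": 0}  # parameter version each worker holds
    max_delay, zero_delay = 0, 0
    for i, (_, worker) in enumerate(events):
        delay = i - version[worker]
        max_delay = max(max_delay, delay)
        zero_delay += (delay == 0)
        version[worker] = i + 1  # worker restarts from the newest parameter
    return max_delay, zero_delay / len(events)

# Scaled-down example (1e-3 s vs 1 s): the slow worker's gradients arrive
# with delay ~1000, yet nearly all server updates use zero-delay gradients.
max_delay, frac = simulate_delays(fast_time=1e-3, slow_time=1.0, horizon=2.0)
```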
**Response to Question 3:**
OrMo organizes the gradients into the momentum in order based on their iteration indexes. Since the momentum is a weighted sum of the gradients with predefined weights, each gradient is incorporated into momentum by multiplying it with a weight $\beta^{b\_{t+1} - \lceil \frac{ite(k\_t, t)}{K}\rceil}$. The complexity related to the ordering of the gradient is $\mathcal{O}(1)$, making the cost negligible.
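As an illustration (a scalar toy version, not the authors' Algorithm 2; the bucket-index formula $b\_{t+1} = \lceil (t+1)/K \rceil$ is our assumption, since the excerpt does not define it explicitly), one server update costs constant work per arriving gradient:

```python
import math

def ormo_server_step(u, w, g, ite, t, K, beta=0.9, eta=0.1):
    """Scalar sketch of one OrMo server update (cf. lines 12-13 of Algorithm 2).

    g is the arriving gradient carrying iteration index ite; t is the
    current server iteration. Computing the weight and applying the two
    updates is O(1) work, regardless of the delay.
    """
    b_next = math.ceil((t + 1) / K)          # assumed bucket index b_{t+1}
    exponent = b_next - math.ceil(ite / K)   # how many buckets behind g is
    u = u + eta * beta ** exponent * g                           # momentum
    w = w - eta * (1 - beta ** (exponent + 1)) / (1 - beta) * g  # parameter
    return u, w

# An up-to-date gradient (exponent 0) enters the momentum with weight 1;
# a gradient one bucket behind gets momentum weight beta = 0.9 but a
# *larger* parameter weight (1 - beta^2)/(1 - beta) = 1.9, as noted in
# the global rebuttal.
u, w = ormo_server_step(u=0.0, w=1.0, g=0.5, ite=3, t=3, K=4)
```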
We hope that we have addressed the reviewer's concerns, and we are always willing to respond to any further concerns. Meanwhile, we would greatly appreciate it if the reviewer could re-evaluate our work based on our response.
[A] Mishchenko et al., Asynchronous SGD Beats Minibatch SGD Under Arbitrary Delays. NeurIPS 2022.
[B] Xie et al., CSER: Communication-efficient SGD with Error Reset. NeurIPS 2020.
[C] Xu et al., Detached Error Feedback for Distributed SGD with Random Sparsification. ICML 2022.
[D] Powell et al., Restart procedures for the conjugate gradient method. Mathematical Programming 1977.
[E] Li et al., Restarted Nonconvex Accelerated Gradient Descent: No More Polylogarithmic Factor in the $O(\epsilon^{-7/4})$ Complexity. JMLR 2023.
[F] He et al., Deep Residual Learning for Image Recognition. CVPR 2016.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the response.
The paper integrates momentum into ASGD. Compared to vanilla ASGD [1], this paper requires a stronger assumption, while not improving the convergence rate. I am inclined to change my score to 4. As I have not carefully checked the technical details of the analysis, my confidence remains limited.
[1] Mishchenko et al., Asynchronous SGD Beats Minibatch SGD Under Arbitrary Delays, NeurIPS 2022.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank the reviewer for the willingness to increase the rating and the follow-up comments. Meanwhile, we would like to address the remaining concern below.
**Q: The paper integrates momentum into ASGD. Compared to vanilla ASGD [1], this paper requires a stronger assumption, while not improving the convergence rate.**
+ We would like to first clarify that momentum is widely used in distributed machine learning and has a good performance in many practical applications. There are several existing works that integrate momentum into ASGD [2][3][4]. However, as far as we know, there is almost no convergence analysis of these methods in existing works. In view of this challenge, we introduce the ordered momentum to ASGD and present the method OrMo, which is theoretically proven to be convergent and empirically performs better than existing methods.
+ Although the analysis of ASGD for adaptive learning rates in [1] does not require the bounded gradient assumption, the analysis of ASGD for constant learning rates in [1] is based on the bounded gradient assumption. Therefore, the analyses in our work and in [1] for constant learning rates are under the same assumptions ($L$-Lipschitz smoothness, lower-bounded objective, and bounded gradient).
+ As presented in Remark 6 at the end of Section 3, OrMo can also be used with adaptive learning rates. We have discussed this point in the general response, and will further explore it in future work.
Given the reasons above, we carefully think that there are adequate theoretical contributions in our work. We hope that our response can address the reviewer's concern. Meanwhile, we would greatly appreciate it if the reviewer could re-evaluate our work based on our response.
[1] Mishchenko et al., Asynchronous SGD Beats Minibatch SGD Under Arbitrary Delays, NeurIPS 2022.
[2] Giladi et al., At stability’s edge: How to adjust hyperparameters to preserve minima selection in asynchronous training of neural networks?, ICLR 2020.
[3] Cohen et al., SMEGA2: distributed asynchronous deep neural network training with a single momentum buffer, ICPP 2022.
[4] Mitliagkas et al., Asynchrony begets momentum, with an application to deep learning. In Proceedings of the Annual Allerton Conference on Communication, Control, and Computing, 2016. | Summary: This paper introduces OrMo, a novel method to weight stale gradients received on the server in asynchronous SGD with momentum. The algorithm is based on the idea of organizing the sequence of gradients received on the server into "buckets" to approximate standard minibatch SGD with momentum. The method is shown to converge under arbitrary (possibly unbounded) delays, with experiments on CIFAR-10 and CIFAR-100 validating the strength of OrMo compared to some baselines.
Strengths: * **Convergence without bounded delay assumption:** Convergence is proved without relying on the bounded delay assumption, extending recent analysis of ASGD [[Mishchenko et al., 2022]](https://arxiv.org/pdf/2206.07638 ), [[Koloskova et al., 2022]](https://arxiv.org/pdf/2206.08307 ) to ASGD with momentum.
* **Clear explanation of the "momentum as a sum of weighted buckets" idea:** Time is spent to clearly lay down the ideas of "buckets" and re-writing the momentum updates so that OrMo comes naturally, which is helped by Fig.1 \& 2.
* **Promising experimental results:** Compared to naive implementation (e.g., plain ASGD with momentum) and some chosen baselines, OrMo is shown to perform better in practice, especially in cases where the workers' speeds are heterogeneous.
Weaknesses: * **Lack of discussion of how closely OrMo approximates standard synchronous SGD with momentum:** While the idea of OrMo may seem natural when we visualize standard momentum as a "sum of weighted buckets of gradients", the update performed by OrMo only *approximates* synchronous momentum. Indeed, as the parameters hosted on the server are updated at each stochastic gradient in the asynchronous setting, the *points* $w_t$ on which the gradients are computed all differ even inside a given "bucket", which is not the case for synchronous SGDm. Yet, this is not discussed.
* **Standard baselines are lacking in the experiments:** In Tables 1\&2 and Figs. 3\&4, a comparison *(for reference only)* to synchronous SGDm could be interesting to have (to see whether or not there is still a gap to fill with synchronous methods in terms of "performance per iteration"). Moreover, it seems that the baselines "tuning the momentum value" for asynchronous SGD with momentum described in [[Zhang and Mitliagkas, 2018]]( https://arxiv.org/pdf/1706.03471 ) and [[Mitliagkas et al., 2016]]( https://arxiv.org/pdf/1605.09774 ) are lacking. Finally, we can see OrMo as a method to penalize stale gradients received by the server, using some sort of "exponential weight". Empirically, other "penalty functions" have been tried in Asynchronous Federated Learning, such as the ones described in Part 5.2 of [[Xie et al., 2020]]( https://www.opt-ml.org/papers/2020/paper_28.pdf ). Have you compared OrMo to them?
Technical Quality: 3
Clarity: 3
Questions for Authors: * In Theorem 1, the quantities $L,G$ do not appear in the convergence rate (contrary to what is displayed in Theorem 2 of [[Mishchenko et al., 2022]](https://arxiv.org/pdf/2206.07638 ) ). Is it normal?
* Fig.5: As the Parameter Server is known to become a bottleneck for communications at scale, how does OrMo fare compared to synchronous SGDm at larger scales than $K=16$? (for instance, $K=64$)
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: .
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the constructive comments and the support of our work. Below, we respond to the raised concerns and questions point by point.
**Response to Weakness 1:**
As you pointed out, the gradients in SSGDm and OrMo are computed at different points due to the asynchronous nature of OrMo. In SSGDm, the set of the computed gradients can be expressed as $\\{{\bf g}\_0^0, \cdots, {\bf g}\_0^{K-1}, {\bf g}\_K^0, \cdots, {\bf g}\_K^{K-1}, {\bf g}\_{2K}^0, \cdots, {\bf g}\_{2K}^{K-1}, \cdots\\}$. In contrast, the set of the computed gradients in OrMo can be formulated as $\\{{\bf g}\_0^1, \cdots, {\bf g}\_0^{K-1}, {\bf g}\_1^{k\_0}, {\bf g}\_2^{k\_1}, {\bf g}\_3^{k\_2}, \cdots, {\bf g}\_K^{k\_{K-1}},\cdots\\}$. This difference between SSGDm and OrMo makes it challenging to directly measure how closely OrMo approximates SSGDm. To address this, in the theoretical analysis presented in Subsection 3.3, we define two auxiliary sequences, $\hat{\bf w}\_t$ and $\hat{\bf u}\_t$, for the parameter and the momentum, respectively. These sequences are updated in a manner similar to SSGDm but use the gradient set from OrMo instead of SSGDm's gradient set. *Lemma $1$ and Lemma $2$ rigorously formulate the differences between OrMo and the auxiliary sequences, providing a theoretical foundation for understanding the relationship between OrMo and SSGDm despite their differences in gradient computation.*
Thanks for your very insightful comment which makes us think more deeply about the relationship and difference between our OrMo and its approximation of standard synchronous SGD with momentum. We will clarify this approximation in the final version.
**Response to Weakness 2:**
**2.1: A comparison (for reference only) to synchronous SGDm could be interesting to have.**
The results are presented in the following table, and we will refine the figures in the final version.
|CIFAR10 | K=16 | K=64 |
| -- |--|--|
|OrMo (hom.) | 90.95\% $\pm$ 0.27\% | 88.03\% $\pm$ 0.28\% |
|OrMo (het.) | 91.01\% $\pm$ 0.10\% | 87.76\% $\pm$ 0.57\% |
|SSGDm | 90.85\% $\pm$ 0.26\% | 88.96\% $\pm$ 0.18\% |
|CIFAR100 | K=16 | K=64 |
| -|-|-|
|OrMo (hom.) | 67.56\% $\pm$ 0.34\% | 65.48\% $\pm$ 0.17\% |
|OrMo (het.) | 67.71\% $\pm$ 0.33\% | 65.43\% $\pm$ 0.35\% |
|SSGDm | 67.86\% $\pm$ 0.32\% | 66.43\% $\pm$ 0.50\% |
**2.2: Moreover, it seems that the baselines "tuning the momentum value" for asynchronous SGD with momentum are lacking.**
Thanks for your insightful advice. Following the suggestion in [A], we conducted experiments to tune the momentum value $\beta$ for naive ASGDm and present the results on CIFAR10 here. While tuning the momentum value can enhance the performance of naive ASGDm, hyperparameter tuning is quite time-consuming and costly. In contrast, our method achieves better performance using the commonly used momentum value of 0.9, without requiring extensive tuning. We will include the momentum value tuning experiments in the final version.
|Algorithm ($\beta$) | K=16 (hom.) | K=64 (hom.) | K=16 (het.) | K=64 (het.) |
| -|-|-|-|-|
|naive ASGDm (0.1) | 89.85\% $\pm$ 0.24\% | 83.93\% $\pm$ 0.25\% | 89.95\% $\pm$ 0.19\% | 83.76\% $\pm$ 0.34\% |
|naive ASGDm (0.3) | 89.91\% $\pm$ 0.20\% | 84.23\% $\pm$ 0.49\% | 90.26\% $\pm$ 0.05\% | 84.43\% $\pm$ 0.22\% |
|naive ASGDm (0.6) | 90.39\% $\pm$ 0.24\% | 83.87\% $\pm$ 0.28\% | 90.56\% $\pm$ 0.13\% | 84.07\% $\pm$ 0.38\% |
|naive ASGDm (0.9) | 88.15\% $\pm$ 1.70\% | 82.39\% $\pm$ 1.79\% | 73.23\% $\pm$ 31.61\%| 68.75\% $\pm$ 29.51\%|
|OrMo (0.9) | **90.95\% $\pm$ 0.27\%** | **88.03\% $\pm$ 0.28\%** | **91.01\% $\pm$ 0.10\%** | **87.76\% $\pm$ 0.57\%** |
**2.3: Finally, we can see OrMo as a method to penalize stale gradients $ \cdots $**
The update rule in OrMo is motivated by that in SSGDm. The key insight of the update rule of the momentum and parameter in OrMo lies in tracking the sequences $\hat{\bf w}\_t$ and $\hat{\bf u}\_t$ defined in Subsection 3.3, which are updated in a manner similar to SSGDm. Hence, OrMo cannot be simply regarded as a method to penalize stale gradients received by the server. Due to limited space here, please refer to paragraphs 2-5 in our Author Rebuttal for the details.
**Response to Question 1:**
For simplicity, we omit some quantities (e.g., $L$ and $G$) in Theorem $1$ due to the $\mathcal{O}(\cdot)$ notation. When considering $L$ and $G$ and setting $\eta = \min \\{\frac{1}{2KL}, \frac{1}{\sigma \sqrt{TL}}, \frac{1}{{(KLG)}^{\frac{2}{3}} T^{\frac{1}{3}}}\\}$, the convergence rate in Theorem 1 can be expressed as:
$$\frac{1}{T}\sum\_{t=1}^T \mathbb{E} \\|\nabla F({\bf{w}}\_t) \\| ^2 \leq \mathcal{O} (\sqrt{\frac{L\sigma^2}{T}}+ (\frac{KLG}{T})^{\frac{2}{3}} + \frac{KL}{T}).$$
We will refine the presentation of Theorem $1$ using this convergence rate. Thanks a lot for pointing out this.
**Response to Question 2:**
Both OrMo and SSGDm are implemented based on the Parameter Server framework. The implementation details of OrMo are outlined in Algorithm 2. For SSGDm, the only modification is to replace the asynchronous communication scheduler in line $15$ of Algorithm 2 with a synchronous communication scheduler (i.e., only when $\mathcal{C}=[K]$, the server sends the parameter ${\bf w}\_{t+1}$ and its iteration index $t+1$ to all workers in $\mathcal{C}$ and sets $\mathcal{C} = \emptyset$), as described in Remark 4. This ensures a fair comparison between OrMo and SSGDm.
We sincerely thank the reviewers for their valuable time and support of our work. Meanwhile, we would greatly appreciate it if the reviewer could re-evaluate our work in light of our response.
[A] Mitliagkas et al., Asynchrony begets Momentum, with an Application to Deep Learning. arXiv:1605.09774v2, 2016.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their rebuttals, and their thorough discussion of the points I raised. I hope the authors will include these additional experimental results, the more "standard" presentation of convergence results, as well as the discussions (especially the one about the difference with penalty function methods which I found particularly instructive) in their final version of the paper. Given that my concerns were answered, I raise my score.
---
Reply to Comment 1.1.1:
Comment: We greatly appreciate the insightful comments, which help us rethink our work. The additional results, the more "standard" presentation of convergence results, and the discussions will be included in the final version of the paper, as we promised. Thank you deeply for the support of our work. | Rebuttal 1:
Rebuttal: We sincerely thank all the reviewers for the comments concerning our manuscript entitled "Ordered Momentum for Asynchronous SGD". These comments are valuable and very helpful. We have read through the comments carefully and responded to the comments point by point. Based on the comments of all the reviewers, we provide additional clarification on our proposed method OrMo.
The update rule in OrMo is motivated by that in SSGDm. The key insight of the update rule of the momentum and parameter in OrMo lies in tracking the sequences $\hat{\bf w}\_t$ and $\hat{\bf u}\_t$ defined in Subsection 3.3, which are updated in a manner similar to SSGDm.
Moreover, OrMo cannot be simply regarded as a penalty function method, as described in Subsection 5.2 of [A] (or a staleness control method, as described in Subsection 3.2 of [B]). The key insight of both the penalty function methods and the staleness control methods is to reduce the contribution of gradients (or parameters in federated learning algorithms) with larger delays to the parameter update. Concretely, these two methods typically assign smaller weights to the gradients (or parameters in federated learning algorithms) with larger delays when updating the parameter. OrMo differs significantly from these methods.
In OrMo, the update rule of the momentum is formulated as
$${\bf u}\_{t+1} = {\bf u}\_{t+\frac{1}{2}} + \eta \beta^{b\_{t+1} - \lceil \frac{ite(k\_t, t)}{K}\rceil}{\bf g}\_{ite(k\_t, t)}^{k\_t},$$
where the gradient ${\bf g}\_{ite(k\_t, t)}^{k\_t}$ is incorporated into momentum by multiplying it with an exponential weight $\beta^{b\_{t+1} - \lceil \frac{ite(k\_t, t)}{K}\rceil}$, as shown in line $12$ of Algorithm $2$. The exponential weight $\beta^{b\_{t+1} - \lceil \frac{ite(k\_t, t)}{K}\rceil}$ indeed decreases as the delay of the gradient increases. However, when it comes to the update rule of the parameter, the situation changes. As shown in line $13$ of Algorithm $2$, the update rule of the parameter is $${\bf w}\_{t+1} = {\bf w}\_{t+\frac{1}{2}} - \eta \frac{1-\beta^{b\_{t+1} - \lceil \frac{ite(k\_t, t)}{K}\rceil + 1}}{1-\beta}{\bf g}\_{ite(k\_t, t)}^{k\_t},$$
where the gradient ${\bf g}\_{ite(k\_t, t)}^{k\_t}$ is used to update the parameter with the weight $\frac{1-\beta^{b\_{t+1} - \lceil \frac{ite(k\_t, t)}{K}\rceil + 1}}{1-\beta}$. For a given $t$, an increase of the delay $t-ite(k\_t, t)$ implies a decrease in $ite(k\_t, t)$, typically causing the weight $\frac{1-\beta^{b\_{t+1} - \lceil \frac{ite(k\_t, t)}{K}\rceil + 1}}{1-\beta}$ to increase. This is in direct contrast to the insight behind both the penalty function methods and the staleness control methods. The design of the parameter update rule with respect to the momentum value $\beta$ is crucial for formulating the gap between $\hat{\bf w}\_t$ and ${\bf w}\_t$ as outlined in Lemma $2$ in Subsection 3.3. An ablation study was also conducted to justify the parameter update rule in line $13$ of Algorithm $2$. We replaced the update rule in line $13$ of Algorithm 2 with a vanilla SGD step, ${\bf w}\_{t+1} = {\bf w}\_{t+\frac{1}{2}} - \eta {\bf g} \_{ite(k\_t, t)}^{k\_t}$ and name it OrMo (vanilla SGD step). The comparison between OrMo and OrMo (vanilla SGD step) is presented below.
|Test Accuracy (CIFAR10) | K=16 (hom.) | K=64 (hom.) | K=16 (het.) | K=64 (het.) |
| ---|---|-----|-----|----|
|OrMo (vanilla SGD step) | 90.32\% $\pm$ 0.45\% | 86.08\% $\pm$ 1.33\% | 90.23\% $\pm$ 0.32\% | 86.10\% $\pm$ 1.71\% |
|OrMo | **90.95\% $\pm$ 0.27\%** | **88.03\% $\pm$ 0.28\%** | **91.01\% $\pm$ 0.10\%** | **87.76\% $\pm$ 0.57\%** |
The relationship between OrMo and the penalty function methods should be considered orthogonal. An adaptive learning rate, multiplied by a "penalty function" related to the delay of the gradient, can be incorporated into OrMo. For example, replace the learning rate $\eta$ in Algorithm 2 with $\eta\_t = \eta \times penalty(\tau\_t)$, where $penalty$ is the penalty function related to the gradient delay. In this way, the update rule in lines $12$ and $13$ will be
$${\bf u}\_{t+1} = {\bf u}\_{t+\frac{1}{2}} + \eta\_t \beta^{b\_{t+1} - \lceil \frac{ite(k\_t, t)}{K}\rceil}{\bf g}\_{ite(k\_t, t)}^{k\_t},$$ $${\bf w}\_{t+1} = {\bf w}\_{t+\frac{1}{2}} - \eta\_t \frac{1-\beta^{b\_{t+1} - \lceil \frac{ite(k\_t, t)}{K}\rceil + 1}}{1-\beta}{\bf g}\_{ite(k\_t, t)}^{k\_t}.$$
Actually, incorporating adaptive learning rates into asynchronous methods is an important research direction. Some works, such as [C], provide convergence analysis for adaptive learning rates, but these analyses rely on demanding assumptions, such as bounded delays. Some works, such as [D] and [E], offer rigorous theoretical guarantees but lack sufficient empirical verification. Designing adaptive learning rates for asynchronous methods that provide both strong theoretical guarantees and promising empirical results remains an open challenge. Our focus in this paper is on incorporating momentum into ASGD, which is orthogonal to the adaptive learning rate schemes. We plan to explore integrating OrMo with an adaptive learning rate in future work.
We hope that we have addressed the reviewer's concerns, and we are always willing to answer any further questions. Meanwhile, we would greatly appreciate it if the reviewer could re-evaluate our work in light of our response.
[A] Xie et al., Asynchronous Federated Optimization. OPT 2020.
[B] Wu et al., HiFlash: Communication-Efficient Hierarchical Federated Learning with Adaptive Staleness Control and Heterogeneity-aware Client-Edge Association. IEEE TPDS 2023.
[C] Barkai et al., Gap Aware Mitigation of Gradient Staleness. ICLR 2020.
[D] Koloskova et al., Sharper Convergence Guarantees for Asynchronous SGD for Distributed and Federated Learning. NeurIPS 2022.
[E] Mishchenko et al., Asynchronous SGD Beats Minibatch SGD Under Arbitrary Delays. NeurIPS 2022. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Neural Cover Selection for Image Steganography | Accept (poster) | Summary: This paper presents a steganographic cover optimization framework that can be used to enhance existing steganographic methods. The authors use a pre-trained DDIM to reconstruct the cover image, optimizing the latent in the process and thus reducing the message extraction error. Meanwhile, the authors deeply analyze the working principle of steganographic encoder and verify it experimentally and theoretically. The experimental results demonstrate that the method proposed in this paper reduces the message extraction error rate while improving the quality of stego images.
Strengths: (1) From the perspective of steganographic cover optimization, the authors propose to use DDIM to adjust the cover image, which not only reduces the message extraction error rate, but also improves the quality of the stego image. Different from previous cover selection methods, the cover optimization method proposed in this paper has stronger interpretability.
(2) The authors deeply analyze the mechanism of steganographic encoder and conclude that "the encoder prefers to encode messages at pixel positions with smaller variance", which is verified by "waterfilling problem for Gaussian channels".
Weaknesses: (1) The method framework description is not detailed enough. This paper implements the training of the whole steganography method by optimizing the message extraction loss. However, the framework in the paper contains two models: the pre-trained DDIM and the pre-trained LISO. Readers will naturally have questions: which of these two models is fixed, and which is trainable? Based on contextual reading, I presume that the DDIM model parameters are updatable, while the LISO model is fixed. The authors should elaborate on these settings in Section 3.1.
(2) The experiment on steganalysis is not comprehensive enough. In Section 5.3, the authors evaluate the performance of the proposed steganography method to resist steganalysis. However, only the experiment of resisting the steganalysis network XuNet is designed, and the experiment of resisting the more advanced steganalysis network SRNet should be added to make the experimental results more convincing. Also, Section 5.3 lacks a detailed description of the experimental setup.
Technical Quality: 3
Clarity: 2
Questions for Authors: (1) Please unify the term robustness: resistance to steganalysis should not be called robustness. In the field of steganography, robustness usually refers to the ability to resist channel perturbations, such as JPEG compression or Gaussian noise, whereas the ability to resist steganalysis is often called security. For the definition of robustness, you can refer to the following articles:
1. Zeng, K., Chen, K., Zhang, W., Wang, Y., & Yu, N. (2023). Robust steganography for high quality images. IEEE Transactions on Circuits and Systems for Video Technology, 33(9), 4893-4906.
2. Zeng, K., Chen, K., Zhang, J., Zhang, W., & Yu, N. (2024). Towards Secure and Robust Steganography for Black-box Generated Images. IEEE Transactions on Information Forensics and Security.
(2) In Section 5.3, the settings of steganalysis experiments should be given, e.g., how many samples are used for training and how many samples are used for testing. In addition, experiments for resisting the more advanced steganalyzer SRNet should be added.
(3) In Section 3.1, please detail how latent optimization is implemented: is the message extraction loss $||m-\hat{m}||$ used to update the pre-trained DDIM?
(4) The work in this paper inverts the cover image to make it more suitable for steganography. Since this process does not involve "selection", I think it is not appropriate to call it "cover selection"; consider modifying it to "cover generation" or "cover reconstruction".
(5) In Section 5.3, the performance index for evaluating resistance to XuNet is "detection rate", and the detailed definition of "detection rate" should be given here. Meanwhile, to evaluate the performance of resisting steganalysis, we usually use the index of "error rate", which is defined as error rate = N_false/N_test, where N_false is the number of classification errors and N_test is the total number of samples. The closer the error rate is to 50%, the better the performance is.
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors have addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's recognition of our contributions. Our work advances steganographic cover optimization by using DDIM to adjust the cover image, reducing extraction errors and improving stego image quality. Unlike previous methods, our approach offers stronger interpretability. We show the encoder prefers low-variance pixels for message encoding, verified by the waterfilling algorithm for Gaussian channels.
We address the two weaknesses as follows.
1. **Clarification on fixed vs. trainable components**: The DDIM and LISO models are pre-trained and remain fixed throughout the process. The latent vector $x_T$ generated by the DDIM's forward pass is the only component optimized by minimizing the extraction loss $||m-\hat{m}||$ (lines 141-142). This method is both practical and efficient, as it avoids the need for costly fine-tuning of DDIM. We will clarify this point in the revised manuscript.
2. **Additional experiments using SRNet**: We conducted experiments using SRNet, and the results are presented in Table 1 of the supplementary material. Our observations indicate that the images generated by our framework effectively resist steganalysis by SRNet. This is evidenced by the significant drop in detection rate when transitioning from scenario 1 to scenario 2. As a reminder, in scenario 2, we exploit the differentiability of the steganalyzer (SRNet) and incorporate an additional loss term to account for steganalysis. We will ensure this information is clearly presented in the manuscript.
Below, we provide a detailed, line-by-line response to each of the questions.
1. **Security and robustness terminology**: Indeed; it makes more sense to use "security" for resistance to steganalysis and "robustness" for resistance to channel perturbations. We will update the terminology in the manuscript accordingly.
2. **Steganalysis settings**: Regarding the settings of the steganalysis experiments, we adhered to the default values specified in the LISO paper. We will add detailed information about the number of samples used for training and testing in the revised manuscript.
3. **Latent optimization description**: Thank you for your suggestion, we will make this process clearer in the text. The DDIM is pre-trained and fixed during the entire process, as is the LISO model. The latent vector $x_T$ produced by the DDIM’s forward pass is the only entity being optimized by minimizing the extraction loss $||m-\hat{m}||$ (lines 141-142). This approach is practical and efficient, as it avoids the costly fine-tuning of the DDIM.
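A minimal, dependency-free sketch of this latent-only optimization (our own illustration, not the authors' implementation: `toy_loss` is a hypothetical stand-in for the extraction loss $||m-\hat{m}||$ computed through the frozen DDIM + LISO pipeline, and finite differences replace autodiff):

```python
def optimize_latent(x, loss_fn, steps=200, lr=0.1, eps=1e-5):
    """Gradient descent on the latent x only; the models wrapped inside
    loss_fn stay frozen throughout. Finite-difference gradients keep the
    sketch self-contained (a real pipeline would use autodiff)."""
    x = list(x)
    for _ in range(steps):
        base = loss_fn(x)
        grad = []
        for i in range(len(x)):
            x[i] += eps
            grad.append((loss_fn(x) - base) / eps)
            x[i] -= eps
        x = [xi - lr * gi for xi, gi in zip(x, grad)]
    return x

# Hypothetical frozen "pipeline" whose extraction loss happens to be
# minimized at latent (1, -2); only x is updated, never the models.
toy_loss = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
x_opt = optimize_latent([0.0, 0.0], toy_loss)
```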
4. **Reconsidering terminology, replacing “Cover Selection”**: Thanks for the suggestion. We are considering several options such as "cover reconstruction," "cover fine-tuning," or "cover re-generation" to better describe our process.
5. **Clarification on detection and error rates in steganalysis**: Thank you for your feedback. We define the detection rate as detection rate=N_true/N_test. We will include this definition in the revised manuscript. Our use of this definition follows the steganographic framework presented in LISO [1]. We will also clarify its connection to the "error rate" as you described, noting that the closer the error rate is to 50%, the better the performance in resisting steganalysis. We will ensure these definitions and their implications are clearly explained.
[1] Xiangyu Chen, Varsha Kishore, and Kilian Q Weinberger. “Learning iterative neural optimizers for image steganography.” In International Conference on Learning Representations, 2023.
---
Rebuttal Comment 1.1:
Title: Comment on Authors' Response
Comment: The authors have addressed most of my concerns, so I will raise my score.
---
Rebuttal 2:
Comment: Thank you for engaging in our discussion. We genuinely appreciate your constructive feedback. | Summary: This paper introduces an innovative cover selection framework that optimizes within the latent space of pretrained generative models to identify the most suitable cover images, distinguishing it from traditional exhaustive search methods. This approach offers significant advantages in both message recovery and image quality. Furthermore, the paper presents an information-theoretic analysis of the generated cover images, revealing that message hiding predominantly occurs in low-variance pixels, consistent with the waterfilling algorithm principles in parallel Gaussian channels. Extensive experiments validate the superior performance of this framework.
Strengths: 1. This paper describes the limitations of current cover selection methods and introduces a novel, optimization-driven framework that combines pretrained generative models with steganographic encoder-decoder pairs.
2. The results demonstrate that the error rates of the optimized images are an order of magnitude lower than those of the original images under specific conditions. Impressively, this optimization not only reduces error rates but also enhances the overall image quality, as evidenced by established visual quality metrics.
3. This paper investigates the workings of the neural encoder and finds it hides messages within low variance pixels, akin to the water-filling algorithm in parallel Gaussian channels. In addition, the authors observe that this selection framework increases these low variance spots, thus improving message concealment.
Weaknesses: We observed that the quality of the images used in the experimental section of the paper was generally not high, whether original or algorithmically generated. This may be due to the limitations of the dataset itself. Therefore, future experiments with high-quality images are necessary to further verify the effectiveness of this framework.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. You mentioned a JPEG layer in section 5.2, and I suggest that you add a brief description of this JPEG layer in your paper.
2. In Figure 11 of Appendix F, I observe that the subgraphs in the third row and third column are fuzzier than the other subgraphs in the third row. What do you think is the reason for this?
3. You selected the BigGAN network and DDIM network (Kim et al. 2022) in the experimental part of the paper. What are the advantages of these two networks?
4. What is the role of weight initialization in Figure 2?
5. In the introduction, I did not find this article in the references (Evsutin et al. 2018); please check your references carefully.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The author has proposed an improved algorithm to address the limitations of current steganography methods, which has partially alleviated these constraints. It is recommended that in future work, the author focuses on enhancing image quality without compromising the algorithm's security performance.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's recognition of our work, including (a) identifying limitations of existing cover selection methods, (b) integrating pretrained generative models with steganographic encoder-decoder pairs, and (c) demonstrating that our neural encoder hides messages within low variance pixels, similar to the water-filling algorithm. Our approach yields lower error rates and enhanced image quality, confirmed by visual metrics, and increases low variance pixels, further improving message concealment.
We would like to provide pointers to the experiments and key results in the manuscript and appendix that we strongly believe address the main weaknesses and limitations identified in the review.
1. **Experiments with high-quality images:** We agree that this is an important aspect and would like to emphasize that we conducted extensive experiments using high-quality images from the CelebA-HQ dataset at 1024x1024 resolution and Animal Faces-HQ (AFHQ) dataset at 512x512 resolution. The qualitative results can be found in Fig. 10 of the appendix, with additional results shown in Fig. 1 and Fig. 2 of the supplementary material. These figures illustrate the high quality of the CelebA-HQ and AFHQ images. The effectiveness of our framework is further validated by the metrics presented in Table 2 of the main text, including Error Rate, BRISQUE, PSNR, and SSIM. These results indicate that we achieve similar improvements compared to the baselines for both high-quality and low-quality images.
2. **Enhancing image quality without compromising the algorithm's security performance**: We would like to clarify that the algorithm's security performance is not compromised. As shown in Table 4 of the manuscript, XuNet's detection rate using the AFHQ dataset (high-quality 512x512 images) remains similar, if not lower, before and after applying our framework, indicating that our approach maintains security performance. Additionally, the high quality of our AFHQ images is validated in Table 2, measured by metrics such as BRISQUE, SSIM, and PSNR.
Perhaps these setups were not clear, and we will clarify them in the revised manuscript. We kindly ask the reviewer to consider these results in their review.
We provide answers to the questions and suggestions below.
1. **JPEG layer description:** Thanks for the suggestion. We agree that a brief description of the JPEG layer in Section 5.2 will enhance the clarity of our paper. We will include this in the revised manuscript.
2. **Fig. 11 third row third column fuzziness:** The subgraph in the third row and third column does appear fuzzier than the others. This issue is an outlier and has not been observed in CelebA-HQ, AFHQ, or other ImageNet classes. It arises due to the LISO framework, on which our method is built. LISO is the state-of-the-art steganographic encoder-decoder, which is why we chose to build our method on it. However, training LISO on ImageNet is challenging because of the vast number of classes. To address this, we fine-tuned LISO using additional owl images obtained through data augmentation techniques such as rotation and flipping, and achieved a slightly better-quality image, as shown in Fig. 3 of the supplementary material.
We would also like to emphasize that the main focus of this work is to design a cover selection algorithm, not the steganographic encoder and decoder. While we have made improvements to the owl image, further investigation into enhancing LISO’s performance on ImageNet would be necessary for even better results, which was beyond the scope of the main paper.
3. **Advantage of BigGAN network and DDIM network (Kim et al. 2022):** BigGAN and the diffusion model from Kim et al. (2022) are among the state-of-the-art generative models, known for their high-quality image generation capabilities. Both models are open-sourced, making them accessible for replication. Additionally, the papers introducing these models are highly cited, indicating their significant impact and validation within the research community. We would also like to note that our approach is applicable to other generative models as well.
4. **Clarification on the role of weight initialization in Fig. 2:** As described in Section 3.1 of the manuscript, our process consists of two steps. In step 1, the initial cover image goes through the forward diffusion process to get the latent $x_T$. This serves as the initialization for the next step. In step 2, we optimize the already initialized elements of $x_T$ such that when it goes through the backward diffusion process, the output image minimizes the message reconstruction loss. In other words, we treat $x_T$ as a learnable weight matrix initialized from step 1, and its weights are updated via step 2 using gradient descent. We hope this addresses your question, and we will add more details to this section to make it clearer.
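The two steps described above can be mimicked in a toy numerical sketch, where a frozen random linear map stands in for the fixed DDIM-plus-decoder pipeline (our illustration, not the authors' implementation; all dimensions and names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen stand-in for the fixed generative/decoding pipeline: in the
# paper this is the pretrained DDIM backward pass composed with the
# LISO decoder, neither of which is trained. Here, a fixed linear map.
d_latent, d_msg = 32, 8
W = rng.normal(size=(d_msg, d_latent))
m = rng.integers(0, 2, size=d_msg).astype(float)   # secret message bits

# Step 1: initialize the latent x_T (forward-pass result in the paper).
x = rng.normal(size=d_latent)

# Step 2: optimize only x by gradient descent on the extraction loss
# ||m - m_hat||^2; the map W stays frozen throughout.
lr = 1.0 / (2.0 * np.linalg.norm(W, ord=2) ** 2)   # safe step size
for _ in range(2000):
    residual = W @ x - m
    x -= lr * (2.0 * W.T @ residual)

final_loss = float(np.sum((W @ x - m) ** 2))
```

Only `x` changes during the loop, mirroring how the latent $x_T$ is the sole learnable quantity while the pretrained models stay fixed.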
5. **Reference pointer (Evsutin et al. 2018):** We have carefully checked our references, and the article by Evsutin et al. (2018) is included (lines 350-352). If there are any specific concerns or if you need further clarification, please let us know. | Summary: This paper presents a novel framework for cover selection in image steganography to enhance the message recovery performance, which optimizes the latent code. The effectiveness of this approach is validated through intensive experiments. Additionally, the paper empirically analyzes intriguing behaviors occurring during the message hiding process inside the message encoder.
Strengths: * Introducing a novel approach for cover selection in image steganography aimed at minimizing error rates.
* This straightforward approach optimizes the latent code by minimizing message loss, yet yields promising results.
* The proposed method is thoroughly validated, demonstrating strong performance across various metrics including message recovery, robustness, and security (steganalysis).
Weaknesses: * The optimization of latent variables reduces recovery errors but alters the original image content (Fig. 1), making it impractical for real-world use. Additionally, as depicted in Figure 10 (Appendix E), steganographic images may exhibit unnatural visual characteristics, potentially leading to easy detection.
* While the inclusion of an additional loss term for steganalysis is pivotal for setting 2, the author fails to address this point. Moreover, the results in setting 1 are puzzling: why does detection increase with 1bpp but not with 2bpp, despite the increased payload?
Technical Quality: 3
Clarity: 3
Questions for Authors: * Why do the authors not compare with other steganography frameworks such as StegaStamp [1], RoSteALS [2], UDH [3], or StegaStyleGAN [4]?
* Can the authors show more qualitative results comparing the original and encoded images? Is there any effect of the optimization on the original images?
* Can the authors explain why the percentage of identified high-message positions encoded in low-variance pixels affects the flexibility of data embedding?
[1] Tancik, Matthew, Ben Mildenhall, and Ren Ng. "Stegastamp: Invisible hyperlinks in physical photographs." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020. \
[2] Bui, Tu, et al. "Rosteals: Robust steganography using autoencoder latent space." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. \
[3] Zhang, Chaoning, et al. "Udh: Universal deep hiding for steganography, watermarking, and light field messaging." Advances in Neural Information Processing Systems 33 (2020): 10223-10234. \
[4] Su, Wenkang, Jiangqun Ni, and Yiyan Sun. "StegaStyleGAN: Towards Generic and Practical Generative Image Steganography." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 1. 2024.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors adequately addressed the limitations, and broader impact in this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We express our gratitude to the reviewer for their valuable feedback, and we will address each point in detail.
**Weaknesses**:
- **Image alteration:** We acknowledge the concern regarding the alteration of the original image content due to the optimization of latent variables. However, it is important to highlight that such modifications are not only common but also well-founded in the literature.
**a) Established practice in research:** Our approach of altering the cover image is in line with established practices in the field of cover selection. For example, studies such as [1] and [2] analyzed different cover images and selected the most suitable ones to improve the robustness and accuracy of message extraction. These works provide concrete evidence and justification for the alteration of cover images, demonstrating that such modifications are beneficial for enhancing performance in steganographic applications.
[1] Farzin Yaghmaee and Mansour Jamzad. Estimating watermarking capacity in gray scale images based on image complexity. EURASIP Journal on Advances in Signal Processing, 2010:1–9, 2010.
[2] Zichi Wang and Xinpeng Zhang. Secure cover selection for steganography. IEEE Access, 7:57857– 57867, 2019.
**b) Preservation of image semantics:** While our algorithm alters the cover image to minimize decoding error, it does not change its meaning or semantics. In semantic communications, these modifications are acceptable as the focus is on accurately transmitting essential information, not preserving the exact original image content.
- **Unnatural visual characteristics (Fig. 10), potentially leading to easy detection**: We want to clarify that the algorithm's detection accuracy remains intact. Table 4 of the manuscript demonstrates that XuNet's detection rate with the AFHQ dataset stays consistent or even decreases after applying our framework, showing that our method does not compromise security performance.
- **Loss term for steganalysis:** Our paper addresses the inclusion of an additional loss term for steganalysis, as outlined in Section 5.3 (lines 304-306). We incorporate the steganalyzer’s logit output into the loss function. Our results in Table 4 show that, when embedding 4 bits per pixel, detection accuracy drops from 97.35% to 8.6% after incorporating the loss, indicating improved resistance to steganalysis. We will clarify this further in the main text.
- **Unexpected decline in detection rate:** Indeed, it is puzzling that detection decreases despite the increased payload. This effect is not related to our introduced method but is inherent to the LISO framework on which our method is based. As shown in Table 5 of the LISO manuscript [1], they also observe this unexpected behavior where the detection rates do not increase consistently with higher payloads. This phenomenon is not fully understood.
We chose to use LISO because it is a state-of-the-art steganographic framework. However, we recognize that this reliance on LISO also means inheriting some of its unexplained behaviors. Future work could explore approaches combining LISO with other frameworks or develop novel techniques to address detection rate inconsistencies with different payloads. We will highlight this in our paper and acknowledge the need for further research to understand and improve the framework.
[1] Xiangyu Chen, Varsha Kishore, and Kilian Q Weinberger. “Learning iterative neural optimizers for image steganography.” In International Conference on Learning Representations, 2023.
**Questions:**
- **Comparisons to other schemes:** While related, the mentioned frameworks are designed for different settings than those we used. The approaches in [1, 2] involve networks that learn algorithms **robust to image perturbations** between the encoder and decoder, whereas our settings assume **no such perturbations**. In [3], the focus is on hiding secret **images** within cover images, unlike our method, which hides secret **binary messages**. Lastly, [4] deals with coverless steganography, mapping secret messages to latent vectors for a generative adversarial network, which then generates **random cover images**. In contrast, our settings involve embedding secret messages into **given cover images** using a dedicated encoder.
- **Additional qualitative results:** We have included additional qualitative results comparing the original and encoded images in Fig. 1 of the supplementary material.
- **Effect on optimized images:**
1. **Relationship with Image Complexity Metrics:** We would like to refer the reviewer to Fig. 12 of Appendix G, where we conducted extensive simulations to answer this important question. In our experiments on the AFHQ dataset, we analyzed the relationship between the decoding error and various image complexity metrics: entropy, edge density, compression ratio, and color diversity. Our results show a negative correlation between the decoding error and the first three metrics, and a positive correlation with color diversity. We conjecture that optimizing for decoding error is affecting these image complexity metrics. While these findings are encouraging, more research is needed to fully understand the effects of optimization on the original images.
2. **Increase in Low-Variance Pixels:** As discussed in Section 4.3 of the main text, the optimization process increases the number of low-variance pixels in the images from 81.6% to 92.4%. This is related to our discussion about the algorithm hiding data in low-variance spots. When the number of low-variance pixels increases, the encoder has more flexibility and options for data hiding, resulting in better performance.
We will include more details of this discussion in the manuscript to highlight the need for further investigation.
- **Clarification, encoding in low-variance pixels:** We would like to refer the reviewer to part (3) of the global response. We hope this description addresses the question.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' detailed response. The authors address my concerns about the framework well; however, they should include more discussion of several issues raised in the revised version. The authors' assumption of no perturbation in images may not be practical in secret communication; I suggest the authors consider this scenario and provide comparative results with several methods such as RoSteALS or StegaStamp. I would like to keep my original rating.
---
Rebuttal 2:
Title: Response to Reviewer kQa7's Comment
Comment: We thank the reviewer for the response.
We would like to clarify three key points:
1) The primary focus of our work is on developing a cover selection algorithm rather than a complete steganographic encoding-decoding framework. Our method is versatile and can be integrated with various steganographic frameworks, including RoSteALS and StegaStamp. The assumption of no perturbation aligns with the steganographic frameworks described in [1,2,3]. Among these three, we specifically selected the LISO framework [3] due to its status as the state-of-the-art. This line of work operates without assuming any perturbations, which is why perturbation results are not presented in our work.
[1] Kevin Alex Zhang, Alfredo Cuesta-Infante, Lei Xu, and Kalyan Veeramachaneni. “Steganogan: High capacity image steganography with gans.” arXiv preprint arXiv:1901.03892, 2019.
[2] Varsha Kishore, Xiangyu Chen, Yan Wang, Boyi Li, and Kilian Q Weinberger. “Fixed neural network steganography: Train the images, not the network.” In International Conference on Learning Representations, 2021.
[3] Xiangyu Chen, Varsha Kishore, and Kilian Q Weinberger. “Learning iterative neural optimizers for image steganography.” In International Conference on Learning Representations, 2023.
2) Although we do not provide perturbation results, we present results related to JPEG distortion (Section 5.2), which is highly relevant to real-world applications and offers a meaningful representation of our method’s effectiveness.
3) Regarding the request to “provide comparative results with several methods such as RoSteALS or StegaStamp”, we are uncertain about the exact nature of the comparative results the reviewer is seeking. As mentioned earlier, our focus is on designing a cover selection algorithm for a given steganographic encoder-decoder pair. If RoSteALS outperforms StegaStamp, then our cover selection framework applied to RoSteALS would also outperform its application to StegaStamp. If the reviewer is suggesting a comparison of our framework when applied to LISO versus RoSteALS or StegaStamp, it would not be a fair comparison, as LISO is trained under different conditions than the other methods.
---
Rebuttal Comment 2.1:
Comment: Thank you for your answer. I'm aware that RoSteALS, StegaStamp, and LISO are trained under different settings. However, given that response, there are no quantitative results showing that your method is robust with other steganographic frameworks, including RoSteALS and StegaStamp. Moreover, in steganography we do care about distortion scenarios; hence, even though you report results related to JPEG compression, I think it would not be enough. I suggest the authors address these concerns in future work. Again, I would like to keep my rating.
Thanks for your work! | null | null | Rebuttal 1:
Rebuttal: 1. We thank all the reviewers for acknowledging our contributions and providing constructive feedback. Our ***key contributions*** include (a) identifying limitations of existing cover selection methods, (b) integrating pretrained generative models with steganographic encoder-decoder pairs for cover fine-tuning to minimize the message extraction loss, and (c) conducting theoretical studies to shed light on the inner workings of learning-driven steganography methods, drawing analogies to the water-filling algorithm in signal processing.
2. We conducted ***additional experiments*** to address the reviewers' concerns, including: (1) steganalysis results on SRNet (requested by Reviewer DaKE), (2) additional qualitative comparisons between the original and encoded images (requested by Reviewer kQa7), and (3) an experiment to enhance image quality, addressing the concern raised by Reviewer XM9B. The results of these experiments are included in the ***attached PDF***.
3. We also clarify an important question raised by Reviewer kQa7 regarding our analysis of the inner workings of the steganographic encoder and its analogy to the water-filling algorithm:
- ***Analogy:*** We view the process of hiding secret messages as sending information through N parallel communication channels, where N represents the number of pixels in an image. In this analogy:
- Each pixel functions as an individual communication link.
- The secret message to be hidden and eventually recovered acts as the signal.
- The cover image, which hosts the hidden message, introduces noise that is unknown to the decoder.
- ***Advantage of low variance pixels in hiding messages:*** The analogy above immediately helps understand the role of pixel variances within a cover image in hiding messages. Intuitively, low-variance pixels within a cover image indicate low noise, reducing uncertainty for the decoder and enhancing message recovery.
- ***Key Observation. High-message positions are highly aligned with low-variance pixels:*** In Section 4.1, we make an observation that the high-message positions are highly aligned with low-variance pixels, meaning the steganographic encoder actively utilizes the low-variance pixel positions for hiding the messages, which is a highly desired and natural behavior. Our findings show that **81.6%** of high-message regions align with low-variance pixels, leveraging lower noise to enhance recovery accuracy. **We highlight that we are the first to make this observation, despite there being several relevant works on learning-driven steganography; none of these prior studies conducted an interpretation analysis of the encoder to uncover this behavior, as also acknowledged by Reviewers XM9B and DaKE.**
- ***Theoretical Reasoning behind our Key Observation:*** Interestingly, we find that the learned message embedding behavior closely aligns with the waterfilling strategy, the theoretically optimal embedding strategy for parallel Additive Gaussian Noise channels. This strategy involves embedding more messages in lower-variance pixel positions, which increases message recovery accuracy. Surprisingly, steganography methods tend to adopt this strategy implicitly, without explicit training to do so, as discussed in Section 4.2. To the best of our knowledge, this is the **first instance where theoretical studies are used to shed light on the inner workings of learning-driven steganography methods.**
- ***How is this discussion relevant to our cover selection/fine-tuning problem?*** In Section 4.3, we observe a significant increase in the number of low-variance pixels within a cover image after applying our framework. Our findings demonstrate that **92.4%** of high-message regions now align with low-variance pixels, compared to **81.6%** before the optimization of the cover image. This indicates a significant enhancement in the encoder's performance, as it efficiently leverages the low-variance pixel positions to conceal the messages.
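For readers unfamiliar with it, the water-filling allocation referenced above can be sketched in a few lines (a generic textbook implementation with illustrative noise values; not the authors' code):

```python
import numpy as np

def waterfill(noise, total_power, tol=1e-9):
    """Water-filling over parallel Gaussian channels: allocate
    p_i = max(0, nu - noise_i), with the water level nu found by
    bisection so the allocations sum to total_power."""
    noise = np.asarray(noise, dtype=float)
    lo, hi = noise.min(), noise.max() + total_power
    while hi - lo > tol:
        nu = 0.5 * (lo + hi)
        if np.maximum(0.0, nu - noise).sum() > total_power:
            hi = nu   # level too high: allocations overshoot the budget
        else:
            lo = nu
    return np.maximum(0.0, 0.5 * (lo + hi) - noise)

# Low-noise channels (low-variance pixels) receive more of the budget.
p = waterfill([0.1, 0.5, 1.0, 2.0], total_power=1.0)
```

With noise variances $[0.1, 0.5, 1.0, 2.0]$ and a unit budget, the water level settles at $0.8$, so only the two quietest channels receive power ($0.7$ and $0.3$), mirroring how the encoder concentrates message content in low-variance pixels.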
Pdf: /pdf/48bf1654f73e2b531da967c8ade08e666c89f30c.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Non-asymptotic Analysis of Biased Adaptive Stochastic Approximation | Accept (poster) | Summary: This paper studied the convergence rate of general adaptive stochastic approximation with biased gradients. This is an important field since only biased gradients are accessible in many practical machine learning problems. The authors established non-asymptotic bounds based on various assumptions, some of which are quite strong.
Strengths: The authors provided a convergence analysis of general biased adaptive optimization methods, and applied their results to popular adaptive algorithms including Adagrad, RMSProp and AMSGrad. They also mentioned various practical machine learning problems suffering from biased gradients. Extensive numerical experiments were conducted to verify the theoretical rates.
Weaknesses: 1. The technique is not novel compared with existing works on biased SGD, e.g. [1]. The authors directly impose an upper and lower bound on the preconditioning matrix in H3, in which case the analysis is not very different from that of SGD. A better way is to establish the boundedness of the preconditioning matrix along the trajectory as in, e.g., [2], instead of taking it as a prior assumption.
2. The authors claimed that they analyzed Adam throughout the paper, but in fact they only analyzed AMSGRAD. This is misleading since they are two totally different algorithms with distinct practical performance. I suggest the authors correct this point.
### Typos:
line 230: $\mathcal{O}(n^{-1/4})$->$\mathcal{O}(\log n/n^{1/2})$
[1] Liu, Yin, and Sam Davanloo Tajbakhsh. "Adaptive stochastic optimization algorithms for problems with biased oracles." arXiv preprint arXiv:2306.07810 (2023).
[2] Li, Haochuan, Alexander Rakhlin, and Ali Jadbabaie. "Convergence of adam under relaxed assumptions." Advances in Neural Information Processing Systems 36 (2024).
Technical Quality: 2
Clarity: 3
Questions for Authors: In Theorem 4.2, what does $\delta_j$ stand for? Does it correspond to $\delta$ in eq (4)? What is the dependence of $\delta$ in the final convergence rate of Adagrad and other algorithms?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The limitations are pointed out in the weakness part.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments.
It is true that the stochastic optimization literature is very rich and is the focus of a great deal of research activity. For instance, the work proposed in [81] focuses on the theoretical analysis of stochastic optimization problems with biased gradients.
First of all, the proof for adaptive biased SA differs from that of biased SGD in how we control the preconditioning matrix. The adaptivity mentioned in [81] refers to how the authors update the gradient, not to the adaptive steps used in Adagrad, RMSProp, and Adam. The title was even changed since the first submission by removing "Adaptive".
Furthermore, our results for Adagrad and RMSProp in Section 4.3, as well as the extension to Adam in Section 4.4, provide new theoretical bounds for adaptive optimization algorithms widely used in machine learning. Their proofs are significantly different from the proof sketch for biased SGD.
In Theorems 4.1 and 4.2, since the exact form of $A_n$ is unknown (as we aim to encompass all adaptive algorithms), we must assume certain properties about the preconditioning matrix, specifically the bound of its eigenvalues (H4). However, when applied to algorithms such as Adagrad, RMSProp, and Adam (AMSGRAD), we have demonstrated that this assumption holds.
In [82], they only consider the convergence results of Adam in an unbiased setting, which means they do not need to make assumptions about the preconditioning matrix. The assumption on the preconditioning matrix in our case is only for the general case. The convergence results for AMSGRAD with biased gradients do not take the boundedness of the preconditioning matrix into account.
We believe that the originality of the contribution of the paper is that we cover adaptive step sizes applied to the biased gradient setting with relaxed assumptions (for instance expected smoothness) as supported for instance by Reviewer aKkE and Reviewer 3iVV.
$\textbf{The authors claimed that they analyzed Adam throughout the paper, ... I suggest the authors correct this point.}$
- Since AMSGRAD is a minor modification of Adam and is not as well known as Adam, we believed it makes more sense to refer to Adam. Even in packages like PyTorch and TensorFlow, Adam has an option "AMSGRAD" to use this algorithm. The difference between these two algorithms is already mentioned in lines 221-225 of Section 4.4 of the paper. Furthermore, [64] highlights the convergence difficulties of Adam while proposing AMSGRAD.
However, we have revised the paper to address this point and clarify the distinction between the two algorithms.
$\textbf{In Theorem 4.2, what does $\delta_{j}$ stand for? ...}$
- In Theorem 4.2, $\delta_{j}$ is defined as $\delta_{j} = L\gamma_{j}^{2}\beta_{j}^{2} /2$. This notation is introduced to simplify the terms in the convergence rate analysis, and $\delta$ represents the regularization parameter.
Regarding the final convergence rate, there is a term $\log \left( 1 + \frac{nM^2}{\delta}\right) / \sqrt{n}$, which is asymptotically similar to $\log n / \sqrt{n}$.
$\textbf{The authors established non-asymptotic bounds based on various assumptions, some of which are quite strong.}$
- As the other assumptions are classical in the literature, we set the focus more on H3 and H4.
As discussed after Assumption H3, it is shown to be a minimal requirement for achieving the convergence rate in adaptive SA (see also [20]).
Assumption H4 [10, 32] relates to the adaptive algorithm. We have shown that it can be verified in Adagrad, RMSprop, and Adam. For other algorithms where verification is difficult, we can use the truncation method described after this assumption.
The truncation method involves replacing the random matrices $A_n$ with $\tilde{A}_n = \min\{\|A_n\|, \beta_{n+1}\}\, A_n / \|A_n\|$. It is also stated that truncation is only used with very low probability, which means that the estimators $A_n$ are minimally affected by these truncations [32]. Since we choose $\beta_{n+1}$, we have control over the convergence rate.
We provided additional comments on the assumptions for instance to Reviewer 1qy9.
$\textbf{line 230: $\mathcal{O}\left(n^{-1/4}\right) \rightarrow \mathcal{O}\left(\log n / \sqrt{n} \right)$}$
- If the additive bias $b_n$ in the convergence rate is of order $\mathcal{O}\left(\log n / \sqrt{n} \right)$, we achieve a convergence rate of $\mathcal{O}\left(\log n / \sqrt{n} \right)$. However, in line 230, we want to state that if the bias of the gradient estimator is of order $\mathcal{O}\left(n^{-1/4}\right)$ (since $r_{n+1}$ represents an additive bias term, generally of the order of the square of the bias of the gradient estimator, as mentioned after H3), then we achieve a convergence rate of $\mathcal{O}\left(\log n / \sqrt{n} \right)$.
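This regime can be illustrated with a toy diagonal-Adagrad run on a simple quadratic, where an additive bias shrinks as $\mathcal{O}(n^{-1/4})$ (our hypothetical sketch; the constants and the problem are arbitrary and not from the paper's experiments):

```python
import numpy as np

rng = np.random.default_rng(0)

def grad(x):
    return x  # gradient of f(x) = ||x||^2 / 2, minimized at x = 0

x = np.ones(5)
G = np.zeros(5)               # running sum of squared gradients
lr, eps = 0.5, 1e-8

for n in range(1, 5001):
    bias = 0.1 * n ** -0.25   # additive bias decaying as O(n^{-1/4})
    noise = 0.01 * rng.normal(size=5)
    g = grad(x) + bias + noise            # biased, noisy gradient oracle
    G += g ** 2
    x -= lr * g / (np.sqrt(G) + eps)      # Adagrad diagonal preconditioning

final_grad_norm = float(np.linalg.norm(grad(x)))
```

The iterate settles in a neighborhood of the optimum whose radius is governed by the residual bias, consistent with the rate discussion above.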
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply. I would like to clarify that regarding w1, I notice that in the analysis of Adagrad, RMSProp, and AMSGrad, Assumption H5 is used. This assumption, along with the existence of $\delta$, directly imposes a global upper and lower bound on the preconditioning matrix. Hence I don't think the analysis will differ much from biased SGD. This is different from [2], which establishes boundedness along the trajectory.
---
Reply to Comment 1.1.1:
Comment: Thank you for your comments. Regarding Adagrad and RMSProp, although the analysis may appear similar to that of biased SGD under Assumption H3(i), verifying H3(i) requires managing the form $A_n$, which differs from the approach used in biased SGD, which is detailed in line 573. Additionally, the analysis of AMSGrad differs significantly from both the general case and the specific applications to Adagrad and RMSProp.
The proof for AMSGrad is derived from scratch and does not rely on assumptions about the boundedness of eigenvalues. However, we do assume that gradients are bounded (as per Assumption H5). Even with this assumption, the proof for AMSGrad is different from the analysis of biased SGD, as detailed in Appendix A.6.
While Assumption H5 is common in adaptive algorithm literature [8, 64, 70, 71], recent works have provided theoretical guarantees for unbiased gradients without relying on the boundedness of gradients (e.g., [82]).
Our work focuses on adaptive algorithms with biased gradients, including scenarios with decreasing bias. We have shown that in various applications, such as Stochastic Bilevel Optimization and Conditional Stochastic Optimization, all our assumptions, including H3 and H5, can be verified.
Most results that do not rely on Assumption H5 are provided with high probability. Furthermore, the obtained bounds generally have a rate of $\mathcal{O}\left(\text{poly}(\log n) / \sqrt{n} \right)$, with a loss of a logarithmic factor in the polynomial.
However, even some recent works on SGD and adaptive methods continue to rely on the assumption of bounded gradients (e.g., see [1, 2]).
One approach to avoid assumption H5 is to use the truncation method. To control the smallest eigenvalue in Adagrad, we can consider a truncation that leads to
$$
A_{n} = \left( \operatorname{diag} \left( \beta_{n+1}^{-2}I_{d} + \frac{1}{n+1}\sum_{k=0}^{n}H_{\theta_{k}} \left( X_{k+1} \right) H_{\theta_{k}}\left( X_{k+1} \right)^{T} \mathbf{1}_{\left\{ \left\| H_{\theta_{k}} \left( X_{k+1} \right) \right\|^{2} \leq v_{k+1} \right\}} \right) \right)^{-1/2},
$$
so that
$$
\lambda_{min} \left( A_{n} \right) \geq \left(\beta_{1}^{-1} + \frac{1}{\sqrt{n+1}}\sqrt{\sum_{k=0}^{n}v_{k+1}} \right)^{-1} .
$$
Then, choosing $v_{k} = C_{v}k^{v}$ with $v \geq 0$ leads to
$$
\lambda_{\min} \left( A_{n} \right) \geq \frac{1}{\beta_{1}^{-1} + \sqrt{C_{v}} (n+1)^{v/2}} =: \lambda_{n+1}.
$$
The value of $v$ can be chosen arbitrarily to achieve the desired convergence rate. However, the larger the chosen $v$, the less truncation is applied, which may result in a slower convergence rate. It is also possible to adjust other parameters, such as $\gamma_{n+1}$ and $\beta_{n+1}$.
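The eigenvalue lower bound above can be checked numerically with a toy diagonal-Adagrad sketch (the dimension, the thresholds $v_k$, and the Gaussian stand-ins for the gradient estimates are illustrative assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 5, 200
beta = 2.0
C_v, v = 1.0, 0.5
v_seq = C_v * np.arange(1, n + 2) ** v    # thresholds v_{k+1} for k = 0..n

acc = np.zeros(d)
for k in range(n + 1):
    g = rng.standard_normal(d)            # stand-in for H_{theta_k}(X_{k+1})
    if g @ g <= v_seq[k]:                 # indicator ||H||^2 <= v_{k+1}
        acc += g * g                      # diagonal of H H^T
A_diag = (beta ** -2 + acc / (n + 1)) ** -0.5   # diagonal of A_n

lam_min = A_diag.min()
bound = 1.0 / (1.0 / beta + np.sqrt(v_seq.sum() / (n + 1)))
print(lam_min >= bound)   # True: truncation keeps lambda_min(A_n) above the bound
```

The inequality follows because every accepted term contributes at most $v_{k+1}$ to each diagonal entry, and $\sqrt{a+b} \leq \sqrt{a} + \sqrt{b}$.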
While there are a few other works on the scalar version of Adagrad with biased gradients, their assumptions are stronger than ours.
Our work is a first step toward providing general convergence results for biased gradients and adaptive algorithms with any form of $A_n$, which applies to Adagrad and RMSProp and extends to AMSGrad under the assumption of bounded gradients.
Additionally, at the suggestion of Reviewer ioBT and Reviewer BcqM, we have provided convergence results for the Martingale and Markov Chain cases. Our work focuses on deriving convergence results for various adaptive algorithms based on commonly used assumptions in the literature, applicable to many different scenarios. We believe that our paper is a valuable contribution to the community, and that exploring convergence results for biased gradients without relying on Assumption H5 may require a separate paper.
We plan to discuss the boundedness of gradients along the trajectory and will address this in future work.
[1] Xidong Wu, Jianhui Sun, Zhengmian Hu, Junyi Li, Aidong Zhang, and Heng Huang. Federated conditional stochastic optimization. In Advances in Neural Information Processing Systems, volume 36, 2024.
[2] Xidong Wu, Feihu Huang, Zhengmian Hu, and Heng Huang. Faster adaptive federated learning. In Proceedings of the AAAI conference on artificial intelligence, volume 37, pages 10379–10387, 2023. | Summary: This paper aims to analyze SGD with biased gradients in a non-asymptotic manner, where the steps are also adaptive. In particular, under certain assumptions and conditions, it provides convergence guarantees and establishes convergence rates for a critical point of non-convex smooth functions for various adaptive methods.
Overall, I think the theoretical results of this paper are interesting and insightful.
Strengths: - This paper is well-organized and clearly written. In particular, it well demonstrates the motivation of the study in the introduction and accurately compares its contributions with previous related works. All the theorems and the corresponding set of assumptions are concretely presented. In addition, the structures of proofs in the Appendix are also clear and easy to follow.
- The proposed methods are general in the sense that they not only cover adaptive step sizes (e.g., Adam) but also apply to biased gradients, making this a comprehensive and interesting work in the direction of analyzing non-asymptotic convergence guarantees for variants of SGD.
- The numerical experiments for IWAE and BR-IWAE are comprehensive and well support the theoretical claims.
Weaknesses: One thing unclear to me is the role of parameters that are related to sampling noise (e.g. batch size) in the theorems. These parameters are supposed to be crucial since they are actually closely related to the training process and generalization. How do theorems in this paper characterize the effects of such parameters (other than some simple bounds from the assumptions)?
Technical Quality: 3
Clarity: 4
Questions for Authors: - Please see the weaknesses part.
- In addition, although I understand that the settings considered in this paper are novel, there are already a lot of works in closely related topics. Does there exist any technical difficulty that cannot be solved by techniques from previous works? If it does, what are the technical difficulties compared to previous works?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors have adequately addressed the limitations in the main part of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive and thoughtful review and the opportunity to clarify our contributions.
- In our framework, the gradient is computed using a single sample, as is common in many theoretical results. However, this can easily be extended to account for batch size. In such a case, instead of having $\sigma^2$ (variance noise), $\sigma^2$ would be scaled by the batch size.
For instance, denoting as $B_k$ the batch size at iteration $k$, the convergence rate in Theorem 4.2 transforms into the following:
$$
\mathbb{E}\left[\left\| \nabla V\left(\theta_{R}\right)\right\|^{2}\right] \leq 2\frac{\mathbb{E}[V(\theta_{0}) - V(\theta^{*})] + \sum_{k=0}^{n} w_{k+1} \delta_{k+1} \sigma_{k}^2 / B_{k} + \sum_{k=0}^{n} w_{k+1} \gamma_{k+1} \lambda_{k+1} r_{k+1} }{\sum_{j=0}^{n} w_{j+1} \gamma_{j+1} \lambda_{j+1}}.
$$
Here, we observe that taking a large $B_k$ leads to a smaller second term in the convergence rate (the reduction of the variance).
It is well-known that increasing the batch size leads to faster convergence but also increases computation time. Therefore, a balance must be struck between batch size and computational efficiency. These comments will be added in the revised version of the paper.
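The $\sigma_k^2 / B_k$ variance reduction in the modified rate can be illustrated with a toy mini-batch gradient estimator (the quadratic objective and Gaussian noise model are illustrative assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
sigma = 1.0

def minibatch_grad(theta, B):
    """Mini-batch gradient of V(theta) = theta^2 / 2: each per-sample
    gradient is theta + N(0, sigma^2), averaged over a batch of size B."""
    return theta + sigma * rng.standard_normal(B).mean()

theta = 0.5
for B in (1, 4, 16):
    samples = np.array([minibatch_grad(theta, B) for _ in range(20000)])
    print(B, round(samples.var(), 3))   # empirical variance close to sigma^2 / B
```

The printed variances shrink roughly by a factor of $B$, matching the scaled noise term $\sigma_k^2 / B_k$ in the bound.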
- Most of the existing literature on biased SA focuses on SGD with biased gradients, assuming a constant bias throughout the iterations. Only a few studies address biased gradients with adaptive steps, and these typically focus on the scalar version of Adagrad with strong assumptions.
For instance, [3] studied convergence results for biased gradients with Adagrad in the Markov chain case, focusing on the norm of the gradient of the Moreau envelope while assuming the boundedness of the objective function.
In contrast, our work provides convergence rates in a broader and more general setting that encompasses various applications and adaptive algorithms. One of the key challenges addressed in our work is the need to manage the preconditioning matrix effectively.
A significant technical difficulty in this context arises from the term $\mathbb{E} \left[ \left\langle \nabla V \left( \theta_{n} \right), A_{n}H_{\theta_{n}} \left( X_{n+1} \right) \right\rangle \mid \mathcal{F}\_{n} \right]$, where the stochasticity depends on $X_{n+1}$, which is part of the biased gradient estimator $H_{\theta_{n}} \left( X_{n+1} \right)$, as well as the preconditioning matrix $A_n$. To address this challenge, Assumption H3$(i)$ was introduced.
Furthermore, we have shown that for Adagrad, Adam, and RMSProp (Corollary 4.5), a sufficient condition to verify H3$(i)$ is to bound the bias of the gradient, that is, to show that there exists $\tilde b_{n+1}$ such that
$$||\mathbb{E}[H_{\theta_n}(X_{n+1})\mid\mathcal{F}\_n]-\nabla V(\theta_n)||\leq\tilde b\_{n+1}.$$
Compared to existing convergence rates for Adagrad with biased gradients, our approach is notable for its weaker assumptions and its coverage of both the scalar and diagonal versions of Adagrad. Additionally, it introduces novel results for other algorithms, such as RMSProp and Adam.
---
Rebuttal Comment 1.1:
Title: Thanks for the clarification
Comment: I thank the authors for the detailed response, which addressed my concerns and questions. I keep my positive score unchanged. | Summary: This paper proposes non-asymptotic convergence guarantees for several gradient-based optimization methods, ranging from SGD to AdaGrad-like methods, when the estimation of the gradients is biased. More specifically, the convergence results include a theorem with the Polyak-Lojasiewicz condition and another one without the PL condition, both in a non-convex smooth setting.
Strengths: # Originality
This paper studies a problem that is not yet broadly dealt with.
# Clarity
This paper is easy to read and the statements are clear.
# Significance
The setup of the provided convergence results include non-convex smooth settings, which are very common in ML.
# Quality
Overall, the theoretical results are sound and the proofs are easy to read (and seem to involve only classical techniques).
The experimental section provides examples where the estimation is naturally biased, which helps to evaluate the significance of the results.
Weaknesses: # Significance
How limiting are hypotheses H1-5? It is difficult to check when the proposed models fulfill them.
Technical Quality: 4
Clarity: 4
Questions for Authors: How limiting are hypotheses H1-5? It is difficult to check when the proposed models fulfill them.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: -
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive and thoughtful comments.
There are three types of hypotheses discussed: those concerning only the adaptive algorithm, those concerning only the application (objective function), and those concerning both. These assumptions are discussed in the paper, but we will provide more detailed comments in the revised version.
- Assumption H4 [10, 32] relates only to the adaptive algorithm. We have shown that it can be verified in Adagrad, RMSprop, and Adam. For other algorithms where verification is difficult, we can use the truncation method described after this assumption.
The truncation method involves replacing the random matrices $A_n$ by $\tilde{A}_n = \min\{||A_n||, \beta_{n+1}\} A_n / ||A_n||$. It is also stated that truncation is only used with very low probability, which means that the estimators $A_{n}$ are minimally affected by these truncations [32]. Since we choose $\beta_{n+1}$, we have control over the convergence rate.
- Assumption H1 relates to the Polyak-Łojasiewicz (PL) condition. We provide convergence rates with and without this condition. This assumption is weaker than strong convexity and remains satisfied even when the function is non-convex. Note that it has been studied theoretically [42] and has been verified empirically in many applications, such as in deep networks [24] and for Linear Quadratic Regulator [26].
- Assumption H2 is crucial for obtaining the convergence rate [9, 56]. This assumption can be restricted to the generalized smoothness condition [80]. Since we assume the boundedness of gradients (H4), the smoothness and generalized smoothness are equivalent. Assumption H4 is standard in adaptive algorithms [18, 64, 70, 71] and, in practice, can be satisfied by clipping the gradient.
- Assumption H3 is the only assumption that relates to both the adaptive algorithm and the objective function. H3$(ii)$ is a relaxed assumption on the gradient variance, referred to as “expected smoothness” [20], as mentioned in our paper and noted by reviewer aKkE.
Since H3$(i)$ involves the preconditioning matrix and the objective function, providing a necessary and sufficient condition in a general setting is challenging.
However, we have shown that for Adagrad, Adam, and RMSProp (Corollary 4.5), a sufficient condition to verify H3$(i)$ is to bound the bias of the gradient, that is, to show that there exists $\tilde b_{n+1}$ such that
$$||\mathbb{E}[H_{\theta_n}(X_{n+1})\mid\mathcal{F}\_n]-\nabla V(\theta_n)||\leq\tilde b\_{n+1}.$$
Applications, where this sufficient condition can be verified, are listed in Appendices C and D. We also refer reviewer ioBT to see how to verify H3$(i)$.
Furthermore, we have demonstrated that our framework is applicable to numerous scenarios, including Bilevel Optimization and Conditional Stochastic Optimization, where all these assumptions are verified. | Summary: This paper studies the non-asymptotic convergence guarantees of SGD with adaptive step sizes and (time-dependent) biased gradient estimators for nonconvex smooth functions. Applications to AdaGrad, RMSProp and Adam are developed. Numerical experiments on bilevel and conditional stochastic optimization, as well as deep VAE are used to illustrate the established theoretical results.
Strengths: This work considers the biased gradient setting, which is away from the standard unbiased gradient setting like most standard results in the literature. This work also considers a more recently relaxed assumption on the gradient variance called “expected smoothness”, which makes the analysis even more general.
Weaknesses: The bounded spectrum assumption of $A_n$ might not hold for every optimizer considered, and it remains to see how to relax it.
Technical Quality: 3
Clarity: 3
Questions for Authors: Sometimes “adaptive step sizes” is referred to as “adaptive steps” in the paper. I guess using “adaptive step sizes” for all instances will avoid possible confusion. Can you also address the above point in weaknesses?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive and thoughtful comments.
We have changed "adaptive steps" to "adaptive step sizes" in the revised paper.
In Theorems 4.1 and 4.2, since the exact form of $A_n$ is unknown (as we aim to cover all adaptive algorithms), we must assume certain properties about the preconditioning matrix, specifically the bound on its eigenvalues (H4). However, when applied to algorithms such as Adagrad, RMSProp, and Adam, we have shown that this assumption holds.
It can be challenging to verify this assumption for some other algorithms, such as Stochastic Newton.
As discussed after Assumption H4, the truncation method can be used for $A_{n}$ as done in [32].
The truncation method involves replacing the random matrices $A_n$ by $\tilde{A}_n = \min\{||A_n||, \beta_{n+1}\} A_n / ||A_n||$. It is also stated that truncation is only used with very low probability, which means that the estimators $A_{n}$ are minimally affected by these truncations [32]. Since we choose $\beta_{n+1}$, we have control over the convergence rate.
Although controlling the minimal eigenvalue is not strictly necessary, ensuring the boundedness of the gradient is generally sufficient to control this quantity in algorithms such as Adagrad, RMSProp, and Adam. Further details on controlling the minimum and maximum eigenvalues are provided in line 566 for Adagrad and line 572 for RMSProp. For Adam, the control is the same as for RMSProp since both use the same preconditioning matrix $A_n$.
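The two-sided eigenvalue control for the diagonal Adagrad preconditioner under bounded gradients can be checked numerically (a toy sketch; the dimension, the constants, and the clipping used to enforce $||g|| \leq M$ are illustrative assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
d, n, beta, M = 4, 500, 10.0, 2.0

acc = np.zeros(d)
for _ in range(n + 1):
    g = rng.standard_normal(d)
    g *= min(1.0, M / np.linalg.norm(g))       # enforce bounded gradients: ||g|| <= M
    acc += g * g
A_diag = (beta ** -2 + acc / (n + 1)) ** -0.5  # diagonal Adagrad preconditioner A_n

print(A_diag.max() <= beta)                           # lambda_max(A_n) <= beta
print(A_diag.min() >= (beta ** -2 + M ** 2) ** -0.5)  # lambda_min(A_n) bounded away from 0
```

The upper bound holds because every diagonal entry of the accumulator is at least $\beta^{-2}$, and the lower bound because each averaged squared-gradient entry is at most $M^2$.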
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: Thanks so much to the authors for your rebuttal. Your rebuttal has addressed my concern. | Rebuttal 1:
Rebuttal: We would like to express our gratitude to the reviewers for their helpful reviews and constructive feedback, which have helped us further improve our paper. Below, we provide a common response to comments made by several reviewers, followed by our point-by-point responses addressing all the specific concerns. Please note that all reference numbers correspond to those in the original paper, with any new references listed below.
Here are some comments on our results in the Martingale and Markov Chain cases, as requested by some reviewers. We provide below the convergence results for both the Martingale and Markov Chain cases. First, we want to highlight that our framework is more general and encompasses all possible applications and adaptive algorithms. In many cases, the data $X_n$ is i.i.d. (with noise as a Martingale difference) or forms a Markov Chain.
### $\textbf{Obtaining the form of the bias}$
We provide the bound $\tilde b_n$ of the gradient estimator $|| \mathbb{E}[H_{\theta_{n}}(X_{n+1}) \mid \mathcal{F}\_{n} ] - \nabla V(\theta_{n}) ||$ for both the i.i.d. and Markov cases.
#### $\textbf{I.I.D case}$
Let us assume that {$X_{n}, n \in \mathbb{N}$} is an i.i.d. sequence. If the mean field function (the conditional expectation of the gradient $h(\theta_{n}) = \mathbb{E}\left[H_{\theta_{n}}\left(X_{n+1}\right) \mid \mathcal{F}\_{n}\right]$) matches the gradient of the objective function, then there is no bias. Any bias arises from the difference between the mean field function and the true gradient of the objective function.
In this case, $\tilde b_{n+1} = ||h(\theta_n) - \nabla V(\theta_n)||$.
#### $\textbf{Markov Chain case}$
We now assume that {$X_{n}, n \in \mathbb{N}$} is a Markov Chain. In this case, even if $h(\theta) = \nabla V(\theta)$, there is an additional bias introduced by the properties of the Markov Chain. Specifically, the total bias consists of two components: one due to the difference between the mean field function and the true gradient of the objective function, and the other due to the characteristics of the Markov Chain.
We define the stochastic update as:
$$H_{\theta_{k}}\left(X_{k+1}\right) = \frac{1}{T} \sum_{i=1}^{T} H_{\theta_{k}}\left(X_{k+1}^{(i)}\right). $$
When using $T$ samples per step to compute the gradient, if {$X_{n}, n \in \mathbb{N}$} is an ergodic Markov chain with stationary distribution $\pi$ and $||h(\theta_n) - \nabla V(\theta_n)||$ is bounded, then $\tilde b_{n+1} = ||h(\theta_n) - \nabla V(\theta_n)|| + M \sqrt{\tau_{\text{mix}}/T}$, where $h(\theta) = \int H_{\theta}(x) \pi(dx)$ and $\tau_{\text{mix}}$ is the mixing time defined below.
If the general optimization problem reduces to the following stochastic optimization problem with Markov noise, as considered in most of the literature [77, 78, 79]:
$$
\min_{\theta \in \mathbb{R}^d} V(\theta) := \mathbb{E}\_{x \sim \pi} [f(\theta; x)],
$$
where $\theta \mapsto f(\theta; x)$ is a loss function, and $\pi$ is some stationary data distribution of the Markov Chain and $H_{\theta_{k}}\left(X_{k+1}^{(i)}\right)=\nabla f(\theta_{k}; X_{k+1}^{(i)})$, then $\tilde b_{n+1}=M\sqrt{\tau_{\text{mix}}/T}$, similar to SGD with Markov Noise [78].
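The shrinking bias of the $T$-sample average under Markov noise can be illustrated on a toy two-state chain (the chain, the test function $f$, and all constants are illustrative assumptions, not from the paper; we only check the qualitative behavior behind the $M\sqrt{\tau_{\text{mix}}/T}$ bound):

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative two-state chain; its stationary distribution is pi = (2/3, 1/3)
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
f = np.array([1.0, -1.0])                  # toy test function f(x)
pi_mean = (2 / 3) * f[0] + (1 / 3) * f[1]  # E_pi[f] = 1/3

def chain_average(T, x0=1):
    """Average f over T steps of the chain started out of stationarity at x0."""
    x, total = x0, 0.0
    for _ in range(T):
        x = rng.choice(2, p=P[x])
        total += f[x]
    return total / T

for T in (1, 10, 50):
    bias = abs(np.mean([chain_average(T) for _ in range(2000)]) - pi_mean)
    print(T, round(bias, 3))   # the bias of the T-step average decreases in T
```

Starting the chain away from stationarity inflates the early samples; averaging over $T$ steps washes this out at a rate governed by the mixing time, in line with the $\sqrt{\tau_{\text{mix}}/T}$ term.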
### $\textbf{Convergence Results in Both Cases}$
Once we have the form of the bias of the gradient $\tilde b_n$, we can determine the convergence rate for our theorems. For instance, the convergence rate in Theorem 4.1 becomes $\mathcal{O}(n^{-\gamma+2\beta+\lambda} + ||h(\theta_n) - \nabla V(\theta_n)||^{2})$ for i.i.d case and $\mathcal{O}(n^{-\gamma+2\beta+\lambda} + ||h(\theta_n) - \nabla V(\theta_n)||^{2} + M^2 \tau_{\text{mix}}/T)$ for Markov Chain case.
- The mixing time $\tau_{\text{mix}}$ of a Markov chain with stationary distribution $\pi$ and transition kernel $P$ is defined as
$$
\tau_{\text{mix}} := \inf \left\{ t : \sup_{x} D_{\text{TV}}\left(P^t(x, \cdot), \pi\right) \leq \frac{1}{4} \right\},
$$
where $D_{\text{TV}}$ denotes the total variation distance.
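For a finite-state chain, this definition can be evaluated directly by powering the transition matrix (a toy sketch; the two-state chain is an illustrative example, not from the paper):

```python
import numpy as np

def mixing_time(P, tol=0.25, t_max=10_000):
    """Smallest t with sup_x D_TV(P^t(x, .), pi) <= tol, for a finite-state chain."""
    # stationary distribution: left eigenvector of P for eigenvalue 1
    w, V = np.linalg.eig(P.T)
    pi = np.real(V[:, np.argmin(np.abs(w - 1))])
    pi = pi / pi.sum()
    Pt = np.eye(P.shape[0])
    for t in range(1, t_max + 1):
        Pt = Pt @ P
        tv_sup = 0.5 * np.abs(Pt - pi).sum(axis=1).max()  # sup over starting states x
        if tv_sup <= tol:
            return t
    return None

# illustrative lazy two-state chain
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
print(mixing_time(P))   # → 3
```

Here each row of $P^t$ is the distribution $P^t(x,\cdot)$, so the row-wise $\ell_1$ distance to $\pi$, halved, is exactly the total variation distance in the definition.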
[77] John C Duchi, Alekh Agarwal, Mikael Johansson, and Michael I Jordan. Ergodic mirror descent. SIAM Journal on Optimization, 22(4):1549–1578, 2012.
[78] Ron Dorfman and Kfir Yehuda Levy. Adapting to mixing time in stochastic optimization with Markovian data. In International Conference on Machine Learning, pages 5429–5446. PMLR, 2022.
[79] Aleksandr Beznosikov, Sergey Samsonov, Marina Sheshukova, Alexander Gasnikov, Alexey Naumov, and Eric Moulines. First order methods with Markovian noise: from acceleration to variational inequalities. In Advances in Neural Information Processing Systems, volume 36, 2024.
[80] Jingzhao Zhang, Tianxing He, Suvrit Sra, and Ali Jadbabaie. Why gradient clipping accelerates training: a theoretical justification for adaptivity. In International Conference on Learning Representations, 2020.
[81] Yin Liu and Sam Davanloo Tajbakhsh. Adaptive stochastic optimization algorithms for problems with biased oracles. arXiv preprint arXiv:2306.07810, 2023.
[82] Haochuan Li, Alexander Rakhlin, and Ali Jadbabaie. Convergence of Adam under relaxed assumptions. Advances in Neural Information Processing Systems, 36, 2024. | NeurIPS_2024_submissions_huggingface | 2024 | Summary: Stochastic and adaptive optimization algorithms are commonly used in advanced machine learning techniques. However, the analysis of non-convex optimization with biased gradients is lacking in the literature. This work considers the general scenario of optimizing machine learning objectives with practical optimizers given biased gradient estimators. Convergence bounds for different optimizers are proved with or without the Polyak-Łojasiewicz condition, and developed under an adaptive (preconditioning) matrix, which indicates the possibility of achieving the same convergence speed as with unbiased estimators by proper hyperparameter tuning.
The results are theoretically applied to bi-level optimization and others, empirically applied to importance weighted variational-autoencoder (IWAE) optimization. It is shown that the proved bounds apply to several applications. The experiments on CIFAR-10 and FashionMNIST conclude that controlling the bias is beneficial to the convergence speed when the bias term dominates the optimizer term and the benefits become marginal after a certain threshold.
Strengths: My overall judgement of the work is quite positive from its logic flow. However, I am not able to check all the details of the proofs.
The paper performs theoretical analysis of a common issue that covers a wide range of settings. Early in the paper, the background and application of adaptive stochastic approximation are established, which includes commonly used optimizers like SGD, Adagrad, RMSProp and Adam. The theoretical part is thorough with hyperparameters and different assumption groups, which is quite general. The most important assumption in the paper bounds the bias of the gradient, which also seems reasonable.
The settings of the IWAE experiment are not realistic, since an increasing number of samples is assumed during optimization. However, it can be viewed as a verification instead of an application of the bounds. As commented by the authors, the experiments with FashionMNIST indicate that the proved bound may be tight. The combination of theoretical and empirical evidence makes the paper strong.
Weaknesses: Despite having wide applications, the experiments only include one case that verifies the proved bounds. The paper spent some efforts in bi-level optimization, but did not provide empirical analysis. IWAE has a remote connection to bi-level optimization when q is flexible enough, which could be utilized with a different optimization scheduling.
minor: line 135, there seems to be a redundant factor of 1/2 for $r_{n+1}$
Technical Quality: 3
Clarity: 3
Questions for Authors: The variational distribution q may also affect the convergence. In the experiments, is q fixed or trained? I think the analysis applies to fixed q, but would like to learn about assumptions and implementation details here.
The introduction also talked about Markov chain based optimization schemes, but the analysis later in the paper seems to be far from it. How is assumption H3 achieved in such case? How are the theorems connected to the Martingale and Markov chain cases?
The main analysis in the paper is limited to optimization with vanishing bias, which does not directly apply to some practical methods (e.g. relaxation with Gumbel-softmax/concrete distribution, variational sequential Monte Carlo that drops part of the gradient). Can you provide a useful bound when r_n is a (or only bounded by a) constant from Theorem 4.2?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: I do not see a limitation section and it would be beneficial to sketch future works by discussing limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive and thoughtful review and the opportunity to clarify our contributions.
$\textcolor{blue}{\textbf{Link Between IWAE and Bilevel Optimization}}$
We agree that assuming an increasing number of samples during optimization in IWAE is unrealistic; it is used merely to illustrate our convergence results. In our experiments, we have illustrated our results in the PL case using an artificial example (see Appendix E.1). To avoid oversimplification, we also train the variational distribution $q$ rather than keeping it fixed. In this context, we are also in a biased gradient case, which is more aligned with Stochastic Bilevel Optimization.
As you noted, there is indeed a connection between IWAE and bilevel optimization.
The problem of VAE can be considered a Stochastic Bilevel Optimization problem, given by:
$$\min_\theta V(\theta)=\mathbb{E}\_\pi[\log p_\theta(x)]=\mathcal{L}(\theta,\phi^*(\theta)) + \mathbb{E}\_\pi[\text{D}\_{KL}(q\_{\phi^*(\theta)}(z|x)||p\_\theta(z|x))]$$
subject to
$$\phi^*(\theta)\in\arg\min_\phi\mathcal{L}(\theta,\phi).$$
Additionally, the IWAE problem can also be interpreted as Conditional Stochastic Optimization by setting $f$ to correspond to the identity function and $g_z((\theta,\phi),x)=\log p_\theta(x,z)/q_\phi(z|x)$ in Eq (14).
All discussions on the link between IWAE and Stochastic Bilevel Optimization, as well as Conditional Stochastic Optimization IWAE, will be added in the experiments section. Additional details on the model configuration will be included in Appendix E in the revised paper.
$\textcolor{blue}{\textbf{Discussion about the Martingale and Markov Chain Cases}}$
As you mentioned in your comments, the purpose of Assumption H3 is to provide a very general framework that covers all possible applications and adaptive algorithms. In many cases, $X_n$ is i.i.d. (noise as a Martingale difference) or forms a Markov Chain.
For IWAE, this applies to the Martingale case, but we also provide examples of applications involving the Markov Chain case, such as Sequential Monte Carlo Methods discussed in Appendix D.2.
Since H3$(i)$ involves both the preconditioning matrix and the objective function, providing a necessary and sufficient condition in a general setting is challenging. However, we have shown that for Adagrad, Adam, and RMSProp (Corollary 4.5), a sufficient condition to verify H3$(i)$ is to bound the bias of the gradient, that is, to show that there exists $\tilde b_{n+1}$ such that
$$||\mathbb{E}[H_{\theta_n}(X_{n+1})\mid\mathcal{F}\_n]-\nabla V(\theta_n)||\leq\tilde b\_{n+1}.$$
Whether $X_n$ is i.i.d. or a Markov Chain, the goal is simply to find $\tilde b_n$. We have used this approach to verify H3$(i)$ for several applications, such as Stochastic Bilevel Optimization and Conditional Stochastic Optimization, which are discussed in Section 5.1 and Appendix C.
Following also the comments of Reviewer ioBT, we provide the form of $\tilde b_n$ for both the i.i.d. and Markov cases.
### $\textbf{I.I.D case}$
Let us assume that {$X_{n}, n \in \mathbb{N}$} is an i.i.d. sequence. If the mean field function (the conditional expectation of the gradient $h(\theta_n)=\mathbb{E}[H_{\theta_n}(X_{n+1})\mid\mathcal{F}\_n]$) matches the gradient of the objective function, then there is no bias. Any bias arises from the difference between the mean field function and the true gradient of the objective function.
In this case, with $\tilde b_{n+1}=||h(\theta_n)-\nabla V(\theta_n)||$, Assumption H3$(i)$ is verified.
### $\textbf{Markov Chain case}$
We now assume that {$X_{n}, n \in \mathbb{N}$} is a Markov Chain. In this case, even if $h(\theta) = \nabla V(\theta)$, there is an additional bias introduced by the Markov Chain's properties. Specifically, the total bias consists of two components: one due to the difference between the mean field function and the true gradient of the objective function, and the other due to the Markov Chain’s characteristics.
We define the stochastic update as:
$$H_{\theta_{k}}\left(X_{k+1}\right)=\frac{1}{T}\sum_{i=1}^{T}H_{\theta_k}\left(X_{k+1}^{(i)}\right).$$
When using $T$ samples per step to compute the gradient, if {$X_{n}, n \in \mathbb{N}$} is an ergodic Markov chain with stationary distribution $\pi$ and $||h(\theta_n)-\nabla V(\theta_n)||$ is bounded, Assumption H3$(i)$ is verified with $\tilde b_{n+1}=||h(\theta_n)-\nabla V(\theta_n)||+M\sqrt{\tau_\text{mix}/T}$, where $h(\theta)=\int H_{\theta}(x)\pi(dx)$ and $\tau_\text{mix}$ is the mixing time.
If the general optimization problem reduces to the following stochastic optimization problem with Markov noise, as considered in most of the literature [77, 78, 79]:
$$\min_{\theta\in\mathbb{R}^d}V(\theta):=\mathbb{E}\_{x\sim\pi}[f(\theta;x)], $$
where $\theta\mapsto f(\theta;x)$ is a loss function, and $\pi$ is some stationary data distribution of the Markov Chain and $H_{\theta_k}(X_{k+1}^{(i)})=\nabla f(\theta_k;X_{k+1}^{(i)})$, then $\tilde b_{n+1}=M\sqrt{\tau_\text{mix}/T}$, similar to SGD with Markov Noise [78].
### $\textbf{Convergence Results in Both Cases}$
Once we have the form of the bias of the gradient $\tilde b_n$, we can determine the convergence rate for our theorems. For instance, the convergence rate in Theorem 4.1 becomes $\mathcal{O}(n^{-\gamma+2\beta+\lambda}+||h(\theta_n)-\nabla V(\theta_n)||^{2})$ for i.i.d case and $\mathcal{O}(n^{-\gamma+2\beta+\lambda}+||h(\theta_n)-\nabla V(\theta_n)||^{2}+M^2\tau_{\text{mix}}/T)$ for Markov Chain case.
$\textcolor{blue}{\textbf{The Case with Constant Bias}}$
In fact, our analysis also applies to scenarios with a constant bias. For example, if the additive bias term $r_n$ is bounded by a constant $a$, the convergence rate becomes $\mathcal{O}(\log n/\sqrt{n}+a)$. To ensure that $\mathbb{E}\left[||\nabla V(\theta_{n})||^{2}\right]\leq\varepsilon$ for some $\varepsilon > 0$, the constant $a$ must be less than $\varepsilon$. Otherwise, this constant bias will adversely affect the convergence.
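This plateau effect can be seen on a toy quadratic: SGD with a constant additive bias $a$ in the gradient drives the iterate to a point where $||\nabla V|| \approx a$ rather than to the true minimizer (the objective, step size, and noise level are illustrative assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)
a = 0.1        # constant additive bias in the gradient estimator
gamma = 0.05   # step size
theta = 5.0

for _ in range(2000):
    g = theta + a + 0.01 * rng.standard_normal()  # biased noisy gradient of V(theta) = theta^2 / 2
    theta -= gamma * g

# The iterate stalls near theta = -a, where ||grad V(theta)|| = a > 0
print(abs(theta + a) < 0.05)   # True
```

The fixed point of the biased update is $\theta = -a$, so $\mathbb{E}[||\nabla V(\theta_n)||^2]$ settles near $a^2$, matching the constant term in the rate above.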
---
Rebuttal 2:
Comment: Thank you for the rebuttal, which addresses my questions. After reading the other reviews and responses, I do not have any additional concerns. | Summary: The authors provide convergence guarantees for the biased adaptive stochastic approximation framework and establish convergence to a critical point in the non-convex setting for Adagrad-type methods. The authors illustrate their results in the setting of IWAE.
Strengths: Stated results for the rates of convergence of biased SA under the Polyak-Łojasiewicz condition are novel, yet the obtained rates are classical in the literature.
Weaknesses: Checking Assumption H3 in its current form is not obvious for different types of noise, since it involves taking an expectation w.r.t. the dynamics of the process $\{\theta_n\}$ itself. With this assumption, the proof of the main results is rather classical, following standard techniques in non-convex optimization. At the same time, it is not clear how H3 can be checked even for an independent sequence $\{X_n\}$. Checking the same assumption when $\{X_n\}$ is a Markov chain is even trickier due to correlations between $X_n$ and $\theta_n$. Previous papers on the subject (see e.g. [Karimi et al, 2019]) typically state assumptions that do not involve expectations over $\theta_n$. I suggest the authors clarify H3 for independent/martingale/Markov noise separately and state the respective assumption (when $X_n$ is a Markov chain, probably in terms of the mixing time of this sequence). Provided that my concerns regarding H3 are resolved, I will be happy to increase my score.
The bibliography could also be complemented with the paper [Dorfman and Levy, 2022], where the authors consider the adaptivity of Adagrad-type algorithms to the mixing time.
References:
[Dorfman and Levy, 2022] Dorfman, R. and Levy, K.Y. Adapting to mixing time in stochastic optimization with Markovian data. In International Conference on Machine Learning, 2022, pp. 5429-5446. PMLR.
[Karimi et al, 2019] Karimi, B., Miasojedow, B., Moulines, E. and Wai, H.T. Non-asymptotic analysis of biased stochastic approximation scheme. In Conference on Learning Theory, 2019, pp. 1944-1974.
Technical Quality: 3
Clarity: 3
Questions for Authors: See the weakness section.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors should discuss in more details the types of considered data stream $(X_n)$ and its relations with H3.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback, we give below some clarifications concerning H3. The purpose of Assumption H3 is to provide a very general framework that covers all possible applications and adaptive algorithms. Since H3 $(ii)$ is a well-known assumption [20], we understand that your question concerns Assumption H3 $(i)$.
We would like to first stress that, with this assumption, the proofs of the main results (Theorems 4.1 and 4.2) still differ from that of biased SGD in the way in which we control the preconditioning matrix $A_n$. Furthermore, the application of these results to Adagrad and RMSProp, as well as the extension to Adam, are significantly different from the proof of biased SGD.
Moreover, it is actually possible to verify Assumption H3 $(i)$ in many settings. It is important to keep in mind that this assumption is very general and depends on three quantities:
- The type of adaptive algorithm (and therefore the form of the matrix $A_n$),
- The application, that is, the gradient of the objective function $\nabla V(\cdot)$ and its empirical counterpart $H_{\theta_{n}} \left( \cdot \right)$,
- The stochasticity of the sequence {$X_{n}, n \in \mathbb{N}$}.
Let us first discuss the last point, which concerns the stochasticity of $X_{n}$. It is actually possible to avoid computing “an expectation with respect to the dynamics of the process itself”, as mentioned in the review. Indeed, using the tower property, we have:
$$
\mathbb{E}\left[\left\langle\nabla V\left(\theta_{n}\right),A_{n}H_{\theta_{n}}\left(X_{n+1}\right)\right\rangle\right]=\mathbb{E}\left[ \mathbb{E}\left[\left\langle\nabla V\left(\theta\_{n}\right),A\_{n}H_{\theta\_{n}}\left(X\_{n+1}\right)\right\rangle |\mathcal{F}\_{n}\right]\right],
$$
where $(\mathcal{F}\_{n})\_{n\geq 0}$ represents the filtration generated by the random variables $(\theta\_{0}, X_1, ..., X_n)$.
Then, we need to verify that
$$
\mathbb{E}\left[\left\langle\nabla V\left(\theta_{n}\right),A_{n}H_{\theta_{n}}\left(X_{n+1}\right)\right\rangle |\mathcal{F}\_{n}\right]\geq \lambda_{n+1}\left(||\nabla V(\theta_{n})||^{2}-r_{n+1}\right).$$
Here, the stochasticity depends only on $X_{n+1}$, and not on the dynamics of the process {$\theta_{k}$}$_{k \leq n}$.
Moreover, we have shown that for Adagrad, Adam, and RMSProp (Corollary 4.5), a sufficient condition to verify H3$(i)$ is to bound the bias of the gradient, that is, to show that there exists $\tilde b_{n+1}$ such that
$$
||\mathbb{E}[H_{\theta_{n}}(X_{n+1})\mid\mathcal{F}\_{n}]-\nabla V(\theta_{n})||\leq\tilde b\_{n+1}.
$$
Then, whatever the assumptions on stochasticity, that is, whether $X_n$ is i.i.d. (so that the noise is a martingale difference) or a Markov chain, the goal is simply to find $\tilde b_n$. We have used this approach to verify H3$(i)$ for several applications such as Stochastic Bilevel Optimization and Conditional Stochastic Optimization (in which case the sequence {$X_{n}, n \in \mathbb{N}$} is arbitrary) or Sequential Monte Carlo Methods, which corresponds to the Markov chain case. This is discussed in Section 5.1 of the main paper and in Appendices C and D.
However, we agree that this was not clear enough in the first version. We will use the extra page to clarify this point. To help the reader, we will give more details on the form of $\tilde b_n$ in the i.i.d. and Markov cases, as detailed below.
#### $\textbf{I.I.D. Case}$
Let us assume that {$X_{n}, n \in \mathbb{N}$} is an i.i.d. sequence. If the mean field function (the conditional expectation of the gradient $h(\theta_{n}) = \mathbb{E}\left[H_{\theta_{n}}\left(X_{n+1}\right) \mid \mathcal{F}\_{n}\right]$) matches the gradient of the objective function, then there is no bias. Any bias arises from the difference between the mean field function and the true gradient of the objective function.
In this case, with $ \tilde b_{n+1} = ||h(\theta_n) - \nabla V(\theta_n)|| $, Assumption H3$(i)$ is verified.
#### $\textbf{Markov Chain Case}$
We now assume that {$X_{n}, n \in \mathbb{N}$} is a Markov Chain. In this case, even if $h(\theta) = \nabla V(\theta)$, there is an additional bias introduced by the Markov Chain's properties. Specifically, the total bias consists of two components: one due to the difference between the mean field function and the true gradient of the objective function, and the other due to the Markov Chain’s characteristics.
We define the stochastic update as:
$$H_{\theta_{k}}\left(X_{k+1}\right) = \frac{1}{T} \sum_{i=1}^{T} H_{\theta_{k}}\left(X_{k+1}^{(i)}\right). $$
When using $T$ samples per step to compute the gradient, if {$X_{n}, n \in \mathbb{N}$} is an ergodic Markov chain with stationary distribution $\pi$ and $||h(\theta_n) - \nabla V(\theta_n)||$ is bounded, Assumption H3$(i)$ is verified with $\tilde b_{n+1} = ||h(\theta_n) - \nabla V(\theta_n)|| + M \sqrt{\tau_{\text{mix}}/T}$, where $h(\theta) = \int H_{\theta}(x) \pi(dx)$ and $\tau_{\text{mix}}$ is the mixing time defined in [78].
If the general optimization problem reduces to the following stochastic optimization problem with Markov noise, as considered in most of the literature [77, 78, 79]:
$$
\min_{\theta \in \mathbb{R}^d} V(\theta) := \mathbb{E}\_{x \sim \pi} [f(\theta; x)],
$$
where $\theta \mapsto f(\theta; x)$ is a loss function, and $\pi$ is some stationary data distribution of the Markov Chain and $H_{\theta_{k}}\left(X_{k+1}^{(i)}\right)=\nabla f(\theta_{k}; X_{k+1}^{(i)})$, then $\tilde b_{n+1}=M\sqrt{\tau_{\text{mix}}/T}$, similar to SGD with Markov Noise [78].
### $\textbf{Convergence Results in Both Cases}$
Once we have the form of the bias of the gradient $\tilde b_n$, we can determine the convergence rate for our theorems. For instance, the convergence rate in Theorem 4.1 becomes $\mathcal{O}(n^{-\gamma+2\beta+\lambda} + ||h(\theta_n) - \nabla V(\theta_n)||^{2})$ for the i.i.d. case and $\mathcal{O}(n^{-\gamma+2\beta+\lambda} + ||h(\theta_n) - \nabla V(\theta_n)||^{2} + M^2 \tau_{\text{mix}}/T)$ for the Markov chain case.
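To make the dependence on $\tau_\text{mix}$ and $T$ tangible, here is a small self-contained numerical sketch (our own illustration, not from the paper): for a slowly mixing two-state chain, the conditional bias of a $T$-sample average (the analogue of the gradient bias above, with $f$ standing in for the stochastic gradient) shrinks as $T$ grows, consistent with the bound above.

```python
import numpy as np

# Illustration only: conditional bias of a T-sample average over a slowly
# mixing two-state Markov chain. Here f(x) = x plays the role of the
# stochastic gradient; its stationary mean is 0.5. Conditioned on the
# current state, the T-sample average is biased, and the bias shrinks as
# T grows relative to the mixing time of the chain.
P = np.array([[0.99, 0.01],
              [0.01, 0.99]])          # slow mixing: spectral gap 0.02
f = np.array([0.0, 1.0])
pi_f = 0.5                            # stationary mean of f

def conditional_bias(T, start=0):
    """|E[(1/T) sum_{i=1}^T f(X_i) | X_0 = start] - pi(f)|, computed exactly."""
    dist = np.zeros(2)
    dist[start] = 1.0
    total = 0.0
    for _ in range(T):
        dist = dist @ P               # distribution of X_i given X_0
        total += dist @ f
    return abs(total / T - pi_f)

for T in (10, 100, 1000):
    print(T, conditional_bias(T))
```

The bias is computed exactly via matrix powers rather than by simulation, so the monotone decrease with $T$ is not a sampling artifact.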
---
Rebuttal Comment 1.1:
Comment: Thank you for detailed answer. I will increase my score to 6. | null | null | null | null |
UniSDF: Unifying Neural Representations for High-Fidelity 3D Reconstruction of Complex Scenes with Reflections | Accept (poster) | Summary: This paper proposes UniSDF, an improved NeuS architecture capable of reconstructing photorealistic reflective surfaces and robustly working in real-world scenes. The method models reflective color and base color as separate MLPs, using learned weights to blend them and obtain the final colors. Qualitative and quantitative results on several datasets demonstrate that the method surpasses other baselines and results in fewer geometric artifacts.
Strengths: The method proposes a new network architecture to better represent the reflective radiance field, although most of the concepts are derived from previous works. There are some original contributions, such as the blending between two fields inspired by NeRFRen, which is extended to non-planar surfaces, and the coarse-to-fine training from Neuralangelo, used here to reduce the ambiguity of reflections. Therefore, while the method is novel, its significance is not substantial.
Another strength is the robust and high-quality reconstruction demonstrated in the paper and supplementary material, which surpasses other baselines. This is valuable since this problem is crucial in real-world 3D reconstruction scenarios.
Weaknesses: The biggest weakness is the lack of physical interpretation. In Eq. 8, the final color is a linear combination of the camera view and reflected view radiance fields, resulting in unclear physical meaning for each color component. Although the method focuses on novel view synthesis and geometry reconstruction regardless of the underlying physical components, this lack of physical meaning leads to ambiguity about the assumptions the authors have for the reconstructed scene.
For example, the model doesn't account for how reflections change with different surface roughness, raising the question of whether it will perform well on surfaces with spatially varying roughness. Additionally, the model does not explicitly trace the reflective ray but instead models the color of the reflective ray through an MLP. This raises concerns about its ability to handle near-field lighting conditions effectively.
Furthermore, I am curious about the differences between Ref-NeRF's equation $c = c_\text{diffuse} + c_\text{specular} \times \text{tint}$ and Eq. 8, $c = (1-w)\,c_\text{cam} + w\,c_\text{ref}$. Will these two modeling approaches yield similar performance?
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Title: It's unclear from the abstract and introduction where the "unifying" concept comes from. "Unifying scene representation" suggests combining several scene representations, but theoretically, it is still a NeuS-based representation.
2. Missing Citations:
a. MS-NeRF: Multi-space Neural Radiance Field: This paper also presents a similar idea of linearly combining different color fields.
b. Several Concurrent Works Regarding Reflective NeRF:
i. NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections
ii. SpecNeRF: Gaussian Directional Encoding for Specular Reflections
c. Reflection Handling in Gaussian Splatting:
i. MirrorGaussian: Reflecting 3D Gaussians for Reconstructing Mirror Reflections
ii. Spec-Gaussian: Anisotropic View-Dependent Appearance for 3D Gaussian Splatting
3. Illustration of Existing Works: In line 131, I wouldn't describe Ref-NeRF [43] as using a single radiance field, as it uses two MLPs to model diffuse color and specular color separately (though the specular MLP is conditioned on the feature of the diffuse MLP).
4. Hyperparameters in Final Loss: The final loss contains several hyperparameters for controlling the strength of each term. In my experience, these weights play an important role. It would be beneficial to conduct small ablations or illustrate how these weights were decided.
5. Input Redundancy: The f_ref and f_cam functions take in both the spatial coordinate x and the bottleneck feature b from the SDF MLP. Given that b is already a spatial feature vector, what's the point of adding x as one of the inputs? This seems a bit redundant.
6. Normals and Orientation Loss: There are two normals being produced: n is the normal from the SDF field, and n′ is the MLP-predicted normal. For Eq. 11, Ref-NeRF uses n′ for orientation loss. It's unclear if this is a typo or if n is actually used for the orientation loss.
7. Volumetric Rendering SDF: In line 124, it would be better to cite which volumetric rendering SDF system is used. I assume it's NeuS.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: There is no potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >**Explicitly tracing the reflective ray**
In this work, similar to many recent methods that are tailored to handle reflections [12,14,22,50], we follow Ref-NeRF and parameterize part of the view-dependent appearance as a function of the reflected view direction $\mathbf{\omega}\_r$. Explicitly tracing the reflective ray is an interesting direction for us, and it is shown to be effective in the concurrent work NeRF-Casting.
>**Ref-NeRF's model v.s. our model**
Our custom baseline “RefV” uses the same physics-based model as Ref-NeRF (L215), where we separate the color into diffuse and specular components. As shown in Table 3, our method outperforms RefV in both surface reconstruction and rendering on 3 datasets. Fig.5 shows that RefV reconstructs artifacts on the surfaces, while our method performs better. We also observe that when using the hash grid backbone, RefV may have optimization issues with separate diffuse and specular components, as visualized in Fig.13. We discuss this in L533-543.
>**Q1: Unifying neural representations**
What we mean is that we combine the two neural representations for the radiance field, i.e., camera view and reflected view radiance field. We will rephrase the term “scene representation” in the abstract.
>**Q2: Missing citations**
Thank you for pointing this out. We will add them in our paper. MS-NeRF focuses on view synthesis and combines $K \geq 2$ camera view radiance fields, where each camera view radiance field has a volume density field. It is unclear how to extract the surface since there are $K$ different volume density fields and iso-surfaces can be extracted from each of them. In contrast, we use a single SDF field to represent geometry and directly extract iso-surface from it.
>**Q3: Ref-NeRF as a single radiance field**
Thank you for pointing this out. We will rephrase.
>**Q4: Ablation study on loss weights**
We perform an ablation study on the eikonal loss weight $\lambda\_1$ on Ref-NeRF real dataset and summarize the results as follows. Setting $\lambda\_1=10^{-4}$ produces the best performance. If $\lambda\_1$ is too large (e.g., $\lambda\_1=10^{-2}$), we observe that the training becomes unstable and may fail. This is similar to our findings on BakedSDF (Sec. C).
As shown in [46,49], the normals of the SDF field, estimated as its gradients, are well-defined and more accurate than those computed from a volume density field. The recent BakedSDF [50] and Ref-NeuS [14] combined an SDF field with a reflected view radiance field to reconstruct reflective surfaces. They found that using the eikonal loss $\mathcal{L}\_{\text{eik}}$ for regularization is enough to get accurate normals, and thus it is not necessary to use the normal smoothness loss $\mathcal{L}\_p$ or the orientation loss $\mathcal{L}\_o$ as Ref-NeRF does. In our method, we still use $\mathcal{L}\_p$ and $\mathcal{L}\_o$ for additional regularization. However, since we already use the eikonal loss, we simply set $\lambda\_2, \lambda\_3$ to relatively small values and do not tune them much. We only slightly increase the weight $\lambda\_2$ of the normal smoothness loss $\mathcal{L}\_p$ on real data because of the potential noise (L481), similar to Ref-NeRF.
Methods | PSNR $\uparrow$ | SSIM$\uparrow$ | LPIPS$\downarrow$
--|--|--|--
$\lambda\_1=10^{-2}$ | 20.99 | 0.509 | 0.413
$\lambda\_1=10^{-3}$ | 23.24 | 0.611 | 0.295
$\lambda\_1=10^{-4}$ (Ours) | **23.70** | **0.636** | **0.265**
$\lambda\_1=10^{-5}$ | 23.62 | 0.633 | 0.268
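For readers unfamiliar with the eikonal regularizer being ablated here, a minimal sketch (our own illustration, using finite differences in place of autograd): the loss $\mathcal{L}\_{\text{eik}} = \mathbb{E}\big[(||\nabla f(\mathbf{x})|| - 1)^2\big]$ vanishes for a true signed distance function and penalizes fields whose gradient norm deviates from 1.

```python
import numpy as np

# Hedged sketch (not the paper's code): the eikonal regularizer penalizes
# deviation of the SDF gradient norm from 1, L_eik = E[(||grad f(x)|| - 1)^2].
rng = np.random.default_rng(1)
x = rng.standard_normal((1024, 3))           # random sample points

def grad_norm(f, x, eps=1e-4):
    """Finite-difference gradient norms of f at points x."""
    g = np.stack([(f(x + eps * np.eye(3)[i]) - f(x - eps * np.eye(3)[i]))
                  / (2 * eps) for i in range(3)], axis=-1)
    return np.linalg.norm(g, axis=-1)

def eikonal_loss(f, x):
    return np.mean((grad_norm(f, x) - 1.0) ** 2)

sdf_sphere = lambda p: np.linalg.norm(p, axis=-1) - 1.0   # exact SDF: loss ~ 0
scaled     = lambda p: 2.0 * sdf_sphere(p)                # gradient norm 2: loss ~ 1
print(eikonal_loss(sdf_sphere, x), eikonal_loss(scaled, x))
```

The exact unit-sphere SDF gives a near-zero loss, while the rescaled field is penalized, which is why the weight $\lambda\_1$ controls how strongly the learned field is pushed toward a valid SDF.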
>**Q5: Input redundancy**
It is correct that $\mathbf{b}$ is dependent on $\mathbf{x}$. The reason that we use both $\mathbf{b}$ and $\mathbf{x}$ is to align with the state-of-the-art implicit reconstruction pipelines such as VolSDF [49] and NeuS [46], which use both $\mathbf{b}$ and $\mathbf{x}$ to compute radiance.
>**Q6: Normals and orientation loss**
Since the normals estimated from the volume density gradient are often extremely noisy, Ref-NeRF uses the predicted normal $n’$ throughout its pipeline, e.g., for computing the reflected view direction $\mathbf{\omega}\_r$ and for the orientation loss. As shown in [46,49], the normals of the SDF field, estimated as its gradients, are well-defined and much smoother than those computed from a volume density field. Compared with the noisy normals of Ref-NeRF's volume density field, we use an SDF field as our geometry representation, so the normals are already smoother and more accurate. We therefore use $n$ in our pipeline (also for the orientation loss) and only use the predicted normal $n’$ for regularization in Eq.10. Note that BakedSDF [50] and Ref-NeuS [14] also use the SDF normal $n$ for their reflected view radiance fields.
>**Q7: Volumetric rendering the SDF**
Sorry, we missed the reference here. We use VolSDF [49] as our representation (L199). | Summary: The paper tackles the problem of 3D reconstruction in the presence of highly reflective objects. To address the problem, they propose to learn an SDF-based neural representation. Different from prior work, they use two radiance branches in their representation, one conditioned on the camera viewing direction reflected about the normal and one conditioned on the regular viewing direction. The outputs of the two networks are combined using a learned weight to arrive at the final rendered radiance.
The authors argue that while the reflected direction is better at capturing the details in reflective surfaces, the viewing direction is more robust to real data.
Through extensive experiments, they show their method is competitive with prior art in both the tasks of NVS and mesh recovery (as measured by Chamfer Distance).
Strengths: I like the problem being attempted in the paper, I also like the insight that the viewing direction conditioning is more robust, while the reflected direction produces better geometry for reflective surfaces in controlled environments. The results are also impressive.
Quality: The ablation studies are well/extensively done.
Clarity: The paper is very well written and easy to follow. The figures are all well understandable and the work should be easily reproducible from just the description.
Weaknesses: Significance: I think the method is generally quite a straightforward extension of existing work and the paper's contributions are fairly limited. As I see it, both radiance branches have previously existed, and while the Ref-NeRF parametrization could be considered more “physically based”, the authors forgo a more mathematically solid setup in favour of an increase in performance.
I think the paper would benefit a bit from providing more insight into the reasons for some of the phenomena, i.e., why the weights separate like that for specular/diffuse regions even though both branches are conditioned on some parametrization of the viewing direction, etc.
I am sympathetic however to the authors that some of these phenomena just occur and might be hard to reason about, but some of the simulated experiments could provide more insight into how the optimization behaves.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1) Is there a stopgrad on any components of the normal smoothness loss (predicted vs computed normal in Eq 10)?
2) I see there is an ablation on this, but why do the authors think the weight to mix the outputs of the two radiance branches is necessary, why doesn’t this factor just get automatically learned by the two networks?
3) The authors seem to have run ENVIDR on the Ref-NeRF real dataset (as evidenced by Fig 1?), but do not include the results in Table 2? Why?
4) Have the authors tried passing in $n^\prime$ instead of $n$ to the radiance network or also calculating $w_r$ about it? If I understand correctly $n^\prime$ is supposed to be “smoother”, perhaps this would make results more robust for just RefV?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: Limitations have been adequately discussed in the supplement.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >**Why learned weights separate specular/diffuse regions without explicit supervision**
In Ref-NeRF, the ablation study shows that when rendering reflective surfaces, using the reflected view direction as the MLP’s input is clearly better than using the camera view direction of NeRF. In our method, the main difference between the two radiance fields is the directional input: the reflected view radiance field uses the reflected view direction, as in Ref-NeRF, while the camera view radiance field uses the camera view direction. Similar to the findings in Ref-NeRF, our reflected view radiance field can model the appearance of reflective regions better than the camera view radiance field. Thus, for these reflective regions, the weight $\mathbf{W}$, which weights the rendered color $\mathbf{C}\_{ref}$ of the reflected view radiance field (Eq.8), increases during optimization to reduce the composed color loss $\mathcal{L}\_{\text{color}}$. Conversely, when the color computed from the camera view radiance field is better, the weight $\mathbf{W}$ decreases.
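This mechanism can be demonstrated with a toy one-pixel example (our own illustration, not the paper's model): blending two fixed color predictors with a sigmoid-parameterized weight and minimizing the squared color loss drives the weight toward whichever branch fits the ground truth better.

```python
import numpy as np

# Toy illustration of the mechanism described above (not the paper's model):
# the blended color is C = (1 - w) * C_cam + w * C_ref (Eq. 8), with
# w = sigmoid(theta) learned by minimizing the squared color loss. On a
# "reflective" pixel the reflected-view branch fits better, so w -> 1;
# on a "diffuse" pixel the camera-view branch fits better, so w -> 0.
def learn_weight(c_gt, c_cam, c_ref, steps=2000, lr=1.0):
    theta = 0.0                               # w starts at 0.5
    for _ in range(steps):
        w = 1.0 / (1.0 + np.exp(-theta))
        c = (1 - w) * c_cam + w * c_ref
        # d/dtheta of (c - c_gt)^2 via the chain rule
        grad = 2 * (c - c_gt) * (c_ref - c_cam) * w * (1 - w)
        theta -= lr * grad
    return 1.0 / (1.0 + np.exp(-theta))

w_refl = learn_weight(c_gt=0.9, c_cam=0.3, c_ref=0.88)   # C_ref fits better
w_diff = learn_weight(c_gt=0.2, c_cam=0.21, c_ref=0.7)   # C_cam fits better
print(w_refl, w_diff)
```

No supervision on $w$ is needed: the color loss alone separates the two regimes, which mirrors why the weight field ends up highlighting reflective regions.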
>**Q1: Stopgrad on any components of the normal smoothness loss**
As in Ref-NeRF, we do not use stopgrad on any components.
>**Q2: Weight field to mix the outputs of the two radiance branches**
Thank you for the suggestion. For this rebuttal, we evaluate our method while removing the weight field. The final color is composed as $\mathbf{C} = \mathbf{C}\_{ref} + \mathbf{C}\_{cam}$. We evaluate on the Ref-NeRF real dataset and summarize the results as follows. We observe that our method performs better than this ablation.
As shown in Fig.3, the learned weight field has the advantage of better interpretability, as it can be used to detect highly reflective regions from different viewpoints via volume rendering. This is a challenging task because reflectivity is a surface attribute that only some objects or parts of objects have, so it is difficult to detect such regions using existing semantic models. For example, we tried state-of-the-art open-vocabulary semantic segmentation methods with “reflective objects” or “reflective surfaces” as prompts, and they were unable to segment the reflective objects in the image.
Methods | PSNR $\uparrow$ | SSIM$\uparrow$ | LPIPS$\downarrow$
--|--|--|--
w./o. weight field | 23.39 | 0.625 | 0.279
ours | **23.70** | **0.636** | **0.265**
>**Q3: Results of ENVIDR**
The metrics and rendered images of ENVIDR are the official results provided by the authors of ENVIDR. On the Ref-NeRF real dataset, ENVIDR is evaluated on “gardenspheres” only. In the limitation section of the ENVIDR paper, it is said that ENVIDR is unable to handle unbounded scenes, while the “sedan” scene of Ref-NeRF real dataset is unbounded.
>**Q4: Use $n’$ instead of $n$**
As shown in [46,49], the normals of the SDF field, estimated as its gradients, are well-defined and much smoother than those computed from the volume density field of NeRF. Compared with the noisy normals of Ref-NeRF's volume density field, we use an SDF field as our geometry representation, so the normals are already smoother and more accurate. Therefore we only use $n’$ for regularization in Eq.10. Note that BakedSDF [50] and Ref-NeuS [14] also use the SDF normal $n$ for their reflected view radiance fields.
---
Rebuttal 2:
Comment: Thank you very much for the response to my questions. I especially appreciate the response in the global rebuttal.
I'm not fully satisfied with the answer to the question about specular/diffuse separation--I feel like the authors are just reiterating what's happening, as opposed to providing any reasoning.
I also think since reviewer 5Tqb and I both had questions about this, it would be useful to run a quick (even on a smaller subset of scenes) the effect of using different normals in the reflected view direction and/or loss formulations for the camera ready version.
Nonetheless, I think the experimental results, alongside some of the arguments laid out in the global rebuttal, such as the comment of Ref-NeRF not being a complete physical model (not explicitly handling interreflections, etc), make a stronger case for the paper. I am happy to increase my score.
---
Rebuttal 3:
Comment: Thank you for the comments. We will add an ablation study about the normal that we use, i.e. SDF normal $n$ or predicted normal $n’$, for the reflected view direction and loss formulations in the paper. | Summary: The paper proposes a method to reconstruct scenes containing both reflective surfaces and non-reflective surfaces with high fidelity. Specifically, it trains a camera view radiance field and a reflected view radiance field separately, combining them by a learnable weight. The method is evaluated on four datasets, which covers different situations including objects and 360 scenes, shiny objects, real-world captures. And the method outperforms or is on par with SOTA on all these datasets, demonstrating its effectiveness.
Strengths: 1. Extensive experiments. The method was compared to several SOTA methods and outperforms or is on par with them, showing its ability to reconstruct complicated scenes with reflective surfaces on four datasets.
2. The paper is well written. And the figures clearly serve for their purposes.
3. The author did a great job in analyzing the limitations and social impact, which is highly encouraged.
Weaknesses: 1. The method involves two radiance fields and one weight field, and it has to perform volume rendering for colors from both radiance fields and the weights, which is very time-consuming even with an iNGP implementation.
2. I wonder if you have a possible explanation for why some reflected view radiance field methods are not robust to real-world scenes.
3. I noticed the comparison with Factored-NeuS is only performed on DTU. Since Factored-NeuS is designed to also handle glossy objects, I wonder how it compares to this paper on the Shiny Blender and Ref-NeRF real datasets.
4. As the authors mention in the section of limitations, this method requires posed images and SfM can fail on reflective surfaces. While this is a common issue, I still wonder how robust this method is to inaccurate camera pose. Even a failure case is fine.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors listed a quite complete list of limitations. I think some are common issues for many works like requiring camera pose and dense views. With that being said, in the future the authors can consider extending this work to sparse views and how to eliminate the requirement for several fields and volume rendering, making the algorithm more efficient.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >**W1: Time consuming**
Our method has three fields and is based on volume rendering. Thus our rendering efficiency is not high. Recently, 3D Gaussian Splatting has become popular in view synthesis and there are some concurrent works focusing on reconstructing the surface [1*] or rendering reflections [2*]. Therefore, we think extending our pipeline with 3D Gaussian Splatting is a potential future direction to improve efficiency. Thank you for your suggestion to explore eliminating the multiple fields, this is an interesting direction for future works.
>**W2: Explanation for why some reflected view radiance field methods are not robust to real-world scene**
Disambiguating the influence of geometry, color and reflection is an ill-posed problem in image-based 3D reconstruction. Methods such as Ref-NeRF assume that there are no inter-reflections or non-distant illumination, which is not often the case in real world data. Please see the global rebuttal for more details.
For the reflected view radiance field, the view-dependent appearance mainly depends on $\mathbf{\omega}_r$, the reflected view direction around the normal $\mathbf{n}$. However, the geometry and surface normal $\mathbf{n}$ are unknown in the beginning and need to be optimized during training. Therefore, the directional input $\mathbf{\omega}_r$ of view-dependent appearance keeps changing during training. We conjecture that this makes the optimization process more complex and ill-posed.
>**W3: Results of Factored-NeuS**
The metrics of Factored-NeuS on DTU are the official results from their paper (the public ICLR 2024 submission). We just found that Factored-NeuS was open-sourced recently. Thus we evaluate Factored-NeuS on ShinyBlender dataset and summarize the results as follows. Our method performs better than Factored-NeuS in both rendering and surface accuracy. As shown in Fig.2 of the rebuttal PDF, Factored-NeuS reconstructs explicit artifacts on the “helmet” scene, while our method reconstructs the surface more accurately.
Methods | PSNR $\uparrow$ | SSIM$\uparrow$ | LPIPS$\downarrow$ | MAE $\downarrow$ | Acc $\downarrow$
--|--|--|--|--|--
Factored-NeuS | 30.89 | 0.954 | 0.076 | 5.31 | 1.90
Ours | **36.82** | **0.976** | **0.043** | **4.76** | **1.06**
>**W4: Results with inaccurate camera pose**
Thank you for the suggestion. In this rebuttal, we evaluate some scenes of the Ref-NeRF real dataset with uniform noise added to the camera poses. Specifically, we first add random translation noise (uniform in [-0.01, 0.01]) along each axis of each camera pose. Note that before adding noise, the scene is already normalized to a unit cube following Mip-NeRF360. Second, we add random rotation noise to the camera poses: for each camera pose, we randomly sample a 3D unit vector and rotate the pose around it by a uniform angle in [-1, 1] degrees. As shown in Fig.3 of the rebuttal PDF, both the rendering and the surface become worse. Integrating existing methods for pose optimization during training could be an interesting future direction.
[1*] Huang et al. 2D Gaussian Splatting for Geometrically Accurate Radiance Fields. SIGGRAPH, 2024
[2*] Jiang et al. GaussianShader: 3D Gaussian Splatting with Shading Functions for Reflective Surfaces. CVPR 2024
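The pose-perturbation protocol described in W4 can be sketched as follows (our own illustration, not the rebuttal's code): uniform translation noise in [-0.01, 0.01] per axis, plus a rotation by a uniform angle in [-1, 1] degrees about a random unit axis via Rodrigues' formula.

```python
import numpy as np

# Hedged sketch of the pose-perturbation protocol described in W4. A camera
# pose is a rigid transform (R, t); we add uniform translation noise in
# [-0.01, 0.01] per axis and rotate by a uniform angle in [-1, 1] degrees
# about a random unit axis (Rodrigues' rotation formula).
rng = np.random.default_rng(0)

def rodrigues(axis, angle):
    """Rotation matrix for `angle` radians about unit vector `axis`."""
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def perturb_pose(R, t):
    t_noisy = t + rng.uniform(-0.01, 0.01, size=3)
    axis = rng.standard_normal(3)
    axis /= np.linalg.norm(axis)
    angle = np.deg2rad(rng.uniform(-1.0, 1.0))
    R_noisy = rodrigues(axis, angle) @ R
    return R_noisy, t_noisy

R, t = np.eye(3), np.zeros(3)
R_noisy, t_noisy = perturb_pose(R, t)
# R_noisy stays a valid rotation: orthonormal with determinant +1
print(np.linalg.det(R_noisy), np.abs(t_noisy).max())
```

Applying the rotation noise as a left-multiplication keeps the perturbed matrix a valid rotation, so only the pose, not the camera model, is corrupted.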
---
Rebuttal Comment 1.1:
Comment: I appreciate the author's efforts on answering my questions especially providing additional results in such a short time. My concerns / questions are solved. I'll keep my positive rating. | Summary: This paper proposes a new strategy for modelling view-dependent effects in Neural Radiance Field-based scene models. Existing approaches have used networks conditioned on camera view directions, as well as on reflected view directions using surface normals, but this work proposes and validates the idea that both model types should be used simultaneously. This is achieved by employing a learned weight model that mixes the two color predictions, thereby allowing regions where the color is not well explained by the reflection-based model to be handled by the view direction-based model. This is shown through ablations to be a better strategy than either strategy individually, and is also compared to a number of baseline methods which employ reflection-based modelling of view-dependent effects.
Strengths: This paper presents a simple, yet effective idea that would be easy to incorporate into other works which use reflection-based models. I think a lot of the contribution is in showing such a simple but surprising result which is not obvious to try.
The quality and clarity of the writeup is high, and the evaluation seems quite thorough in comparing to other relevant works and on relevant datasets. I think the value of the proposed strategy is shown quite clearly by the experiments.
Weaknesses: The only notable weakness is that the proposed strategy is a fairly minor deviation from previously proposed methods. However, I think this is largely mitigated by the non-obvious nature of the change and the in-depth experimental evaluation.
Technical Quality: 4
Clarity: 4
Questions for Authors: Given the increasing prevalence of methods like 3DGS which do not use neural networks for modelling view-dependence but rather simpler approaches like spherical harmonics, do you see a way of applying the ideas of this paper to such methods?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The only limitation I see is the reduced interpretability/editability of the model compared to other approaches, but this is addressed by the authors. I see no issues with societal impact beyond generic concerns that are relevant to all scene modelling methods.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >**Applying the idea in 3D Gaussian Splatting**
Thank you for your interesting suggestion. Recently, 3DGS techniques are advancing rapidly and there are some concurrent works trying to reconstruct the surface [1*] or render reflections [2*]. GaussianShader [2*] introduces shading attributes for each 3D Gaussian, such as diffuse, tint, roughness and normal, to model reflections. Its rendering is similar to Ref-NeRF. Therefore, we think it is possible to apply our idea to 3DGS. We can combine the original 3DGS and GaussianShader representation with an optimizable “weight” attribute, similar to the weight field in our method, that is assigned to each 3D Gaussian point. We consider this as a potential future work.
>**Reduced editability**
We discuss this limitation in the paper (Sec. G). Though our method does not support editability, with the high-quality mesh extracted from our method, it is possible to perform editing tasks such as relighting. For example, recent single image relighting methods [3*] use diffusion models for relighting and adopt “radiance cues” -- renderings of the object’s geometry with various roughness levels under the target environment illumination -- as conditioning. The mesh extracted from our method can be used to render such radiance cues.
[1*] Huang et al. 2D Gaussian Splatting for Geometrically Accurate Radiance Fields. SIGGRAPH, 2024
[2*] Jiang et al. GaussianShader: 3D Gaussian Splatting with Shading Functions for Reflective Surfaces. CVPR 2024
[3*] Zeng et al. DiLightNet: Fine-grained Lighting Control for Diffusion-based Image Generation. SIGGRAPH, 2024 | Rebuttal 1:
Rebuttal: We thank the reviewers for their valuable feedback. In this global rebuttal, we address the common questions raised by the reviewers as follows:
> **Physical interpretation**
In this work, we mainly focus on robust surface reconstruction of real-world complex scenes containing both reflective and non-reflective surfaces. Current NeRF methods designed to handle reflections are mainly evaluated on synthetic datasets or scenes captured in well-controlled environments. Extending them to more general scenes is challenging but important.
**Intuition on the need of a dual representation.**
To render reflective surfaces, Ref-NeRF [43] separates the color into physical components such as diffuse and specular. Through extensive experiments, we find that this representation is not robust in complex real-world scenarios. As noted as a limitation in the Ref-NeRF paper, the representation does not “explicitly model inter-reflections or non-distant illumination”. Therefore, complex reflections found in real-world scenes will not be handled by similar methods. As discussed in L157-169, we experimentally observe complementary advantages and limitations when using existing camera-view and reflected-view radiance fields separately. Thus, we exploit the advantages of both radiance fields by combining them with a learnable weight.
**Ambiguous interpretation of reflections as geometry.**
Our custom baseline "RefV" is a physically-based model, where we follow Ref-NeRF and separate the color into diffuse and specular components. We observe that accurately separating diffuse and specular components using RefV alone is a difficult task, as shown in Fig.13. In particular, view-dependent effects can be represented either as geometry artifacts (holes and bumps) or as specularity. We discuss this problem in L533-543.
**Ambiguous interpretation of reflections as diffuse colors.**
Moreover, diffuse reflected components can be represented as both diffuse color and specular reflections due to their low frequency. We observe that separating reflections and color is more challenging when the distant reflected world is less detailed. For example, the gardensphere scene is easier to reconstruct with RefV alone, as the background (trees, buildings) is clearly reflected. On the other hand, the sedan scene is more challenging, as the reflections of the background on the car are more blurry. This observation coincides with the limitations listed in the Ref-NeRF paper.
**Ambiguous interpretation of diffuse colors as reflections.**
We find that even if the color decomposition like Ref-NeRF does not fail, the color components may not be accurate enough. For example, BakedSDF [50] follows Ref-NeRF and uses diffuse and specular color components. However, we find that BakedSDF does not accurately separate these two components. As shown in Fig.1 of the rebuttal PDF, the specular component wrongly represents the diffuse colors of the objects that have almost no reflections, such as leaves and grass.
Due to the many limitations of physics-based models in real-world scenes, we propose to extend such methods by unifying a physics-based radiance field with an additional camera-view radiance field, leading to more robust reconstructions, as highlighted by the reviewers.
Pdf: /pdf/65fa255fc89319bbd99091834ee9c135d5d5a584.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Optimal Algorithms for Online Convex Optimization with Adversarial Constraints | Accept (spotlight) | Summary: In this work, the authors study online convex optimization with adversarial and time-varying constraints, and aim to bound both the regret and the cumulative constraint violation (CCV). Compared with previous studies, the main contributions of this work can be summarized as: 1) a new algorithm with $O(\sqrt{T})$ regret and $O(\sqrt{T}\log T)$ CCV for convex functions; 2) a lower bound of $\Omega(\sqrt{T})$ on the regret and CCV for convex functions; 3) a new algorithm with $O(\log T)$ regret and $O(\sqrt{T\log T})$ CCV for strongly convex functions (or $O(\log T)$ CCV for strongly convex functions with *non-negative* regret); 4) an extension to the online constraint satisfaction problem.
Strengths: Compared with previous studies, this paper has the following strengths.
1) For convex functions, an improved algorithm is proposed to achieve the $O(\sqrt{T}\log T)$ regret and CCV bounds. By contrast, previous studies can only achieve the $O(T^{3/4})$ CCV bound when keeping the $O(\sqrt{T})$ regret bound.
2) A matching lower bound of $\Omega(\sqrt{T})$ is established for the regret and CCV when the functions are convex, which reveals the optimality of the proposed algorithm.
3) For strongly convex functions, this paper proposes an algorithm with $O(\log T)$ regret and $O(\sqrt{T\log T})$ CCV. Although previous studies have achieved these regret and CCV bounds, they need to invoke Conv-OPT in every round. By contrast, the proposed algorithm only needs to perform one projection per round. Moreover, under the further assumption that the regret is *non-negative*, the CCV can be reduced to $O(\log T)$.
Weaknesses: Although this paper provides some new results for online convex optimization with adversarial and time-varying constraints, I have some concerns.
1) The lower bound for convex functions only holds for the case with $d=T$ or $d>T$. However, for online problems, it may be common to consider a very large $T\gg d$, which implies that the optimality of the proposed algorithm for convex functions may only hold in limited cases.
2) Unlike the case of convex functions, the authors do not provide a lower bound for strongly convex functions, so it is not clear whether the $O(\log T)$ regret and $O(\sqrt{T\log T})$ CCV can still be improved.
3) It seems to be trivial to derive the $O(\log T)$ regret and $O(\log T)$ CCV for strongly convex functions under the *non-negative* regret assumption. I cannot find any challenge in the analysis, and the existing algorithms may also enjoy the same result.
4) Although it may be valuable to develop an extension to the online constraint satisfaction problem, the derived CCV bounds depend on some problem-dependent variables, and thus could be even linear in $T$.
Technical Quality: 3
Clarity: 2
Questions for Authors: Besides the concerns discussed above, I also have the following two questions.
1) Is it possible to derive the $O(\log T)$ regret and $O(\log T)$ CCV for strongly convex functions without the *non-negative* regret?
2) Can the authors provide more discussions on the CCV bounds established in Theorems 4 and 5?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors have provided some discussions on the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: $\textbf{On the lower bound:}$
$\textbf{Comment:}$ The lower bound for convex functions only holds for the case with $d=T$ or $d>T$. However, for online problems, it may be common to consider a very large $T\gg d$, which implies that the optimality of the proposed algorithm for convex functions may only hold in limited cases.
$\textbf{ Reply:}$
Our lower bound constructs an input instance for which no online policy can achieve better $\it{simultaneous}$ regret and CCV; the construction happens to require $d\ge T$. As far as we know, this is the first such lower bound, even though the literature on COCO is extensive. Since the simultaneous lower and upper bounds match, optimality is established in the strictest sense.
For the general case when $d<T$, from prior work on unconstrained OCO (special case of COCO without constraints), the regret is $\Omega(\sqrt{T})$ anyway. Thus, the only possibility, as correctly pointed out by the reviewer, is that the CCV can be lower when $d<T$, but finding a simultaneous lower bound on regret and CCV in this setting appears challenging.
Furthermore, in the practically relevant setting when the cost and constraint functions are given by deep neural networks (see, $\it{e.g.,}$ the application to the anomaly detection problem described above), typically, the dimension of the parameters $(d)$ is an order of magnitude larger than the number of data points $(T)$. This implies that the regime $d\geq T$ is practically relevant as well.
$\textbf{Strongly Convex Case}:$ It turns out that deriving a matching lower bound for the strongly convex case (which has much better regret and CCV guarantee) is much more challenging.
Even without constraints, the progression of results for OCO with strongly convex loss functions followed a similar trend, where the $O(\log(T))$ regret of OGD was established much earlier than the $\Omega(\log(T))$ lower bound. Finding a tight lower bound for COCO in the strongly convex case is an important problem that we are actively working on.
$\textbf{Bounds under the non-negative regret assumption:}$
$\textbf{Comment:}$
``It seems to be trivial to derive the $O(\log T)$ regret and $O(\log T)$ CCV for strongly convex functions under the non-negative regret assumption. I cannot find any challenge in the analysis, and the existing algorithms may also enjoy the same result."
$\textbf{Reply:}$
To clarify, under the non-negative regret assumption, only the cost functions $f_t$ are strongly convex, while the constraint functions $g_t$ are just convex. Since the $f_t$'s are strongly convex, a regret of $O(\log{T})$ is expected; the surprising part is that the CCV is also $O(\log{T})$ even though the constraints are only convex.
Essentially, the assumption that the regret is non-negative is the main reason for obtaining $O(\log{T})$ CCV instead of the expected $O(\sqrt{T})$ even though the constraints are only convex, which points to a non-trivial interplay between the strongly convex cost and convex constraint functions through the surrogate loss given by Eqn (5). To the best of our knowledge, this is the first such result in the constrained OCO literature. The simple proof attests to the power of the regret decomposition inequality (Eqn (6)), which we introduce in this paper for the first time. It is not clear whether previous algorithms for COCO also enjoy this property, as they use a much more involved primal-dual-based analysis that does not give direct control over the regret and CCV as Eqn (6) does. That being said, we would highly appreciate it if the reviewer could point us to any reference with a similar result or a simpler proof in the constrained OCO literature.
$\textbf{Discussion on the bounds in Theorem 4 and 5:}$
$\textbf{Comment:}$ ``Although it may be valuable to develop an extension to the online constraint satisfaction problem, the derived CCV bounds depend on some problem-dependent variables, and thus could be even linear in $T$."
$\textbf{Comment:}$ ``Can the authors provide more discussions on the CCV bounds established in Theorems 4 and 5?"
$\textbf{Reply:}$
Similar to the dynamic regret bounds in the OCO literature, which involve path length, the bounds in Theorem 4 and 5 are problem-specific bounds. It is impossible to obtain sublinear violation bounds in the OCS problem without any restriction on the problem instance. This is because if the constraints are such that their feasible sets change non-trivially on every round, then one needs to control the dynamic regret for the time-varying benchmark, which is known to depend on problem-specific parameters such as path lengths. In this problem, the parameters $S$ and $P_T$ capture the extent of time-variation of the constraints.
Theorems 4 and 5 show that when the parameters $S$ and $P_T$ are sublinear, our algorithm yields sublinear CCV without making the stringent feasibility assumption (Assumption 3).
$\textbf{Comment:}$ Is it possible to derive the $O(\log T)$ regret and $O(\log T)$ CCV for strongly convex functions without the non-negative regret?
$\textbf{Reply:}$ It seems highly unlikely that an $O(\log T)$ CCV bound can be obtained without extra assumptions, since the constraint functions are only convex.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response. I am a bit confused about the experimental results newly provided by the authors. There do exist some algorithms for the studied problem, but the authors do not include any baselines in their experiments.
---
Reply to Comment 1.1.1:
Title: On the experiments
Comment: As mentioned in the rebuttal, the performance metric for these demonstrating experiments is the area under the ROC curve (AUC-ROC), which comes out to be $0.92$ for our algorithm. Since the maximum AUC-ROC possible for any algorithm is $1.0$, our algorithm has near-optimal performance. The reviewer is correct in saying there are other algorithms for this problem; however, because of the limited time during the rebuttal, we could not include a comparison with existing baseline algorithms, which we will do in the final version of the paper. | Summary: This paper considers online convex optimization with constraints. The authors propose a new surrogate loss function; by applying traditional online learning algorithms to the surrogate loss, $\mathcal{O}(\sqrt{T})$ regret and $\mathcal{O}(\sqrt{T \log T})$ CCV can be obtained in the online learning setting. The result improves the state of the art in this domain.
Strengths: This paper proposes an interesting construction of surrogate loss function and transforms the regret analysis of constrained online learning into the analysis of a standard online convex optimization problem. The choice of the regularization function $\Phi$ is interesting and novel to me.
Weaknesses: 1. Compared with Section 2, the results in Section 3 look less interesting and are not the focus of the paper. I suggest the authors conduct some demonstrating experiments and reduce the content of Section 3.
Technical Quality: 3
Clarity: 3
Questions for Authors: **Minor issues**
1. Line 17
modelling => modeling
2. Line 198
Please ensure that $x^\star$ and $x^\ast$ are consistent
3. Line 228
0,) => 0),
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Please add a section on limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: $\textbf{On the relevance of the OCS problem}$
$\textbf{Comment:}$ ``Compared with Section 2, ... reduce the contents in Section 3."
$\textbf{Reply:}$
The OCS problem considered in Section 3 is interesting from both technical and practical points of view for the following reasons. On the technical side, it gives non-trivial sublinear bounds for constraint violation with a relaxed feasibility assumption, which is much weaker than Assumption 3, used in Section 2. On the practical side, in Appendix A.3, we show that the multi-task learning problem can be formulated in the OCS framework. Reviewer pcig also shares this view and says ``I further find the OCS problem interesting, and the results towards this quite powerful and elegant."
$\textbf{Adding Experiments:}$
In the revision, we plan to add experiments for the following important learning problem.
$\textbf{Online anomaly detection:}$
We formulate the online version of the anomaly detection problem (described below) in the COCO framework. Our gradient-based algorithm is especially suitable for training neural network models as it only needs to compute the gradients. The online operation is indispensable in the credit card fraud detection setting, where the algorithm needs to continuously learn in a dynamic environment.
$\textbf{Setup:}$
We receive a sequence of $d$-dimensional feature vectors $\\{z_t\\}, t \geq 1$ and the corresponding binary labels $\\{y_t\\}, t \geq 1$. The algorithm predicts the label $\hat{y}_t$ for a given data point $z_t$ before its true label $y_t$ is revealed. Typically, legitimate transactions (denoted by class ‘0’) outnumber fraudulent transactions (denoted by class ‘1’) by orders of magnitude. Since we want to detect every fraudulent transaction in this problem, maximizing the classification accuracy alone is insufficient.
$\textbf{Formulation:}$ Let $\hat{y}_t(z_t)$ be the probability for class $1$ predicted by some parameterized model for the feature vector $z_t$. Hence, the log-likelihood $L(t)$ of the data on round $t$ can be expressed as
\begin{eqnarray}
L(t) = \begin{cases}
\log(\hat{y}_t), \textrm{ if } y_t = 1, \\\\
\log(1 - \hat{y}_t), \textrm{ if } y_t = 0
\end{cases}
\end{eqnarray}
Consider the following optimization problem for training the model:
$$ \max \sum_{t=1}^T (1-y_t)\log(1-\hat{y}_t)$$
s.t.
$$ \sum_{t=1}^T y_t \log(\hat{y}_t) \geq 0,$$
where the maximization is done over the parameters of the model. In other words, we want to maximize the log-likelihood for the legitimate transactions subject to the constraint that all fraudulent transactions have a likelihood close to $1$ (i.e. log-likelihood values close to zero).
The above problem can be straightforwardly formulated as an instance of the COCO problem by defining the cost and constraint functions as follows:
$f_t(x) \equiv -(1-y_t) \log(1-\hat{y}_t(z_t,x)), \qquad g_t(x)\equiv -y_t \log(\hat{y}_t(z_t,x)),$
where $x$ denotes the parameters of the model.
In our experiments, we consider the common scenario where the prediction probability is modeled by the output of a feedforward neural network. Note that the feasibility assumption (Assumption 3) is naturally satisfied as the overparameterized models often perfectly fit the data.
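For illustration, the per-round cost and constraint functions $f_t$ and $g_t$ defined above can be written down directly. This is a minimal sketch in which a logistic model stands in for the feedforward network; all names are ours, not the paper's code:

```python
import numpy as np

def predict(x, z):
    """Predicted probability of class 1 (fraud) under a logistic stand-in model."""
    p = 1.0 / (1.0 + np.exp(-np.dot(x, z)))
    return np.clip(p, 1e-12, 1.0 - 1e-12)  # guard against log(0)

def cost(x, z, y):
    """f_t(x) = -(1 - y_t) log(1 - yhat_t): log-loss on legitimate transactions."""
    return -(1.0 - y) * np.log(1.0 - predict(x, z))

def constraint(x, z, y):
    """g_t(x) = -y_t log(yhat_t): positive whenever a fraud's likelihood is below 1."""
    return -y * np.log(predict(x, z))
```

Each round contributes either a cost term (for $y_t = 0$) or a constraint term (for $y_t = 1$), matching the formulation above.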
$\textbf{Experiments:}$
We experiment with a publicly available credit card fraud detection dataset [1]. It is a highly imbalanced dataset containing only $492$ frauds ($\sim 0.17\%$) out of a total of $284,807$ reported transactions, which makes it a challenging benchmark.
$\textbf{Dataset and Network Architecture:}$ Each data point has $D_{\textrm{in}}=30$ features and binary labels. As a proof of concept, we choose a simple two-layer architecture with $H=10$ hidden nodes and sigmoid non-linearity. All network weights are independently initialized according to a standard Gaussian distribution. The model is then trained in an online fashion using Algorithm 1.
$\textbf{Results:}$
Due to the high class imbalance, instead of accuracy, the area under the ROC curve, which plots the True Positive Rate (TPR) against the False Positive Rate (FPR), is an appropriate metric to evaluate the performance of any prediction algorithm for this problem. By varying the hyperparameter $\lambda$, we obtain the ROC curve shown in Figure 1 of the attached pdf. The area under the ROC curve is computed to be $\approx 0.92$, which is an excellent score (cf. ideal score $=1.0$), notwithstanding the fact that the algorithm learns in an entirely online fashion starting from random initialization. Figure 2 shows the expected sublinear variation of CCV during one of the runs of the algorithm.
$\textbf{References}$
[1] Dal Pozzolo, Andrea, Olivier Caelen, Yann-Ael Le Borgne, Serge Waterschoot, and Gianluca Bontempi. ``Learned lessons in credit card fraud detection from a practitioner perspective." Expert systems with applications 41, no. 10 (2014): 4915-4928.
$\textbf{Typos:}$ We will fix them in the final version.
$\textbf{Comment:}$ ``Please add a section on limitations."
$\textbf{Reply:}$
$\textbf{Limitations and Future works:}$
$\textbf{1. Relaxed feasibility assumption:}$ Assumption 1, which requires the existence of a single action which satisfies the constraints on all rounds, is non-trivial to ensure. Our results in Section 3 show that it is possible to relax this assumption considerably for the OCS problem. An interesting open problem is to prove similar bounds for the COCO problem under a relaxed feasibility assumption.
$\textbf{2. Bandit Feedback:}$ It will be interesting to consider the problem under a weaker bandit feedback model, where the algorithm can only query the values of the cost and constraint functions at points of its choice.
$\textbf{3. Practical Evaluation:}$ It will be interesting to run large-scale experiments and compare our policy with the existing ones in the literature on the standard benchmark problems.
---
Rebuttal Comment 1.1:
Title: Thank you for your response
Comment: Thank you for the response. I do not have additional concerns and maintain my evaluation of the paper.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the feedback. | Summary: The authors study Online Convex Optimization (OCO) under adversarial constraints of the form $g_{t,i}(x) \leq 0$ where $g_{t,i}$ are arbitrary convex and Lipschitz. The authors design a simple algorithm based on Lyapunov optimization which obtains an optimal regret rate and simultaneously optimal constraint violation (CCV) of $\widetilde O(\sqrt{T})$, and prove a matching lower bound. Their algorithm is based on performing projected OGD updates on surrogate loss functions which are a combination of the objective functions and the constraint functions, with coefficients that depend on a predetermined potential function. For strongly convex losses, the authors establish an improved logarithmic regret bound while maintaining the $\widetilde O(\sqrt{T})$ constraint violation.
Strengths: * The authors provide the first provably optimal algorithm in the general COCO setting for convex and Lipschitz loss functions and constraints.
* The algorithm is strikingly simple and the its analysis is not too complicated given standard OCO regret bound from the literature.
* The paper is nicely written and the authors provide an overview of the analysis which helps understand the technical contributions.
* A single algorithmic framework obtains the optimal regret bounds for both convex & Lipschitz and strongly convex settings, with changes made only in choosing the hyperparameters of the algorithm.
Weaknesses: * The authors do not formally define the interaction protocol in the online setting studied in this paper. Specifically, from lines 27-28 is is implied that the adversary chooses the loss function $f_t(\cdot)$ adaptively after seeing $x_t$, which of course cannot be the case if sublinear regret is obtained. It is my understanding that the adversary is actually oblivious (the functions and constraints are fixed in advance) and the functions are only revealed to the algorithm after choosing $x_t$. In any case, the authors should properly define the interaction protocol and the exact notion of the adversary in question.
* It is a bit unclear to me what are the novel techniques utilized in this paper compared to previous works on constrained OCO. Since the analysis seems relatively simple, I think the authors should elaborate on the technical challenges faced when proving their main results, and what novel techniques they used.
* While the authors claim to only require a call to a projection oracle at every round, there is a subtle point in which they also call a gradient oracle on a combination of the objective function and the constraint function. I am not an expert in the constrained OCO literature, so if this is a standard assumption in this setting I think the authors should mention it or alternatively formally define the type of gradient oracle used.
Technical Quality: 3
Clarity: 3
Questions for Authors: Beside the points I raised under "Weaknesses", I would appreciate it if the authors could address the following:
* The authors remark that the setting of $k$ constraints is equivalent to the single constraint setting by defining the constraint function as the pointwise max over the constraints. While this is true in the offline setting, there is a subtle point in the online setting where the definitions of CCV are not equivalent. Specifically, for $k$ constraints the CCV is essentially a maximum over sum while for a single constraint the CCV is a sum over maximum which is only larger, and thus it suffices, when proving an upper bound, to consider the single constraint setting. I think the authors should elaborate a bit on this point since this is not trivial at first glance.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: $\textbf{Type of adversary:}$
$\textbf{Comment:}$ ``The authors do not formally define the interaction protocol in the online setting studied in this paper. Specifically, from lines 27-28 is is implied that the adversary chooses the loss function $f_t(\cdot)$ adaptively after seeing $x_t$, which of course cannot be the case if sublinear regret is obtained. It is my understanding that the adversary is actually oblivious (the functions and constraints are fixed in advance) and the functions are only revealed to the algorithm after choosing $x_t$. In any case, the authors should properly define the interaction protocol and the exact notion of the adversary in question."
$\textbf{Reply:}$
We reiterate that we consider an adaptive adversary, not just a weaker, oblivious adversary. To be precise, the adversary chooses the cost function $f_t$ and constraint function $g_t$ $\it{after}$ seeing the action $x_t$ for round $t$. This is a standard definition of the adaptive adversary in online convex optimization (OCO) literature, $\it{e.g.}$, the textbook [Hazan, Section 1.1], defines the following interaction protocol:
``At iteration $t$, the online player chooses $x_t \in K$. After the player has committed to this choice, a convex cost function $f_t$ is revealed."
See also [Orabona, Section 2.1], which considers the same interaction protocol. For adaptive adversaries, the standard online gradient descent policy is known to enjoy a sub-linear $O(\sqrt{T})$ regret [Hazan, Theorem 3.1].
While revising the paper, we will make the description of the adversary more clear.
$\textbf{Novelty:}$
$\textbf{Comment:}$ ``It is a bit unclear to me what are the novel techniques utilized in this paper compared to previous works on constrained OCO. Since the analysis seems relatively simple, I think the authors should elaborate on the technical challenges faced when proving their main results, and what novel techniques they used."
$\textbf{Reply:}$ Most of the previous papers on this problem used a primal-dual-based technique, which yielded sub-optimal bounds (see Table 1). In this paper, for the first time, we proved a novel regret decomposition inequality (Eq. 6), which results from an entirely different Lyapunov-based argument. Our analysis hinges on solving the regret decomposition inequality, which gives direct control simultaneously over the regret and constraint violations. On the other hand, the primal-dual-based techniques typically analyze the dynamics of the action sequence and lead to a more involved analysis. One of the major contributions of this paper is to pioneer Lyapunov-based techniques in the online convex optimization literature with a simple and elegant analysis, which should be seen as a strength and not a weakness.
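As a schematic illustration of this Lyapunov-based template (not the paper's exact surrogate from Eqn (5); the potential $\Phi$, step size, clipped constraint, and ball projection below are our assumptions), projected OGD on a combined loss looks like:

```python
import numpy as np

def coco_ogd(costs, constraints, x0, eta, phi_prime, radius=1.0):
    """Projected OGD on a surrogate combining cost and clipped-constraint gradients.

    costs[t](x) and constraints[t](x) each return (value, gradient).
    phi_prime(Q) weights the constraint term via a potential on the CCV Q.
    """
    x, Q = np.asarray(x0, dtype=float), 0.0
    history = []
    for f, g in zip(costs, constraints):
        history.append(x.copy())
        _, grad_f = f(x)
        g_val, grad_g = g(x)
        Q += max(g_val, 0.0)  # cumulative constraint violation so far
        # Surrogate gradient: cost plus potential-weighted (clipped) constraint.
        grad = grad_f + phi_prime(Q) * (grad_g if g_val > 0 else 0.0)
        x = x - eta * grad
        nrm = np.linalg.norm(x)  # Euclidean projection onto a ball of given radius
        if nrm > radius:
            x *= radius / nrm
    return history
```

Note that each round uses only one gradient evaluation and one projection, which is the efficiency property discussed in this rebuttal.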
$\textbf{Access to Gradient oracles:}$
$\textbf{Comment:}$ ``While the authors claim to only require a call to a projection oracle at every round, there is a subtle point in which they also call a gradient oracle on a combination of the objective function and the constraint function. I am not an expert in the constrained OCO literature, so if this is a standard assumption in this setting I think the authors should mention it or alternatively formally define the type of gradient oracle used."
$\textbf{Reply:}$
Access to a gradient oracle is standard in the OCO literature with first-order methods [Hazan, Chapter 3]. This is typically the `minimal' requirement for obtaining non-trivial results, $\it{e.g.,}$ the online gradient descent (OGD) algorithm. Alternatives to OGD, such as $\textbf{Follow the Regularized Leader}$-type algorithms, require full function access for the previous time steps. Note that, in addition to the gradient oracles, many previous papers on the same problem (referenced in the paper) required access to the entire constraint function (of the previous step) to solve a convex optimization problem (see algorithms using Conv-OPT in Table 1). In contrast, our proposed algorithm is much more efficient, as it only needs to compute the gradients of the cost and constraint functions at the current action $x_t$.
$\textbf{CCV bound with multiple constraints:}$
$\textbf{Comment:}$ ``The authors remark that the setting of $k$ constraints is equivalent to the single constraint setting by defining the constraint function as the pointwise max over the constraints. While this is true in the offline setting, there is a subtle point in the online setting where the definitions of CCV are not equivalent. Specifically, for $k$ constraints the CCV is essentially a maximum over sum while for a single constraint the CCV is a sum over maximum which is only larger, and thus it suffices, when proving an upper bound, to consider the single constraint setting. I think the authors should elaborate a bit on this point since this is not trivial at first glance."
$\textbf{Reply:}$
As the reviewer correctly pointed out, since the sum over max is larger than the max over sum, constructing a single constraint function $g_t$ by taking a pointwise maximum of the $k$ constraints suffices: the resulting constraint function $g_t$ satisfies the feasibility assumption (Assumption 3), and bounding the CCV for the $\{g_t\}$'s bounds the CCV for each of the individual constraint functions. In the revised version, we will state this point more explicitly.
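The inequality is easy to check numerically. With made-up per-round violation values $(g_{t,i}(x_t))^+$, summing the pointwise maxima (the CCV of the merged constraint) dominates the maximum of the per-constraint sums (the multi-constraint CCV):

```python
# Toy check: sum of pointwise max over constraints >= max over constraints of sums.
violations = [  # rows are rounds t = 1..4, columns are (g_{t,1})^+, (g_{t,2})^+
    [0.3, 0.0],
    [0.0, 0.5],
    [0.2, 0.1],
    [0.0, 0.4],
]
sum_of_max = sum(max(row) for row in violations)        # CCV of merged constraint g_t
max_of_sum = max(sum(col) for col in zip(*violations))  # per-constraint CCV, then max
assert sum_of_max >= max_of_sum  # bounding the former bounds the latter
```

Here `sum_of_max` is 1.4 while `max_of_sum` is 1.0, so an upper bound proved for the single merged constraint carries over to each individual constraint.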
$\textbf{References:}$
[1] Introduction to Online Convex Optimization, Second Edition, Elad Hazan, MIT Press (2022).
[2] Orabona, Francesco. "A modern introduction to online learning." arXiv preprint arXiv:1912.13213 (2019).
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the detailed response. I indeed misunderstood the online protocol studied here, so I thank the authors for resolving the issue.
I am also more convinced that the simplicity of the analysis using Lyapunov-based techniques, compared to previous work on this problem, should be evaluated as a positive.
After reading the other reviews, and as I'm not very much bothered with the nonnegative regret assumption in the strongly convex case nor with issues regarding empirical evaluations, I will increase my score to 7.
---
Reply to Comment 1.1.1:
Comment: We appreciate the feedback from the reviewer. | Summary: The major part of this paper studies adversarial OCO with unknown, time-varying constraints in the soft-enforcement setup, wherein at each round, both a loss $f_t$ and constraints $g_{t,i}$ are revealed. The performance metrics are the usual regret $\mathrm{Regret}_T = \sum_t f_t(x_t) - f_t(x^*),$ and the cumulative constraint violation $\mathrm{CCV}_T := \max_i \sum_t (g_{t,i}(x_t))^+,$ where $(\cdot)^+ = \max(0,\cdot),$ and $x^*$ is the best-in-hindsight competitor that satisfies _all_ of the constraints, i.e., $\forall i, t: g_{i,t}(x^*) \le 0$.
The paper shows that under the standard smoothness assumption, a simple modification of OGD, which can be seen as OGD on a loss that regularises constraint violation through a Lyapunov function, ensures that both $\mathrm{Regret}_T$ and $\mathrm{CCV}_T$ are $\tilde{O}(\sqrt{T})$, without any further assumptions, as were made by prior work when deriving such a result. Further, under strong convexity, the regret bounds improve to $\log T$ with the same CCV bound. Surprisingly, under strong convexity, the CCV bounds further improve to $\log T$ if $\mathrm{Regret}_T > 0$. The authors complement this result with a lower bound that simultaneously asserts that these metrics must both be larger than $\sqrt{T}$, at least if $T \le d,$ the dimension of the space.
The remainder of the paper is devoted to defining and studying the 'Online Constraint Satisfaction' (OCS) problem, which relaxes the assumption that a point satisfying all constraints exists, and demands methods for selecting $x_t$ such that $\sum g_t(x_t)$ is small over all intervals in $[1:T]$. Under natural relaxations of the existence of an $x$ satisfying all constraints (either by assuming that some $x$ satisfies all "somewhat aggregated" constraints, or quantitatively demanding that some $x$ has limited violation across all subintervals), the authors again show that a modified version of OGD, similar in spirit to their earlier method, ensures that the resulting sequence $x_t$ has low violation in the aforementioned strong sense.
Strengths: OCO with unknown constraints has attracted significant interest in recent years, amidst a general rise of interest in multiobjective online learning. I think that the paper makes a valuable contribution to this subfield by removing technical assumptions present in prior work, and by presenting an elegant algorithm with a slick and surprisingly simple analysis for the same. I further find the OCS problem interesting, and the results towards this quite powerful and elegant. I think that this paper will certainly be of interest to the online learning community at neurips.
Weaknesses: My only real grouse with the paper is the lower bound of Theorem 3, which I find to be overstated. The theorem is presented as "under assumptions 1, 2, and 3, the regret and CCV are $\Omega(\sqrt{T})$", but the proof only defines a single instance where this occurs, and this further requires the strange assumption that the dimension of the domain, $d$, exceeds $T$. I would much prefer that this is made explicit, e.g., to say something like _for any $d, T$ and algorithm, there exists an instance with a single time-varying constraint such that the method suffers $\min(\mathrm{regret}_T, \mathrm{CCV}_T) = \Omega(\sqrt{\min(T,d)})$._ This still makes the main point of the statement, but is explicit about the strength of the bound.
Technical Quality: 3
Clarity: 3
Questions for Authors: A suggestion: Definition 1 mixes the definition of the concept of $P_T$ with a bunch of editorialising, which just makes it hard to see what the object being defined is. I would suggest that the definition simply states that $P_T = \frac1F \min \max \cdots,$ and all of these comments are moved appropriately to either before or after it, but certainly not within the definition itself.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: This is fine
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: $\textbf{Lower Bound of Theorem 3}$
$\textbf{Comment:}$ ``My only real grouse with the paper is the lower bound of Theorem 3, which I find to be overstated. The theorem is presented as - under assumptions 1, 2, and 3, the regret and CCV are $\Omega(\sqrt{T})$ - but the proof only defines a single instance where this occurs, and this further requires the strange assumption that the dimension of the domain, $d$, exceeds $T$. I would much prefer that this is made explicit, e.g., to say something like for any $d, T$ and algorithm, there exists an instance with a single time-varying constraint such that the method suffers $\min(\mathrm{regret}_T, \mathrm{CCV}_T) = \Omega(\sqrt{\min(T,d)})$. This still makes the main point of the statement, but is explicit about the strength of the bound."
$\textbf{Reply:}$ The reviewer is correct in pointing this out, and we will make the necessary changes to correctly reflect the strength of the result.
On $d\ge T$ being a ``strange assumption," we would like to note that when the cost and constraint functions are given by deep neural networks (see, e.g., our experimental setup on the anomaly detection problem discussed below), typically, the dimension of the parameters ($d$) is an order of magnitude larger than the number of data points ($T$).
Technically, please note that, as far as we know, ours is the first result in the literature for the COCO problem that simultaneously lower bounds the regret and CCV (cumulative constraint violation), even under this assumption. It would certainly be interesting to remove this restriction and prove a lower bound with a constant dimension.
$\textbf{Suggestion to fix Definition 1}$
$\textbf{Comment:}$ ``A suggestion: Definition 1 mixes the definition of the concept of $P_T$ with a bunch of editorialising, which just makes it hard to see what the object being defined is. I would suggest that the definition simply states that $P_T = \frac1F \min \max \cdots,$ and all of these comments are moved appropriately to either before or after it, but certainly not within the definition itself."
$\textbf{Reply:}$ We agree with the suggestion, and in our revised version, we will make Definition 1 clearer.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarification. I think expanding on the application settings where $d > T$ is a reasonable assumption, and including simulations in such setups, will be beneficial to the paper. I hope this makes it in, along with the change to the lower bound statement. I will keep my score as is.
---
Reply to Comment 1.1.1:
Title: Many thanks for the suggestions
Comment: We thank the reviewer once again. In the final version, we will incorporate the above suggestions. | Rebuttal 1:
Rebuttal: In the attached pdf, we report our experimental results for the credit card fraud detection problem.
Pdf: /pdf/4419fa2c7ffb73449c226c692e6154d747d290b2.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Test Where Decisions Matter: Importance-driven Testing for Deep Reinforcement Learning | Accept (poster) | Summary: This paper proposes a model-based method for testing the safety properties of Deep Reinforcement Learning (DRL). The method computes a ranking of state importance across the entire state space, dividing the state space into safe and unsafe regions. The approach provides optimal test-case selection and guaranteed safety by providing formal verification guarantees over the entire state space. The method is evaluated on several examples, showing it discovers unsafe policy behavior with low testing effort.
Strengths: + The paper introduces a novel method to rank state importance across the entire state space, which further assist the testing procedure.
+ The use of optimistic and pessimistic safety estimates provides a range of expected outcomes and is interesting
+ The authors conduct a detailed evaluation of the method on several popular DRL tasks and demonstrate its effectiveness.
Weaknesses: - Some technical details should be explained more clearly.
- The authors don't compare IMT with SOTA DRL testing methods, like [15] mentioned by the authors in the paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: In general, the idea of ranking the importance of the state and leveraging it as the testing guidance is very interesting. I believe this work would have a certain impact on the further development of DRL testing.
There are a few technical points unclear.
1. Page 2, Line 60. It would be good if the authors gave a more concrete concept of the formal guarantee provided by IMT, like probabilistic certification.
2. Page 5, Line 175. What do you mean by "significantly larger"? Do you perform statistical testing here?
Moreover, some related work is missing:
- Gerasimou, Simos, Hasan Ferit Eniser, Alper Sen, and Alper Cakan. "Importance-driven deep learning system testing." In Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering, pp. 702-713. 2020.
- Song, Jiayang, Xuan Xie, and Lei Ma. "$\mathtt{SIEGE}$: A Semantics-Guided Safety Enhancement Framework for AI-enabled Cyber-Physical Systems." IEEE Transactions on Software Engineering (2023).
- Shi, Ying, Beibei Yin, and Zheng Zheng. "Multi-granularity coverage criteria for deep reinforcement learning systems." Journal of Systems and Software 212 (2024): 112016.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes, the authors address it in the guideline.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for finding our idea of guiding DRL testing through an importance ranking interesting. Given the current lack of testing methodologies for DRL and the numerous potential extensions of our approach, we believe our method could significantly impact the development of testing strategies for DRL. We will address the identified weaknesses, especially we will clarify the technical details. Please also see the global rebuttal, where we address the comparison to other baselines.
## Q1: Concept of formal guarantees
We agree that the provided formal guarantees should already be stated clearly in the introduction. We will amend this in the introduction.
In Section 3.1. (Lines 149-158), we explain the provided formal guarantees:
The estimates provide lower and upper bounds on the expected outcomes of the policy execution across all modeled states in the state space.
If the optimistic estimate violates a safety objective, then the policy is guaranteed to violate the safety objective, i.e., the probability that the safety CTL formula is satisfied using the agent’s policy is lower than the defined safety threshold.
On the other hand, any safety objective assured by the pessimistic estimate is formally proven to hold for the policy, i.e., the probability that the safety CTL formula is satisfied using the agent's policy is greater than or equal to the defined safety threshold.
We will give a similar explanation in the introduction.
## Q2: “significant” difference (Line 175). Are you performing statistical testing?
The values for $e_{opt}(s,a,n)$ and $e_{opt}(s,a',n)$ are directly computed using sound value iteration which yields exact probabilities. Thus, we are not performing statistical testing.
We consider a state important if the maximal difference for two distinct actions is substantially larger than 0, i.e. if the decision of the agent has an important impact on the expected probability to ensure safety. We apologize for the misleading phrasing. We will rephrase this and clarify it in the camera-ready version.
## Other Comments - Related Work
We thank the reviewer for giving us pointers to related work. We will include all three references in the related work section.
Highly relevant are [1] and [3]. [1] defines importance over the most important neurons and tests their behavior. In contrast, we define the ranking over the environment states from which the policy should be explored. [3] is relevant since they are the first to discuss coverage criteria for DRL testing.
In addition to comparing to other testing frameworks for DRL, we compare with approaches that combine model-based formal methods with model-free RL. [2] is clearly relevant in that regard, as they combine DRL and model learning to design controllers that satisfy STL properties.
[1] Gerasimou, Simos, Hasan Ferit Eniser, Alper Sen, and Alper Cakan. "Importance-driven deep learning system testing."
[2] Song, Jiayang, Xuan Xie, and Lei Ma. "SIEGE: A Semantics-Guided Safety Enhancement Framework for AI-enabled Cyber-Physical Systems."
[3] Shi, Ying, Beibei Yin, and Zheng Zheng. "Multi-granularity coverage criteria for deep reinforcement learning systems."
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal! It addresses my concerns, and I think the paper should be accepted.
Strengths: 1. The paper is well-structured and easy to follow.
2. The idea of testing on important samples is interesting.
Weaknesses: 1. The definition of the safety formula $\phi$ and the formula for calculating $\mathbb{P}\_{\mathcal{M}^\pi, \phi}$ should be stated clearly.
2. The current framework depends on several human-defined thresholds, for instance, $\delta_{\phi}$, $\epsilon_{\phi}$ $\delta_{i}$. How those parameters are determined and sensitive to the final testing result should be discussed more clearly.
3. The current framework is shown to be applicable in a discrete state and action space. It would be nice to see the discussion on the potentials and limitations applied to the continuous state and action space.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. The importance ranking is based on the maximal difference between the optimistic estimates concerning the available actions, which considers the impact of decisions on the optimistic estimates. Why not use the pessimistic estimate, as it is more concerned about safety violations?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: No
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for finding our testing approach interesting. We believe our method could significantly impact the development of testing strategies for DRL and hope that it will lead to interesting follow up research. We will answer the questions and address the mentioned weaknesses in the following.
## Q1. Why is the importance ranking based on the maximal difference between the optimistic estimates? Why not use the pessimistic estimate, as it is more concerned about safety violations?
To evaluate the safety of executing action a1 in state s0, we must assess the safety of any state s1 that can be reached from s0 via a1. To avoid sampling the agent's policy in s1 and subsequent states, we can either assume the agent follows the safest policy (using $P^{max}$) or the least safe policy (using $P^{min}$). Here, we justify why our testing framework uses $P^{max}$ ($e_{opt}$) rather than $P^{min}$ ($e_{pes}$) to assess safety.
Using $e_{opt}$:
If for a given state s0, there exist actions a1 and a2 with a large difference in the maximal expected probability of satisfying $\varphi$, this state will be highly ranked.
Suppose we want to evaluate how safe it was to take action a1 in s0. Assume the transition (s0,a1,s1) occurs with high probability and s1 has a high optimistic estimate. In this case, there exists a safe policy from s1 (which the agent could choose to follow). Thus, entering s1 was not safety-critical since any safety issues could be avoided from s1.
Further, assume that if the agent selects action a2 from s0, the transition (s0,a2,s2) occurs with high probability and s2 has a low optimistic estimate (i.e., it is not possible to execute a safe policy from s2 anymore). In this scenario, state s0 is highly ranked as it is crucial to choose action a1 over a2.
Using $e_{pes}$:
The pessimistic estimate does not have this property since it is based on the minimal expected probability of satisfying $\varphi$. Due to the possibility of reaching unsafe states from most states, the difference between actions a1 and a2 does not reflect the impact of the action choice on mitigating a safety violation as the optimistic estimate does. This leads to similar rankings for most states.
Consequently, we would need to test any state from which the agent could cause a safety violation. For example, in a skiing scenario, even if the agent is far from any object, we would still need to test the state even if from any successor state, crashes can still easily be avoided. This is exactly what we want to avoid.
Once we have sufficiently restricted the environment MDP, the pessimistic estimate becomes more informative. It can then identify candidate states that have not been tested and from which unsafe areas cannot be entered. This is highlighted in lines 151 to 155.
## W1: Clearly state the safety formula $\varphi$ and the formula for calculating the probabilities
We consider objectives in the safety fragment of computation tree logic (CTL) [1]. This allows us to express objectives like: $AG \neg collision$, which says that “on all paths (A) globally (G) no collision should occur”. We will add such an example in the paper.
The probabilities used in the paper ($P^{max}$, $P^{min}$ for MDPs, $P$ for Markov chains) can be computed using standard algorithms for solving MDPs. Many tools (like PRISM, STORM, or TEMPEST) compute these probabilities efficiently using various algorithms, e.g., sound value iteration (SVI) [2], a dynamic programming approach, or a translation into a linear program. Thus, the estimates are computed from exact probabilities when sound algorithms are used to compute $P^{min}$ and $P^{max}$.
The core of SVI is the iterative computation of the probabilities using the transition matrix of $M^i$. The complexity therefore scales polynomially with the number of states in the MDP. The powerful tool support in the community allows computing these probabilities efficiently for MDPs with tens of millions of states [3]. We will point this out more clearly in the paper.
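For illustration, here is plain value iteration for maximal/minimal reachability probabilities on a tiny hand-made MDP. The states, actions, and transition probabilities are invented for the example; a production tool such as PRISM or STORM (with sound stopping criteria) would be used in practice.

```python
def reach_prob(mdp, goal, n_iter=100, mode=max):
    """Value iteration for the max/min probability (over all policies)
    of eventually reaching `goal` in a small MDP.
    mdp maps state -> {action -> {successor: probability}}."""
    p = {s: 1.0 if s == goal else 0.0 for s in mdp}
    for _ in range(n_iter):
        p = {
            s: p[s] if not actions else mode(           # absorbing states keep value
                sum(prob * p[succ] for succ, prob in dist.items())
                for dist in actions.values()            # optimize over actions
            )
            for s, actions in mdp.items()
        }
    return p

# Invented two-action MDP: from s0, action "a" is much safer than "b".
mdp = {
    "s0": {"a": {"goal": 0.9, "bad": 0.1}, "b": {"goal": 0.4, "bad": 0.6}},
    "goal": {},  # absorbing
    "bad": {},   # absorbing
}
p_max = reach_prob(mdp, "goal", mode=max)["s0"]
p_min = reach_prob(mdp, "goal", mode=min)["s0"]
print(p_max, p_min)  # 0.9 0.4
```

Reading "goal" as the safe outcome, `p_max` and `p_min` play the role of the optimistic and pessimistic values underlying $e_{opt}$ and $e_{pes}$.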
[1] C. Baier and J. Katoen, “Principles of Model Checking”.
[2] T. Quatmann et al. “Sound Value Iteration”
[3] J. Katoen, “The Probabilistic Model Checking Landscape”
## W2: Selection and sensitivity to the parameters.
We thank the reviewer for the comment. While we think that the used parameters are quite natural, we agree that a discussion of the selection and the effects of the parameters will improve the paper.
$\delta_\varphi$: The probability of satisfying the safety objective should be at least $\delta_\varphi$. This threshold is part of the safety specification, e.g., $P(AG\, \neg collision) \ge \delta_\varphi$. The higher the threshold, the fewer risks the agent is allowed to take. Extreme cases: if $\delta_\varphi = 1$, the framework classifies an action as unsafe if there is any risk that safety will be violated; if $\delta_\varphi = 0$, any behavior is safe.
$\epsilon_\phi$: IMT stops if the difference between the optimistic and the pessimistic safety estimate is below this threshold for all states. Thus, the smaller this threshold, the more test cases have to be executed until the algorithm terminates, but the more accurate the estimated expected probability that the agent's policy satisfies safety becomes.
$\delta_i$: When using clustering, we want to cluster highly ranked states. Thus, this threshold defines the states that will be clustered and potentially tested. The lower the value, the more states will be clustered and potentially tested. The chosen value for $\delta_i$ in the skiing experiment is $0.8$ (Line 297), since this value successfully excludes states that are far from any potentially safety-critical states. We apologize for not using the defined variable.
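The role of $\epsilon_\phi$ above as a termination criterion can be sketched as follows. State names and estimate values are invented for the example; the real framework tightens the estimates via model checking after each executed test.

```python
def converged(e_opt, e_pes, eps):
    """IMT-style stopping check: optimistic and pessimistic safety
    estimates must agree up to eps on every state."""
    return all(e_opt[s] - e_pes[s] <= eps for s in e_opt)

# Hypothetical per-state estimates after a few rounds of testing.
e_opt = {"s0": 0.95, "s1": 0.90}
e_pes = {"s0": 0.94, "s1": 0.70}
print(converged(e_opt, e_pes, eps=0.05))  # False: the gap at s1 is still 0.20
```

Testing would continue from states like `s1`, whose wide estimate interval means the policy's safety there is not yet resolved.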
## W3: Continuous states and action spaces.
We will add a paragraph to discuss this extension. Please also see our answer in the global rebuttal. | Summary: This paper presents a framework called importance-driven model based testing for RL models. It uses a model-based approach to compute estimates of safety based on the MDP explored so far using the policy and ranks the importance of states based on the impact of decisions on safety. Then it samples the policy on these state to further restrict the MDP. The framework provides formal verification guarantees and partitions the state space into safe and unsafe regions. The authors also propose a version to cluster similar states to further reduce the number of samples but at the cost of a relaxed guarantees. Experimental evaluations on different environments of various sizes demonstrate its effectiveness in reducing number of samples needed compared to random sampling both with and without the model.
Strengths: - **Relevance**: Policy testing is an important area of research in reinforcement learning.
- **Novelty**: The paper introduces a novel algorithm that uses model-based approaches to reduce sample complexity while providing strong guarantees.
- **Soundness and clarity**: The experiments are thorough. The results clearly illustrate the system's effectiveness and are presented clearly with appropriate images and plots. Figure 2, in particular, best illustrates the effectiveness of this system.
Weaknesses: Some of the weaknesses of this paper are:
- The approach relies on very strict assumptions of having a model of the environment with a fixed set of states and actions, which can be very restrictive for real-world RL tasks.
- The introduction and abstract could better motivate and establish the problem. Currently, they only explain the algorithm without much context. For example, it is not clear how the safety estimates are computed until sections 2 and 3 or why computing them iteratively is easier than sampling the action on every state.
Technical Quality: 4
Clarity: 3
Questions for Authors: - How can the number of policy queries in Fig 3c be greater than the number of states in the environment?
- For runtimes, it will also be useful to break it down into model checking and policy sampling times for both IMT and MT approaches.
- Please add legends to Figure 3
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The authors briefly talk about limitation about the requirements of the current approach, but they could definitely expand a bit more on these limitations early on the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed evaluation of our paper and for seeing the relevance of our proposed testing approach. We believe that our approach has the potential to significantly impact testing for DRL and hope that it will lead to additional interesting follow up research. In the following, we will address the mentioned weaknesses and answer the questions posed.
## Q1: How can the number of policy queries in Fig 3c be greater than the number of states in the environment?
One of the significant advantages of our model-based testing approach is that we only need to sample the policy once per test case, as the model immediately provides information about the safety of the selected action (including induced safety risks in the following steps).
This is not the case in random testing. Since random testing does not have a model in the background, it cannot measure the induced safety risks of a particular decision. Thus, we sample the policy and simulate the environment for multiple steps. This is the predominant approach in (software) testing, where a system is executed for a predetermined number of steps in order to reveal defects encountered along the way.
Hence, for every test from a selected state, we query the policy several times.
## Q2: Breaking down the runtimes into times for model checking and policy sampling and Q3: Add legends to Figure 3
For both Q2 and Q3 we thank the reviewer for the comment and will extend the paper to make the requested adjustments. | Summary: This work looks into the RL testing problem. RL policies are complex and hard to understand in terms of safety and performance. This work aims to test the policies via states in which the agent’s decisions have the highest impact on the expected outcome. This paper proposes a model-based method to compute a ranking of state importance and focus the testing on the highest-ranked states. The evaluation covers multiple benchmarks and shows that this approach can find unsafe policies.
Strengths: 1. RL testing is important and few works touched on this.
2. The idea of using the highest impact states to test the policies is interesting and seems effective.
Weaknesses: 1. The guarantee about the correctness of the ranking does not seem to be clarified. Do you iterate over all the actions in the action set? What if the action space is large and continuous?
2. The testing is based on probabilistic model checking instead of worst-case checking. It would be good to bring out this in the abstract and introduction explicitly.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. How is the ranking computed? Do you cover continuous space?
2. What’s the scalability of this approach? Are there any cases where testing fails?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The limitations are not discussed in the main text.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s interest in our idea of guiding DRL testing through an importance ranking. We also agree that there is a current lack of testing methodologies for DRL. Additionally, considering the many potential extensions of our approach, we believe our method could significantly impact the development of DRL testing strategies. As it is the first paper that brings model based testing into DRL, we hope it will inspire follow up research.
## Question - Computation and correctness of the ranking (W1)
The optimistic and pessimistic estimates are computed using the probabilities $P^{max}$ and $P^{min}$, which denote the maximal and minimal expected probability of satisfying a property $\varphi$, respectively (see Definition 3.1 in the paper). These probabilities can be computed with standard algorithms for solving MDPs. Many tools compute these probabilities efficiently using various algorithms, e.g., sound value iteration (SVI) [1], a dynamic programming approach, or a translation into a linear program. Thus, the estimates are computed from exact probabilities when sound algorithms are used. Hence, if the MDP model is accurate, the provided estimates are exact.
To compute the ranking, we indeed iterate over all the actions in the action set (we are currently not considering continuous action spaces; with our current approach, we would need to discretize the action space).
The rank of a state $s$ is given as the maximal difference between the optimistic estimates with respect to the available actions: $\max_{a,a'} (e_{opt}(s, a, n) - e_{opt}(s, a', n))$. $e_{opt}(s, a, n)$ is computed using the probabilities $P^{max}$.
For a detailed discussion about the definition of the ranking, please see our answer to Q1 of reviewer xBae.
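Given per-action optimistic estimates, the ranking criterion above reduces to a max-minus-min gap per state. A small sketch with invented estimate values for a skiing-style scenario (state names and numbers are illustrative only):

```python
def importance(e_opt):
    """Rank each state by the gap max_{a,a'} e_opt(s,a) - e_opt(s,a')."""
    return {s: max(vals.values()) - min(vals.values()) for s, vals in e_opt.items()}

# Hypothetical optimistic estimates e_opt(s, a): near an obstacle the
# action choice is decisive, on open terrain every action is recoverable.
e_opt = {
    "near_obstacle": {"left": 0.95, "right": 0.20},
    "open_slope":    {"left": 0.99, "right": 0.98},
}
rank = importance(e_opt)
order = sorted(rank, key=rank.get, reverse=True)
print(order[0])  # near_obstacle: tested first
```

States like `open_slope`, where every action admits an equally safe continuation, receive a near-zero rank and are deprioritized, which is exactly the testing-effort saving the ranking is designed to achieve.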
## Question - Scalability
Being based on probabilistic model-checking, which internally uses value iteration, the test-case generation scales to tens of millions of states in the MDP that serves as the test model [2].
Please note that the size of the environment under test can be far larger. In model-based testing, a common strategy for large and complex systems is usually to create different models that capture different aspects of the system under test. Furthermore, to test for safety, it is often sufficient to consider abstract MDP model with a reduced feature space. States in an MDP typically encompass all relevant features of an agent and its environment. States then become vectors of feature values, where each combination of these constitutes a state. Not all features may be relevant to safety, though. By disregarding any features irrelevant to safety, the original model can be pruned to a
much smaller model. Retaining the safety-relevant dynamics of the environment allows the use of model-based techniques for high-dimensional environments.
Thus, it is often possible to construct MDP models with a manageable size (tens of millions of states are fine) that are suitable for safety testing, even if the environment under test is continuous or very high dimensional. This concept is also deployed in other areas like enforcing safe exploration during training via shielding [3]. As soon as such an MDP is available, our testing approach can be deployed.
## Question - Continuous states and actions spaces
We refer to our answer in the global rebuttal.
## Question - Are there any cases where testing fails?
When using the approach in Section 3.1, our approach provides verification guarantees: Thus, after our testing framework terminates, the policy is verified to adhere to the safety requirements.
As soon as clusters are used, the formal verification guarantees are lost, and unsafe behavior might be missed, even if the algorithm terminates. The advantage of using our testing approach is that similar states are clustered: Thus, states close to a certain safety hazard are clustered together and the agent's behavior with respect to this hazard is tested several times (even if not exhaustively). Unsafe behavior is only missed if all tested states within a cluster are found to be safe, but there is an untested state within the cluster from which the agent selects an unsafe action.
Note that as soon as we detect a single unsafe behavior in a cluster, all states in the cluster are marked as unsafe.
[1] Quatmann et al. “Sound Value Iteration”
[2] Katoen. “The Probabilistic Model Checking Landscape”
[3] Jansen et al. ”Safe Reinforcement Learning Using Probabilistic Shields” | Rebuttal 1:
Rebuttal: We sincerely thank the reviewers for their valuable feedback. Our paper introduces model-based testing to the reinforcement learning setting. Since this is the very first paper in that direction, we agree with the reviewers that there are numerous intriguing directions yet to be explored, such as continuous state and action spaces. We hope that this work will inspire future research in model-based testing for deep reinforcement learning (DRL).
In this global rebuttal, we aim to address the questions and concerns raised by multiple reviewers.
## The approach relies on the assumption of having an MDP model of the environment.
We agree that needing an MDP model of the environment may be considered a hurdle toward the adoption of model-based testing (MBT). Indeed, it is often perceived as such in industrial applications of MBT for testing conventional software. However, experience reports [1,2] from industry reveal benefits resulting from the activity of modeling software systems and their environments beyond just enabling test-case generation. The benefits include a better understanding of the system, environment, and requirements. The model commonly resolves ambiguities and aids communication between different stakeholders. It also opens avenues for other activities such as the verification of temporal properties. In RL, environmental models enable safe exploration during training via techniques like shielding. Furthermore, the application areas of neurosymbolic AI are constantly growing, utilizing the same concept of combining model-based symbolic AI (e.g., to guarantee safety) with subsymbolic AI (e.g., to achieve high scalability and performance). Additionally, the software engineering community has shown several successful applications of model learning to create test models automatically, thus circumventing manual modeling.
This paper is the first to introduce MBT into the testing of reinforcement learning. In line with related model-based techniques in other areas, we believe the benefits—such as using the model to select the most important test cases, reducing the number of policy samples, and computing estimates over the entire state space—outweigh the drawback of needing a model.
[1] Emil Alegroth et al. “Practitioners’ best practices to Adopt, Use or Abandon Model-based Testing with Graphical models for Software-intensive Systems”, Empirical Software Engineering (2022) 27: 103
[2] Robert V. Binder et al. “Model-Based Testing: Where Does It Stand?”, Communications of the ACM (2015), 58: 2
## Extension to continuous state and action spaces
Indeed, extending our framework to environments with continuous state and action spaces is possible, and we plan to pursue this as our next step. We plan to learn abstract finite-state models of the environment. To automatically learn such a model, we are currently evaluating two potential approaches: (1) Compute an abstract state-space of the MDP model representation through dimensionality reduction and clustering of observed environmental states. The stochastic transitions are learned via a combination of active and passive model learning, as proposed in [5].
(2) Apply world model learning via neural networks, as in DreamerV3 [4], from which we extract finite-state models via quantized bottleneck insertion as in [3].
In this paper, we provide the first step: We introduce the notion of importance-driven testing and show its potential via the Atari Skiing game. To handle continuous domains, we will apply importance rankings in abstract environment models. We believe that the learned MDP models will be sufficiently precise to compute good importance rankings across all states, thereby identifying critical situations that need to be tested. Thus, we believe our method will evolve into a valuable testing method for intricate control policies in complex environments.
[3] Anurag Koul et al. "Learning Finite State Representations of Recurrent Policy Networks"
[4] Danijar Hafner et al. "Mastering Diverse Domains through World Models"
[5] Martin Tappler et al. "Learning Environment Models with Continuous Stochastic Dynamics"
## Comparison to stronger baseline
A comparison to existing search-based testing is difficult for two reasons. (1) There is no common framework for RL testing methods yet. For example, MDPFuzz only fuzzes the initial states, while STARLA creates complete episodes that must end in a terminal state. Therefore, our approach is hardly comparable to these methods due to the difference in setting. (2) A fair comparison would likely need to go beyond safety-violation detection rates, which have been the focus of existing work, toward more comprehensive criteria. For example, STARLA might create different test cases for the same failure states, i.e., different episodes that traverse a common subset of states. As a result, STARLA might have an artificially high failure-detection rate, although certain states are covered multiple times, which we strictly avoid. In our comparison to random testing, we check how many of the failure states are found, which is not easily enforceable in the meta-heuristic search performed by STARLA.
Hence, we need suitable quality metrics for test cases in RL, which we are currently exploring in addition to model learning. Note that quality metrics (called adequacy criteria in software testing) and test-case selection criteria are two sides of the same coin, thus our approach presents a step toward defining proper quality criteria. | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper proposes an algorithm (IMT) to test a deterministic policy in finite MDPs where a model of the MDP is available. The primary contributions of the paper are algorithmic and empirical. IMT works by iteratively partitioning the state space into safe, unsafe or undetermined. Using the MDP model, IMT is able to construct optimistic and pessimistic bounds on the property being tested (e.g., safety). These estimates are used to identify the states with the largest gap in their optimistic estimates between any pair of actions. IMT prioritizes testing these states next, which has the effect of tightening the bounds and taking the largest possible step towards convergence and halting. The final result includes a partitioning of the state space into safe and unsafe states, which is valuable feedback for improving the policy (offline). The paper also includes a clustering-based variant to scale the algorithm to larger MDPs. Experiments on 3 domains demonstrate the utility of IMT against a random baseline and a variant which does not rank the states.
Strengths: + The paper studies the important problem of verifying black-box policies used in control applications. This is an important question as RL-learned controllers are deployed more widely. Advances here are likely to be of significant interest to the community.
+ The paper is well written. The main ideas, algorithms, and experiments are generally explained clearly. Although some important implementation details are missing, the paper is nevertheless easy to read.
+ The proposed algorithm is intuitively clear and using the estimates to identify "important" states to test next seems to work well. In cases where a model of the discrete MDP is available (or perfectly learnable), this seems like a useful technique to rigorously test a policy and uncover states where the policy does not behave safely. The experiments and appendices include a good amount of detail.
Weaknesses: - The paper does not provide sufficient detail on important algorithmic implementation details. Examples include the prerequisites for the algorithm (simulators, CTL), implementation of `computeEstimates`, $e_{\text{opt}}(s, a, n)$, clustering similarity/distance function, etc.
- The assumption of a correct model of the MDP is a somewhat large one in many domains. A discussion of the effort needed to obtain such a model would improve the paper and make its applicability to new domains clearer. Additionally, exploring the use of noisy simulators with varying levels of noise in the dynamics would be really interesting to see. As the paper mentions in Line 331, being able to leverage models of the environment learned from data is likely to increase the applicability. However, these are likely to include noisy dynamics so an experimental evaluation of the impact of noisy dynamics in the current experimental setup would have strengthened the paper.
- A comparison to stronger baselines would make it easier to assess the empirical contributions of the paper. Currently, the only comparison is to random testing. Comparing safety-violation detection rates against black-box policy testers (e.g., MDPFuzz, STARLA) using the same budget would have helped place this paper in the larger body of work on DRL testing as well as understand the applicability of these techniques in domains with different characteristics (size, model availability or learnability, etc.).
Technical Quality: 3
Clarity: 3
Questions for Authors: - What is the exact implementation of `computeEstimates`? How does it scale with the size of the input (restricted) MDP? Similarly, how exactly is $e_{\text{opt}}(s, a, n)$ estimated?
- What steps are involved in adding a new domain from scratch? How easy or hard is it to create the simulator, specify the CTL objective, etc.?
- Is it feasible to quantify or conduct investigations into how IMT might handle noise in the transition function in the current experimental setup?
- What is the precise definition of a robust policy in Line 195?
- What is the computational cost of the clustering approach used here? Should it be reflected in the complexity analysis in Lines 214 - 219?
- How does the quality of the resulting clusters affect performance? Does Alg 2 degrade gracefully as cluster quality decreases?
- Is it necessary to use a different mechanism (sink states) to restrict the MDP when clustering? How does terminating the trajectory at the sink state affect the $e_{\text{opt}}$, $e_{\text{pes}}$ estimates, if at all? Does the approach still work if the restriction mechanism from Alg 1 is used?
- Does it make sense to compare safety-violation detection rates against other baselines (MDPFuzz, STARLA)?
- Does it make sense to consider continuous state and / or action MDPs using this framework? If yes, what modifications might be needed to make it work on cMDPs?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed evaluation of our paper. We believe that our proposed testing framework has the potential to significantly impact testing for DRL. Although testing DRL is intrinsically challenging, the problem has only recently garnered attention from the testing community. Currently, there is no common framework for RL testing methods, and most papers explore search-based strategies. Our paper is the first to introduce model-based testing into the DRL setting, a method that has proven highly relevant in many other domains and that we believe has much potential in the DRL setting as well.
## Q1: Details on computation of estimates, scalability, and what is estimated.
The optimistic and pessimistic estimates are computed using the probabilities $P^{max}$ and $P^{min}$, which define the expected maximal and minimal probability of satisfying a property $\varphi$, respectively (see Definition 3.1 in the paper). These probabilities can be computed with standard algorithms for solving MDPs. Many tools can compute these probabilities efficiently using various algorithms, e.g., sound value iteration (SVI) [1], a dynamic-programming approach, or a translation into a linear program. Thus, the estimates correspond to exact probabilities when sound algorithms are used.
The core of SVI is the iterative computation of the probabilities using the transition matrix of $M^i$. The complexity therefore scales polynomially with the number of states. The powerful tool support in the community makes it possible to compute these probabilities efficiently for MDPs with tens of millions of states [2].
[1] Quatmann et al. “Sound Value Iteration”
[2] Katoen. “The Probabilistic Model Checking Landscape”
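To illustrate the recursion underlying such computations, here is a minimal Python sketch of plain value iteration for the maximal probability of reaching a target set. This is our simplification, not SVI itself (which additionally maintains sound lower and upper bounds), and the MDP encoding is an invented toy format:

```python
# Plain value iteration for maximal reachability probability P^max
# in a finite MDP (illustrative sketch, not the authors' tooling).

def max_reach_prob(transitions, targets, n_iters=1000, tol=1e-10):
    """transitions: {state: {action: [(next_state, prob), ...]}};
    targets: set of target states. Returns P^max per state."""
    states = list(transitions)
    v = {s: (1.0 if s in targets else 0.0) for s in states}
    for _ in range(n_iters):
        delta = 0.0
        for s in states:
            if s in targets:
                continue  # targets are absorbing with value 1
            best = max(
                sum(p * v[t] for t, p in succ)
                for succ in transitions[s].values()
            )
            delta = max(delta, abs(best - v[s]))
            v[s] = best
        if delta < tol:
            break
    return v

# Toy MDP: in s0, action 'a' reaches 'goal' w.p. 0.5 directly, while
# action 'b' reaches it w.p. 0.9 via s1 -- so P^max(s0) = 0.9.
mdp = {
    "s0": {"a": [("goal", 0.5), ("sink", 0.5)],
           "b": [("s1", 1.0)]},
    "s1": {"a": [("goal", 0.9), ("sink", 0.1)]},
    "goal": {"a": [("goal", 1.0)]},
    "sink": {"a": [("sink", 1.0)]},
}
probs = max_reach_prob(mdp, targets={"goal"})
```

$P^{min}$ is obtained analogously by replacing `max` over actions with `min`; production model checkers use sound variants of this recursion with guaranteed error bounds.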
## Q2: Steps for adding a new domain.
Our framework requires an MDP model of the environment and a CTL specification. The work involved in adding a new domain mostly amounts to creating an MDP. That is, you need knowledge about the environment dynamics on an abstract level. Most probabilistic model checkers accept inputs expressed in the symbolic modeling language PRISM [3]. For simulation, we generally rely on a gymnasium-like interface with a step and a reset functionality.
The objectives specified in CTL are relatively easy to define. For safety, it is generally sufficient to define events that must not happen. We also refer to our global rebuttal.
[3] Kwiatkowska. “PRISM 4.0: Verification of Probabilistic Real-time Systems.”
## Q3: Consequences of noise in transition relation.
IMT is only mildly affected by noise in the transition probabilities: the noise might only change the order in which states are tested. Assume that, from a certain state, a transition wrongly underrepresents the probability of reaching an unsafe state. Our testing framework might then assign a lower rank to this state. However, if our algorithm is executed until convergence, the state will eventually be tested.
Our approach benefits from being a testing framework, with its primary task being the automatic selection of test cases. Even if the MDP model used to select the next test cases is not perfect, our approach will likely still identify interesting test cases. We thank the reviewer for this question and will emphasize the robustness of our approach against noise in the paper.
## Q4: Definition of a robust policy in Line 195.
We consider a policy to be robust if small changes in the input do not significantly change the chosen action. We mention this when discussing the testing approach with clustering: The intuition is that if we assume that the states in the cluster are similar and the policy makes similar decisions in similar states, testing only a fraction of the states within each cluster yields good testing results.
## Q5: Costs for clustering.
For the complexity analysis, we focused on the steps of the approach proposed in the paper. We refrained from including clustering because it is problem-dependent. However, as we applied k-means, which scales very well, we found that its runtime is negligible in practice.
## Q6: Impact of the clusters' quality on performance
Low-quality clusters can lead to an unnecessarily large testing effort or missed unsafe behavior.
Consider the case when a cluster is identified as unsafe: If the cluster is too large and includes states that are actually safe, our iterative testing framework will needlessly cluster and test the predecessor states of the cluster. For example, in the context of skiing, our framework would end up testing states that are not critical for avoiding collisions with trees or poles. Now, consider a scenario where all states in a cluster are deemed safe: If the clustering method groups states that are not sufficiently similar, our testing approach may fail to detect unsafe behavior in the policy.
Please note that we provide verification results (guarantees that any unsafe behavior will be detected) only when no clustering is used.
## Q7: Necessity and impact of sink states.
Yes, it is necessary to adapt the restriction of the MDP in Algorithm 2.
If a single state of a cluster is assigned a failing verdict, we mark all states of this cluster as failing states and turn them into sink states. This allows the generalization of testing results to whole clusters, increasing scalability since not all states of a cluster need to be queried.
Terminating the trajectories at the sink states in clusters generally decreases the values of both estimates. By following a conservative approach, we consider all states in a cluster as unsafe if even one state is assigned a failing verdict. As a result, the estimates of the MDP with sink states are generally smaller than they would be in the unrestricted MDP. We will add a discussion to the paper.
To use the approach from Algorithm 1, test executions for all states in a cluster would be necessary, which contradicts the goal of minimizing the testing budget.
## Q8 and Q9
We refer to our answer in the global rebuttal.
---
Rebuttal Comment 1.1:
Title: Re. author response
Comment: I thank the authors for their detailed response to the reviewers. After reading the response, other reviews and comments, I've increased my score to "Weak Accept". While I don't think there's anything technically wrong with the paper, there are a number of major assumptions being made, which might reduce the applicability of the proposed approach. The performance of the proposed method is not explored on abstract or learned models of the environment, which is the most likely scenario. Including these experiments would significantly strengthen the paper, in my opinion. In the absence of these, it becomes challenging to evaluate the paper for impact and overall utility. | null | null | null | null | null | null |
Idiographic Personality Gaussian Process for Psychological Assessment | Accept (poster) | Summary: This paper presents a multi-output Gaussian process model for classification in the context of psychological studies. It's based on a linear model of co-regionalisation leveraging unit-level latent factors and RBF kernels to jointly model population and individual personality traits to tackle an old debate in psychometrics regarding their original nature. The authors leverage a variational formulation to derive a lower bound for the model evidence and optimise the individual-specific, population, and GP-related parameters. An extensive simulation and real-data study is proposed, presenting longitudinal and multi-subject survey analyses, personality correlations and predictions.
Strengths: This paper is remarkably well-written and presented. The flow of derivations is easy to follow, and the illustrations are helpful and of excellent quality. The treated problem is far from trivial, with data that present several degrees of correlations (time, individual, population) and technical constraints (missing and categorical data). The experiments are impressive, with extensive comparison against many competitors and excellent results overall. The method shows great promise in providing practical insight into the domain, and even though I'm not a specialist in psychology, the ability to reconcile idiographic and nomothetic approaches seems particularly valuable.
Weaknesses: One could argue that the methodological novelty is limited as the presented multi-output GP model is fairly well-known in the GP community. However, I think its similarities and differences with GPLVM and GPDM are well presented, and the overall application is far from trivial regarding the nature of data (categorical outputs, latent variables, missing data, ...). This relative weakness is more than compensated by the strength of results and the potential such a methodology brings in a field like psychology, where the signal-on-noise ratio is generally low, and the nature of measurements is often challenging.
Technical Quality: 4
Clarity: 4
Questions for Authors: I only have a few questions regarding computation times, scaling and robustness of the inter-output covariance matrices, which are well-known limitations of co-regionalisation GP models in practice.
- In this sense, could you provide numerical evidence and comparison regarding training and prediction times for IPGP and competitors?
- Even though the number of tasks/topics remains probably limited in psychological applications, could you discuss a bit more, in Section 4.1 **Result**, the decrease in performance between full IPGP and the low-rank version. What could we expect for higher dimensions? Would the low-rank approximation remain robust for a more significant gap with the actual rank?
- In my experience, several multi-output GP models (or maybe their implementations) are somewhat unstable when it comes to estimating $\textbf{K}_{task}$, and pathological cases can often arise. Have you experienced such problems? And if so, how did you manage to tackle this issue?
**Typos:**
Page 5: The indices of the CMD definition have a slight error. All terms are written as $R_1$, and the norm is 'f' instead of '$l_2$'
Page 6: The title of 4.2 "Cross-secitonal" ==> Cross-sectional
Page 7: "Both correlation matrices **displace** a block pattern" ==> display?
Page 7: "questions corresponding negative emotionality" ==> corresponding to
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: In my opinion, the limitations are adequately discussed. The application spectrum and position in the related literature are well documented.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer A4W7,
Thank you for your valuable feedback. Please see our responses below
> In this sense, could you provide numerical evidence and comparison regarding training and prediction times for IPGP and competitors?
Sure, please see below. We would, however, first like to note that runtime is typically not a primary concern in such analyses. The data we collected here required several months of time and over $20k in subject payments, so a few hours (or even a couple days) of one-time work is quite reasonable in light of that initial outlay if it provides additional insight.
The table below shows the average runtimes of IPGP and competitors on the simulation study data from the paper. IPGP does require more time to train due to the larger model space, but the runtime is still quite manageable even with a non-optimized reference implementation. We will add this table in the appendix and include some explicit discussion on runtime to the manuscript.
| Model | GRM | GPCM | SRM | GPDM | DSEM | TVAR | IPGP-NOM | IPGP-IND | IPGP-LOW | IPGP-NP | IPGP |
|---------------|------|------|------|------|------|------|----------|----------|----------|---------|-------|
| Avg runtime (sec) | 343 | 367 | 1398 | 17359| 311 | 468 | 17594 | 21562 | 10839 | 31150 | 30141 |
> Even though the number of tasks/topics remains probably limited in psychological applications, could you discuss a bit more, in Section 4.1 Result, the decrease in performance between full IPGP and the low-rank version. What could we expect for higher dimensions? Would the low-rank approximation remain robust for a more significant gap with the actual rank?
In general, a low-rank approximation inevitably lacks the model capacity of its full-rank counterpart, but this deficiency only matters up to the intrinsic rank of the data. Blindly increasing the rank beyond that point will not improve performance. In practice, when the rank is not known, a data-driven search could be performed to identify a suitable rank.
We ran an additional experiment during the rebuttal period using the simulation data from Section 4.1 in the manuscript where we varied the model rank in {2, 5, 8} (recall the true rank of the data was 5), repeating this simulation using 25 different random seeds. The results are summarized in the below table. Both the true rank (5) and high rank (8) models outperform the low rank (2) model, but there is no performance gain by increasing the rank more than necessary.
We will add this table and discussion to the appendix.
| Rank | Train Acc | Train LL | Test Acc | Test LL | CMD |
|------|--------|------------|-----------|-------|-------|
| 2 | 0.897 $\pm$ 0.004 | −0.313 $\pm$ 0.010 | 0.884 $\pm$ 0.005 | −0.334 $\pm$ 0.011 | 0.397 $\pm$ 0.007 |
| 5 | 0.957 $\pm$ 0.002 | −0.159 $\pm$ 0.005 | 0.942 $\pm$ 0.002 | −0.184 $\pm$ 0.006 | 0.128 $\pm$ 0.006 |
| 8 | 0.957 $\pm$ 0.002 | −0.161 $\pm$ 0.004 | 0.945 $\pm$ 0.002 | −0.183 $\pm$ 0.005 | 0.124 $\pm$ 0.006 |
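The saturation effect in these results can be illustrated with a small numpy sketch (our illustration, with hypothetical dimensions): by the Eckart–Young theorem, the best rank-$r$ approximation of a rank-5 covariance, obtained via truncated eigendecomposition, is exact once $r \geq 5$ and lossy below that:

```python
import numpy as np

rng = np.random.default_rng(0)
J, true_rank = 12, 5
B = rng.normal(size=(J, true_rank))
K_true = B @ B.T                    # ground-truth rank-5 task covariance

def best_rank_r(K, r):
    """Best rank-r approximation via truncated eigendecomposition."""
    vals, vecs = np.linalg.eigh(K)
    top = np.argsort(vals)[::-1][:r]     # keep the r largest eigenvalues
    return (vecs[:, top] * vals[top]) @ vecs[:, top].T

errors = {r: np.linalg.norm(K_true - best_rank_r(K_true, r))
          for r in (2, 5, 8)}
# errors[2] is substantial; errors[5] and errors[8] are numerically zero,
# mirroring the pattern that extra rank beyond the intrinsic one buys
# no additional capacity.
```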
> In my experience, several multi-output GP models (or maybe their implementations) are somewhat unstable when it comes to estimating $\mathbf{K}_{task}$, and pathological cases can often arise. Have you experienced such problems? And if so, how did you manage to tackle this issue?
We have not experienced such issues. The GPyTorch software that our multi-output GP implementation is based on is quite stable.
> Typos:
Thank you for pointing these out. We will fix them in revision.
---
Rebuttal 2:
Comment: Thank you for your thorough answers. I understand the technical constraints coming from such costly and time-consuming studies involving human subjects. While I acknowledge that this discussion about running time might be superfluous for your application, I always prefer to see beforehand 'how much exactly' it will cost me when I intend to test a method on my data, and I suspect that future readers would, too. This is especially true for methods like MTGP, which I know come at a non-negligible cost.
That being said, I commend the authors' efforts to provide additional evidence and thus improve the confidence one can grant this paper. More generally, I support the authors' view about 'building bridges' between ML methodologies and their applications. Although I consider myself more of a methodological researcher, I notice that we too often focus on 'novelty' by principle while neglecting meticulous and well-conducted applications. Such studies are crucial for advancing more experimental disciplines like psychology with all the rigour it deserves from ML researchers. We all agree that this paper aims not to *advance ML* vastly, but I appreciate that it *uses ML to advance*.
---
Rebuttal Comment 2.1:
Comment: > I always prefer to see beforehand 'how much exactly' it will cost me when I intend to test a method on my data, and I suspect that future readers would, too. This is especially true for methods like MTGP, which I know are coming at a not negligible cost.
This view resonates with us. We're more than happy to add this discussion and these results regarding running time to the paper to help complete the story. We appreciate your original comment and the invitation to run these additional experiments.
> That being said, I commend the authors' efforts to provide additional evidence and thus improve the confidence one can grant this paper. More generally, I support the authors' view about 'building bridges' between ML methodologies and their applications. Although I consider myself more of a methodological researcher, I notice that we too often focus on 'novelty' by principle while neglecting meticulous and well-conducted applications. Such studies are crucial for advancing more experimental disciplines like psychology with all the rigour it deserves from ML researchers. We all agree that this paper aims not to advance ML vastly, but I appreciate that it uses ML to advance.
We truly appreciate your support! | Summary: Gives a multi-task/output GP formulation for multiple time-series (or intrinsic co-regionalisation model) for pyschological assessments. Design of the factor loadings informs the task-correlations and reflects the knowledge of individual's correlation between his responses and inter-person correlations. More of an application paper rather than a paper advancing machine learning; but a good application nonetheless.
Strengths: 1. The application is a natural fit for the model chosen, and results are good and convincing.
2. The use of Bayes factors for model testing makes the paper a better reference for this application.
3. Sections 4.2 and 4.3 evaluate the model from different aspects.
Weaknesses: 1. The paper is weak from the perspective of advancing state-of-the-art machine learning algorithms.
2. The paper is totally unrelated to GPLVM and tasks that GPLVM are designed for. Mentions of GPLVM only confuses the reader.
3. Equation 3 needs fixing.
4. In section 3.2, the method is variational inference, and not stochastic variational inference.
5. Description of the setup in section 4.1 needs to be clearer to explain also in terms of $K_{task}$.
6. Line 192 mentions "informative prior". What is this "informative prior"?
*Minor*
7. Line 319: "addressing" is too strong. Suggest to change to "contributing".
8. Line 321: "than" -> "over"
Technical Quality: 3
Clarity: 3
Questions for Authors: I do not have any critical questions.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['Ethics review needed: Research involving human subjects']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer mvMV,
Thank you for your valuable feedback. Please see our responses below.
> The paper is weak from the perspective of advancing state-of-the-art machine learning algorithms.
Please see the shared rebuttal regarding the scope and nature of our contributions.
> The paper is totally unrelated to GPLVM and tasks that GPLVM are designed for. Mentions of GPLVM only confuses the reader.
Please allow us to elaborate on this claim. We believe that IPGP is spiritually related to GPLVM in that both involve the estimation of low-dimensional latent variables in order to gain more insight into high-dimensional observations. We agree that there is considerable departure in how that estimation proceeds. Further, reviewer A4W7 commented on the necessity of including GPLVM in the related work: “I think its similarities and differences with GPLVM and GPDM are well presented, and the overall application is far from trivial regarding the nature of data”.
> Equation 3 needs fixing.
Indeed, there is a misplaced transpose; it should be $W'W + ww'$, where $W$ is a $K\times J$ matrix and $w$ is a $J\times 1$ column vector. We will update accordingly. (Please indicate if you had something else in mind.)
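For concreteness, a minimal numpy sketch of the corrected expression (the shapes follow the rebuttal's statement; the variable names and dimensions are our illustration, not the paper's code):

```python
# Corrected Equation 3: the task covariance is K_task = W'W + ww',
# combining the shared K x J population loading matrix W with one
# individual's J x 1 loading vector w.
import numpy as np

rng = np.random.default_rng(0)
K, J = 5, 12                        # latent dimensions, survey items
W = rng.normal(size=(K, J))         # shared population loadings (K x J)
w = rng.normal(size=(J, 1))         # individual-specific loadings (J x 1)

K_task = W.T @ W + w @ w.T          # J x J task covariance

# Symmetric and PSD by construction: a Gram matrix plus a rank-one
# update, so it is a valid coregionalization (task) kernel.
```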
> In section 3.2, the method is variational inference, and not stochastic variational inference.
Thank you for pointing this out. We will revise accordingly.
> Description of the setup in section 4.1 needs to be clearer to explain also in terms of $K_{task}$.
Thank you for pointing this out. In Section 4.1 we describe the setup of the shared interpersonal loading matrix $W_\text{pop}$ and the unit-specific loadings $w_i$, which are then used to construct the $K_\text{task}$ matrix according to Equation 3. We will make this more explicit in revision.
> Line 192 mentions "informative prior". What is this "informative prior"?
The informative prior refers to the interpersonal loading matrix $W_\text{pop}$ estimated from the standard cross-sectional data (the LOOPR data), which is used to reduce the number of hyperparameters of the full idiographic kernel from $KJ + nJ$ (for $J$ items, $K$ dimensions, and $n$ individuals) to $nJ$. Practically, the model training procedure includes two steps: (1) learning $W_\text{pop}$ using cross-sectional data, and then (2) learning the rest of the hyperparameters while holding $W_\text{pop}$ fixed. We show in the simulation that IPGP achieves more precise estimation of individual taxonomies with this stronger prior (see Table 1).
> (Minor) Line 319: "addressing" is too strong. Suggest to change to "contributing". Line 321: "than" -> "over"
Thank you for the suggestions; we will adopt them in revision.
---
Rebuttal Comment 1.1:
Comment: I think it is a matter of interpreting "bridge". It is clearly a "bridge" in terms of bringing new applications in. It is not a "bridge" in terms of bringing new ideas to advance ML --- an example of which is Random Matrix Theory.
For GPLVM, I am agreeable to it being mentioned in related work. Claiming in the introduction that you "advances on the ... GPLVM" is simply too much for me to take.
I may reconsider the score during the reviewer discussion stage.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response!
> Claiming in the introduction that you "advances on the ... GPLVM" is simply too much for me to take.
We are happy to rephrase this passage, in particular to avoid the acronym GPLVM entirely. We intended to indicate that our model was among a family of spiritually related models incorporating latent variables and Gaussian processes in their construction (it is "a" latent variable GP model), not that we were advancing "the" famous GPLVM model from Lawrence. | Summary: 1. This paper introduces an innovative measurement framework utilizing the Gaussian process coregionalization model to resolve the question of whether psychological attributes such as personality exhibit a universal structure among the populace or are uniquely individualized.
2. It proposes the Idiographic Personality Gaussian Process (IPGP), a hybrid model that accounts for both the commonality of traits across people and individual-specific "idiographic" variations.
3. IPGP uses a Gaussian process coregionalization model to interpret responses from grouped survey batteries, adapts it for non-Gaussian ordinal data, and employs stochastic variational inference to estimate the latent factors.
4. The application of IPGP on both synthetic data and an original survey demonstrates its performance.
Strengths: 1. The paper’s main innovation lies in its unique combination of multitask Gaussian process, which results in a conceptualization of the subject matter.
2. The innovative use of multitask Gaussian process in this paper offers a promising solution to a long-standing challenge of psychological assessment.
3. The quantitative and qualitative methods in this paper provide a potential in advancing psychological diagnosis and treatment.
4. The interdisciplinary nature of the research is interesting.
5. The authors provide a holistic view of the issue at hand.
Weaknesses: 1. There are many MTGPs, but the authors ignore comparisons with them.
2. Incorrect descriptions, such as: "multi-task structure is also known as the linear model of coregionalization (LMC)".
3. This represents a particular implementation of MTGP, but it doesn't introduce any novel methodological advancements.
4. The paper presents a valuable contribution, but its innovative aspects are limited, as it largely builds upon existing theories without introducing significant new insights.
5. The literature review appears somewhat limited. Expanding it to include more recent or relevant MTGPs could provide a more comprehensive context for the research.
6. The statistical analysis used in the study seems inadequate given the complexity of the data.
7. The research seems to be more of an incremental advancement rather than a novel contribution, which may limit its impact on the field.
8. The experimental evidence provided in the paper is not as comprehensive as it should be to support the claims made, suggesting a need for more extensive testing.
9. Only one experiment is related to psychological assessment.
10. No idiographic personality is discovered and presented in the paper.
Technical Quality: 2
Clarity: 3
Questions for Authors: Please see the weakness above
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Please see the weakness above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer YTHZ,
Thank you for your valuable feedback. Please see our responses below.
> there are many MTGPs, the author ignored comparison with them
To our knowledge, Duerichen et al. [17] is the only existing MTGP model in the behavioral literature for multivariate physiological time-series analysis, but it focuses on modeling responses directly rather than uncovering the latent structure, as we do here. So the construction of additional MTGPs for this setting is a novel contribution of its own. Note that we do include several ablated models in Tables 1-3 that are variants of MTGPs.
> incorrect description, such as: "multi-task structure is also known as the linear model of coregionalization (LMC)".
Indeed, the linear model of coregionalization is only one (quite common) realization of the multi-task GP framework. We will reword accordingly.
> This represents a particular implementation of MTGP, but it doesn't introduce any novel methodological advancements.
Please see the shared rebuttal regarding the scope and nature of our contributions.
> The paper presents a valuable contribution, but its innovative aspects are limited, as it largely builds upon existing theories without introducing significant new insights.
Please see the shared rebuttal regarding the scope and nature of our contributions.
> The literature review appears somewhat limited. Expanding it to include more recent or relevant MTGPs could provide a more comprehensive context for the research.
MTGPs are underexplored in the behavioral literature. We did include a discussion on existing MTGP models for multivariate physiological time-series analysis [Duerichen, et al]. Outside the field of psychology, MTGPs have been recently applied in causal inference [1,2,3], environmental science [4, 5, 6] and biomedical research [7]. We will add these related works to the literature review.
[1] Aglietti et al. NeurIPS 2020
[2] Alaa and van der Schaar. NeurIPS 2017
[3] Chen, et al. AISTATS 2023
[4] Zhou, et al. Journal of Cleaner Production doi:10.1016/j.jclepro.2020.124710
[5] Li, et al. Measurement doi:10.1016/j.measurement.2021.110085
[6] Dahl and Bonilla. Machine Learning doi:10.1007/s10994-019-05808-z
[7] Zhang, et al. Journal of Biomedical Informatics doi:10.1016/j.jbi.2022.104079
> The statistical analysis used in the study seems inadequate given the complexity of the data.
We used paired t-tests for model evaluation in Table 1 and likelihood-ratio tests in Tables 2-3, both of which are standard tools for statistical hypothesis testing. We are happy to continue this conversation. Could you please provide more details regarding your concern (for example, an alternative approach or a specific deficiency in ours) so that we can offer meaningful additional comments during the author-reviewer discussion phase?
> The experimental evidence provided in the paper is not as comprehensive as it should be to support the claims made, suggesting a need for more extensive testing.
We summarize our claims and empirical evidence that supports the claims here:
1. IPGP is a novel psychological assessment model that improves both prediction of actual responses and estimation of individualized factor structures relative to existing benchmarks (see Tables 1-2 for evidence).
2. Substantive deviations from the common psychological structure persist in a considerable number of individuals, as IPGP is decisively favored over the nomothetic baseline (see Table 3 for model comparison and Figure 4 for our learned idiographic profiles).
We are happy to continue this conversation. Could you please provide more details regarding your concern (for example, an alternative approach or a specific deficiency in ours) so that we can offer meaningful additional comments during the author-reviewer discussion phase?
> Only one experiment is related to psychological assessment.
Our paper includes three experiments: a simulation study, a re-analysis of a large cross-sectional dataset (itself a psychological assessment), and a pilot study of repeated measures of the Big Five personality traits (a second, separate assessment). We note that collecting this pilot study was itself an intensive investment requiring multiple months and over $20k in subject payments. Repeated assessment datasets of this type are not widely shared to protect human subjects, which is why we had to collect our own.
> No idiographic personality is discovered and presented in the paper.
Figure 4 presents four residual correlation matrices w.r.t. the main population, each highlighting one distinct profile of idiographic personality. Selected full correlation matrices of idiographic personality types can also be found in Appendix B. Further, the results in Table 3 show clearly that the model that allows for idiographic personality structure is strongly favored.
---
Rebuttal 2:
Comment: Employing other advanced MTGPs for this psychological task is straightforward. However, the author intentionally abandoned more comparisons with other MTGPs.
The innovation of the method proposed in this paper is quite limited.
This article makes a very small contribution to the field of Gaussian processes
Compared to state-of-the-art methods, its advancements and advantages are minimal.
As this is an applied work, I suggest the authors submit their paper to a journal focused on bioinformatics.
---
Rebuttal Comment 2.1:
Comment: Thank you for your response, although we respectfully disagree.
We strongly believe work such as ours has a place at this conference.
> As this is an applied work, I suggest the authors submit their paper to a journal focused on bioinformatics.
The call for papers explicitly encourages submissions of applications and highlights the participation of diverse communities beyond core ML:
> [NeurIPS] brings together researchers in machine learning, neuroscience, statistics, optimization, computer vision, natural language processing, life sciences, natural sciences, social sciences, and other adjacent fields.
> We invite submissions presenting new and original research on topics including but not limited to the following:
> - Applications
> [...]
> - Machine learning for sciences (e.g. climate, health, life sciences, physics, social sciences) [...]
We do not believe it is in the spirit of the call for papers to dismiss applied work out of hand.
> The innovation of the method proposed in this paper is quite limited.
> This article makes a very small contribution to the field of Gaussian processes
The reviewer guidelines explicitly encourage diversity in contributions beyond purely methodological improvements to (MT)GPs:
> There are many examples of contributions that warrant publication at NeurIPS. These contributions may be theoretical, methodological, algorithmic, empirical, connecting ideas in disparate fields (“bridge papers”), or providing a critical analysis.
Our contributions here required both considerable expertise in GP modeling (including the expertise required to build several necessary innovations highlighted by reviewer A4W7) and considerable and sustained engagement with another domain to ensure success. Our strong results reflect a sophisticated model shedding light on an important psychological question.
We do not believe it is in the spirit of these guidelines to dismiss our non-methodological (empirical, bridging communities mentioned in the CFP, etc.) contributions out of hand -- especially as they are supported by methodological contributions as well! | Summary: UPDATE: I am updating my scores in light of the excellent authors' response.
This paper considers psychometric data composed of ordinal responses $y_{ijt}$, each of which represent how unit _i_ answered survey item _j_ during time period _t_. The key characteristic of this item-response data is that the same $N$ units are longitudinally surveyed on the same $J$ items over $T$ periods.
The paper takes an ordered logit factor modeling approach to analyzing such data, wherein $f_j^{(i)}(t) = \mathbf{w}_j^\top \mathbf{x}_i(t)$ is the latent ideal point of unit _i_ on item _j_ at time _t_, and $\mathbf{w}_j$ and $\mathbf{x}_i(t)$ are the $K$-dimensional loadings and factor vectors, respectively.
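As a minimal numeric sketch of this ordered-logit setup (the dimensions, loadings, and cutpoints below are all hypothetical, not taken from the paper):

```python
import numpy as np

# Hypothetical sizes: K latent factors, C ordinal response levels
K, C = 3, 5
rng = np.random.default_rng(1)

w_j = rng.normal(size=K)   # loadings for item j
x_it = rng.normal(size=K)  # factor vector for unit i at time t
f = w_j @ x_it             # latent ideal point f_j^{(i)}(t)

# Ordered-logit response probabilities from C-1 increasing cutpoints
cuts = np.array([-2.0, -0.7, 0.7, 2.0])
cdf = 1.0 / (1.0 + np.exp(-(cuts - f)))  # P(y <= c) at each cutpoint
probs = np.diff(np.concatenate(([0.0], cdf, [1.0])))
```

`probs` is a valid distribution over the C response levels because the sigmoid of increasing cutpoints is itself increasing.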
The paper moreover advocates an "idiographic approach [which] emphasizes _intrapersonal_ variation by requiring distinct loadings $\mathbf{w}_j^{(i)}$" which are different for each unit $i$. It is able to effectively achieve this by exploiting the repeated measurements of each unit-item $(i,j)$ pair over different time periods $t$.
The proposed model is an instance of a multi-task Gaussian process (MTGP). Intrapersonal variation is modeled using a unit-specific kernel $K_{time}^{(i)}$. The full covariance is then $JT \times JT$, resulting from a Kronecker product of $\mathbf{K}_{\textrm{time}}^{(i)}$ (applied to the input $T$ periods) with a low-rank $J \times J$ matrix that represents covariance between survey items (or "tasks"). This all extends the previously-presented linear model of coregionalization (LMC), which was originally developed for the simpler case where observations of $(i,j)$ pairs are not repeated over time.
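A toy sketch of this Kronecker construction (the sizes and kernel choices here are illustrative assumptions, not the paper's actual configuration):

```python
import numpy as np

def rbf_kernel(t, lengthscale=1.0):
    # Unit-specific temporal kernel K_time^{(i)} over T time points
    d = t[:, None] - t[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

# Hypothetical sizes: J survey items, T periods, K latent dimensions
J, T, K = 4, 5, 2
rng = np.random.default_rng(0)

W = rng.normal(size=(J, K))    # item loadings (rank K)
B = W @ W.T + 0.1 * np.eye(J)  # low-rank J x J task ("item") covariance
K_time = rbf_kernel(np.arange(T, dtype=float), lengthscale=2.0)

# Full JT x JT covariance for one unit via the Kronecker product
K_full = np.kron(B, K_time)
```

Both factors are positive semi-definite, so the Kronecker product yields a valid $JT \times JT$ covariance.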
The paper derives a variational inference algorithm for the model, which follows closely from previous work.
The paper then reports three sets of experiments: 1) a synthetic study that reports excellent results on parameter recovery (when the true parameters are known), 2) a re-analysis study that purports to show that a non-dynamic (i.e., non-idiographic) version of proposed model is able to identify the correct factor structure (K=5) from survey data designed to assess the "Big Five" psychometric traits, and 3) an illustrative case study involving a novel longitudinal data set.
Strengths: The paper is very well-written. It brings an interesting application to life and convinces the reader that the proposed modeling approach is well-tailored to the problem at hand. The paper gives a clear review of the prerequisite concepts and presents its modeling approach clearly.
The paper covers related work in psychometrics well and is convincing that the modeling approach is novel within that applied community. Experiments bear out this claim, as the proposed model performs much better than models which are currently used in psychometrics.
The paper presents a novel longitudinal survey dataset composed of $N=93$ subjects who were given personality assessment surveys over the course of three weeks. If I am reading the paper correctly, each subject was asked to complete a survey _six times per day_. This data sounds highly non-trivial to collect, and should be considered a main contribution in itself.
Weaknesses: The paper is vague about its technical contributions. It makes the following hedged novelty statement: "the first multi-task GP latent variables model _for dynamic idiographic assessment_". I took this to mean that the proposed approach is not very new technically, but it has never yet been applied to dynamic idiographic assessments. However, the paper also makes the general claim that it "advances the literatures on Gaussian process latent variable models...". I am of the opinion that tailoring existing modeling frameworks to new applications does provide a technical contribution, as it contributes a new view and set of interpretations/metaphors that can help to better understand the abstract model. I think the paper does contribute in this way. But I am unsure if the paper further contributes more substantially to the area of Gaussian process latent variable models, as the paper is vague about that.
There is sloppiness in some of the main equations. For instance, equation 2 confuses a distribution with a random variable, stating $p(\mathbf{f}^{(i)}) \sim \cdots$, and I believe equation 3 confuses an inner with an outer product (shouldn't it be $w_i w_i^\top$?). There's also some confusing overloading of symbols between the background and the model sections. The background sets up the idea that an idiographic approach has distinct loadings $w_j^{(i)}$ that differ across $i$. But the proposed model itself does not seem to sport this, as each $w_j$ is global for each task. I believe the model effectively achieves an idiographic interpretation through the unit-specific kernel, but the connection between the background and model sections is not clear.
I think the second experiment has some major flaws. The paper claims this experiment validates the "Big Five" because model performance peaks at $K=5$. However, the model is only fit for $K=1...5$. We can clearly see from the log likelihood numbers that model performance for IPGP is monotonically increasing in $K$ for the values considered, suggesting that it will likely be even better at larger values of $K$; if that were true, it would not validate the "Big Five" theory. As this is an applied paper, I would expect a much higher level of rigor on one of the two applied case studies.
Technical Quality: 3
Clarity: 3
Questions for Authors: As this is an emergency review, after the reviewer discussion period, I am not sure if the authors will be able to reply. But if they are able, I would just ask that they respond to the various points made in previous sub-sections.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: This paper did involve a fairly intense human subjects experiment for data collection, but the paper reports it was IRB-approved. Psychometrics of this kind are fairly common; I do not think the paper introduces any new potential negative societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | null | Rebuttal 1:
Rebuttal: Two reviewers commented on the nature of our contributions. We post a shared comment on the scope of our contributions here.
We would like to first stress that our contributions are not merely theoretical, but that our work also represents applied machine learning for science. Both applications and ML for science are listed as topics of interest in the call for papers https://neurips.cc/Conferences/2024/CallForPapers. We would also like to highlight the following language in both the call and the reviewer instructions:
> There are many examples of contributions that warrant publication at NeurIPS. These contributions may be theoretical, methodological, algorithmic, empirical, connecting ideas in disparate fields (‘bridge papers’), or providing a critical analysis.
A primary contribution of our work is in “bridging” between advances in (MT)GP modeling and the disparate field of psychological assessment and diagnosis. Building this bridge required significant engagement from another community and significant care in modeling to address the nuances and challenges presented by this setting (to quote reviewer A4W7: “categorical outputs, latent variables, missing data, ...”). Indeed, reviewer A4W7 commented further on the nature -- and relative strength -- of our contributions in this light:
> One could argue that the methodological novelty is limited as the presented multi-output GP model is fairly well-known in the GP community. However, I think its similarities and differences with GPLVM and GPDM are well presented, and the overall application is far from trivial regarding the nature of data (categorical outputs, latent variables, missing data, ...). This relative weakness is more than compensated by the strength of results and the potential such a methodology brings in a field like psychology, where the signal-on-noise ratio is generally low, and the nature of measurements is often challenging.
To provide our contributions here a bit more context, and underscore the importance of the bridge we build here, there is an underlying debate in psychology that we are addressing with this work: whether (i) all individuals have a shared personality structure, (ii) all individuals have a (unique) idiosyncratic structure, or (iii) something in between. The IPGP framework offers a powerful new tool to engage in this debate as each of these scenarios can be modeled with a special case of our model. Bayes factors (and predictive capacity) then allow us to measure the relative merit of these claims in a principled and data-driven manner.
Our experimental results reaffirm the common Big Five personality model through a factor analysis study in Table 2, and show that the idiographic model is decisively favored to the nomothetic model in Table 3. Moreover, we identify distinct personality profiles that substantially differ from the interpersonal commonality (see Figure 4). Our results therefore provide a significant challenge to dominant paradigms of personality by showing evidence for an intermediate outcome where there is neither a perfectly shared structure nor a perfectly idiosyncratic structure, but rather structured idiosyncratic deviations from a common baseline. We believe that our methodological approach sets the stage for significant advancements in theorizing, evaluation, and (eventually) clinical care in the psychological domain. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Online Iterative Reinforcement Learning from Human Feedback with General Preference Model | Accept (poster) | Summary: This paper explores RLHF with a general preference model. Specifically, the authors formulate a learning objective aimed at identifying a policy consistently preferred by the KL-regularized preference oracle. Furthermore, the authors propose a sample-efficient training algorithm applicable to both online and offline learning scenarios. The empirical studies conducted validate the effectiveness of the proposed framework.
Strengths: 1. This paper delves into RLHF under a general preference oracle capable of capturing non-transitive preferences.
2. The authors introduce an efficient algorithm applicable to both offline and online settings.
3. The work is theoretically sound, and the authors also offer practical implementations.
4. Experimental results demonstrate the effectiveness of the proposed method.
Weaknesses: 1. It would be beneficial if the authors could include examples of the prompts used for training the preference model, as well as responses generated by different methods. This addition would offer readers a clearer understanding of the practical application and comparative effectiveness of the proposed approach.
2. The reproducibility of the proposed method is not fully clear. Providing a detailed description of the entire framework or algorithm would greatly assist in understanding and replicating the research findings.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Is Assumption 1 considered mild in real-world applications?
2. How does the preference model outperform the Bradley-Terry (BT) model, given that their training objectives in Equations 2 and 7 appear to be similar?
3. Why does the proposed method outperform the baseline in the offline setting despite suffering from a severe overoptimization problem?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your positive feedback and constructive suggestions! Our responses are as follows.
**Weakness 1** Include examples of the prompts used for training the preference model and responses generated by different methods.
Sure. Some example prompts are provided as below
```markdown
What is the ideal temperature to serve a crisp Chardonnay?
is there xor method in vb?
Define cryptography? Why we need cryptography?
how to use viewdid load in swiftui macos xcode 14
```
We also provide some example responses as below.
```markdown
What are the names of some famous actors that started their careers on Broadway?
DPO:
1. Hugh Jackman
2. Meryl Streep
3. Nathan Lane
4. Angela Lansbury
5. Bernadette Peters
6. Liza Minnelli
7. Audra McDonald
8. Philip Seymour Hoffman
9. Laurence Olivier
10. Kelsey Grammer
11. Richard Gere
12. Liev Schreiber
13. Mark Ruffalo
14. Julie Andrews
15. James Earl Jones
16. Whoopi Goldberg
17. Christopher Plummer
18. Robert Redford
19. Angela Bassett
20. Helen Mirren.
IPO:
Some famous actors that started their careers on Broadway include:
1. Hugh Jackman
2. Meryl Streep
3. Viola Davis
4. Denzel Washington
5. Mark Ruffalo
6. Matthew Broderick
7. Jennifer Holliday
8. Audra McDonald
9. Nathan Lane
10. Bernadette Peters
11. James Earl Jones
12. Angela Lansbury
13. Blythe Danner
14. Richard Dreyfuss
15. Cicely Tyson
Ours:
Here are some famous actors who started their careers on Broadway:
1. Meryl Streep - Before becoming one of the most acclaimed actors in Hollywood, Meryl Streep began her career on Broadway in the early 1970s. She made her debut in the play "The Play About the Baby" in 1971 and went on to star in several other productions, including "Trelawny of the 'Wells'" (1975), "Henry IV, Part 1" (1976), and "Sunday in the Park with George" (1984).
2. Hugh Jackman - Hugh Jackman made his Broadway debut in the musical "Oklahoma!" in 1990, playing the role of Curly. He went on to star in several other Broadway productions, including "Carousel" (1994), "The Iceman Cometh" (1999), and "The Boy from Oz" (2003), for which he won a Tony Award for Best Actor in a Musical.
3. Angela Lansbury - Angela Lansbury began her career on Broadway in the 1940s, starring in classic productions such as "Mrs. Warren's Profession" (1947), "The Picture of Dorian Gray" (1948), and "The Three Sisters" (1956). She won her first Tony Award for Best Actress in a Musical in 1966 for her performance in "Mame" and went on to win five more Tonys throughout her career.
4. Audrey Hepburn - Audrey Hepburn made her Broadway debut in the play "The Philadelphia Story" in 1956, playing the role of Tracy Lord. She went on to star in several other productions, including "Ondine" (1954) and "Tea and Sympathy" (1953), before transitioning to film.
```
We omit some content for our answer because of the limit of characters in our response.
It is clear that our proposed algorithm can enhance the helpfulness and engagement of the responses.
**Weakness 2** The reproducibility of the proposed method is not fully clear.
**We will make the description more detailed and release codes in the revised version.** We have 15K prompts for each iteration and perform 2 iterations for our algorithms. For each iteration, we have 2 epochs to train the policy model. For the exploration enhancer, we use the rejection sampling. The learning rate is 2e-7 with batch size 128.
**Question 1** Is Assumption 1 mild in real-world applications?
Yes. This assumption is quite standard in the RL literature. In practice, from Lines 122-123, we can allow an infinite function class as long as it has a finite covering number. Besides, when the capacity of the large language model is sufficiently large, it is very likely to contain the true model $P^*$.
**Question 2** How does the preference model outperform BT model, given that their training objectives in (2), (7) appear to be similar?
We want to clarify that the training objectives in Equations 2 and 7 are not similar, despite both maximizing an MLE objective. In (7), we directly learn the general preference $P(x,a^1,a^2)$ without assuming any specific reward structure, such as the BT model. This results in a preference function class with a much larger capacity. In contrast, (2) characterizes each prompt-response pair by the reward $R(x,a)$ and assumes that the preference between two responses follows the BT model. The BT model imposes strong assumptions, which may not fully capture complex human preferences, such as intransitivity (see Lines 44-50).
Overall, our training objective learns a more general preference function class, while Equation 2 only considers a preference function following the BT model. Therefore, our approach applies to more general cases.
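To make the distinction concrete, here is a minimal sketch (our own illustration, not code from the paper) of the two MLE terms, plus the classic intransitive example that a general preference model can represent but no BT reward can:

```python
import math

def bt_nll(r1, r2, label):
    # Bradley-Terry MLE term: assumes P(a1 > a2 | x) = sigmoid(r(x,a1) - r(x,a2))
    p = 1.0 / (1.0 + math.exp(-(r1 - r2)))
    return -math.log(p if label == 1 else 1.0 - p)

def general_pref_nll(p_model, label):
    # Direct preference MLE term: learns P(x, a1, a2) with no reward structure
    return -math.log(p_model if label == 1 else 1.0 - p_model)

# A general preference model can represent intransitive (rock/paper/scissors)
# preferences, e.g. P(a>b) = P(b>c) = P(c>a) = 0.9, while no scalar reward
# under the BT model can, since it would require r_a > r_b > r_c > r_a.
P = {("rock", "scissors"): 0.9, ("scissors", "paper"): 0.9, ("paper", "rock"): 0.9}
```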
**Question 3** Why does the proposed method outperform the baseline in the offline setting despite suffering from a severe overoptimization problem?
We reiterate that our algorithm should be less prone to the overoptimization problem due to better exploration and preference signals. Given the same data sample pairs, offline datasets only provide signals from the offline distribution, while our algorithm has a better exploration strategy (equations (11) and (12)), so the overoptimization problem should be less severe.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. My concerns have been addressed. Accordingly, I've increased my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your reply and increasing your score! We are happy to see that our response addressed your concerns. Thanks again for your valuable feedback. | Summary: This paper introduces a two-play game approach to the RLHF problems. Both offline and online algorithms are studied, including a combination of pessimism in offline learning and exploration/exploitation in the online setting. The authors conducted experiments to verify the effectiveness of their algorithm.
The paper is mostly well-written and clear, yet the experimental verification is not strong enough to support the paper's claims.
Strengths: The paper is well-written, and I enjoyed reading it. The arguments are well supported by literature, and the authors are clearly knowledgeable and did well in presenting their ideas.
Weaknesses: Mainly on novelty and soundness of empirical study.
Technical Quality: 3
Clarity: 2
Questions for Authors: Could the authors elaborate more on the experiment details? From the current write-up, it is relatively hard for readers to capture how the training process is in either online or offline settings. Specifically,
1) how are the reward models trained? In Appendix A.2 the authors mention the Bradley-Terry RM and the oracle RM; is the oracle RM trained through next-token prediction with max length = 1 such that the probability of A is the score?
2) with those reward models, what is the exact policy optimization method used in the experiments (this is especially unclear for the offline setting).
3) Could the authors explain what are the motivations for the special choices in Section 5. Are there other alternatives?
The empirical results are impressive, however, I have some concerns regarding the fairness in comparisons:
1) the authors choose to compare mainly against offline methods, this seems not supportive enough for the proposed algorithm: could the authors also compare to other online RLHF algorithms? Only through this set of experiments will we be able to draw conclusions on the effectiveness of the proposed method --- rather than the superiority of online approaches over offline.
2) I reckon the implementation and reproduction of existing algorithms is not easy --- even only changing the dataset. Nonetheless, I would be keen to see some experiments regarding on how much effort would be needed in getting a well-performing algorithm. In other words, would it be easier to perform hyper-param sweeping for the proposed method than IPO / DPO (and their online alternatives)?
Ablation studies are required to draw conclusions of each element in the authors' empirical designs. For instance, the effectiveness of the pessimism objective, the necessity of each agent in the optimization problem (i.e., the min-player and its exploration.)
Regarding the novelty, could the author contrast the following literature (most of those are cited), and distinguish their contribution? I use [ - ] to denote their main message that is related to this paper:
[online RLHF is better than offline] https://arxiv.org/abs/2402.04792
[Nash learning from direct preference] https://arxiv.org/abs/2312.00886
[general preference optimization] https://arxiv.org/pdf/2402.05749
[self-play in RLHF] https://arxiv.org/abs/2401.01335
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Please refer to the question section. My current score is mainly based on the soundness of the experiments.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your great efforts in reviewing our paper and constructive comments!
**Question 1** Experiment details:
1. how reward models are trained? Is oracle RM trained through next token prediction and with max length = 1 such that probability of A is the score?
We will add more details to improve the readability. For the BT model, we remove the last layer of the LLaMA3-8B-it model and add a linear layer to construct the reward model, training it by maximizing the log-likelihood on the preference dataset. For the oracle, yes, the training is conducted via next-token prediction. The problem is formulated as an instruction-following task, where we mask the prompt [CONTEXT] {x} [RESPONSE A] {a1} [RESPONSE B] {a2} and only compute the loss on the predicted token A or B. At inference time, we re-normalize the probability over the two tokens and switch the positions of the two input responses to mitigate position bias.
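A tiny sketch of this inference step (the function and probability values below are hypothetical, for illustration only):

```python
def pairwise_preference(p_A, p_B, p_A_swapped, p_B_swapped):
    # Re-normalize the next-token probabilities of tokens "A" and "B",
    # then average over both response orderings to mitigate position bias.
    p_first = p_A / (p_A + p_B)
    # With the responses swapped, token "B" points at the original first response.
    p_first_swapped = p_B_swapped / (p_A_swapped + p_B_swapped)
    return 0.5 * (p_first + p_first_swapped)

# e.g. pairwise_preference(0.6, 0.2, 0.3, 0.5) -> 0.6875
```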
2. Exact policy optimization method used in experiments (especially for the offline setting)
For the online setting (Lines 234-251), in each iteration we first learn the main policy by optimizing the self-play IPO loss in equation (13). Then, we learn the enhancer by using rejection sampling to choose the policy that is most uncertain with respect to the main policy. For the offline setting, when the offline dataset has ideally good coverage (uniform coverage over all policies), we can learn the policy via self-play IPO. However, such coverage is hard to guarantee, so we focus on the online algorithm and validate in Table 3 that it performs much better than offline alternatives.
3. Motivations for the special choices in Section 5
Our current choices match best with our theoretical insights. The main player in Algorithm 2 is the Nash equilibrium from Oracle 2; practically, we apply self-play IPO to approximate this oracle. For the enhancer, we apply rejection sampling because equation (12) indicates that we need to maximize the uncertainty w.r.t. $\hat{\pi}_t^1$ within the confidence set. Since it is unknown how to compute this uncertainty for an LLM, we practically use $\hat{\pi}_t^1$ to randomly generate $n$ samples, which are regarded as the confidence set. Then, we choose the best response in the confidence set, which aligns with the intuition of making the main agent and the enhancer more diverse. This process is exactly rejection sampling.
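As a hedged sketch (the paper's exact objective is its equation (13), which is not reproduced here), the standard IPO regression loss that self-play training could build on looks like the following; `tau` denotes the KL-regularization strength, and the log-probabilities are assumed to come from the current policy and the reference policy:

```python
def ipo_pair_loss(logp_w, logp_l, logp_ref_w, logp_ref_l, tau):
    """Standard IPO loss for one (preferred, dispreferred) pair:
    regress the log-ratio margin h onto the target 1/(2*tau)."""
    h = (logp_w - logp_ref_w) - (logp_l - logp_ref_l)
    return (h - 1.0 / (2.0 * tau)) ** 2
```

The loss vanishes exactly when the log-ratio margin hits the target; e.g., with `tau = 0.5` a margin of 1.0 gives zero loss.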
**Question 2** Some concerns regarding the fairness in comparisons for empirical results.
We answer this question from different aspects.
- **Fairness:** Our comparison with offline benchmarks is relatively fair since our online algorithm does NOT require more preference label queries than the offline version. Specifically, we used 30K prompts divided into 2 iterations.
- **Focus on Theory:** This paper primarily addresses theoretical aspects and proposes a framework to handle general preferences. We newly ran an iterative DPO baseline, which achieves a 14.37 win rate compared to our algorithm's 17.67. While our algorithm outperforms DPO, exploring the advantages of general preferences over the BT model in practical scenarios (e.g., complex tasks like math and reasoning) has great potential for future work.
- **Reproducibility:** The reproduction of our online algorithm is reliable and similar to other IPO/DPO algorithms. Some concurrent online DPO/IPO studies mainly emphasize empirical performance; our paper provides robust theoretical support for these empirical findings. For hyperparameter sweeping, our algorithm should likewise be similar to other online alternatives.
**Question 3** Ablation studies.
We will include more ablation studies in our paper. We illustrate some early results below:
| Ablation | Full algorithm | Without enhancer exploration | Without preference |
| --- | --- | --- | --- |
| AlpacaEval Win-Rate | 17.7 | 13.9 | 16.6 |
We highlight that the benefits from exploration align with other empirical studies, such as RLHF Workflow, which used rejection sampling as the exploration enhancer.
**Question 4** Contrast following literature
We will cite the rest and compare our work with them as follows. We denote these papers as:
- [i] [online RLHF is better than offline]
- [ii] [Nash learning from direct preference]
- [iii] [general preference optimization]
- [iv] [self-play in RLHF]
Compared to these works focusing on planning (computing the optimal policy for a fixed model), our work considers the learning problem, which sits on top of these planning algorithms and further learns the optimal policy for the ground-truth model by interacting with humans and the environment via exploration strategies.
[i] shows that online and on-policy variants outperform offline counterparts, but the experiments are conducted with a fixed LLM for AI feedback. Therefore, no learning or exploration is considered in their work. Moreover, the paper is primarily empirical.
[ii] also studies Nash learning and proposes a mirror-descent-based approach. Notably, sampling from the mixture of the reference policy and the current policy (see Eq. 3 in [ii]) is challenging in practice due to the extremely large response space of LLMs. Our algorithm employs a self-play strategy and does not require sampling from the geometric mixture, making it more practical. Further, [ii] focuses on the planning problem, whereas we address the learning problem and use exploration. Thus, the two works are complementary.
[iii] proposes a unified framework with different loss functions and is also consistent with the results in [i] where DPO, IPO, and Slic perform similarly. They do not consider general preference oracle, learning, or exploration.
[iv] studies a self-play algorithm under the BT model to approximate the distribution of the offline dataset. However, on standard benchmarks like AlpacaEval, even a 7B model can beat GPT-4 in many cases. In contrast, our work relies on an external general preference oracle, and our goal is to learn the Nash policy under this oracle.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' detailed responses. I would encourage the authors to include those discussions in the revision of their paper.
I've increased my score accordingly.
---
Reply to Comment 1.1.1:
Comment: Thank you for your reply and for increasing your rating! We will integrate these discussions into the next revision. | Summary: The paper considers the problem of RLHF under a general preference model, going beyond the reward-based Bradley-Terry model. In particular, they cast the problem as a KL regularized minimax game between two LLMs, and show that their framework is strictly more general than reward-based RL. They propose algorithms for both offline and iterative online learning under this framework, with guarantees. Finally, they run experiments comparing their work to existing RLHF algorithms like DPO and IPO.
Strengths: 1. One of the biggest strengths of this paper is its exposition and transparency. While certain nuances might be missed, it is clear that the authors have tried to convey their reasoning behind every conscious choice, and elaborate on the reasoning behind many folklore choices too.
2. The idea of deriving algorithms under a general preference model, while not radical, is both natural and formalizes the common technique of using larger LLMs to get preferential feedback for training smaller LLMs.
3. Going from a theoretically efficient but intractable algorithm to their tractable version requires novel thought, since the translation between the two here isn't as straightforward as it is in general.
4. I appreciate that they also tested on an OOD prompt set, despite this being a theory-first paper.
Weaknesses: 1. The paper should also have tested using $\log(p/(1-p))$ as a target, since that corresponds to the DPO target and an empirical comparison is also made to DPO. Their current target corresponds to the IPO target, as the paper implicitly recognises.
2. The theoretical version of the algorithm is quite standard and while this alone would not be a weakness, the practical version of the algorithm seems quite far from the theoretical version. Most strikingly, we use rejection sampling for the enhancer. I mentioned the latter as a strength due to the novel thought needed for designing the practical algorithm, but it is also a weakness because the theoretical guarantees do not signal much about the practical version due to this.
3. This is my main qualm: The iterative method queries the learned preference model adaptively, so it has a clear adaptive advantage over purely offline methods like DPO. The paper should also compare to iterative methods that use reward-based preference models, such as the one in [1].
4. There is also growing work in generalizing from standard reward based preference models to accommodate partial observability, which form an important special case of a fully general preference model. Works such as [2,3] are worth discussing and comparing to in related work.
5. This is not really a weakness, but some minor typos need to be fixed, such as a dangling [Proof] right after "Theorem 1" and "Theorem 2". Table 3's caption says offline DPO, but it should say offline IPO.
Refs:
1. RLHF Workflow: From Reward Modeling to Online RLHF. Dong et al, 2024.
2. When Your AIs Deceive You: Challenges of Partial Observability in Reinforcement Learning from Human Feedback. Lang et al, 2024.
3. A Theoretical Framework for Partially Observed Reward-States in RLHF. Kausik et al, 2024.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. Are there other options that the authors had while choosing a practical enhancer? I'm curious why they ended up choosing rejection sampling.
2. While the importance sampling ratio seems large in practice for the offline method, I'm wondering if there is a way to estimate the coverage coefficient too. This is a low priority question, but I am wondering if you have considered computing this to have a more compelling story.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your great efforts in reviewing our paper and thanks for recognizing our work!
**Weakness 1** The paper should also have tested using $\log(p/(1-p))$ as a target. Their current target corresponds to the IPO target.
We use $P$ directly as a target since it is more straightforward to empirically approximate the oracle in Definition 2 by iterative IPO according to [1]. Note that our theoretical framework can also apply to $\log(p/(1-p))$. A concurrent work [2] uses $\log(p/(1-p))$ as the target and empirically approximates the Nash equilibrium by using some additional tricks, such as filtering preference pairs.
[1] Daniele Calandriello, et al. Human alignment of large language models through online preference optimisation.
[2] Rosset, Corby, et al. Direct nash optimization: Teaching language models to self-improve with general preferences.
**Weakness 2** The practical version of the algorithm seems quite far from the theoretical version. Most strikingly, we use rejection sampling for the enhancer. I mentioned the latter as a strength due to the novel thought needed for designing the practical algorithm, but it is also a weakness because the theoretical guarantees do not signal much about the practical version due to this.
We would like to push back on the point that our theoretical and practical versions are disconnected, since the online theoretical algorithm directly provides insights for the empirical version. Specifically, equation (12) in the theory part illustrates that instead of making $\hat{\pi}_t^2$ the same as the main policy $\hat{\pi}_t^1$, we need to maximize the uncertainty with respect to $\hat{\pi}_t^1$ within the confidence set. Since it is unknown how to compute this uncertainty for an LLM, in practice we use $\hat{\pi}_t^1$ to randomly generate $n$ samples, which are regarded as the confidence set. Then, we choose the best response in the confidence set, which aligns with the intuition of making the main agent and the enhancer more diverse. This process is exactly rejection sampling.
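As a hedged, simplified sketch of the best-of-$n$ step described above (the sampler and preference oracle here are hypothetical stand-ins, not the authors' implementation): draw $n$ candidates from the main policy and keep the one the oracle scores highest against the main policy's own response:

```python
def enhancer_step(main_response, sample_fn, pref_oracle, n=8):
    """Best-of-n rejection sampling for the enhancer.
    sample_fn() draws one candidate response from the main policy;
    pref_oracle(a, b) scores how strongly a is preferred to b."""
    candidates = [sample_fn() for _ in range(n)]
    scores = [pref_oracle(c, main_response) for c in candidates]
    best = max(range(n), key=scores.__getitem__)
    return candidates[best]
```

With a toy oracle that prefers longer responses, the function simply returns the longest of the $n$ draws.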
**Weakness 3** The iterative method queries the learned preference model adaptively, so it has a clear adaptive advantage over purely offline methods like DPO. The paper should also compare to iterative methods that use reward-based preference models, such as the one in [1].
Thanks for your question. First, to ensure fairness, we guaranteed that the number of preference queries is smaller than the size of the offline dataset: we use 2 iterations (15K prompts each) for our online method, while the offline dataset contains 60K paired responses. Second, we will also add a comparison with iterative DPO under our compute and query budget. In our setting, it performs slightly worse than our algorithm (14.37 win rate, compared with our 17.67). In general, for chat optimization, our algorithm should be similar to or slightly better than iterative DPO, since the BT model seems to be a reasonable assumption in this case, as indicated by the similar chat accuracies of the BT reward and preference models.
Besides, this paper mainly focuses on theory and proposes a framework to handle general preferences. Exploring further advantages of general preferences over the BT model in practice requires much work on complicated tasks, like math and reasoning, which we leave as future work.
**Weakness 4** Works such as [2,3] that accommodate partial observability are worth discussing and comparing to in related work.
Thanks for this point! We will add some discussion in the revision. We would like to kindly note that models with partial observability and our general preference setting are two separate lines of work. Our setting considers a general preference function class when the BT model does not hold, while the partial observability setting considers state features that cannot be observed. Combining the two lines of study would be interesting future work.
**Weakness 5** Some minor typos need to be fixed.
Thanks for pointing out the typos. We will correct them in the revision.
**Question 1** Are there other options that the authors had while choosing a practical enhancer? Why did they end up choosing rejection sampling?
Of course, there are other options, but rejection sampling aligns best with our theoretical insights; the reasons were stated under Weakness 2.
One reasonable explanation is that if the reward/preference model is well-calibrated, meaning that the underlying ground-truth judging accuracy aligns well with the model's confidence, then a large margin between the two responses implies higher accuracy. With rejection sampling, the learning signal is therefore more accurate and more consistent.
**Question 2** If there is a way to estimate the coverage coefficient too. This is a low priority question, but I am wondering if you have considered computing this to have a more compelling story.
Thanks for this point! The importance sampling ratio is an upper bound of the coverage coefficient, and we present some case studies in the paper where it is usually large. Accurately estimating the coverage coefficient itself can be challenging, but our general impression is that we usually cannot expect a moderate coverage coefficient even for a single policy. As indirect evidence, when we personally tried to apply offline algorithms from the literature to real-world RL applications, we found that in almost all cases the resulting policy can only compete with the behavior policy.
Therefore, the main applicable situation for offline learning is when we have a noisy expert who frequently makes mistakes, and the offline learning algorithm can automatically adapt to the successful trajectories.
We also remark that when considering the Markov game, *unilateral concentration* is required, which can be even more challenging than single-policy coverage.
---
Rebuttal 2:
Title: Raising score
Comment: The authors have addressed my concerns on an already excellently written paper. I am raising my score to a 7. I believe that this paper fully deserves to be accepted.
---
Rebuttal Comment 2.1:
Comment: Thank you for your reply and for raising the score! We are happy to see that our response has addressed your concerns. Thanks again for your constructive feedback. | Summary: The authors develop a theoretical framework based on a reverse-KL regularized minimax game and introduce sample-efficient algorithms suitable for both offline and online learning scenarios. Empirical results validate the proposed method, demonstrating its superior performance compared to traditional reward-based models across various tasks.
Strengths: The paper tackles a critical issue in existing RLHF approaches, challenging the assumption that human preferences are transitive. This is significant given the evidence of intransitivity in human decision-making.
The paper is well-written and easy to follow.
Empirical results demonstrate the advantages of the proposed approach over baseline methods.
Weaknesses: The empirical validation, while promising, is limited in scope. More extensive experiments, including comparisons with a broader range of state-of-the-art RLHF methods, would strengthen the paper.
Technical Quality: 2
Clarity: 3
Questions for Authors: How can we maintain the consistency of the learning process in the absence of the transitive assumption? For instance, when preferences are conflictive.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: NA.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your constructive comments! Our responses are as follows.
**Weakness 1** The empirical validation, while promising, is limited in scope. More extensive experiments, including comparisons with a broader range of state-of-the-art RLHF methods, would strengthen the paper.
Thank you for your question. We conducted large-scale experiments demonstrating that our algorithm performs comparably to iterative DPO in LLM chat benchmarks. We observed that the BT model is a reasonable assumption in this context, as indicated by the similar chat accuracies between the BT reward and preference models. However, to fully explore the advantages of a general preference model over the BT model in real-world applications, more work on complex tasks, such as mathematics and reasoning, is required. This will be the focus of our future research.
**Question 1** How can we maintain the consistency of the learning process in the absence of the transitive assumption? For instance, when preferences are conflictive.
Thanks for this point! As stated in Section 2, we directly optimize the general preference and do not make any assumptions about the general preference, such as transitivity. Our algorithm aims to learn the Nash equilibrium policy as defined in Definition 2. For the Nash policy, it is consistently preferred by the KL-regularized preference in the face of any competing policy (line 126). Therefore, the Nash policy inherently accounts for intransitivity among different responses.
In our experiments, we use self-play IPO (Lines 234-240) to approximate the Nash equilibrium oracle. Intuitively, due to the symmetry of the preference $P$, the max-player $\pi_1^*$ equals the min-player $\pi_2^*$ when Nash equilibrium is achieved. Hence, instead of solving the min-max problem, we iteratively compute the best response to the last iteration. | Rebuttal 1:
Rebuttal: Thank all the reviewers for the constructive comments. We appreciate that all reviewers provided positive feedback during the initial review round. We would like to highlight some key points below to clarify some confusing parts.
1. Novelty — Online algorithm with a general preference oracle: we propose algorithms and provide rigorous analysis of the statistical complexity of iterative RLHF algorithms under a **general preference oracle**. Furthermore, due to the limited exploration of offline methods, we focus on the online iterative framework and propose a practical algorithm, Online ELHF IPO, whose advantage is validated by experiments.
2. Main focus — Theoretical framework and proof-of-concept experiments: This paper mainly focuses on theory and proposes a framework to handle the general preference. We provide controlled experiments to show that our proposed preference-based online algorithm demonstrates advantages over baselines.
3. Comparison with other concurrent online algorithms: From the theoretical perspective, our work is the first online algorithm to study general preferences. On the empirical side, there are some concurrent algorithms. In experimental comparison with concurrent SOTA online algorithms, ours is at least comparably good on chat benchmarks, which is also indicated by the similar accuracy of the BT and preference models. More significant empirical advantages of the proposed framework should appear on reasoning-related benchmarks. However, due to the page limit, such large-scale experiments are out of the scope of this paper. Our paper can be regarded as both a theoretical foundation and an empirical proof of concept; more comprehensive empirical studies are left as future work.
Vista: A Generalizable Driving World Model with High Fidelity and Versatile Controllability | Accept (poster) | Summary: This paper presents Vista, a driving world model that predicts future driving videos based on a video diffusion model. In particular, the authors introduce the idea of conditioning on prior frames and two domain-specific losses to capture dynamics and preserve structures in driving scenarios. By using LoRA, Vista also supports controllable generation (via, e.g., trajectory or angle/speed) in a zero-shot manner. The authors evaluate Vista by showing its high-resolution videos, low FID on nuScenes, and high human evaluation scores.
Strengths: -neat domain-specific improvement on controllable driving video generation.
-the paper is clear and easy to follow.
Weaknesses: -while the authors discussed the differences between Vista and GenAD in Appendix A-Q6 (e.g., different controllability design, and results with higher resolution and lower FID), the improvements seem to be on the incremental side. Besides, GenAD additionally supports text commands, and its qualitative results look as good as Vista's.
-the proposed conditioning on the prior frame, the two domain-specific losses, and the use of LoRA, while helpful and neat, are not super novel.
-human evaluation was not conducted between Vista and driving specific generation models (e.g., GenAD).
Technical Quality: 2
Clarity: 3
Questions for Authors: -Can you provide more discussion on why the proposed methods are novel when there is already similar work (e.g., GenAD) on this task?
-Potential human evaluation results between Vista and GenAD (and/or other driving domain specific generation models)?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your time and effort in reviewing our paper. We provide detailed explanations below to solve the questions and some potential misunderstandings.
> **W1&W2&Q1**: The improvements from GenAD seem to be incremental. The proposed dynamic priors, loss functions, and LoRA adaptation are not super novel when there are already similar works (e.g., GenAD) on this similar task.
In our humble opinion, there are several fundamental differences between Vista and previous works like GenAD. Our work is a pioneering attempt to build a generalizable high-fidelity driving world model that can be controlled by multi-modal actions. To this end, we propose a series of improvements that effectively extend Vista’s abilities:
- With our efficient training strategies, Vista has acquired versatile action controllability that can be readily generalized to unseen scenarios. In stark contrast, none of the existing works has ever enabled such an ability.
- We propose a novel reward function and validate its zero-shot efficacy in Fig. 9 and Table 3, which could serve as a potential avenue to assess actions in the wild.
- Thanks to our innovative techniques, Vista achieves a non-marginal performance gain of **55%** in FID and **51%** in FVD compared to GenAD while being more compact in model size. We provide an intuitive comparison between Vista and GenAD in **Fig. 2 of the rebuttal PDF**. We also meticulously conduct a human evaluation between these two methods, where Vista achieves a **94.4%** win rate in Visual Quality and a **94.8%** win rate in Motion Rationality. Please refer to our answer to the next question.
- Beyond the superior visual quality and spatiotemporal resolution, we also enable coherent long-horizon prediction ability, which is underexplored in previous works.
We believe that all these contributions will shed light on future investigations in developing driving world models.
> **W1&W3&Q2**: Human evaluation was not conducted between Vista and driving-specific generation models (e.g., GenAD). Qualitative results of GenAD look as good as Vista.
To the best of our knowledge, no driving-specific world model is publicly available so far, making it hard to conduct a qualitative human evaluation. Therefore, we mainly compare Vista against existing methods using their officially reported FID and FVD scores in our paper.
To demonstrate the considerable improvements in visual quality and motion rationality, we conduct a human evaluation with the state-of-the-art GenAD model *as requested*. We follow the two-stage training strategy and use the same training compute as specified in their original paper. Since GenAD processes a 4-second video each time, we perform autoregressive prediction to extend Vista’s output to 5 seconds and trim the last second to align with GenAD’s duration. To avoid any bias caused by resolution and frequency, we also downsample the outputs of Vista (576x1024 resolution at 10 Hz) to 256x448 resolution at 2 Hz for fairness. The human evaluation is conducted following the same procedure in Sec. 4.1.
During the rebuttal period, we collect 25 diverse samples from the unseen OpenDV-YouTube-val set and invite 20 volunteers for evaluation. We ask the volunteers to choose the video they deemed better. As a result, Vista is preferred in **94.4%** and **94.8%** of the time on Visual Quality and Motion Rationality respectively. This certifies that Vista, even when downsampled to a much lower spatiotemporal resolution (which is a significant perceptual reduction), has a remarkable advantage over GenAD in generation quality. We also append a qualitative comparison between GenAD and Vista in **Fig. 2 of the rebuttal PDF**, showing the superiority of Vista in resolution and quality.
> **W1**: Vista does not support text commands while GenAD does.
Although GenAD provides a control interface for text commands, its advantage over Vista is limited for three main reasons:
- The text commands used by GenAD are annotated by first classifying videos into discrete categories and then mapping them to text templates from a predefined dictionary. Thus, the text commands used in GenAD and the categorical embeddings learned by Vista are **functionally similar**.
- As stated in the limitations of the GenAD paper, the auto-labeling process of text annotations may incur conflicts with ego intentions. Unlike GenAD that requires labeling all videos from YouTube, we explore a collaborative training strategy that allows learning open-world action controllability from the public dataset with reliable annotations. This strategy circumvents the labor of auto-labeling that may cause accumulated errors and learning instability.
- Text commands are often ambiguous in expressing precise controls. Therefore, in Vista, we focus more on versatile action controllability ranging from high-level intentions to low-level maneuvers, empowering the applications for various purposes.
---
Rebuttal Comment 1.1:
Title: Thank you for the response
Comment: Thank you for your detailed response which mostly addresses my concerns and I will increase my initial rating.
---
Reply to Comment 1.1.1:
Comment: Thank you for the kind response! We would appreciate specification on any concerns you may have, which will allow us to provide further information. Thank you. | Summary: Vista is a generalizable driving world model that excels in high fidelity and versatile controllability. By introducing novel losses to enhance the learning of dynamics and structural information, and integrating a unified conditioning interface for diverse action controls, Vista achieves high-resolution predictions and adapts seamlessly to various scenarios in a zero-shot manner. Extensive experiments demonstrate that Vista outperforms state-of-the-art video generators and driving models, showcasing significant improvements in prediction fidelity and action evaluation capabilities.
Strengths: 1. High Fidelity Predictions: Vista achieves accurate and realistic future predictions at high resolutions.
2. Versatile Action Controllability: Supports diverse control inputs from high-level commands to low-level maneuvers.
3. Strong Generalization: Seamlessly adapts to diverse and unseen environments in a zero-shot manner.
4. Superior Performance: Outperforms state-of-the-art models in prediction fidelity and evaluation metrics.
Weaknesses: 1. I have some questions regarding the production of Figure 5. Is the SVD in Figure 5 used as is, or has it been retrained? Are there any other action control modules? Additionally, while the long-term generation of Vista seems consistent, most of the details are quite blurry. I am also confused about the point that "the prediction of SVD does not commence from the condition image."
2. Regarding the misalignment of DynamiCrafter in Figure 4, it is actually because during the training of DynamiCrafter, the condition frame is not always the first frame but is randomly extracted from the video. Therefore, you can see that the fourth column of DynamiCrafter is consistent with the input frame. This is not a misalignment.
I will raise my rating if all of my concerns are well addressed.
Technical Quality: 4
Clarity: 4
Questions for Authors: Mentioned in the weakness section
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the thoughtful comments and questions. We answer each question below and will incorporate all feedback in the revision.
> **W1**: Details related to the production of Fig. 5. (1) Has SVD been retrained? Are there any other action control modules? (2) The details of long-term generation are blurry. (3) The meaning of "the prediction of SVD does not commence from the condition image".
(1) We did not retrain SVD for this comparison. The results of SVD in Fig. 5 are generated using the official checkpoint and codebase without any modification. The samples in Fig. 5 are all action-free predictions without action conditioning, thus no action control modules are needed for SVD here.
(2) It is possible that the long-horizon rollouts may result in degradations of details. In fact, long-term prediction remains a challenge in this research direction. To overcome this challenge, we have explored some techniques that optimize the fidelity of long-term prediction (e.g., triangular classifier-free guidance in Appendix C.4). Moreover, although there is still room for improvement, it is noteworthy that our method has significantly outperformed the existing methods, with nine times more frames than Drive-WM and much better content consistency compared to SVD. As discussed in Appendix A-Q7, we will continue exploring solutions for long-term fidelity in future works.
(3) We apologize for the confusion. We use "the prediction of SVD does not commence from the condition image" to express that the first frame predicted by SVD is not identical to the condition image. This misalignment prevents SVD from performing autoregressive long-term rollout, as the consecutive clips predicted by SVD are not consistent in content. We will clarify this in the revision.
> **W2**: The misalignment in DynamiCrafter is because of its training settings.
Thanks for the comment. We agree that the misalignment of DynamiCrafter is due to its training settings, where a random frame is sampled as the condition image. Since DynamiCrafter is a prominent general-purpose video generator, we compare our model to DynamiCrafter to demonstrate the better fidelity of Vista and its distinctions in capabilities. Unlike existing general-purpose video generators for content creation, Vista is designed to make plausible future predictions while preserving high quality. We will clarify this in our revision to avoid confusion.
---
Rebuttal Comment 1.1:
Title: Response
Comment: All of my concerns are well addressed. I will increase my score to accept.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response. We will integrate your advice in our revision. | Summary: The paper presents a method named Vista, a novel driving world model that addresses limitations in generalization, prediction fidelity, and action controllability. Key contributions include introducing novel loss functions for high-resolution prediction, a latent replacement approach for coherent long-term rollouts, and versatile action controls ranging from high-level commands to low-level maneuvers. Vista demonstrates strong generalization in zero-shot settings and establishes a generalizable reward function for evaluating driving actions. Extensive experiments on multiple datasets demonstrate that Vista outperforms advanced general-purpose video generators in over 70% of comparisons, surpasses the best-performing driving world model by 55% in FID and 27% in FVD, and establishes a generalizable reward function for real-world driving action evaluation.
Strengths: 1. The paper introduces a novel approach to driving world models by incorporating high-fidelity predictions and versatile action controllability. This combination addresses existing gaps in generalization, prediction fidelity, and action flexibility, representing a significant step forward in autonomous driving research.
2. The paper presents a well-designed methodology that systematically addresses the limitations of existing driving world models. The integration of dynamic prior injection and versatile control mechanisms is methodologically sound and effectively implemented.
3. The paper is well-structured, with clear and logical sections that guide the reader through the problem formulation, methodology, experiments, and conclusions. Figures and tables are used effectively to illustrate key points and results.
Weaknesses: 1. Vista seems unable to generate surround-view video; this limitation may restrict the method's effectiveness and generalizability in real-world scenarios where a comprehensive 360-degree view is crucial.
2. The closed-loop evaluation is demonstrated through only a few cases, raising concerns about the robustness and reliability of integrating the closed-loop process with the generative model. Expanding the evaluation to include a broader range of scenarios and detailed performance metrics would help assess the seamless integration of the closed-loop driving process with the generative model.
3. The paper introduces dynamic prior injection for coherent long-term rollouts, but the implementation details and impact of this component are not sufficiently elaborated. Providing more detailed information on this mechanism, including its implementation, theoretical basis, and specific impact on long-term prediction consistency, would enhance understanding.
4. The paper lacks a detailed analysis of the computational resources required for training and inference using the Vista framework. The potentially high computational demands could limit the scalability and real-time applicability of the approach.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. Can the authors provide more details on the implementation of the dynamic prior injection mechanism? How does it theoretically support coherent long-term rollouts?
2. Provide additional experimental results demonstrating the model's performance in zero-shot settings across diverse and unseen environments. This would help validate the framework’s generalization capabilities and robustness in real-world applications.
3. Expanding the Vista framework to support surround-view video generation could significantly enhance its applicability and robustness in real-world autonomous driving scenarios, as a comprehensive 360-degree view is crucial for effective navigation and situational awareness. Additionally, extending the closed-loop evaluation to include a broader range of scenarios and detailed performance metrics would provide a more thorough assessment of the model's robustness and reliability. Such enhancements would further solidify the contributions of this work and pave the way for future advancements. Nonetheless, the current contributions are commendable, and I look forward to seeing future developments in this area.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: please refer to weaknesses and questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your insightful and positive feedback. The following are our responses.
> **W1&Q3**: Surround-view generation is not supported.
We agree that supporting surround-view generation would further benefit driving. We plan to extend Vista to multi-view settings like Drive-WM (Wang, et al.) in future work.
In this paper, we focus on the front-view setting for three main reasons:
- The front view setting allows leveraging diverse data sources (e.g., the worldwide OpenDV dataset). Conversely, the distinctions in multi-view videos from various datasets, such as different numbers of cameras, hinder unified modeling and data scaling.
- Models that focus on the front view can be seamlessly applied to different datasets without adaptation (e.g., DriveLM (Sima, et al.)), broadening their applicability across datasets.
- Though incomplete, the front view often contains most of the information necessary for driving. As demonstrated in NAVSIM (Dauner, et al.), using the front view alone results in only a 1.1% performance drop in collision rate compared to using five surround-view cameras.
> **W2&Q3**: Evaluating the closed-loop process in a broader range of scenarios and detailed metrics.
From our understanding, the closed-loop process here refers to controlling Vista to create a closed-loop simulation.
To address the concern, we introduce an additional metric, *Trajectory Difference*, to assess the control consistency. Following GenAD, we train an inverse dynamics model (IDM) that estimates the corresponding trajectory from a video. We then send Vista's prediction to the IDM and calculate the L2 difference between the ground truth trajectory and the estimated trajectory. The lower the difference, the better the control consistency Vista has. We conduct the experiments on nuScenes and Waymo (unseen by Vista). As reported in **Table 1 of the rebuttal PDF**, Vista can be effectively controlled by different types of actions, yielding more consistent motions to the ground truth.
We also provide the complete FVD scores of Fig. 7 in **Table 2 of the rebuttal PDF**, which further validates the efficacy of all kinds of action controls.
For the coverage of scenarios, we have shown multiple open-world samples on the anonymous demo page at the time of submission. We will provide more demonstrations in the revision. In addition, we will fully open-source our code and model to the community for free trials.
> **W3&Q1**: More details on the implementation, theoretical basis, and impact of dynamic prior injection.
**Implementation details**: Conventional video diffusion methods (e.g., SVD) process a sequence of noisy frames to generate a video. However, this approach cannot ensure content and motion consistency between clips in long-horizon rollouts. To address this, we inject previous frames as priors to derive the necessary information for consistent rollouts. As illustrated in Fig. 2 [Left], these dynamic priors are injected by replacing the noisy frames with the previously known frames throughout the entire denoising process. The dynamic priors are then propagated through the temporal interactions within the model. The number of dynamic priors corresponds to the number of the injected frames. To indicate the presence of dynamic priors that do not require denoising, we assign different timestep embeddings to these frames. In our implementation, we create a frame-wise mask to uniformly allocate the dynamic priors and the timestep embeddings. We will refine this part accordingly in the revision.
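The latent replacement step can be sketched schematically as follows (our simplified illustration, not the actual Vista implementation; array shapes and names are assumptions):

```python
import numpy as np

def inject_dynamic_priors(noisy_latents, prior_latents):
    """Replace the leading noisy frame latents with previously generated clean
    frames; this is repeated at every denoising step. The boolean mask marks
    prior frames so they can receive distinct timestep embeddings and be
    excluded from denoising."""
    latents = noisy_latents.copy()
    k = len(prior_latents)          # number of dynamic priors
    latents[:k] = prior_latents     # replace noisy frames with known frames
    mask = np.zeros(len(latents), dtype=bool)
    mask[:k] = True                 # frames that skip denoising
    return latents, mask
```

The priors then propagate to the remaining frames through the model's temporal interactions.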
**Theoretical basis**: As discussed in Appendix A-Q1, it is necessary to input sufficient information so that the model can learn to derive position, velocity, and acceleration for coherent future prediction. For example, without knowing acceleration, the model cannot determine whether another car in view is moving faster or slower. Such uncertainty will result in unnatural motions with respect to the historical frames. To fully obtain these priors, at least three consecutive frames with the same interval are required. To ensure temporal consistency while predicting as many frames as possible each time, we always use three previous frames as dynamic priors during long-horizon rollouts.
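The claim that three equally spaced frames suffice to recover position, velocity, and acceleration corresponds to simple finite differences (our illustration, not part of the submission):

```python
def motion_state(p0, p1, p2, dt=1.0):
    """Recover kinematic state from three consecutive positions sampled at a
    fixed interval dt: the latest position, a backward-difference velocity,
    and a second-difference acceleration."""
    v = (p2 - p1) / dt                  # velocity needs two frames
    a = (p2 - 2.0 * p1 + p0) / dt**2    # acceleration needs three frames
    return p2, v, a
```

With only two frames the acceleration term is undetermined, which is why fewer priors leave the model uncertain about whether other agents are speeding up or slowing down.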
**Impact evaluation**: To further demonstrate the effectiveness of dynamic priors, we conduct a quantitative evaluation in **Table 1 of the rebuttal PDF**. Specifically, we use the inverse dynamics model to infer the trajectories of the predicted videos with different orders of dynamic priors. Extensive results show that increasing the order of dynamic priors can consistently improve the coherence to the ground truth motion.
> **W4**: A detailed analysis of the computational resources required for training and inference.
We have provided a thorough description of the training in Appendix C.3. The entire training process takes about two weeks, with the first phase taking one week on 128 A100 GPUs and the second phase taking another week on 8 A100 GPUs. For inference cost, it takes 70-80 seconds on a single A100 to predict 25 frames. Note that the inference could be greatly accelerated using some well-established techniques as discussed in Appendix A-Q7. While this is not in the scope of this paper, we will explore these techniques for downstream applications in the future.
> **Q2**: Additional experimental results in zero-shot settings across diverse and unseen environments.
All qualitative visualizations in our paper are produced under the zero-shot setting in open-world scenarios. Moreover, except for the results on nuScenes, all quantitative results are demonstrating Vista’s zero-shot performance (quality, duration, controllability, etc.). As described in Sec. 4.1, our experiments involve zero-shot evaluation on three unseen/geofenced datasets (OpenDV-YouTube-val, Waymo, CODA). Fig. 20 also shows the environmental diversity of our human evaluation. We will specify the zero-shot settings in the revision.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for your detailed response, and I think most of my concerns are almost resolved. I want to keep my rating unchanged.
---
Reply to Comment 1.1.1:
Comment: Thanks for responding to our feedback and recognizing our contributions. We really appreciate your help in improving our work! | null | null | Rebuttal 1:
Rebuttal: Dear reviewers and ACs:
We express our sincere gratitude to the reviewers for their thorough and constructive comments. It is encouraging that all reviewers have acknowledged our pioneering efforts in establishing a driving world model with versatile controllability.
We have carefully taken each comment into consideration. The attached **rebuttal PDF** includes two tables with quantitative results and two figures for illustration. Please refer to other rebuttal modules below for our detailed responses to each comment. We will integrate these results and discussions into our revised paper.
We hope that our rebuttal can address the concerns you may have. You are more than welcome to ask any further questions. We are looking forward to your feedback!
Best regards,
Authors of Submission574
Pdf: /pdf/4f94049728edffd45246a21bf174ba0663677973.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Boosted Conformal Prediction Intervals | Accept (poster) | Summary: This paper introduces a gradient boosting-based approach to tailor a conformal score function to better satisfy desirable properties. A more general score function "family" is introduced which adheres to a specific form dependent on parameters (e.g. $\mu, \sigma$). Instead of directly using the parameter estimates obtained from model training of the predictor $f$, a post-hoc boosting stage is introduced in which the parameters are further refined leveraging additional data to minimize property-specific losses, such as conditional coverage deviation or conformal interval length. After boosting for $\tau$ iterations, the refined parameter estimates $\mu^{(\tau)}, \sigma^{(\tau)}$ can be used for down-stream conformal calibration and prediction interval generation. Results are compared to default local conformal scores (Local) and CQR on a range of regression tasks, obtaining slightly more favorable empirical properties.
Strengths: - Addressing the practicality of obtained conformal sets is a relevant problem, since results can sometimes be overly conservative to adhere to the conformal guarantees
- The proposed method seems relatively lightweight and has the benefit of not requiring changes to the model weights of $f$, since the boosting derivatives are computed w.r.t. the inputs of the scoring function $E$, which are the predictions $\mu(x), \sigma(x)$
- The improvements for the local conformal score function seem substantial at times
Weaknesses: My main concerns are in regards to evaluation and practicality. The authors propose two applications of the boosting procedure to improve conditional coverage and prediction set size. For conditional coverage, the boosting loss is introduced as a relatively complicated group deviation metric leveraging a particular "contrast tree" mechanism, and in practice requires further non-trivial approximations to make it differentiable for boosting. Since the results are very strongly tied to this "contrast tree" approach, I was struggling to see how the boosting loss and conditional coverage target (Eq 12) connect in practice and how the results (e.g. Fig. 3) are more widely applicable, since the procedure is motivated as an "effective evaluation metric (for target conditional coverage) of independent interest". Is this procedure amenable to any black-box predictor $f$ and can be considered a particular gradient boosting mechanism? In general, I found the obtained improvements not overly convincing. For example, in Fig. 3 if we consider the % of data for which conditional coverage ends up being violated (above the red line) we find 33% for Local and 37% for Boosted Local. Granted, the boosting seems to distribute miscoverage somewhat more evenly, but the overall violation rate remains similar. In Table 1, boosting CQR only gives marginal improvements. It would be good to compare the results to other conformal algorithms which claim to empirically (or sometimes even theoretically, under relaxed conditions or varying interpretations of conditionality) improve conditional coverage, e.g. [1,2,3,4]. This would help clarify if the approach is more widely beneficial. Similarly, all experiments are run leveraging the same random forest and quantile network regressors -- do results translate and are amenable across different models $f$ as well? 
Another hurdle might be the fact that a differentiable property-specific loss needs to be defined for every target property, whose amenability may also depend in some way on the underlying model's design, and is fixed. Thus, tackling a more flexible scoring function comes at the cost of introducing another (fixed) functional to optimize over, with required approximations.
On another note, I was somewhat struggling to understand how the boosting procedure connects to the overall conformal procedure. In Fig. 1 it is suggested that boosting is an independent step inbetween model training and conformalization, but in the experiment description it seems that boosting leverages existing datasets (namely training data). Should I understand that the boosting dataset $\mathcal{D}$ is a separate set? If so, how do its sampling requirements (e.g. exchangeability) connect to calibration and test data. If not, should I understand that the boosting step actually equates to an iterative conformal calibration procedure leveraging $\mathcal{D}_{cal}$? Since the boosting objectives (Eq. 14, 17) actually require computing conformal sets and quantiles. Thank you for clarifying.
Other comments:
- I believe the $j$ index in the 3rd final line of Algorithm 1 should be omitted, since the final boosted score functions are not taken w.r.t. a particular fold.
- Fig. 4 is missing a legend, making it hard to interpret
- The use of the term "Power Loss" seems odd, since the goal is to reduce prediction set size, and from my understanding a smaller prediction set/interval will equate higher power, since the permutation testing interpretation of the null hypothesis is on inclusion in the set.
- Theorem 5.1 does not seem like a theorem but more like a remark to me, since it is merely suggesting that the "generalized" score family may contain the optimal oracle solution, but in no way relates any optimality to the practically obtained solutions via the boosting procedure.
- It would be beneficial to use hyper-referencing for Equations and the Algorithm throughout the paper.
References
- [1] Romano, Yaniv, et al. "With malice toward none: Assessing uncertainty via equalized coverage." Harvard Data Science Review 2.2 (2020): 4.
- [2] Gibbs, Isaac, John J. Cherian, and Emmanuel J. Candès. "Conformal prediction with conditional guarantees." arXiv preprint arXiv:2305.12616 (2023).
- [3] Sesia, Matteo, and Yaniv Romano. "Conformal prediction using conditional histograms." Advances in Neural Information Processing Systems 34 (2021): 6304-6315.
- [4] Jung, Christopher, et al. "Batch multivalid conformal prediction." arXiv preprint arXiv:2209.15145 (2022).
Technical Quality: 2
Clarity: 3
Questions for Authors: - Could you please comment on some of the raised points above, such as on clarifying the data used by the boosting procedure, the applicability of the "contrast tree" method, the practicality of defining new optimization objectives, or the distinction to other existing methods aiming for conditional coverage.
- The approach to obtain a differentiable loss objective from Eq. 14 seems quite involved. Could you comment on the motivation of leveraging "contrast trees" and the tools used to go from Eq. 14 to a differentiable loss?
- Are there any results that can be made on how the boosting procedure affects down-stream conformal coverage guarantees, or statements of optimality or improvement with regards to the obtained "boosted" conformity scores over the default ($0$-th iteration) conformity scores?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: I did not find the limitations of the method thoroughly discussed anywhere, and I believe the distinction to other related works could be improved. There are some comments made in the discussion on improvements, as well as L218-L222 on not guaranteeing an optimal solution.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful comments. In response to each of the reviewer’s specific points:
> Is this procedure amenable to any black-box predictor and can be considered a particular gradient boosting mechanism?
- As an evaluation metric, the contrast tree algorithm applies to any black-box predictor f: it requires only three inputs, namely the features, the labels, and the conformalized prediction intervals, which are derived from the conformity scores computed from the model's predictions. The contrast tree algorithm then automatically detects the regions where coverage is most uneven.
> In general, I found the obtained improvements not overly convincing. For example, in Fig. 3 if we consider the % of data for which conditional coverage ends up being violated (above the red line) we find 33% for Local and 37% for Boosted Local.
- Assuming exchangeability, conformal prediction ensures that the average miscoverage rate over the test set is approximately 10%. The dream would be to achieve exactly 10% miscoverage within any subset of the feature space. While this might seem counterintuitive, the guarantee provided by conformal prediction implies that if one subset in the feature space has a miscoverage rate smaller than 10%, another disjoint subset must have a miscoverage rate greater than 10%. This results in prediction intervals that sometimes overcover and sometimes undercover. Figure 3 illustrates that before boosting, the conditional coverage in each leaf significantly deviates from the target rate of 10%; it is either too large or too small. After boosting, however, all groups have coverage rates that hover around 10%. Concretely, before boosting, the contrast tree identifies splits in the feature space that deviate from the 10% miscoverage rate by 15%. After boosting, the largest deviation identified by the contrast tree is reduced to no more than 6%.
> In Table 1, boosting CQR only gives marginal improvements.
- Please refer to point 3 in the global rebuttal.
> It would be good to compare the results to other conformal algorithms which claim to empirically (or sometimes even theoretically, under relaxed conditions or varying interpretations of conditionality) improve conditional coverage, e.g. [1,2,3,4].
- In comparison to other works aiming to improve conditional coverage, such as [1] and [2] mentioned by the reviewer, our approach offers a distinct advantage. Those works measure group conditional coverage based on user-specified groups. However, relying on predefined groups may not always be ideal. Specifically, please refer to point 4 in the global rebuttal.
>Thus, tackling a more flexible scoring function comes at the cost of introducing another (fixed) functional to optimize over, with required approximations.
We agree with the reviewer that optimizing a targeted property requires a customized objective. We would like to highlight that the conformity score functions we discuss in the manuscript are arguably the two most widely used score functions for regression tasks. Although we do not address classification tasks in this project, it is worth noting that commonly used score functions in that domain, such as the threshold conformal predictor (THR) and adaptive prediction sets (APS), are also differentiable by design.
> Should I understand that the boosting dataset is a separate set?
- During the boosting stage, we use the training set to simulate the conformal procedure, assessing the corresponding interval length and conditional coverage. Specifically, at each iteration of cross-validation, we divide the training set into a sub-boosting set and a sub-validation set (the held-out fold). The sub-boosting set serves as both the boosting set and the calibration set, while the sub-validation set acts as the test set. Please refer to Figure 2 for an illustrative schematic drawing. We shall work to explain this more clearly in the revised manuscript.
Response to other comments:
- We thank the reviewer for pointing out the typo in the 3rd final line of Algorithm 1. We shall fix this in the revised manuscript.
- We shall add a legend for clarity of presentation in the revised manuscript.
- We agree with the reviewer that a smaller prediction set or interval corresponds to higher power. In the manuscript, by "power loss," we intended to convey "loss of power," meaning that when the interval length is larger, we lose power. If the reviewer finds this phrasing misleading, we will rephrase it in the revised manuscript.
- We agree with the reviewer that Theorem 5.1 only proves that the search space contains the optimal solution, but does not guarantee the optimality of the numerical boosting procedure.
- We shall add hyper-referencing in the revised manuscript.
Response to questions:
- Please see our reply to the 1, 2, 3, 5 and 6th items in the Weaknesses section.
- For the motivation behind leveraging contrast trees, please refer to point 4 in the global rebuttal. One of the primary reasons we chose to approximate the empirical quantile with the Harrell-Davis quantile estimator is that it replaces a linear combination of at most two order statistics with a linear combination of all samples, significantly enhancing the robustness of the optimization objective.
- Please refer to point 5 in the global rebuttal.
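To make the Harrell-Davis point concrete, a textbook form of the estimator weights every order statistic by increments of a Beta CDF, so the estimate depends smoothly on all samples (a sketch under our own conventions, not the code used in the paper; requires SciPy):

```python
import numpy as np
from scipy import stats

def harrell_davis_quantile(x, q):
    """Harrell-Davis estimate of the q-th quantile: a weighted sum of ALL
    order statistics, with weights given by increments of a Beta(a, b) CDF.
    Unlike the empirical quantile, which touches at most two samples, every
    observation receives a nonzero weight, which smooths the objective."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    a, b = (n + 1) * q, (n + 1) * (1 - q)
    edges = np.arange(n + 1) / n
    cdf = stats.beta.cdf(edges, a, b)
    weights = np.diff(cdf)            # one weight per order statistic
    return float(np.dot(weights, x))
```

Because each sample has nonzero weight, gradients of the boosting objective flow through all observations rather than just the two nearest the target quantile.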
---
Rebuttal Comment 1.1:
Comment: Thank you for your answer and clarifications. A few more comments:
> Please refer to point 3 in the global rebuttal.
It would be nice to see more experimental validation to support your claims on leveraging your boosting procedure as a general tool to optimize for desired properties, and your suggestion for FDR control seems interesting. Naturally, this seems out of scope for the current work but perhaps could be added to future work / discussion.
> Those works measure group conditional coverage based on user-specified groups. However, relying on predefined groups may not always be ideal
This is a valid point, but a counterargument can be made for the fact that when we work in the group-conditional setting we often either know (or even define) interesting group partitions and may only be interested in these particular groups or a subset thereof, whereas coverage imbalance for other partitions is amenable (e.g. in the fairness case). Your contrast tree approach seems to perform its own partitioning in feature space that can (i) provide group partitions that are not really meaningful or hard to interpret, and (ii) end up suboptimal for particular groups of interest. Thus I still believe it would be useful to compare to such approaches in an experimental setting where groups are known a priori, and demonstrate that the contrast tree approach recovers meaningful partitions. At the very least, a clearer separation from such works should be done.
> We agree with the reviewer that optimizing a targeted property requires a customized objective.
I would suggest to stress this limitation for reader clarity, since I do not see it explicitly mentioned anywhere. This seems like a key modelling design step that can prove challenging for anyone wanting to boost for alternative objectives. In particular, it is unclear how the approximations made to obtain a differentiable objective impact the fundamental conformal prediction outcomes (e.g., validity and set sizes).
> Please refer to Figure 2 for an illustrative schematic drawing. We shall work to explain this more clearly in the revised manuscript.
Thank you for clarifying. I agree that this needs clarification, because there is no direct connection established between Fig. 1 and 2, and they seem to suggest different conceptual approaches (post-hoc vs. leveraging training data).
Relating to this and my rebuttal (*If so, how do its sampling requirements (e.g. exchangeability) connect to calibration and test data.*), it seems that leveraging the training data during boosting may impose an additional assumption that the training distribution is exchangeable with the calibration and test distributions. If so, this is not in line with *"the usual exchangeability assumption"*. It seems like this data reuse during boosting is a key step that complicates the possibility of providing theoretical guarantees of some form on your procedure, would you agree?
> We agree with the reviewer that a smaller prediction set or interval corresponds to higher power. In the manuscript, by "power loss," we intended to convey "loss of power," meaning that when the interval length is larger, we lose power. If the reviewer finds this phrasing misleading, we will rephrase it in the revised manuscript.
I do find it somewhat confusing, especially given that the multiple testing interpretation is not touched upon anywhere else in the paper, and would suggest renaming or omitting this.
> We agree with the reviewer that Theorem 5.1 only proves that the search space contains the optimal solution, but does not guarantee the optimality of the numerical boosting procedure.
In my opinion this makes Thm 5.1 a fairly weak statement, and I would suggest renaming it into a "remark" or "proposition". In practice, this does not provide me with anything of real value. Since it follows primarily from your definition of generalized score families, perhaps Thm 5.1 as well as Thm. A.2 can even be rephrased as results following from a theoretical statement on "inclusion of the optimal solution in the search space of generalized score families" under sec 3.
Looking forward to your reply.
---
Reply to Comment 1.1.1:
Title: Replying to Official Comment by Reviewer 3gAf
Comment: We thank the reviewer for taking the time to review our rebuttal and provide additional helpful comments.
> It would be nice to see more experimental validation to support your claims on leveraging your boosting procedure as a general tool to optimize for desired properties, and your suggestion for FDR control seems interesting. Naturally, this seems out of scope for the current work but perhaps could be added to future work / discussion.
We shall include this in the discussion of additional objectives in Section A.1.
> Thus I still believe it would be useful to compare to such approaches in an experimental setting where groups are known a priori, and demonstrate that the contrast tree approach recovers meaningful partitions. At the very least, a clearer separation from such works should be done.
We agree with the reviewer that there are situations where groups defined a priori from domain knowledge are of interest. In response, we conducted the following additional experiment. As discussed in Section A.1, our boosting procedure readily extends to a group conditional coverage objective. We therefore ran the boosting procedure on the Meps-19 dataset divided into four groups: non-white female, non-white male, white female, and white male.
We observe a significant improvement (54%) for the Local baseline, and a moderate improvement (15.6%) for the CQR baseline.
| | Marginal Coverage | Max Within-Group Deviation from Target Coverage Rate | Average Length |
| :---------------- | :----: | :----: | :----: |
| Local | 89.6% | 8.0% | 2.16 |
| Boosted Local | 89.8% | 3.7% | 2.67 |
| CQR | 89.4% | 4.2% | 3.26 |
| Boosted CQR | 89.7% | 3.5% | 3.21 |
>I would suggest to stress this limitation for reader clarity, since I do not see it explicitly mentioned anywhere. This seems like a key modelling design step that can prove challenging for anyone wanting to boost for alternative objectives.
We shall discuss this limitation in our revised manuscript.
> In particular, it is unclear how the approximations made to obtain a differentiable objective impact the fundamental conformal prediction outcomes (e.g., validity and set sizes).
We agree with the reviewer that in general, using an objective that does not directly target set size may lead to undesirable (or more desirable, as shown in the comparison between CQR and Boosted CQR in the experiment above, where the target was group conditional coverage) set sizes. As a result, we would recommend that the user incorporate all the targeted properties when customizing the optimization objective. We shall defer the discussion on validity to the reviewer’s next question.
> Relating to this and my rebuttal (If so, how do its sampling requirements (e.g. exchangeability) connect to calibration and test data.), it seems that leveraging the training data during boosting may impose an additional assumption on the training distribution being exchangeable with the calibration and test distributions. If so, this is not line with "the usual exchangeability assumption". It seems like this data reuse during boosting is a key step that complicates the possibility to provide theoretical guarantees of some form on your procedure, would you agree?
We respectfully disagree with the reviewer on this point. The guarantee of marginal coverage validity for our boosted procedure does not require additional assumptions beyond the exchangeability assumption already made for the split conformal procedure. Whether or not boosting is applied, the conformity score is fitted (or boosted) using only the training data. As long as the calibration set and test set remain exchangeable, the original proof of validity still holds. This is also one of the reasons we chose to use the training set for boosting. We hope this clarifies our approach and rationale.
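The validity argument can be made concrete with a minimal split conformal sketch: however the conformity score was obtained on the training split, calibration only ranks calibration scores, so exchangeability of calibration and test points is all that is needed. The Gaussian data and the local-style score below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def conformal_quantile(cal_scores, alpha=0.1):
    # the ceil((n + 1)(1 - alpha))-th smallest calibration score
    n = len(cal_scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))
    return np.sort(cal_scores)[min(k, n) - 1]

# stand-ins for a model fit (or a score boosted) on the training split;
# how they were obtained does not affect the coverage guarantee below
mu_hat, sigma_hat = 0.0, 1.0
rng = np.random.default_rng(0)
y_cal = rng.normal(0.0, 1.0, 2000)    # calibration labels
y_test = rng.normal(0.0, 1.0, 2000)   # exchangeable test labels

q = conformal_quantile(np.abs(y_cal - mu_hat) / sigma_hat)
covered = np.abs(y_test - mu_hat) / sigma_hat <= q   # coverage close to 0.9
```

Swapping in a boosted score only changes how the score function maps (x, y) to a number; the quantile-and-threshold step, and hence the validity proof, is unchanged.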
> I do find it somewhat confusing, especially given that the multiple testing interpretation is not touched upon anywhere else in the paper, and would suggest renaming or omitting this.
We thank the reviewer for clarifying. We shall omit this in the revised manuscript.
> In my opinion this makes Thm 5.1 a fairly weak statement, and I would suggest renaming it into a "remark" or "proposition".
We shall rename it as a proposition in the revised manuscript. | Summary: This paper introduces a methodology for learning a conformal score function after training. Notably, the proposed method does not require model retraining; instead, it learns the score function from the trained model's predictions via a cross-validation approach on the training data.
Strengths: **Originality:** Although the paper is in the same vein as works such as ConfTr [1], the fact that it can work directly on top of trained predictions on the same data is a clever formulation and a nontrivial benefit.
**Quality:** Overall, the paper is technically sound; I have no concerns regarding the theory. My reservations regarding the empirical results are discussed under weaknesses.
**Clarity:** The paper is well-written and all necessary ideas regarding conformal prediction for understanding the results in the paper are discussed appropriately.
**Significance:** The idea of learning a conformal score function after training (using the same data) but before calibration is, to my knowledge, a novel formulation and worth building on.
[1] https://arxiv.org/abs/2110.09192
Weaknesses: **Marginal empirical benefit.** My main reservation with the proposed method is the seemingly marginal benefit it provides over CQR for conditional coverage, as shown in Table 1. This is especially relevant for large scale settings where the overhead of the boosting/CV stage is going to be highly nontrivial. Additionally, conditional coverage here is evaluated using contrast trees, which are baked into the objective of the boosting approach. It would be interesting to see if the improvements hold under, for example, just group-conditional coverage.
**Baseline score functions.** The baseline score functions as described in Appendix A.8 are relatively weak, untuned models. While I do not expect an expansive evaluation of models to be within the scope of this work, it would be good to know how sensitive the improvements in the paper are to the choice of baseline score model.
Overall, while I have reservations regarding the empirical results, I do find the formulation of learning the conformal score in this paper useful. As a result, I lean accept.
Technical Quality: 3
Clarity: 3
Questions for Authors: - The boosting stage uses the same data on which the baseline score is trained. I'm curious whether the authors also considered splitting off some data for this stage (instead of doing the CV approach)? Although this would be less sample efficient, it seems like it may have greater benefit than reusing data for more powerful baseline score models.
- It seems to me that the choice of boosting here for improving the baseline score could be replaced by some alternative means of mapping trained model predictions to the necessary mean/variance parameters for the conformal prediction intervals, and I'm wondering whether the authors considered such alternatives?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Although not in a separate section, limitations are adequately discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time spent reviewing the manuscript. In response to each of the reviewer’s specific comments on weaknesses:
1.
> My main reservation with the proposed method is the seemingly marginal benefit it provides over CQR for conditional coverage, as shown in Table 1.
Please refer to point 3 in the global rebuttal.
> Additionally, conditional coverage here is evaluated using contrast trees, which are baked into the objective of the boosting approach.
We would also like to clarify that while we use contrast trees in our loss function, we execute the contrast tree algorithm at each iteration of gradient boosting. In other words, we re-partition the feature space at each iteration as soon as the conformity score is updated. When we evaluate performance on the test set, we rerun the contrast tree algorithm on the test set, taking as inputs the features, the labels and the conformalized prediction intervals calculated on the test set. Therefore, the partition of the feature space at test time is different from that at boosting time.
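As a toy illustration of what "re-partitioning to find uneven coverage" means, the sketch below replaces Friedman's contrast trees with a single greedy one-dimensional split that searches for the region whose coverage deviates most from the target. It is a drastically simplified stand-in for the actual contrast tree algorithm, on synthetic data:

```python
import numpy as np

def worst_split_deviation(x, covered, target=0.9, min_leaf=50):
    """Greedy single split on one feature maximizing |coverage - target| in a leaf."""
    best = 0.0
    for t in np.quantile(x, np.linspace(0.1, 0.9, 17)):
        for mask in (x <= t, x > t):
            if mask.sum() >= min_leaf:
                best = max(best, abs(covered[mask].mean() - target))
    return best

# synthetic data where intervals badly undercover for x > 0.8
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 5000)
covered = rng.random(5000) < np.where(x > 0.8, 0.6, 0.95)
dev = worst_split_deviation(x, covered)   # ~0.3, far above the marginal gap
```

Even though marginal coverage here is close to 0.9, the split isolates a region around x > 0.8 with roughly 0.6 coverage, which is the kind of deviation the loss penalizes at each boosting iteration and the test-time re-partition measures afresh.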
> It would be interesting to see if the improvements hold under, for example, just group-conditional coverage.
Please refer to point 4 in the global rebuttal.
2. We only partially agree with the reviewer that the improvement depends on the accuracy of the underlying model. Imagine we have infinite training data and that the underlying model perfectly identifies the conditional mean $E[Y|X]$ and the conditional expected absolute deviation $\text{MAD}(Y|X)$. Then plugging this in the local conformity score would not be optimal in terms of average interval length or conditional coverage. Below, we illustrate this with a simulated example. Assume that each X follows a uniform distribution Unif(0.1,1.1), and that $Y = X\cdot e$, where e is a random variable distributed as $\text{Beta}(a,b)-a/(a+b)$; this says that e follows a beta distribution shifted to have mean zero. We can theoretically calculate $E[Y|X]=0$, $\text{MAD}(Y|X) = X\cdot 2a^ab^b/\left(B(a,b)\cdot (a+b)^{(a+b+1)}\right)$. Here, $B(\cdot,\cdot)$ stands for the beta function. However, because a beta distribution is not symmetric, even if the training model perfectly fits $E[Y|X]$ and $\text{MAD}(Y|X)$, the conformalized prediction intervals will not be of optimal length, as shown in the left panel in the attached pdf. If we take the theoretical mean and MAD as trained, then the boosted Local score effectively adjusts for the non-symmetric nature of the underlying distribution, as shown in the right panel. The respective average conformalized interval lengths are 0.237 and 0.216. This is about a 10% reduction in size.
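The closed-form MAD used in this example can be checked numerically. The following Monte-Carlo sanity check is our own sketch (with arbitrary shape parameters a, b), verifying that the stated formula $\text{MAD} = 2a^ab^b/\left(B(a,b)(a+b)^{a+b+1}\right)$ matches simulation for the mean-zero shifted beta noise:

```python
import numpy as np
from math import gamma as G

a, b = 2.0, 5.0
B = G(a) * G(b) / G(a + b)                      # beta function B(a, b)
mad_theory = 2 * a**a * b**b / (B * (a + b) ** (a + b + 1))

rng = np.random.default_rng(1)
e = rng.beta(a, b, 200_000) - a / (a + b)       # mean-zero shifted beta noise
mad_mc = np.abs(e).mean()                       # should match mad_theory
```

Since Y = X·e, the conditional MAD scales linearly in X, i.e. MAD(Y|X) = X·mad_theory, exactly as stated in the rebuttal.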
Response to questions:
1. We agree with the reviewer that an alternative approach could involve holding out a validation set in addition to the training, calibration, and test sets. However, as noted by the reviewer, this method presents a trade-off: a small validation set can lead to high variability in prediction intervals and performance, while a large validation set may reduce the number of samples available for training, potentially limiting the model's effectiveness. In contrast, our procedure uses a form of cross-validation to avoid this trade-off. To illustrate this, we will include experiments comparing the performance of the two procedures on real datasets in the revised manuscript.
2. We agree with the reviewer that the gradient boosting algorithm could potentially be replaced by other machine learning models. In this sense, the gradient boosting algorithm is not central to our contribution. Within our boosting framework, once the appropriate search space and loss function are established, any suitable machine learning model can, in principle, be used to search for an enhanced conformity score.
---
Rebuttal Comment 1.1:
Comment: Thank you for the thorough response, in particular the provided example. I plan to keep my score as I was already favoring acceptance, and in my view raising the score would require more substantive experiments that extend beyond a minor revision. | Summary: The paper proposes to utilize the training data and gradient boosting (of the model's predictions) to optimize loss functions related to conformal prediction interval length and conditional coverage with respect to the score function. The optimized score function is then used for "plain" CP usage, via calibration set.
Strengths: - The approach is novel and significantly differs from other methods that utilize training data to improve CP performance.
- The authors derive asymptotic theory for the sufficient expressiveness of the approach.
- Experiments demonstrate that the approach consistently boosts the CP metric associated with the designed loss (interval length or conditional coverage).
Weaknesses: - There is no discussion and investigation of the impact of optimizing the score for conditional coverage on the interval length and vice versa. In practice, both can be important to a user.
- Even though there is theory of the "asymptotic expressiveness" of the approach, the method in practice is based on differentiable approximations and greedy optimization that do not guarantee optimality.
Technical Quality: 3
Clarity: 3
Questions for Authors: - What can you say about the computational complexity of the boosting procedure?
- In practice, both conditional coverage and the interval length are likely to be important for a user.
Therefore, when you optimize the score for better conditional coverage report also the effect on the interval length.
Similarly, when you optimize the score for better interval length report also the effect on the conditional coverage.
- Regarding the previous comment, when discussing these two properties it is worth mentioning the work:
Dabah, L., & Tirer, T. (2024). "On Temperature Scaling and Conformal Prediction of Deep Classifiers". arXiv preprint arXiv:2402.05806.
that shows a simple way to trade between them in CP methods for classification.
- While I am aware that your approach does not require access to the model's parameters, it would be insightful to compare the approach (even in some of the settings) to the approach of [26], namely optimizing the model's parameters to minimize a loss that includes the term that you design.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive comments. In response to each of the reviewer’s specific comments on weaknesses:
- Please refer to point 1 in the global rebuttal.
- Please refer to point 5 in the global rebuttal.
Response to questions:
- Please refer to point 2 in the global rebuttal.
- Please refer to point 1 in the global rebuttal.
- We thank the reviewer for pointing us towards this literature. We shall include this in our revised manuscript when discussing the trade off between conditional coverage and interval length.
- In our revised manuscript, we will include additional results comparing the performance of our method with the approach outlined in [26]. While [26] primarily focuses on reducing prediction set size for classification tasks, our work is centered on regression tasks. Another difference is that [26] considers conformity scores based solely on predicted probabilities for each class. In a regression context, this approach is analogous to using only $\hat{y}$ or $\mu$ in the Local conformity score. To bridge this gap, we plan to conduct an experiment using two neural networks to fit $\mu$ and $\sigma$ in the generalized Local conformity score in an alternating fashion. The objective will be the differentiable approximation of the average prediction interval length, as formulated below line 264. Originally, the training model's objective for fitting $\mu$ was the mean squared error of $y-\mu$, and for fitting $\sigma$, it was the mean squared error of $|y-\mu|-\sigma$.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response and keep the score that I gave (which already supports acceptance).
As discussed, I am keen to see in the next version the effect of the approach on the trade-off between conditional coverage and interval length, which is currently not presented. | Summary: The paper resents a novel method to enhance conformal prediction intervals using gradient boosting. The proposed method focuses on improving specific properties such as conditional coverage and interval length. The key idea is to iteratively refine a predefined conformity score function via boosting, guided by a loss function designed to measure deviations from the desired properties. Experiments demonstrate that starting from conventional conformal methods, the boosted procedure significantly reduces interval lengths and better aligns with target conditional coverage.
Strengths: 1. Post-Training Enhancement: The boosting process is applied post-training, meaning it doesn't require modifications to the original model, making it applicable to a wide range of pre-trained models.
2. Flexibility: The approach can be tailored to optimize various properties beyond interval length and conditional coverage, such as fairness or other specific criteria.
3. Robustness: The use of cross-validation to select the number of boosting rounds helps prevent overfitting and ensures the method's robustness across different datasets and applications.
Weaknesses: 1. A comprehensive description of the experimental setting is necessary. Since the method is post-training, it is crucial to explain how the training samples used for post-training differ from those used for the base model. Additionally, it is important to address potential outcomes if a distribution shift occurs. If the data used for post-training is different from the one used for calibration and testing, can it still work well? (Or, if the data used for post-training, calibration, and testing is different from the one used for base model training, can it still improve CP?)
2. Regarding experiment presentation, why do the tables report only conditional coverage or only size? To convincingly demonstrate the method's effectiveness, it is essential to show that the method can improve interval length without significantly compromising conditional coverage, or improve conditional coverage without significantly increasing interval length (rather than showcasing each metric in isolation).
3. The introduction of a boosting step increases the computational burden, potentially requiring significant resources and time, especially for large datasets. An analysis of time complexity is required for better illustration.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Can your method work well with different error rates $\alpha$?
2. If the post-training set is small, can your method still perform effectively?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Please refer to the weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for taking the time to carefully review our manuscript. In response to each of the reviewer’s specific comments on weaknesses:
1.
> A comprehensive description of the experiment setting is necessary.
In our revision, we will more clearly describe our experimental setting. For concreteness, we shall make the following clarifications: we use the same data to train the initial prediction model and to boost. Building upon the trained prediction model $\hat{f}(x)$, we boost for an improved conformity score, and subsequently calibrate with the boosted conformity score on the calibration set. As a result, under the same exchangeability assumption as for the classical split conformal procedure, an expected marginal coverage rate of $1-\alpha$ is guaranteed on a new test point drawn from the same distribution.
> Additionally, it is important to address potential outcomes if a distribution shift occurs.
We of course acknowledge that if the exchangeability assumption is violated (e.g., in the case of a distribution shift), the marginal coverage property may no longer be valid. Robustness vis-à-vis distribution shifts is the subject of an immense literature and is outside the scope of this paper; that said, this is an interesting direction to explore. It would be interesting to see how our boosting procedure could be combined with a line of research that relaxes the exchangeability assumption: for instance,
[1] Barber R F, Candes E J, Ramdas A, et al. Conformal prediction beyond exchangeability. *The Annals of Statistics*, 2023, 51(2): 816-845.
[2] Gibbs I, Candès E J. Conformal inference for online prediction with arbitrary distribution shifts. *Journal of Machine Learning Research*, 2024, 25(162): 1-36.
2. Please refer to point 1 in the global rebuttal.
3. Please refer to point 2 in the global rebuttal.
Response to questions:
1. We believe that different target coverage rates would not affect the effectiveness of our method. We shall report additional results with different nominal miscoverage rates in our revised manuscript.
2. We are somewhat confused about the reviewer's question. As mentioned previously, we use the training data in combination with cross validation during the boosting stage. Regarding the calibration set, as with any method that relies on a holdout set, an extremely small holdout set will inevitably lead to high variability in the performance of the method. This is not specific to our method, and in general we would not recommend relying on a small holdout set via any procedure, so we do not explicitly consider this question in our work. It is worth pointing out, however, that marginal coverage guarantees do hold regardless of sample size, since these results are “on average” and do not account for variability.
---
Rebuttal Comment 1.1:
Comment: Thank you for the thorough response, in particular the supplementary experiments, I raise my rating to 6. | Rebuttal 1:
Rebuttal: We are grateful to all the reviewers for their valuable feedback and constructive suggestions for improving our manuscript. In the rebuttals below, we have responded to each reviewer’s point individually. The attached PDF includes additional simulation results. Below, we address several common points raised by the reviewers:
**1. Trade-off between conditional coverage and interval length (trTq,tKW7).**
We agree with the reviewers that in practice, both conditional coverage and interval length are likely to be important. In the revised manuscript, we will expand Tables 1 and 2 to report additional results on both conditional coverage and interval length to address this concern.
**2. Computational complexity of our boosted conformal procedure (trTq,tKW7).**
- The computational complexity of our procedure stems from three main components: 1) evaluating the customized loss function at each boosting iteration, 2) using the gradient boosting algorithm to update the conformity score function, and 3) performing cross-validation. We acknowledge the importance of improving the computational efficiency of our algorithm while maintaining or even enhancing its performance. One potential improvement could involve reducing runtime by replacing cross-validation with a simpler approach, namely, splitting the training set into a boosting set and a validation set. However, as discussed in our response to question 1 raised by reviewer D558, this simplification may introduce trade-offs between the variability of the prediction intervals and the effectiveness of the training model. Additionally, in our current implementation, the contrast tree algorithm is executed at every iteration of gradient boosting to identify regions in the feature space that deviate from the target coverage rate. A more computationally efficient strategy might involve partitioning the feature space into regions that maximize deviation only at specific intervals during the boosting iterations, such as every 5 or 10 iterations. In between these iterations, we would retain the most recently computed partition and use it to evaluate performance.
- In our revised manuscript, we will report a more detailed comparison of the runtime between the unboosted split conformal procedure and our boosted procedure.
**3. Modest improvement on conditional coverage for CQR (D558,3gAf).**
We acknowledge the observation from the reviewers that the improvement our procedure affords over CQR is generally somewhat limited. However, we would like to emphasize a few points: first, applying boosting after running CQR does not degrade its performance. Second, the experiments we presented on the selected datasets are by no means exhaustive. It is possible that one might see substantial improvements on datasets where the base CQR learner is less than ideal. Finally, our main goal is to demonstrate the flexibility of our method, highlighting its adaptability to various conformity scores and loss functions. While the improvement in conditional coverage may be modest in the specific context of this paper, we anticipate potential benefits in other applications. For instance, we foresee extending this approach to boost for the power of multiple hypothesis testing while controlling for the false discovery rate (FDR), particularly in tasks such as drug discovery and candidate screening. Following the framework proposed by [1], we would first calculate conformal p-values from the conformity scores associated with each sample and then apply the Benjamini-Hochberg procedure to control for FDR. P-values depend on the choice of conformity scores and optimizing these scores would yield lower p-values and hence more rejections. Similarly, conformal methods have been used for outlier detection and boosting conformity scores to improve the ROC (receiver operating characteristic) curve would equally be of great interest.
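The selection pipeline sketched above (conformal p-values followed by the Benjamini-Hochberg procedure, in the spirit of [1]) might look like the following. The score convention (large scores count as evidence for selection) and the FDR level are illustrative assumptions:

```python
import numpy as np

def conformal_pvalues(cal_scores, test_scores):
    """p_j = (1 + #{cal_i >= s_j}) / (n_cal + 1); small p = unusually large score."""
    cal = np.asarray(cal_scores)
    return np.array([(1 + (cal >= s).sum()) / (len(cal) + 1) for s in test_scores])

def benjamini_hochberg(p, q=0.1):
    """Standard BH step-up procedure; returns a boolean rejection mask."""
    p = np.asarray(p)
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= q * np.arange(1, m + 1) / m
    k = int(np.nonzero(below)[0].max()) + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True
    return rejected

pvals = conformal_pvalues(range(1, 10), [10.0, 0.0])               # -> [0.1, 1.0]
rejected = benjamini_hochberg(np.array([0.001, 0.002, 0.5, 0.9]))  # rejects first two
```

Since each p-value is a rank of the test score among calibration scores, boosting the conformity score to separate signal from null samples directly lowers the relevant p-values and hence increases the number of BH rejections at a fixed FDR level.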
**4. Comparison between the contrast tree algorithm and the group conditional coverage metric (D558, 3gAf).**
We would like to argue that one of the key innovations here is the ability to identify regions in the dataset without requiring users to pre-specify them. By employing the contrast tree algorithm, we can automatically detect regions where coverage is most uneven. This feature is particularly beneficial if the user's objective is to achieve even conditional coverage across the feature space. Relying on user-specified groups may not always be appropriate, as it might overlook subgroups within these predefined categories where conditional coverage significantly deviates from the target rate, even if coverage appears even across the broader groups.
**5. Clarifications on theoretical guarantees (tKW7,3gAf).**
We agree with the reviewers that we currently cannot give any formal guarantee on the improvement our algorithm affords. We can only claim the following: 1) under the usual exchangeability assumption, our boosted procedure has the same coverage guarantees as the split conformal procedure on the test set; and 2) the search space contains the optimal solution. We can of course not guarantee that our algorithm will find this optimal solution.
References:
[1] Jin Y, Candès E J. Selection by prediction with conformal p-values. *Journal of Machine Learning Research*, 2023, 24(244): 1-41.
Pdf: /pdf/e8ae4b0311ba849e43a03b4bbfb9b232c5900df0.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
A Huber Loss Minimization Approach to Mean Estimation under User-level Differential Privacy | Accept (poster) | Summary: The paper proposes a user-level differentially private mechanism utilizing Huber loss minimization for mean estimation. This approach is robust to heavy-tailed distributions and addresses data imbalance across different users.
Strengths: - The paper is well-written and easy to follow overall.
- The differentially private version of Huber loss minimization for mean estimation is an interesting contribution.
Weaknesses: - It is unclear whether the convergence results (Theorem 5) are generally applicable to generic $w_i $'s. Using $m_i \land m_c$ seems somewhat unclear. What if the server (in federated learning) wants to compute the mean with $w_i = \frac{m_i}{\sum m_j}$?
- It would be beneficial to provide a detailed analytical comparison with WME, extending beyond Section G.
- An intuitive explanation of $\gamma$ is required. Does $\gamma$ measure imbalance? If so, how?
- The inequality in (10) appears to have a typo.
- $\lambda$ in Theorem 4 needs a definition.
- The figures are too small, making it difficult to check the results. What does the gray line in Figure 2(b) represent?
Technical Quality: 3
Clarity: 3
Questions for Authors: See Weaknesses.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Limitation section is adequately provided
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable comments.
# Reply to weaknesses
**1. It is unclear whether the convergence results (Theorem 5) are generally applicable to generic $w_i$'s. Using $m_i\wedge m_c$ seems somewhat unclear. What if the server (in federated learning) wants to compute the mean with $w_i=m_i/\sum m_j$?**
If we use $w_i=m_i/\sum m_j$, then the sensitivity will be too large if there are some users with a large number of items. As a result, we must add strong noise for privacy protection, which will hurt the performance. Using $m_i\wedge m_c$ ensures that the method is not severely affected by a single user with many items. This has been explained in lines 299-305 of the paper.
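The effect of the $m_i \wedge m_c$ truncation on the weights can be seen in a few lines. This is our own illustrative sketch (the user sizes and clipping level are made up), not the paper's mechanism:

```python
import numpy as np

def clipped_weights(m, m_c):
    """Weights proportional to m_i clipped at m_c, so no single user dominates."""
    w = np.minimum(np.asarray(m, dtype=float), m_c)
    return w / w.sum()

m = np.array([10.0] * 99 + [10_000.0])      # one user holds the vast majority of items
w_raw = m / m.sum()                         # max weight ~0.91: sensitivity blows up
w_clip = clipped_weights(m, m_c=20.0)       # max weight ~0.02: bounded influence
```

With the raw weights, removing or altering the large user shifts the weighted mean almost entirely, so the noise scale must be correspondingly large; clipping caps each user's weight at $m_c/\sum_j (m_j \wedge m_c)$.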
**2. It would be beneficial to provide a detailed analytical comparison with WME, extending beyond Section G.**
Thanks for this comment. In Appendix G we have already shown that WME has a worse rate than our method. The analysis of WME in [1] is based on $(\tau, \gamma)$-concentration. Both of our two main improvements, i.e., handling heavy-tailed distributions and handling imbalanced users, inevitably lead to a large $\tau$.
Here we list our results and compare them with WME. For convenience, we omit logarithmic factors and non-private terms.
(1) Heavy-tailed, balanced users. For $n$ users with $m$ samples per user, under $p$-th order bounded moment,
Ours: $\frac{d}{mn^2\epsilon^2}+(\frac{d}{m^2n^2\epsilon^2})^{1-1/p}$ (from eq.(14))
WME: at least $\frac{d}{n^2\epsilon^2}(\frac{1}{m}+m^{4/p-2}n^{6/p})$ (from eq.(102))
(2) Bounded support, imbalanced users. For $N$ total number of samples belonging to $n$ users,
Ours: $\frac{d\gamma}{Nn\epsilon^2}$ (from eq.(20))
WME: $\frac{d\gamma_0^2}{Nn\epsilon^2}$ (from eq. (104), multiplied by a factor $d$)
From the definition of $\gamma$ (Assumption 3) and $\gamma_0=nm_{max}/N$ (line 689), it is easy to prove that $\gamma_0>\gamma$. Moreover, the quadratic dependence on $\gamma_0$ is reduced to a linear dependence on $\gamma$.
We will make these comparisons clearer in our revised paper.
**3. An intuitive explanation of $\gamma$ is required. Does $\gamma$ measure imbalance? If so, how?**
Yes, $\gamma$ measures the imbalance of users. Note that users are arranged in ascending order of $m_i$ (lines 264-265). Therefore, from Assumption 3, for users whose number of items exceeds $\gamma$ times the average number of items, the total fraction of items held by these users is less than $1/2$. With small $\gamma$, users are nearly balanced, as the sizes of most users are not much larger than the average size. On the contrary, with large $\gamma$, users are highly imbalanced: the sizes of many users are much larger than the average size.
Examples:
(1) If users are balanced, then $\gamma=1$;
(2) If the $i$-th user has $ki$ items (which means that the number of items of user is linear in its order), then for large $n$, $\gamma$ is approximately $\sqrt{2}$.
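Both examples can be checked numerically under the reading of Assumption 3 given above, taking $\gamma$ to be the smallest multiple of the average user size such that users above it hold at most half of all items. This sketch encodes our paraphrase, not the paper's exact definition:

```python
import numpy as np

def gamma_imbalance(m):
    """Smallest g such that users with m_i > g * mean(m) hold <= half of all items."""
    m = np.asarray(m, dtype=float)
    total, avg = m.sum(), m.mean()
    for t in np.unique(m):                 # thresholds where the heavy set changes
        if m[m > t].sum() <= total / 2:
            return t / avg
    return 0.0                             # unreachable: t = max(m) always satisfies it

g_balanced = gamma_imbalance(np.full(100, 5.0))   # balanced users: gamma = 1
g_linear = gamma_imbalance(np.arange(1, 2001))    # m_i = i: gamma -> sqrt(2)
```

For the linear-size example, the heaviest users above roughly $n/\sqrt{2}$ hold half of all items, which is where the $\sqrt{2}$ in example (2) comes from.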
**4. The inequality in (10) appears to have a typo.**
Thanks for finding this typo. This should be an equal sign here.
**5. $\lambda$ in Theorem 4 needs a definition.**
Thanks. Here $\lambda$ is the smooth sensitivity $S(D)$. We will change the notation in the revised paper.
**6. The figures are too small, making it difficult to check the results. What does the gray line in Figure 2(b) represent?**
Thanks for this suggestion. We will make the figures larger. In Figure 2(b), we made a mistake of including other numbers of users without updating the legend. The corrected legend is:
orange dashed curve -> WME, n=2000;
brown dashed curve->WME, n=5000;
gray dashed curve->WME, n=10000;
red dashed curve-> WME, n=30000;
blue solid curve-> HLM, n=2000;
the two curves that appear to overlap at the bottom are HLM n=5000 and HLM n=10000, respectively.
We refer to the global response for the corrected figure.
# References
[1] Levy et al. Learning with user-level privacy. NeurIPS 2021.
---
Rebuttal Comment 1.1:
Comment: Thank you for the author's response. I believe the paper makes a reasonable contribution, and I will maintain my positive score. | Summary: The paper proposes a user-level differentially private mean estimator based on minimizing a weighted Huber loss. The authors conduct theoretical and empirical assessments, showing that the proposed method is more robust to user-wise sample imbalance as well as heavy-tailed distributions compared to the Winsorized mean estimator proposed in [0].
Strengths: * The construction of smooth sensitivity and its analysis in both balanced and imbalanced user settings seem novel.
* The proposed method is thoroughly analyzed, providing both error upper bounds and empirical evaluations.
Weaknesses: * Some claims in this paper might need more elaboration:
> Line 57-58: “To the best of our knowledge, our method is the first attempt to unify robustness and DP at user-level”
The definition of robustness should be clarified in the paper. Is it referring to robustness against heavy-tailed data, arbitrary outliers, or specific types of attacks? Therefore, it would be beneficial for the authors to explicitly define both robustness and outliers.
> Line 93-94: “To the best of our knowledge, Huber loss minimization has not been applied to DP”
Please see section 3.2.1 in [2]. Additionally, as a side note, gradient clipping for generalized linear loss is well-known to be connected to Huber loss [3].
* As the author also pointed out (line 347), the proposed algorithm requires the sample size of each local user as input to determine the Huber loss parameter $T$, which is a strong assumption.
Technical Quality: 2
Clarity: 2
Questions for Authors: * Could the author compare the results in the balanced user case with existing literature, such as [1] and [4]? For example, a result similar to Theorem 2 has been derived in Theorem 4.1 of [1].
* What is the purpose of tuning $T_i$ in Section 7.2 when Theorem 5 already provides an optimal choice for $T_i$? Does this parameter tuning require an additional privacy budget?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: The authors address the limitation of this work in section 8.
[0] Levy, Daniel, et al. "Learning with user-level privacy." Advances in Neural Information Processing Systems 34 (2021): 12466-12479.
[1] Narayanan, Shyam, Vahab Mirrokni, and Hossein Esfandiari. "Tight and robust private mean estimation with few users." International Conference on Machine Learning. PMLR, 2022.
[2] Avella-Medina, Marco, Casey Bradshaw, and Po-Ling Loh. "Differentially private inference via noisy optimization." The Annals of Statistics 51.5 (2023): 2067-2092.
[3] Song, Shuang, et al. "Evading the curse of dimensionality in unconstrained private glms." International Conference on Artificial Intelligence and Statistics. PMLR, 2021.
[4] Liu, Daogao, and Hilal Asi. "User-level differentially private stochastic convex optimization: Efficient algorithms with optimal rates." International Conference on Artificial Intelligence and Statistics. PMLR, 2024.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for these valuable comments.
# Reply to weaknesses
**The definition of robustness should be clarified in the paper. Is it referring to robustness against heavy-tailed data, arbitrary outliers, or specific types of attacks? Therefore, it would be beneficial for the authors to explicitly define both robustness and outliers.**
The robustness refers to arbitrary model poisoning attacks. The robustness of the Huber loss minimizer has already been widely analyzed in existing works (in particular, [5] analyzes robustness to Byzantine attacks in federated learning, a setting that is relatively simpler than the one in this paper). Therefore, we do not repeat the discussion of the robustness of the Huber loss minimizer in this paper. Following your comments, we will clarify these points in our revised version.
**Line 93-94: “To the best of our knowledge, Huber loss minimization has not been applied to DP”**
**Please see section 3.2.1 in [2]. Additionally, as a side note, gradient clipping for generalized linear loss is well-known to be connected to Huber loss [3].**
Thanks for bringing this paper to our attention. There are indeed some works that use Huber loss minimization in DP. [2] discusses the linear regression problem. Although their setting is different from ours, we will change the statement in the paper.
Gradient clipping can be viewed as minimizing Huber loss, as is shown in Section 5.1 in [3]. This "Huber loss minimization" and ours have different meanings. In [3], the Huber loss approximates the population risk of DP optimization. In our paper, we do not focus on the optimization problem. Instead, we work on mean estimation, and there are no loss functions to approximate here.
**As the author also pointed out (line 347), the proposed algorithm requires the sample size of each local user as input to determine the Huber loss parameter, which is a strong assumption.**
Our experience is that it is common to assume the knowledge of sample sizes in distributed learning. For example, in [6], eq.(2), the weight is determined by the proportion of samples of each client.
While we agree that it is worthwhile to extend our work to the case where the $m_i$ are also private, the current setting is already practical. In distributed scenarios (especially federated learning), the sample sizes of each client are usually not sensitive (see [7] for a review). It is also fine if our knowledge of the sample size of each local user is not very accurate. Our analysis can easily be generalized to the case in which we know an upper bound and a lower bound on the local sample size, such that the ratio of the upper bound to the lower bound is no larger than a certain constant. The rate of convergence of the overall mean squared error then remains the same.
# Reply to questions
**Q1. Could the author compare the results in the balanced user case with existing literature, such as [1] and [4]? For example, a result similar to Theorem 2 has been derived in Theorem 4.1 of [1].**
**Compared with [1] and [4], we (1) improve the performance for heavy-tailed distributions and (2) generalize to imbalanced users.** As discussed in lines 225-227, for balanced users with bounded distributions, existing methods are already nearly optimal, and a polynomial improvement is impossible. In our paper, the goal of Theorem 2 is to show that our improvement on heavy-tailed distributions and imbalanced users does not come at the cost of hurting the performance in the simplest case with bounded distributions and balanced users.
We will add discussions of these two papers in our revised version.
**Q2. What is the purpose of tuning $T_i$ in Section 7.2 when Theorem 5 already provides an optimal choice for $T_i$? Does this parameter tuning require an additional privacy budget?**
(1) In Theorem 5, $T_i$ is selected to minimize the theoretical upper bound. To keep the analysis mathematically rigorous, this upper bound on the estimation error is larger than the true error. Therefore, the empirically optimal $T_i$ differs from the one derived in theory.
(2) The parameter tuning does not require additional privacy budget, since in each experiment the $T_i$ are hyperparameters that are fixed before the value of each sample is known. They are not determined adaptively based on the data.
We will add these discussions in the revised version.
# References
[1] Narayanan, Shyam, Vahab Mirrokni, and Hossein Esfandiari. "Tight and robust private mean estimation with few users." International Conference on Machine Learning. PMLR, 2022.
[2] Avella-Medina, Marco, Casey Bradshaw, and Po-Ling Loh. "Differentially private inference via noisy optimization." The Annals of Statistics 51.5 (2023): 2067-2092.
[3] Song, Shuang, et al. "Evading the curse of dimensionality in unconstrained private glms." International Conference on Artificial Intelligence and Statistics. PMLR, 2021.
[4] Liu, Daogao, and Hilal Asi. "User-level differentially private stochastic convex optimization: Efficient algorithms with optimal rates." International Conference on Artificial Intelligence and Statistics. PMLR, 2024.
[5] Zhao, Puning et al. A huber loss minimization approach to byzantine robust federated learning. AAAI 2024. (Ref.[62] in the paper)
[6] Wei, Kang, et al. "Federated learning with differential privacy: Algorithms and performance analysis." IEEE transactions on information forensics and security 2020.
[7] Fu, Jie, et al. "Differentially private federated learning: A systematic review." arXiv:2405.08299.
---
Rebuttal Comment 1.1:
Title: Official comment by reviewer BDM5
Comment: Thank you for answering my questions, particularly the one about Theorem 2.
I have a further question:
> "The parameter tuning does not require additional privacy budget since in each experiment ... They are not determined adaptively based on the data"
Privacy leakage is still possible if the evaluation is solely on the training set. (e.g. mentioned in section 2 of https://arxiv.org/pdf/2110.03620)
---
Reply to Comment 1.1.1:
Comment: Thanks for introducing this paper. We have read [1], which improves on an existing analysis of parameter tuning in [2].
[1] Papernot, Nicolas, and Thomas Steinke. "Hyperparameter Tuning with Renyi Differential Privacy." ICLR 2022
[2] Liu, Jingcheng, and Kunal Talwar. "Private selection from private candidates." STOC 2019.
In our experiments, the parameter tuning does not lead to additional privacy leakage, since this is an experiment with synthetic data. After we change the parameters, the samples are generated again. In other words, the parameters for each experiment are determined before these samples are generated; they are not determined adaptively based on the data. Therefore there is no additional privacy leakage.
[1] and [2] analyze the case of using a fixed dataset. When the hyperparameters are updated, the dataset has to be reused, so the parameters are determined adaptively based on the data. As a result, the privacy leakage is inevitable.
We will add these discussions to avoid confusion.
Strengths: 1. The proposed method demonstrates significant innovation and holds substantial practical value.
2. The writing is clear, the arguments are well-structured, and the mathematical foundations are solid.
Weaknesses: I have a generally positive view of this paper, though I have some questions and concerns that I hope the authors can address thoroughly.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The authors' discussion lacks comprehensiveness. For instance, the statement "The most effective approach is the two-stage scheme, which finds a small interval first and then gets a refined estimate by clipping samples into the interval" is not fully substantiated. It would be beneficial to explore whether there are any end-to-end methods or approaches that integrate both stages. If such methods exist, a comparison should be provided. If they do not, an explanation should be offered as to why this research direction has not been pursued by others.
2. The authors should clearly explain why they chose the Huber loss, highlighting its advantages. Furthermore, they should clarify whether their proposed method involves any deep innovation beyond the simple use of the Huber loss, or if it merely employs the Huber loss without additional enhancements.
3. The authors' work bears similarities to "Private Mean Estimation with Person-Level Differential Privacy." The authors should discuss the distinctions between their work and this paper.
4. The authors should clarify whether their method has practical applications and specify the scale of data it can handle. Additionally, a complexity analysis of the method should be conducted.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable comments.
**Q1.The authors' discussion lacks comprehensiveness. For instance, the statement "The most effective approach is the two-stage scheme, which finds a small interval first and then gets a refined estimate by clipping samples into the interval" is not fully substantiated. It would be beneficial to explore whether there are any end-to-end methods or approaches that integrate both stages. If such methods exist, a comparison should be provided. If they do not, an explanation should be offered as to why this research direction has not been pursued by others.**
The two-stage approach [1] is currently the most standard approach for user-level DP. To the best of our knowledge, before this work there were no methods that integrate both stages. Follow-up research focuses on different assumptions or different statistical problems, but still uses the localization-refinement two-stage framework.
Regarding why an end-to-end method has not been pursued by others, we think it is challenging to design a method that is adaptive to the local sensitivity of the data, since a thorough sensitivity analysis must consider all possible cases. The user-wise means concentrate around the true mean $\mu$ with high probability, since the averaging operation within each user already reduces the variance. As a result, the local sensitivity is not large with high probability. However, extreme cases may happen in which the user-wise averages are far away from each other. Although these cases happen with low probability, they make the analysis of the local sensitivity significantly harder. Therefore, it is not straightforward to design an estimator that rigorously ensures $(\epsilon, \delta)$-DP. This is mentioned in paragraph 3 of the introduction; we will explain further in the revision.
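For readers unfamiliar with the framework, a minimal one-dimensional sketch of the generic localization-refinement scheme of [1] follows. This is purely illustrative: the bin width, range, and budget split are our own assumptions, not the exact algorithm of [1] or of our paper.

```python
import numpy as np

def two_stage_private_mean(user_means, eps, bin_width=1.0, lo=-50.0, hi=50.0, rng=None):
    """Illustrative two-stage user-level DP mean estimator (1-D).
    Stage 1 privately localizes a small interval via a noisy histogram;
    stage 2 clips the user-wise means into that interval and averages
    with Laplace noise.  All constants are for illustration only."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(user_means)
    # Stage 1 (eps/2 budget): noisy histogram over coarse bins.
    edges = np.arange(lo, hi + bin_width, bin_width)
    counts, _ = np.histogram(user_means, bins=edges)
    noisy = counts + rng.laplace(scale=2.0 / eps, size=len(counts))
    k = int(np.argmax(noisy))
    # Widen the winning bin so that most user means fall inside.
    a, b = edges[k] - bin_width, edges[k + 1] + bin_width
    # Stage 2 (eps/2 budget): clip into [a, b]; the clipped mean has
    # sensitivity (b - a) / n with respect to replacing one user.
    clipped = np.clip(user_means, a, b)
    return clipped.mean() + rng.laplace(scale=2.0 * (b - a) / (eps * n))
```

The relevant point for the discussion above: the clipping interval $[a, b]$ is fixed before the refinement stage, so the scheme is not adaptive to the realized local sensitivity; an end-to-end method would instead have to certify that sensitivity directly, including the low-probability extreme cases.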
**2.The authors should clearly explain why they chose the Huber loss, highlighting its advantages. Furthermore, they should clarify whether their proposed method involves any deep innovation beyond the simple use of the Huber loss, or if it merely employs the Huber loss without additional enhancements.**
(1) Why Huber loss: the Huber loss combines the $\ell_2$ and $\ell_1$ losses and strikes a tradeoff between bias and sensitivity. The Huber loss minimizer is a widely used method in robust statistics. Moreover, robustness can be converted to DP.
(2) Advantages: as has been discussed in multiple places in the paper, there are two advantages of our work: improved performance for heavy tailed distributions, and suitability to imbalanced datasets.
(3) Innovation beyond the use of Huber loss: the Huber loss was originally defined for scalars. In eq. (4) of the paper, it is defined for multi-dimensional vectors. Apart from this generalization to high dimensions, a more important technical novelty is the analysis of the local sensitivity and the design of the noise, which is the main challenge.
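As an illustration of points (1) and (2), here is a hedged sketch of a (non-private) multivariate Huber-loss mean estimator. It uses the standard norm-based Huber loss, which may differ in constants from eq. (4) of the paper, and it omits the noise-addition step that is the paper's main technical contribution.

```python
import numpy as np

def huber_mean(X, T, iters=100):
    """Minimize sum_i l(x_i - mu) over mu, where l is the norm-based
    Huber loss: l(x) = ||x||^2 / 2 if ||x|| <= T, else T ||x|| - T^2 / 2.
    Solved by iteratively reweighted least squares: each point gets
    weight min(1, T / ||x_i - mu||), so far-away points are downweighted
    exactly as in the l1 regime of the loss."""
    mu = np.median(X, axis=0)  # robust initialization
    for _ in range(iters):
        r = np.linalg.norm(X - mu, axis=1)
        w = np.minimum(1.0, T / np.maximum(r, 1e-12))
        mu = (w[:, None] * X).sum(axis=0) / w.sum()
    return mu
```

A small $T$ pulls the estimator toward the geometric median (robust to heavy tails and poisoned points), while $T \to \infty$ recovers the ordinary sample mean; this is the bias/sensitivity tradeoff mentioned in (1).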
**3.The authors' work bears similarities to "Private Mean Estimation with Person-Level Differential Privacy." The authors should discuss the distinctions between their work and this paper.**
The paper [2] is indeed an important independent work that is worth discussing. However, it was posted on arXiv after the NeurIPS submission deadline, so we were not aware of it at the time of submission.
Actually, [2] has already provided a fruitful discussion in its updated version (see Section 1.3.1, arXiv:2405.20405). It mentions that [2] uses a directional bound, while we use a non-directional bound; our assumption is slightly weaker than that of [2]. After the necessary rescaling, our Theorem 3 matches Theorem 4.1 in [2].
We would like to comment further on the difference in methods. [2] still uses the two-stage method, with some refinements to handle the tails. Instead, our method is a direct Huber loss minimization approach, which is easier to implement and requires less computation time. ([2] does not analyze the time complexity; based on our understanding, it is not linear.)
**4.The authors should clarify whether their method has practical applications and specify the scale of data it can handle. Additionally, a complexity analysis of the method should be conducted.**
(1) Practical applications. As discussed in the last paragraph of the conclusion, for practical applications in federated learning, the remaining issue is that the method requires $n\gtrsim d$, where $d$ is the number of model parameters. In modern deep learning applications, $d$ is usually large, so this condition is not satisfied. There are several potential solutions: sparse DP mean estimation methods, and top-k gradient selection in federated learning.
(2) Complexity. This has been discussed in lines 154-158 of the paper. Further discussion is provided in Appendix A.
References
[1] Levy et al. Learning with user-level privacy. NeurIPS 2021. (Ref.[27] in the paper)
[2] Agarwal et al. Private Mean Estimation with Person-Level Differential Privacy. arXiv:2405.20405
---
Rebuttal 2:
Comment: Thank you very much for the author's response, which has largely addressed my concerns. Overall, this paper demonstrates innovation and theoretical depth. Therefore, I will maintain a positive score. I hope the authors will include the discussed content in the final revised version to further improve the paper.
---
Rebuttal Comment 2.1:
Comment: Thank you very much for your reply. If you have any remaining questions or suggestions for us, please let us know. | null | null | Rebuttal 1:
Rebuttal: We thank all reviewers for their reviews. We are encouraged that the reviewers have positive views on the mathematical solidity and practical value (Reviewer Bmi5), novelty (Reviewer BDM5), and presentation (Reviewer zYzW) of this paper.
Detailed responses to each review are provided below. We look forward to your replies, so that we can engage in further discussion.
Regarding the weakness 6 raised by reviewer zYzW: There are some issues in Figure 2(b) in the paper. We have attached the revised figure here.
Pdf: /pdf/623d6e2f196fe31b318eba1ec100745e7c8c6ec6.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Trajectory Data Suffices for Statistically Efficient Learning in Offline RL with Linear $q^\pi$-Realizability and Concentrability | Accept (poster) | Summary: This paper proves that finite-horizon offline RL under the linear $q^\pi$-realizability assumption can be solved efficiently (in terms of sample complexity) if the data are trajectories collected by a policy with a bounded concentrability coefficient (that is, the density ratio between the state-action distribution induced by any policy and the data distribution is upper bounded). The result highlights the effect of trajectory data by sharply contrasting with the existing impossibility result, where the data can come from an arbitrary state-action distribution with a bounded concentrability coefficient.
Strengths: To the best of my knowledge, this paper solves a long-standing open question of offline RL with linear q^\pi realizability. The observation about trajectory data versus general offline data is solid and interesting.
Weaknesses: I don’t think I completely follow the proof sketch shown in Sections 4 & 5, given the complexity of the proofs, and some of the proof intuitions could be made more explicit instead of referring to some lemmas in the appendix. For example:
- What is the reason that $G=\bar{G}$ passes the condition (14)? Is it because the concentrability assumption plus Lemma 4.2 results in a tight confidence interval for the q-value?
- One thing I do not follow is how the algorithm eliminates incorrect guesses $G$. If I understand correctly, Lemma 4.2 only shows that linear q^\pi-realizable MDPs with skips can be approximated by linear MDPs under the true guess. Then for an incorrect guess $G’$, are the modified MDPs still linear? If not, does optimization problem 1 still give some meaningful guarantees so that the algorithm can eliminate these cases?
In addition, since optimization problem 1 is mostly built upon prior work except for condition (14), it could be better to emphasize this new condition.
Minor issue:
- In the statement of problem 1, n does not depend on \delta.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see above.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: The authors adequately addressed the limitations and potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their helpful feedback.
> What is the reason that $G = \bar G$ passes the condition (14)? Is it because the concentrability assumption plus Lemma 4.2 results in a tight confidence interval for the q-value?
Yes, it is precisely as you say “because the concentrability assumption plus Lemma 4.2 results in a tight confidence interval for the q-value”. Lemma 4.2 establishes that the targets of the least-squares regressions in optimization problem 1 are linearly realizable with the features (because skipping with $\bar G$ results in an approximately linear MDP). Concentrability is then used to bound the average confidence of the least-squares predictor.
**One thing I do not follow is how the algorithm eliminates incorrect guesses $G$. If I understand correctly, Lemma 4.2 only shows that linear $q^\pi$-realizable MDPs with skips can be approximated by linear MDPs under the true guess. Then for an incorrect guess $\bar G$, are the modified MDPs still linear? If not, does optimization problem 1 still give some meaningful guarantees so that the algorithm can eliminate these cases? In addition, since optimization problem 1 is mostly built upon prior work except for condition (14), it could be better to emphasize this new condition.**
Regarding “how the algorithm eliminates incorrect guesses”: it doesn't (necessarily). For an incorrect guess the modified MDP may not be linear, as you point out. In a nutshell, there are two key guarantees for optimization problem 1: (1) that specifically $G=\bar G$ is not eliminated, and (2) that any $G$ that passes condition (14) leads to an accurate value estimation (even if the associated modified MDP isn't linear).
To show guarantee (2), we use $q^\pi$-realizability only (no linear MDP properties) to prove that the true parameter $\psi_h(\pi^\star_G)$ realizing $q^{\pi^\star_G}$ for stage $h$ is included in $\Theta_{G, h}$ (Lemma C.2). Then the left hand side of condition (14) is exactly the confidence range for our estimator of $q^{\pi^\star_G}$. Condition (14) thus directly constrains these confidence ranges to be tight, leading to accurate estimators.
Guarantee (1) gives legitimacy to including condition (14) in the optimization problem, by arguing that at least one choice ($G=\bar G$) leading to a near-optimal policy value is considered by optimization problem 1. Therefore, whatever $G$ the optimization problem ends up choosing can only have a value estimation at least as large as this near-optimal value. This value estimation is accurate by guarantee (2), finishing the proof.
We will follow your suggestion to better highlight condition (14), and we will make sure that the proof intuition above is clearly communicated in the paper.
> In the statement of problem 1, n does not depend on \delta.
Thanks for spotting the error in problem 1. We will revise it to $n = \text{poly}(1/\epsilon, H, d, C_\text{conc}, \log(1/\delta), \log(1/L_1), \log(1/L_2))$. | Summary: This paper presents an important theoretical result in offline reinforcement learning (RL) with linear function approximation. The authors show that under the assumptions of linear q-realizability, concentrability, and access to full trajectory data, it is possible to efficiently learn an ε-optimal policy with a sample complexity that scales polynomially in the relevant problem parameters (horizon H, feature dimension d, concentrability coefficient Cconc) and inversely in the desired accuracy ε. This is in contrast to previous negative results showing exponential sample complexity lower bounds when only having access to individual transitions.
Strengths: This is a strong theoretical contribution that significantly advances our understanding of the statistical complexity of offline RL under linear function approximation. The results are of high interest to the RL theory community.
The theoretical analysis is rigorous and the proofs seem correct. The assumptions are clearly stated and discussed. The authors also provide a thoughtful discussion of the limitations of their work, including the open question of computational efficiency and the restrictive nature of the linear q-realizability assumption.
Weaknesses: It would be beneficial if the authors could add the concrete definition of the previous non-trajectory data to make a direct comparison with full length trajectory data (Assumtpion 2). Also, the authors could add some comments on the hardness results without the trajectory data after the added definition.
In Assumption 2, the notation $\phi(s_h^1, \cdot)$ is not clearly defined. It may lead to the meaning of $\phi(s_h^1, a)$ for all $a$.
Given the lengthy proof with complex notations and limited review time, the proof details are challenging to follow. Adding more discussion on algorithm design, including a pseudocode, and explaining the intuitions behind the proof would be helpful.
One potential limitation is the absence of experimental results, which means the practical relevance of the theoretical findings is not directly demonstrated. However, considering the focus on fundamental statistical limits, the lack of experiments is understandable.
Technical Quality: 3
Clarity: 2
Questions for Authors: na
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Limitations are discussed in Section 6.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their helpful feedback.
> It would be beneficial if the authors could add the concrete definition of the previous non-trajectory data to make a direct comparison with full length trajectory data (Assumtpion 2). Also, the authors could add some comments on the hardness results without the trajectory data after the added definition.
We will add the definition of non-trajectory data and better highlight the corresponding negative result and how it contrasts with our positive one.
> In Assumption 2, the notation $\phi(s_{h}^{1}, \cdot)$ is not clearly defined. It may lead to the meaning of $\phi(s_{h}^{1}, a)$ for all $a$.
The learner is actually given access to $\phi(s_h^1, a)$ for all $a \in \mathcal{A}$. Note that features corresponding to all actions are required for the optimization problem to be able to evaluate any choice of action. We will clarify the notation of Assumption 2 to reflect this.
> Given the lengthy proof with complex notations and limited review time, the proof details are challenging to follow. Adding more discussion on algorithm design, including a pseudocode, and explaining the intuitions behind the proof would be helpful.
Although the algorithm itself is conceptually simple (solving the optimization problem and outputting the policy defined in Eq. (15)), we agree that its presentation was fragmented. Currently, the algorithm definition is only mentioned in one sentence directly above Eq. (15), making it easy to miss. To enhance clarity, we will add an algorithm pseudocode block to subsection 4.4 ("Learner") that explicitly outlines the algorithm steps. We will also improve the presentation of the intuition; see our response to reviewer sDBe for more details on this.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I still believe this paper makes a valuable contribution to an important problem. Therefore, I have decide to maintain my positive score. | Summary: This paper considers the problem of learning the value ($Q$) function under q^{\pi} realizability and concentration assumption. The major contribution is to use trajectory data instead of independent samples to learn the target function, where negative results have been proven with independent samples.
Strengths: The problem is definitely challenging, in particular given the negative results with independent samples.
Weaknesses: The presentation is a problem in this work. The notation system is not reader-friendly, and there is no algorithm block. I suggest adding an algorithm block to make the input and output clear.
The high-level idea is not clear. The authors spend too much space on introducing their methods, and there is no explanation about the hard term in learning with independent terms. For example, why should we skip the states with small range(s)? It is hard for readers to verify the result through touching the high-level intuition.
Another concern is about the computational efficiency. Seems there is no evidence the optimization problem could be resolved efficiently.
Technical Quality: 3
Clarity: 2
Questions for Authors: I do not have any further questions.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their helpful feedback.
> The presentation is a problem in this work. The notation system is not reader-friendly, and there is no algorithm block. I suggest adding an algorithm block to make the input and output clear.
As we can have an additional page in the camera-ready, assuming the paper gets accepted, we will be happy to add an algorithm (solving the optimization problem and outputting the policy defined in Eq. (15)) to subsection 4.4 ("Learner") with clear inputs and outputs; thanks for pointing this out.
Regarding the notation, we thought a lot about how to present it in a clear and precise way, given the inherent complexity of dealing with many “modified MDPs” at once. We are keen to find ways to improve it further; could you please point to any specific notations that did not read well? Any further specific suggestions would be much appreciated.
> The high-level idea is not clear.
We will improve the presentation of the high-level ideas; see our response to reviewer sDBe for more details on this.
> The authors spend too much space on introducing their methods, and there is no explanation about the hard term in learning with independent terms.
We are having trouble understanding what you mean by "hard term in learning with independent terms". Could you clarify what you mean by this?
If you mean why trajectory data is crucial for our method, this is discussed in Section 4.2: “it is because we have full length trajectories that we can transform the data available to simulate arbitrary length skipping mechanisms.” In other words, without trajectory data, it is not possible to transform the existing data to data that one would have collected had one worked with a “skipping MDP”.
> For example, why should we skip the states with small range(s)?
This is discussed in Section 4.3. We will improve this section to make sure the intuition that follows is clear. In a nutshell, if we could skip the states with small range(s), Lemma 4.2 shows the “modified MDP” would be approximately linear, transforming the problem to one we already know how to solve (e.g., with Eleanor [Zanette et al., 2020]). This would be great, but sadly it is hard to directly learn which states have small ranges. Instead, we use Lemma 4.2 as an analytical tool, as follows. We show that if we were to skip these states, the value function estimates in optimization problem 1 would be accurate. This guarantees that at least one near-optimal solution is considered by the optimizer. Without skipping, optimization problem 1 could optimize over the empty set, as it is possible that condition (14) would never be satisfied.
> It is hard for readers to verify the result through touching the high-level intuition.
We hope that including the above, as well as the high-level argument using guarantees (1) and (2) in our response to reviewer sDBe will significantly improve this shortcoming.
> Another concern is about the computational efficiency. Seems there is no evidence the optimization problem could be resolved efficiently.
The paper already acknowledges that computational efficiency is a concern. Through the skipping mechanism, our method inherently introduces complicated nonlinearities into the value function estimation. It is an open question whether a computationally efficient solution to the problem considered in this paper exists at all. Our work addresses only statistical complexity, discovering a polynomial-versus-exponential complexity divide between trajectory data and individual transitions, which we found quite interesting. We think this contribution is of high interest and significance, and thus warrants publication at NeurIPS even if the question of efficient computation is left for future work.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I think I have the same concern as reviewer sDBe about how to eliminate the misleading value functions (which exist in the case with independent samples). It would be helpful if you could show how your method addresses the hard instance in previous work [Foster et al., 2021] and then summarize the high-level idea for generalizing to other instances. Also, it would be helpful to present some toy examples (e.g., a tabular MDP) to show why it is necessary to skip some states. Because I have not checked the results in detail, I will keep my score and reset my confidence to 2.
---
Reply to Comment 1.1.1:
Comment: > I think I have the same concern as reviewer sDBe about how to eliminate the misleading value functions (which exists in the case with independent samples)
We gave a detailed explanation in response to reviewer sDBe about how the optimization does not necessarily eliminate all G that don’t lead to linear MDPs. Instead it guarantees (1) and (2), for which trajectory data is necessary. It would not be possible to show that guarantees (1) and (2) hold if we had general data of the form used in [Foster et al., 2021].
> It would be helpful if you can show how your methods address the hard instance in previous work [Foster et.al., 2021] and then summarize the high-level idea to generalize to other instances.
An important thing to note is that the lower bound constructions in [Foster et al., 2021] do not use trajectory data (otherwise our results would be a contradiction). Our method addresses the hard instance from [Foster et al., 2021] if the data is given as complete trajectories. The intuition for why our algorithm works if it has trajectory data has hopefully been addressed by our original comments to you and reviewer sDBe. Below we explain why the lower bound constructions break down if they need to use trajectory data, and why our algorithm breaks down if it doesn’t have trajectory data.
The lower bound constructions in Theorems 1.1 and 1.2 of [Foster et al., 2021] were both made hard because the data collection distributions of individual transition tuples $(s, a, r, s’)$ were selected such that they reveal no (or almost no) information about the MDP instance. In both cases, receiving samples from the joint distribution of the entire trajectory makes the problem easy. In the case of Theorem 1.1, one would simply observe which states are reachable from the start state (the planted states). For Theorem 1.2, some information on whether any next-state $s’$ is planted or not would be leaking in each trajectory, in the form of being able to observe the next-state transition from exactly $s’$.
A simpler example showing the root of the problem with individual transition data is as follows. Consider the toy problem of learning the value of some policy $\pi$ after taking action $a$ in state $s_1$ in a 2-stage MDP. The data is given as tuples of the form $(s_1, a, r_1^1, s_2^1, \dots, s_1, a, r_1^n, s_2^n)$ for the first stage and $(\bar s_2^1, a_2^1, r_2^1, \bar s_3^1, \dots, \bar s_2^n, a_2^n, r_2^n, \bar s_3^n)$ for the second stage. Notice there is no guarantee that $\bar s_2^j \sim P(s_1, a)$ with $j \in [n]$. We cannot infer what the rewards from the second-stage states distributed as $P(s_1, a)$ might look like from the data, making this problem hopelessly hard. In the extreme, the MDP might have infinitely many second-stage states, with the probability of any $s_2^j = \bar s_2^k$ (for any $j$ and $k$) being 0, highlighting that one cannot just “connect” and “importance weight” matching next-states $s_2^j$ of the first-stage transitions with matching start-states $\bar s_2^k$ of the second-stage transitions. In contrast, if we assume the data is such that $s_2^j = \bar s_2^j$ (notice this is exactly our “trajectory data”), this problem is immediately avoided as samples from $P(s_1, a)$, along with rewards from those states are directly handed to the learner. The learner can then simply use all of the rewards $r_2^j$ from tuples that contain the action $\pi(s_2^j)$ (which we have on average at least $~ 1/ C_\text{conc}$ of, due to concentrability) to estimate the value of policy $\pi$ after taking $s_1, a$ (solving the toy problem).
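To make the toy problem concrete, here is a hedged numerical sketch (our own illustration, not the paper's algorithm; the reward structure, transition probabilities, and logging policy are invented for the example). With trajectory data ($s_2^j = \bar s_2^j$), the learner estimates the value of $\pi$ after $(s_1, a)$ by averaging second-stage rewards from trajectories whose logged action agrees with $\pi$:

```python
import numpy as np

# Hedged toy sketch; all constants are invented for illustration.
rng = np.random.default_rng(0)
n = 50_000

# Stage 1: from (s1, a) we observe a reward r1 and a next state s2 in {0, 1}.
r1 = np.ones(n)                           # deterministic first-stage reward
s2 = rng.choice(2, size=n, p=[0.3, 0.7])  # samples from P(s1, a)

# Stage 2 of the SAME trajectories: a logged action and its reward.
a2 = rng.choice(2, size=n)                # uniform logging policy
r2 = (a2 == s2).astype(float)             # reward 1 iff action matches state

pi = lambda s: s                          # target policy: play the state index

# Trajectory data hands us (s2, a2, r2) with s2 ~ P(s1, a), so we can keep
# only the trajectories where a2 agrees with pi(s2) -- roughly a 1/C_conc
# fraction, as in the concentrability argument -- and average their rewards.
match = a2 == pi(s2)
v_hat = r1.mean() + r2[match].mean()
print(round(v_hat, 2))  # → 2.0 (true value: r1 + E[r2 | a2 = pi(s2)] = 1 + 1)
```

With individual transition tuples instead, the second-stage start states $\bar s_2^j$ need not be distributed as $P(s_1, a)$, so no such filter-and-average estimator is available.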
Now consider our algorithm if we do not have trajectory data. In this case we are no longer able to construct least squares targets of the form needed to make use of Lemma 4.2 (discussed in Section 4.2 “The benefit of Trajectory Data”). This means that we would not be able to guarantee that our targets are linear, even under the true skipping mechanism $\bar G$, implying that $\bar G$ might not be a feasible solution to our optimization problem. Then our optimism argument that the output of the optimization problem has a value estimate at least as large as the value estimate based on $\bar G$ would no longer hold, causing our whole proof strategy to break down.
We indicated to reviewer eyWj that we will add the definition of non-trajectory data (i.e., individual transition tuples) and will use the explanation above to better highlight the hardness result in the revised version of our paper.
> Also it would be helpful to present some toy examples (e.g., tabular MDP) to show why it is necessary to skip some states.
The work of [Weisz et al., 2023] originally presented this skipping mechanism. We believe that Figure 1 from their work does a good job of illustrating why skipping low-range states makes an MDP linear. We will add a similar example to our paper to make it more self-contained. | null | null | null | NeurIPS_2024_submissions_huggingface | 2024 | null | null | null | null | null | null | null | null |
Mixture of neural fields for heterogeneous reconstruction in cryo-EM | Accept (poster) | Summary: This paper introduces a novel method, Hydra, for ab initio heterogeneous cryo-EM reconstruction. Different from existing approaches, Hydra separately models conformational and compositional heterogeneity by integrating K-parameterized neural fields to represent cryo-EM density maps. Furthermore, Hydra employs a hybrid optimization strategy to optimize particle poses, heterogeneity, and density map representations concurrently. The authors assess the efficacy of Hydra on three datasets comprising various protein complexes (two synthetic, one experimental). Extensive experimental results indicate that Hydra outperforms three baseline methods regarding reconstruction quality, particle classification, and pose estimation accuracy.
Strengths: This paper aims to tackle ab-initio reconstruction – simultaneously estimating poses and reconstructing 3D structure, which is one of the most challenging problems in cryo-EM. The scope of the problem has been further extended to a more challenging setting by assuming the captured particle images exhibit the motions of structure and conformational heterogeneity. Thus, the problem setting is novel and challenging.
This paper proposes Hydra, the first ab-initio heterogeneous cryo-EM reconstruction method based on a mixture of neural networks to estimate conformational and compositional heterogeneity in the training process simultaneously.
Weaknesses: **Synthetic Dataset**
The number of images in the tomotwin3 synthetic dataset is too small to match the real cryo-EM setting; there are only 3000 particle images with a very low SNR of 0.01 (tomotwin3). In a real scenario, there would be 50,000 to more than 100,000 particles with conformational variability. Without reporting the 3D resolutions of the results, it is very hard to quantitatively evaluate the reconstruction results. Additionally, this synthetic dataset lacks conformational variability, which hinders the evaluation of conformational variability recovery. The combination ratio of the three types of particles is not explored either; it remains unclear if Hydra would be sensitive to a class with only a small ratio of particle numbers. In Section 4.3, the dataset settings that include pre-catalytic spliceosome, 80S ribosome, and SARS-CoV-2 spike protein are unrealistic. To sum up, I recognize a significant gap between this synthetic dataset and real datasets, and I feel the experiment is not sufficient to evaluate Hydra adequately.
**Reconstruction Quality Evaluation**
For qualitative evaluation, the differences between the various states in Figure 4 are very subtle, making it difficult to judge whether the surface changes are due to different conformations or the result of applying different thresholds to the density map. Also, I argue that cryoDRGN-AI and cryoDRGN2 also account for conformational heterogeneity without explicitly classifying particles; their qualitative results should also be compared in Figure 4.
For quantitative evaluation in Table 1 and Table 2, the authors only use Img-FSC to compare reconstruction quality. To the best of my knowledge, prior work such as cryoSPARC and cryoDRGN reports widely used 3D resolution calculated by thresholding the FSC curve between two half maps of the reconstruction for experimental datasets or between the reconstructed 3D density map and the synthetic ground truth density map (available for synthetic datasets).
**Choice of Metrics for Pose Evaluation**
The authors use the median Frobenius norm as the metric to compare pose error, which ignores the translation part. Referring to DRGN-AI, the in-plane and out-of-plane angle error, along with the translation error, should be reported. Additionally, showing the angle distribution of poses for comparison could be beneficial.
**Ablation Study**
This paper misses an ablation study on the number of classes K. I would like to know how to determine K in real cases. In this paper, the authors run CryoSPARC multiple times to determine the best K for H.
**Miscellaneous**
1. The chirality of the reconstruction results for the pre-catalytic spliceosome in Figure 4 appears to be incorrect.
2. The construction of the ribosplike dataset involves a mixture of three different proteins. In real single-particle cryo-EM experiments, this scenario seems rare, as purified samples are carefully prepared and should contain no (or only a very small fraction of) undesired particles, which can be easily filtered out in 2D classification. Maybe in cryo-ET, Hydra could perform one-for-all sub-tomo averaging?
3. What’s the protocol to run CryoSPARC on a synthetic dataset? If it performs so well in Table 1, what is the advantage of Hydra?
4. This paper seems rushed. In Lines 57 – 62, the first three contributions have basically the same meaning; please consider rephrasing them.
Technical Quality: 2
Clarity: 3
Questions for Authors: My major concerns have been listed in the weakness section in this review, along with some of my confusion and questions. I list the important ones:
- Why is the synthetic dataset limited to 3000 particle images in a very low SNR of 0.01 for the tomotwin3 dataset? In my experience, this limited number usually causes poor reconstruction results.
- Can you provide the 3D resolutions of the reconstruction results to facilitate a more quantitative assessment?
- Have you tested how Hydra handles classes with a small ratio of particle numbers, given that the combination ratio of the three types of particles is not explored?
- The differences between the various states in Figure 4 are very subtle. How do you ensure that the observed surface changes are due to different conformations and not due to varying density map thresholds?
- Could you include qualitative comparisons with cryoDRGN-AI and cryoDRGN2, as these methods also address conformational heterogeneity without explicit particle classification?
- How do you sample the different conformation states in the conformational latent space? (like the white circular points in the latent space in Figure 4.)
- The median Frobenius norm metric ignores translation errors. Could you also report the in-plane and out-of-plane angle errors, along with the translation error, similar to DRGN-AI?
- What is the protocol for running CryoSPARC on a synthetic dataset, and what advantages does Hydra offer if CryoSPARC performs well in Table 1?
- How is the runtime efficiency of the Hydra algorithm compared to other algorithms?
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 1
Limitations: The authors have addressed the limitations of Hydra in the main paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank our reviewer for their comments on the significance and the difficulty of the problem our method addresses. We hope to address their concerns in the following response.
**Synthetic Datasets**
To further validate Hydra on the tomotwin dataset, we generated a larger version of this dataset (30,000 particles) with the same SNR. Results are shown in Figure A2 (see document attached to the shared rebuttal).
For synthetic data like tomotwin, we use the ground-truth maps to quantitatively evaluate the reconstruction results. Following our reviewer’s suggestions, we provide 3D resolutions for the larger tomotwin dataset (30,000 particles) in Table A1 of the attached document.
The tomotwin and ribosplike datasets are built with a uniform distribution over the three possible states, while the three states in the experimental dataset (RyR) are present in very different proportions (RyR: 71%, p97: 8%, CIII: 13%, junk: 8%, according to cryoSPARC and Hydra, Fig 3.b). We demonstrate Hydra’s ability to process all these datasets, confirming the possibility to handle different types of distributions over classes.
We acknowledge the existence of a gap between the SNR in synthetic datasets vs. experimental datasets. However, for synthetic datasets, we compare Hydra to baselines (cryoSPARC, cryoDRGN2, DRGN-AI) using the same level of noise and show an improvement in reconstruction quality (Table 1, Table 2, Fig S7). Our experiments on the real dataset validate Hydra’s ability to handle realistic noise levels and to outperform existing methods (Fig 3.b, S2, S6).
**Reconstruction Quality Evaluation**
- We will include movies of the states shown in Fig 4 to better show their conformational changes.
- Per-image FSC measures per-particle 3D resolution on synthetic datasets (where ground truth maps are available) and is a standard metric for assessing the reconstruction quality of methods that can represent conformational (continuous) heterogeneity. We refer, for example, to Fig 3 and Supplementary Figure 1 of the cryoDRGN paper [1]. For the tomotwin dataset, which only contains compositional heterogeneity, we provide per-class 3D resolutions with a threshold at 0.5 FSC in Table A1 (see document attached to the shared rebuttal).
[1] Zhong, E. D., Bepler, T., Berger, B., & Davis, J. H. (2021). CryoDRGN: reconstruction of heterogeneous cryo-EM structures using neural networks. Nature methods, 18(2), 176-185.
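For concreteness, the map-to-map FSC and a 0.5-cutoff resolution estimate mentioned above can be computed roughly as follows — a minimal sketch of the standard definition (our own code, with no masking or half-map handling, both of which a real pipeline would apply; function names are ours):

```python
import numpy as np

def fsc_curve(v1, v2):
    """Fourier Shell Correlation between two cubic density maps."""
    n = v1.shape[0]
    f1, f2 = np.fft.fftn(v1), np.fft.fftn(v2)
    freq = np.fft.fftfreq(n) * n                  # integer frequency indices
    kx, ky, kz = np.meshgrid(freq, freq, freq, indexing="ij")
    r = np.sqrt(kx**2 + ky**2 + kz**2).round().astype(int)
    shells = np.arange(1, n // 2)
    fsc = np.empty(len(shells))
    for i, s in enumerate(shells):
        m = r == s
        num = np.real(np.sum(f1[m] * np.conj(f2[m])))
        den = np.sqrt(np.sum(np.abs(f1[m]) ** 2) * np.sum(np.abs(f2[m]) ** 2))
        fsc[i] = num / den
    return shells, fsc

def resolution_at(shells, fsc, n, apix, cutoff=0.5):
    """Resolution (Angstrom) where the FSC first drops below `cutoff`."""
    below = np.where(fsc < cutoff)[0]
    s = shells[below[0]] if below.size else shells[-1]
    return n * apix / s          # shell index -> 1 / spatial frequency

# Sanity check: identical maps correlate perfectly at every shell.
vol = np.random.default_rng(1).normal(size=(32, 32, 32))
shells, fsc = fsc_curve(vol, vol)
print(fsc.min())                 # ≈ 1.0
```

A per-image FSC applies the same shell correlation between a ground-truth map and the map reconstructed at each particle's estimated conformation, which is why it can account for continuous heterogeneity where a single half-map FSC cannot.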
**Choice of Metrics for Pose Evaluation**
In order to better assess pose accuracy, we provide per-class angular and translation errors for the tomotwin dataset in Table A1, and for the ribosplike dataset in Table A2 (see document attached to the shared rebuttal).
**Ablation Study**
Fig 2 provides an analysis on the influence of $K$. When $K$ is too low (Fig 2.a), the reconstruction is inaccurate. When $K$ is too large, some of the capacity of the model is not used, but the reconstructed states are accurate (Fig 2.b). To further stress-test Hydra, we provide additional results using $K=7$ on the tomotwin dataset in Fig A1 (see document attached to the shared rebuttal).
To obtain the optimal value of $K$, one can run Hydra several times with increasing values for $K$. Although choosing a larger-than-optimal value for $K$ would lead to unnecessary computation, the quality of the reconstruction does not degrade when $K$ is too large, as shown in Fig 2 (b vs. c).
We acknowledge that this sweeping procedure requires time, energy and memory and could be avoided with a principled way of choosing $K$. This limitation is currently mentioned in the Discussion section (L311-317), and potential mitigation strategies are suggested.
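The sweep described above could look like the following sketch. Here `train_and_score` is a hypothetical stand-in for "train Hydra with $K$ fields and return a held-out reconstruction error"; it is simulated so the snippet runs and is not the released API:

```python
# Hedged sketch of a K sweep; the scoring function below is simulated.
def train_and_score(K, true_K=3):
    # Simulated behavior: error drops until K reaches the true number of
    # states, then plateaus (mirroring Fig 2 b vs. c in the paper's terms).
    return 1.0 if K < true_K else 0.2

def choose_K(candidates=(2, 3, 5, 7), rel_tol=0.01):
    best_K, best_err = None, float("inf")
    for K in candidates:
        err = train_and_score(K)
        if err < best_err * (1 - rel_tol):
            best_K, best_err = K, err
        else:
            break  # a larger K no longer improves the reconstruction
    return best_K

print(choose_K())  # → 3 under the simulated errors above
```

Since over-large $K$ does not degrade reconstruction quality, stopping at the first plateau trades a bounded amount of extra computation for a safe choice.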
**Miscellaneous**
- We thank our reviewer for pointing out the wrong chirality of the spliceosome in Fig 4. Since the chirality of a molecule is not identifiable from cryo-EM projections, we showed the chirality that we got out of Hydra without modification. We will flip (mirror) the density map to show the correct chirality.
- While the choice of proteins in the ribosplike dataset may be unrealistic, having a mixture of different particle types in cryo-EM is not unrealistic, as evidenced by imaging of a lysate sample in the RyR experimental dataset. We agree with our reviewer: adapting this method to sub-tomogram averaging of _in situ_ mixtures is a promising avenue for future work (L320).
- We always use the default parameters of cryoSPARC, both for synthetic and experimental datasets. We will mention this in the supplements.
- Table 1 focuses on the tomotwin dataset, a synthetic dataset with strong compositional heterogeneity. The Table shows previous neural-based methods fail on this dataset. CryoSPARC performs well, but is unable to reconstruct conformational heterogeneity, which is quantitatively shown in Table 2.
- We will clarify the contributions in L57-62.
**Questions**
- We thank our reviewer for suggesting to provide access to the reconstructed maps. We will follow this suggestion and add the reconstructed maps to the supplementary material, in the _mrc_ format. To make our method fully reproducible, we will provide access to the code upon publication.
- We include qualitative comparisons to DRGN-AI and cryoDRGN2 in Fig S2, S4.b, S6 and S7.
- In Fig 4, the conformational states are manually selected in the latent space. In other figures, we use k-means clustering (L247, L940, caption of Fig S2, S4, S7).
- We evaluate the efficiency of Hydra by comparing GPU runtimes on the tomotwin dataset using a single NVIDIA A100 GPU. CryoDRGN2 required 1h50min. DRGN-AI required 1h20min. Hydra required 4h when $K=3$ and 6h20min when $K=5$. We will include this information on runtime efficiency in the supplemental.
---
Rebuttal Comment 1.1:
Comment: I sincerely appreciate the authors' huge efforts in responding to the review. I have also carefully considered the perspectives provided by other reviewers. While some of my concerns, such as the choice of evaluation metrics and the size of the synthetic dataset, have been addressed, I still find the overall quality of the paper to fall below the standard expected for NeurIPS. The main technical contributions remain unclear, and the experimental setup is lacking, particularly due to the absence of a real dataset and the low-quality reconstruction results, which are insufficient for 3D model building. Consequently, I am unable to raise my rating at this time. I recommend that the authors extend this work to cryo-ET sub-tomogram averaging, where the motivation is stronger and more substantial real-world experiments can be performed.
---
Reply to Comment 1.1.1:
Title: Clarification to response -- demonstration on real data
Comment: Thank you for reading our response. We realize there was a misunderstanding in your comment, and we apologize for the lack of clarity.
In our work, we consider two synthetic datasets for validation of our method: tomotwin and ribosplike. The first contains a mixture of 3 complexes that we use for evaluation of hyperparameters (which we updated during rebuttals to contain a realistic number of images, and with resolution metrics); the latter contains a mixture of 3 complexes with conformational motions. However, **we do additionally showcase the method on a real dataset**, collected from a lysate sample where we are able to recover the structures of the membrane protein ryanodine receptor (RyR), the mitochondrial respiratory chain complex III (CIII), and the dimeric complex of the soluble valosin-containing protein (p97). We don't emphasize it since there is no ground truth to validate the conformations; however, we feel that this dataset demonstrates that the method will be transferable to real application settings.
To answer your comment on the technical contributions, we would like to emphasize that Hydra is the first method to use a mixture of neural fields for cryo-EM reconstruction. We show that it can handle a novel and complex problem setting in cryo-EM reconstruction: the simultaneous estimation of conformational and strong compositional heterogeneity, in an _ab initio_ setting.
We also want to clarify that the new resolution estimates for the volumes are close to the Nyquist resolution for this dataset (9.2 A vs 9 A). We emphasize that these experiments were performed with 128x128 images for computational considerations for hyperparameter evaluation.
Finally, thank you for the suggestions to extend this work to cryo-ET sub-tomogram averaging (STA). We completely agree this would be an amazing showcase and hope that this work can inspire research in this direction, but we feel that adapting to a new imaging modality and data type is beyond the scope of this work. | Summary: In this paper, the authors propose Hydra, an *ab initio* approach to model conformational and compositional heterogeneity. They achieve this by parameterizing the structures of proteins as a mixture of *K* neural fields.
Strengths: Originality:
- Authors propose to incorporate neural network ensemble with recent *ab initio* reconstruction method to enhance the ability to model complex heterogeneity.
Quality:
- Through qualitative experiments, the authors demonstrate that the method is capable of clearly distinguishing different compositional states in the synthetic dataset.
- Hydra is able to identify both compositional and conformational heterogeneity in real datasets.
- The method exhibits better quantitative results compared to existing *ab initio* reconstruction methods.
Weaknesses: Novelty:
- The hierarchical pose search (HPS) method for pose estimation is proposed in DRGN-AI [1].
- Using an ensemble of representations to model heterogeneity in protein cryo-EM reconstruction has been adopted in many previous works [2][3].
[1] Levy, Axel, et al. "Revealing biomolecular structure and motion with neural ab initio cryo-EM reconstruction." *bioRxiv* (2024): 2024-05.
[2] Punjani, Ali, and David J. Fleet. "3D variability analysis: Resolving continuous flexibility and discrete heterogeneity from single particle cryo-EM." *Journal of structural biology* 213.2 (2021): 107702.
[3] Kimanius, Dari, Kiarash Jamali, and Sjors Scheres. "Sparse Fourier backpropagation in cryo-EM reconstruction." *Advances in Neural Information Processing Systems* 35 (2022): 12395-12408.
Technical Quality: 3
Clarity: 3
Questions for Authors: How would Hydra perform on EMPIAR-10076, one widely used dataset with complex compositional heterogeneity?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors discussed the limitations in their paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank our reviewer for their comments. We hope to address them in the following response.
**Distinction with Previous Works**
Hydra uses the pose search strategy and autodecoding framework from DRGN-AI [1]. It primarily differs by using several neural networks, “latent scores” (L197) and a new loss function involving a marginalization over the possible classes (Eq 7). Compared to DRGN-AI, Hydra broadens the scope of datasets that can be handled. We will clarify that the pose estimation strategy was already used in DRGN-AI (it is only adapted to consider the simultaneous representation of several classes).
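To illustrate the kind of marginalized objective being described, here is a toy sketch with random arrays standing in for the neural fields and projections (shapes and names are ours, not the paper's implementation of Eq 7):

```python
import numpy as np

rng = np.random.default_rng(3)
N, K, D = 4, 3, 100        # images, classes, flattened image dimension
sigma = 1.0

y = rng.normal(size=(N, D))          # observed (noisy) projections
recon = rng.normal(size=(N, K, D))   # per-class reconstructed projections
scores = rng.normal(size=(N, K))     # per-image "latent scores"

def marginalized_nll(y, recon, scores, sigma):
    # class probabilities from the latent scores (softmax)
    pi = np.exp(scores - scores.max(1, keepdims=True))
    pi /= pi.sum(1, keepdims=True)
    # per-class Gaussian log-likelihood of each image
    logp = -((y[:, None, :] - recon) ** 2).sum(-1) / (2 * sigma ** 2)
    # stable log-sum-exp marginalization over the K classes
    t = np.log(pi) + logp
    m = t.max(1, keepdims=True)
    return -(m[:, 0] + np.log(np.exp(t - m).sum(1)))

nll = marginalized_nll(y, recon, scores, sigma)
print(nll.shape)  # → (4,): one marginalized loss term per image
```

In this toy form, making any one class's reconstruction match an image exactly can only decrease that image's loss, which is the sense in which a marginalized objective lets the best-fitting class "explain" each particle without a hard upfront assignment.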
We thank our reviewer for pointing out the relevant reference [3], which we will cite. Although the references [2] (3DVA) and [3] reconstruct a combination of 3D arrays, those are not density maps but rather vectors with the same dimension as density maps, representing the basis of a low-dimensional linear space. The conformation heterogeneity is then represented in the linear space spanned by these vectors (the “structural basis” [3]). Unlike Hydra, 3DVA (cited on L34 and L76) does not handle compositional heterogeneity (L38).
**EMPIAR-10076**
We thank our reviewer for this suggestion. EMPIAR-10076 (assembling ribosome) would indeed be a relevant dataset to demonstrate our method. We will run additional experiments on this dataset and try to include results in the supplement.
---
Rebuttal Comment 1.1:
Comment: I sincerely appreciate authors' clarification. But based on my understanding, 3DVA can handle compositional heterogeneity. In the fig. 9 of the bioRxiv version of 3DVA paper, they showed results on EMPIAR-10076, a dataset with complex compositional heterogeneity.
---
Reply to Comment 1.1.1:
Comment: Apologies, in our last response we misstated that 3DVA does not handle compositional heterogeneity – we meant to clarify that 3DVA is not designed to handle a mixture of different species, a strong form of compositional heterogeneity. While 3DVA can technically handle compositional heterogeneity (as any density-based method does not explicitly constrain the reconstructed density maps), 3DVA uses a linear subspace of the volumes for its model of heterogeneity, and thus has an inductive bias towards modeling continuous motions of a single complex (similar to cryoDRGN and DRGN-AI). We would like to emphasize that, in Fig. 9 of 3DVA [1], the reconstructed complex is an assembling 50S ribosome where the discrete “classes” share most of their density. Finally, but perhaps most relevant, 3DVA is applied in the fixed-pose setting, typically at the end of the processing pipeline after several steps of 2D or 3D classification, where poses are typically aligned to a single static reference structure, which would not apply in the case of a mixture of proteins.
In contrast, Hydra is the first method to adopt an ensemble of independent neural representations to model heterogeneity. This approach is similar to the ubiquitous 3D classification (a discrete mixture model of voxel arrays), but is far more expressive due to its neural representation augmented with latent scores. Thus, we demonstrate Hydra on datasets containing a mixture of different species that have conformational motions, where _ab initio_ pose estimation is necessary.
[1] Punjani, A., & Fleet, D. J. (2021). 3D variability analysis: Resolving continuous flexibility and discrete heterogeneity from single particle cryo-EM. Journal of structural biology, 213(2), 107702. | Summary: This work describes a new method for ab initio heterogeneous reconstruction in cryo-EM using mixtures of neural fields. This generalizes previous approaches, such as CryoDRGN and DRGN-AI, which attempts to reconstruct 3D molecular densities using a single neural field representation. The resulting method is able to handle both compositional (discrete) and conformational (continuous) heterogeneity, with each mixture component handling the continuous variability of each distinct compositional state. The performance of the method is evaluated on two synthetic and one experimental dataset.
Strengths: This represents a natural and well-structured extension of previous neural-field approaches to cryo-EM reconstruction. The method is well-motivated and described with an appropriate level of detail. Finally, the numerical results verify many important aspects of the proposed method. Overall, the writing is clear and easy to understand.
Weaknesses: The most important issue is the lack of experimental validation for the combined estimation of compositional and conformational heterogeneity. While this is tested in the third experiment (Section 4.3), this is only on a synthetic dataset. As the authors are no doubt aware, however, the behavior of a reconstruction algorithm can be quite different when applied to real data. It is therefore encouraging that the authors present results on an experimental dataset (Section 4.2), but this only covers compositional heterogeneity (and not conformational). That being said, validating the full method on an experimental dataset would make a stronger case for the proposed work.
Technical Quality: 3
Clarity: 4
Questions for Authors: – On line 49, please explain the principal difference between DRGN-AI and its predecessors. This is particularly important since the proposed method is quite closely related.
– What is meant by “tackle the discrete heterogeneity and the continuous variability *in sequential order*”? Is this referring to clustering the data, estimating the pose, and then estimating the continuous variability for each cluster? This could perhaps be clarified.
– Another approach to manifold embedding for continuous heterogeneity (line 82) is discussed in Moscovich et al., 2020.
– Line 130 should have “used” instead of “use”.
– The last line in the caption for Figure 1 is missing “Section” in front of “3.3”.
– If the dimension d is the same for each component in the mixture (eq. 3), how does this work when the number of degrees of freedom varies with each discrete structure?
– In eq. 4, σ_2 should be σ^2.
– Please discuss the choice of using point estimates for the pose and latent variables in eq. 6. Why is this approximation reasonable? This can be especially sensitive in the ab initio case where the can be great uncertainty in the pose estimation stage. Why do we not need to account for this here?
– Why use the area under FSC to evaluate the accuracy of the reconstruction (Table 1)? What are the advantages compared to gold-standard resolution estimates?
– Does the “strategy” on line 216 refer to the pose (eq. 9) estimation or to the overall estimation of pose and other variables? In other words, do we run ~1e6 iterations of pose estimation of ~1e6 iterations of pose estimation alternated with SGD?
– In line 227, what is meant by “a single pass”? The described algorithm seems to loop over the data several times over. Single-pass approaches usually refers to moment-based methods which pass over the data once to calculate certain statistics and then use those statistics to reconstruct the molecule.
– What is the SNR of the images described in Section 4.3? This information is important enough that it should be supplied in the main text.
– Please provide running times for the various experiments. The metric of 4 GPU-days is cited in the Discussion, but it is not clear how this relates to the number of images (is this for the 60 000-image dataset or the 3 000-image dataset?).
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: As stated above, the main limitation of the work is its lack of validation on experimental data (for both compositional and conformational variability). It is also not clear how computationally intensive the implementation is and how this can be mitigated.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank our reviewer for their constructive comments and overall positive rating of our submission.
**Validation on an Experimental Dataset Combining Compositional and Conformational Heterogeneity**
As mentioned by our reviewer, processing real cryo-EM data often comes with unforeseen and significant challenges. By demonstrating Hydra’s ability to process the “ryanodine receptor” (RyR) mixture dataset, we hope to provide evidence that our method can process experimental data, in spite of the challenges and non-idealities that come with it. We agree with our reviewer on the fact that demonstrating the unique capabilities of Hydra on an experimental dataset with mixed heterogeneity, and potentially revealing motion or complexes that could not be seen before, would constitute an exceptional scientific leap. By sharing our work (and our code), we hope to provide new capabilities to the cryo-EM community and enable it to make this leap.
**Questions**
- We will clarify the main differences between Hydra and DRGN-AI in L49: use of several neural networks, the presence of “latent scores” and a new loss function involving a marginalization over the possible classes.
- We will clarify the term “sequential order” on L79. The explanation given by our reviewer is correct.
- We will add a reference to Moscovich et al (Cryo-EM reconstruction of continuous heterogeneity by Laplacian spectral volumes, Inverse Problems, 2020) on L82. Thank you for pointing this out.
- We will add the missing character on L130.
- We will add “Section” in the caption of Fig 1.
- In the current architecture, the conformational latent vectors are stored in an $N$-by-$d$-by-$K$ array, meaning that all classes have the same latent dimension. We hypothesize that the latent dimension $d$ must be greater than or equal to the largest number of degrees of freedom among all compositional states. However, it would be possible to use per-class dimensions $d_k$, for example using a dictionary of $K$ arrays of dimensions $N$-by-$d_k$. We will mention this in the Discussion section.
- We will fix the typo in Eq 4, thank you for pointing it out.
- For poses, the point estimates are only used in the second phase of optimization (the Stochastic Gradient Descent phase). During the first phase (Hierarchical Pose Search), poses are exhaustively searched over using the strategy described in Section B (supplementary material). This HPS phase enables Hydra to perform _ab initio_ reconstruction.
- The exhaustive search (Eq 9) is only applied to poses on a predefined number of images (~1e6). The schedule describing the switch from HPS to SGD is further described in L869-871 (supplementary material).
- Following our reviewer’s suggestion, we report per-class resolutions at an FSC cutoff of 0.5 for the tomotwin dataset in Table A1 (see document attached to shared rebuttal). We note that a per-image FSC metric is better suited for the ribosplike dataset as it accounts for conformational differences in the true structural distribution.
- We recognize that the term “single pass” is misleading in L227. We will replace it with “single run”.
- The SNR for the dataset described in Section 4.3 is 0.1. We will add this information to the supplementary information, where the generation protocol is described (Section F).
- The mention of “4 GPU-days” relates to the experimental dataset (85k images). The 3,000 image dataset tomotwin3 was processed using a single NVIDIA A100 GPU and required 4h when $K=3$ and 6h20min when $K=5$. DRGN-AI required 1h20min. We will clarify this point in the supplement.
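The per-class latent storage alternative mentioned above (a single $N$-by-$d$-by-$K$ array versus a dictionary of $K$ arrays of shape $N$-by-$d_k$) can be sketched as follows (a minimal NumPy illustration with hypothetical values for $N$, $K$, $d$ and $d_k$, not the actual Hydra implementation):

```python
import numpy as np

N, K = 1000, 3       # hypothetical: number of images, number of classes
d = 2                # shared latent dimension (current architecture)
z_shared = np.zeros((N, d, K))   # one N-by-d-by-K array for all classes

# Possible alternative with per-class latent dimensions d_k:
d_k = {0: 2, 1: 4, 2: 3}         # hypothetical degrees of freedom per class
z_per_class = {k: np.zeros((N, dk)) for k, dk in d_k.items()}
```

The dictionary layout lets each compositional state carry only as many latent dimensions as its motion requires, at the cost of a slightly less uniform optimization loop.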
---
Rebuttal Comment 1.1:
Comment: Thank you for your thorough rebuttal and for your clarifications. I will keep my score at 7 as I believe this is a good paper that deserves to be published as part of the proceedings. | Summary: The paper presents a neural network-based methodology for modeling both compositional and conformational protein states in cryo-electron microscopy (Cryo-EM) 3D reconstruction.
In particular, the authors propose a fully *ab initio* approach, named Hydra, which enables the joint inference of poses, conformations, and class identities.
The novelty of this approach lies in its ability to capture both discrete (compositional) and continuous (conformational) heterogeneity within Cryo-EM datasets, without relying on pre-computed pose estimations. Previous methods have either struggled with accurate pose estimation, relied on coarse initializations and upstream algorithms, or had limited capacity to represent complex biomolecular mixtures.
Moreover, the authors validate their proposed approach by comparing it against other popular methodologies, using both synthetic and real datasets, showing the potential of Hydra to capture both compositional and conformational heterogeneity.
Strengths: The paper proposes a novel approach that improves over previous methods. In particular, the proposed method extends previous work, DRGN-AI, to use mixture models of multiple neural fields instead of a single neural field to model conformational heterogeneity. In addition, the proposed approach can directly classify the reconstructed sample into one of K different classes, providing advantages over methods that rely on downstream classification tasks. Bibliographic references are exhaustive and well-discussed. The provided results highlight the significance of the method, showing substantial improvements over three pre-existing methods.
Weaknesses: Although the manuscript is generally well-written, it may be hard to follow for someone who is not an expert in the field. In particular, there are some areas where accessibility to a broader readership could be improved:
* For me, reading some previous work was necessary to understand the context of this work sufficiently to appreciate its contributions. A clearer introduction tailored to a broader readership could make this submission more self-contained, which I would find desirable.
* The introduction lacks a straightforward definition of the taxonomy used throughout the paper. While the authors introduce the context of their research, in particular regarding cryo-EM, I think the reader should first be introduced to the concepts of “poses”, “conformational states”, and “compositional states”, and then to why they are relevant to the presented research and to future users. Casting this in a concise but clear way would be much appreciated by future readers, I believe. After that, the authors could explain how their technical contributions address the main challenges. In my opinion, such changes would allow non-EM experts to appreciate the presented work much better and potentially allow other fields to benefit from the same or similar ideas and methods.
* The proposed method seems to rely heavily on the DRGN-AI approach from Levy et al. Although the authors explicitly state in the introduction that this paper represents an extension of that method, throughout the paper it’s not always clear whether the methodological choices described are novel contributions or are unaltered from the previous method. For example, in Section 3.4, the sentence “We use the pose estimation strategy introduced in [25]” may lead the reader to think that the authors are experimenting with a new pose estimation approach from the literature, while it was already used in DRGN-AI (or at least I believe so).
* While results are clear and easy to follow, they only present standard deviations in Table 1 and not in the other tables. Moreover, it is unclear why the CryoSPARC result in Table 1 does not report any standard deviation.
* Will the authors provide a public code repository enabling others to replicate the presented results and use the method in the context of their data and experimentation?
Technical Quality: 3
Clarity: 2
Questions for Authors: - Do you think that the choice of using only 2 dimensions to represent the conformational space in Hydra may have affected the performance of your method? I think that the choice is excellent for increasing the interpretability of results since it eliminates the need to use techniques like PCA to visually understand the learned latent space. My intuition is that, thanks to the K neural fields, the conformational representation may be better partitioned in lower-dimensional spaces, but I wonder if you have also considered a higher number of dimensions.
- In Table 2, both Img-FSC and Pose errors are reported. It is unclear to me whether the pose error and the Image FSC may be somehow correlated, as, intuitively, a wrongly estimated pose could increase the reconstruction error. If that were the case, then I would be surprised to see such a small gap in Img-FSC with respect to CryoDRGN2 compared to the improvement in pose error.
- In Table 2, Hydra shows the lowest Pose error, while DRGN-AI shows the highest despite using a similar pose estimation approach. Can this performance difference be ascribed to the mixture model, or are there other factors to take into account, e.g. the dimensionality of the conformational space?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: - The main limitation of this work is the need to know the number of classes K in advance. This can be addressed by an exhaustive search for the optimal value of K; however, this exacerbates the second main limitation, which is the computational cost of the method. The authors address both limitations and propose a possible future direction to reduce computational cost.
- Another point of view of the previous limitation is related to scalability. Given a fixed computational budget, since a new neural field is required for each additional protein class, the possible choice of the number of desired classes K is limited, potentially reducing the applicability of the method in datasets that contain a higher number of different macromolecules.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank our reviewer for their constructive feedback and suggestions on ways to improve the clarity of the manuscript.
**Clarification of Prior Work and Contributions**
We thank our reviewer for pointing out the lack of clarity in the presentation of prior works and apologize for the absence of important context. We will review our introduction and make an effort to make it clear for a broader audience.
**Clarification of Field-Specific Terminology**
Again, we apologize for the oversight on using field-specific terminology (pose, conformational/compositional heterogeneity). We will clarify these terms in order to allow non-cryo-EM experts to appreciate the presented work and potentially borrow from its ideas.
**Distinction with DRGN-AI**
Hydra uses the pose search strategy and the autodecoding framework from DRGN-AI. It primarily differs by using several neural networks, “latent scores” (L197) and a new loss function involving a marginalization over the possible classes (Eq 7). By doing so, Hydra broadens the scope of datasets for cryo-EM reconstruction methods. As suggested by our reviewer, we will clarify that we use the pose estimation strategy in DRGN-AI and that it is adapted to cope with the simultaneous representation of multiple maps.
**Standard Deviations in Table 1**
The standard deviations reported in Table 1 characterize the spread of the area under the Fourier shell correlation obtained over 20 images (with different conformational states $z$) per class. This can only be measured for neural-based methods (Hydra, DRGN-AI, cryoDRGN2), hence the absence of standard deviation for cryoSPARC, which outputs a single conformational state per class.
**Code Release**
Yes, we will release our code upon publication.
**Questions**
- We agree with our reviewer’s intuition. The use of $K$ neural networks increases the expressivity of the representation and probably decreases the number of latent dimensions required to capture conformational motion. Empirically, we found that a dimension of two was sufficient to obtain accurate density maps, and to capture continuous motion in the ribosplike dataset (Fig 4). However, if Hydra had to process a dataset where one of the compositional states had strictly more than two degrees of freedom, the dimensions of the latent space would have to be increased. We hypothesize that this dimension must be greater than or equal to the largest number of global degrees of freedom among all compositional states. We will add a brief discussion on the choice of the latent dimension in the last section.
- Our reviewer’s intuition on the correlation between pose error and image-FSC is correct. In order to provide a clearer assessment of each method’s performance, we show updated metrics for the ribosplike dataset, including rotation and translation errors, in Table A2 (see document attached to the shared rebuttal).
- To show that the poor performance of DRGN-AI on the ribosplike dataset is not linked to the low dimension of the latent space, we give eight dimensions to the latent space of DRGN-AI (L244). We therefore hypothesize that Hydra’s ability to reconstruct this dataset can be ascribed to the mixture model. We provide qualitative reconstructions and a UMAP plot of the conformations obtained with DRGN-AI in Fig S7. | Rebuttal 1:
Rebuttal: We thank all our reviewers for their detailed and constructive feedback. We value their appreciation of the significance of this work **[RoiP, qUkh]**, its novelty **[14wr]** and the validation of our method on experimental data **[u3f9, 14wr]**. We appreciate them highlighting the substantial improvements demonstrated by Hydra over pre-existing methods **[RoiP, 14wr]** and the relevance of our quantitative **[SenM]** and qualitative **[14wr]** results. We are also glad they emphasized the clarity of the manuscript **[SenM]**.
We provide additional figures and tables in the attached document.
Pdf: /pdf/09e191b707eb8782468a151749c82c3ea8d24880.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This submission presents Hydra, a method for handling heterogeneous cryo-EM reconstruction. Hydra can model both conformational and compositional heterogeneity and can perform ab initio reconstruction. To achieve this, it parameterizes structures as arising from one of K neural fields. In the optimization pipeline, the conformations, poses, class probabilities, and neural fields are optimized to maximize the likelihood of the observed images. The authors demonstrate the reconstruction of multiple protein complexes from an experimental dataset.
Strengths: 1. The proposed approach can handle ab initio reconstruction, meaning it does not require pre-computed image poses.
2. The capability of handling compositional heterogeneity is not well-explored.
3. Results on experimental datasets are provided, with comparisons to baseline methods.
Weaknesses: 1. Compared to DRGN-AI, the proposed approach primarily replaces the single neural representation with multiple ones.
2. The determination of K seems tricky.
3. The paper lacks comparisons with conventional approaches such as cryoSPARC and RELION, especially qualitative comparisons. More results, preferably video results, are needed.
4. It is unclear whether the pose predictions are accurate. More visualization analysis would be helpful.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. How should the value of K be determined? How would the proposed model behave if K is too large?
2. Will the multiple neural representation design lead to excessive partitioning, that is, splitting complete independent structures into unreasonable parts?
3. How does the proposed approach compare to conventional methods like cryoSPARC and RELION in terms of reconstruction quality and efficiency?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank our reviewer for their comments. We hope to address their questions and concerns in the following response.
**Distinction with DRGN-AI**
Hydra uses the same pose search strategy and autodecoding framework as DRGN-AI. It primarily differs by using several neural networks, “latent scores” (L197) and a new loss function involving a marginalization over the possible classes (Eq 7). By doing so, Hydra broadens the scope of datasets for cryo-EM reconstruction. In particular, we demonstrate that, unlike Hydra, DRGN-AI fails at processing datasets with strong compositional heterogeneity, both on synthetic (Fig 2.a, S7.b) and experimental (Fig 3.d, S2, S6) datasets.
We will clarify, in the description of the method, what elements are borrowed from DRGN-AI.
**Determination of $K$**
The optimal value of $K$ can be obtained by running Hydra several times with increasing values for $K$. Although choosing a larger-than-optimal value for $K$ would lead to unnecessary computation, we find that the quality of the reconstruction does not degrade when $K$ is too large, as shown in Fig 2 (b vs. c). We provide additional results using $K=7$ on the tomotwin dataset in Fig A1 (see document attached to the shared rebuttal).
We acknowledge that this sweeping procedure requires time, energy and memory and could be avoided with a principled way of choosing $K$. However, other conventional approaches for heterogeneous reconstruction (cryoSPARC, RELION) are limited in the same way, while not being able to handle conformational heterogeneity (Fig S7.c). This limitation is currently mentioned in the Discussion section (L311-317), and potential mitigation strategies are suggested.
**Comparison to Conventional Approaches**
Hydra is qualitatively compared to cryoSPARC on the experimental dataset (Fig S5) and on the synthetic ribosplike dataset (Fig S7). We thank our reviewer for suggesting to provide access to the reconstructed maps. We will follow this suggestion and add the maps reconstructed with cryoSPARC, DRGN-AI and Hydra to the supplementary material, in the _mrc_ format.
**Metrics on Pose Estimation**
To better assess the accuracy of pose estimation, we provide both translation and rotation errors for the tomotwin (Table A1) and ribosplike (Table A2) datasets. For rotation errors, we use the geodesic distance in $SO(3)$ (angular error), in degrees, which is a more interpretable metric than the Frobenius norm. We will add these metrics to the supplementary material.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' rebuttal; I have read it carefully, as well as the other reviewers' opinions.
I noticed that the proposed method has been compared with cryoSPARC. What I would like to hear are some insights into the advantages of the proposed method over these traditional methods, and why structural biologists should use it instead of sticking to traditional methods.
I hope to hear the author's answer during the discussion stage to decide my final rating.
---
Reply to Comment 1.1.1:
Comment: Thank you for bringing up this discussion point. Our main motivation for Hydra is to propose a method designed to tackle a more challenging class of datasets for cryo-EM – those containing a mixture of distinct, dynamic protein complexes. We note that this is a new experimental setting for cryo-EM, thus we hope that Hydra will inspire new experimental protocols as well as other future reconstruction methods designed for this form of extreme heterogeneity.
Hydra performs _ab initio_ reconstruction, i.e. joint inference of poses and structure, to address this form of extreme heterogeneity. This contrasts with traditional tools that require input poses (e.g. cryoSPARC 3DVA, cryoSPARC 3DFlex, and most other heterogeneity analysis tools). Since poses are typically obtained from an upstream consensus reconstruction, this assumes that all images can be aligned to a static reference structure, which does not hold when analyzing a mixture of distinct complexes.
Hydra is the first method to adopt an ensemble of independent neural representations to model heterogeneity. This approach is inspired by the ubiquitous 3D classification (a discrete mixture model of voxel arrays), but is far more expressive due to its neural representation augmented with latent scores. In Fig. 4, for example, we show that Hydra reveals the conformational heterogeneity of the ribosome, the spliceosome and the spike protein, while 3D classification in cryoSPARC can only reconstruct three static states (Fig. S7.c). | null | null | null | null | null | null |
FEEL-SNN: Robust Spiking Neural Networks with Frequency Encoding and Evolutionary Leak Factor | Accept (poster) | Summary: This article proposes a robust algorithm for spiking neural networks. The algorithm includes a frequency-domain filter with a hard threshold and trainable neuron leakage parameters. The paper is motivated by biological interpretability, adopts an engineering approach in its methodology, and attempts to propose a unified robustness framework for spiking neural networks. The authors combine multiple previous methods in the experiments and report better robustness results under adversarial perturbations.
Strengths: The authors achieve better robustness against perturbations. The method was verified under GN, FGSM, PGD, BIM, and CW attacks.
I think the author's motivation is very important and urgent, consistent with the interests of NeurIPS.
Weaknesses: The authors claim that the robustness of SNNs lacks theoretical analysis, but the theoretical analysis they provide is not significantly different from the theoretical analysis in StoG; the conclusions presented in the paper corresponding to the StoG method are similar to the theory proposed by the authors. The innovation point here is not clear.
The authors motivate frequency encoding cognitively, i.e., by the selective visual attention mechanism, rather than at the coding level, which is inconsistent with the motivation behind the variable dynamic parameters proposed later.
The two methods proposed by the authors, FE and EL, lack a detailed ablation study establishing the effectiveness of each module; in particular, the performance of using the EL method alone is missing.
What is the difference between the EL method proposed by the authors and the method proposed by Ding et al. in ICML 2024?
[Ding et al., 2024] https://arxiv.org/abs/2405.20694.
Technical Quality: 2
Clarity: 3
Questions for Authors: How did the authors train a network containing frequency-domain filtering (FE)? What backpropagation method is used to backpropagate through the FE module (i.e., which method and tools are used, and what is the speed of backpropagation)?
Although the FE module proposed by the authors improves the robustness of spiking neural networks, I would like to ask whether FE disrupts the low-energy-consumption characteristics of spiking neural networks. After all, frequency-domain filtering seems more suited to combination with traditional CNNs, and the computations here are not sparse.
When conducting white box and black box attacks, I believe it is necessary for the adversarial model with and without FE modules to be used as attack models in the validation of the paper.
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: See weakness and questions
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### **1. The innovation point of this work is not clear compared with StoG [11].**
We highlight the innovation of our theoretical analysis compared to StoG from two key perspectives:
1. **Theoretical focus**. For the regularizer $|\epsilon \odot \nabla_x \mathcal{L}(x)|_1$ in Eq. 5, StoG focuses on **perturbation constraints during SNN signal transmission**. In contrast, we analyze the effects of **input noise** and spiking neuron parameters on this regularizer (shown in Eq. 6). This shift in focus allows us to explore how these factors theoretically constrain the regularizer, offering a different perspective on enhancing robustness.
2. **Implementation of theoretical framework**. Unlike StoG's stochastic gating factor, we introduce a **frequency encoding method to eliminate perturbation inputs**. Tab. 1 (main paper) shows our method further enhances StoG's robustness, demonstrating that our method and idea do not conflict with StoG. Moreover, while prior studies empirically explored parameters like firing threshold [12] and leak factor [31] on SNN robustness, our robust constraint framework offers a theoretical basis for these findings.
In summary, our work advances the theoretical understanding of input noise and spiking neuron parameters to SNN robustness and proposes an innovative frequency encoding method. We will update our revised version to state the contribution of the adversarial loss constraint introduced in StoG to our theoretical analysis while emphasizing the distinctions and innovations of our approach.
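As a toy numerical illustration of the regularizer $\|\epsilon \odot \nabla_x \mathcal{L}(x)\|_1$ discussed above (a minimal sketch with a made-up quadratic loss standing in for the SNN loss, purely to show how the quantity is computed):

```python
import numpy as np

def loss(x, target):
    """Toy quadratic loss standing in for L(x)."""
    return 0.5 * np.sum((x - target) ** 2)

def grad_loss(x, target):
    """Closed-form gradient of the toy loss above: grad_x L(x) = x - target."""
    return x - target

x = np.array([0.2, -0.5, 1.0])               # a tiny "image"
target = np.zeros(3)
eps = 0.1 * np.sign(np.random.randn(3))      # a perturbation of magnitude 0.1 per pixel

reg = np.sum(np.abs(eps * grad_loss(x, target)))   # ||eps ⊙ grad_x L(x)||_1
```

Because the regularizer couples the perturbation magnitude with the input gradient, shrinking either factor (which is what the frequency encoding targets on the input side) shrinks the bound.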
### **2. The cognitive motivation of FE is inconsistent with dynamic parameters proposed later.**
The concept of selective visual attention in [8] is described as "visual attention focuses on one region of the visual field at a time", closely resembling the retinal coding rather than a cognitive process, as described in lines 173-174. Inspired by this, we propose FE to capture different frequency information at different time steps in SNNs, and EL is used to better learn the correlation between information of different frequencies across time steps.
Furthermore, FE and EL methods are two contributions of our work, which can be used independently and in combination to further improve the robustness of SNNs.
### **3. Lack of performance when using the EL method alone.**
We have now included the performance of our EL alone in Tab. R2 of the rebuttal appendix. It is evident that both FE and EL effectively enhance the robustness of the original methods, with FEEL further improving robustness on this foundation. For instance, under a PGD attack, the original RAT method achieves 8.87% accuracy, while our FE increases robustness to 9.70%, EL to 11.39%, and FEEL to 12.36%. This illustrates the effectiveness of each module of our method.
### **4. What is the difference between the EL method and DLIF [C]**
Our EL and DLIF [C] optimize the leak factor to improve SNN robustness but differ in **motivation** and **implementation**.
1) **Motivation**: DLIF uses a dynamic leak factor to reduce perturbation transmission, while our approach leverages it to capture correlations across time steps via frequency encoding, thus further enhancing the learning capability of SNNs against perturbations.
2) **Implementation**: DLIF dynamically learns the leak factor at each time step but shares it across neurons within a layer. Our EL learns the optimal leak factor across time steps and for individual neurons within the same layer, leading to greater robustness, as described in lines 74-78, 212-213.
While both methods aim to enhance robustness through leak factor optimization, our EL method achieves superior results, as shown in Tab. R7. We will cite DLIF [C] in our revision to clarify the differences.
**Table R7: Performance (white-box attack) comparison with DLIF[C] (CIFAR100 with the same experimental setting in Tab.1 of [C]).**
|Method|Clean|FGSM|PGD|
|:-:|:-:|:-:|:-:|
|Vanilla+DLIF|70.79|6.95|0.08|
|Vanilla+EL|71.41|9.16|1.29|
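To make the parameterization difference concrete, the following is a minimal NumPy sketch (hypothetical layer sizes; spiking and reset dynamics omitted, so this is not the actual training code) contrasting a leak factor shared within a layer with a per-time-step, per-neuron leak factor:

```python
import numpy as np

T, n = 4, 8                        # hypothetical: time steps, neurons in one layer
lam_dlif = np.full((T, 1), 0.5)    # DLIF-style: one leak per time step, shared in the layer
lam_el = np.random.rand(T, n)      # EL-style: a leak per time step AND per neuron

def leaky_integrate(inputs, lam):
    """u[t] = lam[t] * u[t-1] + I[t]; spiking and reset are omitted for brevity."""
    u = np.zeros(inputs.shape[1])
    for t in range(inputs.shape[0]):
        u = lam[t] * u + inputs[t]   # lam[t] broadcasts over the layer's neurons
    return u

I = np.random.rand(T, n)             # input currents per time step and neuron
u_dlif = leaky_integrate(I, lam_dlif)
u_el = leaky_integrate(I, lam_el)
```

With the per-neuron parameterization, each neuron can weight its own temporal history independently, which is the extra degree of freedom EL exploits.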
### **5. How did the author train a network containing frequency domain filtering (FE)?**
We would like to clarify that our frequency masking operation is a data preprocessing step and does not participate in model training. As shown in Eq. 10 and Fig. 2a, after the frequency mask crops information from high-frequency to low-frequency at different time steps (Eq. 7, 8, 9), the frequency-domain images are converted back to spatial-domain images with varying frequency information at different time steps (Eq. 10). These images are then used for training. Therefore, our backpropagation method remains identical to standard training methods, using the surrogate gradient $\frac{\partial O}{\partial u} = \frac{1}{\gamma^2} \max \left( 0, \gamma - |u - V_{th}| \right)$ based on the BPTT rule. More training settings are detailed in lines 218, 450-454 of the main paper and in the code provided in the Supplementary Materials.
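For illustration, the surrogate gradient quoted above can be sketched in NumPy (a sketch of the formula only, not the authors' training code; $\gamma = 1$ and $V_{th} = 1$ are placeholder values):

```python
import numpy as np

def spike(u, v_th=1.0):
    """Forward pass: O = 1 if the membrane potential u reaches the threshold V_th."""
    return (u >= v_th).astype(float)

def surrogate_grad(u, v_th=1.0, gamma=1.0):
    """dO/du ~ (1/gamma^2) * max(0, gamma - |u - V_th|): a triangle centered at
    V_th, used in place of the non-differentiable Heaviside derivative in BPTT."""
    return np.maximum(0.0, gamma - np.abs(u - v_th)) / gamma ** 2

u = np.array([0.0, 0.5, 1.0, 1.5, 3.0])
o = spike(u)              # → [0, 0, 1, 1, 1]
g = surrogate_grad(u)     # peaks at u = V_th, zero outside |u - V_th| >= gamma
```

In an actual framework the surrogate would be wired into autograd (e.g. a custom backward pass), but the shape of the gradient is exactly the triangle above.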
### **6. Whether FE disrupts the low energy consumption characteristics of spiking neural networks.**
As addressed in the previous question, our FE functions as a data preprocessing method and does not participate in model training. Therefore, FE does not disrupt the low-energy performance of SNNs.
### **7. It is necessary for the adversarial model with and without FE modules to be used as attack models in the validation of the paper.**
Thanks for your suggestion. The relevant experiments are included in our main paper. Fig. 3 and Fig. 4 illustrate the performance of the vanilla model with and without FE under white-box and black-box attacks, respectively. Tab. 1 presents the performance of SOTA robust SNNs with and without FE under white-box attacks. We have now added the performance of adversarial models ($i.e.$, AT [13] and RAT [9]) with and without FE under black-box attacks in Tab. R5 of the rebuttal appendix. All experimental results confirm that FE effectively enhances the robustness of the model.
---
Rebuttal Comment 1.1:
Title: Looking forward to the exact experimental setting for the last question
Comment: Dear Reviewer zYHj:
Thank you for your detailed feedback. Regarding the last question in the Questions part, it is confusing for us to understand the exact experimental setting of “the adversarial model with and without FE modules to be used as attack models”. Do you mean using black-box attack to evaluate the performance of the model with and without FE? We have now answered the question from this understanding, the detailed answer can be found in our Rebuttal. If you feel that the answer does not meet your question, please let us know, and we will be glad to give further responses.
---
Rebuttal 2:
Title: Further Response to Reviewer zYHj
Comment: Thanks for your further comment. We address your concern in two parts:
### **1. This means that FE can also be added to the ANN-based image classifier and has little to do with the characteristics of the SNN itself.**
Since ANN-based image classifiers lack temporal characteristics and timesteps, we construct five training datasets for ANN-based methods: (1) images generated by FE using all timesteps (similar to data augmentation), and (2)–(5) images generated by FE at each of the four individual timesteps.
We have already evaluated the models' performance using these five training datasets, with the results presented below (can also be found in Tab. 2 of the main paper). These results suggest that applying FE without considering temporal characteristics is less effective.
**Table 2: Effect of different training datasets generated by FE. The attack is PGD with perturbation $\epsilon=4/255$, iterative step $\alpha=0.01$, and iterative step $k=4$. The dataset is CIFAR100 with $T = 4$, the network is VGG11.**
|Datasets|Clean|GN|FGSM|PGD|BIM|CW|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|images generated by FE using all timesteps|70.88 |69.73| 14.43 |4.33| 4.19| 6.21|
|images generated by FE at the first timestep|62.01 |60.73 |9.47| 2.28 |2.17 |4.79|
|images generated by FE at the second timestep|68.78 |67.55 |13.62 |4.74 |4.35 |6.37|
|images generated by FE at the third timestep|69.96 |69.26| 14.60 |5.38 |5.20| 6.87|
|images generated by FE at the fourth timestep|70.95 |70.39| 15.72| 5.41| 5.22| 7.45|
|FE (Ours, using the temporal characteristics of SNN )|**71.40**|**70.59**|**16.80**| **6.89** |**6.62**| **8.09**|
Please note that the models referenced in Tab. 2 are SNNs, used to ensure a fair comparison for validating frequency mask strategies, rather than to assess the effectiveness of adding FE to ANN-based classifiers. Due to the limited rebuttal time, we are unable to conduct experiments on ANN models. We will perform a fair comparison by adding FE to ANNs and will include the results and discussions in the final version.
### **2. How do you implement the adversarial attack when FE is employed?**
In our study, we implement the adversarial attack by applying **adversarial perturbations to the image domain before the FE module**, aligning with the first scenario you mentioned. We achieve the differentiable conditions of FE by:
- According to Eq. 7-10 of the main paper, the formulation of our FE is as follows:
$$\tilde{x} = \mathcal{F}^{-1} \left( \mathcal{M} \odot \mathcal{F}(x) \right),$$
where $x$ is the original image, $\tilde{x}$ is the FE-encoded image, $\mathcal{F}$ and $\mathcal{F}^{-1}$ represent the Discrete Fourier Transform (DFT) and Inverse-DFT (Eq. 7 and Eq. 10), and $\mathcal{M}$ is the frequency mask (Eq. 8 and Eq. 9).
- Both $\mathcal{F}$ and $\mathcal{F}^{-1}$ are differentiable [33] and can be directly implemented using the *torch.fft.fft2(x)* function in the PyTorch framework.
- The frequency mask operation $\mathcal{M} \odot \mathcal{F}(x)$ involves element-wise multiplication of the frequency-domain image $\mathcal{F}(x)$ with the binary 0/1 matrix $\mathcal{M}$ of the same size, which is also differentiable, as implemented in [15].
Thus, adversarial perturbations can be generated directly through these differentiable operations.
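For illustration, the pipeline $\tilde{x} = \mathcal{F}^{-1}(\mathcal{M} \odot \mathcal{F}(x))$ can be sketched with NumPy's FFT (the paper uses the equivalent differentiable `torch.fft` calls; the circular low-pass mask and the radius schedule below are simplifying assumptions, not the exact masks of Eq. 8-9):

```python
import numpy as np

def frequency_encode(x, r):
    """F^-1( M ⊙ F(x) ): keep frequency components within radius r of the
    spectrum center (a simplified circular low-pass mask for illustration)."""
    H, W = x.shape
    X = np.fft.fftshift(np.fft.fft2(x))                        # DFT, low frequencies centered
    yy, xx = np.mgrid[0:H, 0:W]
    M = (np.hypot(yy - H / 2, xx - W / 2) <= r).astype(float)  # binary 0/1 mask
    return np.fft.ifft2(np.fft.ifftshift(M * X)).real          # back to the spatial domain

img = np.random.rand(32, 32)
# A different mask radius per time step, e.g. T = 4 encoded views of one image:
encoded = [frequency_encode(img, r) for r in (16, 12, 8, 4)]
```

Every step (DFT, element-wise mask, inverse DFT) is a linear operation, so gradients flow straight through it, which is what allows white-box perturbations to be generated on the image before FE.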
We will revise the paper to incorporate the above explanation as follows (revised or newly added contents are in **bold**):
In line 193 of Section 4.2:
*“In summary, the proposed FE method, as described in Eq. 10, allows us to control the frequency mask radius $r$ at each time step, enabling the suppression of different frequency ranges. **Since the DFT ($\mathcal{F}$), IDFT ($\mathcal{F}^{-1}$) (Eq. 7 and Eq. 10) and frequency mask operation ($\mathcal{M} \odot \mathcal{F}(x)$, Eq. 8 ) are differentiable [33,15], the FE module can be directly utilized to generate adversarial perturbations. Therefore, the adversarial perturbations are applied to the image domain before FE.”***
In line 239 of Section 5.1:
*“The attack methods include adversarial attacks ($i.e.$, FGSM, PGD with random start, BIM, and CW, for both white-box and black-box attacks) and common noise attack ($i.e.$, gaussian noise, GN). **In our study, the adversarial perturbations are applied to the image domain before FE, leveraging the differentiable property of the FE module.”***
**Reference**
[15] Lirong He, Qingzhong Ai, Yuqing Lei, Lili Pan, Yazhou Ren, and Zenglin Xu. Edge enhancement improves adversarial robustness in image classification. Neurocomputing, 518:122–132, 2023.
[33] Duraisamy Sundararajan. The discrete Fourier transform: theory, algorithms and applications. World Scientific, 2001.
---
Rebuttal 3:
Comment: I truly appreciate the effort the author has put into this work. However, I find that Table 2 does not fully convey a sense of dynamics to me. Upon revisiting Table 2, it gave me an opportunity to reflect further on your statement: "This superiority stems from our frequency encoding, which simulates selective visual attention in the biological nervous system, thereby enhancing the model’s robustness more effectively." (above Table 2 in the main content).
I’m curious about how the model represents or emulates selectivity in the retina. I wanted to share some paragraphs for your reference and to hear your thoughts on this matter. From my perspective, selective attention should be more of a data-driven approach.
"A number of studies have measured the influence of selective attention on the coding of visual stimuli by single neurons (e.g., Spitzer et al., 1988; McAdams & Maunsell 1999, 2000; Reynolds & Chelazzi 2004) and populations of single neurons (Cohen & Maunsell 2009, 2010), and they have discovered that attention appears to increase the information conveyed about stimuli. Moreover, attention-driven increases in coding appear to be specific to behavioral conditions in which an animal’s perceptual sensitivity per se, rather than simple response bias, is increased (Luo & Maunsell 2015)."
This content is from your reference [8]. If you delve into [8], you’ll find that "selective attention is defined behaviorally as a relative improvement in psychophysical performance for attended versus unattended stimuli," which seems different from "closely resembling the retinal coding rather than a cognitive process" as mentioned in the authors’ rebuttal. I would gently suggest that the authors reconsider including the bio-inspired aspect if it is not fully aligned with common viewpoints.
---
Rebuttal 4:
Title: Further Response to Reviewer zYHj
Comment: We sincerely appreciate your insightful feedback, which will greatly improve our paper. While References [8] and [E] may cause confusion, as they share the same title but have different authors, we agree with your suggestion that selective attention should be more of a data-driven approach. We agree that the current version lacks a comprehensive exploration of this viewpoint, and we will remove the bio-inspired aspects that do not fully align with common viewpoints in our revised version. Thank you for providing us with a learning opportunity and potential exploration directions for future research. Thank you once again for your time and efforts.
**References**
[8] Desimone, R., & Duncan, J. (1995). Neural mechanisms of selective visual attention. Annual review of neuroscience, 18(1):193–222.
[E] Moore, T., & Zirnsak, M. (2017). Neural mechanisms of selective visual attention. Annual review of psychology, 68(1), 47-72.
---
Rebuttal Comment 4.1:
Comment: Thanks. I will increase my score to 5. Happy to see the performance improvement.
---
Rebuttal 5:
Title: Response to Reviewer zYHj
Comment: We are glad to have had such a nice discussion with you. Thanks for your insightful suggestions, which significantly help improve the quality of our work. We feel quite encouraged! | Summary: This paper presents a unified framework for SNN robustness. Based on this framework, it further proposes a frequency encoding (FE) method for SNNs to decrease input perturbations and an evolutionary membrane potential leak factor (EL) to ensure that different neurons in the network learn the optimal robustness leak factor at different time steps, thus improving the robustness of SNNs. Extensive experiments are conducted to verify the effectiveness of this method.
Strengths: 1. The authors present a unified framework for SNN robustness constraints, which provides a potential explanation for the robustness improvements achieved by previous work and inspires enhancements in the encoding method and the leak factor for SNN robustness.
2. The proposed FEEL method crops information from high-frequency to low-frequency to remove the input noise and learn the optimal robustness leak factor at different time steps. The extensive results demonstrate that the FEEL method is state-of-the-art. Both the FE and EL methods further enhance the robustness of current defense strategies.
Weaknesses: 1. The Frequency Encoding (FE) is proposed to suppress the perturbation $\varepsilon(t)$ in Eq. (6). The implementation of FE is based on the cropping operation in Eq. (9). Although such an implementation gives the benefit of $\varepsilon(t)$ suppression for $T>1$, it also brings the drawback of valid information loss. It is not clear whether the benefits outweigh the drawbacks or vice versa. Please provide more evidence or analysis to support the performance improvement by FE (as compared with direct coding) in Table 2.
2. Section 4.3 introduces the implementation of considering leak factor $\lambda$ in the first term of Eq. (6). According to the objective of Eq. (6), an intuitive approach is to minimize $\lambda$. However, the authors proposed a learnable leak factor, which seems to contradict this intuitive approach. Please clarify it.
3. There is a new attack method [1] specifically designed for SNNs which outperforms attacks designed for ANNs. I wonder how the proposed method in this paper performs under such a kind of attack.
[1] Bu T, Ding J, Hao Z, et al. Rate gradient approximation attack threats deep spiking neural networks. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023: 7896-7906.
Technical Quality: 3
Clarity: 3
Questions for Authors: please refer to the weakness
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### **1. Please provide more evidence or analysis to support the performance improvement by FE.**
We would like to show that our FE not only improves defense accuracy but also maintains clean accuracy, from both **data observation** and **additional experimental validation**.
1) **Data observation**. We visualize the frequency spectrums of CIFAR10 images alongside the added GN, FGSM, PGD, BIM, and CW noise in Fig. R1 of the rebuttal appendix. As illustrated in Fig. R1, the information of the original image is concentrated in the low-frequency region (center area of the second column), while the noise information spans from the low-frequency (center area) to high-frequency (edge area) regions (third to fifth columns). The proposed FE removes noise by progressively cropping information from high-frequency to low-frequency regions over time steps (as $r$ in Eq. 9 of the main paper gradually decreases over time). Since the information of the original image is concentrated in the low-frequency region, this method minimizes the loss of valid information from the original image.
2) **Experimental validation**. To further verify the effectiveness of FE, we compare it with an alternative strategy, Inverse-FE (IFE), which crops information from low-frequency to high-frequency over time steps. As shown in Tab. R6 below, IFE causes a significant drop in clean accuracy (64.81% vs. vanilla 92.64%). This demonstrates that a substantial amount of valid information is lost, verifying that valid information is concentrated in the low-frequency area. In contrast, FE not only effectively removes noise (21.56% vs. vanilla 15.59% under the PGD attack) but also minimizes the loss of valid information (92.26% vs. vanilla 92.64%).
**Table R6: Performance (%) of the proposed Frequency Encoding (FE) and the alternative strategy Inverse-FE (IFE). The perturbation $\epsilon=4/255$ for all attacks, and iterative step $k=4$, step size $\alpha=0.01$ for PGD. The dataset is CIFAR10 with time step $T=4$; the network is VGG11.**
|Method|Clean|GN|FGSM|PGD|BIM|CW|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Vanilla|92.64|91.28|35.47|15.59|14.95|6.92|
|IFE|64.81|64.48|12.33|4.44|4.25|4.18|
|FE|92.26|92.02|39.67|21.56|21.05|10.12|
### **2. Please clarify why not minimize $\lambda$ directly.**
We did not minimize $\lambda$ directly as it would negatively impact clean accuracy on original images. This is supported by both **theoretical analysis** and **experimental validation**.
1) **Theoretical analysis**. According to Eq. 2 of the main paper, the leak factor controls the residual membrane potential between time steps. A smaller leak factor may weaken the temporal modeling capability of the SNN, leading to a decline in network performance [A]. Considering the leak factor's dual role in original information transmission (Eq. 2) and robustness enhancement (Eq. 6), we propose an evolutionary leak factor (EL). The EL dynamically learns the optimal robustness leak factor across different time steps and neurons, which also increases the expression capability of the SNN, helping maintain clean accuracy and improving robustness.
2) **Experimental validation**. We compare EL with two alternative strategies. The first strategy sets all leak factors to 0. The second strategy adds L2 regularization to the EL to further constrain the leak factor. As shown in Tab. R1 of the rebuttal appendix, a small leak factor significantly reduces clean accuracy (vanilla 92.64% vs. FEEL ($||\lambda||_2$) 88.52% vs. FEEL ($\lambda=0.0$) 81.76%), consistent with the theoretical analysis above. Besides, a small leak factor does increase the robustness of the SNN ($e.g.$, under the PGD attack, FEEL ($\lambda=0.0$) is 63.80% and FEEL ($||\lambda||_2$) is 29.98%, compared to vanilla 15.59%). This also aligns with the proposed robustness framework (Eq. 6) by demonstrating that controlling the leak factor improves robustness. Our EL method ensures improvements in both robustness and original accuracy ($e.g.$, the PGD defense accuracy of FEEL (learnable $\lambda$) is 30.27%, compared to 15.59% for vanilla, and the clean accuracy of FEEL (learnable $\lambda$) is 92.73%, compared to 92.64% for vanilla).
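To make the leak factor's dual role tangible, here is a minimal pure-Python LIF sketch (a hypothetical simplification of the Eq. 2 dynamics with a soft reset; the per-time-step list `lam` stands in for the evolutionary leak factor):

```python
def lif_forward(inputs, lam, v_th=1.0):
    """Leaky integrate-and-fire over T time steps.
    lam[t] scales the membrane potential carried over from the
    previous step: a smaller lam shrinks the residual potential
    (and any perturbation accumulated in it), but also weakens
    temporal modeling. Illustrative simplification only."""
    v = 0.0
    spikes = []
    for t in range(len(inputs)):
        v = lam[t] * v + inputs[t]       # leak the residual potential, then integrate input
        s = 1.0 if v >= v_th else 0.0    # fire when the threshold is crossed
        v = v - s * v_th                 # soft reset after a spike
        spikes.append(s)
    return spikes

x = [0.6, 0.6, 0.6, 0.6]
no_leak = lif_forward(x, lam=[1.0] * 4)   # lambda=1: full residual carried over, neuron fires
full_leak = lif_forward(x, lam=[0.0] * 4) # lambda=0: residual discarded, neuron stays silent
```

With $\lambda=1$ the sub-threshold input accumulates and produces spikes; with $\lambda=0$ the same input never crosses the threshold, illustrating why shrinking $\lambda$ suppresses carried-over perturbations but can also hurt clean accuracy.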
### **3. The performance of the proposed method under the RGA attack [B].**
We expand the results in Tab. 1 of the main paper with the RGA strategy [B]. As shown in Tab. R4 of the rebuttal appendix, under a PGD attack with RGA, AT+FEEL improves accuracy to 10.83%, compared to 8.57% for AT alone. Similar improvements are observed with other attacks. These results confirm that our methods (FE and FEEL) achieve state-of-the-art defense accuracy and enhance the robustness of existing approaches, even against SNN-specific attacks. This is because our FEEL method, which introduces frequency encoding and learnable leak factors, increases the complexity of spiking neurons, effectively countering attacks.
---
Rebuttal Comment 1.1:
Title: Thanks for your response
Comment: The author has adequately addressed my concerns, and I recommend incorporating these results in the final version.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer yJ85
Comment: Thanks for your acknowledgment of our work. We will incorporate the analysis and results into the final version of our paper. | Summary: This paper aims to enhance the robustness of SNNs. The authors first present a unified framework for SNN robustness. They propose a frequency encoding method that filters the noise in the frequency domain. Based on that, they also propose a trainable leaky parameter to better constrain robustness. Experimental results on various datasets validate that both the FE and EL methods can effectively improve the robustness of SNNs to different noises.
Strengths: The frequency encoding method is novel. The FE-SNN is able to filter out noise by processing information in the frequency domain.
The authors conducted very comprehensive experiments to demonstrate the effectiveness of the proposed method. The experiments results demonstrate that FEEL can be combined with adversarial training or other robustness enhancement algorithms to obtain more robust SNNs.
Weaknesses: The theoretical framework is not rigorous enough. It is not obvious from Eq. 6 that a smaller term 1 will result in less perturbation in the output, since the change of term 1 may affect the other terms. The authors need a more solid theory to support the FE and EL methods.
The robustness improvement is not significant. Sometimes the robustness performance of FEEL is even worse than that of FE.
Technical Quality: 1
Clarity: 2
Questions for Authors: See in weaknesses
Confidence: 5
Soundness: 1
Presentation: 2
Contribution: 2
Limitations: Authors are encouraged to introduce the additional training/inference cost of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Title: Explanation for the first question
Comment: Sorry for the ambiguous comment of my first question in Weaknesses. My question is how the authors conclude that a smaller leaky factor will increase robustness based on Eq. 6? It seems that if you directly reduce the leaky factor, the gradient term will also change and it may also affect the whole equation. Could the authors explain more about that?
---
Rebuttal 2:
Rebuttal: ### **1. Could the authors explain more about why a smaller leak factor will increase robustness based on Eq. 6, since the change of term 1 may affect other terms in Eq. 6?**
We would like to discuss how the leak factor affects other terms in Eq. 6 in two cases: 1) leak factor $\lambda$ as a hyperparameter predefined before neural network training and 2) leak factor $\lambda$ as a learnable parameter during neural network training (the proposed implementation).
1) CASE 1: $\lambda$ is a **fixed value** during neural network training (similar to $\epsilon$ in term 1). Hence, it **will not affect other terms** in Eq. 6. To validate the correctness of our theoretical framework ($i.e.$, a smaller term 1 results in less perturbation in the output), we conduct additional experiments, training different neural networks with **different fixed** $\lambda$ (keeping the remaining settings the same as those reported in the experimental settings of the main paper). As shown in Tab. R1 (rebuttal appendix), a smaller $\lambda$ results in a more robust model, indicating that **a smaller term 1 results in less output perturbation**. As can also be observed from Tab. R1 (rebuttal appendix), a smaller $\lambda$ could bring performance degradation for clean inputs, $i.e.$, from 92.26% at $\lambda=1.0$ to 81.76% at $\lambda=0.0$. Therefore, we implement the leak factor as a learnable parameter to mitigate performance degradation.
2) CASE 2: $\lambda$ is a **learnable parameter** updated within neural network training. It is not easy to directly analyze the influence of $\lambda$ on the other terms in Eq. 6 due to their complex relationship. Therefore, we analyze the influence by **validating whether term 1 for robustness improvement affects term 2 or term 3's effectiveness** for the same goal. To be specific, as analyzed in line 160, RAT [9] (weights regularization) essentially minimizes term 2. We implement another comparison method by adding a gradient constraint via the L2 norm (gradient penalty regularization (GP), with the same implementation as in [D]) to minimize term 3. We compare these two methods with two variants of our method, implemented by additionally optimizing $\lambda$ for RAT and GP (keeping the remaining parts unchanged), represented as RAT+EL and GP+EL, respectively. As shown in Tab. 1 (main paper) and Tab. R1 and Tab. R2 (rebuttal appendix), RAT+EL and GP+EL significantly improve the robustness of RAT and GP across different attack types and datasets, respectively. These results show that leveraging term 1 for robustness improvement does not interfere with term 2 or term 3's effectiveness for the same goal, indicating that the leak factor does not affect the other terms in Eq. 6.
In summary, results in both cases indicate that the leak factor does not affect other terms in Eq. 6 on SNN robustness. We would like to discuss this with you in the reviewer-author discussion period if you have further questions.
### **2. The robustness improvement is not significant. Sometimes robustness performance of FEEL is even worse than FE.**
We would like to show that our robustness improvement is consistent and significant in two aspects. Kindly note that FE is essentially FEEL with $\lambda$ fixed to 1.
1) Consistent improvement over FE by an alternative implementation for **all attack types**. As discussed in our response to **Question 1**, an alternative implementation is to strictly adhere to the theoretical framework of Eq. 6, $i.e.$, set the leak factor to 0. As can be observed from Tab. R1 (rebuttal appendix), FEEL ($\lambda=0.0$) can achieve consistent robustness improvement over FE. This observation validates the effectiveness of our theoretical framework.
2) Significant improvement over FE and SOTA methods by the proposed implementation for **average performance across different attack types** ($i.e.$, FGSM, PGD, BIM, and CW). As can be observed from Fig. 3 and Fig. 4 (main paper), FEEL significantly outperforms FE (black: 6.8% over 4.2%, white: 10.8% over 4.9%, CIFAR10, VGG11, T=4). Furthermore, as described in Tab. 1 (main paper), the average improvement of StoG+FEEL (5.8%) and StoG+FE (4.6%) are **1.6 and 1.2 times larger** than the improvement achieved by the SOTA method StoG [11] over vanilla method (3.6%).
### **3. The additional training/inference cost of the proposed method.**
We would like to introduce the training/inference cost of our method in terms of 1) Training/inference time and 2) Convergence speed comparison and analysis.
1) **Training/inference time**. Tab. R3 (rebuttal appendix) presents the training/inference time of the proposed method and other SOTA methods, which demonstrates that the training time added by our method is less than other methods, under the same experimental settings (detailed in lines 218, 450-454 in the main paper). Compared to StoG [11], which optimizes additional stochastic gating factors for SNN robustness, our method demonstrates more efficient training times.
Particularly, our method has a significant advantage over adversarial training methods such as AT [13] and RAT [9], since our method does not need additional adversarial data for training.
2) **Convergence speed comparison and analysis**. As shown in Fig. R2 (rebuttal appendix), incorporating our FEEL module results in faster convergence of the training loss. This may be due to the fact that our EL increases the learning ability of the network, making it converge faster.
In summary, these observations further confirm the superiority of our approach in terms of training and inference costs.
---
Rebuttal 3:
Title: Thanks for your precious time and we would like to see if there are any further concerns and comments
Comment: Dear Reviewer uyY9:
Following extensive communication and discussion with the other two reviewers, they **acknowledged the contribution of our work and subsequently raised their scores to positive**. In light of this and the detailed responses provided in our rebuttal, we hope our feedback has adequately addressed your concerns. We sincerely request your reconsideration of our manuscript.
If you have any **further questions or comments**, we would **be pleased to provide additional responses**. We understand the approaching deadline for the author-reviewer discussion and are afraid you may have further comments that we cannot respond to in time due to the closing of the system. However, we will try our best to discuss or address any potential issues you may raise in the final version of our paper.
Below, we concisely summarize our responses to your concerns to facilitate a quick review.
- **Theoretical Framework:** Your first concern relates to the rationality of our theoretical framework. To address this, we provide two cases: (1) treating the leak factor as a hyperparameter and (2) treating it as a learnable parameter. These cases illustrate that the leak factor in term 1 does not affect other terms within the framework, ensuring model robustness.
- **Method Performance:** Your second concern pertains to the performance of our method. We present an alternative implementation based on theoretical analysis and the average performance of FE and FEEL across various attack types, demonstrating that our robustness improvements are consistent and significant.
- **Training/Inference Costs:** Your third concern involves the costs associated with training and inference. We compare (1) training/inference time and (2) convergence speed to confirm the superiority of our approach in these aspects.
We look forward to hearing from you soon.
Best regards,
Authors of paper 3953
---
Rebuttal 4:
Title: Thank you for your time and we hope that our response helps for your assessment of our work
Comment: Dear Reviewer uyY9:
We feel incredibly fortunate to have received such valuable comments from experts like yourself. You and the other two reviewers all reviewed with the highest confidence scores and provided very professional suggestions to further improve the quality of our paper.
Following thorough discussions with the other two reviewers, we have gained significant insights, and both other reviewers raised their initial ratings, **from 6 to 7 and from 4 to 5**, respectively. This positive feedback is really encouraging.
We are eager to engage with you and learn from your perspectives. We will **await your reply until the deadline** and will respond promptly to any additional concerns you may have.
We look forward to your feedback.
Best regards,
The Authors of Paper 3953
---
Rebuttal 5:
Title: Looking forward to your feedback in the last three hours
Comment: Dear Reviewer uyY9:
Thanks for your constructive suggestions on our work. Given that **the discussion period is closing in the next 3 hours**, if you have any further questions, please feel free to reach out to us. We will remain attentive to any new concerns and will respond promptly.
In the event that we do not receive any feedback, we assure you that all rebuttal content will be incorporated into the final version of our paper, and we will release the relevant code to ensure the reproducibility of our experiments.
Once again, we sincerely appreciate your time and contribution to improving our paper.
We look forward to your feedback.
Best regards,
The Authors of Paper 3953 | null | null | Rebuttal 1:
Rebuttal: We appreciate all the reviewers for the insightful feedback. We are encouraged that they recognize the significance and urgency of our motivation [Reviewer zYHj], the novelty [Reviewer uyY9] and effectiveness [Reviewer zYHj] of our method, and the comprehensiveness [Reviewer uyY9] and extensiveness [Reviewer yJ85] of our experiments.
In response to each reviewer's comments, we have provided point-by-point replies in the corresponding sections. Figures R1-R2 and Tables R1-R5 referenced in our rebuttal are included in the **newly uploaded Rebuttal Appendix PDF**. The references are listed below.
We will add all additional discussions and results to the final version of our paper and release relevant codes for the reproducibility of all experiments.
Welcome further discussion during the reviewer-author discussion period if there are any additional questions.
**References**
[8] Desimone, R., & Duncan, J. (1995). Neural mechanisms of selective visual attention. Annual review of neuroscience, 18(1):193–222.
[9] Ding, J., Bu, T., Yu, Z., Huang, T., & Liu, J. (2022). Snn-rat: Robustness-enhanced spiking neural network through regularized adversarial training. Advances in Neural Information Processing Systems, 35:24780–24793.
[11] Ding, J., Yu, Z., Huang, T., & Liu, J. K. (2024) Enhancing the robustness of spiking neural networks with stochastic gating mechanisms. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 492–502.
[12] El-Allami, R., Marchisio, A., Shafique, M., & Alouani, I. (2021, February). Securing deep spiking neural networks against adversarial attacks through inherent structural parameters. In 2021 Design, Automation & Test in Europe Conference & Exhibition (DATE) (pp. 774-779). IEEE.
[13] Goodfellow, I. J., Shlens, J., & Szegedy, C. (2014). Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
[31] Sharmin, S., Rathi, N., Panda, P., & Roy, K. (2020). Inherent adversarial robustness of deep spiking neural networks: Effects of discrete input encoding and non-linear activations. In European Conference on Computer Vision, pages 399–414. Springer.
[A] Rathi, N., & Roy, K. (2021). Diet-snn: A low-latency spiking neural network with direct input encoding and leakage and threshold optimization. IEEE Transactions on Neural Networks and Learning Systems, 34(6), 3174-3182.
[B] Bu, T., Ding, J., Hao, Z., & Yu, Z. (2023). Rate gradient approximation attack threats deep spiking neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 7896-7906).
[C] Ding, J., Pan, Z., Liu, Y., Yu, Z., & Huang, T. (2024). Robust Stable Spiking Neural Networks. In International Conference on Machine Learning.
[D] Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V., & Courville, A. C. (2017). Improved training of wasserstein gans. Advances in neural information processing systems, 30.
Pdf: /pdf/a0239af97e10d36e4edc3f79d1aa963eb73887cc.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
QuadMamba: Learning Quadtree-based Selective Scan for Visual State Space Model | Accept (poster) | Summary: This paper presents QuadMamba, a novel Mamba architecture for visual tasks such as image classification and dense predictions. Unlike the classic Vision Mamba, which splits 2D visual data using fixed windows, the authors introduce learnable windows by using a lightweight module that predicts 2D locality more informatively, allowing for the ignoring of irrelevant windows and further splitting of the most informative windows into sub-windows, capturing more fine-grained information. This coarse-to-fine fashion is made possible by the new implementation of a splitting operator with Hadamard product and element-wise summation, making the pipeline fully differentiable. The authors experiment on the classic benchmarks for image classification (ImageNet), object detection (COCO), and semantic segmentation (ADE20K), showing that the method achieves state-of-the-art results.
Strengths: - First, the paper is very well written and most of the figures are done very well so that it is easy to fully understand the story of paper.
- I also like the idea of coarse-to-fine scanning, to capture more relevant information in each layer of QuadMamba. It is also worth noting that, as the authors mentioned, direct sampling from 2D visual data based on index is not differentiable, but the authors have proposed a solution to overcome it, which I consider a strong contribution.
- The experiments show that QuadMamba consistently outperforms classic CNNs and Vision Transformers on popular tasks (image classification, object detection, image segmentation).
Weaknesses: I do not see any major weakness in this paper but only have a few questions for better understanding the paper and improving its clarity:
- The lightweight module that predicts 2D locality and informative windows: is it shared across layers or specific to each layer? It seems this module is shared across layers, but I feel it needs to be different, as each layer captures different information.
- As the authors mentioned in the limitations, for remote sensing images it might be helpful to have more than two levels of partition in QuadMamba. However, I feel this experiment can still be conducted with the current architecture. In this case, what can happen is that the lightweight module will split all windows into sub-quadrants (since all of them are mostly relevant for predicting the output). Did the authors observe this extreme case, where all quadrants are split into sub-quadrants, with the current architecture?
Technical Quality: 4
Clarity: 4
Questions for Authors: As mentioned in the weaknesses section, I have only two questions, related to the lightweight module and the extreme case of splitting all quadrants into sub-quadrants in each layer. Besides, it would be great if the authors could provide some experiments with more than two levels of partition, as mentioned in the conclusion; for example, using the lite model on semantic segmentation tasks should be enough. It would be very helpful to get some insights for possible future works.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Yes, the authors explicit mentioned the limitation which is that the partition with more than two levels is yet to be explored.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 9
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Q1.Details about the lightweight prediction module
Thanks for your valuable advice. We will depict the prediction module more clearly in the revised manuscript. In our design, each QuadVSS block has its own prediction module that determines the informative regions of the current layer. The potential reason is that each layer needs to attend to different regions as the feature resolution and network depth change, given that features of different depths and resolutions are known to have different response patterns in terms of regions and context. The shift scheme also helps the model select the informative regions more flexibly, which can enhance the feature representation quality.
### Q2.The effects of more window-level partitions
Thanks for your thoughtful question. The downstream tasks (detection and segmentation) need the pre-training weights from ImageNet classification. Our two-level window partition strategy mainly considers the input image resolution (224x224) of the image classification task. The input image resolution in dense prediction tasks is generally higher than that in the image classification task. Thus, our module design may not be optimal for downstream dense prediction tasks involving very high resolutions. We will further improve this window partition scheme in future work. A similar phenomenon is observed in Table 4 of the main text: a moderate local window size (14x14) performs better than overly small (2x2) and overly large (28x28) window sizes.
Moreover, we conduct a simple experiment in semantic segmentation to explore the proposed question. We split the image into sub-quadrants for all windows using fixed local window sizes, increasing the size four-fold between rows of the table below. It is worth noting that semantic segmentation results are highly related to the pre-training weights from ImageNet classification. Since we do not train each model variant on ImageNet classification, the results may not truly reflect the effects of window size in the segmentation experiment. In the table below, the 8x8 window setting outperforms the 1x1 setting. This indicates that more window partitions can improve segmentation performance for high-resolution inputs. It is hard to find the optimal local window size for a specific downstream task. The quadtree variant without pre-training achieves results between those of the 8x8 and 32x32 settings. This indicates that the two-level quadtree window partition setting may not be flexible enough to handle high-resolution downstream tasks, because we designed our hyper-parameters mainly with the image classification task (lower image resolution) in mind. Thus, we will devote future work to designing more flexible window partition schemes that handle various downstream tasks.
| Model | Pre-training| Local Window | mIoU |
|:------|:-------:|:-------:|------:|
| Tiny (B2) | Yes|Quadtree | 44.3 |
| Tiny (B2) | None|Quadtree | 38.0 |
| Tiny (B2) | None|w/o (1x1) | 36.8 |
| Tiny (B2)| None|8x8 | 38. 2|
| Tiny (B2)| None|32x32 | 37.9 |
| Tiny (B2)| None|128x128 | 36.9 | | Summary: This paper proposes a vision Mamba backbone for various vision tasks. It aims to adapt the recently popular Mamba model, which originated in the language domain, to vision tasks. The authors propose a learnable quad-tree partition strategy that can adaptively generate multi-scale visual sequences with spatial priors for the Mamba model. The authors also make contributions on differentiable training and shifted token interactions. Experimental results show the effectiveness of the proposed vision Mamba on image classification, object detection, and segmentation, and it shows potential to be applied in various vision Mamba models. Ablation studies are also extensively conducted.
Strengths: - Strong Motivation. This paper is well-motivated and important. Effectively adapting the Mamba model to vision tasks has gained wide attention recently. Multi-grained image information and 2D priors are essential to building a vision backbone, which has led to a strong motivation for this paper. Treating the 2D image as token sequences in the language domain needs exploration and adaptation.
- Novel \& applicable method. The quadtree-based window partition is novel in vision Mamba. The learnable module part is lightweight and conceptually simple. The quadtree search technique is seen in other tasks, such as the point cloud domain. This paper only learns to partition the semantic-rich content with a handcrafted partition strategy, which avoids developing a search algorithm that is too complex. The other two points, the differentiable trick and shifted scheme, are also interesting. It has the potential to be applied to constructing token sequences for other vision mamba models.
- Good experimental results. The method proposed is tested in image classification, detection, and segmentation tasks, which proves its effectiveness over other vision mamba models. The ablation studies and visualization results also give more explanations about the quadtree-based sequence construction.
- Well-written presentation. The writing is clear and precise. In general, the presentation of this work is easy to follow.
The experiments are well performed and the presented method compares favorably to the considered baselines.
Weaknesses: I have a number of suggestions and questions that may help to further improve the paper:
- It would be helpful to demonstrate the impact of the hyper-parameters in the model design. For example, is it helpful to partition more window levels?
- More illustrations and analysis are needed for the learnable parameters. It can show the relationship between model cost and performance gain to other researchers/readers.
- It would be useful to give the details about the training settings and compare them to other vision mamba works.
Technical Quality: 3
Clarity: 3
Questions for Authors: I have listed some questions in the previous section.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Q1.More illustrations of hyper-parameters
Thanks for your valuable advice. We will add more analysis on the hyper-parameters. The impacts of the increased number of window partition levels are explored in Table 5 of the main text. We examine the choice of partition resolutions in the bi-level quadtree-based partition strategy. Experimentally we deduce the optimal resolutions to be {1/2, 1/4} for the coarse- and fine-level window partition, respectively. It is worth noting that the design space is handcrafted and may be extended to more levels. We will explore a more flexible scheme in the future.
### Q2.More analysis of learnable parameters
Thanks for your insightful opinion. We will include more analysis on the learnable parameters in the revised manuscript. To show the relationship between model complexity and performance, we plot the performance against flops for different methods and models in Figure 2 of the attached PDF file. We also conduct an ablation experiment on the local and global context embedding for the prediction module. The method of combining both the local and global contexts outperforms the one using only the global context vector.
| Variant| Embedding | Top-1 Acc.(%) |
|:------|:-------:|------:|
| A | Global | 74.0 |
| B | Global+Local| 74.2 |
### Q3.More details about training settings
We will add more illustrations of training details in the supplementary materials. To compare fairly with previous vision Mamba methods, our training settings strictly align with VMamba, whose training configurations were inherited from Swin Transformer. The training details for image classification are as follows: QuadMamba models are trained from scratch for 300 epochs, with a 20-epoch warm-up period, using a batch size of 1024. The training process utilizes the AdamW optimizer with betas set to (0.9, 0.999), a momentum of 0.9, an initial learning rate of $1\times 10^{-3}$, a weight decay of 0.05, and a cosine decay learning rate scheduler. Additional techniques such as label smoothing (0.1) and EMA (decay ratio of 0.9999) are also applied. No other training techniques are employed.
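As a concrete sketch of the learning-rate schedule described above (300 epochs, a 20-epoch warm-up, initial LR of 1e-3, cosine decay), the following could reproduce the schedule's shape; the linear warm-up start value and the zero LR floor are our own assumptions, not stated in the rebuttal:

```python
import math

# Hedged sketch of a cosine-decay LR schedule with linear warm-up,
# matching the hyper-parameters quoted above. The warm-up starting
# from 0 and the final LR of 0 are illustrative assumptions.
BASE_LR = 1e-3
TOTAL_EPOCHS = 300
WARMUP_EPOCHS = 20
MIN_LR = 0.0  # assumed floor

def lr_at(epoch: float) -> float:
    """Learning rate at a (possibly fractional) epoch."""
    if epoch < WARMUP_EPOCHS:
        # Linear warm-up from 0 up to BASE_LR
        return BASE_LR * epoch / WARMUP_EPOCHS
    # Cosine decay from BASE_LR down to MIN_LR over the remaining epochs
    progress = (epoch - WARMUP_EPOCHS) / (TOTAL_EPOCHS - WARMUP_EPOCHS)
    return MIN_LR + 0.5 * (BASE_LR - MIN_LR) * (1 + math.cos(math.pi * progress))
```

The schedule peaks at the end of warm-up (epoch 20) and decays smoothly to the floor by epoch 300.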
For object detection and semantic segmentation, our method also strictly follows VMamba and Swin Transformer. The training details are as follows: Object detection and instance segmentation experiments are conducted on COCO 2017, which contains 118K training, 5K validation and 20K test-dev images. An ablation study is performed using the validation set, and a system-level comparison is reported on test-dev. For the ablation study, we consider four typical object detection frameworks: Cascade Mask R-CNN, ATSS, RepPoints v2, and Sparse RCNN in mmdetection. We utilize the same settings as in previous works: multi-scale training (resizing the input such that the shorter side is between 480 and 800 while the longer side is at most 1333), AdamW optimizer (initial learning rate of 0.0001, weight decay of 0.05, and batch size of 16), and $3 \times$ schedule (36 epochs).
---
Rebuttal Comment 1.1:
Title: keep original score
Comment: All my concerns have been addressed. Thus, I keep my original score as strong accept. | Summary: The authors propose a technique to adapt Mamba to vision tasks. They propose a novel quad-tree based approach, instead of flattening image tokens in a raster-scan order, to avoid losing local dependencies. They evaluate on three vision tasks: object recognition, object detection, and instance & semantic segmentation.
Strengths: - Interesting novelty and contribution designed specifically for Mamba type of models adapted to vision tasks.
- Practically beneficial algorithm on the efficiency side, where they show lower FLOPs and parameters with better or on-par performance relative to the state of the art.
Weaknesses: - It presents a tradeoff, though, and it is not clear which is more favourable when compared to VMamba with the corresponding T/S/B variants in Table 1. Comparing VMamba-T (88.2) with QuadMamba-T (78.2), the latter is much lower; only at the base variant do they become on par, and then with reduced FLOPs/parameters on the QuadMamba side. On the lighter-weight variants the performance is not on par; in fact it is lower by around 10%.
- What is the reported inference time for Table 1, in addition to the FLOPs?
Technical Quality: 3
Clarity: 3
Questions for Authors: - In Table 2 it is not clear what the results are for the S variants of LocalVMamba and VMamba, for a fair comparison to QuadMamba-S.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Q1.Clear and fair model comparisons
Thanks for your valuable advice. We regret that our Li/T/S/B naming system may confuse readers compared with other methods. To clearly show our method's advantages, we rename the QuadMamba model variants (Lite -> B1, Tiny -> B2, Small -> B3, and Base -> B4). Our B1 and B2 variants are compared with efficient models, such as lite CNN models and efficient ViT models. Our B3 and B4 maintain similar model sizes as the tiny/small variants in transformer and vision Mamba backbone methods. Thus, our B1/B2 should be compared with those under 10M model parameters. Our B3 should be compared to tiny variants whose model parameters are around 30M, and our B4 to small variants around 50M. For instance, VMamba-T (9.1 GFLOPs) does not correspond to QuadMamba-S (5.5 GFLOPs) but QuadMamba-B (9.3 GFLOPs) instead. To make the comparison clearer, we highlight several comparisons under similar model sizes to show the advantages of the QuadMamba. Our model variants (Lite/Tiny) demonstrate clear superiority on extreme light model levels (under 10M). Moreover, our small and base models achieve comparable and better results compared to methods of similar model sizes.
The detailed plots and comparisons are found in the attached PDF file. We highlight several comparisons under fair conditions to show the advantages of QuadMamba:
| Model | #Params.(M) | Top1 Acc|
|:------:|:------:|:------:|
|Vim-Ti| 7| 76.1 |
|LocalVim-Ti| 8| 76.2 |
|PVT| 13.2| 75.1 |
|QuadMamba-B1 (Lite)| 5.4| 74.2 |
|QuadMamba-B2 (Tiny)| 10 | 78.2 |
|Vim-S| 22| 80.5 |
|LocalVim-S| 28| 81.2 |
|QuadMamba-B3 (Small)| 31 | 82.4 |
### Q2.Reported inference time
In Table 9 of the supplementary materials, we benchmark the throughput of the QuadMamba model variants on an A800 GPU platform. The results are also shown below. Compared to the vanilla VMamba model, our QuadMamba model adds negligible inference latency. Moreover, the throughput of our lite and tiny models is much higher than that of the efficient CNN and Transformer models, which implies better inference efficiency.
| Model|Renamed| #Params.(M) | Flops(G) | Throughput(img/s)|
|:------:|:-------:|:------:|:------:|:------:|
|QuadMamba-Li|B1| 5.4 | 0.8 | 1754 |
|QuadMamba-T|B2| 10.3 | 2.1 | 1586 |
|QuadMamba-S|B3| 31.2 | 5.5 | 1252 |
|QuadMamba-B|B4| 50.6 | 9.3 | 582 | | Summary: The paper introduces QuadMamba, an enhancement of vision State Space Model (SSM) architectures. At its core is a learnable QuadVSS network block that processes the input image patches at two different resolutions. For every 2x2 coarse window with 4 image patches, the method adaptively learns to process one of the 4 patches at the finer 2x resolution using the differentiable Gumbel-softmax formulation. These blocks are hierarchically stacked similarly to Swin Transformer, but with a window shifting scheme in two different directions, which is a better fit for SSMs.
The method is demonstrated to obtain good results relative to comparably sized transformer and CNN architectures on Imagenet classification, COCO object detection / instance segmentation, and ADE20k semantic segmentation.
Strengths: - Intuitive idea to learn how to pick an area that requires higher resolution processing and pack it into SSM via Gumbel Softmax. Solid although not particularly novel stacking of QuadVSS blocks into a SwinTransformer-like network architecture.
- Competitive results relative to ViT and CNN, and seems to improve a little bit over other Mamba methods, although the gains there seem quite incremental.
- Ablation over multiple network parameter decisions.
Weaknesses: # Significance
The contribution seems a bit incremental. The overall idea of image traversals into several windows to enforce better locality was already explored before in LocalMamba. The gains in the experimental section over some of the other Mamba network variants (EfficientVMamba, VMamba, Swin-S etc) are not too large. The idea that we do not have to process all areas in high resolution but only 1/4 of them does seem useful, although does not seem strictly limited to SSMs -- e.g. would ViT methods also benefit?
# Experimental results
- It seems that the natural baseline to this method is to compare a network made of VSS modules, as opposed to QuadVSS (the main novelty). Such a comparison was done in Fig 5, however it is not too clear how rigorous it is. It would be helpful to compare more directly VSS / QuadVSS equivalents with same FLOPS, for several different FLOPS thresholds. This does not seem to have been done - it would validate more strongly the fact that allocating parameters selectively to higher resolutions actually helps (as opposed to using the baseline VSS module pyramid).
- Unclear what the training overhead of a QuadVSS block is compared to a VSS block (in terms of memory, compute)?
- As opposed to just a table, it would be helpful to have plots with flops vs quality as axes, with a separate curve in that graph for each model family. Such a plot can make the comparison more obvious, given that different methods have somewhat different amounts of flops.
- In Sec 4.3, it's unclear why for the Tiny model, we stack more blocks in the second stage, while for Small and Base we stack more in the third stage. Any particular reason? Third stage is more standard VSS modules, as opposed to your QuadVSS innovation. Also later in Table 6 yet different stackings are best (2,4,6,2). It's hard to discern the logic in all these choices.
# Clarity and questions
Some details of the approach were not particularly clear to me.
- L175 Do you always pick the same fine-grained patch for all 2x2 windows, or do they vary by window? Not particularly clear from the exposition/notation.
- Also, it appears that 7 patches are picked for each region as per L185-187. Don't you want to have 8 patches (powers of 2 are usually more efficient hardware-wise?)
- The softmax in Eq 7, what labels do you train it on? Is it the final labels? Right now it appears that this part is trained first, and kept fixed.
- (minor) In Eq 6, it seems 'v_local' is aggregated across the whole image, shouldn't it be called 'global' instead?
- (minor) L186: where is 'local adjacency score' defined? This is the first mention of the term.
- (minor) Fig 5: S-QuadVSS is not defined, I assume it's the opposite direction shift, but helps to be explicit.
# Minor Nits and Typos
159: infOrmative
Table 1: VMamaba
606: Uperhead [62] (I believe it should be UperNet)
Technical Quality: 3
Clarity: 3
Questions for Authors: - Is your quad idea limited to SSM or would it also work for Vit? Can one learn to downsample specific patches in a Vit?
- Do you have comparisons between networks made of QuadVSS/VSS or only VSS for several same FLOPS budgets? Do you have comparison of training overhead (compute/memory) for networks made of QuadVSS/VSS or only VSS baseline?
- L175 Do you always pick the same fine-grained patch for all 2x2 windows, or do they vary by window? Not particularly clear from the exposition/notation.
- Also, it appears that 7 patches are picked for each region as per L185-187. Don't you want to have 8 patches (powers of 2 are usually more efficient hardware-wise?)
- The softmax in Eq 7, what labels do you train it on? Is it the final labels? Right now it appears that this part is trained first, and kept fixed.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Q1.Significance
**Q1A1 Comparing to LocalMamba:** How to effectively preserve 2D spatial dependencies is an important challenge in adapting sequence models to the vision domain. Though it has been partially explored by previous works such as PlainMamba [1] and LocalMamba [2], our proposed QuadVSS is, non-trivially, a **data-adaptive** and **lightweight** design for the locality module.
- Data-adaptive: LocalMamba opts for a differentiable neural architecture search during training to find the optimal locality strategy for each layer. Once trained, the architecture of LocalMamba is fixed, and it uses the same locality strategy for all input images. In contrast, our QuadMamba has a learnable, data-dependent locality strategy, which means it can dynamically and adaptively generate the optimal scanning sequences for different input data and various downstream tasks.
- Lightweight: LocalMamba requires handcrafting the network search space and extra optimization losses, which brings much more complexity during training. In contrast, our method adds minimal complexity in terms of training and optimization, as the quadtree-based scanning module is lightweight.
**Q1A2 Gains and improvement to existing Mamba:** We believe this is a misunderstanding, and the gain of our method over existing ones is actually significant. We apologize for the possible confusion in Table 1's main results, which is due to misalignments between how our model variants and others (like VMamba) are named. For instance, VMamba-T (9.1 GFLOPs) does not correspond to QuadMamba-S (5.5 GFLOPs) but to QuadMamba-B (9.3 GFLOPs) instead. To make the comparison clearer, we highlight several comparisons under similar model sizes to show the advantages of QuadMamba. Our model variants (Lite/Tiny) show clear superiority at extremely lightweight model sizes (under 10M). Moreover, our small and base models achieve comparable or better results compared to methods of similar model sizes. It is also worth noting that our way of bringing locality into Mamba is generalizable, practical, and easy to implement compared to other, more complex methods. More details are found in the attached PDF.
**Q1A3 Applicable to ViT architecture:** Our quadtree-based window partition module and sequence construction strategy are specially designed for recent vision Mamba models. The idea of our QuadMamba originated from the coarse-to-fine feature representation philosophy found in much prior work, such as InceptionNet [3], Multiscale Transformer [4], Focal Transformer [5], and Quad-attention Transformer [6]. However, the causal sequence modeling scheme in the Mamba model is completely different from the non-causal attention scheme in vision transformers. The constructed sequence for Mamba has to causally scan each token of the input, and the length of the token sequence has to be kept the same for multi-layer feature processing. This makes it difficult to preserve spatial adjacency in Mamba. In contrast, due to the flexibility of the attention scheme, the number of output tokens can easily be maintained as long as the query tokens remain unchanged. Thus, our method is highly customized for Mamba.
### Q2.Experimental results
**Q2A1 More comparisons with the baseline:** To demonstrate the effectiveness of the proposed QuadVSS block, we will add additional ablation studies on different model sizes. The table below shows the improvement in tiny model levels/FLOPS thresholds.
|Variant|Block|Params.(M)|Top-1 Acc.|
|:------|:-------:|:------:|:------:|
|Mamba-T|(2,6,2,2) |8.5 |76.9|
|QuadMamba-T|(2,6,2,2)|10.2 |78.2|
**Q2A2 Training overheads:** In the attached PDF files, we plot the GPU memory vs. Batch size and training curve. Our QuadMamba has an affordable GPU memory overhead compared to the baseline. We find that our method does not result in any difficulty in training convergence.
**Q2A3 Plots with Flops vs. Performance:** We plot the detailed comparison of Flops and Performance in the attached PDF file. To clearly show the advantages of our method, we rename the QuadMamba model variants (Lite -> B1, Tiny -> B2, Small -> B3, and Base -> B4).
**Q2A4 Block numbers in different stages:** According to our observations, QuadVSS blocks work better in early model stages, where the feature resolution is higher. However, stacking more blocks in the second stage (high resolution) brings significant computation overhead, especially for high-resolution images. To balance FLOPs across stages, we allocate more QuadVSS blocks to the third stage (low resolution) instead of the second stage for larger models such as QuadMamba-Small and -Base (channels=144). For smaller models like QuadMamba-Lite and -Tiny, allocating more QuadVSS blocks to the second stage is affordable because of the smaller number of channels (channels=48), so it is unnecessary to defer the QuadVSS blocks to the third stage.
### Q3.Clarity and questions
**Q3A1 Window hyper-parameter:** We always use 2x2 windows at both levels of the two-level window partition. Considering the 224x224 image resolution and the four-times downsampling ratio, the two-level window partition strategy is sufficient for image classification. Window partitions with more levels will be explored in future work.
**Q3A2 Patch hyper-parameter:** The image features, which are partitioned by the learnable modules, are then reshaped into a 1D token sequence for the Mamba block. For Mamba, 1D token sequences of the same length follow exactly the same computation flow. Thus, computing efficiency remains unchanged across different patch numbers.
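To make the token layout concrete (and to connect with the 7-token count per window raised in the review), here is a hypothetical sketch of how one 2x2 coarse window's tokens could be laid out in the 1D sequence: the three unselected cells stay coarse and the selected cell expands into its four fine sub-quadrants. The exact ordering is our assumption, not the paper's implementation:

```python
def window_tokens(window_cells, selected):
    """Tokens for one 2x2 coarse window.

    window_cells: the 4 coarse cells of the window, in raster order.
    selected: index (0-3) of the cell chosen for fine-grained processing.
    Unselected cells are emitted as single coarse tokens; the selected
    cell is split into four fine sub-quadrants, giving 3 + 4 = 7 tokens.
    """
    tokens = []
    for i, cell in enumerate(window_cells):
        if i == selected:
            tokens += [("fine", cell, q) for q in range(4)]
        else:
            tokens.append(("coarse", cell))
    return tokens
```

Concatenating such per-window lists in raster order would yield one fixed-length 1D sequence per image, since every window contributes exactly seven tokens regardless of which cell was selected.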
**Q3A3 Label of Eq.7:** As QuadMamba brings no extra training complexity, there is no training label for the softmax in Eq. 7. We design a differentiable sequence construction strategy and apply the Gumbel-Softmax trick, which makes Eq. 7 optimizable end-to-end.
### Q4.Minor nits and typos
We will revise the main text carefully.
---
Rebuttal 2:
Title: response
Comment: Thank you for the additional graphs and figures. While QuadMamba performs well compared to baselines in terms of #param, Fig 2 shows that performance is incremental wrt MACs (FLOPS). To me, actual model sizes are not as important, FLOPS (or model latency on same hw) are the primary criterion, and on this one the gains are quite small. So the graphs actually reinforce my initial statement - I do not think there is a naming misalignment - I was comparing similar flops to similar flops and there was not much notable improvement there.
> Q2A1 The table below shows the improvement in tiny model levels/FLOPS thresholds.
I do not see FLOPS listed in the table, just #params. I explicitly asked for quality comparisons for same FLOPS.
> Q1A3
Your answer seems to be "Mamba adds a bunch more complicated constraints to the process" but you did not actually answer my question of -- can I do sampling and apply the Gumbel-Softmax idea to ViTs?
> As the QuadMamba brings no extra training complexity, there is no training label for the softmax in Eq. 7.
Softmax usually suggests a classification objective with training data. It seems what actually is going on is Eq 9. But at the point where you introduce Eq 7, the context is missing and is quite confusing.
I still lean positive but the explanations do not change my general takeaways, so I would keep my rating.
---
Rebuttal 3:
Comment: Thanks for your responses and for maintaining positive ratings. We apologize for not addressing the concerns precisely due to the word limit. We hope the following explanation can partially address your concerns.
* Our QuadMamba can achieve similar gains with a reasonable and simple learnable module, compared to more complex methods (e.g., LocalVim).
* We apologize for forgetting to provide the FLOPs in Q2A1. The FLOPs for Mamba-T are 1.7G, and the FLOPs for QuadMamba-T are 2.0G.
Regarding Gumbel-Softmax:
**1. Application to Vision Transformers (ViTs) and Other Architectures:**
The Gumbel-Softmax technique can be integrated into Vision Transformers (ViTs) and various other architectures or tasks, such as detection and super-resolution. Unlike the Softmax, which outputs continuous values in the range (0,1), Gumbel-Softmax provides discrete outputs in {0,1}. This capability is particularly beneficial for differentiable decision-making. Several ViT studies, like A-ViT [7] and SparseViT [8], utilize Gumbel-Softmax primarily for sparsifying feature computation. However, there is currently no exploration of using Gumbel-Softmax for patch partitioning in ViTs, which we believe is a promising area for future research.
**2. Supervision and Learning:**
Gumbel-Softmax here does not require explicit supervision. It receives gradients from the overall objective, such as the 1000-way cross-entropy loss in ImageNet classification, and learns to predict coarse-to-fine partitions in an **end-to-end** manner. The supervision for the partitioning process is implicit.
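As an illustrative, framework-agnostic sketch of Gumbel-Softmax sampling as discussed above (the temperature value and the hard/soft switch are generic choices of ours, not the paper's exact configuration; in an autodiff framework the hard branch would be paired with a straight-through gradient estimator rather than returned directly):

```python
import math
import random

def gumbel_softmax_sample(logits, tau=1.0, hard=True):
    """Sample from a categorical distribution via the Gumbel-Softmax trick.

    With hard=False, returns a continuous relaxation with entries in (0, 1);
    with hard=True, returns a discrete one-hot vector with entries in {0, 1}.
    """
    # Perturb logits with Gumbel(0, 1) noise: g = -log(-log(u)), u ~ U(0, 1)
    noisy = [l - math.log(-math.log(random.random())) for l in logits]
    # Temperature-scaled softmax over the perturbed logits (max-subtracted
    # for numerical stability)
    m = max(noisy)
    exps = [math.exp((x - m) / tau) for x in noisy]
    total = sum(exps)
    soft = [e / total for e in exps]
    if not hard:
        return soft
    # Discretize to a one-hot vector (the "hard" forward pass)
    k = soft.index(max(soft))
    return [1.0 if i == k else 0.0 for i in range(len(soft))]
```

With hard=True, the sampled one-hot vector could drive a discrete choice such as which quadrant of a window to refine, while in a real implementation gradients would flow through the soft relaxation.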
[7] A-ViT: Adaptive Tokens for Efficient Vision Transformer. Hongxu Yin, et al. CVPR 2022.
[8] SparseViT: Revisiting Activation Sparsity for Efficient High-Resolution Vision Transformer. Xuanyao Chen, et al. CVPR 2023.
Rebuttal: Dear Reviewers,
We thank the reviewers for the positive reviews of our work and constructive comments. Here is a list of new figures and tables in the attached PDF file, and references referred to in other responses.
**Figures and tables** :
Figure 1. Plots of performance, model size, and FLOPs in ImageNet classification.
Figure 2. Plots of performance, model size, and FLOPs in COCO detection and ADE20K segmentation.
Figure 3. Training curve and GPU memory consumption of our method.
Table 1. Fair Comparisons in ImageNet classification.
**References** :
[1] PlainMamba: Improving Non-Hierarchical Mamba in Visual Recognition. Chenhongyi Yang, et al. BMVC 2024.
[2] LocalMamba: Visual State Space Model with Windowed Selective Scan. Huang, et al. arXiv 2024.
[3] Going Deeper with Convolutions. Christian Szegedy, et al. NeurIPS 2014.
[4] Multiscale Vision Transformers. Haoqi Fan, et al. CVPR 2021.
[5] Focal Self-attention for Local-Global Interactions in Vision Transformers. Jianwei Yang, et al. NeurIPS 2021.
[6] QuadTree Attention for Vision Transformers. Shitao Tang, et al. ICLR 2022.
Pdf: /pdf/fb3ac93c6ba23c2d20fe960174dd217367fc4f78.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Randomized Truthful Auctions with Learning Agents | Accept (poster) | Summary: The paper considers repeated auctions with agents using no-regret learning algorithms. It first extends a previous result on the second-price auction to all deterministic auctions, showing that the runner-up bidder may not converge to bidding truthfully, and characterizes how the bidders' learning rates affect their convergence. It then shows that, with learning-agent bidders, randomized auctions can have strictly better revenue guarantees than second-price auctions with reserves. Finally, a notion of auctioneer regret is defined, measuring the revenue guarantee in the learning-agent setting relative to the second-price auction with truthful bids, and corresponding regret bounds are provided for auctioneers using a single fixed auction as well as varying auctions.
Strengths: 1. The results on bidding convergence and the revenue of auctions with respect to bidders using learning algorithms are important.
2. The idea of using auctioneer regret to analyze the revenue guarantee in learning-agent settings is interesting and inspiring.
3. The paper provides concrete theoretical proofs.
Weaknesses: 1. MWU is the only learning algorithm considered for bidders. Although it is representative, it would be better to see more general results for other learning algorithms, or a more general characterization of learning algorithms.
Technical Quality: 4
Clarity: 4
Questions for Authors: NA
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the Reviewer for their detailed feedback. Please find our answers to your questions below.
> MWU is the only learning algorithm that considered for bidders. Although it is representative, it would be better see more general results for other learning algorithm, or more general characterization on learning algorithms.
Notice that our results in Sections 4 and 5 hold for all mean-based algorithms, which generalize MWU. We will emphasize this more in the next version of our work.
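To make "mean-based" concrete, here is a minimal sketch of an MWU bidder over a discretized bid grid in a repeated second-price auction. The learning rate, bid grid, and full-information feedback are illustrative assumptions, not the paper's exact setup:

```python
import math

def mwu_step(weights, utilities, eta):
    """One MWU update: reweight each candidate bid by exp(eta * utility),
    then renormalize to a probability distribution over bids."""
    new = [w * math.exp(eta * u) for w, u in zip(weights, utilities)]
    total = sum(new)
    return [w / total for w in new]

def second_price_utilities(bids, value, other_bid):
    """Full-information counterfactual feedback in a second-price auction:
    a candidate bid b wins iff b > other_bid and then pays other_bid."""
    return [(value - other_bid) if b > other_bid else 0.0 for b in bids]
```

Starting from the uniform distribution, repeated updates concentrate probability mass on the bids with the highest average utility, which is the mean-based property the result relies on.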
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. | Summary: This work studies a setting where bidders use no-regret learning algorithms (e.g., MWU) to participate in repeated auctions. Bidders' values are assumed to be persistent.
Generalizing [Kolumbus and Nisan 2022a]'s results on second-price auctions with two equal-learning rate MWU bidders, the authors show that:
(1) in *all deterministic truthful auctions*, if the learning rate of the runner-up bidder is asymptotically equal to or slower than the winning bidder, then the bidders will *not* converge to truthful bidding (which hurts the auctioneer's revenue).
(2) for *some* deterministic truthful auctions (not *all*, see my (Q2)), if the learning rate of the runner-up bidder is asymptotically faster than the winning bidder, then the runner-up bidder converges to truthful bidding (while the winning bidder does not).
Then, the authors design auctions to maximize the auctioneer's revenue, aiming to achieve no-regret against the second-price auction revenue. The basic idea is to take the mixture of the revenue-maximizing IC auction and a randomized strictly IC auction to ensure that bidders can converge to truthful bidding. For the finite horizon setting, the authors obtain a tight regret bound of $\Theta(T^{3/4})$ by using a constant auction throughout $T$ rounds, and a tight regret bound of $\Theta(\sqrt T)$ by using an adaptive auction schedule.
Strengths: (S1) [Significance & Originality] The characterization of bidders' learning outcomes under different learning rates is very interesting. It is a significant generalization of previous work [Kolumbus and Nisan 2022a] that only considers equal learning rates. Result (1) above holds for all deterministic truthful auction, which is also a significant generalization of [Kolumbus and Nisan 2022a] on second-price auction only.
(S2) [Quality] The auctioneer's regret bounds in the finite horizon analysis are tight (in $T$), which is good.
(S3) [Clarity] The writing is clear in general. I like the discussion of high-level ideas and intuitions.
Weaknesses: (W1) Bidders having persistent values is a strong assumption. Under this assumption, the results in Sections 4 and 5 (designing auctions to achieve no regret for the auctioneer) are relatively straightforward. Moreover, the recent paper [Cai et al 2023, Selling to Multiple No-Regret Buyers] has already studied the problem of auction design against multiple no-regret learning buyers with iid valuations across time, which seems to be a more natural and challenging setting than the persistent value setting here. Given [Cai et al 2023], this paper's additional contribution is limited.
(W2) The conclusion that "if the learning rate of the runner-up bidder is strictly faster than the learning rate of the winning bidder, then the runner-up bidder converges to bidding truthfully" seems to only hold for some deterministic truthful auctions, not "all deterministic truthful auctions" as claimed by the authors. See my question (Q2).
I lean towards rejection for now due to the above concerns, but may change opinions based on the authors' response.
Technical Quality: 3
Clarity: 3
Questions for Authors: (Q1) What's the difference/improvement of this work compared to [Cai et al 2023]? (See W1)?
(Q2) This is an important question regarding the correctness of a result. In Section 3, the authors claim that for *all* deterministic truthful auctions, "if the learning rate of the runner-up bidder is strictly faster than the learning rate of the winning bidder, then the runner-up bidder converges to bidding truthfully (Line 213)". However, the formal result in Theorem D.3 and the proof are presented only for *second price auctions*. In fact, if the auction is a trivial auction that always allocates the item to bidder 1 (which is deterministic and weakly truthful), then all bids are the same for bidder 2 and hence bidder 2 cannot learn to converge to truthful bidding, so the authors' claim does not hold. I think the claim only holds for deterministic auctions where the low-value bidder can win by truthful bidding when the high-value bidder submits a small enough bid, like the second-price auction. A formal definition of such auctions is needed and a complete proof should be provided. Can the authors respond to this issue?
(Q3) (minor clarification question) Section 4 assumes that a strictly IC auction $A'$ always exists. Does it always exist? Is the "staircase auction" in Definition 5.5 a strictly IC auction? If yes, then maybe mention it in Section 4.
(Q4) What's the intuition that as the discretization is finer ($\Delta$ is larger), the regret of the auctioneer becomes larger, in the order of $O(\Delta^2 \sqrt T)$ ? (Corollary 5.6) This feels a bit counterintuitive.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: **Suggestions:**
(1) Typo: Line 119, "in the prior-free"
(2) Typo: Line 237, "to a bidding truthfully"
(3) In Section 4, I'd suggest to elaborate on the black-box transformation from IC auctions to strictly IC auctions, and shorten the paragraph about "Equilibrium of Meta-Game in Repeated Strictly IC Auctions". For example, you can move Theorem E.1 (which is referred to in the following paragraphs) from appendix to here. You can also clarify the existence of strictly IC auctions here (see my question (Q3)).
(4) Line 372: "number of actions $\Delta$" -> "number of discretized bids $\Delta$"
(5) Typo: Line 388: "theoptimal"
(6) Line 559 in Theorem D.1: What is "NPT"?
(7) Typo: Line 608: "round $t$" -> " round $i$"
(8) When defining the auctioneer's regret, you compete with the revenue of the second-price auction. I think you can actually compete with the high value $v_H$, which is a stronger benchmark than the second-price revenue. And to achieve no-regret against $v_H$, the following non-oblivious adaptive auction schedule might work: use a strictly IC auction for $T_0$ rounds until the bidders converge to truthful bidding, observe the bidders' values from their truthful bids, and then in the remaining rounds switch to the auction that always allocates the item to the higher-value bidder at a price of $v_H - \frac{1}{\Delta}$. Is that correct?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the Reviewer for their detailed feedback. Please find our answers to your questions below.
> Bidders having persistent ...
We view our results and the setting in which we work as orthogonal to the setting of [Cai et al 2023]. Firstly, they do not restrict themselves to truthful auctions, and for their welfare extraction results, the agents are allowed to overbid. Secondly, in their setting, redrawing valuations i.i.d. in every round helps the learning process (this was also observed in [Feng et al. 2021]). Intuitively, consider two agents and SPA: for every valuation $v$ of player 1, there is some positive probability that player 2’s draw is below $v$, hence player 1 will learn that bidding truthfully is strictly better (in expectation over the other random draw), which leads to the desired bidding behavior. In such a system, randomness is already present due to the draws of the valuations, which helps the convergence to the right bidding behavior.
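To make the contrast with persistent values concrete, here is a toy sketch (our own construction with made-up values, grids, and learning rates, not the exact model of either paper): two bidders with persistent valuations run expected-utility multiplicative-weights updates over a discrete bid grid in a repeated second-price auction. Once the winner's mass concentrates near its value, the runner-up stops receiving informative feedback and its mixture freezes below truthful bidding:

```python
import numpy as np

# Toy sketch (illustrative only): two bidders with persistent values run
# expected-utility MWU over a discrete bid grid in a repeated second-price
# auction; ties go to bidder 1, who has the higher value.
grid = np.round(np.arange(0.0, 1.01, 0.1), 1)  # candidate bids 0.0, 0.1, ..., 1.0
v1, v2 = 0.8, 0.5                              # persistent valuations
eta1, eta2 = 0.1, 0.1                          # learning rates
cum1 = np.zeros_like(grid)
cum2 = np.zeros_like(grid)

def mixture(cum, eta):
    w = np.exp(eta * (cum - cum.max()))        # stabilized exponential weights
    return w / w.sum()

for _ in range(2000):
    p1, p2 = mixture(cum1, eta1), mixture(cum2, eta2)
    # expected utility of each bid against the opponent's current mixture;
    # the winner pays the opponent's (second-highest) bid
    u1 = np.array([sum(p2[j] * (v1 - grid[j])
                       for j in range(grid.size) if grid[j] <= b) for b in grid])
    u2 = np.array([sum(p1[j] * (v2 - grid[j])
                       for j in range(grid.size) if grid[j] < b) for b in grid])
    cum1 += u1
    cum2 += u2

p1, p2 = mixture(cum1, eta1), mixture(cum2, eta2)
print("E[bid of winner] =", round(float(p1 @ grid), 2),
      " E[bid of runner-up] =", round(float(p2 @ grid), 2))
```

In this deterministic sketch the runner-up's per-round utilities drop to zero for every bid below the winner's concentrated bid, so its mixture stops moving toward truthful bidding; injecting randomness into the allocation keeps the runner-up's feedback informative, which is the remedy studied in the paper.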
Our setting of persistent valuations and restricting to truthful auctions, rather than complex mechanisms, is motivated by online ad-auctions. In such settings, multiple auctions are run every second, whereas the valuations of the advertisers may not change much for time scales of a day or a week. Thus, there are typically large intervals of size $T$ where the valuations of the participating agents are persistent. We agree that the valuations will likely change over longer time horizons, but as long as the persistence is significant, we think our results and the value of randomized mechanisms over deterministic mechanisms will hold. It is an intriguing question to understand the behavior in an intermediate setting, where the valuations vary slowly over time. For this setting, ideas from our work (and from Kolumbus and Nisan (2022), who also consider bidders with persistent valuations) along with those from [Cai et al. 2023], where valuations are drawn in every round, are likely to be useful. Intuitively, the frequency at which the valuations are drawn would affect the amount of randomization we need to add to the auction to help the bidders converge to the right bidding behavior.
Our work also differs from [Cai et al. 2023] in having different conceptual goals: we aim to “restore” the single-shot behavior in natural auctions, such as second-price auctions, in the presence of mean-based learning agents by making minimal modifications to the underlying auction rule. On the other hand, [Cai et al. 2023] aim to exploit the mean-based learning behavior to extract more revenue, and their auctions diverge from the truthful ones we consider in our work. Thus, in our setting, it is clear that reporting the valuation truthfully to the bidding algorithm is an (almost) optimal strategy for the agents (i.e., the so-called “meta-game” considered by Kolumbus and Nisan is truthful), whereas it is not clear to us whether reporting the valuations truthfully to the no-regret algorithms is an optimal strategy in the setting of [Cai et al. 2023].
Finally, a conceptual message of our work is that the key to convergence of the low-type bidder is the presence of enough randomness. If the ranking of the bidders is very stable due to the lack of inherent randomness (i.e., due to infrequent redraws of the valuations), we show that injecting external randomness into the auction induces the desired learning behavior, improves the revenue and restores the property that advertisers can truthfully report their valuations to the learning algorithms. Having persistent valuations is one case of the ranking of the bidders remaining stable over time: studying it allows us to showcase our main ideas, but a central message here is that the presence/absence of stability in the rankings of the bidders is key. We have tried to make this point in Lines 88-93 of our manuscript; we will emphasize it more in the next revision.
> The conclusion that ...
Thank you for pointing this out; we apologize for the confusion. Our Remark 2 in the Appendix (lines 674-678 of the submitted file) describes exactly the condition you are referring to. Under this condition, the proof provided in the appendix goes through for all such auctions. Notice also that if the runner-up bidder loses no matter what the opponent bids, then regardless of their learning rate they will converge to bidding uniformly at random. Thus, we can indeed characterize the convergence behavior for all deterministic auctions, and all learning rates. We will change the discussion in the main body, mention that this result holds for non-trivial auctions (for trivial ones we have convergence to uniform bidding), and change Remark 2 into a definition that states this property. Moreover, we will modify the proof to formally capture this setting.
> (minor clarification question)...
Indeed, the “staircase auction” does satisfy this condition. We will spell it out in the next version of our work.
> What's the intuition ....
Intuitively, if there are more bids, the strictly IC auction needs to "hedge" against more pairs of valuations of the agents and the strictly IC parameter decreases (i.e., the benefit of bidding truthfully decreases). Thus, we need to run this auction for a longer period of time to induce the desired behavior. In the setting of online ad auctions the number of bids is significantly smaller than the number of auctions, so our focus was to obtain optimal bounds with respect to $T$.
> Suggestions...
We will fix the typos and clarify the black-box transformation in the next revision.
Regarding suggestion (8), this is indeed correct, assuming that we use some *adaptive* auction schedule. However, we want to refrain from using such non-oblivious strategies and stick to oblivious ones, due to incentive issues – if the bidders know that we are trying to infer their valuations they should not be reporting truthfully to the learning algorithms. We wish to avoid that and stick to more practical approaches.
---
Rebuttal Comment 1.1:
Title: Happy with authors' response and raise rating to 6
Comment: My concerns are resolved by the authors' response and I raised rating to 6.
Given authors' response to W1, I agree with the authors that this paper has a sufficient additional contribution to the literature.
And thank you for clarifying my question Q2. I completely agree that, in the revision, you should "change the discussion in the main body, mention that this result holds for non-trivial auction (for trivial ones we have convergence to uniform bidding), and change Remark 2 into a definition that states this property. Moreover, we will modify the proof to formally capture this setting." A formal proof is really needed since the devil is always in the details.
Another minor suggestion regarding readability: Since your Section 3 now only presents all results informally and redirects the reader to Appendix D, you might consider adding formal theorem statements there or pointing to the specific theorems (like Theorem D.2, D.3). | Summary: The paper considers the problem of building auction mechanisms for settings where the bids supplied by agents are chosen by automated no-regret algorithms operating on their behalf. It has been shown in prior work that when bidding with asymmetric valuations, no-regret algorithms converge to bids substantially far from their valuations even when the auctioneer utilizes a truthful mechanism such as a second-price auction. This leads to scenarios where the bidder with the lower valuations routinely bids lower than their true values, resulting in a loss of revenue for the auctioneer.
The paper undertakes a deeper study of this phenomenon and suggests methods to remedy this situation. They start by showing that the learning rates of the respective agents play a substantial role in determining the types of behavior observed at convergence. They show that when the learning rate of the agent with lower valuations is larger, they converge to truthful bids while this is no longer the case if they use a smaller rate. The most interesting contribution of the paper is the observation that the type of convergent behavior depends on the rewards observed by the learning agents. For instance, when second-price auctions are used, the agent with lower valuation rarely receives rewards, as they never win the auction and hence do not receive any feedback to update their bids. The paper shows that when the auctions are instead \emph{randomized}, that is, the auctioneer randomly chooses between say a second-price auction and another truthful mechanism, this allows for convergence to truthful bids irrespective of the specific implementation of the bidding algorithms. When the mixing mechanism is \emph{strictly} incentive compatible, that is, a player obtains \emph{strictly more} utility from truthful bidding, they show that any mean-based no-regret algorithm converges to truthful bidding. Furthermore, the proofs in the paper are natural and easy to follow.
Overall, the paper considers a natural problem and presents an elegant solution. In the process, it also conceptually identifies the counterintuitive behavior of no-regret algorithms in this setting and demonstrates an approach toward remedying this. Furthermore, the technical material in the paper is well-presented and understandable. The main drawback of the results is their restriction to the setting of mean-based algorithms. It would be nice if the authors could comment on whether such a restriction may be removed.
Strengths: See main review
Weaknesses: See main review
Technical Quality: 3
Clarity: 3
Questions for Authors: See main review
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See main review
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the Reviewer for their detailed feedback. Please find our answers to your questions below.
> The main drawback of the results is their restriction to the setting of mean-based algorithms. It would be nice if the authors could comment on whether such a restriction may be removed.
Our choice of mean-based learners is motivated partly by prior work on learning in auctions, which, to a large extent, deals with mean-based learners, and partly by the fact that this is a broad class of learners which can be quantitatively reasoned about in one stroke. While we believe that our qualitative results should extend to other algorithms, it seems this would have to be a case-by-case analysis; it is not clear what broad general condition to place on a learning algorithm so that our results hold quantitatively.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response! I will retain my current evaluation. | Summary: This work builds upon Kolumbus and Nisan (2022a), which studies a setting where agents use no-regret learning algorithms to bid in a repeated auction setting. The authors first focus on a deterministic setting with two bidders that use the Multiplicative Weights Update (MWU) algorithm. In this case, they show that the runner-up bidder may not converge to bidding truthfully, depending on the learning rate it uses compared to the other agent. Next, they show that adding randomness to the auction can lead to the truthful bidding of the runner-up agent and, hence, maximize the revenue of the auctioneer. Finally, the authors study the non-asymptotic case.
Strengths: I believe the question of studying repeated auctions where agents use algorithms is timely and interesting. I also appreciate how the authors attempt to extend the results of Kolumbus and Nisan (2022a) to any deterministic auction and also highlighting the importance of incorporating randomness.
Weaknesses: I have a number of questions and concerns regarding the generality of the results and the presentation of the paper.
First, as far as I understand, the results in Section 3 are limited to two bidders that use the MWU algorithm. If I am not mistaken, the results of Kolumbus and Nisan (2022a) are not limited to two bidders and also study other algorithms such as "follow the perturbed leader." I wonder how the authors justify or perceive the limitations of their work in this regard.
Next, the model described in Section 2 (and the results of Section 3) are for two bidders, but it seems that from Section 4 onward, the authors switch to an $n$ bidder case. Is this correct?
Technical Quality: 3
Clarity: 2
Questions for Authors: Please see my comments above. Also, can the authors comment on the persistent value assumption? What if the values are drawn independently in each round but from different fixed distributions (in other words, we have persistent values with some noise)?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The limitations are discussed as the modeling assumptions are stated clearly.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the Reviewer for their detailed feedback. Please find our answers to your questions below.
> First, as far as I understand, the results in Section 3 are limited to two bidders that use the MWU algorithm. If I am not mistaken, the results of Kolumbus and Nisan (2022a) are not limited to two bidders and also study other algorithms such as "follow the perturbed leader." I wonder how the authors justify or perceive the limitations of their work in this regard.
Regarding the number of bidders in Section 3, we focus on the case of two bidders to keep the presentation cleaner; the results can go through when we have a larger number of bidders, where we let W be the winning bidder if they were all to bid truthfully and R the runner-up bidder if they were all to bid truthfully (i.e., the one who would win in the absence of W). Then, the convergence results about the ratio of the learning rates would apply to these two bidders.
Notice that the theoretical results of Kolumbus and Nisan hold for the case of two bidders only.
Regarding the choice of the algorithm, notice that the theoretical non-convergence result of Kolumbus and Nisan (Theorem 1 in their paper) applies to bidders who are using MWU. There are some simulations about FTRL, but, to the best of our knowledge, there is no theoretical understanding of the convergence behavior.
Notice that our transformation in Section 4 considers $n$ bidders who are using mean-based no-regret learning algorithms (a natural class of algorithms that significant prior work has focused on), so these results do apply to Follow-the-Regularized-Leader/Follow-the-Perturbed-Leader type of algorithms.
> Next, the model described in Section 2 (and the results of Section 3) are for two bidders, but it seems that from Section 4 onward, the authors switch to an $n$ bidder case. Is this correct?
This is correct; the results in Sections 3 and 5 consider two bidders, and the transformation in Section 4 applies to $n$ bidders. We are happy to state the model with $n$ bidders, if the reviewer feels that it will make the presentation more transparent.
> Also, can the authors comment on the persistent value assumption? What if the values are drawn independently in each round but from different fixed distributions (in other words, we have persistent values with some noise)?
The persistent-value assumption is motivated by online ad-auctions, and it is also the main setting considered by Kolumbus and Nisan. In the context of online ad auctions, while multiple auctions are run every second, the valuations of the advertisers do not change much for certain time scales, e.g., a day or a week. Thus, there are typically large intervals of size $T$ where the valuations of the participating agents are persistent. We do agree, however, that the valuations will likely change over longer time horizons. We believe that as long as the persistence is significant, our results and the value of randomized mechanisms over deterministic mechanisms will hold. Having a theoretical analysis showing such behavior for a setting with not fully persistent valuations will be interesting but goes beyond the scope of this work.
In the model you mentioned, where the valuations are re-drawn in every round but the distributions are different, we believe that the convergence (or non-convergence) would be dictated by the mass that the distributions put on overlapping regions of the supports of the distributions.
At a more conceptual level, we believe that a message of our work is the following: the key to convergence of the low-type bidder is the presence of enough randomness. If the ranking of the bidders is very stable due to the lack of inherent randomness (i.e., due to infrequent redraws of the valuations), our results show that injecting external randomness into the auction induces the desired learning behavior and hence improves the revenue. Moreover, it restores the property that advertisers can truthfully report their valuations to the learning agents that bid on their behalf on the queries. Having persistent valuations is just one case of the ranking of the bidders remaining stable over time: studying this case allows us to showcase our main ideas, but a central message here is that the presence/absence of stability in the rankings of the bidders is key. We have tried to make this point in Lines 88-93 of our manuscript; we will emphasize it more in the next revision.
---
Rebuttal Comment 1.1:
Title: Response to the rebuttal
Comment: I thank the authors for their detailed response. I am satisfied with the answers provided and keep my score as is. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
High-dimensional (Group) Adversarial Training in Linear Regression | Accept (poster) | Summary: This paper presents a non-asymptotic consistency analysis of the prediction error for the adversarial training procedure under $l_\infty$ perturbation. It demonstrates that the convergence rate of the prediction error is minimax optimal up to a logarithmic factor. Additionally, the authors prove that the group adversarial training procedure achieves a superior upper bound on prediction error compared to classic adversarial training.
Strengths: 1. This paper applies the restricted eigenvalue condition and sparsity to deliver a convergence analysis, resulting in a better convergence rate.
2. The authors investigate the convergence rate of group adversarial training and achieve a faster upper bound for convergence.
Weaknesses: 1. The authors aim to connect their conclusions about a linear model to adversarial training, but adversarial training is a defense strategy commonly used in deep neural networks (DNN). The linear model is too simple and specific to accurately represent the behavior of adversarial training. The authors would benefit from studying adversarial training on a simple two-layer neural network or a convex function, not just the linear model.
2. The convergence rate for a linear model, as presented, is insufficient to illustrate the behavior of adversarial training effectively.
3. The points in Lines 142-145 lack supporting literature.
4. From Theorem 2.3 and Corollary 2.4, the authors derive a convergence rate of order $\frac{1}{n}$. Moreover, in Remark 2.8, the authors claim that the prediction error in [20] has a lower order $\frac{1}{\sqrt{n}}$. However, from Theorem 2 in [20], the convergence rate is also $\frac{1}{n}$ if $\delta\propto \frac{1}{\sqrt{n}}$ as your setting. The conclusion in this paper is very similar to that of [20]. While the restricted eigenvalue condition and sparsity might accelerate the convergence rate, I don’t see an improvement in this paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: See Weaknesses
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the reviewer's comments on our work. We hope the following responses and clarifications can address the reviewer's concerns!
**Comment 1**
The authors aim to connect their conclusions about a linear model to adversarial training, but adversarial training is a defense strategy commonly used in deep neural networks (DNN). The linear model is too simple .... network or a convex function, not just the linear model.
**Comment 2**
The convergence rate for a linear model, as presented, is insufficient to illustrate the behavior of adversarial training effectively.
**Response to Comment 1 and 2**
Although the linear model seems simple, exploring the linear model is still common and crucial in the development of machine learning theories. Many existing works, e.g., [1-5], and our paper focus on the linear model for the adversarial training procedure. We explain the reasons as follows: Firstly, the linear model admits a precise analytical treatment. This makes the problem mathematically tractable and provides clear insights. For example, the minimax optimality of adversarial training under $\ell_\infty$-perturbation is proved in this paper, conveying the direct message that adversarial training under $\ell_\infty$-perturbation is statistically optimal. Moreover, the linear model can serve as an essential starting point for understanding more complex models. For example, it is well-known that the training dynamics of a wide neural network can be approximated by a linear model through the neural tangent kernel. Also, we want to emphasize that we are the first to prove the minimax optimality of adversarial training in the linear model. We believe that this contribution has pushed the frontier of the theoretical exploration of adversarial training.
We appreciate the reviewer’s consideration of this perspective and hope our clarifications underscore the rationale behind our focus on linear models and the significance of our contribution to the statistical theory of adversarial training.
[1] A. Javanmard, M. Soltanolkotabi, and H. Hassani. Precise tradeoffs in adversarial training for linear regression. In Conference on Learning Theory, pages 2034–2078. PMLR, 2020
[2] A. Ribeiro, D. Zachariah, F. Bach, and T. Schön. Regularization properties of adversarially-trained linear regression. Advances in Neural Information Processing Systems, 36, 2023
[3] H. Taheri, R. Pedarsani, and C. Thrampoulidis. Asymptotic behavior of adversarial training in binary linear classification. IEEE Transactions on Neural Networks and Learning Systems, 2023
[4] A. H. Ribeiro and T. B. Schön. Overparameterized linear regression under adversarial attacks. IEEE Transactions on Signal Processing, 71:601–614, 2023
[5] E. Dobriban, H. Hassani, D. Hong, and A. Robey. Provable tradeoffs in adversarially robust classification. IEEE Transactions on Information Theory, 2023
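To illustrate why the linear setting is analytically convenient, here is a minimal sketch (our own illustration with synthetic data, not the paper's experiments). It relies on the closed form for the inner maximization that is standard in this line of work: $\max_{\Vert\Delta x\Vert_\infty \le \delta}\,(y - (x+\Delta x)^\top\beta)^2 = (|y - x^\top\beta| + \delta\Vert\beta\Vert_1)^2$, so $\ell_\infty$ adversarial training in linear regression reduces to minimizing an explicit Lasso-like objective:

```python
import numpy as np

# Sketch of l_inf adversarial training in sparse linear regression
# (synthetic data; illustrative only). Inner max has the closed form
#   max_{||dx||_inf <= delta} (y - (x+dx)'b)^2 = (|y - x'b| + delta*||b||_1)^2
rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [1.0, 0.5, -0.8]            # sparse ground truth
y = X @ beta_true + 0.1 * rng.standard_normal(n)

delta = 1.0 / np.sqrt(n)                    # perturbation magnitude ~ 1/sqrt(n)

def adv_loss(beta):
    margins = np.abs(y - X @ beta) + delta * np.abs(beta).sum()
    return float(np.mean(margins ** 2))

# subgradient descent on the closed-form adversarial objective
beta = np.zeros(p)
lr = 0.05
for _ in range(3000):
    r = y - X @ beta
    m = np.abs(r) + delta * np.abs(beta).sum()
    grad = (2.0 / n) * (-(X.T @ (m * np.sign(r))) + delta * np.sign(beta) * m.sum())
    beta -= lr * grad

print("estimation error:", round(float(np.linalg.norm(beta - beta_true)), 3))
```

The $\delta\Vert\beta\Vert_1$ term in the objective is the "regularization effect" discussed around Equation (3) of the paper; with $\delta \propto 1/\sqrt{n}$ it acts like a Lasso-type penalty that shrinks the estimate.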
**Comment 3**
The points in Lines 142-145 lack supporting literature.
**Response to Comment 3**
Line 142 - Line 145 are the analysis of Equation (3), which is a direct expansion of Equation (2). In the literature, this phenomenon has been referred to as the "regularization effect." We have mentioned the relevant literature in Line 146.
**Comment 4**
From Theorem 2.3 and Corollary 2.4, the authors derive a convergence rate of order $\frac{1}{n}$. Moreover, in Remark 2.8, the authors claim that the prediction error in [20] has a lower order $\frac{1}{\sqrt{n}}$. However, from Theorem 2 in [20], the convergence rate is also $\frac{1}{n}$ if $\delta \propto \frac{1}{\sqrt{n}}$ as your setting. The conclusion in this paper is very similar to that of [20]. While the restricted eigenvalue condition and sparsity might accelerate the convergence rate, I don't see an improvement in this paper.
**Response to Comment 4**
The rate derived in [20] is $\frac{1}{\sqrt{n}}$ rather than $\frac{1}{n}$. We would like to provide the following explanation: The upper bound of the error in [20] is given by $8\delta\Vert\beta^\ast\Vert_1\left(\frac{1}{n}\Vert\varepsilon\Vert_1 + 10\delta \Vert\beta^\ast\Vert_1\right), $
which can be written as:
$$ 8\delta\Vert\beta^\ast\Vert_1\frac{1}{n}\Vert\varepsilon\Vert_1 + 80\delta^2\Vert\beta^\ast\Vert_1^2. \quad \quad \quad \quad \quad (1)$$ By choosing $\delta \propto \frac{1}{\sqrt{n}}$, the order of the first term in (1) is $\frac{1}{\sqrt{n}}$. A common misunderstanding regarding the order of $8\delta\Vert\beta^\ast\Vert_1\frac{1}{n}\Vert\varepsilon\Vert_1$ may come from the term $\frac{1}{n}\Vert\varepsilon\Vert_1$. At first glance, $\frac{1}{n}\Vert\varepsilon\Vert_1$seems to have the order $\frac{1}{n}$. However, since $\varepsilon$ is an $n$-dimensional vector, the resulting order of $ \Vert\varepsilon\Vert_1$ should be $n$, indicating the order of $\frac{1}{n}\Vert\varepsilon\Vert_1$ is $O(1)$ instead of $\frac{1}{n}$. Therefore, the order of the first term in (1) is $\frac{1}{\sqrt{n}}$. Consequently, the overall order of the error bound should be $\frac{1}{\sqrt{n}}$.
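For concreteness, assuming i.i.d. $N(0,\sigma^2)$ noise (an illustrative assumption; comparable tail conditions give the same conclusion),

$$\mathbb{E}\left[\frac{1}{n}\Vert\varepsilon\Vert_1\right] = \frac{1}{n}\sum_{i=1}^{n}\mathbb{E}|\varepsilon_i| = \mathbb{E}|\varepsilon_1| = \sigma\sqrt{\frac{2}{\pi}} = \Theta(1),$$

so the factor $\frac{1}{n}\Vert\varepsilon\Vert_1$ does not vanish as $n$ grows, and with $\delta \propto \frac{1}{\sqrt{n}}$ the first term in (1) indeed has order $\frac{1}{\sqrt{n}}$.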
For further clarification, please refer to the last paragraph on Page 8 in [20], where the authors claim that their order is $\frac{1}{\sqrt{n}}$:
> "For $ \lambda \propto M \sigma \sqrt{(\log p) / n}$, we (with high probability) satisfy the condition in Theorem 3, obtaining: $\frac{1}{n}\Vert X(\widehat{\beta} -\beta^*)\Vert_2^2 \lesssim M \sigma \sqrt{(\log p) / n}$. For adversarial training, we can set: $\delta \propto M \sqrt{(\log p) / n}$, and (with high probability) satisfy the theorem condition, obtaining the same bound,"
where the same bound denotes $M \sigma \sqrt{(\log p) / n}$, which has the order of $\frac{1}{\sqrt{n}}$.
Given that our proved order is $ \frac{1}{n}$, **we believe our paper demonstrates an order improvement.** We hope our explanations address the reviewer's concern and clarify the contribution of our work.
---
Rebuttal 2:
Title: Gentle Reminder
Comment: Dear Reviewer,
We hope this message finds you well. We are writing to kindly follow up on the rebuttal. We appreciate the effort you made to provide feedback on our paper. We would be grateful if you could let us know if our response has addressed your concerns.
Thank you,
Authors | Summary: The paper provides a high-dimensional analysis of Linear Regression in Adversarial training. It has two contributions:
1. It proves an improved convergence rate of the prediction error of $1/n$ (previous work shows $1/\sqrt{n}$).
2. It extends adversarial training to the group setting and extends the convergence results to it.
Strengths: The paper is well-written and clear.
The mathematical results seem to be consistent, and to the best of my knowledge are correct.
The authors are very explicit about their contribution and present a clear distinction from related work. Particularly, it is well contextualized and compared with [20], [26] and [28].
Weaknesses: The numerical experiments are the main weakness. Not so many different configurations are tested. Still, since this is mostly a theoretical paper, I don't see this as a major problem. But I believe studying it for different settings could strengthen the paper.
I think the presentation of the numerical results could be somewhat improved:
1. It would be interesting for Figure 1 to include confidence intervals and also show the final values that the coefficients converge to.
2. In Figure 2 it would be better to use base-10 logarithms; it is a bit hard to read with the natural log.
Technical Quality: 4
Clarity: 4
Questions for Authors: It is unclear to me what constant is used in the numerical experiments when setting the perturbation proportional to $1/\sqrt{n}$. How is it chosen? I imagine it is hard to guarantee in practice that the conditions of Theorem 2.3 are being satisfied.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The paper has no clear societal impact. See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Many thanks for the reviewer's appreciation of our work! Regarding the numerical experimental improvements, please see our response below and the revised figures in the pdf file attached in the global response.
**Comment 1**:
The numerical experiments are the main weakness. Not so many different configurations are tested. Still, since this is mostly a theoretical paper, I don't see this as a major problem. But I believe studying it for different settings could strength the paper.
**Response to Comment 1**:
We appreciate your recognition of the theoretical contributions of the paper. We acknowledge that the numerical experiments could benefit from a wider variety of configurations.
Due to the page limit of the rebuttal attachments,
we will include the results of additional settings in future revisions.
**Comment 2**:
I think the presentation of the numerical results could be somewhat improved:
Figure 1 could include confidence intervals and also show in the plot the final values that the coefficients converge to.
In Figure 2 it would be better to use base-10 logarithms; it is a bit hard to read with the natural log.
**Response to Comment 2**:
Thanks for the suggestions for improving the presentation of our numerical results. We have added confidence intervals, indicated the values that the coefficients converge to, and changed the figures to base-10 logarithms. Please see the pdf file with the revised figures attached in the global response.
We have some explanations of the figures as follows.
We plot the curve of $\log_{10}$(prediction error) versus $\log_{10}$(sample size) with error bars. We can observe that the slopes of the two curves are approximately equal to $-1$, which is consistent with our theoretical analysis, where we have proved that the prediction error for high-dimensional (group) adversarial training is of the order $1/n$. Further, the curve and error bar of group adversarial training are below those of adversarial training, indicating the superiority of group adversarial training.
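The slope check itself can be reproduced with synthetic errors that follow the proven $1/n$ rate (toy numbers of our own, not the paper's data; the constant 5.0 is arbitrary):

```python
import numpy as np

ns = np.array([200.0, 400.0, 800.0, 1600.0])  # sample sizes
err = 5.0 / ns                                # synthetic errors decaying at the 1/n rate
# fit log10(error) against log10(n); a 1/n rate corresponds to a slope of -1
slope, intercept = np.polyfit(np.log10(ns), np.log10(err), 1)
print("fitted slope:", round(float(slope), 3))
```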
We also plot the coefficient estimation paths of (group) adversarial training with error bars. Both adversarial training and group adversarial training shrink the parameter estimates, while group adversarial training exhibits a stronger shrinkage effect.
In addition, the final values that the coefficients converge to are annotated in the figures. Given the ground-truth non-zero values [0.1, 0.15, 0.2, 0.25, 0.9, 0.95, 1, 1.05], the final values of group adversarial training are closer to the ground truth, indicating that group adversarial training outputs a more accurate estimate.
**Comment 3**:
It is unclear to me what constant is used in the numerical experiments when setting the perturbation proportional to $1/\sqrt{n}$. How is it chosen? I imagine it is hard to guarantee in practice that the conditions of Theorem 2.3 are satisfied.
**Response to Comment 3**:
The reviewer is correct that it is hard to guarantee the conditions in Theorem 2.3 in practice.
As noted, we choose the order $1/\sqrt{n}$ because Corollary 2.4 shows that the minimax convergence rate can be achieved if the perturbation magnitude $\delta$ is of this order. This order is also recommended in the literature [26] (reference in the paper). For the constant, we selected $1$ for simplicity and experimental convenience. This constant allows us to illustrate the theoretical results effectively without complicating the numerical setup. We will clarify this setting in our future revisions.
---
Rebuttal 2:
Comment: I read the other reviews and comments. I believe the focus on linear models is interesting and well-justified and I don't think other scores should penalize the paper for it. The paper is good in what it does and has a clear contribution.
I thank the reviewer for addressing my concerns and I raise my score to strong accept.
---
Rebuttal Comment 2.1:
Comment: Thank you for recognizing our contributions and the value of our focus on linear models! We truly appreciate your positive evaluations and are greatly encouraged by your decision to raise your score to a strong accept. | Summary: This paper provided a theoretical analysis of the optimality of adversarial training methods for linear regression, and further explored the advantages of the group adversarial training method compared with the general adversarial training method. There are also experiments supporting these points.
Strengths: 1. The paper is well-organized with clear statements. The contributions are stated clearly, and it is easy to follow the logic and flow of the paper.
2. The theoretical results are discussed in detail, which helps in understanding the theorems.
Weaknesses: 1. As the paper mainly focuses on empirical errors, the contributions seem insufficient. It would be better to extend the results to a test error analysis, which may be more interesting.
2.The tightness of such upper bounds in main theorems has not been proved.
3. The linear model seems quite constrained; it is recommended to extend it to random feature models or other neural network models, such as the two-layer NTK or diagonal networks.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weakness.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: No negative social impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the reviewer's insightful comments. We hope our response and clarifications can address the reviewer's concerns!
**Comment 1:** As the paper is mainly focused on empirical errors, the contributions seem to be not enough. It is better to extend the results on test error analysis, which may be more interesting.
**Response to Comment 1:**
Thanks to the reviewer for pointing this out. We clarify the error framework used in this paper as follows.
1. We use the prediction error, $\frac{1}{n}\Vert \mathbf{X}(\hat{\beta}-\beta^\ast) \Vert_2^2$, instead of the empirical error. In the framework of non-asymptotic high-dimensional statistical analysis, the prediction error is typically used; see [1-4]. The prediction error helps us directly quantify the deviation of $\hat{\beta}$ from $\beta^\ast$.
2. The prediction error, $\frac{1}{n}\Vert \mathbf{X}(\hat{\beta}-\beta^\ast) \Vert_2^2$, is also called 'in-sample' prediction error. The 'test error' mentioned by the reviewer may be referred to as the 'out-of-sample' prediction error. We provide explanations for why the in-sample prediction error is usually preferred in high-dimensional analysis as follows.
Firstly, high-dimensional settings typically involve situations where the number of input variables is much larger than the number of observations. In such cases, splitting the data into training and test sets can result in very few observations in the test set, making out-of-sample prediction errors unreliable.
Secondly, the in-sample prediction error enables the application of concentration inequalities, yielding explicit probabilistic bounds. Thus, the in-sample prediction error admits more refined theoretical analysis.
[1] M. J. Wainwright. High-dimensional Statistics: A Non-asymptotic Viewpoint. Vol. 48. Cambridge University Press, 2019.
[2] G. Raskutti, M. J. Wainwright, and B. Yu. Minimax rates of estimation for high-dimensional linear regression over lq-balls. IEEE Transactions on Information Theory, 57(10):6976–6994, 2011.
[3] P. C. Bellec, G. Lecué, and A. B. Tsybakov. Slope meets lasso: improved oracle bounds and optimality. The Annals of Statistics, 46(6B):3603–3642, 2018.
[4] K. Lounici, M. Pontil, S. van de Geer, and A. B. Tsybakov. Oracle inequalities and optimal inference under group sparsity. The Annals of Statistics, pages 2164–2204, 2011.
**Comment 2:** The tightness of such upper bounds in main theorems has not been proved.
**Response to Comment 2:** Thank you for your comment regarding the tightness of the upper bounds in our main theorems. We appreciate the opportunity to clarify this point.
The upper bounds in our main theorems are indeed tight, as we have demonstrated that the error of adversarial training achieves the minimax rate. Specifically, the error order in our paper is $s\log p/n$, which matches the minimax lower bound given in [2,19] (the references in our paper); see Lines 206-208. We hope this explanation addresses your concern.
**Comment 3:** The linear model seems to be quite constrained, it is recommended to extend to random feature models or other neural network models, such as two-layer NTK or diagonal network.
**Response to Comment 3:**
Thanks for the reviewer's valuable suggestions on possible extensions.
The reason we focus on the linear model is that it allows rigorous analytical treatment.
It makes the problem mathematically tractable and provides clear insights.
For example, this paper proves the minimax optimality of adversarial training under $\ell_\infty$-perturbation, conveying the direct message that adversarial training is statistically optimal.
Moreover, the linear model serves as an essential starting point for understanding more complex models. For example, it is well known that the training dynamics of wide neural networks can be approximated by a linear model through the neural tangent kernel.
We appreciate your suggestions for the extensions.
The extensions to the more complex models are very promising but may require intensive additional proof work, so we will consider the extensions seriously in our future work.
---
Rebuttal 2:
Title: Gentle Reminder
Comment: Dear Reviewer,
We hope this message finds you well. We are writing to kindly follow up on the rebuttal. We appreciate the effort you made to provide feedback on our paper. We would be grateful if you could let us know if our response has addressed your concerns.
Thank you,
Authors
---
Rebuttal Comment 2.1:
Title: Response to the rebuttal
Comment: Many thanks for the authors' response. This paper has an interesting story, but in my view it still needs to be polished. I will maintain my score. | Summary: The paper studies adversarial training in high-dimensional linear regression under $\ell_\infty$-perturbations and group adversarial training. The paper also provides a non-asymptotic consistency analysis.
Strengths: The associated convergence rate of prediction error achieves a minimax rate up to a logarithmic factor. The authors also claim that group adversarial training offers a better prediction error upper bound under certain group-sparsity patterns.
Weaknesses: - Remark 2.8: The authors say that they improve the convergence rate of the prediction error from $\frac{1}{\sqrt{n}}$ to $\frac{1}{n}$. Is the improvement possible only due to the additional assumption of restricted eigenvalue condition and sparsity information? If so, it should be clearly stated in the abstract and introduction. Otherwise, it gives the wrong impression or over-claims that the improvement is achieved without any additional assumptions.
- Is the weight vector $\mathbf{w}$ defined in line 234 assumed to be known in group adversarial training? If so, it should be stated clearly as an assumption. If not, does an algorithm exist to estimate it? The results are obviously heavily dependent on this parameter.
- The abstract can be improved by including more technical information about the specific contributions. For example, is the analysis improving any rates as compared to the existing literature? If so, what specific theoretical analysis or assumption helped to achieve that improvement? For example, the last line in the abstract mentions certain group sparsity patterns. It would be helpful if the authors could explain more about this particular sparsity pattern so that a reader gets an understanding of whether the requirement is restrictive or easily achievable.
- Line 30: The authors could probably clarify with more technical details why existing literature thinks that $\ell_\infty$-perturbations can be helpful in recovering sparsity.
- Line 87-97: It is repetitive.
- Line 229 has a typo: $s = \infty$ should be used in the subscript.
Technical Quality: 2
Clarity: 2
Questions for Authors: - The proof of Corollary 2.4 uses Gaussian tail bounds. Can the proof be generalized for sub-Gaussian distribution?
- Are the bounds derived in Corollary 3.5 or 3.6 tight? Meaning, can the authors show the equivalence to the bounds derived in Corollary 2.4 or 2.5 by making the number of groups = 1 in Corollary 3.5 or 3.6?
- Line 55: what is the sparsity condition exactly?
- Line 65: What is the certain group sparsity structures exactly?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 1
Limitations: - At the end of Section 3, the authors claim that the inferences made are similar to the differences between Lasso and group Lasso. If that is the case, what are the new insights that the proposed theoretical analysis is bringing in? It seems like a paper that rigorously verifies the expected results, which have been well explored in the literature for similar methods like Lasso and group Lasso.
- The equality assumption on $\delta$ or $\frac{\delta}{w_l}$ in Corollary 2.4 or Corollary 3.5, respectively, seem quite restrictive. Is the theoretical analysis not useful for any other $\delta$? Any form of upper bound or lower bound on $\delta$ could have been more helpful.
- If I understand correctly, the paper analyzes the particular case of $r = 2$ and $s = \infty$ only. But the problem is defined for general $r$ and $s$ in proposition 3.1 and equation after line 233. It gives the wrong impression to the reader that the problem is analyzed for general $r$ and $s$.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the reviewer's careful and detailed comments on our work. We hope our responses and clarifications can address the reviewer's concerns!
**Comment 1** ...improve the convergence rate......clearly stated in the abstract...
**Response**
The reviewer is correct that the order improvement is based on the restricted eigenvalue condition, a standard assumption in the sparse high-dimensional analysis literature. We also assume that the ground truth parameter $\beta_\ast\in\mathbb{R}^p$ is supported on a subset of $\{1,\ldots,p\}$. We will revise the abstract and introduction to state these assumptions more clearly.
**Comment 2** Is the weight vector $\mathbf{w}$ defined in line 234 assumed to be known in group adversarial training?...
**Response** $\mathbf{w}$ is a hyperparameter and does not need to be estimated. Additionally, in the order analysis presented in Corollary 3.5, the error upper bound no longer depends on $\mathbf{w}$.
**Comment 3** The abstract can be improved by including more technical information about the specific contributions...
**Response** The main contributions in this paper are: (a) We are the first to show that adversarial training can achieve minimax optimality. We prove this under the restricted eigenvalue condition. (b) We analyze the group adversarial training and prove that it enjoys a better prediction error bound when group sparsity patterns are known. The group sparsity pattern means that the variables act in groups, and sparsity exists at the level of groups of variables instead of individual variables. The group patterns exist in many real-world problems. For example, groups of genes act together in pathways in gene-expression arrays. Also, if an input variable is a multilevel factor and dummy variables are introduced, these dummy variables act in a group. We will make all these assumptions and contributions clearer in the abstract.
**Comment 4** The authors...why existing literature thinks that $\ell_{\infty}$-perturbations can be helpful in recovering sparsity.
**Response** [26] has proved that the asymptotic distribution of adversarial training estimator under $\ell_\infty$-perturbation has a positive mass at $0$ when the underlying parameter is $0$. Other types of perturbation do not have this property. We will state these technical details carefully in our revision.
**Comment 5 and 6** Line 87-97: It is repetitive. Line 229 has typo...
**Response:**
We will revise these lines.
**Comment 7** ...Can the proof be generalized for sub-Gaussian distribution?
**Response** Corollary 2.4 can be generalized to sub-Gaussian distributions since sub-Gaussian distributions share similar tail behavior with Gaussian distributions. We will revise the proof to include the generalization for sub-Gaussian distributions.
**Comment 8** Are the bounds derived in Corollary 3.5 or 3.6 tight?..
**Response:** Thanks for pointing this out. The bounds derived in Corollaries 3.5 and 3.6 are tight. If we set the number of groups equal to $p$, i.e., each group has only one component, then $L=p$, $p_l=1$, and $\vert G_J\vert=g$. The resulting error bound is $g\log p/n$, where $g$ denotes the number of nonzero components of $\beta_\ast$. This order matches what is derived in Corollaries 2.4 and 2.5. We will summarize this as a remark under Corollaries 3.5 and 3.6.
**Comment 9** what is the sparsity condition exactly?
**Response** The sparsity condition means that the number of non-zero coefficients is smaller than the total number of coefficients, i.e., the ground truth $\beta_\ast\in\mathbb{R}^p$ is supported on a subset of $\{1,\ldots,p\}$. We will clarify this definition in the revision.
**Comment 10** What is the certain group sparsity structures exactly?
**Response** The group sparsity structure means that sparsity is enforced at the group level rather than at the individual level.
Specifically, suppose the index set $\{1,\ldots,p\}$ has a prescribed (disjoint) partition $\{1,\ldots,p\}=\bigcup_{l=1}^L G_l$.
Let $J\subset\{1,\ldots,L\}$ denote a set of groups; $\beta_\ast\in\mathbb{R}^p$ is supported on these $J$ groups, i.e., $\beta_\ast$ is supported on $G_J=\bigcup_{l\in J}G_l$.
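To make the structure concrete, here is a small self-contained sketch (our own illustration, with hypothetical sizes $p=12$, $L=4$, and active groups $J=\{0,2\}$, not values from the paper): the index set is partitioned into disjoint groups, and the ground-truth vector is nonzero only on the union of the active groups.

```python
# Illustrative sketch of group sparsity: indices {0, ..., p-1} are
# partitioned into L disjoint groups, and beta_star is supported only
# on the union G_J of the groups indexed by J.
import numpy as np

p, L = 12, 4
groups = np.array_split(np.arange(p), L)   # disjoint partition G_1, ..., G_L
J = [0, 2]                                 # active groups
G_J = np.concatenate([groups[l] for l in J])

beta_star = np.zeros(p)
beta_star[G_J] = 1.0                       # support lies exactly on G_J

# Sparsity holds at the group level: whole groups are zero or nonzero.
active = [l for l in range(L) if np.any(beta_star[groups[l]] != 0)]
print(active)  # → [0, 2]
```

Individual-level sparsity (as in the Lasso setting) corresponds to the special case where every group contains a single index.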
**Comment 11** ...the inferences made are similar to the differences between Lasso and group Lasso..what are the new insights that...
**Response** Our work focuses on the adversarial training problem, which is inherently different from the Lasso problem, and our contributions are (1) adversarial training is minimax optimal, and (2) group adversarial training can improve the error bound. These conclusions have never been explored in the literature. We mention and relate adversarial training to LASSO because both LASSO and $\ell_\infty$-perturbed adversarial training can recover sparsity and achieve the minimax convergence rate. In this light, it is not surprising that the theoretical error bounds of (group) adversarial training are consistent with those of (group) LASSO. We will add these discussions to avoid confusion.
**Comment 12** The equality assumption on $\delta$ or $\frac{\delta}{w_l}$ in Corollary 2.4 or Corollary 3.5, respectively, seem quite restrictive...
**Response** $\delta$ should be of order $1/\sqrt{n}$ for the corollaries to hold, due to the structure of the concentration inequality of the Gaussian distribution. We admit that this setting seems restrictive. But we would like to interpret this result as a recommendation for the order choice. That is to say, the order $1/\sqrt{n}$ is recommended in order to achieve the fast convergence rate. This order choice is also recommended in the literature [26] to achieve sparsity; see Remark 2.5 in our paper.
**Comment 13** If I understand correctly, the paper analyzes ... gives the wrong impression to the reader that the problem is analyzed for general $r$ and $s$.
**Response** The reviewer is correct. Thanks for pointing this out. We will clarify the scope of our analysis more clearly.
---
Rebuttal 2:
Title: Gentle Reminder
Comment: Dear Reviewer,
We hope this message finds you well. We are writing to kindly follow up on the rebuttal. We appreciate the effort you made to provide feedback on our paper. We would be grateful if you could let us know if our response has addressed your concerns.
Thank you,
Authors
---
Rebuttal Comment 2.1:
Comment: I thank the authors for their detailed response. I will maintain my original score. | Rebuttal 1:
Rebuttal: We have revised the figures for the numerical experiments as requested by Reviewer J4Ez. Please see the attached pdf file.
Pdf: /pdf/bfd9319a74c051e74866bae2cca29fc0b6f124b6.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Robust Mixture Learning when Outliers Overwhelm Small Groups | Accept (poster) | Summary: In this paper, the authors introduce the list-decodable mixture learning problem, which can be considered an extension of the list-decodable mean estimation problem. In this setting, data is drawn from a weighted mixture of $k$ inlier distributions and an adversarial outlier distribution. A notable aspect of this paper’s problem setting is that the fraction of outliers $\varepsilon$ can exceed the fraction of certain inlier clusters $w_i$. As a result, the required list size is at least $k+\varepsilon/\min w_i$. The primary objective of this study is to compute a compact list $L$ of means such that for any $\mu_i$ of an inlier group, there exists $\hat{\mu}\in L$ with a sufficiently small $\| \hat{\mu}-\mu_i \|$. The only known information is $w_{low}$, a lower bound on all $w_i$.
The algorithm proposed by the author is meta in the sense that it uses other algorithms (Robust Mean Estimation and List-Decodable Mean Estimation) as base components. It comprises two stages outlined as follows:
1. Outer stage: iteratively separates the inlier clusters from each other and from the outliers. This stage produces a collection of sets $\mathcal{T}$ satisfying certain properties. Then, for every $T\in \mathcal{T}$, the inner stage is run.
2. Inner stage: uses the cor-kLD algorithm to derive the cor-aLD algorithm, which only has access to an $\alpha_{low}$ s.t. $\alpha_{low}\leq \alpha$. In this stage, for every $T\in \mathcal{T}$, a refined list of mean estimates is produced.
Strengths: - This study is interesting. It appears to be the first to address the challenge of mixture learning in cases where outliers overwhelm small inlier groups.
- The problem setting of this paper is practical. The algorithm is designed to work with limited information, only having access to the lower bound of inlier group fractions.
- The paper presents theoretical lower bounds for the error. Remarkably, the proposed meta-algorithm matches this lower bound.
Weaknesses: - The two stage meta-algorithm can be time-consuming.
Technical Quality: 3
Clarity: 3
Questions for Authors: - The paper lacks a presentation of the time complexity, and the experimental part also fails to provide results from this aspect. It would be beneficial to include information regarding the time complexity of the proposed algorithm to provide a comprehensive understanding of its computational efficiency.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: not applicable
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their appreciation of our work and for the feedback which will help to improve it. Below, we address specific comments and questions.
> The two stage meta-algorithm can be time-consuming. [...] The paper lacks a presentation of the time complexity, and the experimental part also fails to provide results from this aspect. It would be beneficial to include information regarding the time complexity of the proposed algorithm to provide a comprehensive understanding of its computational efficiency.
Thank you for your comment. We provided a detailed analysis of the time complexity in the general rebuttal response (third part). In particular, our meta-algorithm has a small overhead complexity compared to base learners. We will include this discussion in the revised version.
---
Rebuttal Comment 1.1:
Comment: Thank you for the additional analysis. The authors have answered my question and I would like to keep my positive score. | Summary: This paper addresses the problem of estimating the means of well-separated mixtures in the presence of adversarial outliers, a scenario where traditional algorithms may fail. The authors introduce the concept of list-decodable mixture learning (LD-ML), which is particularly relevant when outliers can outnumber smaller inlier groups.
Strengths: The paper introduces a new problem formulation (LD-ML) and a creative combination of existing ideas (base learners for adversarial corruptions) to solve it. This originality addresses significant limitations in prior work, enhancing the robustness and applicability of mixture learning algorithms. The research quality is high, with rigorous theoretical analysis and well-designed experiments. The thoroughness of the proofs and the clarity of the theoretical contributions reflect a strong understanding of the subject matter.
Weaknesses: The paper did not include real-world datasets and additional robust learning methods for comparison. It would be great if the impact of the assumptions can be clarified.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Have you considered validating your algorithm on real-world datasets? If so, could you provide details on these experiments? If not, what were the main obstacles? It would be beneficial to include results from real-world datasets to demonstrate the practical utility and robustness of your method.
2. How does your algorithm perform when the assumption of well-separated mixture components does not hold? Have you tested its robustness in such scenarios?
3. Can you provide a detailed analysis of the computational complexity of your algorithm? How does it scale with the number of dimensions and mixture components?
4. How does your algorithm compare to other state-of-the-art robust learning methods in terms of practical performance metrics like runtime and ease of implementation?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their appreciation of our work and for the feedback which will help to improve it. Below, we address specific comments and questions.
**Experimental evaluation**
> The paper did not include real-world datasets and additional robust learning methods for comparison. [...] Have you considered validating your algorithm on real-world datasets? If so, could you provide details on these experiments? If not, what were the main obstacles? It would be beneficial to include results from real-world datasets to demonstrate the practical utility and robustness of your method. [...] How does your algorithm perform when the assumption of well-separated mixture components does not hold? Have you tested its robustness in such scenarios?
We would like to emphasize that the focus of our work is to first introduce the LD-ML framework and prove theoretical guarantees in this setting for an efficient algorithm. We agree that for the purpose of properly evaluating the effectiveness of the algorithm in practice we would need a much more extensive experimental setup that would be out of scope for this paper and we leave it for future work.
Having said that, it would be great if the reviewer could point to the specific robust learning methods (beyond the ones considered in the paper) they have in mind. For our experimental results, we opted to compare our algorithm with 1) baselines that have provable guarantees in our setting (list-decoding algorithms) and 2) three clustering algorithms commonly used in practice: k-Means, robust k-Means, and DBSCAN.
> It would be great if the impact of the assumptions can be clarified. [...] How does your algorithm perform when the assumption of well-separated mixture components does not hold? Have you tested its robustness in such scenarios?
We will clarify the assumptions (in particular, we have a result which does not require the well-separatedness assumption) in the revised version. For example, Corollary C.4 answers the question about the impact of the well-separatedness assumption by showing the guarantees when the components are not separated. We can move it to the main text so that the impact is more explicit.
Furthermore, in our experimental design we effectively included the setting where clusters are not separated (by inserting 'adversarial' clusters in the vicinity of the inlier component). Therefore, results in Figure 2 can also be interpreted as results for non-separated clusters (by assuming that the inserted clusters around the smallest cluster are part of the mixture).
> Can you provide a detailed analysis of the computational complexity of your algorithm? How does it scale with the number of dimensions and mixture components?
Thank you for your suggestion. Please see our general rebuttal response (third part) for the detailed analysis of the computational complexity.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I am happy to raise my score considering all the discussions. | Summary: The authors investigate the problem of mean estimation for a well-separated mixture in the presence of arbitrary outliers introduced by an adversary. They propose a meta-algorithm that leverages robust mean estimation algorithms as base learners, each with a set of prescribed properties. The authors provide an error guarantee of $\mathcal{O}(\sqrt{\log \frac{\epsilon}{w_i}})$ for a list size of $k + \mathcal{O}(\frac{\epsilon}{w_{low}})$, where $k$ is the total number of inlier groups, $\epsilon$ is the proportion of outliers, and $w_i$ is the proportion of inlier group $i$. Additionally, their approach is capable of handling cases where some $w_i \leq \epsilon$.
Strengths: The main contribution of the paper lies in developing the meta-algorithm (Algorithm B.1) which first creates clever partitions of the original set (Algorithm D.1) and then uses the appropriate base learners on the partitioned sets (Algorithm C.1). While I have not gone through the mathematical details very carefully, the results seem very interesting and match the error guarantees of base learners run with the full information on the weight proportion of each inlier group separately. The paper is written well and the ideas and corresponding arguments are presented clearly in the main paper with technical details deferred to the appendix.
Weaknesses: The paper makes significant contributions, and I did not identify any particular weaknesses. While I possess general knowledge of the research area, I am not an expert, and my review should be considered within that context.
Technical Quality: 4
Clarity: 4
Questions for Authors: Please see above.
Confidence: 2
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The limitations are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their appreciation of our work. | Summary: This submission considers the problem of list-decoding the means of a mixture of $k$ sub-Gaussian components, under adversarial *additive* contamination which can have size $\epsilon n$ larger than the smallest-weight component. Instead of only yielding guarantees scaling with the known component weight lower bound $\alpha_{low}$, the (meta-)algorithm in this paper achieves error that depends directly on $w_i$, the weight of that particular component, and decays also with $\epsilon$. Furthermore, the size of the output list is guaranteed $k + O(\epsilon/\alpha_{low})$ instead of the more common and weaker $O(1/\alpha_{low})$ guarantee.
Note: this is an emergency review, so I did not read the paper as closely and carefully as I'd like.
Strengths: To me, the main strength of the paper lies in identifying the regimes where much stronger guarantees (e.g. mean estimation error dependence on actual component weight and not just $\alpha$) can be made compared to prior works, which are worst-case optimal in some sense of the phrase. The (meta-)algorithmic ideas are also simple enough to be implemented (which I consider a plus and not a minus).
Weaknesses: - The introduction claims results even when there is no separation between mixture components, but Theorem 3.3 makes a separation assumption? Did I miss something?
====
Below are hopefully some constructive feedback that the authors can use to improve the paper:
- These guarantees seem possible only because of the distance separation vs concentration assumption (both depend on "$t$"), and I think there needs to be more discussion of the intuition on the quantitative tradeoff in the main body.
- The contextualization in prior work seems incomplete to me. I have two related complaints, both (roughly) related to the finite covariance assumption setting.
1. The $\epsilon \gg \min_i w_i$ regime seems to me *almost* covered by the standard setting of learning the means of finite covariance mixtures: the contamination distribution $Q$ can always be trimmed a little bit to get finite covariance, and just be regarded as part of the mixture. Upon googling, it seems that there is more recent work by Diakonikolas, Kane, Lee and Pittas (Clustering Mixtures of Bounded Covariance Distributions Under Optimal Separation) which does handle such mixtures, and in fact components with different covariance sizes. I guess the caveat is that, under the above modelling, the mean of $Q$ might not be sufficiently bounded away from the genuine components. Is my understanding correct? I think a comparison to finite-covariance mixture learning literature, from the framework perspective, would be helpful.
2. The intro of this submission really focuses on the sub-Gaussian mixture case ($t \approx \log 1/w_{low}$), but the general result applies also to $t = 2$. So I think a direct comparison to the technical results of the above paper as well as DKKLT22 is needed. For example, the separation assumption in this submission is much larger ($1/w_{low}^2$) than these prior works ($1/\sqrt{w_{low}}$). In general, I think it'd be very useful to clarify the results in this paper with respect to these DKKLT22 and DKLP papers (and other related works).
- Writing: Section 3.1 was quite dense to read (though possibly because I have to read it quickly for the emergency review). It might be better to present the Gaussian case and the relation to prior work, before presenting the full result in generality and explaining precisely the generalizations.
- Minor thing in line 114: I'm not sure it's strictly true that "in most robust estimation problems, the fractions of inliers and outliers are usually provided to the algorithm". Doesn't filtering work even if the number of outliers is unknown, as long as the inlier variance is known? One could keep filtering until the remaining data set has small-enough covariance?
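To make the question concrete, here is a minimal pure-Python sketch of the filtering heuristic described above (a toy 1-D illustration with hypothetical parameters, not an algorithm from the paper); `var_bound` plays the role of the known inlier variance:

```python
def filter_until_bounded(xs, var_bound, frac_remove=0.05):
    """Toy 1-D filtering: repeatedly drop the points farthest from the
    current mean until the empirical variance falls below var_bound."""
    xs = list(xs)
    while len(xs) > 1:
        mu = sum(xs) / len(xs)
        var = sum((x - mu) ** 2 for x in xs) / len(xs)
        if var <= var_bound:
            return mu
        # drop the frac_remove fraction of points farthest from the mean;
        # note that the number of outliers is never used
        k = max(1, int(frac_remove * len(xs)))
        xs.sort(key=lambda x: (x - mu) ** 2)
        xs = xs[:-k]
    return xs[0]
```

Even with the outlier count unknown, the loop terminates once the surviving data is consistent with the known inlier variance bound.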
Technical Quality: 3
Clarity: 2
Questions for Authors: - Any intuition for why $w_{low}^2$ is the right quantity to compare with $w_i$ and $\epsilon$? Is it a necessity of the problem setting or is it just an artifact of the algorithmic construction?
- Definition 2.1 is essentially assuming boundedness compared to the identity covariance Gaussian. For mixtures where the components have, say, covariance bounded by $c$ times identity, does the algorithm need to know $c$, or can $c$ be estimated easily in this context?
- How hard is it to go beyond the corruption model of (2.1), to get *adaptive* additive corruption? That is, the corruption isn't just drawn from a fixed $Q$, but that the corrupted points can depend directly on the drawn inlier samples?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their appreciation of our work and for the feedback which will help to improve it. Below, we address specific comments and questions.
> The introduction claims results even when there is no separation between mixture components, but Theorem 3.3 makes a separation assumption? Did I miss something?
Thank you for the comment. Yes, we also obtain a result for the non-separated case. See lines 241-243 in the main text, and Corollary C.4 in the Appendix. In the revised version we will move the corollary statement to the main text.
> These guarantees seem possible only because of the distance separation vs concentration assumption (both depend on $t$), and I think there needs to be more discussion of the intuition on the quantitative tradeoff in the main body.
Indeed, this trade-off is inevitable and is present in prior works too. Please also see our discussion on the separation assumption in the general rebuttal (second part).
> Upon googling, it seems that there is more recent work by Diakonikolas, Kane, Lee and Pittas (Clustering Mixtures of Bounded Covariance Distributions Under Optimal Separation) which does handle such mixtures, and in fact components with different covariance sizes. I guess the caveat is that, under the above modelling, the mean of $Q$ might not be sufficiently bounded away from the genuine components. Is my understanding correct? I think a comparison to finite-covariance mixture learning literature, from the framework perspective, would be helpful.
Your understanding is correct: after the trimming operation, $Q$ is not guaranteed to be separated from the other components. This does not allow modeling $Q$ as part of the mixture.
We will add a comparison with the DKLP work in the revised version (also see general rebuttal response).
> The intro of this submission really focuses on the sub-Gaussian mixture case ($t \approx \log 1 / w_{\text{low}}$), but the general result applies also to $t = 2$. So I think a direct comparison to the technical results of the above paper as well as DKKLT22 is needed. For example, the separation assumption in this submission is much larger $1 / w_{\text{low}}^2$ than these prior works $1 / \sqrt{w_{\text{low}}}$. In general, I think it'd be very useful to clarify the results in this paper with respect to these DKKLT22 and DKLP papers (and other related works).
Thank you for the comment, please see our discussion on the separation assumption in the general rebuttal (second part), where we also compare with [DKLP23]. We will extend the comparison with prior work in the revised version.
> It might be better to present the Gaussian case and the relation to prior work, before presenting the full result in generality and explaining precisely the generalizations.
Thank you for your suggestion, we will take this into consideration for the revised version.
> I'm not sure it's strictly true that "in most robust estimation problems, the fractions of inliers and outliers are usually provided to the algorithm". Doesn't filtering work even if the number of outliers is unknown, as long as the inlier variance is known? One could keep filtering until the remaining data set has small-enough covariance?
Thank you for your comment, this was indeed an imprecise choice of words.
We were mostly referring to the list-decoding setting, where algorithms are provided with the inlier proportion (see, e.g., [1, 2]). Here, the extension to an unknown inlier proportion seems challenging for the following two reasons:
First, e.g., for the filtering technique, in order to obtain optimal error, one needs to filter with the ‘correct’ polynomial degree, and this degree depends on the fraction of inliers.
Second, obtaining a list of predictions requires a lower bound on the size of inlier sets, in order to decide when to ‘discard’ subsets that are too small.
Overall, we argue that it is not straightforward to extend prior methods to unknown component weights, and this extension is one of our contributions.
> Any intuition for why $w_{\text{low}}^2$ is the right quantity to compare with $w_i$ and $\varepsilon$?
Thank you for the question. First, we would like to note that it is generally not possible to obtain guarantees depending only on $w_i / (w_i + \varepsilon)$, since there are always samples from other components which are effectively ‘outliers’ for a fixed component. For example, consider the Gaussian case with $\varepsilon = 0$: then $w_i / (w_i + \varepsilon) = 1$, but the approximation error is clearly greater than $0 = \sqrt{\log((w_i + \varepsilon) / w_i)}$.
Further, the specific value $w_{\text{low}}$ connects to the separation assumption (see discussion on the necessity of the separation assumption in the general response): if the components are $\Omega\left(\left(1 / w_{\text{low}}\right)^{4/t}\right)$ separated, only an $O(w_{\text{low}}^4)$ fraction of samples from one mixture component may lie in the vicinity of another, so in total an $O(w_{\text{low}}^3) \leq w_{\text{low}}^2$ fraction of points come from other components. We also remark that $w_i$ always dominates $w_{\text{low}}^2$.
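As a schematic count (our sketch, using that the number of components satisfies $k \le 1/w_{\text{low}}$):

```latex
\[
\underbrace{O\!\left(w_{\text{low}}^{4}\right)}_{\substack{\text{stray fraction}\\ \text{per component}}}
\;\times\;
\underbrace{k \le 1/w_{\text{low}}}_{\text{number of components}}
\;=\; O\!\left(w_{\text{low}}^{3}\right) \;\le\; w_{\text{low}}^{2}.
\]
```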
> Definition 2.1 is essentially assuming boundedness compared to the identity covariance Gaussian. For mixtures where the components have, say, covariance bounded by $c$ times identity, does the algorithm need to know $c$, or can $c$ be estimated easily in this context?
We assume that the algorithm knows at least a valid upper bound (and obtain guarantees depending on this upper bound). We leave the question of unknown covariances for future work (the main challenge here is to still obtain a small list size $k + O(\varepsilon / w_{\text{low}})$, while correctly ‘guessing’ the covariance).
> How hard is it to go beyond the corruption model of (2.1), to get adaptive additive corruption?
Thank you for the question, please see our general rebuttal response (first part).
---------
For references [1, 2], please see the main rebuttal response.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. More comments from me:
- Re: $1/w_{low}^{4/t}$. For the $1/\sqrt{w_{low}}$ kind of separation, for bounded covariance mixtures, there is a simple intuition of Chebyshev's inequality to see that it is indeed necessary (and hence tight up to constants). Is there any analogously intuitive explanation for $1/w_{low}^{4/t}$? That's what I was hoping for. The overall response doesn't quite get at that level of intuition.
- On the comparison of $w_{low}^2$ with $w_i$ and $\epsilon$, it seems from the explanation that it is slightly arbitrary (it could've been anything bigger than $w_{low}^3$?). In that case, it might be worth pointing out in the paper, for intuition for the reader.
I'll reply to the overall rebuttal thread for my comments on the comparison with bounded-covariance mixture clustering works.
---
Reply to Comment 1.1.1:
Comment: > Re: $1 / w_{\text{low}}^{4/t}$. For the $1 / \sqrt{w_{\text{low}}}$ kind of separation, for bounded covariance mixtures, there is a simple intuition of Chebyshev's inequality to see that it is indeed necessary (and hence tight up to constants). Is there any analogously intuitive explanation for $1 / w_{\text{low}}^{4/t}$? That's what I was hoping for. The overall response doesn't quite get at that level of intuition.
Thank you for the comment. It is true that when $t=2$ and $\varepsilon \lesssim w_{\text{low}}$ the optimal separation is $O(1 / w_{\text{low}}^{1/t})$ (we note that this was achieved with optimal list size only last year). However, as far as we can tell, this separation is currently known to be achievable only when $t = 2$ and $\varepsilon \lesssim w_{\text{low}}$. In particular,
1. When $t \geq 4$, to the best of our knowledge, the best mixture learning algorithm needs separation $1 / w_{\text{low}}^{2/t}$, even when there are no outliers [e.g., Kothari-Steinhardt-Steurer’18, Theorem 2.7]. This is still better than ours, but larger than $1 / w_{\text{low}}^{1/t}$.
2. When $\varepsilon \gg w_{\text{low}}$, as far as we can tell, nothing is known about the separation required to obtain optimal error and list-size. Our paper is the first in this setting.
Our goal was to obtain an algorithm that works when $\varepsilon \gg w_{\text{low}}$, and we focused on $t$-sub-Gaussian moments to show the generality of our result (importantly, with an application to clustering Gaussians). In this setting it is unclear that separation $O(1 / w_{\text{low}}^{1/t})$ is possible, given the observations in (1) and (2). It is possible that a much tighter analysis of our techniques may allow separation $O(1 / w_{\text{low}}^{2/t})$; we did not consider this kind of optimization essential to our paper, but acknowledge that it would be important future work to gain an even deeper understanding of the problem.
On a high level, the separation is currently used in our proof for the following technical reason:
a separation of $1 / w_{\text{low}}^{1/t}$ suffices to show that the samples from a component are concentrated well when projected on a given direction. Our analysis needs such a property to hold along $1/w_{\text{low}}^2$ directions (between all pairs of components). In particular, in Lemma G.2, when applied for number of directions $m = 1 / w_{\text{low}}^2$, we need the separation $R$ to be at least $1 / w_{\text{low}}^{2/t}$ for a union bound to be applicable. Furthermore, to prove a technical Lemma D.6 we end up needing separation $1 / w_{\text{low}}^{4/t}$.
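Schematically (our sketch, suppressing constants and the dependence on the $t$-th-moment bounds), the union bound over the $m$ pairwise directions behaves like:

```latex
\[
\Pr\Bigl[\,\exists\, j \le m:\ |\langle X - \mu,\, v_j\rangle| \ge R\,\Bigr]
\;\le\; m \cdot O\!\left(R^{-t}\right),
\qquad
m = w_{\text{low}}^{-2}
\;\Longrightarrow\;
R \;\gtrsim\; m^{1/t} = w_{\text{low}}^{-2/t},
\]
```

so that the right-hand side is a small constant only once $R$ is at least of order $1 / w_{\text{low}}^{2/t}$.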
> On the comparison of $w_{\text{low}}^2$ with $w_i$ and $\varepsilon$, it seems from the explanation that it is slightly arbitrary (it could've been anything bigger than $w_{\text{low}}^3$?). In that case, it might be worth pointing out in the paper, for intuition for the reader.
Indeed, the reviewer is correct that, assuming separation $1 / w_{\text{low}}^{4/t}$, anything larger than $C w_{\text{low}}^3$, for some constant $C > 0$ could be used. We will add this clarification to the revised version.
[Kothari-Steinhardt-Steurer’18] Kothari, Pravesh K., Jacob Steinhardt, and David Steurer. "Robust moment estimation and improved clustering via sum of squares." Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing. 2018. | Rebuttal 1:
Rebuttal: We thank all reviewers for their valuable feedback and comments, which help to improve our manuscript. Below, we focus on three main concerns, which were raised by several reviewers.
**Adaptive contamination**.
We prove robustness of our algorithm in the non-adaptive contamination model, where an $\varepsilon$ proportion of the data are i.i.d. samples from an adversarial distribution $Q$. The reviewers asked whether our results can be applied to adaptive (non-i.i.d.) contaminations, or whether this is a limitation of our method.
The proof of Theorem 3.3 only uses the concentration of inlier samples, and thus generalizes to the case of adaptive contaminations. However, recall that the guarantees of our meta-algorithm depend on the base learner guarantees. Therefore, as long as both base learners (RME and LD-ME) have guarantees under adaptive contaminations, so does our meta-algorithm.
For RME, state-of-the-art methods are indeed robust against adaptive adversaries. However, to the best of our knowledge, results for LD-ME (see, e.g., [1, 2]) are generally only stated for the non-adaptive contamination model. This limits our result to only i.i.d. contaminations.
**Optimality of separation requirements.**
In our paper, we present two sets of results: for non-separated (in the Appendix, Corollary C.4) and separated mixtures (Theorem 3.3, Corollary 3.4). In the former case, we note that for the smallest component, our algorithm has optimal guarantees. In particular, it is impossible to achieve asymptotically smaller error (even when $\varepsilon = 0$) unless the algorithm outputs an exponentially larger list size (see lines 238 - 243).
For separated components, we discuss the Gaussian distribution and distributions with sub-Gaussian $t$-th central moments separately.
For the case of Gaussian components, our separation requirements are $\Omega\left(\sqrt{\log 1 / w_{\text{low}}}\right)$, matching the information-theoretical lower bound for the separation (see [1]).
For general $t$, we require separation $\Omega\left(\left(1 / w_{\text{low}}\right)^{4 / t}\right)$ and the reviewers are correct in pointing out that there exist recent prior works on clustering of mixture models which require only separation $\Omega\left(\left(1 / w_{\text{low}}\right)^{1 / t}\right)$ (see, e.g., [3]). However, e.g., in [3], this comes at the expense of a larger list size $O(1 / w_{\text{low}})$ (see Theorem 3.1 in [3]).
Next, we provide intuition why, for a desired short list size $k + O(\varepsilon / w_{\text{low}})$, obtaining results under separation $\Omega\left(\left(1 / w_{\text{low}}\right)^{1 / t}\right)$ is challenging:
Note that at this separation, only a constant fraction of the initial mass of the cluster stays ‘close’ to the cluster mean; the other constant fraction will be farther from the mean than $O\left(\left(1 / w_{\text{low}}\right)^{1 / t}\right)$. To prove our target list size bound $k + O\left(\varepsilon / w_{\text{low}}\right)$, we require that such ‘left-over’ samples are not misinterpreted by the algorithm as separate clusters (see lines 674 - 697, the proof of Lemma B.7). Otherwise, we cannot guarantee a list size better than $O\left(1 / w_{\text{low}}\right)$, which can be much larger than $k + O\left(\varepsilon / w_{\text{low}}\right)$.
Overall, the reason for our separation requirements is that
1. We consider the setting with a large adversarial part, which was not present in the literature before,
2. We guarantee optimal list size $k + O\left(\varepsilon / w_{\text{low}}\right)$.
We will add a detailed discussion on the separation assumption in the revised version and leave the very interesting question of relaxing the separation assumption for future work.
**Time complexity.**
Several reviewers asked for a detailed exposition of the computational complexity of our meta-algorithm. We would like to highlight that the main purpose of our work is to show the existence of a (quasi-)polynomial-time meta-algorithm with the proven performance guarantees. Our meta-algorithm uses RME and LD-ME base learners, thus its runtime depends on their time complexity, and we did not optimize for a particularly fast or iteration-efficient implementation of our algorithm.
For the rebuttal we provide the following upper bounds on the time complexity (ignoring the RME base learner, which is generally faster than the LD-ME learner):
$$\text{Inner stage:} \quad \tilde O\left(\left(\frac{1}{w_{\text{low}}}\right) T(n, w_{\text{low}}) + \left(\frac{1}{w_{\text{low}}}\right)^3 n\right),$$ where $T(n, w_{\text{low}})$ is the time for LD-ME base learner to run on a dataset with $n$ samples and $w_{\text{low}}$ fraction of inliers.
$$\text{Outer stage:} \quad O\left( T(n, w_{\text{low}}) + \left(\frac{1}{w_{\text{low}}}\right)^2 n\right).$$
$$\text{Full algorithm:} \quad \tilde O\left(\left(\frac{1}{w_{\text{low}}}\right) T(n, w_{\text{low}}) + \left(\frac{1}{w_{\text{low}}}\right)^4 n\right).$$
We leave the question of more efficient implementations for future work.
[1] Diakonikolas, Ilias, et al. "List-decodable robust mean estimation and learning mixtures of spherical gaussians." Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing. 2018.
[2] Cherapanamjeri, Yeshwanth, et al. "List decodable mean estimation in nearly linear time." 2020 IEEE 61st Annual Symposium on Foundations of Computer Science (FOCS). IEEE, 2020.
[3] Diakonikolas, Ilias, et al. "Clustering Mixtures of Bounded Covariance Distributions Under Optimal Separation." arXiv preprint arXiv:2312.11769 (2023). | NeurIPS_2024_submissions_huggingface | 2024 | Summary: The paper studies the problem of estimating the means of a mixture model in the presence of outliers. More specifically, each component has unknown weight $w_i$ that is lower bounded by a known quantity $w_{\mathrm{low}}$, each component is assumed to be (sub)-Gaussian, and the mixture also includes an adversarially selected distribution (outliers) with weight $\epsilon$, which can in general be larger than $w_{\mathrm{low}}$, meaning that the outliers can form entire clusters on their own. Correctly estimating the means of all true components is impossible and thus the goal here is to instead output a small list of candidate means. Since the $w_i$’s are unknown, naive applications of existing list-decoding algorithms would result in estimation errors and list size guarantees that are a function of $w_{\mathrm{low}}$ only. The contribution of this paper is to show that it is indeed possible to obtain more fine-grained guarantees where the error for the $i$-th component scales with $w_i$ instead of $w_{\mathrm{low}}$ and the list size is equal to the true number of components plus a small overhead $O(\epsilon/w_{\mathrm{low}})$. Regarding the list size, $\epsilon/w_{\mathrm{low}}$ is the number of extra components that the outliers can form, thus this term is unavoidable in the list size. The paper provides information-theoretic error lower bounds justifying that the precise error rates achieved are qualitatively tight. The regimes where the final algorithm has qualitatively best-possible performance include the cases where the means are pairwise separated as well as without that assumption, and also the case where $\epsilon \ll w_{\mathrm{low}}$ for separated mixtures (where it matches existing qualitatively optimal algorithms).
The algorithm (for the separated components case) is based on the following parts shown in the paper: First, there is a way to obtain list-decodable mean estimation algorithms that do not need knowledge of the fraction of inliers. This works by collecting the answers of a list-decodable mean estimator for multiple candidate values for the fraction of inliers and carefully pruning the results to keep only hypotheses with a large number of points close to them. Second, the paper proposes a procedure that splits the dataset into (not too many) parts where each part includes at most one (almost entire) cluster of inliers, so that calling the agnostic list-decodable mean estimator from the previous step can produce the improved error guarantees. Finally, for the case where $\epsilon \ll w_{\mathrm{low}}$, existing robust mean estimation algorithms can be employed to further improve the error.
Overall, the paper closes a gap in the literature. The result is non-trivial and a useful addition to the literature, thus I am recommending acceptance.
Strengths: * A positive point of the algorithm is that it is a meta-algorithm, in the sense that only performs calls to existing algorithms (for list-decoding and robust mean estimation), and performs computationally simple processing of the dataset based on the outputs of these algorithms. The algorithm is thus efficient and takes advantage of computational efficiency of the base learners.
* The paper contains some experiments to demonstrate practical performance advantages.
Weaknesses: * The contamination model of eq (2.1) requires that corruptions come i.i.d. from some distribution. Since the base learners can also handle some adversarial corruptions why can’t the final algorithm work for fully adversarial contamination?
* On presentation: The result about non-separated mixtures is not discussed in the main body. It would be good to at least provide the main ideas regarding why similar error can be achieved for the non-separated case, since the algorithm discussed in the main body seems to use crucially the separation of components. Another slightly confusing point is that the introduction talks about Gaussianity a lot while Theorem 3.3 works even for bounded second moment distributions. Existing black box learners work for these settings too but for some reason the existence of black box learners is emphasized only for the Gaussian case (line 193) in Section 3.1.
Technical Quality: 3
Clarity: 3
Questions for Authors: * The algorithm that gets errors all the way down to $O(\sqrt{\log(1/w_i)})$ has super-polynomial complexity. Does the existing SQ lower bound from list-decodable mean estimation in [3] justify this via some easy reduction? If so, it would be good to include.
* Is the optimal separation for the bounded moments case indeed $O(1/w_{\mathrm{low}}^{4/t})$? I just want to make sure that there is no typographical error in the exponent, since for bounded second moments usually things scale with square root of $w_\mathrm{low}$.
* Also see first point in "weaknesses".
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: -
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their appreciation of our work and for the feedback which will help to improve it. Below, we address specific comments and questions.
> The contamination model of eq (2.1) requires that corruptions come i.i.d. from some distribution. Since the base learners can also handle some adversarial corruptions why can’t the final algorithm work for fully adversarial contamination?
Thank you for the question, please see our general rebuttal response (first part).
> On presentation: The result about non-separated mixtures is not discussed in the main body. It would be good to at least provide the main ideas regarding why similar error can be achieved for the non-separated case, since the algorithm discussed in the main body seems to use crucially the separation of components.
Thank you for the suggestion, we will mention the result on non-separated mixtures in the main text in the revised version.
> Another slightly confusing point is that the introduction talks about Gaussianity a lot while Theorem 3.3 works even for bounded second moment distributions. Existing black box learners work for these settings too but for some reason the existence of black box learners is emphasized only for the Gaussian case (line 193) in Section 3.1.
Thank you for the suggestion, we will extend the discussion on the results for the bounded moment distributions in the revised version.
> The algorithm that gets errors all the way down to $O(\sqrt{\log(1/w_i)})$ has super-polynomial complexity. Does the existing SQ lower bound from list-decodable mean estimation in [3] justify this via some easy reduction? If so, it would be good to include.
Yes, there is a simple reduction of SQ lower bounds (same as IT lower bounds), we mentioned it in the Appendix (lines 1001-1004). We will move the paragraph to the main text in the revised version.
> Is the optimal separation for the bounded moments case indeed $O\left(\left(\frac{1}{w_{\text{low}}}\right)^{4/t}\right)$? I just want to make sure that there is no typographical error in the exponent, since for bounded second moments usually things scale with square root of $w_{\text{low}}$.
Thank you for raising this point. Yes, there is no typographical error (please see our general rebuttal response, second part).
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications. I am keeping my positive score. | null | null | null | null | null | null |
Inversion-based Latent Bayesian Optimization | Accept (poster) | Summary: This paper proposes Inversion-based Latent Bayesian Optimization (InvBO), a plug-and-play module for latent Bayesian optimization (LBO) methods. The key components of InvBO are: 1) An inversion method to find latent codes z that exactly reconstruct input samples x, addressing misalignment between the latent space and input space. 2) A potential-aware trust region anchor selection method that considers both observed function values and the potential of the trust region to improve optimization. Empirically, InvBO boosts the performance of several existing LBO methods on nine tasks spanning molecule design and arithmetic expression fitting. Theoretically, the authors prove InvBO reduces an upper bound on the surrogate model's prediction error within trust regions.
Strengths: The inversion method is a novel and principled way to address the important problem of misalignment between the input and latent spaces in LBO. The method finds decoder triplets (x, z, y) without additional function evaluations.
Potential-aware trust region anchor selection considers both observed values and the acquisition function when selecting local search regions. This expands on prior methods that only use observed values.
The theoretical result in Proposition 1 provides justification for the inversion method. It shows that minimizing the reconstruction error d(x, p(z)) with inversion reduces an upper bound on the surrogate model's error.
Extensive experiments demonstrate the effectiveness of InvBO. It improves over several strong LBO baselines on molecule design (Guacamol, DRD3) and symbolic regression tasks. Ablations confirm both INV and PAS contribute to performance.
The writing is generally clear and easy to follow. Figures, tables and algorithms complement the main text well. The Related Works section provides helpful context.
Weaknesses: The paper could include more discussion on the limitations and practical considerations of LBO in general. For example, what are the trade-offs compared to standard BO? How much data is needed to train an effective VAE? Some discussion is given but more would help ground the work. What are the real applications of LBO?
Proposition 1 relies on several key assumptions, like Lipschitz continuity of f and small reconstruction error d(x, p(z)). While these enable a clean result, it would help to discuss how realistic the assumptions are and what can go wrong if they are violated.
To validate INV and PAS, it is also important to implement them on LBOs such as TuRBO-L and LOL-BO.
The potential-aware anchor selection mainly seems motivated by intuition. A theoretical grounding, even if loose, could strengthen this contribution. For example, can we say anything about the quality of the local optima or the regret?
The improvement on DRD3 and the arithmetic expression tasks (Figure 5a) appears smaller than on Guacamol. It would be good to comment on what makes certain tasks more or less challenging for the method.
Technical Quality: 2
Clarity: 3
Questions for Authors: Can you comment more on the limitations of LBO and InvBO for practical applications? For example, what size dataset is needed to train an effective VAE model? What happens if the dataset is small?
Proposition 1 assumes the inversion mapping exists and has a small error. Can you discuss what happens if these do not hold, for example, if the inverse mapping is ill-posed?
Can you provide any theoretical insight into why considering the acquisition function in anchor selection (PAS) helps? For example, can it be related to the quality of local optima or regret bounds?
Why is the improvement on DRD3 and the symbolic regression task smaller than on Guacamol? What properties of these tasks make them challenging?
Regarding validating INV and PAS, can you discuss implementing them on LBO, TuRBO-L, and LOL-BO?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Limitations of needing a large dataset to train the VAE
Limitations of the theoretical assumptions, and what happens if they are violated in practice
Including this discussion, even if speculative, would help contextualize the contributions for practitioners. The paper already acknowledges several mathematical assumptions. Expanding this to real-world considerations would be a positive addition.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: - **[W1, Q1, L1] Discussion on the limitations and practical considerations of LBO.**
Good suggestion. While standard BO struggles with discrete data, LBO addresses this by mapping discrete data to a continuous space. To bridge the gap between the discrete and continuous spaces, LBO utilizes a Variational AutoEncoder (VAE), the quality of which depends on the amount of unlabeled data available. Thus, LBO requires a large amount of high-quality unlabeled data to train an effective VAE.
- **[W2, Q2, L2] Realism and violated situation of assumptions in Proposition 1.**
Thank you for your valuable question. The Lipschitz continuity assumption for the objective function $f$ is common in Bayesian optimization [1-5] and global optimization [6]. Furthermore, we have shown in Figure 12 that the distance $d_{\cal X}(\mathbf x, p_\theta(\mathbf z))$ can be driven to zero through our inversion method. These findings imply that our assumptions in Proposition 1 are highly realistic. Notably, the non-zero values of this distance are the main motivation of InvBO. If the assumptions are violated, the GP prediction error increases, resulting in suboptimal optimization performance, as we have already shown in Figure 8.
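For intuition, the inversion step (optimizing $\mathbf z$ to drive the reconstruction error $d_{\cal X}(\mathbf x, p_\theta(\mathbf z))$ toward zero for a fixed decoder) can be sketched in one dimension; the decoder and all parameters below are hypothetical toy choices for illustration, not our implementation:

```python
def invert(x, decode, z0=0.0, lr=0.1, steps=200, eps=1e-5):
    """Toy 1-D inversion: gradient descent on z to minimize the squared
    reconstruction error (decode(z) - x)**2, via a finite-difference gradient."""
    z = z0
    for _ in range(steps):
        err = lambda zz: (decode(zz) - x) ** 2
        grad = (err(z + eps) - err(z - eps)) / (2 * eps)
        z -= lr * grad
    return z
```

For a toy linear decoder $p(z) = 2z + 1$ and target $x = 5$, the loop converges to $z = 2$, at which point the reconstruction error is zero.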
- Reference
[1] González, Javier, et al. "Batch Bayesian optimization via local penalization." *Artificial intelligence and statistics*. PMLR, 2016.
[2] Scarlett, Jonathan. "Tight regret bounds for Bayesian optimization in one dimension." *International Conference on Machine Learning*. PMLR, 2018.
[3] Hoang, Trong Nghia, et al. "Decentralized high-dimensional Bayesian optimization with factor graphs." *Proceedings of the AAAI Conference on Artificial Intelligence*. Vol. 32. No. 1. 2018.
[4] Kim, Jungtaek, and Seungjin Choi. "On local optimizers of acquisition functions in bayesian optimization." *Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2020, Ghent, Belgium, September 14–18, 2020, Proceedings, Part II*. Springer International Publishing, 2021.
[5] Lee, Seunghun, et al. "Advancing Bayesian optimization via learning correlated latent space." *Advances in Neural Information Processing Systems* 37 (2023).
[6] Christodoulos A. Floudas and Panos M. Pardalos, editors. Encyclopedia of Optimization, Second Edition. Springer, 2009.
- **[W3, Q5] Validation of INV and PAS on various LBO methods.**
We have already provided the experimental results of applying InvBO to the mentioned LBOs, TuRBO-L, and LOL-BO in Table 2 of Section D. Table 2 shows that applying our InvBO to previous trust-region-based LBOs (TuRBO-L, LOL-BO, CoBO) and non-trust-region-based LBOs (LBO, W-LBO, PG-LBO) consistently improves the optimization performance in 8 tasks.
- **[W4, Q3, L3] Theoretical grounding for PAS.**
Thank you for the good suggestion. We share a deep interest in a theoretical grounding for PAS and recognize its significance. We are actively conducting research toward a comprehensive theoretical analysis of PAS and anticipate sharing these findings in future work.
- **[W5, Q4] Analysis of performance improvement on DRD3 and Arithmetic expression tasks.**
In the DRD3 and arithmetic expression tasks, the dissimilarity between $\mathbf{x}$ and $p_\theta(\mathbf{z})$ in previous LBOs without InvBO is already relatively small, leaving less room for performance improvement from inversion than in other tasks. We conducted additional experiments measuring the normalized Levenshtein distance between $\mathbf x$ and $p_\theta(\mathbf z)$ with and without inversion on the DRD3 and arithmetic expression tasks in Figure 6 of the provided PDF. Compared to Figure 12 in Section C, which shows a mean dissimilarity of 0.3, the mean dissimilarity of these two tasks is below 0.1. | Summary:
The authors propose two empirical improvements to VAE-based Bayesian optimization methodology. First, the authors propose a correction for the eponymous misalignment problem which they characterize in the paper, the idea being that the latent z corresponding to an encoded x may not correspond to the x' decoded from the same latent z. The correction proposed by the authors is based on inversion. The authors demonstrate that inversion systematically improves a suite of VAE-based Bayesian optimization methods and, furthermore, include diagnostic experiments that demonstrate how the Gaussian process surrogate model fit is improved following inversion. Secondly, the authors propose an improvement to trust region-based optimization in the VAE latent space based on using "potential-aware" scoring. Given that the method is broadly applicable and the experimental validation of the approach is rigorous, I recommend acceptance with the following points for the authors to consider. I am ready to increase my score if these points are addressed.
Strengths:
1. The principal strength of this work is the generality of the approach. Concretely, the authors highlight a systematic problem present in all VAE-based Bayesian optimization architectures and solve it with their inversion approach.
2. The diagnostic experiments on the GP fit provided by the authors provide excellent evidence as to why the inversion approach improves performance, shedding light on the underlying mechanism of the inversion approach.
Weaknesses:
I highlight below some points of concern with the paper. Of particular note is the justification for the potential-aware method. Regarding the Sample Efficiency Matters benchmark, I believe evaluating on it would greatly increase the impact of the paper by showcasing the potential of VAE-based BO methods against other molecule generation approaches in the ML literature. If the authors can address these points I will increase my score.
__MAJOR POINTS__
1. In Proposition 1, how important is the assumption that the distance function d_\mathcal{X} is bounded on [0, 1]?
2. The Levenshtein distance may not be the most appropriate distance for objects such as molecules which are defined using the SMILES and/or SELFIES syntax. Other distance metrics such as Tanimoto similarity [8, 9] would be more chemically meaningful.
3. I think it would be a great idea for the authors to assess their method on the sample efficiency matters benchmark [13]. While this is not a weakness of the current work per se, I list this here as seeing the performance of this method on the benchmark in comparison to other molecule generation approaches would be very interesting to see and would highly encourage me to increase my score.
4. It would be great if the authors could include the T-LBO method from [14] as a baseline as it would be interesting to understand the effect of metric learning on performance. In particular, metric learning is hypothesized to smoothen the latent function and hence make it easier to fit the GP on the latent points. It would be very interesting to understand the interplay between inversion and metric learning. In other words, does inversion directly solve the problem that metric learning is trying to address? For the arithmetic task, the results from this paper should be directly comparable if the experiment was run under the same settings.
5. For the experiments reported in Section C of the appendix, it would be great if the authors could report the Tanimoto similarity between the molecules that correspond to the SELFIES strings as this would be a much more chemically meaningful measure of similarity relative to Levenshtein distance.
6. For the "potential-aware" method of anchor selection, the authors use an acquisition function value that is scaled. Is the scaling due to the effect of using multiple local GPs to model each trust region? Do the authors have a motivation for using the sum of the objective function value of the anchor together with the scaled maximum of the Thompson sample as the scoring criterion? Presumably the \alpha term accounts for the local region and the objective function accounts for the known quality of the anchor. Why should this objective be substantially different from taking the maximum of a Thompson sample directly, i.e. just keeping the \alpha term?
7. Could the authors provide clear instructions for reproducing the experimental results in the supplied code by means of a README? Currently it is not clear how to reproduce the results.
__MINOR POINTS__
1. There are some missing capitalizations in the references section e.g. "Bayesian" and "Gaussian".
2. When mentioning variational autoencoders it would be worth citing the original paper [1].
3. On line 68, it may be worth mentioning that the goal of LBO is to learn a latent space to enable optimization over a continuous space from a discrete or structured input space where "structured" refers to objects such as graphs and images.
4. In the related work section it would be worth mentioning the following works on VAE-based Bayesian optimization [2-7].
5. In Proposition 1, f, m, and the composition function of f and p_\theta are assumed to be L_1, L_2, and L_3 Lipschitz continuous respectively. It may be beneficial to clarify that these functions are not 1-Lipschitz, 2-Lipschitz continuous etc. but rather the Lipschitz constants can be arbitrary.
6. The PyTorch, BoTorch, and GPyTorch papers [10-12] should be cited given that the packages are used.
7. In Section K of the appendix the citation for LS-BO is not given (presumably Tripp et al., NeurIPS 2020).
8. In Figures 5 and 6 the x-axis label, "Number of Oracle" is somewhat confusing. Perhaps "queries" would be more appropriate? Additionally, it would be great if the authors could give the number of random trials for which the error bars are reported?
9. Although it is mentioned in the main text, it might be worth adding the number of trials and the fact that the uncertainty bars are standard errors in the captions for Figures 4, 5, and 6.
10. For Table 1, it would be great if the caption appeared above the table rather than below it.
11. The diagnostic experiment on the misalignment problem in Section H of the appendix is interesting. It would be great to report a quantitative R^2 value in addition to the plots.
12. Line 323, "This indicates that both the uncertainty of the surrogate model and objective function value need to be considered for exploration and exploitation". The same can be said about the acquisition function itself?
13. In Algorithm 2 of Section M of the appendix it would be great to explicitly provide the definition of the Calculate subroutine.
14. In Table 3 of Section I of the Appendix it would be worth stating that the rows are fixed wall clock time and fixed oracle calls respectively.
__REFERENCES__
[1] Kingma and Welling, Auto-Encoding Variational Bayes, ICLR, 2014.
[2] Stanton, S., Maddox, W., Gruver, N., Maffettone, P., Delaney, E., Greenside, P. and Wilson, A.G., 2022, June. Accelerating Bayesian optimization for biological sequence design with denoising autoencoders. In International Conference on Machine Learning (pp. 20459-20478). PMLR.
[3] Notin, P., Hernández-Lobato, J.M. and Gal, Y., 2021. Improving black-box optimization in VAE latent space using decoder uncertainty. Advances in Neural Information Processing Systems, 34, pp.802-814.
[4] Lu, X., Gonzalez, J., Dai, Z. and Lawrence, N.D., 2018, Structured variationally auto-encoded optimization. In International conference on machine learning (pp. 3267-3275). PMLR.
[5] Siivola, E., Paleyes, A., González, J., & Vehtari, A. (2021). Good practices for Bayesian optimization of high dimensional structured spaces. Applied AI Letters, 2(2), e24.
[6] Maus, N., Wu, K., Eriksson, D. and Gardner, J., 2023, Discovering Many Diverse Solutions with Bayesian Optimization. In International Conference on Artificial Intelligence and Statistics (pp. 1779-1798). PMLR.
[7] Verma, E., Chakraborty, S. and Griffiths, R.R., 2022. High-Dimensional Bayesian optimization with invariance. In ICML Workshop on Adaptive Experimental Design and Active Learning.
[8] Tanimoto TT (17 Nov 1958). "An Elementary Mathematical theory of Classification and Prediction". Internal IBM Technical Report. 1957 (8?).
[9] Bajusz, D., Rácz, A. and Héberger, K., 2015. Why is Tanimoto index an appropriate choice for fingerprint-based similarity calculations?. Journal of cheminformatics, 7, pp.1-13.
[10] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L. and Desmaison, A., 2019. PyTorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems, 32.
[11] Balandat, M., Karrer, B., Jiang, D., Daulton, S., Letham, B., Wilson, A.G. and Bakshy, E., 2020. BoTorch: A framework for efficient Monte-Carlo Bayesian optimization. Advances in neural information processing systems, 33, pp.21524-21538.
[12] Gardner, J., Pleiss, G., Weinberger, K.Q., Bindel, D. and Wilson, A.G., 2018. Gpytorch: Blackbox matrix-matrix Gaussian process inference with GPU acceleration. Advances in neural information processing systems, 31.
[13] Gao, W., Fu, T., Sun, J. and Coley, C., 2022. Sample efficiency matters: a benchmark for practical molecular optimization. Advances in Neural Information Processing Systems, 35, pp.21342-21357.
[14] Grosnit, A., Tutunov, R., Maraval, A.M., Griffiths, R.R., Cowen-Rivers, A.I., Yang, L., Zhu, L., Lyu, W., Chen, Z., Wang, J. and Peters, J., 2021. High-dimensional Bayesian optimisation with variational autoencoders and deep metric learning. arXiv preprint arXiv:2106.03609.
Technical Quality: 4
Clarity: 4
Questions for Authors:
1. In Section L, the authors state that they pretrained the SELFIES VAE with 1.27M molecules from the Guacamol benchmark. Do the authors assess whether the generated molecules are contained within the pre-training dataset?
2. What do the authors think the interplay between metric learning and inversion is?
3. On line 248 what is the particular type of approximation that the authors use for their sparse GP implementation? I note that this is also not mentioned in Section L of the appendix. From the code it would appear to be the Sparse Variational Gaussian Process (SVGP) model.
4. Why are Figures 7 and 16 different?
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 4
Limitations:
1. The use of the Levenshtein distance for diagnosing the effect of inversion on the misalignment problem may not be the best criteria for the molecule experiments. Tanimoto similarity would be more indicative of chemically meaningful differences since it is a distance metric between molecules directly and not their string representation.
2. The aforementioned lack of justification for the potential-aware scoring criterion for the trust regions is an additional limitation for the work in its current form.
3. It would be great if instructions could be included in the codebase to reproduce the results.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: - **[W1] Importance of $d_{\cal X}$ bounding assumption in Proposition 1.**
For analytical convenience, we assume the distance function is bounded. This is similar to image normalization, where pixel values are scaled to a specific range (often 0 to 1) to facilitate more effective processing and analysis. We note that this scaling does not affect the generality or applicability of our findings, since the underlying principles remain unchanged regardless of the bound.
- **[W2, L1] Appropriateness of Levenshtein distance for molecules.**
Thank you for the good suggestion. Since we evaluate optimization problems in diverse domains, such as molecule and arithmetic expression tasks, we use Levenshtein distance, which is more general than Tanimoto similarity. Our InvBO can utilize any distance function, including Tanimoto similarity. We will include it in the final version.
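As a concrete reference for the metric discussed above, a minimal sketch of the (normalized) Levenshtein distance follows; normalizing by the longer string's length is the convention that keeps the distance in [0, 1], which is an assumption consistent with the reported values:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of insertions, deletions, and substitutions
    turning string a into string b (standard dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,               # deletion
                            curr[j - 1] + 1,           # insertion
                            prev[j - 1] + (ca != cb))) # substitution
        prev = curr
    return prev[-1]

def normalized_levenshtein(a: str, b: str) -> float:
    """Scale to [0, 1] so the distance is comparable across lengths."""
    if not a and not b:
        return 0.0
    return levenshtein(a, b) / max(len(a), len(b))
```

This string-level metric applies uniformly to SELFIES strings and arithmetic expressions, which is why it serves both task families.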
- **[W3] Assessment of InvBO on the Sample Efficiency Matters Benchmark.**
Thank you for the suggestion. We conducted additional experiments on the Sample Efficiency Matters benchmark, applying InvBO to the SELFIES VAE provided by the benchmark on 7 tasks. The experimental results are in the table below:
| | REINVENT SMILES | Graph GA Fragment | SELFIES VAE - InvBO | SELFIES VAE - LBO |
| --- | --- | --- | --- | --- |
| amlodipine_mpo | 0.635 $\pm$ 0.035 | 0.661 $\pm$ 0.020 | 0.594 $\pm$ 0.011 | 0.516 $\pm$ 0.005 |
| median2 | 0.276 $\pm$ 0.008 | 0.273 $\pm$ 0.009 | 0.236 $\pm$ 0.018 | 0.185 $\pm$ 0.001 |
| osimertinib_mpo | 0.837 $\pm$ 0.009 | 0.831 $\pm$ 0.005 | 0.823 $\pm$ 0.007 | 0.765 $\pm$ 0.002 |
| perindopril_mpo | 0.537 $\pm$ 0.016 | 0.538 $\pm$ 0.009 | 0.538 $\pm$ 0.004 | 0.429 $\pm$ 0.003 |
| ranolazine_mpo | 0.760 $\pm$ 0.009 | 0.728 $\pm$ 0.012 | 0.762 $\pm$ 0.011 | 0.452 $\pm$ 0.025 |
| valsartan_smarts | 0.179 $\pm$ 0.358 | 0.000 $\pm$ 0.000 | 0.003 $\pm$ 0.005 | 0.002 $\pm$ 0.003 |
| zaleplon_mpo | 0.358 $\pm$ 0.062 | 0.346 $\pm$ 0.032 | 0.384 $\pm$ 0.008 | 0.206 $\pm$ 0.015 |
| SUM | 3.582 | 3.377 | 3.340 | 2.555 |
| Rank | 1 | 2 | 3 | 23 |
We re-rank the methods provided in the Sample Efficiency Matters benchmark on these 7 tasks by AUC Top-10 from 5 independent runs. In the table, applying InvBO to the SELFIES VAE achieves rank 3 across the 7 tasks, while vanilla LBO with the SELFIES VAE ranks 23rd. These results demonstrate that LBO with InvBO is highly competitive with non-BO baselines.
- **[W4, Q2] Additional comparison to T-LBO and analysis of the interplay between Inversion and metric learning.**
We appreciate the valuable suggestion. We provide the optimization results of T-LBO with and without inversion on the arithmetic expression task in Figure 5 of the provided PDF. The metric learning proposed in T-LBO adjusts the latent space to be smooth, making it easier to fit the GP. Meanwhile, inversion constructs an aligned dataset without additional oracle calls, enabling the GP to correctly emulate the objective function. Although these are orthogonal approaches, metric learning helps reduce the $L_3$ constant in Proposition 1, and we expect it to produce a synergistic effect with InvBO. We will include this in the final version.
- **[W5] Tanimoto similarity between $\mathbf x$ and $p_\theta(\mathbf z)$.**
Thank you for the suggestion. We additionally conducted experiments measuring the Tanimoto similarity between $\mathbf{x}$ and $p_\theta(\mathbf{z})$ in Figure 4 of the provided PDF. Figure 4 shows the Tanimoto similarity comparison with and without inversion on the med2 and valt tasks. We used the RDKit library to measure Tanimoto similarity. From the figure, inversion achieves a Tanimoto similarity of 1.0 at every iteration, while CoBO without inversion achieves a similarity of about 0.8 in both tasks.
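For clarity, Tanimoto similarity reduces to a Jaccard ratio over fingerprint bit sets; a minimal sketch (in practice the bit sets would come from e.g. RDKit Morgan fingerprints, which this sketch does not reproduce):

```python
def tanimoto(fp_a: set, fp_b: set) -> float:
    """Tanimoto (Jaccard) similarity between two fingerprint bit sets:
    |A intersect B| / |A union B|; identical fingerprints give exactly 1.0."""
    if not fp_a and not fp_b:
        return 1.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)
```

A similarity of 1.0 after inversion therefore means the decoded molecule's fingerprint matches the original exactly, i.e. the same molecule is recovered.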
- **[W6, L2] Motivation for $\alpha$ normalization in PAS.**
We normalize the $\alpha$ value to ensure that it has an influence comparable to the anchor point's objective score $y$. Without this scaling, regions with high uncertainty could be overly influenced by the $\alpha$ value relative to the objective score $y$.
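Schematically, the scoring described above can be sketched as follows; the min-max rescaling of $\alpha$ to the span of $y$ is one plausible choice for illustration, not necessarily the exact scaling used in the paper:

```python
import numpy as np

def pas_select(anchors_y, thompson_max):
    """Schematic potential-aware anchor selection: score each candidate
    anchor by its observed objective value y plus an alpha term (the
    maximum of a Thompson sample drawn inside its trust region),
    rescaled so that neither term dominates."""
    y = np.asarray(anchors_y, dtype=float)
    alpha = np.asarray(thompson_max, dtype=float)
    span, a_span = y.max() - y.min(), alpha.max() - alpha.min()
    # min-max rescale alpha to the span of y (zero if alpha is constant)
    alpha_scaled = (alpha - alpha.min()) / a_span * span if a_span > 0 else np.zeros_like(alpha)
    return int(np.argmax(y + alpha_scaled))

# three candidate anchors: observed value vs. Thompson-sample maximum
best = pas_select(anchors_y=[0.2, 0.5, 0.4], thompson_max=[0.9, 0.1, 0.8])
# a best-y-only rule would pick index 1; here the trade-off between
# observed value and the region's potential selects index 2
```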
- **[W7, L3] Providing clear reproduction instructions.**
We will publish our code, including clear reproduction instructions in the camera-ready version if the paper gets accepted.
- **[Q1] Assessment of generated molecules against the pre-training dataset.**
| | med2 | valt |
| --- | --- | --- |
| Ratio of novel data (%) | 100 $\pm$ 0.00 | 100 $\pm$ 0.00 |
We additionally conducted experiments to assess whether the generated data are contained in the pre-training GuacaMol benchmark dataset. The table above provides the ratio of generated data not contained in the pre-training dataset on the med2 and valt tasks across five runs. It demonstrates that none of the data generated during the optimization process is contained in the pre-training dataset.
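The novelty check above amounts to a set-membership ratio; a minimal sketch, assuming comparison is done on canonical molecule strings (the example strings are hypothetical, purely for illustration):

```python
def novelty_ratio(generated, pretraining):
    """Percentage of generated molecules not contained in the
    pre-training dataset (comparison on canonical strings assumed)."""
    seen = set(pretraining)
    novel = [m for m in generated if m not in seen]
    return 100.0 * len(novel) / len(generated)

# hypothetical SMILES strings, purely for illustration
ratio = novelty_ratio(generated=["CCO", "CCN", "c1ccccc1O"],
                      pretraining={"CCO", "CCC"})  # "CCO" is not novel
```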
- **[Q3] Sparse GP approximation method used in InvBO.**
As shown in our codebase, we use the Sparse Variational Gaussian Process (SVGP) for the sparse Gaussian process implementation. We will add this detail in our camera-ready version if the paper gets accepted.
- **[Q4] Discrepancies between Figures 7 and 16.**
Good observation. Figures 7 and 16 are different because they are conducted on different splits. Figure 7 illustrates the Gaussian process fitting results on the test set, while Figure 16 shows the results on the training set.
- **[Minor points in Weakness]**
We appreciate your detailed review. We will update the final version considering these minor points.
---
Rebuttal Comment 1.1:
Title: Prepared to Champion the Paper for Acceptance Following the Rebuttal
Comment:
Many thanks to the authors for their rebuttal. I believe the rebuttal has greatly strengthened the paper on two fronts:
1. The results on the sample efficiency matters benchmark are very impressive. Not only does the inversion scheme systematically improve the performance of the SELFIES VAE-BO approach, but the authors achieve a new SOTA on the zaleplon_mpo problem. This is notable because the SOTA is achieved in competition with many general black-box optimization techniques.
2. The Tanimoto similarity results provide further evidence of the efficacy of the inversion mechanism. Specifically, a Tanimoto similarity of 1 shows that the inversion method is able to recover the same molecule. In contrast, without the inversion mechanism it is not possible to recover the same molecule. This feature of the inversion mechanism is very important from a scientific standpoint, whereby chemists may be interested in guarantees on recovery of the same molecule.
Given the points above, I am prepared to champion this paper for acceptance. The inversion method highlights an important pathology across all VAE-BO architectures, demonstrates systematic empirical improvement by addressing the pathology, demonstrates the mechanism of the pathology and proposed solution, and lastly, produces a new SOTA on a challenging benchmark featuring many general black-box optimizers.
---
Reply to Comment 1.1.1:
Comment: We appreciate your thorough review. Considering the feedback, we will update the final version if our paper gets accepted. | Summary: The authors propose Inversion-based Latent Bayesian Optimization (InvBO), a novel approach to improve latent space Bayesian optimization (LBO) by introducing two components. First, to fix the misalignment problem that typically plagues LBO methods that rely on encoder-decoder models, InvBO introduces an inversion method that can be used to recover the latent code that decodes to a given data point. This allows the misalignment problem to be largely circumvented without the need for additional black-box function evaluations. Second, InvBO proposes a new strategy for the selection of the center of the trust region for trust-region based LBO. In existing LBO methods that use trust regions, the trust region centers are usually chosen to be the latent data points associated with the best objective value observed thus far. InvBO proposes a new method for trust region center selection that encourages selection of trust region centers that give the trust region the highest potential to improve local optimization performance. The authors refer to this method as “potential-aware trust region anchor selection”. By combining these two components (the inversion method and the potential-aware trust region anchor selection method), the authors show that InvBO can be applied to substantially outperform current state-of-the-art LBO methods across nine high-dimensional, discrete optimization tasks. These include some of the most difficult tasks from the GuacaMol benchmark suite of molecule optimization tasks.
Strengths: Originality: InvBO proposes two novel ideas that greatly improve the performance of LBO. The inversion method is a very straightforward and effective means of dealing with the misalignment problem that avoids using extra function evaluations. The potential-aware trust region anchor selection method represents a novel means of selecting trust region centers that improves upon the fairly ad-hoc strategy people have been using of just centering the trust region on the best data point observed so far.
Quality: The paper is well-written and concise. Additionally, the figures and tables are all of good quality - they are both easy to parse and do a nice job of displaying results. Figure 2 does a nice job of illustrating the author’s inversion method.
Clarity: The paper is clear and easy to follow from start to finish. The figures and tables are clear and easy to read. The way the authors motivated, defined, and applied InvBO is clear.
Significance: LBO has emerged as one of the most promising ways to optimize black-box functions defined over discrete, high-dimensional spaces. These discrete, high-dimensional optimization problems are of particular interest to the community because relevant real-world problems, particularly in biological design, are defined over discrete high-dimensional spaces (i.e. the discrete search space of molecules or proteins). The results in this paper show a very substantial performance improvement over state-of-the-art methods in LBO, including on some of the most difficult molecular design benchmark optimization tasks in the popular GuacaMol benchmark suite. This paper therefore represents a significant improvement in our ability to optimize discrete high-dimensional black-box functions and will be of interest to the community.
Weaknesses: Figure 1 colors should be changed to be more friendly to red-green color blind folks.
It would be interesting to see an additional comparison to the Genetic Expert-Guided Learning (GEGL) method for the molecular design tasks in the results section. GEGL is a reinforcement learning (RL) method that obtains state-of-the-art performance across tasks in the GuacaMol benchmark suite (see GuacaMol results in Table 2 of their paper here https://arxiv.org/pdf/2007.04897). While I do think that this additional comparison to GEGL would strengthen the paper, I do not think that it is strictly necessary for this paper to be accepted, because RL is an orthogonal method and the results currently in the paper compare to all relevant LBO baselines.
Technical Quality: 4
Clarity: 4
Questions for Authors: I am interested in your ideas for future work. Do you have plans for how you might build upon this work and continue to improve methods for LBO?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: - **[W1] Modifying the colors of Figure 1 for red-green color blind folks.**
Thank you for the suggestion. We will modify the colors of Figure 1 for red-green color-blind folks in our camera-ready version if the paper gets accepted.
- **[W2] Additional comparison to GEGL.**
Thank you for the suggestion. We provide the experimental results of GEGL optimization on the adip and osmb tasks in Figure 3 of the provided PDF. We include only a subset of baselines in Figure 3 to enhance the clarity of the additional experimental results. While GEGL demonstrates superior optimization performance compared to Graph GA, CoBO with InvBO still achieves higher optimization performance.
- **[Q1] Future work of InvBO.**
LBO has become a promising approach for optimizing structured data such as molecules or proteins. However, multi-objective Bayesian optimization over latent spaces has not been fully explored [1, 2]. Here we propose InvBO for single-objective Bayesian optimization over latent space, but we believe the misalignment problem also occurs in the multi-objective setting. We will explore adapting InvBO to multi-objective Bayesian optimization over latent spaces.
- Reference
1. Stanton, Samuel, et al. "Accelerating bayesian optimization for biological sequence design with denoising autoencoders." *International Conference on Machine Learning*. PMLR, 2022.
2. Gruver, Nate, et al. "Protein design with guided discrete diffusion." *Advances in neural information processing systems* 37 (2023).
---
Rebuttal Comment 1.1:
Title: Rebuttal Acknowledgement
Comment: I would like to thank authors for their response and addressing the points I raised in my review. I am happy to keep my assessment of their work the same.
---
Reply to Comment 1.1.1:
Comment: Thank you for the constructive review. We will include your valuable feedback in the final version. | Summary: This paper identifies and addresses an overlooked issue in several latent space Bayesian optimization methods and proposes a new trust region anchor selection method (PAS) that incorporates the "potential" of a trust region to improve optimization. Specifically, the paper proposes an inversion method to correct the misalignment between the encoding that generated a sample and the resultant encoding of the generated sample. Aligning these representations allows for a better estimate by the surrogate and improves optimization. Further, the trust region anchor selection incorporates both the observed objective value at a given point and the potential for improvement within that trust region (evaluated through Thompson sampling), which similarly improves the optimization.
Strengths: - They identify and address the misalignment problem using an inversion method. Logically, this is a more direct and sample efficient way of doing it compared to previous methods which perform oracle evaluations to score unevaluted points. Additionally, this allows for the GP to produce a better fit of the objective which improves optimization.
- The inversion method is plug-and-play with LOLBO and CoBO, and could possibly be used by other LSBO methods that perform fine-tuning of the VAE.
- The potential aware trust region selection is more flexible than TuRBOs and allows the optimization to revisit previous regions in the optimization trajectory. Their ablations show the effectiveness of this.
- The methods are intuitive and straightforward to implement.
- InvBO performs particularly well over baselines in the low-budget regime.
Weaknesses: - Methodologically the contribution is a bit weak due to the fact that the proposals here are largely extensions of LOLBO and CoBO.
- The Lipschitz assumption for the VAE decoder and objective function doesn't seem well motivated. This isn't a significant issue with respect to the method, but I do question the relevance of Proposition 1.
Technical Quality: 4
Clarity: 3
Questions for Authors: PAS is not specific to LSBO and can be employed wherever TuRBO-derived methods are used. Was this tried on one of the standard BO benchmarks (Robot, Lunar, etc)?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The noted limitation of InvBO being sensitive to the quality of the generative model is fair and to be expected.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: - **[W1] Methodological contribution of InvBO beyond LOLBO and CoBO.**
Our InvBO can be applied beyond trust region-based LBO methods (e.g., LOLBO and CoBO). We show that each component of InvBO, namely inversion and potential-aware trust-region anchor selection (PAS), can be combined with diverse BO approaches, as specified below:
- **Inversion.**
Our inversion can be applied to any LBO method, including trust region-based ones (e.g., LOLBO and CoBO). In Table 2 of Section D, we have already shown that the inversion has successfully been adopted by diverse LBO works with and without trust regions.
- **PAS.**
Our PAS can be extended to any trust region-based standard BO method (e.g., TuRBO). To validate this, we conducted additional experiments applying PAS to TuRBO on the standard BO tasks Rover and Lunar, and report the optimization results in Figure 2 of the provided PDF. These experimental results demonstrate that InvBO can be applied not only to trust region-based LBO but also to standard TuRBO.
- **[W2] Appropriateness of Lipschitz assumptions in Proposition 1.**
Thank you for your valuable question. In Proposition 1, we assume that the black-box function $f$ and the composite function $f\circ p_\theta$ of the black-box function and the VAE decoder are Lipschitz continuous. Assuming the black-box or objective function is Lipschitz continuous is common in Bayesian optimization [1-5] and global optimization [6].
- Reference
[1] González, Javier, et al. "Batch Bayesian optimization via local penalization." *Artificial intelligence and statistics*. PMLR, 2016.
[2] Scarlett, Jonathan. "Tight regret bounds for Bayesian optimization in one dimension." *International Conference on Machine Learning*. PMLR, 2018.
[3] Hoang, Trong Nghia, et al. "Decentralized high-dimensional Bayesian optimization with factor graphs." *Proceedings of the AAAI Conference on Artificial Intelligence*. Vol. 32. No. 1. 2018.
[4] Kim, Jungtaek, and Seungjin Choi. "On local optimizers of acquisition functions in bayesian optimization." *Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2020, Ghent, Belgium, September 14–18, 2020, Proceedings, Part II*. Springer International Publishing, 2021.
[5] Lee, Seunghun, et al. "Advancing Bayesian optimization via learning correlated latent space." *Advances in Neural Information Processing Systems* 37 (2023).
[6] Christodoulos A. Floudas and Panos M. Pardalos, editors. Encyclopedia of Optimization, Second Edition. Springer, 2009.
- **[Q1] Applying PAS to TuRBO on standard BO benchmarks.**
Thank you for the suggestion. We provide the optimization performance of TuRBO and TuRBO with PAS on two standard BO benchmarks, Rover and Lunar. Our implementation is based on the TuRBO codebase provided in the BoTorch tutorial and uses the same hyperparameters (e.g., batch size and number of initial data points) as TuRBO. The experimental results are reported in Figure 2 of the provided PDF. The figure shows that applying PAS to TuRBO consistently improves optimization performance. These results demonstrate that PAS is also effective on standard BO benchmark tasks.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response and the application of PAS to Rover and Lunar. With those results and the rebuttal, I think my critique of methodological simplicity is not an appropriate weakness.
With respect to W2, I still have doubts that we can safely make the Lipschitz assumption for an arbitrary black-box objective that we would be interested in optimizing. Nevertheless, CoBO attempts to optimize the Lipschitz continuity of the decoder-objective composition during end-to-end retraining, and whenever the conditions of Eqn. 5 are met within a trust region, Prop. 1 will hold. It's likely that Prop. 1 is more reasonable than I first believed.
Given the results of the PAS experiment on Rover and Lunar further thought on W2 I'll update my score accordingly.
---
Rebuttal 2:
Comment: Thank you for your thoughtful consideration and revisiting your initial concerns. We will incorporate your valuable feedback in our final version. | Rebuttal 1:
Rebuttal: Thank you to the reviewers for the thorough feedback on our paper. Based on the reviews, we have organized the key strengths of our paper that reviewers identified:
### **1. Convincing motivation.**
Most of the reviewers (8vpX, wVbQ, nFyz, tYNa) provided positive feedback on our motivation. We address the misalignment problem that has been overlooked by prior LBO works, despite its presence in all VAE-based Bayesian optimization architectures.
### **2. Superior optimization performance.**
To validate the effectiveness of InvBO, we measured the optimization performance on nine different tasks. Reviewers (8vpX, wVbQ) responded positively to the superior optimization performance of InvBO, especially in the low-budget setting.
### **3. Novelty.**
Reviewers highlighted the novelty as a strength of InvBO. InvBO consists of two components: the inversion method and the PAS method. Reviewers (wVbQ, tYNa) found that the inversion method provides a novel and principled way to address the misalignment problem, and reviewer wVbQ noted that the PAS method improves trust-region anchor selection, which most previous works approached with ad-hoc strategies.
### **4. Generality.**
Reviewers (8vpX, nFyz) appreciated the generality of InvBO. As mentioned in the paper, InvBO is a plug-and-play algorithm compatible with previous LBOs. Furthermore, we provided experimental results showing that PAS can be applied to TuRBO on standard BO tasks (e.g., non-LBO tasks) in Figure 2 of the provided PDF.
### **5. Thorough analysis.**
Reviewers gave positive feedback on our thorough analysis of InvBO. Reviewers (8vpX, tYNa) noted that the ablation studies of InvBO demonstrated the effectiveness of each component. Additionally, the experiments on GP fitting provided strong evidence of the effectiveness of the inversion method (8vpX, nFyz).
---
We appreciate all the reviewers for their thoughtful feedback. We will address all issues raised by the reviewers below.
Pdf: /pdf/052e74caba0a54c165a1a604490f2d19f0ffd553.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: Latent Bayesian optimization has been tackled in this work. In order to solve an optimization problem on a continuous latent space, it utilizes auto-encoder-based neural networks. In particular, this work attempts to solve a misalignment problem in the latent Bayesian optimization. Some experimental results are demonstrated to validate the method proposed in this work.
Strengths: - Latent Bayesian optimization, which is solved in this paper, is a compelling topic in the Bayesian optimization field.
Weaknesses: - Motivation of this work is weak.
- Thorough analysis on the misalignment problem is not provided.
- Experiments are domain-specific.
Technical Quality: 1
Clarity: 2
Questions for Authors: - Does the misalignment problem certainly degrade the performance of Bayesian optimization? Is there any particular evidence?
- I think that the proposed method lets each decision focus on exploitation. What do you think about this issue?
- Equation (4) doesn't seem to be inversion. It just finds the nearest output of the decoder.
- How did you choose the dimensionality of the latent space?
- The proposed method seems to require theoretical analysis.
- Considering the nature of Bayesian optimization, which is to solve black-box optimization, the benchmarks used in this work are too domain-specific to show the algorithm's performance. Under the assumption of optimizing a black-box function, the proposed method actively utilizes the information of objective functions. Do you expect that your algorithm works well in more general optimization problems?
- I cannot find the details of the neural architectures used. How did you design such networks?
- Could you elaborate on the description of Figure 3?
Confidence: 4
Soundness: 1
Presentation: 2
Contribution: 2
Limitations: There are no specific societal limitations of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: - **[W1] Motivation of InvBO.**
Other reviewers provided positive comments regarding the motivation of our paper as follows:
> Reviewer 8vpX: This paper identifies and addresses an overlooked issue in several latent space Bayesian optimization methods.
>
> Reviewer wVbQ: The way the authors motivated, defined, and applied InvBO is clear.
>
> Reviewer nFyz: The authors highlight a systematic problem present in all VAE-based Bayesian optimization architectures.
>
> Reviewer tYNa: The inversion method is a novel and principled way to address the important problem of misalignment between the input and latent spaces in LBO.
>
Here we clarify the motivation behind our proposed InvBO, which consists of Inversion and PAS.
- **Motivation of Inversion.**
LBO suffers from the misalignment problem caused by the reconstruction error of the VAE. Previous works handle this problem with a recentering technique; however, Figure 3 shows that this requires additional oracle calls. This motivated us to design an inversion method, a solution to the misalignment problem that does not require any additional oracle calls.
- **Motivation of PAS.**
Most prior trust region-based approaches select the anchor as the current optimal point without considering the potential of the latent vectors within the trust region to improve optimization performance. This prompted us to design a novel anchor selection method that considers the potential of the latent vectors within the trust region.
- **[W2, Q1] Evidence and analysis of misalignment problem.**
We have already presented evidence that the misalignment problem certainly degrades optimization performance. Figure 12 in Section C shows the discrepancy between $\mathbf{x}$ and $p_\theta(\mathbf{z})$. As shown in Figure 7 from the main paper, this discrepancy leads to the misalignment problem. Furthermore, Figure 8 in the main paper shows that optimization performance is significantly lower with a misalignment problem (yellow line) compared to when the problem is addressed by inversion (blue line).
- **[W3, Q6] Diversity of experimental domains and general optimization ability of InvBO.**
We have already measured the performance of our InvBO on diverse domains such as molecule domains (e.g., Guacamol and DRD3 tasks) and an arithmetic expression fitting task. Figures 4 and 5 in the main paper demonstrate the general optimization ability of our InvBO across various domains and settings.
- **[Q2] Exploration capability of InvBO.**
InvBO does not make exploitation-centric decisions. In Figure 1 of the provided PDF, we conduct additional experiments measuring the number of unique data searched in each iteration of CoBO with and without InvBO. Figure 1 demonstrates that InvBO does not lose exploration capability compared to CoBO. On the other hand, Figure 4 in the main paper shows that applying InvBO enhances the exploitation ability. These results indicate that InvBO makes decisions while balancing exploitation and exploration.
- **[Q3] Clarification on the appropriateness of the term 'Inversion'.**
‘Inversion’ is broadly used to refer to the reverse process of generation and has been widely applied to generative models such as GANs and diffusion models [1-5]. As mentioned in Section 2.2 of the paper, ‘Inversion’ is the process of finding a latent code that generates the original data $\mathbf{x}$ through a generator $G$. Without loss of generality, we also use the term ‘Inversion’ in the same sense with the decoder $p_\theta$, and formally define it in Equation (4).
- Reference
1. Xia, Weihao, et al. "Gan inversion: A survey." TPAMI, 2022.
2. Zhu, Jiapeng, et al. "In-domain gan inversion for real image editing." *ECCV,* 2020.
3. Wang, Tengfei, et al. "High-fidelity gan inversion for image attribute editing." CVPR, 2022.
4. Xu, Yiran, et al. "In-N-Out: Faithful 3D GAN Inversion with Volumetric Decomposition for Face Editing." *CVPR,* 2024.
5. Gal, Rinon, et al. "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion." *ICLR,* 2023
- **[Q4, Q7] Details of the dimensionality of latent space and the Variational Autoencoder (VAE) used in experiments.**
For a fair comparison, we use the same dimensionality of latent space and VAE following baseline works. As we mentioned in the main paper, we use SELFIES VAE [1] for the de novo molecule design tasks (e.g., Guacamol and DRD3) and Grammar VAE [2] for the arithmetic expression fitting task.
- Paper
1. Maus, Natalie, et al. "Local latent space Bayesian optimization over structured inputs." NeurIPS, 2022.
2. Kusner, Matt J., et al. "Grammar variational autoencoder." *ICML*, 2017.
- **[Q5] Theoretical analysis of InvBO.**
In Proposition 1, we theoretically show that inversion plays a crucial role in minimizing the upper bound of the GP prediction error within the trust region. Figure 9 in the main paper shows the experimental results of GP prediction error within the trust region (left) and the corresponding optimization results (right). These results demonstrate that the inversion method minimizes the GP prediction error and results in the improvement of the optimization process.
- **[Q8] Detailed description of Figure 3.**
The left figure illustrates the number of oracle calls made by the acquisition function and recentering. The right figure displays the number of best-score updates achieved by the acquisition function and recentering. From both figures, the acquisition function updates the best score 5 times within approximately 150 oracle calls, whereas recentering fails to update the best score with about 350 oracle calls. These results demonstrate that recentering wastes a large number of oracle calls.
---
Rebuttal 2:
Comment: Thank you for your response.
> [W2, Q1] Evidence and analysis of misalignment problem
In Figure 12, how does it achieve zero dissimilarity? It seems strange. Is there any test data leakage?
Figures 7 and 8 show that the trained model of the proposed method only works for a single specific domain. Please see the concern below.
> [W3, Q6] Diversity of experimental domains and general optimization ability of InvBO
It is the most serious concern. I think that the authors misunderstood my concern. The proposed algorithm is trained on each domain, which implies that the configuration used in this work cannot be used for unseen domains. Bayesian optimization is black-box optimization, so that an optimization method can solve any problem with a small amount of inductive bias. The authors tackled known domains accessing a true function.
If the proposed method can be applied in unseen tasks without re-training and with the same configuration, I can say that the proposed method is not domain-specific.
> [Q3] Clarification on the appropriateness of the term 'Inversion'
I don't think your answer resolves my concern. First off, "inversion" in GAN inversion is not matched to "inversion" in this work. While GAN models an implicit distribution, the proposed method is defined on an explicit representation. Moreover, Equation (4) does not align with Figure 2(b).
> [Q4, Q7] Details of the dimensionality of latent space and the Variational Autoencoder (VAE) used in experiments
Did you try to adjust the dimensionality of the latent space? How does it impact performance?
Most of my concerns haven't been resolved. I believe that the current manuscript is not ready to be published in NeurIPS.
---
Rebuttal Comment 2.1:
Comment: Thank you for the thorough review.
- **[Q1 in Comment] Clarification of zero dissimilarity in Figure 12 and test data leakage.**
We do not rely on traditional train or test data splits during the inversion process. Inversion is fundamentally a search algorithm designed to find an **“optimal”** latent code $\mathbf z_{\text{inv}}$ that reconstructs the target data $\mathbf x$, rather than training the model and evaluating on test data. In Figure 12, we measure the dissimilarity between $\mathbf x$ and $p_\theta(\mathbf z)$, denoted as $d_{\cal X}(\mathbf x, p_\theta(\mathbf z))$, with and without our inversion method for all observed data. Our inversion method is designed to find a latent code $\mathbf z_{\text{inv}}$ that reconstructs the target data $\mathbf x$ by minimizing $d_{\cal X}(\mathbf x, p_\theta(\mathbf z))$. The zero dissimilarity of the blue line in Figure 12 demonstrates that the inversion method consistently finds an **“optimal”** latent code $\mathbf z_{\text{inv}}$ that satisfies $d_{\cal X}(\mathbf x, p_\theta(\mathbf z_{\text{inv}}))=0$.
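For concreteness, a minimal sketch of this kind of inversion search (this is not the authors' implementation: a hypothetical toy linear map stands in for the VAE decoder $p_\theta$, while InvBO optimizes a latent code against the real decoder):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 3))            # toy "decoder": data dim 8, latent dim 3
decode = lambda z: W @ z               # stand-in for p_theta(z)

def invert(x, steps=5000, lr=0.01):
    """Search for z_inv minimizing d_X(x, decode(z)) = ||x - decode(z)||^2."""
    z = np.zeros(3)
    for _ in range(steps):
        # Gradient descent on the latent code (analytic gradient of the loss).
        z += lr * 2.0 * W.T @ (x - decode(z))
    return z

x = decode(rng.normal(size=3))         # a target the decoder can reconstruct exactly
z_inv = invert(x)
dissimilarity = np.linalg.norm(x - decode(z_inv))
```

Here the dissimilarity is driven to (numerically) zero because the target lies in the decoder's range, and no train/test split is involved, consistent with the description above.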
- **[Q2 in Comment] Applicability of InvBO in unseen domains.**
While a pre-trained VAE cannot be directly applied to an unseen domain, this limitation pertains to most LBOs rather than our proposed InvBO. InvBO, on the other hand, is designed as a plug-and-play module applicable to any VAE-based LBO. Notably, LBO methods employ a fully unsupervised approach for pre-training VAEs, wherein the objective function is not utilized during the pre-training process. In LBO, we assume the availability of sufficient unlabeled data to pre-train a VAE for any given domain.
- **[Q3 in Comment] Clarification on the appropriateness of the term 'Inversion'.**
- **Comparing with GAN inversion.**
Both the inversion process in GANs and the one in our InvBO aim to find a latent code $\mathbf z_{\text{inv}}$ that reconstructs the target data $\mathbf x$. Whether a generative model defines the data distribution explicitly or implicitly is irrelevant to the appropriateness of the term ‘Inversion’ in InvBO. In both cases, the fundamental goal remains the same: to find the latent code that generates the target data.
- **Consistency between Equation (4) and Figure 2(b).**
Other reviewers provided positive responses regarding the Figure as follows:
> Reviewer wVbQ: Figure 2 does a nice job of illustrating the author’s inversion method.
>
> Reviewer tYNa: Figures, tables and algorithms complement the main text well.
>
Figure 2(b) shows that we can find $\mathbf z_{\text{inv}}$ that generates the target evaluated data $\mathbf x$ via the inversion method, i.e., $d_{\cal X}(\mathbf x, p_\theta(\mathbf z_{\text{inv}})) = 0$, illustrating Equation (4).
- **[Q4 in Comment] Dimensionality of latent space.**
We did not adjust the dimensionality of the latent space; we follow the dimensionality used in prior work [1].
- Reference
[1] Lee, Seunghun, et al. "Advancing Bayesian optimization via learning correlated latent space." *Advances in Neural Information Processing Systems* 37 (2023). | null | null | null | null | null | null |
DiffAug: A Diffuse-and-Denoise Augmentation for Training Robust Classifiers | Accept (poster) | Summary: The paper proposes DiffAug, a new method for training image classifiers that are more robust. DiffAug is based on diffusion models and effective at improving classifier robustness in several ways, such as resistance to variations in the data. It also can improve the performance of classifier-guided diffusion models. Furthermore, DiffAug is computationally efficient and can be combined with other augmentation techniques.
Strengths: 1. The writing and the presentation of the work is good and easy to follow.
2. The finding that training with degraded images from the diffusion process does not harm the classifier’s performance, and can even improve it, is interesting.
3. The experiments are comprehensive and diverse, showing the effectiveness of the proposed method.
Weaknesses: While I currently find the paper favorable and have no major weaknesses to point out, I do have some questions that the authors' responses could help clarify (see the part below). I am open to raising the score based on their explanations.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The authors state on Line 106 that they are unaware of prior studies training classifiers with denoised examples. However, Line 20 mentions previous work using "synthetic images generated using Imagen [38]". Does reference [38] not qualify as a "previous study on training classifiers with denoised examples"? Perhaps the authors meant there's no prior work on training with diffused-and-denoised data?
2. The authors mention "include Eq. 5 as an additional optimization objective" (Line 128-129). Could they elaborate on why it's considered "additional"? How does Eq. 5 differ from the standard classification loss typically used during classifier training?
3. In Section 4, the experiments explore "unconditional improved-DDPM." Have the authors investigated using DiffAug with "conditional" diffusion models? Can such text guidance improve DiffAug's performance, or would the "conditional" process limit augmentation diversity and hinder performance?
4. In Table 1, which model architecture is used when evaluating “AM, DA, and DAM”? The first column doesn’t seem to present enough information on that.
5. Table 1 results (columns 3, 4 & 8, 9) consistently show DE underperforming compared to DDA(SE). What explains this difference? Could it be due to DDA's "self-ensemble" strategy (although DE also uses ensembles)? Does this suggest that using even multiple-step-denoised examples can enhance classifier robustness?
6. While the paper focuses on improving classifier’s robustness based on diffusion models, there's a complementary area of research: diffusion-based attacks (DiffAttack [1], Diff-PGD [2]). How effective is DiffAug against these attacks? Given Diff-PGD's constraints with perturbation norms, the results in the paper on certified adversarial accuracy might generalize. However, DiffAttack is unrestricted. Can DiffAug still be applied in such cases? The authors should supplement these discussions in the paper for better evidence of DiffAug’s robustness.
7. About the related work, the idea of leveraging a diffusion model for augmentation and incorporating denoised examples to improve classifier robustness was also mentioned in the discussion section of DiffAttack [1]. Citing this related work would be appropriate.
8. Both DE and DDS employ a single reverse diffusion step during testing. What are the key distinctions between these methods? There seem no comparisons between them or descriptions about their differences.
[1] Chen, Jianqi, et al. "Diffusion models for imperceptible and transferable adversarial attack." arXiv preprint arXiv:2305.08192 (2023).
[2] Xue, Haotian, et al. "Diffusion-Based Adversarial Sample Generation for Improved Stealthiness and Controllability." arXiv preprint arXiv:2305.16494 (2023).
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes. Limitations and social impacts have been involved in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your insightful reviews and affirmative evaluation of our work. We thank you for your appreciation of our presentation and method’s simplicity/computational efficiency. We agree with you that it is indeed interesting to learn that classifiers can be improved without sacrificing test accuracy through the use of degraded images from the diffusion process! We answer your questions in the following:
**[Q1]** Thank you for this question. The reviewer is correct that we meant no prior work on training with diffused-and-denoised data, and in particular by “denoised” we meant “partially denoised”; we apologize for this confusion, and we will make sure to clarify this. In [38], the generated synthetic images are of high-quality since they use iterative denoising: in fact, to ensure better quality, they first fine-tune the Imagen model on ImageNet data and then use FID to select the model checkpoints and various hyperparameters of the sampling algorithm.
***
**[Q2]** “Additional optimization objective” refers to an extra loss term. When extending previous methods with DiffAug, we modify their official code by simply adding $\mathcal{L}$ in Eq. 5 to the original loss terms. For example, AugMix involves two loss terms: a cross-entropy loss term and a Jensen-Shannon divergence loss term. When extending AugMix with DiffAug, we introduce Eq. 5 as the third loss term.
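A rough sketch of this three-term composition (a hypothetical toy with random logits; the `total_loss` helper, its arguments, and the toy softmax/CE are illustrative stand-ins, not the official AugMix or DiffAug code):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(logits, y):
    return -np.log(softmax(logits)[y])

def total_loss(logits_clean, logits_aug1, logits_aug2, logits_diffaug, y):
    """Three loss terms when extending AugMix with DiffAug."""
    ce = cross_entropy(logits_clean, y)                        # term 1: standard CE
    # Term 2: Jensen-Shannon consistency over clean and two AugMix views.
    ps = [softmax(l) for l in (logits_clean, logits_aug1, logits_aug2)]
    m = np.mean(ps, axis=0)
    jsd = np.mean([np.sum(p * np.log(p / m)) for p in ps])
    diffaug = cross_entropy(logits_diffaug, y)                 # term 3: Eq. 5 on the
    return ce + jsd + diffaug                                  # diffused-and-denoised copy

logits = [rng.normal(size=10) for _ in range(4)]
loss = total_loss(*logits, y=3)
```

All three terms are nonnegative, so the combined objective remains a well-behaved training loss.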
***
**[Q3]** Similar to many standard augmentation techniques, we do not rely upon labels for DiffAug broadening its applicability for different tasks and scenarios. For example, DiffAug can be applied at both train-time and test-time (when class labels are unknown). Furthermore, it enables direct use of unlabeled data or a mix of labelled and unlabeled data to train the diffusion model. In theory, it should be possible to achieve performance enhancements with DiffAug by using conditional diffusion models, especially with innovative choices of prompt. However, this introduces additional hyperparameters such as the guidance-strength and choice of prompt. Additionally, it requires two forward passes through the diffusion model (one unconditional and one conditional) per training step. The augmentation diversity may certainly be affected especially if the guidance-strength is very high: in such cases, it is also possible that the classifier could cheat by exploiting imperceptible image statistics (e.g., due to adaptive layer-norms).
***
**[Q4]** We use the official pretrained checkpoints for these models and all of them are trained using a ResNet-50 backbone. We will clarify this.
***
**[Q5]** As compared to DE, DDA is a more complex technique that utilises multiple-diffusion steps to transform a test-sample from an unknown distribution into the source distribution. We find that the multiple-step denoising can be beneficial when dealing with severe corruptions in ImageNet-C. For example, we find negligible difference between DDA and DE over uncorrupted ImageNet test examples when considering the DiffAug-trained models (76.60 & 76.67 respectively). Yes, an optimal multiple-step denoising approach that effectively transforms the image into the source distribution may further enhance classifier robustness on ImageNet-C. For other distribution shifts such as ImageNet-R and ImageNet-S, we observe that DE offers better test-time adaptation on average as compared to DDA.
***
**[Q6,7]** We thank you for sharing these works on novel methods for diffusion-based adversarial example generation. We agree with your analysis that DiffAug’s certified adversarial accuracy results may generalise to Diff-PGD. From our understanding of DiffAttack, it produces transferable untargeted adversarial attacks at a high success rate and hence, we will need to conduct an empirical analysis to study DiffAug’s effectiveness against these examples. While we are interested in understanding the performance of DiffAug and DiffAug-Ensemble against DiffAttack, we were unable to perform this experiment during the rebuttal period due to time and resource constraints. We will include a discussion on these methods and aim to provide a preliminary empirical analysis in the final submission for completeness.
In the following, we describe additional results on ImageNet-D, another stable-diffusion-generated dataset designed to evaluate robustness (similar to DiffAttack). From Table 2 (global PDF), we find that extra synthetic data (from stable-diffusion, as described in the global response) offers the most improvements on ImageNet-D, similar to the suggestion in the DiffAttack paper. Interestingly, DiffAug training and DiffAug-Ensemble (DE) inference can offer further improvements: for example, RN50+Synth achieves 17.52% accuracy on ImageNet-D, while RN50+Synth+DiffAug achieves 19.18% accuracy, which can be further improved to 21.41%. Since these results are encouraging, we are curious to evaluate the performance of DiffAug against DiffAttack.
***
**[Q8]** This is a good question! DE utilises a set of different diffusion time-steps ($\mathcal{S}$ in Eq. 6) while DDS uses a single diffusion time-step. Crucially, DDS is based on the randomized smoothing theory in [1], whereas DE is inspired by test-time augmentation techniques [2]. Additionally, DE uses average voting for prediction whereas DDS uses majority voting. Further, the number of samples generated for each input is typically higher in DDS (e.g., 100 samples for ImageNet) as compared to DE (9 new samples when using $\mathcal{S}=\{0, 50, 100, \dots, 450\}$). In Figure 11, we compare the performance between DE and DiffAug: since DiffAug is equivalent to DDS using 1 sample, we expect DDS results to be similar to Fig. 11b.
[1] Certified Adversarial Robustness via Randomized Smoothing
[2] Understanding test-time augmentation.
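As a rough sketch of the DE inference described above (toy stand-ins throughout: `diffuse_and_denoise` and `classifier` are hypothetical placeholders for one reverse-diffusion step of the DDPM and a trained classifier, respectively):

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_CLASSES = 10

def diffuse_and_denoise(x, t):
    # Stand-in: would diffuse x to time t and take one reverse-diffusion step.
    return x + 0.01 * np.sqrt(t) * rng.normal(size=x.shape)

def classifier(x):
    # Stand-in: would return class logits from a trained network.
    return x[:NUM_CLASSES]

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def de_predict(x, S=(0, 50, 100, 150, 200, 250, 300, 350, 400, 450)):
    """DiffAug-Ensemble inference: average voting over one augmentation per t in S.
    t = 0 leaves the input unchanged, so S yields 9 new samples plus the original."""
    probs = [softmax(classifier(diffuse_and_denoise(x, t))) for t in S]
    return np.mean(probs, axis=0)

x = rng.normal(size=32)
p = de_predict(x)
pred = int(np.argmax(p))               # final class via average voting
```

DDS would instead fix a single diffusion time-step, draw many samples (e.g., 100), and take a majority vote over their predicted classes.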
***
We hope this resolves all concerns and look forward to resolving any outstanding concerns during the discussion period.
---
Rebuttal Comment 1.1:
Title: Thanks for the response.
Comment: After carefully considering the authors' response, I have no further questions. I am pleased to recommend the acceptance of this paper, and I have raised my rating to 7.
In the revised manuscript, I would be delighted to see not only the experiments on robustness against diffusion-based models as mentioned in Q6, but also those on the effect of DiffAug using conditional diffusion models, as discussed in Q3. Including these experiments would offer readers a more comprehensive understanding of the method's scope of application. This seems to be a point of curiosity for many, as also highlighted by Reviewer uEtP, and would make a valuable addition to the paper.
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: We thank you again for your insightful reviews and positive evaluation of our submission. We agree with your recommendation that DiffAug experiments with conditional diffusion models would be valuable. Based on both your and Reviewer uEtP’s interest in conditional diffusion models, and based on Reviewer 3vHc’s interest in Diffusion-Transformer models, we will include in the final version additional DiffAug experiments with Diffusion-Transformer (DiT). Since DiTs are trained for classifier-free guidance (i.e., both class-conditioning as well as null-conditioning), we will include additional settings with class-conditioning using DiTs in the final version. | Summary: This paper applies a diffusion-based data augmentation method to enhance the robustness of classifier. First, a gaussian perturbation is applied to train examples and then a single diffusion denoising step is applied to generate the augmentations. Besides, DiffAug can also be combined with other augmentations to further improve robustness. Empirically, DiffAug can achieve improvements in classifier generalization, gradient quality and image generation performance.
Strengths: 1. DiffAug can be combined with other augmentations to improve robustness. Besides, DiffAug can also be used to improve several other aspects of performance, such as classifier generalization, gradient quality, and image generation.
2. DiffAug is simple, computationally efficient, and easy to follow as a way to improve robustness.
Weaknesses: 1. Absence of an ablation study of sampling methods and sampling processes. Would the use of better sampling methods and more steps achieve better results?
2. Absence of an ablation study of the diffusion model. Can other diffusion models such as DiT also be used to apply DiffAug?
Technical Quality: 2
Clarity: 2
Questions for Authors: As shown in Weakness 1, why adopt a single diffusion denoising step when current sampling methods, such as DPM-Solver, can generate good results within 10 steps? In theory, better sample generation should achieve better results.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your detailed reviews. We thank you for appreciating the strengths of our method’s simplicity and computational efficiency and its ability to be combined with other augmentations to further improve robustness. In the following responses, we aim to address the weaknesses and answer your questions:
> **_[Q1] Absence of ablation study of sampling methods and sampling processes. Did the use of better sampling methods and more steps achieve better results? Why adopt a single diffusion denoising step when the current sampling method, such as DPM-Solver, can generate good results within 10 steps? In theory, better sample generation can achieve better results._**
This is a good question! To answer it, we performed additional experiments during the rebuttal week using high-quality synthetic data from stable-diffusion, as described in our global response (see [E1]). Specifically, we use 1.3M synthetic ImageNet images in addition to the original ImageNet training data to finetune the torchvision ResNet-50 model for 10 epochs. We call this RN50+Synth. To understand the utility of DiffAug in this case, we finetune another instance of the torchvision ResNet-50 model for 10 epochs using DiffAug as well as the additional synthetic data. We call this RN50+Synth+DiffAug.
From our experimental evaluation across several datasets, we find that DiffAug training and DiffAug-Ensemble (DE) inference offer complementary benefits to extra synthetic training data. We agree with you that, theoretically, DiffAug augmentations generated using a DDPM model trained on ImageNet should not provide further improvements when high-quality synthetic data from SD — trained on LAION-5B, which also covers ImageNet — is already available. Yet, we surprisingly observe that DiffAug improves over and beyond additional high-quality synthetic data! To explain this, we first note that DiffAug is qualitatively different from previous diffusion-based augmentation techniques. Depending on the diffusion time used to generate the DiffAug augmentation, the resulting image can vary greatly in quality (as shown in Fig. 1). As a result, classifying some of these augmented images is more challenging than classifying the original examples, producing a regularizing effect that leads to empirical robustness improvements. Lastly, yet importantly, the complexity introduced by DiffAug training does not sacrifice test accuracy despite training on poor-quality examples. This is an unusual and noteworthy property that can be understood by interpreting denoised examples to lie on the image manifold, following recent theoretical studies [e.g., refs. 9 & 34 in the main paper].
In summary, DiffAug can enhance robustness even when efficient sampling techniques are available to synthesize high-quality images since its performance improvement can be attributed to the regularizing effect from learning to classify partially synthesized train examples (i.e., diffused-and-denoised examples). We will include these results in the final version of the paper and supplementary materials.
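The diffuse-and-denoise operation itself can be sketched with the standard DDPM closed forms; this is a hedged illustration in which the zero-valued `eps_model` is a stand-in for the trained noise predictor $\epsilon_\theta$, and the schedule is the common linear one:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)              # linear DDPM noise schedule
alpha_bar = np.cumprod(1.0 - betas)

def eps_model(x_t, t):
    # Stand-in for the trained noise predictor eps_theta(x_t, t).
    return np.zeros_like(x_t)

def diffaug(x0, t):
    """Diffuse x0 to time t, then denoise with a single reverse-diffusion step,
    i.e., the one-step estimate of x0 from x_t."""
    eps = rng.normal(size=x0.shape)
    x_t = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    x0_hat = (x_t - np.sqrt(1.0 - alpha_bar[t]) * eps_model(x_t, t)) / np.sqrt(alpha_bar[t])
    return x0_hat

x0 = rng.normal(size=(3, 8, 8))                 # toy "image"
aug = diffaug(x0, t=100)                        # augmentation quality varies with t
```

Larger `t` injects more noise that a single denoising step must remove, which is what makes the resulting augmentations progressively harder to classify.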
***
> _**[Q2] Absence of ablation study of diffusion model. Can other diffusion model such as DiT also be used to apply DiffAug.**_
While we consider the DDPM model (a variance-preserving (VP) SDE) for all our ImageNet experiments, we use a variance-exploding (VE) SDE diffusion in our classifier-guided generation experiments for the CIFAR10 dataset. We find that DiffAug introduces identical improvements for both DDPM and VE-SDE in terms of classifier generalization, perceptually aligned gradients, and improved image-generation performance. Yes, in theory, it should be possible to apply DiffAug with a diffusion-transformer model instead of the UNet architecture.
***
We hope this resolves all concerns and look forward to resolving any outstanding concerns during the discussion period.
---
Rebuttal Comment 1.1:
Comment: In fact, I believe the authors haven't addressed my concerns. For the first question, I was hoping to see attempts at different sampling methods. The authors proposed using high-quality synthetic data but neglected to explore sampling approaches. Regarding the second question, my main concern is with the network architecture of the diffusion model, not the training method (VPSDE or VESDE). I hope the authors can still address my concerns. Thank you.
---
Rebuttal 2:
Title: Thank you for your response!
Comment: We thank the reviewer for promptly reading our rebuttal, stating their outstanding concerns, and giving us another opportunity to address their concerns. We truly appreciate this level of engagement.
**Better Sampling Methods and More Steps.**
We originally interpreted your question as follows: better images should yield better results, so wouldn’t we get “even better” results by using the “even better” generated images? To answer this, we considered a strong baseline: a large-scale high-quality synthetic dataset generated with a stronger diffusion model. In particular, we used high-quality synthetic data generated using Stable-Diffusion with 50 reverse-diffusion steps of the PNDM sampler. Then, we studied the role of DiffAug augmentations — generated with a single reverse-diffusion step (of DDIM, or equivalently DPM-Solver-1) — when extra synthetic images are already available. Our experimental results showed that, surprisingly, the single-step DiffAug augmentations introduce further improvements beyond extra synthetic images.
We believe we now understand your question better and it seems that you were interested in applying DiffAug with more denoising iterations (on the diffused training image), or using a different solver (e.g. DPM-solver-2). We address both of these questions directly, followed by a discussion:
**(1) More denoising steps**: In fact, we originally shared the reviewer's intuition that taking more denoising steps might yield better performance! In our preliminary analyses, we therefore explored a multi-step extension to DiffAug (e.g., DDIM). *Interestingly, we found that multiple denoising steps did not help as much as samples from a single reverse-diffusion step*, which is why we did not pursue it further. This initially surprised us and prompted us to think about it more carefully; we provide a detailed discussion below, in addition to the discussion in lines 162-167 of our original paper submission, where we briefly explain why we do not consider multi-step denoising techniques despite their potential to improve sample quality. We would be very glad to expand upon this explanation in the final version!
**(2) Different sampling method**: Additionally, in the last few days, based on this reviewer's suggestion, we tried using a single step of reverse diffusion with DPM-Solver-2, both for training the classifier and as the sampling method at test time for the DiffAug-Ensemble technique (DE) described in the paper. In particular, we first diffused the train example to a random diffusion time $t$ and used a single reverse-diffusion step of the order-2 DPM solver to integrate the diffusion ODE backwards from diffusion time $t$ to $t=\epsilon$, with $\epsilon=0.001$ instead of $0$ for numerical stability. The results are shown in the table below: *in both cases, while the resulting augmented images are of high visual quality, this did not work as well as the sampling strategy we are already using* (again, discussion is below). This was very interesting to try out, and we feel these results clearly strengthen our paper, as they are consistent with our understanding so far of what DiffAug is doing.
| Training Method | Default | DiffAug-Ensemble (DPM-Solver-1) | DiffAug-Ensemble (DPM-Solver-2) |
|:---:|:---:|:---:|:---:|
| AM | 26.72 | 34.08 | 31.77$\downarrow$ |
| AM+DiffAug/DPM-Solver-1 | 29.47 | 38.58 | 35.56$\downarrow$ |
| AM+DiffAug/DPM-Solver-2 | 22.96 $\downarrow$ | 29.69$\downarrow$ | 25.16$\downarrow$ |
(We use $\downarrow$ to denote lower performance due to the use of DPM-Solver-2 instead of the default DPM-Solver-1 used in our paper.)
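As a concrete illustration of the single-step sampling strategy discussed above (forward-diffuse a clean image to a random time $t$, then take one DDIM/DPM-Solver-1 reverse step), here is a minimal NumPy sketch. This is not our actual implementation: `eps_model` stands in for the pretrained noise-prediction network and `alphas_bar` for the DDPM noise schedule.

```python
import numpy as np

def diffaug(x0, eps_model, alphas_bar, rng):
    """One DiffAug augmentation: forward-diffuse a clean image `x0` to a
    random diffusion time t, then take a single reverse step back to an
    estimate of the clean image. `eps_model(x_t, t)` stands in for the
    pretrained noise-prediction network."""
    T = len(alphas_bar)
    t = int(rng.integers(1, T))                 # random diffusion time
    a_t = alphas_bar[t]
    eps = rng.standard_normal(x0.shape)
    x_t = np.sqrt(a_t) * x0 + np.sqrt(1.0 - a_t) * eps     # forward diffusion
    eps_hat = eps_model(x_t, t)                             # predicted noise
    # Single-step denoised estimate (one DDIM step from t to 0):
    x0_hat = (x_t - np.sqrt(1.0 - a_t) * eps_hat) / np.sqrt(a_t)
    return x0_hat
```

With a perfect (oracle) noise predictor this recovers `x0` exactly; with a learned model the estimate interpolates between training examples, with visual quality degrading as `t` grows.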
We now describe the benefits (apart from computational efficiency) of augmentation with a single reverse-diffusion step in classifier training:
**i. A single reverse-diffusion step generates examples on the image manifold that cannot be generated by multi-step samplers.** The generated sample lies in regions between high-quality samples and can be understood mathematically from Eq. 4, where we observe $\hat{\bf x}_t$ is a convex-sum over examples from the data distribution. Also, see Fig. 5 in the Appendix for an illustration on a toy dataset.
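To make the convex-sum intuition concrete: if the diffusion model's score matched that of the empirical training distribution exactly, the single-step denoised estimate would be the posterior mean, a softmax-weighted convex combination of training examples $\mathbf{x}^{(i)}$ (a sketch consistent with the convex-sum form of Eq. 4; notation slightly simplified):

$$\hat{\mathbf{x}}_t = \mathbb{E}\left[\mathbf{x}_0 \mid \mathbf{x}_t\right] = \sum_i w_i(\mathbf{x}_t)\,\mathbf{x}^{(i)}, \qquad w_i(\mathbf{x}_t) \propto \exp\!\left(-\frac{\lVert \mathbf{x}_t - \sqrt{\bar{\alpha}_t}\,\mathbf{x}^{(i)} \rVert^2}{2\,(1-\bar{\alpha}_t)}\right),$$

so for intermediate $t$ the weights spread over several nearby training images, placing $\hat{\mathbf{x}}_t$ in regions between high-quality samples.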
In fact, surprisingly, these "low-visual quality" samples are valuable in a way that "high-visual-quality" samples are not! It turns out that we get the best results by including the full range of visual quality: in Appendix B.5.2, we include an ablation study to identify the comparative advantages of “_stronger_” DiffAug augmentations ($t \in [500,1000]$) and “_weaker_” DiffAug augmentations ($t \in [0,500]$). While stronger augmentations offer greater performance improvements on ImageNet-C and OOD detection, we find using the entire diffusion time-range tends to yield better performance across all evaluated tasks.
---
Rebuttal Comment 2.1:
Title: Thank you for your response! (contd.)
Comment: **ii. Improved samplers generate higher quality examples faster but may not offer regularization effects similar to DiffAug.** To demonstrate this, we conducted experiments using additional high-quality synthetic samples generated with Stable-Diffusion. Improved samplers such as DPM can help generate new synthetic data faster. While large-scale synthetic data can improve performance on ImageNet-R, ImageNet-Sketch and ImageNet-D, we show that both DiffAug training and also DiffAug-Ensemble (DE) inference offer additional improvements in each of these cases. We attribute the performance improvements from extra synthetic images to the manually designed prompts (Table 8 of [1]) and large-scale upstream dataset (LAION-5B) used to train stable-diffusion. On the other hand, the benefits of DiffAug training can be attributed to the regularization effect introduced by training over examples lying in regions between clean samples (and on the image manifold).
**iii.** Informally, we also found another subtle catch with increasing the number of iterations. Our current method requires essentially no hyperparameter tuning: it just works. However, we found that **DiffAug with multiple reverse-diffusion steps tended to generate augmented examples that effectively belong to a different class than that of the original training image.** This happens because we use the entire diffusion time range by default; in particular, images diffused farther could get augmented into an image belonging to a different class. We were able to address this, e.g., by defining an upper limit $\tau$ such that the forward-diffusion step is applied with $t \le \tau$. However, every such solution we considered introduced additional hyperparameters that required tuning without noticeable gains, and in fact generally missed out on the robustness improvements from stronger augmentations. Interestingly, while DiffAug with one reverse-diffusion step also does not preserve the class label for larger diffusion time $t$ and injects label noise while training, the model can still "learn" from these images without underfitting, since the label noise is correlated with visual quality. For more details, please refer to lines 148-167 on page 4 of the main paper; our current discussion in the paper is brief, but we would be glad to expand it.
For test-time image adaptation, DDA uses multiple reverse-diffusion steps to deal with severe ImageNet-C corruptions, and an improved sampler such as DPM can help accelerate sampling in this case. However, we note that DDA contains hyperparameters (in addition to the diffusion range $\tau$) that can strongly influence test-time performance. DiffAug-Ensemble is an alternative approach that is robust to hyperparameter choices (Appendix B.6) and demonstrates comparable or better improvements than DDA while using a single reverse-diffusion step. We feel that it may be advantageous to implement and evaluate a multi-step DiffAug-Ensemble technique using the DPM solver.
***
In summary, the benefit (in terms of regularization and robustness) of our augmentations appears to lie in where the augmented data points sit in relation to the data manifold, and the "effective" augmentations do not necessarily correlate with visual quality (as supported by our experiments, including synthetic data, multi-step denoising, DPM-Solver-2 denoising, and the Appendix B.5.2 ablation). For this reason, we find that using a single reverse-diffusion DDIM step is both crucial for the improvements introduced by DiffAug training and robust in that it does not require any hyperparameter tuning. We appreciate your insightful questions and recommendations, and we look forward to including appropriate parts of this discussion in the final version of the paper.
For the final version, we will also implement DiffAug-Ensemble (for inference) using the DPM-solver and also include an analysis of hyperparameters (e.g., diffusion-range) and runtime. We will also include the results of our preliminary analysis where we generated DiffAug train augmentations using multiple steps (e.g., DDIM, DPM-Solver-2). We hope that these additional experiments can inspire multi-step extensions to DiffAug in future work. More specifically, while many multi-step samplers are impressively optimised for improved efficiency and sample quality (i.e., for humans), multi-step samplers for improved regularizations (i.e., for downstream neural models) likely require alternative designs. Finding these designs, and using them to build on DiffAug, would be a very exciting future research direction.
[1] Leaving Reality to Imagination: Robust Classification via Generated Datasets
***
---
Rebuttal 3:
Title: Thank you for your response! (contd.)
Comment: **Augmentation with Diffusion-Transformers (DiTs):** In theory, DiffAug can be applied using DiTs. Since DiTs are latent-diffusion models, we note that the augmentations can be generated by directly applying the VAE decoder over the one-step denoised latent representation. Previous works (e.g., [1,2]) follow this technique to use external guidance functions — that operate on images/piano-rolls as input — to guide the latent-diffusion sampling. Anecdotally, our informal experiments have suggested that DiffAug works with a variety of models. While we are greatly interested to train/evaluate classifiers with DiffAug using DiT, we are unable to report results of this experiment during the rebuttal discussion week due to time and resource constraints. We have already started working on this and we will certainly include results with DiT in the final revision.
[1] Universal Guidance for Diffusion Models. (ICLR 2024)
[2] Symbolic Music Generation with Non-Differentiable Rule Guided Diffusion (ICML 2024)
***
We hope these responses help address your concerns. If there are any other concerns, including other ablation analyses that you feel are important for inclusion in the final version, please let us know and we will gladly address your concerns/recommendations.
***
---
Rebuttal Comment 3.1:
Comment: Thank you very much for the detailed reply, which has solved my main concern, I will update my score.
---
Reply to Comment 3.1.1:
Title: Thank you!
Comment: We thank you for carefully considering our response and engaging with us during the discussion period. We are confident that the experiments prompted by your reviews not only strengthen our contributions but also help provide a better understanding of our new augmentation technique. | Summary: The paper explores the use of diffusion as a data augmentation technique to train robust classifiers. Specifically, it investigates whether a diffusion model trained without additional data can be leveraged to enhance classifier performance. The study shows that a diffusion model trained without extra data can improve classifier robustness on ImageNet-C by 2 points, while training improved classifiers with just a single step of reverse diffusion.
The proposed method is evaluated on challenging benchmarks such as Imagenet-C and Imagine, utilizing Vision Transformer (ViT) and ResNet models.
Strengths: 1. Originality: The paper proposes an effective method of one-step diffusion augmentation for training robust image classifiers. While similar augmentation techniques using generative models have been explored previously, the utilization of a single-step diffusion process in this context is interesting. The idea is straightforward yet innovative, offering a novel approach to enhancing classifier robustness with minimal computational overhead.
2. Clarity: the paper is clearly written.
3. Quality: the paper includes ablation studies and visualizations, e.g., on diffusion time.
Weaknesses: 1. Soundness of results: robustness is only evaluated on ImageNet-C (severity 5 only), which covers only local corruptions. Does the proposed method work when viewpoint, background, scale, style, or texture change? What about the other severities? For example, one could test on ImageNet-A [1], ImageNet-D [2], ImageNet-S [3], and ImageNet-R [4]. Since it is diffusion-based augmentation, the reviewer wondered whether it would help ImageNet-D the most, since that benchmark is diffusion-generated. Evaluation on these will help understand the strengths and weaknesses of the proposed approach.
2. It is not clear how useful the test-time augmentation is; an ablation study could be added.
3. Unclear why the proposed method can handle covariate shifts. Can the author explain intuitively?
The score will be updated based on whether evaluation concerns are addressed.
[1] https://openaccess.thecvf.com/content/CVPR2021/html/Hendrycks_Natural_Adversarial_Examples_CVPR_2021_paper.html
[2] https://openaccess.thecvf.com/content/CVPR2024/html/Zhang_ImageNet-D_Benchmarking_Neural_Network_Robustness_on_Diffusion_Synthetic_Object_CVPR_2024_paper.html
[3] https://arxiv.org/abs/1905.13549
[4] https://openaccess.thecvf.com/content/ICCV2021/html/Hendrycks_The_Many_Faces_of_Robustness_A_Critical_Analysis_of_Out-of-Distribution_ICCV_2021_paper.html
Technical Quality: 3
Clarity: 2
Questions for Authors: How are the hyperparameters of the one-step diffusion set? If you add more noise, you may change the image globally; if only local noise, you can only handle local noise corruption.
Can the author show an ablation study on this tradeoff if there is any?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for the detailed review and for your suggestions to strengthen the contribution, as well as for recognizing our method's novelty, simplicity, and computational efficiency. We are happy to see that you found our paper clearly written with nice visualisations. In the following responses, we aim to address the weaknesses and answer your questions.
> ***Evaluation on other severity***
Following this reviewer's suggestion, we evaluated the models on all severity levels and include the results in the global PDF. We notice overall trends similar to those observed for severity=5. On average, we find that DiffAug training and DiffAug-Ensemble inference lead to improved performance. We will include this in the final version of the paper.
***
> **_Evaluation on Imagenet-A [1], ImageNet-D [2], Imagenet-S [3] and ImageNet-R [4]._**
As per this reviewer’s suggestion, we carried out evaluations on ImageNet-A and ImageNet-D. For this evaluation, we also considered the additional classifiers we trained during the rebuttal week: in particular, we trained classifiers using synthetic datasets generated with stable-diffusion (please refer to the global response for more details) and denote this as RN50+Synth. We use RN50+Synth+DiffAug when trained using DiffAug in addition to the synthetic data. We note that ImageNet-R and ImageNet-S results are already included in the Appendix (Table 6) – we will label the paragraph describing these results on page 5 (line 220) to ensure that these results are easily accessible. We summarize our findings as follows:
* DiffAug training introduces slight improvements in the default evaluation mode (i.e., directly evaluating on test examples) for these datasets while DiffAug-Ensemble (DE) inference introduces notable improvements for Imagenet-R, ImageNet-D and ImageNet-S.
* On ImageNet-S, the average performance improves from 15.45 (using default evaluation) to 17.99 (using DE). For DiffAug-trained models, the average performance improves from 15.51 (default) to 18.26 (DE). In particular, DiffAug training and DE inference improves the RN50 performance from 7.12 to 12.52.
* On ImageNet-R, we find similar observations as above. For example, DiffAug training and DE inference improves the accuracy of RN50 from 36.16 to 41.61. On average, DE enhances the performance of DiffAug-trained models from 44.63 to 46.35. The best performance on ImageNet-R is obtained when using additional synthetic data: RN50+Synth has an accuracy of 49.28. Using both DiffAug training and DE inference, we can improve the performance to 54.71.
* On ImageNet-D, we observe that extra synthetic data from stable-diffusion helps enhance the robustness most as you had predicted. Interestingly, DiffAug helps to further improve robustness in this case. For example, RN50+Synth obtains an accuracy of 17.52 on ImageNet-D. Using DiffAug training in addition to extra synthetic data improves the performance to 19.18. When using DE-evaluation, we observe further improvements up to 21.41.
* On ImageNet-A, we generally find that all models except ViT struggle on this task. In this case, we find that DE inference leads to a reduction in performance (on average). DiffAug training neither improves nor negatively affects model performance on average, although we find slight improvements in many cases.
In summary, we find that DiffAug training improves classifier robustness in general and this can be further enhanced using DiffAug-Ensemble (DE) inference.
***
> **_[W2] Not clear how useful is test-time augmentation, can add the ablation study._**
[A] This is a good question, and we acknowledge that while we included this ablation in appendix B.6, the reviewers are not expected to look at appendix materials. We will gladly add a comment in the main body of the paper that highlights and refers the reader to these results.
To summarize these ablation results here: we find in multiple experiments that the DiffAug-Ensemble (DE) inference technique does indeed improve robustness. DE has two hyperparameters: maximum diffusion time-step and step-size controlling the number of augmentations. Overall, we also find that DE is robust to hyperparameter choices.
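As a rough sketch of what DE inference does (helper names hypothetical; `augment(x, t)` stands in for one-step DiffAug at diffusion time `t`): the test image and its augmentations are classified independently and their softmax outputs are averaged.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def de_predict(x, classifier, augment, times):
    """DiffAug-Ensemble (DE) sketch: average the classifier's softmax
    output over the clean test image and DiffAug augmentations generated
    at several diffusion times. The two DE hyperparameters correspond to
    max(times) and the spacing of `times`."""
    probs = softmax(classifier(x))
    for t in times:
        probs = probs + softmax(classifier(augment(x, t)))
    return probs / (1 + len(times))
```

The averaged probabilities still sum to one, and the prediction is the argmax of the averaged vector.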
***
> _**[W3] Unclear why the proposed method can handle covariate shifts. Can the author explain intuitively?**_
[A] We include a detailed answer in the global response providing an intuitive explanation of DiffAug's performance (in Q2). In summary, DiffAug generates augmentations of varying sample-qualities, each presenting a different level of challenge for the classifier. Intuitively, this produces a regularisation effect that helps improve various aspects of classifier robustness including covariate-shift. Notably, the complexity introduced by DiffAug does not lower the test accuracy despite the explicit use of low sample-quality images for classifier training.
***
> _**[Q1] How do the hyper parameters [...] tradeoff if there is any?**_
[A] This is a very good question, and the reviewer is correct that smaller diffusion times produce DiffAug examples with localised modifications whereas larger diffusion times produce global modifications. In all our experiments, we use the entire diffusion time range to generate the DiffAug examples. To understand the influence of diffusion time-range, we conducted an ablation study (see Appendix B.5.2) comparing between _weaker_ DiffAug augmentations ($t \in [0,500]$) and _stronger_ DiffAug augmentations ($t \in [500,1000]$). While stronger and weaker augmentations help enhance different aspects of classifier robustness — for example, stronger augmentations are more useful to enhance performance on ImageNet-C and OOD detection — using the entire diffusion time scale tends to work better across all tasks.
***
We hope we have addressed all your concerns and look forward to discussing any outstanding concerns in the discussion period.
---
Rebuttal Comment 1.1:
Title: Thank you for your answer. My questions are answered.
Comment: Score updated accordingly.
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: We thank you for your prompt acknowledgement of our new results and insights. We feel that these new experiments, prompted by your review, indeed strengthen the contributions of the paper. | Summary: This paper proposes a data augmentation method DiffAug for training robust classifiers. The method is very simple: first, add Gaussian noise to one training image, and then denoise the noisy images with a pre-trained diffusion model. They also propose methods for test-time augmentation using DiffAug. Experiments show that DiffAug is able to improve robust accuracy on a wide range of datasets such as ImageNet-C, R,S, OOD detection, and certification accuracy. This paper also shows that the proposed method can improve the generation quality of classifier-guided diffusion generation.
Strengths: 1. To the best of my knowledge, this method is new. Most of the existing methods use diffusion models to generate new training examples instead of augmenting existing ones. The proposed method is simple and easy to follow and implement.
2. This paper provides extensive ablation experiments to demonstrate the effectiveness of the proposed method. The proposed method achieves significant improvements even when combined with SoTA augmentations for robustness, such as AugMix and DeepAugment, which indicates that it provides new information for the model to learn. DiffAug can also be used to improve the performance of guidance classifiers in diffusion generation, thus leading to better results.
3. Models trained with this method have perceptually aligned gradients and this paper provides nice figures to demonstrate this.
Weaknesses: 1. **Missing related work and comparison**
Comparison with existing work that uses large synthetic datasets generated with diffusion models to improve training is lacking. For example, [1] uses diffusion models to generate extra training examples to improve empirical adversarial robustness, and [2] does so for certified robustness. Both works show significant robustness improvements.
Specifically, suppose DiffAug trains the model for E epochs on a dataset of N images. SoTA diffusion models can generate new images with K steps (K can be small, K <= 10, using DDIM, or even 1 step using some flow-based methods). If we use the diffusion model to generate EN/K new images and train the model on the original dataset plus the generated images, the extra computational cost is the same as DiffAug, if I understand correctly. Will DiffAug achieve better results than this method? If yes, what could be the motivation/theory?
2. **Concerns regarding some numbers reported**
In the DeepAugment paper, they report results for ResNet50 on ImageNet-C ([3], Figure 5). The accuracy of DeepAugment+AugMix (DAM in this paper) is close to 60%, which is much higher than the numbers in this paper (around 40%). Similarly, the results of AugMix reported in the original paper are also much higher than the numbers in this paper. (I do not work in the corruption robustness field, so there may be errors in citing these numbers)
Can the authors explain why the gap is so large? If it is due to different settings. I suggest following the original settings for a fair comparison.
Similarly, the results on Table 8 are also significantly higher than numbers in [4], and far from SoTA.
[1] Better Diffusion Models Further Improve Adversarial Training. ICML23
[2] A Recipe for Improved Certifiable Robustness. ICLR 2024
[3] The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution Generalization
[4] (CERTIFIED!!) ADVERSARIAL ROBUSTNESS FOR FREE!
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. I do not quite understand the sentence in line 128: what does "additional optimization objective" refer to?
```
When combining DiffAug with such novel augmentation techniques, we simply include Eq. 5 as an additional optimization objective instead of stacking augmentations
```
and line 131: why is stacking augmentations not good, and what is the difference between stacking augmentations and the proposed method?
```
our preliminary analysis on stacking augmentations showed limited gains over simply training the network to classify independently augmented samples likely because training on independent augmentations implicitly generalizes to stacked augmentations.
```
2. Why use unconditional diffusion models? Will using conditional diffusion models raise any problems?
3. Can the authors provide some insights into why the proposed method can improve both corruption robustness and certified adversarial robustness? Will this method also improve empirical adversarial robustness?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes, authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for the insightful review and for your suggestions to strengthen the contribution, as well as for your appreciation of our method's novelty, simplicity, and computational efficiency, our presentation/figures, and the extensive ablation experiments. In the following responses, we aim to address the weaknesses and answer your questions.
> _**[W1-A] Comparison [...] is lacking.**_
We address your suggestion for a comparison in the experiment [E1] described in the global response and include the results in Table 1 of the global PDF. In summary, we find that DiffAug training and DiffAug-Ensemble (DE) inference offer performance improvements _over and beyond_ additional high-quality synthetic datasets.
***
> _**[W1-B] Missing Related Work.**_
We thank you for bringing these papers to our attention, and we will certainly include a discussion in the final version. We identify important distinctions and similarities between our work and the suggested papers. While the mentioned works focus exclusively on adversarial robustness, we focus on regular non-adversarial classifier training with an aim to improve different aspects of robustness (e.g., covariate shifts, OOD detection, and certified adversarial accuracy). Nevertheless, both suggested papers demonstrate the use of synthetic data, generated with diffusion models trained on no extra data, to improve adversarial training; this is in fact very close to one of our research objectives: exploring the augmentation potential of diffusion models trained with no extra data. In contrast, previous generative data augmentation approaches for regular non-adversarial classifier training often rely upon synthetic datasets from large diffusion models such as Stable-Diffusion or Imagen.
***
> **_[W1-C] Specifically, [...] theory?_**
Thank you for this interesting question about performance and sample quality vs. compute budget! We provide a detailed answer in the global response (please see the answers to Q1 and Q2). In the context of adversarial training, we imagine that corresponding performance enhancements are likely possible using DiffAug. Additional exploration of adapting DiffAug to adversarial training, with empirical validation, is out of scope but is an interesting avenue for future work.
***
**[W2]** We appreciate the reviewer’s dedication to provide a well-informed review of this submission! We use official checkpoints and confirm our reproduction is error-free; we will release both the source-code and model checkpoints for reproducibility.
The DeepAugment+AugMix result in the original paper was obtained by considering all 5 severities of ImageNet-C, while our Table 1 uses severity=5 (please see Fig. 1 in the global PDF for results over the entire ImageNet-C). Our reproductions of the ImageNet-test and ImageNet-R accuracies of these models also match the official results. Since the ImageNet-C dataset contains 750k test samples for each severity level, considering all 5 severities is computationally expensive for some methods (e.g., DDA); in those cases, the standard practice is to evaluate on severity=5 since this represents the most challenging examples.
As we have described in lines 259-267 and as correctly identified by this reviewer, DDS already achieves state-of-the-art certified robustness results by using a pretrained 305M-parameter BeIT-L network. It is intuitive that training a classifier with DiffAug enhances the DDS certified accuracy and we empirically confirm this in Table 8 for a 86.6M parameter ViT-B model since this is computationally feasible.
***
**[Q1]** Additional optimization objective refers to extra loss term. When extending previous methods with DiffAug, we modify their official code by simply adding $\mathcal{L}$ in Eq. 5 to the original loss-terms. For example, AugMix involves 2 loss-terms: a cross-entropy loss term and a Jensen-Shannon divergence loss term. When combining it with DiffAug, we simply introduce Eq.5 as the third loss-term.
Stacking refers to the sequential application of distinct augmentations on the same image. While stacking augmentations may intuitively offer more robustness enhancements, our preliminary analysis showed no advantage of stacking augmentations over simply including the DiffAug loss in addition to the original training loss. Therefore, we choose to implement DiffAug as an additional loss term since this requires minimal change to the original code and fewer design choices (e.g., order of augmentations).
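A minimal sketch of this design choice (names hypothetical): the original method's loss is left untouched and a cross-entropy term on a DiffAug-augmented copy of the input is simply added, rather than stacking DiffAug on top of the other augmentations.

```python
import numpy as np

def cross_entropy(logits, label):
    """Numerically stable cross-entropy for one example."""
    z = logits - logits.max()
    return float(np.log(np.exp(z).sum()) - z[label])

def total_loss(model, x, y, base_loss, diffaug):
    """Combine an existing objective with the DiffAug term (Eq. 5 in the
    paper): `base_loss` stands in for the original method's loss (e.g.
    AugMix's CE + JSD terms) and `diffaug` for one-step diffuse-and-denoise."""
    return base_loss(model, x, y) + cross_entropy(model(diffaug(x)), y)
```

This keeps the change to an existing training script small: the original loss terms are untouched and one extra term is added, with no decisions about augmentation ordering.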
***
**[Q2]** Similar to many standard augmentation techniques, we do not rely upon labels for DiffAug, broadening its applicability -- e.g., DiffAug can also be applied at test time when labels are unknown. In theory, DiffAug with conditional diffusion models should not cause any problem. However, this introduces additional hyperparameters such as the guidance strength and choice of prompt. Additionally, it requires two forward passes through the diffusion model (one unconditional and one conditional) per training step. If the guidance strength is very high, the classifier could cheat by exploiting imperceptible image statistics (e.g., due to adaptive layer-norms). This is an interesting extension for future exploration.
***
**[Q3]** In summary, DiffAug generates augmentations of varying sample-qualities, each presenting a different level of challenge for the classifier. Intuitively, this produces a regularisation effect that helps improve various aspects of classifier robustness including covariate-shift. Since both DiffAug and DDS utilise a single reverse-diffusion step, we can observe improvements in certified adversarial accuracy. Yes, we can also improve empirical adversarial robustness against $l_2$ perturbations by using a majority vote following the Algorithm 2 in [1].
[1] (CERTIFIED!!) ADVERSARIAL ROBUSTNESS FOR FREE!
***
We hope we have addressed all your concerns and look forward to discussing any outstanding concerns in the discussion period.
---
Rebuttal Comment 1.1:
Title: Additional Information regarding Q2
Comment: We thank you again for your thorough review of our submission and insightful questions. Based on both Reviewer QY34’s recent positive recommendation and your question (Q2) about conditional augmentation, we want to inform you that we will include experiments on conditional DiffAug in the final version. In particular, we will explore class-conditioning with Diffusion-Transformer (DiTs) models, as that is also consistent with the DiT experiments we committed to include in the final version to Reviewer 3vHc. We hope we have addressed all your concerns and look forward to discussing any outstanding concerns in the remaining discussion period. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their insightful and thoughtful reviews with suggestions for improvements. We are happy to see that the reviewers appreciated the novelty (uEtP, gb3b) and simplicity/computational efficiency (uEtP, gb3b, 3vHc, QY34) of the method and found the paper to be well-presented (uEtP, gb3b, QY34) with extensive experiments and analysis (uEtP, QY34).
Based on the reviews, we conducted additional experiments during the rebuttal week and include the results in the attached PDF. We describe the experimental details as follows:
**[E1] Experiments with additional high-quality synthetic data**
For this experiment, we consider 1.3 M synthetic images --- released by [1] --- generated with Stable-Diffusion (SD) using diverse prompts aimed at enhancing classifier robustness. We fine-tune the torchvision resnet-50 model for 10 epochs using both real and synthetic images and call this RN50+Synth. Next, we repeat this finetuning process with DiffAug and call this RN50+Synth+DiffAug. We evaluate these models across many datasets and include the results in ***Table 1*** of the global PDF.
From Table 1, we immediately observe that large-scale synthetic data can improve performance on ImageNet-R, ImageNet-Sketch and ImageNet-D. However, in each of these cases, we can observe that both DiffAug training and DiffAug-Ensemble (DE) inference offer additional improvements. For example, DiffAug training and DE inference can improve performance on ImageNet-R from 49.28 (RN50+Synth, Default (Def)) to 54.71 (RN50+Synth+DiffAug, DE). Similarly, DiffAug training and DE inference improve performance on ImageNet-Sketch (35.45 to 37.39) and ImageNet-D (17.52 to 21.41).
On ImageNet-C, we observe that extra synthetic data does not offer any improvement while DiffAug training and DE inference offer similar improvements both with and without extra synthetic data.
***
**[E2] More Datasets**
We present further empirical evidence in favour of DiffAug-Training and DE inference in ***Table 2*** (ImageNet-A/D) and ***Figure 1*** (ImageNet-C, all severities).
***
Now, we address some common questions:
**Q1. [uEtP, 3vHc] DiffAug vs Additional high-quality data generated with more steps.**
This is an interesting question! We answer it based on experiment [E1] by comparing RN50+Synth with RN50+Synth+DiffAug. Based on our experimental results in Table 1, we observe, interestingly, that DiffAug using improved-DDPM offers improvements over and beyond additional synthetic data generated with Stable-Diffusion. This is surprising for the following reasons:
1. Stable-Diffusion is trained on LAION-5B, a much larger dataset that also subsumes ImageNet while the improved-DDPM model is trained on ImageNet data alone.
2. Additional synthetic data requires more compute per sample (e.g., 50 reverse-diffusion steps), whereas DiffAug uses just one reverse-diffusion step.
Even when using a diffusion model trained on a smaller dataset, DiffAug manages to offer performance improvements _complementary_ to high-quality synthetic examples. We imagine that a better diffusion model for generating DiffAug augmentations may provide further performance improvements. Ultimately, when using comparable diffusion models (e.g., same training data and similar parameter counts) for synthetic data augmentation as well as DiffAug, +Synth+DiffAug may likely be equivalent to +DiffAug allowing for a compute-efficient diffusion-based augmentation technique that combines the advantages of both extra training data as well as DiffAug. We leave this exploration for future work.
***
**Q2. [uEtP, gb3b] Intuitive Explanation of DiffAug's Performance**
We do have some intuition, which we present here, as to why DiffAug offers additional improvements, even beyond what we obtain by making direct use of 1.3M high-quality synthetic images.
As a first step (and to state what might be obvious), the improvements in ImageNet-R, ImageNet-Sketch and ImageNet-D when fine-tuned with extra synthetic images can be attributed to the large-scale upstream dataset (i.e., LAION-5B) used to train Stable-Diffusion and the various prompts designed to enhance data diversity (e.g., Table 8 of [1]). To intuitively understand the improvements from DiffAug training, we first note that DiffAug augmentation is qualitatively different from previous diffusion-based augmentation methods: depending on the diffusion time used to generate the DiffAug augmentation, the resulting image can vary greatly in quality (as shown in Fig. 1). As a result, classifying some of these augmented images is more challenging than classifying the original examples, producing a regularizing effect that leads to empirical robustness improvements. Lastly, yet importantly, the complexity introduced by DiffAug training does not sacrifice test accuracy despite training on poor-quality examples. This is an unusual and noteworthy property. We can explain this observation by interpreting denoised examples to lie on the image manifold, following recent theoretical studies [e.g., refs. 9 & 34 in the main paper].
We can use this interpretation of denoised examples to explain why DiffAug-Ensemble enhances robustness: a diffuse-and-denoise transformation applied to a test example from a different distribution *projects it towards the manifold of the original distribution*. This data-manifold intuition helps clarify how and why these examples are fundamentally different from examples augmented with pure Gaussian noise.
***
[1] Bansal and Grover. (2023) Leaving Reality to Imagination: Robust Classification via Generated Datasets
Pdf: /pdf/f5dcd6ffb763a99c1a953fa82e81385e9f26ea3b.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Dynamic Service Fee Pricing under Strategic Behavior: Actions as Instruments and Phase Transition | Accept (poster) | Summary: The paper studies the problem of dynamic pricing of service fees in a setting where only equilibrium quantities of supply and demand curves are observable. The main contributions of the paper lies in using Instrumental Variables in an online setting, and consequently, theoretically bounding the regret of the model.
Strengths: The problem is very interesting and the modelling is realistic rather than hypothetical.
Theoretical Analysis.
Weaknesses: Some proofs could had been included in the main paper as theoretical contribution is the strength of the paper.
Technical Quality: 3
Clarity: 2
Questions for Authors: The equilibrium equation (1) between supply and demand was unclear. How is this equation helpful for the buyer's strategies? Some intuition could be provided here.
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The paper makes good theoretical observations about the setting in consideration. However, the 100% civilian-run seller assumption could be relaxed to make it a more impactful contribution. Other limitations are stated by the authors and these are not straightforward to fix and I agree with authors that these should be left for future.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments and encouragement. The following are our responses to your questions.
Re Weaknesses: Thank you for your suggestion. In the camera-ready version, we will include some core proofs on the additional page of the main body.
Re Questions: $P_{St}$ represents the payment received by the seller, reflecting the supply curve, while $P_{Dt}$ represents the price paid by the buyer, reflecting the demand curve. Since the platform charges a service fee $a_t$, there is a gap between these two prices at equilibrium. In other words, at the equilibrium point where we use $Q_t^e$ to denote equilibrium quantity, we have $P_{Dt}(Q_t^e) = P_{St}(Q_t^e) + a_t$. However, buyers might not purchase according to their true demand, i.e., their willingness to pay. For example, a buyer may be able to afford Uber's price but choose to wait for some time until the price drops. Consequently, Uber may perceive the buyer's purchasing power as lower and reduce prices in future personalized pricing, thereby giving the buyer higher long-term utility. As a result, we use $P_{Dt}'$ to represent the buyer's apparent demand, based on which they choose the quantity to purchase, and it might be different from $P_{Dt}$ reflecting strategic behaviors. Therefore, we finally have $P_{Dt}'(Q_t^e) = P_{St}(Q_t^e) + a_t$.
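For concreteness, here is a minimal numerical sketch of the equilibrium condition $P_{Dt}'(Q_t^e) = P_{St}(Q_t^e) + a_t$ under hypothetical linear supply and apparent-demand curves; all coefficients and function names are illustrative, not values from the paper.

```python
# Hypothetical linear curves (coefficients are illustrative):
# apparent demand P'_D(Q) = b_D - k_D * Q, supply P_S(Q) = b_S + k_S * Q.
def equilibrium_quantity(b_D, k_D, b_S, k_S, a_t):
    """Solve b_D - k_D*Q = b_S + k_S*Q + a_t for the equilibrium quantity Q."""
    return (b_D - b_S - a_t) / (k_D + k_S)

def prices_at_equilibrium(b_D, k_D, b_S, k_S, a_t):
    q = equilibrium_quantity(b_D, k_D, b_S, k_S, a_t)
    p_buyer = b_D - k_D * q    # P'_Dt(Q^e): amount paid by the buyer
    p_seller = b_S + k_S * q   # P_St(Q^e): amount received by the seller
    return q, p_buyer, p_seller

q, p_b, p_s = prices_at_equilibrium(b_D=10.0, k_D=1.0, b_S=2.0, k_S=1.0, a_t=2.0)
print(q, p_b, p_s)  # 3.0 7.0 5.0 -- the price gap p_b - p_s equals the fee a_t
```

Note that raising the service fee $a_t$ shrinks the equilibrium quantity while widening the wedge between the buyer's and seller's prices, which is exactly the gap the platform collects.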
Re Limitations: Thank you for your comment. In Appendix B, we discuss how to generalize the 100% civilian-run seller assumption to $\alpha$ civilian-run. We chose $\alpha=100$% to concisely express our main contributions, namely (1) how to use endogenous actions as IVs, (2) how to deal with buyers' strategic behavior, and (3) how to explore the unknown environment through adaptive randomness injection.
Thank you again for your comments. We hope you find the response satisfactory. Please let us know if you have further questions.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I retain my score.
---
Reply to Comment 1.1.1:
Comment: Thanks a lot! | Summary: This paper considers a dynamic pricing problem where the buyer is strategic and provides regret analysis under this setting. In particular, they propose to use instrumental variables to estimate demand and discuss the impact of the supply randomness.
Strengths: 1. The paper is technically strong with several results and the presentation is clear: the study of "noise helps learning" and phase transition is interesting
1. The setting of supply equilibrium is nonstandard in dynamic pricing and less explored
Weaknesses: 1. The problem motivation is questionable as the service fee does not change often in practice whereas the online learning problem considered effectively relies on frequent changes (equivalently, a large number of periods)
1. It is a pity that the regret dependence on $\gamma$ is not captured in the current analysis, as also mentioned in the future direction of this paper.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. As mentioned in Example 1.1, the service fee is constant over a short period of time, and might not be that dynamic (compared to typical applications of dynamic pricing such as retailing or ride-hailing pricing). However, the numerical experiments involve a very large number of time periods; in particular, the regret converges at around 20,000. This inconsistency makes the proposed algorithm unviable in practice. Can the authors reconcile the inconsistency and provide a more convincing motivation?
1. Can the authors explain a bit more on the revelation principle and in particular how the buyer maximizes surplus?
1. The paper mentions using non-i.i.d. action as an instrument; however, a rigorous exposition on conditional independence (i.e., sanity check) is missing, which decreases the theoretical credibility of this work. Can the authors enrich this part?
1. The noise-agnostic algorithm in Algorithm 3 is interesting but the current description is a bit concise. Can authors add explanation on how to implement the hypothesis test?
1. The confidence region is rather wide in Figure 2. Can authors check the algorithm performances and maybe present more results with different $\sigma_S$ values? It also might be worthwhile to increase the number of trajectories from $10$.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed remarks and questions. The following are our responses to your questions.
Re Weakness 1 and Question 1: One application scenario could be that some online platforms, e.g., Amazon, sell a large number of products daily and take service fees between sellers and buyers.
At the same time, we notice that on ride-hailing platforms like Uber, prices in the app frequently fluctuate due to frequent changes in supply. While the booking fee doesn't change too often, promotions can change quickly and be highly personalized. Therefore, our model can also be applied to dynamic promotion pushes. This is another application scenario for our model.
Re Weakness 2: We agree that this is an important and challenging future direction. We would like to note that although our results on $\gamma$ may not be tight, we have improved upon previous results in the literature. Please refer to Lines 245-252. Compared to $O(1/(1-\gamma)^2)$ regret in another similar pricing problem [37], we improved it to $O(1/(1-\gamma))$ in Thm 3.2. Additionally, our algorithm does not require prior knowledge about $\gamma$.
Re Question 2: The revelation principle [63] tells us that for any mechanism that maximizes revenue, there exists an incentive-compatible (IC) mechanism that can achieve the same revenue. IC means that buyers can purchase according to their true demand, and the mechanism's incentives will guarantee them higher utility. Therefore, we study how to motivate and approximate within the IC mechanism framework, narrowing down the policy space and making it easier to find an approximately optimal mechanism. This not only simplifies the analysis but also makes it easier to apply in real-world scenarios (buyers don't need to struggle with searching for a good strategy).
Re Question 3: $a_t$ is not i.i.d. because it depends on the supply from $1$ to $t$ and the demand from $1$ to $t-1$. Therefore, $a_m$ and $a_n$ are correlated, since they, for example, both depend on the supply at $t=1$. However, since the platform announces $a_t$ before observing the equilibrium at time $t$, $a_t$ and $P_{Dt}$ are conditionally independent given $P_{S1}, ..., P_{St}, P_{D1}, ..., P_{D(t-1)}$. Therefore, we model it as a martingale for parameter estimation. Thank you for your comment; we will include this discussion in the next version.
Re Question 4: In our experiments, we chose $T_0 = 10\log_2 T$. We also used $T_0$ data points to empirically estimate $\sigma_S^2$. If the estimated value is greater than $10/\sqrt{T}$, we consider it to belong to $\mathbb{H}_1$; otherwise, we consider it to belong to $\mathbb{H}_0$. The parameter 10 does not affect the order of the regret bound in terms of $T$, and the concrete value is usually determined empirically, such as through cross-validation with past data. We hope this helps with your understanding.
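A minimal sketch of this test follows; the data generation and variable names are illustrative, while the burn-in length $T_0 = 10\log_2 T$ and the threshold $10/\sqrt{T}$ are taken from the reply above.

```python
import math
import random

def noise_hypothesis_test(supply_noise_samples, T):
    """Decide H1 (non-negligible supply noise) vs H0 from the first T0 samples."""
    T0 = int(10 * math.log2(T))              # burn-in length used in the experiments
    burn_in = supply_noise_samples[:T0]
    mean = sum(burn_in) / T0
    var_hat = sum((x - mean) ** 2 for x in burn_in) / T0  # empirical sigma_S^2
    threshold = 10 / math.sqrt(T)
    return "H1" if var_hat > threshold else "H0"

random.seed(0)
T = 100_000
noisy = [random.gauss(0, 1.0) for _ in range(T)]   # sigma_S^2 = 1
noiseless = [0.0] * T                              # sigma_S^2 = 0
print(noise_hypothesis_test(noisy, T), noise_hypothesis_test(noiseless, T))  # H1 H0
```

Since the threshold shrinks as $T$ grows while the variance estimate concentrates, the test separates the two hypotheses with high probability for large $T$.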
Re Question 5: Thank you for your suggestion. We increased the number of trajectories to 100 and observed that the bandwidth significantly decreased. Please refer to Figure 1 in the newly added pdf. Additionally, we tested the regret under different $\sigma_S^2$ values: 0.5, 1, 1.5, 2. Please refer to Figures 2-4 in the newly added pdf. We found that the larger the $\sigma_S^2$, the smaller the regret, which confirms our theoretical results.
Thank you again for your comments. We hope you find the response satisfactory. Please let us know if you have further questions.
---
Rebuttal Comment 1.1:
Title: Score adjusted
Comment: Thank you for your thoughtful feedback. Overall this is a technically strong and well-rounded paper. My previous concerns are largely well addressed and I appreciate the authors' efforts in addressing them. I have thus adjusted my score accordingly. One last minor note: I remain a little skeptical about the motivation. The authors have replied "While the booking fee doesn't change too often, promotions can change quickly and be highly personalized. Therefore, our model can also be applied to dynamic promotion pushes." -- if this is the case, it is probably more appropriate not to use service pricing as the problem background (just for the sake of being unique) if it does not fit the model well.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback! Our analysis of the model is twofold. First, charging service fees is critical for third-party platforms as it constitutes a significant portion of a company's revenue. This work makes initial attempts to integrate various factors of service fee pricing into an intuitive model, illustrating the concept of actions as instruments and the phenomenon of phase transition. Second, our paper adopts an experimental approach, commonly used by many online platforms. If a platform lacks prior market information, it must explore to learn about the environment while maximizing total rewards. This includes dynamically adjusting service fees and personalizing promotional efforts, both fitting well within such an experimental framework. Nevertheless, we look forward to incorporating promotions and developing more practical models as an exciting avenue for future research. Please let us know if you have any more questions. Thank you once again for your insights!
The paper presents a novel approach to dynamic pricing for third-party platforms, addressing the challenges of unknown demand, equilibrium observation limitations, and strategic buyer behavior. The authors propose an algorithmic solution that incorporates active randomness injection, non-i.i.d. actions as instrumental variables, and a low-switching cost design to balance exploration and exploitation while estimating demand and mitigating strategic behavior.
Strengths: 1 The paper is technically rigorous, providing a clear mathematical framework and novel algorithmic solutions for a complex problem in platform economics.
2 The authors demonstrate an innovative use of non-i.i.d. actions as instrumental variables and the proposed algorithms are shown to have optimal regret bounds, which is a strong technical result.
3 The paper is well-structured, with a clear explanation of the problem, related work, model assumptions, and a detailed description of the algorithms and their theoretical guarantees.
Weaknesses: 1 While the paper is comprehensive, it could benefit from more real-world examples or case studies to illustrate the practical application of the proposed algorithms.
2 The assumptions made for the model may not always hold in real-world scenarios. For example, in reality there is often diminishing marginal utility, so the assumption that supply and demand curves have linear forms hinders the practical application of this algorithm. The authors could discuss the robustness of the algorithms under different conditions.
3 The authors say that they can consider an incentive-compatible direct mechanism with the help of the revelation principle. But the design space of the platform must be constrained compared to the original space. The authors do not discuss the impact of this change on their design of the algorithms.
Technical Quality: 3
Clarity: 3
Questions for Authors: Could the authors provide more examples or case studies to illustrate how their algorithms perform in real-world scenarios?
How do the algorithms perform under different levels of noise in the supply curve, and what is their robustness to model misspecification?
The direct and IC mechanism could be constrained compared to the original space. Could the authors discuss the impact of this on their design of the algorithms?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: I did not find any limitations of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your kind remarks and questions. The following are our responses to your questions.
Re Weakness 1: Thank you for your suggestion. If there is a suitable opportunity, we will consider conducting experiments on real-world data.
Re Weakness 2: We are sorry for the confusion and misunderstanding. We discussed in Lines 165-177 why we use a linear model (based on market observations and the robustness of the linear model [15]) and how to go beyond it (using GMM and RKHS). We assumed a linear model to avoid heavy notations while expressing our main points: (1) how to use endogenous actions as IVs, (2) how to deal with buyers' strategic behavior, and (3) how to explore the unknown environment through adaptive randomness injection.
Re Weakness 3 and Question 3: Recall that our goal is to maximize the total revenue. The revelation principle [63] tells us that there exists a DSIC mechanism that can achieve the optimal revenue. Therefore, we only need to consider the policy class satisfying DSIC. If we approach the optimal DSIC mechanism, then we can approach any optimal mechanism no matter it is DSIC or not. In other words, we identify there exists an optimal point in a smaller policy space and then approximate it within this smaller space. This not only simplifies the analysis but also makes it easier to apply in real-world scenarios (buyers don't have to struggle with searching for a good strategy).
Re Question 1: Thanks for your question. One application scenario could be that some online platforms, e.g., Amazon, sell a large number of products daily and take service fees between sellers and buyers. At the same time, we notice that on ride-hailing platforms like Uber, prices in the app frequently fluctuate due to frequent changes in supply. While the booking fee doesn't change too often, promotions (which can be regarded as a reduction of the service fee) can change quickly and be highly personalized. Therefore, our model can also be applied to dynamic promotion pushes. This is another application scenario for our model.
Re Question 2: We demonstrate the relationship between regret and the noise level in the supply curve in Figure 5. We also add another group of experiments (see pdf) showing their negative correlation. Additionally, a higher noise level in demand leads to greater regret, as it results in larger estimation errors for the parameters. Please refer to Line 688 for this impact on $\hat\beta_1$. [56] informs us that model misspecification causes additional unavoidable errors, while [15] explains that linear models are not so affected by misspecification. This is one of the reasons why we assume linearity.
Thank you again for your comments. We hope you find the response satisfactory. Please let us know if you have further questions.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I understand that using a linear model simplifies notation and presentation. However, This also simplifies the learning process. It is still difficult to evaluate the performance of the algorithm in much more complicated, real-world applications. I will maintain my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your comment. We would like to emphasize that in this paper we take the first steps towards understanding service fee pricing in third-party platforms. We take an "actions as instruments" perspective and observe a "phase transition" phenomenon. We hope to extend the techniques to nonlinear functions in our next step (we believe the techniques and insights still hold). Thanks again for your review. | Summary: The paper tackles the dynamic pricing problem faced by third-party platforms specifically concerning the optimal setting of service fees in the presence of strategic and far-sighted customers. The objective is to maximize the total revenue over a given time horizon.
Strengths: + The motivation and the challenge of solving demand prediction and optimizing long-term revenue are clear.
+ Robustness consideration: the algorithm's design incorporates robustness by not requiring precise knowledge of the buyer's discount rate, enhancing its applicability across varying real-world scenarios.
Weaknesses: -. Declaration: line 100, confusing definition of price P and demand Q.
-. Limitation: lack of explanation of assumption 1, regarding the linearity with Gaussian noise.
-. There is a lack of details on instrumental variables in Algorithm 1, which is crucial in solving the regret bounds since instrumental variables are non-iid and non-external.
-. The trade-offs of data amount and randomness are straightforward in Section 3.1.
-. Lack of motivation and explanation of relaxing the two assumptions in Section 3.2.
-. Lack of detailed definition of T_0 in Algorithm 3. The intuition of using hypothesis testing is quite straightforward.
-. In Section 5, to show the regret order, it’s better to show a log-like figure which indicates the linearity relation.
Technical Quality: 3
Clarity: 3
Questions for Authors: see weakness.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: NA.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your remarks. The following are our responses to your questions.
Re Weakness 1: $P_{St}$ is the amount received by the seller at time $t$, and $P_{Dt}$ is the amount paid by the buyer. The gap between them is the service fee, which is $a_t$, and $Q$ is the transaction quantity. Then, $P_{St}(Q)$ and $P_{Dt}(Q)$ represent the supply curve and demand curve respectively.
Re Weakness 2: We are sorry for the confusion and misunderstanding. In Lines 165-177, we introduced the motivation and extension of Assumption 1. Our results hold for general subGaussian noise, and we use Gaussian noise for ease of illustration. We assume linear models because they perform well locally and have strong robustness.
Re Weakness 3: We use the service fee, i.e., action, as an instrumental variable, which is emphasized in the title. Since they are neither i.i.d. nor external, we used a martingale-based approach. Intuitively, at time $t$, $a_t$ only depends on the demand from times 1 to $t-1$; so given those, $a_t$ is independent of the demand at time $t$. We proved that in this case, the service fee can be used as an approximately valid IV.
Re Weakness 4: The challenge in our problem lies in how to add randomness. We carefully choose the amount of data used to make $a_t$ an approximately valid IV. Additionally, we add noise with carefully chosen magnitude to balance between exploration and exploitation.
Re Weakness 5: For Assumption 2.1, we can use the generalized method of moments (GMM), which is standard in econometrics, to handle non-linearity. For Assumption 3.2, we discuss other possible choices in Appendix B. Please refer to the discussion in lines 154-177. We use these assumptions to highlight our technical innovations while avoiding heavy notations, including (1) how to use endogenous actions as IVs, (2) how to deal with buyers' strategic behavior, and (3) how to explore the unknown environment through adaptive randomness injection.
Re Weakness 6: $T_0$ is a constant multiple of $\log T$. The constant does not affect the order of $T$ in the regret bound, and the concrete value is usually determined empirically, such as through cross-validation with past data. In our experimental design, as referenced in Line 1118, we chose $T_0 = 10 \log_2 T$.
Re Weakness 7: Although the idea of hypothesis testing is straightforward, our innovation lies in recognizing the phase transition and finding the optimal algorithm design through injecting randomness adaptively. This optimal algorithm is further achieved through hypothesis testing.
Re Weakness 8: Thanks for your suggestion. From the experimental results, when $\sigma_S^2=1$, we find that the regrets for $T=20000, 40000, 60000, 80000, 100000$ ($\log T$ = 9.90, 10.60, 11.00, 11.29, 11.51) are 220, 230, 234, 236, and 237, respectively. These points are even slightly sublinear. So, the actual performance is even better than the theoretical bound.
When $\sigma_S^2=0$, $\log(\text{Regret})$ = 6.43, 6.75, 6.95, 7.10, 7.17 when $\log T$ = 9.90, 10.60, 11.00, 11.29, 11.51. The estimated OLS slope is 0.47, corroborating our square-root-$T$ regret.
We will include this in the next version.
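For completeness, the reported slope can be reproduced with a plain least-squares fit of $\log(\text{Regret})$ on $\log T$, using the numbers above:

```python
# OLS slope of log(Regret) on log(T), using the sigma_S^2 = 0 numbers above.
log_T = [9.90, 10.60, 11.00, 11.29, 11.51]
log_regret = [6.43, 6.75, 6.95, 7.10, 7.17]

n = len(log_T)
mx = sum(log_T) / n
my = sum(log_regret) / n
slope = sum((x - mx) * (y - my) for x, y in zip(log_T, log_regret)) / \
        sum((x - mx) ** 2 for x in log_T)
print(round(slope, 2))  # 0.47, consistent with square-root-T regret
```

A slope near 0.5 on the log-log scale is exactly what a $\sqrt{T}$ regret curve would produce.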
Thank you again for your comments. We hope you find the response satisfactory. Please let us know if you have further questions.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for clarifying the details. Based on the response, I would like to raise my scores.
---
Reply to Comment 1.1.1:
Comment: Thanks a lot! | Rebuttal 1:
Rebuttal: Thank you all for your meaningful reviews! We add a new experiment increasing the number of trajectories to 100 with various noise levels in the supply. Please see the added pdf for detailed results.
Pdf: /pdf/0702a14b5f76ef8e7eec7aced6014dffbc95c042.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Task Confusion and Catastrophic Forgetting in Class-Incremental Learning: A Mathematical Framework for Discriminative and Generative Modelings | Accept (poster) | Summary: This paper tries to address and analyze the challenge of task confusion (TC) in class-incremental learning (class-IL). This paper proposes the Infeasibility Theorem that demonstrates that achieving optimal class-IL through discriminative modeling is impossible due to TC, even if CF is prevented. It further proposes the Feasibility Theorem, which shows that optimal class-IL can be achieved with generative modeling, provided CF is prevented. The authors further emprically assess their theorem with traditional class-IL strategies, including regularization, bias-correction, replay, and generative classifier.
Strengths: The writing of paper is clear.
The paper focuses on an interesting view: task confusion and proposes rigorous theorem to analyze it.
The theoretical contribution of this paper is significant. Researchers can use the theorems to guide their method design.
The authors further assess traditional continual learning strategies from the perspective of task confusion.
Weaknesses: 1) I recommend that the authors clarify their theorems, experimental results, and contributions in the introduction section. Additionally, the method comparison should be detailed in the related works section.
2) The authors should give more details about discriminative/generative modeling and the related continual learning strategies. Readers new to the area may not understand them.
Technical Quality: 3
Clarity: 2
Questions for Authors: See the weaknesses.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 4
Limitations: I do not see any potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the thoughtful review and valuable feedback you provided on our submission. We are committed to addressing the points you've raised.
In the revised paper, the following paragraph will be included in the introduction that clarifies the progression of our theoretical results (our contributions):
"For discriminative modeling, Lemmas 1 and 2 form the groundwork that leads to Theorem 1 and its subsequent Corollaries 1 through 5. For generative modeling, Lemma 3 underpins Theorem 2 and Corollary 6. Furthermore, Hypothesis 1 is derived from the principles outlined in Lemma 1."
In the related work section of our revised paper, we will include a table that compares various methods based on their ability to mitigate task confusion and catastrophic forgetting. This table will detail the qualitative attributes of each method and identify whether they utilize discriminative or generative modeling approaches, based on their documentation in the literature.
The following paragraph will be incorporated in the revised paper to give more details about discriminative/generative modeling:
"Since the advent of AlexNet, the neuroscience community has been skeptical of the deep learning community's discriminative modeling approach for classification [1]. They argue that humans do not learn p(y|x) for classification (discriminative modeling); instead, humans learn p(x) which is generative modeling. The primary issue with discriminative modeling is shortcut learning [1], a concern that has recently gained more attention within the deep learning community [2]. In this work, we investigate how shortcut learning can particularly hinder class-incremental learning. We discuss how shortcut learning in discriminative modeling leads to task confusion and argue that generative modeling, in principle, addresses this issue."
To give more detail about related continual learning strategies, we will provide a taxonomy table that exhaustively outlines the related work in continual learning. Also, we will include formal proofs for all of the theoretical contributions in the revised paper. For Lemmas 1 and 2, we will include the proofs submitted as a global rebuttal. The rest of the formal proofs are also available but not submitted due to the lack of space.
We would be glad to answer any further questions the reviewer may have on this matter.
Thank you for reading our response.
[1] Geirhos, Robert, et al. "Shortcut learning in deep neural networks." Nature Machine Intelligence 2.11 (2020): 665-673.
[2] Yang, Wanqian, et al. "Chroma-VAE: Mitigating shortcut learning with generative classifiers." Advances in Neural Information Processing Systems 35 (2022): 20351-20365. | Summary: This paper presents a novel mathematical framework for class-incremental learning and proves the Infeasibility Theorem, showing that optimal class-incremental learning is impossible with discriminative modeling, while the Feasibility Theorem shows that generative modeling can achieve it. The analysis suggests that adopting generative modeling is essential for optimal class-incremental learning.
Strengths: 1. The motivation is strong and clear, and the importance of such theoretical framework is significant.
2. The proposed framework is insightful and well-structured.
Weaknesses: 1. The proofs in appendices are informal with few equations.
2. In the appendix, Figure F.1 is missing, only the text *SIC.pdf* is presented.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Can you elaborate more on how the generative classifier promises optimal class-incremental learning?
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The proofs should be more formal to make the theoretical framework complete.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We're grateful for your thoughtful review and the insightful feedback on our submission. Your constructive comments are highly valued, and we aim to respond to the issues highlighted.
In the revised manuscript, we will include the SIC.pdf file (also submitted as a global rebuttal). The formal proofs of Lemma 1 and Lemma 2 are submitted as a global rebuttal. The rest of the formal proofs (for Corollary 1, Corollary 2, Theorem 1, Lemma 3, and Theorem 2) could not be submitted due to lack of space but are available, and we are happy to provide them upon the reviewer's request during the discussion period.
To answer the question of how the generative classifier promises optimal class-incremental learning we would like to invite the reviewer to read the following paragraph that encompasses the big picture of our work.
Big Picture. Since the advent of AlexNet, the neuroscience community has been skeptical of the deep learning community's discriminative modeling approach for classification [1]. They argue that humans do not learn p(y|x) for classification (discriminative modeling); instead, humans learn p(x) which is generative modeling. The primary issue with discriminative modeling is shortcut learning [1], a concern that has recently gained more attention within the deep learning community [2]. In this work, we investigate how shortcut learning can particularly hinder class-incremental learning. We discuss how shortcut learning in discriminative modeling leads to task confusion and argue that generative modeling, in principle, addresses this issue. We would be glad to answer any further questions the reviewer may have on this matter.
Thank you for reading our response.
[1] Geirhos, Robert, et al. "Shortcut learning in deep neural networks." Nature Machine Intelligence 2.11 (2020): 665-673.
[2] Yang, Wanqian, et al. "Chroma-VAE: Mitigating shortcut learning with generative classifiers." Advances in Neural Information Processing Systems 35 (2022): 20351-20365.
---
Rebuttal Comment 1.1:
Title: Thanks for the Response
Comment: The authors' response resolves my concerns; I will keep my rating. | Summary: The paper proposes a mathematical framework for class-incremental learning in discriminative and generative modeling, presenting an Infeasibility Theorem for discriminative models and a Feasibility Theorem for generative models.
Strengths: The paper is easy to understand. It offers a Mathematical Framework for modeling class-IL problem.
Weaknesses: 1. Spelling error: in "Bias-Correction Impotence Corollary", does "impotence" mean "importance"? Also, Corollary 1 (Catastrophic Forgetting) is missing an inequality sign, and the same problem appears in Corollary 2.
2. The Infeasibility Theorem is hard to understand; the grammar seems wrong: "The CF-optimal class-IL model in Definition 3 is not be optimal loss are incompatible."
3. The proof in Appendix E is unreadable, e.g., "SIC.pdf"?
The problem modeling of class-IL is good; however, the conclusions and corresponding proofs need more explanation and are hard to follow, especially the proofs in Appendix E. The most important proofs of this paper are placed in Appendix E, yet Appendix E is too brief, consisting only of informal verbal arguments.
Technical Quality: 2
Clarity: 3
Questions for Authors: No more questions.
Confidence: 2
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The limitations are proposed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our submission and for providing valuable feedback. We appreciate the constructive comments and would like to address the points raised.
This response is organized as follows: first, we specifically address the reviewer's comments (Responses). Second, we provide a paragraph to justify the existence and significance of this work by discussing the bigger picture and how it benefits the class-incremental learning community (Big Picture). We believe the reviewer will find these explanations useful and interesting, and we hope this will persuade the reviewer to reconsider their score.
Responses. We would like to clarify that the word “impotence” in “Bias-Correction Impotence Corollary” is not a spelling error. The term “impotence” meaning ineffectiveness is intentionally used to convey the ineffectiveness of Bias-Correction strategies to overcome the problem of Task Confusion (more detail on this in the final paragraph in Big Picture).
We are grateful to the reviewer for noting the missing inequality signs in Corollaries 1 and 2. We will correct these typos in the revised manuscript. Additionally, Theorem 1 will become as follows:
The CF-optimal class-IL model in Definition 3 is not optimal if the entire loss and the diagonal loss are incompatible:
$$
\sum_{i=1}^{T} \sum_{j=1}^{T} \left| \boldsymbol{P}_{ij} (\boldsymbol{\theta}) \right| \nparallel \sum_{i=1}^{T} \left| \boldsymbol{P}_{ii} (\boldsymbol{\theta}) \right|.
$$
In the revised manuscript, we will include the SIC.pdf file (also submitted as a global rebuttal). The formal proofs of Lemma 1 and Lemma 2 are submitted as a global rebuttal. The rest of the formal proofs (for Corollary 1, Corollary 2, Theorem 1, Lemma 3, and Theorem 2) could not be submitted due to lack of space but are available, and we are happy to provide them upon the reviewer's request during the discussion period.
Big Picture. Since the advent of AlexNet, the neuroscience community has been skeptical of the deep learning community's discriminative modeling approach for classification [1]. They argue that humans do not learn p(y|x) for classification (discriminative modeling); instead, humans learn p(x) which is generative modeling. The primary issue with discriminative modeling is shortcut learning [1], a concern that has recently gained more attention within the deep learning community [2]. In this work, we investigate how shortcut learning can particularly hinder class-incremental learning. We discuss how shortcut learning in discriminative modeling leads to task confusion and argue that generative modeling, in principle, addresses this issue. We would be glad to answer any further questions the reviewer may have on this matter.
Thank you for reading our response.
[1] Geirhos, Robert, et al. "Shortcut learning in deep neural networks." Nature Machine Intelligence 2.11 (2020): 665-673.
[2] Yang, Wanqian, et al. "Chroma-VAE: Mitigating shortcut learning with generative classifiers." Advances in Neural Information Processing Systems 35 (2022): 20351-20365.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. As an out-of-domain reviewer, I will raise my score to 5.
---
Reply to Comment 1.1.1:
Title: Thanks for your feedback
Comment: We appreciate your engagement. It seems like the score is still 4. Thank you very much. | null | null | Rebuttal 1:
Rebuttal: Formal proofs for Lemma 1 and Lemma 2:
(Formal proofs of Corollary 1, Corollary 2, Theorem 1, Lemma 3, and Theorem 2 are available and will be included in the revised version but not here due to lack of space.)
We'd like to provide a more formal and rigorous proof for Lemma 1, for how the loss function of an $N$-way classifier is mathematically equivalent to the combined loss functions of $\binom{N}{2}$ binary classifiers. This approach ensures that we account for each pair of classes only once, thereby avoiding any overlap.
Starting with a simple scenario where $N=2$, the loss function is defined as:
$$
I_{\boldsymbol{\theta}} = \int_{\mathcal{X} \times \{1, 2\}} v(f_{\boldsymbol{\theta}}(x), y) p(x, y) \, dx \, dy
$$
This setup is naturally a binary classifier, focusing solely on one class pair.
When considering $N=k$, with $k \geq 2$, the loss function can be expanded to:
$$
I_{\boldsymbol{\theta}} = \frac{1}{k-1} \sum_{i=1}^{k} \sum_{j=i+1}^{k} \int_{\mathcal{X} \times \{i, j\}} v(f_{\boldsymbol{\theta}}(x), y) p(x, y) \, dx \, dy
$$
Here, each class combination $(i, j)$, where $i < j$, is included once, optimizing the calculation and removing redundancy.
For example, when $N=3$, we break it down as follows:
$$
I_{\boldsymbol{\theta}} = \frac{1}{2} \left(
\int_{\mathcal{X} \times \{1, 2\}} v(f_{\boldsymbol{\theta}}(x), y) p(x, y) \, dx \, dy +
\int_{\mathcal{X} \times \{1, 3\}} v(f_{\boldsymbol{\theta}}(x), y) p(x, y) \, dx \, dy +
\int_{\mathcal{X} \times \{2, 3\}} v(f_{\boldsymbol{\theta}}(x), y) p(x, y) \, dx \, dy
\right)
$$
Introducing an additional class, $k+1$, results in more binary comparisons:
$$
I_{\boldsymbol{\theta}} = \frac{1}{k} \left(
\sum_{i=1}^{k} \sum_{j=i+1}^{k} \int_{\mathcal{X} \times \{i, j\}} v(f_{\boldsymbol{\theta}}(x), y) p(x, y) \, dx \, dy +
\sum_{i=1}^{k} \int_{\mathcal{X} \times \{i, k+1\}} v(f_{\boldsymbol{\theta}}(x), y) p(x, y) \, dx \, dy
\right)
$$
Conclusively, for any $N$:
$$
I_{\boldsymbol{\theta}} = \frac{1}{N-1} \sum_{i=1}^{N} \sum_{j=i+1}^{N} \int_{\mathcal{X} \times \{i, j\}} v(f_{\boldsymbol{\theta}}(x), y) p(x, y) \, dx \, dy
$$
This equation confirms that the loss function for an $N$-way classifier replicates that of several binary classifiers, with each class pair uniquely represented.
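The over-counting argument behind this final identity can be checked numerically for a per-sample loss: each example with label $y$ appears in exactly the $N-1$ class pairs containing $y$, so summing the restricted pairwise losses over-counts the total loss by a factor of $N-1$. A minimal sketch with hypothetical random losses (not tied to any actual classifier):

```python
import itertools
import random

random.seed(0)
N = 4  # number of classes
# hypothetical dataset: (per-sample loss value, label in 1..N)
data = [(random.random(), random.randrange(1, N + 1)) for _ in range(1000)]

# N-way loss: sum of per-sample losses over the whole dataset
full_loss = sum(v for v, _ in data)

# pairwise losses: restrict the sum to samples whose label is in the pair {i, j}
pair_loss = sum(
    v
    for i, j in itertools.combinations(range(1, N + 1), 2)
    for v, y in data
    if y in (i, j)
)

# each sample is counted once per pair containing its label, i.e., N-1 times
assert abs(full_loss - pair_loss / (N - 1)) < 1e-6
```

The check mirrors the $1/(N-1)$ normalization in the lemma's expansion.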
**************************************
For Incompatibility Lemma (Lemma 2), which addresses the relationship between the minimizers of two incompatible functions $f(x)$ and $g(x)$, denoted as $f(x) \nparallel g(x)$. This lemma posits that the minimizer of the sum $f(x) + g(x)$ is distinct from the minimizers of $f(x)$ and $g(x)$ individually, indicated by:
$$
x^* \neq x^f, x^g, \quad x^* = \arg\min_{x} f(x) + g(x), \quad x^f = \arg\min_{x} f(x), \quad x^g = \arg\min_{x} g(x).
$$
To begin, we define two functions as incompatible if the minimization of one does not necessarily imply the minimization of the other. In other words, their minimizers do not coincide. Suppose, for contradiction, that the minimizer $x^f$ of $f(x)$, is also the minimizer of the combined function $f(x) + g(x)$. This would imply:
$$
x^f = \arg\min_{x} f(x) = \arg\min_{x} \left(f(x) + g(x)\right).
$$
Given that at $x^f$, the derivative of $f(x)$ with respect to $x$ must equal zero:
$$
\frac{df(x)}{dx}\Bigg|_{x = x^f} = 0.
$$
If $x^f$ were also a minimizer of $f(x) + g(x)$, the derivative of the sum at $x^f$ would similarly vanish:
$$
\frac{d}{dx}\left(f(x) + g(x)\right)\Bigg|_{x = x^f} = 0.
$$
This would suggest that the derivative of $g(x)$ at $x^f$ must also equal zero, leading to:
$$
\frac{dg(x)}{dx}\Bigg|_{x = x^f} = 0.
$$
However, given that $f(x)$ and $g(x)$ are incompatible, $\frac{dg(x)}{dx}\Bigg|_{x = x^f} \neq 0$, indicating that there exists a direction opposite to the gradient of $g(x)$ at $x^f$ which can further decrease $f(x) + g(x)$. This contradiction shows that $x^f$ cannot be the minimizer of $f(x) + g(x)$.
Applying a symmetric argument for $x^g$, suppose $x^g$ is both the minimizer of $g(x)$ and $f(x) + g(x)$:
$$
x^g = \arg\min_{x} g(x) = \arg\min_{x} \left(f(x) + g(x)\right).
$$
Following the same logic as before, we derive that the derivative of $f(x)$ at $x^g$ should be zero, leading to:
$$
\frac{df(x)}{dx}\Bigg|_{x = x^g} = 0.
$$
Yet, since the functions are incompatible, $\frac{df(x)}{dx}\Bigg|_{x = x^g} \neq 0$, reinforcing our contradiction and showing that $x^g$ is not the minimizer of the combined function either.
In conclusion, $x^*$, the minimizer of $f(x) + g(x)$, is distinct from both $x^f$ and $x^g$, illustrating that when functions $f(x)$ and $g(x)$ are incompatible, their individual minimizers cannot optimize the combined function.
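As a concrete illustration of the lemma (a toy example of ours, not from the paper): take $f(x)=(x-1)^2$ and $g(x)=(x+1)^2$, which are incompatible since minimizing one does not minimize the other. Their sum is minimized at $x^*=0$, distinct from both $x^f=1$ and $x^g=-1$, and $g'(x^f)=4\neq 0$, matching the contradiction step above. A quick numerical check:

```python
# toy incompatible pair: f is minimized at 1, g at -1
f = lambda x: (x - 1) ** 2
g = lambda x: (x + 1) ** 2

# grid search for the minimizers (step 0.001 over [-2, 2])
xs = [i / 1000 for i in range(-2000, 2001)]
x_f = min(xs, key=f)                          # minimizer of f alone
x_g = min(xs, key=g)                          # minimizer of g alone
x_star = min(xs, key=lambda x: f(x) + g(x))   # minimizer of the sum

# the joint minimizer differs from both individual minimizers
assert x_f == 1.0 and x_g == -1.0 and x_star == 0.0
assert x_star not in (x_f, x_g)
```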
Pdf: /pdf/0f916c9a2120c176a80085dd244c9b6f34ef5204.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Improving Temporal Link Prediction via Temporal Walk Matrix Projection | Accept (poster) | Summary: The paper introduces a framework for analysis of relative encodings as a function of random walk matrices, and a new model for temporal link prediction. The new model offers SOTA performance on multiple link prediction datasets, and achieves this performance more efficiently than current best models. The design of the model is explained in detail.
Strengths: This is an overall good paper which proposes and evaluates a new SOTA approach for temporal link prediction. It also introduces an original framework for unifying a range of existing methods.
1. The provided code is sufficiently commented, and detailed instructions for setup are provided, along with utility functions to start the training process.
2. Extensive evaluation has been done of the proposed model, and SOTA performance across a range of datasets has been demonstrated.
3. Current methods have been posed in the newly proposed framework, which is an interesting analytical contribution.
Clearly presented and significant contribution to the field of temporal link prediction.
Weaknesses: 1. The limitations of the approach are only briefly discussed in the appendix.
2. Although the code is well presented, it did not run out of the box for me. Some effort was required to install additional dependencies not listed in the requirements.txt, and to fix a runtime error in the sampling algorithm of the DataLoader.
Minor points / text linting suggestions:
Line 81: What is a "unified formation", did you mean "formulation"?
Line 94: Style "all the interactions happen" -> "all the interactions that happen"
Line 128: Doesn't A^(i) aggregate the i-step temporal walks, instead of k-step? Typo?
Line 187: "The value of score function.." -> "The value of the score function".
Line 202: What is d? The node degree or dimensionality? Please define. Also in line 2, Algorithm 1.
Line 203: Typo enumerating the H vectors. The index 1 is repeated, whereas it should be followed by 2.
Line 205: Style "Pseudocode code" -> "Pseudocode"
Line 211: Having the integer "i" in the exponent of e could be misleading for a reader who is skimming to get an overview of the paper. This letter is usually reserved as the imaginary unit when exponentiating e, so ideally pick another letter as the index.
Line 261: "and obtained" -> "and obtain"
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Line 265: Why does ReLU reduce estimation error? Is this an empirical result, or an architectural consideration specific to your data? Please clarify.
2. Line 43: The second argument sounds a bit vague. Could specific examples be mentioned for methods that use and don't use temporal information in constructing their embeddings?
3. Minor point on code: distutils has been removed in python 3.12 and above. To run the code I've had to install the following dependencies in addition to the ones listed: setuptools, numba, sklearn. I've had to fix the following issue.
```{python}
set(random.sample(test_node_set, int(0.1 * num_total_unique_node_ids)))
```
Had to be changed to:
```{python}
set(random.sample(sorted(test_node_set), int(0.1 * num_total_unique_node_ids)))
```
This happened installing your code in a new python 3.12 environment.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations are briefly discussed in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments. In response, we have clarified how ReLU can reduce estimation error and discussed the utilization of temporal information in existing methods. We have also revised the paper and code according to your suggestions. We hope this addresses your concerns and are happy to answer any further questions you may have.
**Q1 Line 265: Why does ReLU reduce estimation error? Is this an empirical result, or an architectural consideration specific to your data? Please clarify.**
It is an architectural consideration specific to our construction of temporal walk matrices. Recall that our score function $s(\\cdot)$ for a temporal walk $W=[(w\_0,t\_0),(w\_1,t\_1),\\cdots,(w\_k,t\_k)]$ is $s(W)=\\prod\_{i=1}^k e^{-\lambda(t-t\_i)}$, which is always larger than zero. Therefore, each element of the corresponding temporal walk matrices $A\_{u,v}^{(k)}(t) = \\sum\_{W \\in \\mathcal{M}\_{u,~v}^k} s(W)$ should be non-negative. From Theorem 2, we know that the inner product of the node representations is approximately equal to the inner product of rows from the temporal walk matrices, which is $\\langle \\bar{\\boldsymbol H}\_u^{(i)}(t), \\bar{\\boldsymbol H}\_v^{(j)}(t) \\rangle \\approx \\langle \\boldsymbol A\_u^{(i)}(t),\\boldsymbol A\_v^{(j)}(t) \\rangle$ (appears in line 225). In line 265, each element of the raw pairwise feature $ \\tilde{\\boldsymbol f}\_{u ,v}$ is the inner product of node representations from $u$ and $v$, which can be considered as an estimation of the inner product between the u-th and v-th rows of the temporal walk matrices. Since each element of the temporal walk matrices is non-negative, the inner product of two rows from the temporal walk matrices should also be non-negative. Thus we feed the raw pairwise feature into ReLU to reduce estimation error. Specifically, consider the l-th element of the raw pairwise feature $ \\tilde f\_{u,v}[l]$ and assume it is the inner product of $\\bar{\\boldsymbol H}\_u^{(i)}(t)$ and $\\bar{\\boldsymbol H}\_v^{(j)}(t)$, which can be considered as an estimation of the inner product of $\\boldsymbol A\_u^{(i)}(t)$ and $\\boldsymbol A\_v^{(j)}(t)$.
Then if $\\tilde f\_{u,v}[l] < 0$, using $|\\cdot|$ to denote the absolute value, we will have $\\left |\\langle \\boldsymbol A\_u^{(i)}(t),\\boldsymbol A\_v^{(j)}(t) \\rangle - \\tilde f\_{u,v}[l]\\right| > \\left |\\langle \\boldsymbol A\_u^{(i)}(t),\\boldsymbol A\_v^{(j)}(t) \\rangle - 0\\right |$ since $\\langle \\boldsymbol A\_u^{(i)}(t),\\boldsymbol A\_v^{(j)}(t) \\rangle \\geq 0$. Therefore, setting negative values to zero by feeding $\\tilde{\\boldsymbol f}\_{u ,v}$ into ReLU will have a lower estimation error than using the original values.
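This clamping argument can be sketched numerically: when the target is known to be non-negative, replacing a negative estimate with zero never increases the absolute error. A minimal illustration with synthetic values (our own hypothetical data, not the paper's learned features):

```python
import random

random.seed(1)

def relu(x):
    return max(0.0, x)

# synthetic non-negative ground-truth inner products plus noisy estimates
for _ in range(1000):
    true_val = random.uniform(0.0, 2.0)           # non-negative by construction
    estimate = true_val + random.gauss(0.0, 1.0)  # noisy, may go negative
    clamped = relu(estimate)
    # clamping never increases the error against a non-negative target
    assert abs(true_val - clamped) <= abs(true_val - estimate)
```

When the estimate is already non-negative, ReLU leaves it unchanged, so the bound holds with equality in that case.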
**Q2 Line 43: The second argument sounds a bit vague. Could specific examples be mentioned for methods that use and don't use temporal information in constructing their embeddings?**
Sure. We list the score function $s(\cdot)$ of methods that are analyzed in Section 2.2.2 in the following table, where $W=[(w_0,t_0),..,(w_k,t_k)]$ indicates a temporal walk and $Z_i= \\sum\_{\\{(w',w),t'\\} \\in \\mathcal E\_{w\_{~i},~~t\_i}} \\exp(-\alpha (t\_i-t'))$ indicates the normalize term of CAWN.
| DyGFormer | PINT | NAT | CAWN |
| --------- | -------- | -------- | ------------------------------------------------------------ |
| $s(W)=1$ | $s(W)=1$ | $s(W)=1$ | $s(W)=\\prod\_{i=0}^{k-1}\\frac{\\exp(-\alpha(t\_i-t\_{i+1}))}{Z_i}$ |
As shown in the above table, only CAWN considers the temporal information carried by a temporal walk. The score functions of the other three methods always yield one, so the element of their temporal walk matrices merely counts the number of temporal walks, ignoring the temporal information. For more discussion about existing methods, please refer to the conclusion starting from line 163 of the paper.
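The distinction can be made concrete: under a count-based score ($s(W)=1$) two walks of the same length contribute equally regardless of when their interactions happened, while a time-decay score of the form $s(W)=\prod_i e^{-\lambda(t-t_i)}$ weighs a recent walk more than a stale one. A small sketch (the $\lambda$ value and timestamps are illustrative, not from the paper's experiments):

```python
import math

def count_score(walk_times, t):
    # DyGFormer / PINT / NAT style: every temporal walk scores 1
    return 1.0

def decay_score(walk_times, t, lam=0.5):
    # time-decay style: product of exp(-lam * (t - t_i)) over walk steps
    return math.prod(math.exp(-lam * (t - ti)) for ti in walk_times)

t = 10.0
recent_walk = [9.0, 9.5]  # interactions close to the prediction time
stale_walk = [1.0, 2.0]   # same length, but long ago

assert count_score(recent_walk, t) == count_score(stale_walk, t)  # indistinguishable
assert decay_score(recent_walk, t) > decay_score(stale_walk, t)   # recency matters
```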
**Q3 & W2: Although the code is well presented, it did not run out of the box for me. Some effort was required to install additional dependencies not listed in the requirements.txt, and to fix a runtime error in the sampling algorithm of the DataLoader.**
Sorry for the incomplete information regarding the required environment. The Python version used in our experiments is Python 3.9. We will add this information to the `requirements.txt` file and review our dependencies to ensure that no necessary packages have been omitted. Thanks for your efforts to adapt our code to the latest environment and we will revise our code according to your suggestion.
**W1 The limitations of the approach are only briefly discussed in the appendix.**
We will move the limitation section into the main body of our paper and expand the discussion.
**W3 Minor points / text linting suggestions**
We have revised the paper according to your suggestions for lines 81, 94, 187, 203, 205, 211, and 261.
For line 128: Yes, it is a typo, and we will correct it.
For line 202: $d$ indicates the dimensionality, and we will replace it with $d_R$.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response and clarifications.
As it stands, I have no more outstanding concerns, and am happy to give this work an Accept.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer YNEN
Comment: Thanks again! We sincerely appreciate your timely reply and support for our work. | Summary: Based on the analysis of traditional methods, this paper proposes a unified framework for relative encoding and introduces a new temporal neural network, TPNet. This model not only addresses the high time complexity issues of traditional methods but also enhances relative encoding by incorporating factors such as time decay effects.
Strengths: 1. The authors propose a unified framework for relative encodings, treating them as a function of temporal random walk matrices.
2. The temporal random walk matrix not only takes into account temporal and structural information but also incorporates the effects of time decay.
3. To reduce computational complexity, the authors propose an approximation scheme for the temporal random walk matrix, detailed in Algorithm 1.
4. The authors conduct thorough axiomatic proofs, providing solid theoretical support for the effectiveness of the TPNet network.
Weaknesses: 1、If the authors could provide a comparison of memory usage, it would make the model more convincing.
2、This paper does not provide an analysis of the impact of different functions g on relative encoding.
3、The author did not compare with the latest models (published in AAAI 2024 or WWW 2024) in Table 1.
Technical Quality: 3
Clarity: 3
Questions for Authors: please see weaknesses
Confidence: 1
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your helpful reviews. In response, we have reported the memory usage of different methods, discussed the construction of our decoding function $g(\cdot)$, and compared TPNet with a new temporal link prediction method. We hope this addresses your concerns and are happy to answer any further questions you may have.
**W1. If the authors could provide a comparison of memory usage, it would make the model more convincing.**
We report the peak GPU memory usage in the following table, measured in MB. The last line of the table reports the relative memory usage, calculated by dividing each method's memory usage by TCL's (i.e., method with the smallest average memory usage) and then averaging across datasets.
|Dataset|TCL|JODIE|DyRep|TGN|TPNet (ours)|NAT|TGAT|GraphMixer|DyGFormer|CAWN|PINT|
|:-----:|:-:|:---:|:---:|:-:|:---------:|:-:|:-:|:--------:|:-------:|:-:|:-:|
|Wikipedia|171|247|253|271|299|430|570|714|279|633|2278|
|Reddit|510|838|839|898|663|1265|909|1049|645|972|3381|
|MOOC|336|446|459|464|470|835|735|877|673|1279|1405|
|LastFM|911|1022|1038|1039|1044|1960|1309|1450|1643|2818|1113|
|Enron|145|113|136|138|254|361|542|684|483|606|862|
|Social Evo.|1439|1406|1430|1431|1504|2938|1838|1980|1547|2382|2154|
|UCI|102|90|111|111|205|224|502|643|210|1045|890|
|Flights|1335|2190|2192|2259|1503|2989|1733|1875|1673|2277|5532|
|Can. Parl.|110|63|85|86|211|239|509|651|3150|2017|818|
|US Legis.|101|62|85|86|201|297|500|642|440|563|810|
|UN Trade|394|376|399|400|514|811|793|935|733|1337|1124|
|UN Vote|741|735|759|760|870|1592|1140|1282|926|1684|774|
|Contact|1653|1791|1813|1814|1794|3428|2052|2195|1761|2596|2544|
|Rel. Memory|1.00|1.08|1.16|1.18|1.46|2.31|2.64|3.23|3.97|4.73|5.12|
As shown in the above table, TPNet has lower relative memory usage than other link-wise methods (i.e., DyGFormer, NAT, CAWN, and PINT), showing its efficient memory usage. Additionally, on average, TPNet's memory usage is 1.46 times that of the most memory-efficient method (i.e., TCL). Considering TPNet's SOTA performance and superior computational efficiency, this level of memory usage is satisfactory.
**W2. This paper does not provide an analysis of the impact of different functions g on relative encoding.**
The function $g(\cdot)$ introduced in Equation (3) can be considered a decoding function that extracts pairwise information from the constructed temporal walk matrices. Unlike existing methods that use predefined $g(\cdot)$, we design it as a learnable function. Specifically, we feed the raw pairwise feature into an MLP ( line 265 of the paper), which serves as our decoding function $g(\cdot)$. By optimizing the learnable parameters in the MLP, we can adaptively determine the most suitable function $g(\cdot)$ for different graphs, better modeling graph dynamics.
**W3. The author did not compare with the latest models (published in AAAI 2024 or WWW 2024) in Table 1**
After reviewing the papers published in AAAI 2024 and WWW 2024, we did not find any that were closely related to our work. However, we found an ICLR 2024 paper that proposes a new temporal link prediction model called FreeDyG [1], which adopts a Fourier Transformer to enhance the node representation learning process. We report AP under random negative sampling for performance comparison.
| | | Wikipedia | Reddit | MOOC | LastFM | Enron | Social Evo. | UCI |
| -------------------- | ------- | :--------------: | :--------------: | :--------------: | :--------------: | :--------------: | :--------------: | :--------------: |
| Transductive Setting | FreeDyG | 99.26 ± 0.01 | **99.48 ± 0.01** | 89.61 ± 0.19 | 92.15 ± 0.16 | 92.51 ± 0.05 | **94.91 ± 0.01** | 96.28 ± 0.11 |
| | TPNet | **99.32 ± 0.03** | 99.27 ± 0.00 | **96.39 ± 0.09** | **94.50 ± 0.08** | **92.90 ± 0.17** | 94.73 ± 0.02 | **97.35 ± 0.04** |
| Inductive Setting | FreeDyG | **98.97 ± 0.01** | **98.91 ± 0.01** | 87.75 ± 0.62 | 94.89 ± 0.01 | 89.69 ± 0.17 | **94.76 ± 0.05** | 94.85 ± 0.10 |
| | TPNet | 98.91 ± 0.01 | 98.86 ± 0.01 | **95.07 ± 0.26** | **95.36 ± 0.11** | **90.34 ± 0.28** | 93.24 ± 0.07 | **95.74 ± 0.05** |
As shown in the above table, TPNet significantly outperforms FreeDyG on MOOC, LastFM, and UCI, while exhibiting similar performance on other datasets, demonstrating its superiority. We also report the inference time (s/epoch) to compare efficiency.
| | Wikipedia | Reddit | MOOC | LastFM | Enron | Social Evo. | UCI |
| :-----: | :----------: | :-----------: | :-----------: | :-----------: | :-----------: | :-----------: | ------------- |
| FreeDyG | 12.37 | 61.11 | 32.19 | 102.73 | 9.75 | 163.94 | 4.75 |
| TPNet | 2.51 | 11.10 | 6.38 | 22.71 | 1.95 | 33.34 | 0.93 |
| Speedup | 4.93$\times$ | 5.51 $\times$ | 5.05 $\times$ | 4.52 $\times$ | 5.00 $\times$ | 4.92 $\times$ | 5.11 $\times$ |
As shown in the above table, TPNet is more computationally efficient than FreeDyG, achieving at least a 4.52 $\times$ speedup.
[1] Tian, Yuxing, Yiyan Qi, and Fan Guo. "FreeDyG: Frequency Enhanced Continuous-Time Dynamic Graph Model for Link Prediction." *The Twelfth International Conference on Learning Representations*. 2024.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thanks to the authors for the rebuttal; I have no more concerns.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer SMMz
Comment: Thank you for your reply. We are pleased to hear that you have no further concerns. | Summary: The article investigates the application of relative encodings in the task of link prediction over temporal networks. Initially, the authors formally unify previously used relative encodings, such as DyGFormer, PINT, NAT, and CAWN, within a unique framework. Subsequently, they propose a novel model, TPNet, and conduct comparative evaluations using several datasets and competitors.
Strengths: The provided framework offers a clearer and broader perspective on existing approaches based on temporal walks. The source code is well-commented and likely to be highly usable. The experimental setup investigates various competitors, datasets, and negative sampling techniques.
Weaknesses: A fundamental concept in the article is "relative encoding." However, the intuition behind this concept is not introduced until line 119. I recommend presenting the meaning of "relative encoding" earlier in the article for better clarity and understanding.
Technical Quality: 3
Clarity: 3
Questions for Authors: If I understand correctly, the authors introduce a weight decay mechanism to exponentially decrease the weight of older interactions. While this decision is well-motivated by previous studies, it is well known that the recurrence of interactions is fundamentally important in social networks (e.g., sociopatterns.org). I am curious whether the temporal periodicity of interactions is maintained with this approach. Could the authors discuss this aspect in more detail?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors discussed the limitations of the model in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your insightful comments. In response, we have discussed the temporal periodicity modeling ability of our score function and moved the motivation of the relative encoding earlier according to your suggestion. We hope this addresses your concerns and are happy to answer any further questions you may have.
**Q1. If I understand correctly, the authors introduce a weight decay mechanism to exponentially decrease the weight of older interactions. While this decision is well-motivated by previous studies, it is well known that the recurrence of interactions is fundamentally important in social networks (e.g., sociopatterns.org). I am curious whether the temporal periodicity of interactions is maintained with this approach. Could the authors discuss this aspect in more detail? .**
Although the time decay score function (defined in line 187 of the paper) is not specifically designed for modeling temporal periodicity, it can still capture periodic patterns to some extent. To illustrate this, consider a simple example where two nodes, $u$ and $v$, interact once every $\Delta t$ time interval. Then, we can divide the time into equal-size durations $(0,\Delta t),(\Delta t,2\Delta t),...$, and examine how the element of the temporal walk matrices changes within each duration. Specifically, assuming the current time $t \in (n \Delta t, (n+1)\Delta t)$ with $n>0$, the timestamps of the historical interactions between $u$ and $v$ will be $[\Delta t, 2\Delta t,..., n\Delta t]$, and the element of the one-step temporal walk matrix $A_{u,v}^{(1)}(t)$ will be $A_{u,v}^{(1)}(t) = \sum_{k=1}^n\exp(-\lambda (t-k\Delta t))$ (i.e., sum the scores of historical interactions). Since $\lambda >0$, $A_{u,v}^{(1)}(t)$ will decrease as $t$ increases within $(n\Delta t, (n+1)\Delta t)$ and the value of $A_{u,v}^{(1)}(t)$ is within $(\sum_{k=1}^n \exp(-\lambda k \Delta t),\sum_{k=0}^{n-1}\exp(-\lambda k \Delta t))$ (by setting $t$ to $(n+1)\Delta t$ and $n\Delta t$ respectively). The maximum value $\sum_{k=0}^{n-1}\exp(-\lambda k \Delta t)$ is at least 1 since $\exp(-\lambda 0\Delta t) =1$ and all other terms are nonnegative. For the minimum value $\sum_{k=1}^n \exp(-\lambda k \Delta t)$, if $\lambda$ is large enough, it will be less than 1. For example, if $\exp(-\lambda \Delta t) < \frac{1}{4}$, we have $\sum_{k=1}^n \exp(-\lambda k \Delta t) < \sum_{k=1}^n \frac{1}{4^k} < \frac{1}{2}$. Based on the above analysis, $A_{u,v}^{(1)}(t)$ exhibits the following behavior over time: going up to a value that is no less than 1 at the beginning of each duration and then decreasing to a value that is lower than 1 at the end of each duration, which acts like a periodic function. 
Then we can infer that $u$ and $v$ are likely to interact when $A_{u,v}^{(1)}(t)$ is low and unlikely to interact when $A_{u,v}^{(1)}(t)$ is high, capturing this periodic evolution pattern. We show a visual illustration in Figure 1 of the attached PDF file for better understanding.
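The analysis above can also be checked numerically. The following is a small self-contained simulation (an illustrative sketch added here, not code from the paper; the interval $\Delta t$, decay rate $\lambda$, and evaluation points are arbitrary choices) that computes $A_{u,v}^{(1)}(t)$ for interactions occurring once every $\Delta t$ and confirms the sawtooth-like behavior: the entry jumps to at least (approximately) 1 right after each interaction and decays below 1/2 by the end of each duration when $\exp(-\lambda \Delta t) < 1/4$.

```python
import math

dt = 1.0          # interaction period (Delta t), arbitrary choice
lam = 2.0         # decay rate; exp(-lam * dt) ~ 0.135 < 1/4
num_periods = 10

def walk_entry(t):
    """One-step temporal walk entry A_{u,v}^{(1)}(t): sum of exponentially
    decayed scores of all past interactions at times k*dt <= t."""
    n = int(t // dt)  # number of interactions that have occurred so far
    return sum(math.exp(-lam * (t - k * dt)) for k in range(1, n + 1))

# Within each duration (n*dt, (n+1)*dt), the entry starts near its maximum
# (the freshest interaction contributes exp(0) = 1 in the limit) and decays
# to below 1/2 when exp(-lam*dt) < 1/4, oscillating like a periodic function.
for n in range(1, num_periods):
    start = walk_entry(n * dt + 1e-9)      # just after an interaction
    end = walk_entry((n + 1) * dt - 1e-9)  # just before the next one
    print(f"duration {n}: start={start:.3f}, end={end:.3f}")
```

This matches the closed-form bounds derived above: the start value is close to $\sum_{k=0}^{n-1}\exp(-\lambda k \Delta t) \ge 1$ and the end value is close to $\sum_{k=1}^{n}\exp(-\lambda k \Delta t) < \frac{1}{2}$.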
**W1. A fundamental concept in the article is "relative encoding." However, the intuition behind this concept is not introduced until line 119. I recommend presenting the meaning of "relative encoding" earlier in the article for better clarity and understanding.**
We have added the motivation for relative encoding to the second section of the Introduction and improved the explanation of Figure 1 for better clarity and understanding. The revised part of the second section is as follows (changes highlighted in bold):
> Relative encodings have become an indispensable module for effective temporal link prediction [6-9] where, without them, node representations computed independently by neighbor aggregation will fail to capture the pairwise information. As shown in the toy example in Figure 1, A and F will have the same node representation due to sharing the same local structure. Thus it cannot be determined whether D will interact with A or F at $t_3$ according to their representations. **However, by assigning nodes relative encodings (i.e., additional node features, detailed in Section 2.2) specific to the target link before computing the node representation, we can highlight the importance of each node and guide the representation learning process to extract pairwise information specific to the predicted link from the subgraph**. For example, in Figure 1, we can infer from the relative encoding of E **(in red circle)** that D is more likely to interact with F than with A since D and F share a common neighbor, E.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. I’ve read it and you’ve clarified my doubt, so I’ve increased my score.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer vApy
Comment: Thank you for your timely reply! We sincerely appreciate your support for our work. | null | null | Rebuttal 1:
Rebuttal: We sincerely thank all reviewers for their time and valuable comments. We are pleased that the reviewers acknowledge the value of our work in providing a cohesive perspective on existing methods (Reviewers vApy, SMMz, YNEN), proposing a novel method (Reviewer vApy) with solid theoretical support (Reviewer SMMz), and conducting extensive experiments to verify the effectiveness and efficiency of the proposed method (Reviewers vApy, YNEN).
To the best of our efforts, we have provided thorough responses to address the issues raised by each reviewer, which mainly consist of:
- Clarification on method design and analysis of existing methods:
  - Discussion on the temporal periodicity modeling capability of the score function.
  - Discussion on the construction of the function $g(\cdot)$.
  - Clarification of why ReLU can reduce estimation error.
  - Clarification on the use of temporal information in constructing temporal walk matrices.
- Additional experimental results:
  - Reporting memory usage of different methods.
  - Comparison with a new temporal link prediction method.
- Reorganization of the paper:
  - Moving the motivation of the relative encoding earlier.
  - Moving the limitation section to the main body of the paper.
Pdf: /pdf/3407a7886a522271344d0b0d1c546cee273a713d.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
In-Context Learning of a Linear Transformer Block: Benefits of the MLP Component and One-Step GD Initialization | Accept (poster) | Summary: This paper studies in-context learning (ICL) in the linear regression setting with a Gaussian prior with a non-zero mean. They first prove that the linear Transformer block (LTB) enjoys a smaller approximation error than linear self-attention (LSA) in this setting, where the non-zero mean of the Gaussian prior comes into play. Second, they show that the core benefit brought by the linear MLP is that LTB can implement one-step gradient descent with learnable initialization (GD-$\beta$). Besides, all global minimizers in the LTB class are equivalent to the unique global minimizer in the GD-$\beta$ class. Finally, they investigate the non-convex dynamics of the GD-$\beta$ class and prove that it converges to the unique global minimizer. Simulations on GPT-2 support their theory.
Strengths: 1. Clarity: The paper is presented very clearly and is easy to follow.
2. Significance: The paper characterizes the benefit brought by the linear MLP in the ICL for linear regression setting, which is undoubtedly an important topic in the ICL theory community.
3. Quality: The paper investigates the additional effect of linear MLP from approximation and optimization perspectives in detail, and the derivation is solid.
Weaknesses: 1. In this paper, all non-linear parts in the transformer block (softmax and ReLU) are dropped. However, similar works on the non-linear transformer block have emerged in the ICL theory. [a] considers the ReLU MLP followed by a linear attention layer in the linear regression setting. [b] considers the standard transformer block in the in-context classification regime. Authors are suggested to discuss the potential impacts of these studies about non-linear transformers on the setting in this paper.
2. Though the LTB class enjoys the same global minimizer as the GD-$\beta$ class and the training dynamics of the GD-$\beta$ class can converge to the global minima, whether the training dynamics of the LTB class can converge to the global minimizer is not discussed by the authors. The authors should clarify this point.
[a] Transformers Learn Nonlinear Features In Context: Nonconvex Mean-field Dynamics on the Attention Landscape, ICML, 2024
[b] Training Nonlinear Transformers for Efficient In-Context Learning: A Theoretical Learning and Generalization Analysis, ICML, 2024
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. In the LTB class, the paper does not merge $W_K^TW_Q$ and $W_P^TW_V$ into one matrix parameter, respectively. Are there any benefits? Will it improve the approximation ability of the LTB?
2. In line 272, I think the $\Gamma$ is a preconditioner but not equivalent to the Newton step. Is it a typo?
3. The pretraining goal of this paper is different from the autoregression training in practice. Are there any potential approaches to study the autoregression pretraining and few-shot regression inference?
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for supporting our work! We will address your concerns as follows.
Q1: In this paper, all non-linear parts in the transformer block (softmax and ReLU) are dropped. However, similar works on the non-linear transformer block have emerged in the ICL theory. [a] considers the ReLU MLP followed by a linear attention layer in the linear regression setting. [b] considers the standard transformer block in the in-context classification regime. Authors are suggested to discuss the potential impacts of these studies about non-linear transformers on the setting in this paper.
A1: There are several differences between our work and [a,b]. Specifically, [a] considered an LSA layer after a network for ICL over the Barron class. In contrast, we consider a linear MLP after an LSA layer. In the former setting, the network plays the role of feature learning, while in the latter setting, the MLP plays the role of improving approximation power. [b] considered ICL of a stylized binary classification problem, while our focus is ICL of linear regression. The loss landscape of these two settings is significantly different. In the former setting, the authors use hinge loss for classification while we use square loss for regression. We will cite these two papers and discuss their relationship with our work in depth in the revision.
Q2: Though the LTB class enjoys the same global minimizer as the GD-beta class and the training dynamics of the GD-beta class can converge to the global minima, whether the training dynamics of the LTB class can converge to the global minimizer is not discussed by the authors. The authors should clarify this point.
A2: We agree that an optimization guarantee for GD-beta does not imply an optimization guarantee for LTB. This has been noted in Lines 362-366, but we will emphasize this more in the revision. However, the optimization of GD-beta offers insights into the optimization of LTB. First, since we have shown that the global minimizer of LTB is equivalent to that of GD-beta, efficient optimization of GD-beta offers a way to find the global minimizer of LTB. Second, GD-beta can be viewed as LTB with restricted parameters, so optimization of GD-beta offers insights into the optimization of LTB in certain subspaces. Finally, we emphasize that, although GD-beta is easier to optimize than LTB, its landscape is still non-convex, which captures part of the non-convexity challenges of the latter problem class.
Q3: In the LTB class, the paper does not merge $W_K^\top W_Q$ and $W_P^\top W_V$ into one matrix parameter, respectively. Are there any benefits? Will it improve the approximation ability of the LTB?
A3: We choose to present our results in terms of separate matrices just to be aligned with the practical Transformer setup. Our construction results also hold under merged matrices. Merging the matrices does not affect the approximation ability of LTB in our setting. We will clarify this in the revision.
Q4: In line 272, I think the \Gamma is a precondition but not equivalent to the Newton step. Is it a typo?
A4: In general, $\Gamma$ is a preconditioner. When $M$ goes to infinity, the globally optimal $\Gamma^*$ converges to $H^{-1}$, that is, the inverse Hessian of the population MSE. Plugging this into equation (5.4), we obtain one Newton step under the population MSE from $\beta^*$. We will clarify this in the revision.
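To make the GD-$\beta$ estimator concrete, here is a minimal numerical sketch (an editor-added illustration with arbitrary toy dimensions, not code from the paper): it implements one preconditioned gradient step on the in-context squared loss from an initialization $\beta$, and checks the empirical analogue of the Newton-step remark above — with $\Gamma$ set to the inverse *empirical* Hessian, one step from any initialization recovers the ground-truth predictor exactly on noiseless data.

```python
import numpy as np

rng = np.random.default_rng(0)
d, M = 5, 64  # feature dimension and context length (arbitrary toy sizes)

def gd_beta_predict(X, y, x_query, beta, Gamma):
    """GD-beta estimator: one preconditioned GD step on the in-context
    squared loss L(w) = ||y - Xw||^2 / (2M), starting from init beta."""
    grad = -X.T @ (y - X @ beta) / M
    w1 = beta - Gamma @ grad
    return x_query @ w1

# Noiseless demo task: y = X w_star.
w_star = rng.normal(size=d)
X = rng.normal(size=(M, d))
y = X @ w_star
x_query = rng.normal(size=d)

# With Gamma = (X^T X / M)^{-1}, the inverse empirical Hessian (a Newton
# step), one step from ANY initialization lands exactly on w_star.
Gamma = np.linalg.inv(X.T @ X / M)
pred = gd_beta_predict(X, y, x_query, beta=np.zeros(d), Gamma=Gamma)
print(abs(pred - x_query @ w_star))  # ~0 up to floating-point error
```

In the population-level statement above, the empirical Hessian $X^\top X / M$ is replaced by the population covariance $H$, which is the sense in which the optimal $\Gamma^*$ implements a Newton step.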
Q5: The pretraining goal of this paper is different from the autoregression training in practice. Are there any potential approaches to study the autoregression pretraining and few-shot regression inference?
A5: Good question. We believe the few-shot regression inference results in Theorem 5.3 in [1] can be extended to GD-beta. However, we think autoregression training would not improve the few-shot regression inference for GD-beta, since the model capacity is limited. We believe a more comprehensive model is required to show the benefits of autoregression training.
[1] Wu et al. HOW MANY PRETRAINING TASKS ARE NEEDED FOR IN-CONTEXT LEARNING OF LINEAR REGRESSION? ICLR2024.
---
Rebuttal Comment 1.1:
Title: Official Comment by Reviewer XdPR
Comment: Thank you for providing the rebuttal. The authors adequately addressed my concerns, so my confidence level regarding my review of this paper has increased. | Summary: his submission extends the earlier theoretical analyses of in-context learning with linear transformers by taking into account the data covariance and a non-zero mean on the task parameters. It is shown that a linear transformer with a linear MLP block (LTB) can implement the algorithm of one-step gradient descent with non-zero initialization. This additional flexibility of learned initialization allows LTB to match the Bayes optimal risk for linear regression, whereas the linear transformer studied in prior works cannot achieve such optimality at finite $M$. The authors also showed that the optimal one GD step solution can be obtained by training the parameters with gradient flow.
Strengths: In the past years, there have been many works that examined the ICL ability of linear transformers in the linear regression setting, where it is typically shown that transformers implement one preconditioned GD step on the squared error in-context. This submission goes beyond the prior results by showing that linear transformers can adapt to the non-zero mean of the task distribution with the aid of a linear MLP block. This yields a separation in the ICL efficiency between transformers with and without an MLP block, which is an interesting message in my opinion.
Weaknesses: I have the following concerns and questions:
1. Firstly, the obvious limitations of the analysis: the problem setting is still linear, and the authors only showed adaptivity of ICL to the task mean but not the full prior (see point below). Given the additional linear block, the approximation gap is not really surprising. The optimization analysis directly assumes the GD-$\beta$ parameterization, and it is not clear if gradient descent on the standard LTB reaches the same solution. Also, convergence is only shown at a population level with no finite-sample guarantees.
2. The Bayes optimal estimator in Lemma 6.1 is a generalized ridge regression estimator where the $\ell_2$ regularization is anisotropic and adapts to the prior. On the other hand, when $\beta^*=0$, the GD-$\beta$ solution in Theorem 5.2 reduces to the one-step GD estimator studied in prior works, which does not incorporate the prior covariance.
It is not clear to me why these two estimators always have the same statistical efficiency, as it is well known that anisotropic shrinkage provides a substantial advantage (Wu and Xu 2020) (Li et al. 2023). I guess the conclusion in Corollary 6.2 relies on $tr(H\Psi)$ being bounded. Can the authors comment on the optimality of GD-$\beta$ if this assumption is not satisfied?
(Wu and Xu 2020) *On the optimal weighted $\ell_2$ regularization in overparameterized linear regression*.
(Li et al. 2023) *Transformers as algorithms: generalization and stability in in-context learning*.
3. Some minor questions:
* On page 5, why does the "scratchpad" take the specific form of (3.1)?
* For the experiment reported in Table 1, what is the size of the trained model? Is the curriculum learning strategy in (Garg et al. 2022) also used?
---
**Post-rebuttal update:** the authors’ response addressed most of my concerns. I found that one of my criticisms is based on a misunderstanding of the optimal solution that the authors analyzed.
I have increased my score accordingly.
Technical Quality: 3
Clarity: 3
Questions for Authors: See Weaknesses above.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: See Weaknesses above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments. We will address your concerns below.
Q1: Firstly, the obvious limitations of the analysis: the problem setting is still linear, and the authors only showed adaptivity of ICL to the task mean but not the full prior (see point below). Given the additional linear block, the approximation gap is not really surprising. The optimization analysis directly assumes the GD-beta parameterization, and it is not clear if gradient descent on the standard LTB reaches the same solution. Also, convergence is only shown at a population level with no finite-sample guarantees.
A1: Although it is to be expected that an additional MLP component improves approximation power, we think it is quite surprising to rigorously prove an approximation gap between LTB and LSA, because such a result formally justifies the role of an MLP component in LTB for learning the mean signal.
We agree that our results are in a simplified setup, where the task is linear and the optimization is under the GD-beta parameterization and at a population level. Yet even in this simplified setup, no result of this kind existed before our work. Without a full understanding of LTB in the simplest possible setup, it seems unlikely that one can theoretically understand LTB in more practical setups. As ours is the first work to study LTB for ICL, we believe we have taken an important step in this direction and that the contribution of our work is significant.
Q2: The Bayes optimal estimator in Lemma 6.1 is a generalized ridge regression estimator where the $\ell_2$-regularization is anisotropic and adapts to the prior. On the other hand, when $\beta^*=0$, the GD-beta solution in Theorem 5.2 reduces to the one-step GD estimator studied in prior works, which does not incorporate the prior covariance. It is not clear to me why these two estimators always have the same statistical efficiency…. Can the authors comment on the optimality of GD-beta if this assumption is not satisfied?
A2: We would like to point out a potential misunderstanding of our result. The GD-beta solution in Theorem 5.2 does incorporate the prior covariance. This is because $\Gamma^*$ is a function of $\Omega$ defined in Theorem 5.2, which is a function of the prior covariance. We hope this also clarifies the follow-up comments made by the reviewer.
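For intuition, the generalized ridge (posterior-mean) estimator under the prior $w \sim N(\beta^*, \Psi)$ discussed in this exchange can be sketched numerically as follows (an editor-added toy illustration with arbitrary sizes, not code from the paper): the estimator shrinks anisotropically toward the prior mean with penalty matrix $\sigma^2 \Psi^{-1}$, and the check verifies that it satisfies the corresponding normal equations.

```python
import numpy as np

rng = np.random.default_rng(1)
d, M, sigma2 = 4, 50, 0.25  # toy sizes and noise level (arbitrary)

beta_star = rng.normal(size=d)               # prior mean
A = rng.normal(size=(d, d))
Psi = A @ A.T + np.eye(d)                    # prior covariance (positive definite)
X = rng.normal(size=(M, d))
w = rng.multivariate_normal(beta_star, Psi)  # task parameter
y = X @ w + np.sqrt(sigma2) * rng.normal(size=M)

# Posterior mean of w given (X, y): the generalized ridge estimator with
# anisotropic l2 regularization adapting to the prior.
Psi_inv = np.linalg.inv(Psi)
w_hat = np.linalg.solve(X.T @ X + sigma2 * Psi_inv,
                        X.T @ y + sigma2 * Psi_inv @ beta_star)

# w_hat minimizes ||y - Xw||^2 + sigma2 * (w - beta_star)^T Psi^{-1} (w - beta_star),
# so the gradient (the normal-equations residual) vanishes at w_hat:
resid = X.T @ (X @ w_hat - y) + sigma2 * Psi_inv @ (w_hat - beta_star)
print(np.linalg.norm(resid))  # ~0 up to numerical error
```

Setting $\beta^* = 0$ and an isotropic $\Psi$ recovers ordinary ridge regression, which is the special case compared against in the prior one-step GD literature.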
Q3: On page 5, why does the "scratchpad" take the specific form of (3.1)?
A3: Initializing the “scratchpad” with a fixed constant is a natural choice. Here the constant is set to $1$ for concreteness, but this number is not special.
Q4: For the experiment reported in Table 1, what is the size of the trained model? Is the curriculum learning strategy in (Garg et al. 2022) also used?
A4: In our experiments, we use a small GPT2 model with 6 layers and 4 heads for each layer. The key dimension is 32 and the inner dimension in the FeedForward Network is 128. (Since we use multi-head attention in the experiment, this does not directly correspond to the d_k and d_v in our theory part.) We use the curriculum learning strategy in (Garg et al., 2022). We will clarify this in the revision.
Strengths: a) In-context learning of statistical tasks particularly the linear regression has received particularly interest. The authors have identified an interesting and relevant modification to this problem setting when the ICL tasks are generated from a more general setup Gaussian prior with non-zero mean.
b) In this setting, an MLP layer and a skip connection are needed for approximation which is a novel finding. Even with these additional parameters it is shown that minimum of the task can be still be characterised and it corresponds to GD with learnable initialisation.
c) The statistical and non-convex optimization aspects of the GD-$\beta$ are interesting from a technical standpoint.
Weaknesses: The paper attributes this property to learning the MLP layer; however, the skip connection is equally important, so it is a bit misleading to attribute this to the MLP layer alone. Overall, the model is restricted to linear attention and a single layer.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1) What happens if positional encodings (P) are also included? An additional $P^{\top} W_K^{\top} W_Q P$ term can come from the attention; can this overcome the need for the MLP layer?
2) Does the optimisation dynamics of GD-$\beta$ offer any insights on the optimisation of the LTB ?
3) Does the results qualitatively change if the input sequence is generated from uncentered Gaussian $\mathcal{N}( \mu, H )$ instead of zero mean Gaussian prior ?
Minor corrections:
1) In l.251, it should be $ \Tau^{\top} $ in the expression of $W_K^{\top} W_Q$.
2) In l.263, an additional bracket in the expression of $\Omega$.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your comments. We will make sure to fix the typos in the revision. We will address your concerns below.
Q1: The paper attributes this property to learning the MLP layer; however, the skip connection is equally important, so it is a bit misleading to attribute this to the MLP layer alone. Overall, the model is restricted to linear attention and a single layer.
A1: We emphasize that the ability of LTB to learn non-zero mean is a joint effect of an MLP component and a skip connection. Note that LSA also has a skip connection (see also [1,2]) but its skip connection is inactive. In comparison, the MLP component in LTB activates the skip connection. Therefore, we attribute the ability to learn non-zero mean to the MLP component, which is the only difference between LTB and LSA.
Nonetheless, we agree one can attribute the ability to learn non-zero mean to the skip connection — as you have mentioned, without a skip connection, LTB reduces to LSA with a potential rank constraint on the parameter, which cannot learn non-zero mean as we have proved. The above two explanations take different perspectives to interpret the same phenomenon. We believe the way of interpretation does not diminish the significance of our contribution. We have discussed this in Lines 198-206 in our submission, but we will clarify this further in the revision.
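To make this relationship concrete, here is a small schematic sketch (an editor-added simplification using merged parameter matrices and no causal mask — not the paper's exact parameterization): an LTB forward pass is the LSA output (which already carries a skip connection) followed by a linear MLP with its own skip, and zeroing the MLP weights collapses LTB back exactly to LSA.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 6, 10  # embedding dimension, sequence length (arbitrary toy sizes)

def lsa(Z, A, B):
    """Linear self-attention with a skip connection, using merged
    matrices A ~ W_K^T W_Q and B ~ W_P^T W_V (a common simplification)."""
    return Z + B @ Z @ (Z.T @ A @ Z) / n

def ltb(Z, A, B, W):
    """Linear transformer block: LSA followed by a linear MLP with skip."""
    H = lsa(Z, A, B)
    return H + W @ H

Z = rng.normal(size=(d, n))
A, B = rng.normal(size=(d, d)), rng.normal(size=(d, d))

# With the linear MLP weights set to zero, LTB reduces exactly to LSA:
out_ltb = ltb(Z, A, B, np.zeros((d, d)))
out_lsa = lsa(Z, A, B)
print(np.allclose(out_ltb, out_lsa))  # True
```

The sketch only illustrates the architectural containment (LSA is LTB with a zero MLP); the non-trivial content of the paper is that a *nonzero* MLP strictly enlarges what the block can approximate.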
Q2: What happens if positional encodings (P) are also included? An additional $P^\top W_K^\top W_Q P$ term can come from the attention; can this overcome the need for the MLP layer?
A2: This is an open question. Although we show evidence that a scratchpad hardly helps, it may be possible that a positional encoding can achieve the effect of an MLP component. There are several ways of implementing positional encoding. For example, GPT-2 uses an additive trainable matrix $P$ as positional encoding. Note that this significantly increases the number of trainable parameters in both LSA and LTB, and also makes optimization harder. A more popular method for positional encoding is to use sinusoidal functions, which are only partially trainable. A fully rigorous analysis of the effect of all kinds of positional encoding is an interesting question, which we will comment on as a future direction. With that being said, we believe our current results, which clarify the benefits of an MLP component, have already made significant contributions.
Q3: Does the optimisation dynamics of GD-beta offer any insights on the optimisation of the LTB ?
A3: Yes, we think the optimization of GD-beta offers insights into the optimization of LTB. First, since we have shown that the global minimizer of LTB is equivalent to that of GD-beta, efficient optimization of GD-beta offers a way to find the global minimizer of LTB. Second, GD-beta can be viewed as LTB with restricted parameters, so optimization of GD-beta offers insights into the optimization of LTB in certain subspaces. Finally, we emphasize that, although GD-beta is easier to optimize than LTB, its landscape is still non-convex, which captures part of the non-convexity challenges of the latter problem class. We will include these discussions in the revision.
Q4: Does the results qualitatively change if the input sequence is generated from uncentered Gaussian $N(\mu, H)$ instead of zero mean Gaussian prior ?
A4: Some of our results still hold when covariates are drawn from an uncentered Gaussian instead of a centered Gaussian (see the detailed discussion below). Examining all of our results under uncentered Gaussians is left as future work. In fact, we can derive an analog of Theorem 5.2 in our submission when the features $x$ are sampled from an uncentered Gaussian distribution $N(\mu,H)$. Without loss of generality, we assume $\Psi$ and $H$ are invertible. Under Assumption 3.1 in our submission, the global minimizer of the GD-$\beta$ class is $\beta = \beta^*$ and
$$\Gamma = \Psi (H + \mu \mu^\top) \cdot A^{-1},$$
where $A = \frac{1}{M^2} \mathbb{E}[X^\top X \Psi X^\top X] + \frac{\sigma^2}{M}(H+\mu \mu^\top)$ is a positive definite matrix and $X \in \mathbb{R}^{d \times M}$ is the feature matrix. The optimal ICL risk is $\sigma^2 + tr(\Psi (H+\mu \mu^\top) - (H+\mu \mu^\top) \Psi (H+\mu \mu^\top) A^{-1} (H+\mu \mu^\top) \Psi),$ where $A$ is defined above. Moreover, since the LTB class can always implement a GD-$\beta$, we know there exists an LTB function that can achieve a constant level of ICL risk regardless of how large $\beta^*$ is.
[1]. R Zhang et al. Trained Transformers Learn Linear Models In-Context. JMLR 2024.
[2]. K Ahn et al. Transformers learn to implement preconditioned gradient descent for in-context learning. NIPS 2023
---
Rebuttal Comment 1.1:
Title: Reply to Author Rebuttal
Comment: Thank you for the rebuttal and for addressing my questions. The rebuttal did not change my initial impression of the paper: it presents an interesting point but also has its share of limitations. Therefore, I maintain my rating. | Summary: This paper studies the in-context learning of linear regression with a Gaussian prior that has a finite, non-zero mean across tasks. Specifically, the authors show that a linear transformer block (linear attention with an MLP layer) can achieve near-optimal Bayes risk for this task, whereas a linear attention-only block suffers from an approximation error. They further demonstrate that an LTB hypothesis class can be well-represented by a subset class of one-step gradient descent estimators, GD-$\beta$, for the ICL task.
Strengths: This work delves into theoretically understanding the very relevant and hot topic of ICL. The problem motivation is clear and concise, and I really like how the authors essentially redo the previous results of LSA and GD-$0$ estimators with LTB and GD-$\beta$, demonstrating the effective correspondence between them and showing that they achieve near Bayes optimal ICL risk. Overall, I think the authors do a great job of portraying the benefits of the MLP component in transformers, and it’s a good theoretical contribution, as most papers involve theory only from the LSA perspective.
Weaknesses: I don’t see any major weaknesses in the paper as such, but I do have a few questions (see questions).
Technical Quality: 3
Clarity: 3
Questions for Authors: Have the authors seen experimental evidence that trained LTB models converge to GD-$\beta$ models, similar to how LSA converges to GD-$0$? I believe the correspondence established between the two is excellent, and theoretically studying the convergence is probably another work in itself, but seeing some experimental evidence would have been really cool.
I believe the restrictions imposed on $d_k, d_v$ for the hypothesis classes are to ensure sufficient overparameterization, which is pretty common for theoretical works. However, I think in practice these dimensions are typically set to be less than $d$. Are these necessary conditions, or do the authors believe there are ways to override them for their analysis?
The constructions for Lemma $5.1$ and Theorem $5.3$ both operate on pairs of matrices composed together (if not more), which is also the common practice in the analysis that I’ve seen. Any reasons why the authors didn’t opt for analyzing with merged matrices, especially when the inner dimension of the product $W_p^TW_v$, that is $d_v$, is assumed to be larger than $d$? It doesn’t even enforce any additional rank constraints on the product matrix.
It was interesting to see that in your GPT-2 experiments, the full transformer model (with all the non-linearities) also relied heavily on the MLPs. How much of this do you think can be attributed to the presence of softmax only? Did you try experiments where you used LSA in GPT-2 training with MLPs?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes, the authors have addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for supporting our paper! We answer your questions below.
Q1: Have the authors seen experimental evidence that trained LTB models converge to GD-beta models, similar to how LSA converges to GD-0?
A1: Yes, empirically we also see that trained LTB models converge to GD-beta models. Similar results were obtained in [2] for LSA converging to GD-0 (see their Figure 2). We will try to add the figure for LTB converging to GD-$\beta$ in the next version.
Q2: I believe the restrictions imposed on $d_k$, $d_v$ for the hypothesis classes are to ensure sufficient overparameterization, which is pretty common for theoretical works. However, I think in practice these dimensions are typically set to be less than $d$. Are these necessary conditions, or do the authors believe there are ways to override them for their analysis?
A2: In multi-head attention, $d_k$ and $d_v$ are set to be $(d+1)/m$, where $m$ is the number of heads and $d+1$ is the embedding size. Since we only consider single head attention, this reduces to $m=1$ and $d_k = d_v = d+1$, which satisfies our assumptions.
The assumption that $d_k, d_v \ge d$ simplifies the discussion, but we believe our results can be generalized under relaxed assumptions using standard techniques. For instance, if $d_k < d$, then the global minimum of LSA is achieved when $W_K^\top W_Q$ is the best rank-$d_k$ approximation of $\Gamma^*$. Similar results are obtained in [1]. We expect a similar result can be proved for LTB, where the global minimum is achieved by certain low-rank approximators.
Q3: The constructions for Lemma 5.1 and Theorem 5.3 both operate on pairs of matrices composed together (if not more)... Any reasons why the authors didn’t opt for analyzing with merged matrices, especially when the inner dimension of the product $W_p^\top W_v$, that is $d_v$, is assumed to be larger than d?...
A3: We choose to present our results in terms of separate matrices just to be aligned with the practical Transformer setup. As you mentioned, our construction results also hold under merged matrices. We will clarify this in the revision.
Q4: It was interesting to see that in your GPT-2 experiments, the full transformer model (with all the non-linearities) also relied heavily on the MLPs. How much of this do you think can be attributed to the presence of softmax only? Did you try experiments where you used LSA in GPT-2 training with MLPs?
A4: We think the gap observed in our GPT-2 experiments (w/ MLP or w/o MLP) is caused by MLPs instead of softmax since softmax is applied in both cases. We thank the reviewer for the proposal to train deep LTB and LSA models. However, training deep linear models is extremely hard without normalization techniques like softmax, and we observe unstable training performance when we train deep LTB and LSA. In practice, the normalization is important for training a deep model.
[1]. Y Li et al. Fine-grained Analysis of In-context Linear Estimation: Data, Architecture, and Beyond. 2024
[2]. JV Oswald et al. Transformers Learn In-Context by Gradient Descent. ICML2023.
---
Rebuttal 2:
Comment: Thank you for answering my questions. Please add the figure regarding Q1, and the clarification regarding Q3 in the next version. I will keep my positive rating and recommend acceptance. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
DiffNorm: Self-Supervised Normalization for Non-autoregressive Speech-to-speech Translation | Accept (poster) | Summary: This work builds on TRANSPEECH (Huang et al., 2023) by applying the diffusion method to reduce noise, thereby normalizing speech units for further generation. The authors further use classifier-free guidance to enhance non-autoregressive generation. They conduct experiments on CVSS En-Fr and En-Es datasets, comparing their methods with the baseline. Both proposed methods show improvements.
Strengths: 1. This work proposes a diffusion method to normalize target speech units, which outperforms previous approaches.
2. It explores classifier-free guidance to improve NAT generation for the speech-to-speech translation task.
Weaknesses: 1. The paper overstates its contributions. The authors claim to be the first to apply diffusion in the speech-to-speech translation task. However, they merely use it to generate auxiliary training targets and still follow the S2UT strategy (Lee et al., 2022b). This diminishes the novelty of the paper.
2. Although the work is based on TRANSPEECH, the authors compare only two of the three translation directions. One of the core contributions is applying the diffusion method to normalize speech units, yet the method shows minimal improvement (only 0.3 BLEU) compared to BiP (Bilateral Perturbation) in the En-Fr task. Given the method significantly outperforms others in the En-De task, this result is perplexing.
3. For a minor suggestion, in Tables 3 and 4, ensure consistent significant figures. Additionally, it would be more appropriate to place Table 3 before Table 4.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. How does the performance fare on the CVSS Fr-En task?
2. In my opinion, the goal of normalizing the unit is to disentangle linguistic information from noisy speech features. If you use DDPM for this process, the target $z_0$, which is the output of the VAE encoder, should be the linguistic feature. How do you ensure that $z_0$ represents the desired linguistic information?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors appropriately state the limitations and broader impacts of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the time and effort you have dedicated to reviewing this paper. We are glad that you find our diffusion-based normalization and regularization strategy effective. Below are our responses to your comments:
> However, they merely use it to generate auxiliary training targets and still follow the S2UT strategy. This diminishes the novelty of the paper.
>
We believe that innovative data-centric methods represent significant contributions to the field, even if the diffusion model is not directly employed as the speech-to-speech translation model. In fact, it is known that diffusion models struggle with tasks that have weak source-target dependencies, such as translation [3].
Our method distinguishes itself from previous normalization techniques [1,2] by requiring only target-side information and being learned in a self-supervised manner. Through empirical studies on En-Es, En-Fr, and the recently added experiments on Fr-En, our methods have consistently demonstrated substantial improvements for non-autoregressive S2UT models. As reviewer AuR5 noted, “This technique has wide applicability. All NAT S2S unit-based models can potentially benefit from it. Not to mention it requires no handcrafting rules for noise injection.” This endorsement underscores the potential and versatility of our approach in enhancing speech translation models.
> The method shows minimal improvement (only 0.3 BLEU) compared to BiP (Bilateral Perturbation) in the En-Fr task. Given the method significantly outperforms others in the En-De task, this result is perplexing.
>
The effectiveness of our normalization strategy is indeed dependent on the dataset. It is important to emphasize that our method should **primarily be compared with the CMLM baseline, rather than CMLM + BiP**, because our contribution lies in a self-supervised normalization strategy combined with a regularization strategy. These are completely orthogonal to BiP’s rule-based normalization approach. Our best system achieves an ASR-BLEU improvement of approximately 7 points on En-Es, 2 points on En-Fr, and 2.5 points on Fr-En over the CMLM baseline. In contrast, the improvement from BiP over the CMLM baseline is typically around or less than 1 point.
When comparing the improvements between En-Es and En-Fr, the 7-point increase for En-Es is indeed very impressive. However, this does not diminish the significance of the 2-point improvement on En-Fr. We believe three factors contribute to the substantial gain observed in En-Es:
1. **Dataset Size**: The En-Es dataset is smaller, which means that the impact of normalization on the dataset is more pronounced, leading to more noticeable improvements.
2. **Speech Length**: Spanish speech is, on average, longer than French speech, as indicated in Table 1. Longer sequences could be more susceptible to acoustic variations, which our normalization method effectively mitigates.
3. **Background Noise**: Spanish speeches from the CommonVoice dataset also contain background noise. Our proposed normalization strategy can remove this noise, thereby enhancing the clarity and quality of the speech data and benefiting the training of NAT models.
These factors combined suggest that while our method is broadly effective, its impact is particularly significant in scenarios where the data exhibits specific challenges that our approach directly addresses.
> For a minor suggestion, in Tables 3 and 4, ensure consistent significant figures. Additionally, it would be more appropriate to place Table 3 before Table 4.
>
Thanks for the suggestion! We will adjust it accordingly to make it more consistent and appropriate.
> How does the performance fare on the CVSS Fr-En task?
>
Thanks for the question. We kindly refer you to our response to all reviewers, where we show the results for the Fr-En task, as this was requested by multiple reviewers. Consistent with our paper’s results, DiffNorm also improves upon the baseline CMLM by more than 2.5 ASR-BLEU on Fr-En (and by more than 1.5 ASR-BLEU compared to CMLM+BiP). We hope this new set of results further validates the effectiveness of our method and addresses your concerns.
> How do you ensure that $z_0$ represents the desired linguistic information?
>
Since the Variational Autoencoder (VAE) and the diffusion model are trained in a self-supervised manner on the target feature, they achieve high reconstruction quality and maintain linguistic information when properly trained. This is evidenced by the ASR-BLEU scores for the reconstructed units (BL-Rec column in Table 4), which show that transcription from reconstructed speech units achieves performance comparable to the original units (48.5 vs. 48.7 for Spanish and 40.38 vs. 40.64 for French speech). If the reconstructed units failed to preserve linguistic information, the speech synthesized from such units would exhibit significantly worse ASR-BLEU results.
It is important to note that CVSS-C is not a large dataset. By expanding the dataset size, we anticipate even higher quality reconstruction and a more pronounced normalization effect that reduces acoustic variations while preserving the linguistic content of the speeches.
----
References:
[1] Lee et al., (2022). Textless speech-to-speech translation on real data.
[2] Huang et al., (2023). TranSpeech: Speech-to-Speech Translation With Bilateral Perturbation
[3] Tan, X. (2024). Lessons From the Autoregressive/Nonautoregressive Battle in Speech Synthesis.
---
Rebuttal 2:
Title: Invitation for Comments and Clarifications
Comment: Dear Reviewer DRdm,
We greatly value your feedback and have provided clarifications of your questions and additional experiments on the Fr-En task. To ensure that we have properly addressed your concerns, we would greatly appreciate it if you could review our responses and provide any further comments. We are looking forward to engaging with you before the discussion period ends.
Thank you for your time and consideration. | Summary: This paper proposes DiffNorm, a diffusion-based self-supervised method for speech data normalization, aiming to alleviate the multimodality problem in non-autoregressive speech-to-speech translation (NAT). DiffNorm consists of a VAE to reconstruct the speech feature and a diffusion model to add and remove noise in the latent vector. Experiments show that DiffNorm significantly improves NAT translation quality compared to baselines.
Strengths: 1. It is surprising to see pure data normalization improves NAT so much. ASR-BLEU improves 7 BLEU on En-Es direction with DiffNorm.
2. This technique has wide applicability. All NAT S2S unit-based models can potentially benefit from it. Not to mention it requires no handcrafting rules for noise injection.
3. Ablation studies on noise level and training of DiffNorm further provide users a general guide on how to adapt DiffNorm to their own dataset and model.
Weaknesses: 1. The multi-modal problem has two aspects: semantic and acoustic. DiffNorm seems able to reduce acoustic modalities. Unclear if DiffNorm can also do that on semantic modalities, i.e., multiple feasible translations for the same source input.
2. Classifier-free guidance combined with DiffNorm leads to worse performance than DiffNorm alone in Figure 4 and the authors ignore it in the text. Elaboration is needed here.
3. Lack baseline comparison with several latest S2ST models like TransSentence [1], PolyVoice [2], SeamlessM4T and etc.
[1] TranSentence: speech-to-speech Translation via Language-Agnostic Sentence-Level Speech Encoding without Language-Parallel Data.
[2] PolyVoice: Language Models for Speech to Speech Translation
[3] SeamlessM4T—Massively Multilingual & Multimodal Machine Translation
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Issues listed in weakness.
2. What is the role of VAE here? Can we drop it and directly use a diffusion model here for denoising?
3. Table 3 mentions larger dimension of latent vector brings better performance, why not use the original number of dimensions instead of compressing it?
4. Why improvement on En-Fr is not that significant compared to En-Es? How does language direction impact the performance?
5. Is there way to visualize the noise added on the latent vectors? Like what does it mean for the audio?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Besides what I have already mentioned in the weakness, experiments in the paper are only conducted on En-X, but not reverse. Also, it would be interesting to see how DiffNorm works on non-European languages like Chinese and Japanese.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the time and effort you have dedicated to reviewing this paper. We are excited to see that you find our method effective and widely applicable! Below are our responses to your comments:
> DiffNorm seems able to reduce acoustic modalities. Unclear if DiffNorm can also do that on semantic modalities
>
Like previous studies [1,2], DiffNorm also targets acoustic modalities. By injecting random Gaussian noise into all positions, DiffNorm effectively removes background noise and simplifies acoustic variations while preserving linguistic information. However, due to its design, DiffNorm is unable to simplify semantic modalities. To address this limitation, we plan to explore methods to simplify semantic modalities for speech units in future work, with autoregressive output generation as a potential starting point [3,4]. Although the challenge of semantic multimodality persists, we believe that mitigating acoustic variations is crucial, and we have demonstrated significant improvements with our method.
> Classifier-free guidance combined with DiffNorm leads to worse performance than DiffNorm alone in Figure 4 and the authors ignore it in the text
>
Thank you for bringing this to our attention! We will include a discussion on this issue in our revised draft. As you may observe from Figure 4 and Table 7 (in the Appendix), CG performs poorly when the number of decoding iterations is small (e.g., 5 iterations). This occurs because, during inference, the logits are slightly perturbed in each decoding iteration (refer to eq.7). In the early decoding iterations, the probability difference ($p_{orig} - p_{uncond}$) may provide poor guidance, **as $p_{uncond}$, modeled from a sequence of [MASK] tokens, is essentially random**. In later iterations, $p_{uncond}$ becomes more useful because the sequence is already partially filled. When the number of iterations is larger, only a few most confident predictions are retained, allowing the model to disregard the early iteration’s negative influence from $p_{uncond}$ . However, when the total number of decoding iterations is small, more incorrect predictions are retained, which negatively impacts future generation.
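The guidance rule described in this answer can be illustrated with a minimal NumPy sketch. This is our own toy re-implementation of classifier-free-guidance-style logit combination, not the paper's exact eq. 7; the guidance weight `w`, the vocabulary size, and the logit values are illustrative assumptions:

```python
import numpy as np

def softmax(logits, axis=-1):
    # numerically stable softmax
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cfg_probs(logits_cond, logits_uncond, w=1.5):
    # classifier-free-guidance-style combination: move the unconditional
    # logits further in the direction suggested by the conditioning
    guided = logits_uncond + w * (logits_cond - logits_uncond)
    return softmax(guided)

# toy vocabulary of 4 tokens at a single decoding position
p = cfg_probs(np.array([2.0, 0.1, 0.0, -1.0]),
              np.array([0.5, 0.4, 0.0, -0.5]))
```

With `w > 1` the probabilities are pushed further in the direction the condition suggests; if the unconditional logits come from an all-[MASK] sequence, that direction can be noisy in early decoding iterations, which matches the failure mode at small iteration counts discussed in this answer.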
> Lack baseline comparison with several latest S2ST models like TransSentence, PolyVoice, SeamlessM4T and etc
>
Thank you for bringing these notable works to our attention. We will incorporate a discussion to compare with them in our revised draft. Overall, we believe our contribution is orthogonal to these baselines.
For TransSentence, they employ an auto-regressive model based on the Transformer architecture, with its major novelty being the use of language-agnostic sentence-level encoding. Since they tested on the same dataset as ours, we can directly compare our results with theirs. As indicated in their result table (Table 2), our performance is superior: En→Fr (17.54 vs. 14.69), En→Es (19.49 vs. 18.9), and Fr→En (19.53 vs. 16.59).
For PolyVoice and SeamlessM4T, both utilize much larger training datasets and are based on autoregressive modeling and utilize multitask training (though SeamlessM4T incorporates NAR T2U decoding). Given these differences, a direct comparison with these systems may not be fair. We believe our contributions in speech normalization and CG regularization for non-autoregressive speech-to-speech translation are distinct from these baselines.
Nevertheless, we appreciate your pointing out these related works, and we will ensure to add a more thorough discussion in our paper to address these comparisons.
> What is the role of VAE here? Can we drop it and directly use a diffusion model here for denoising?
>
The role of the Variational Autoencoder (VAE) in our framework is twofold: (1) to reduce the feature dimension, making it more efficient for diffusion model training, and (2) to regularize the latent space with Gaussian constraints, which is crucial for the training of the diffusion model. The necessity of the VAE, particularly (2), is underscored in our ablation study. As demonstrated in Table 3, the reconstruction quality is significantly compromised when the Gaussian constraint is not applied, highlighting the essential role of the VAE in our methodology.
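The Gaussian constraint mentioned here is, in a standard VAE, enforced by a KL penalty on a diagonal Gaussian posterior, sampled via the reparameterization trick. The following is a minimal sketch of those two pieces (our illustration, not the paper's code; the 8-dimensional toy latent is an assumption):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    # sample z ~ N(mu, sigma^2) via the reparameterization trick
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    # KL( N(mu, diag(sigma^2)) || N(0, I) ): the Gaussian constraint term
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)

rng = np.random.default_rng(0)
mu, log_var = np.zeros(8), np.zeros(8)   # toy 8-d latent
z = reparameterize(mu, log_var, rng)
# a posterior equal to the standard-normal prior incurs zero KL penalty
kl = kl_to_standard_normal(mu, log_var)
```

The KL term is what keeps the latent space close to a standard normal, which in turn makes it a suitable input distribution for diffusion training.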
> Table 3 mentions larger dimension of latent vector brings better performance, why not use the original number of dimensions instead of compressing it?
>
Yes, technically it is possible to train a Variational Autoencoder (VAE) that maps features to their original dimension, but this approach would be highly inefficient. The larger the dimension, the more computationally expensive it becomes to train and perform inference with diffusion models. Utilizing the original dimension would significantly prolong the training process and increase costs for the diffusion model, with only marginal improvements in performance. Therefore, we have limited our experiments to a maximum dimension of 128.
Moreover, while a dimension of 128 provides the best reconstruction quality, using a smaller dimension for the latent space could potentially achieve similar downstream results. This can be accomplished by appropriately adjusting the T value, which controls the amount of noise injection. We plan to explore the impact of varying the latent dimension and adjusting the T value in future research to optimize performance and efficiency.
---
Rebuttal 2:
Title: Rebuttal by Authors (Cont'd)
Comment: This comment follows from **Rebuttal by Authors**.
> Why improvement on En-Fr is not that significant compared to En-Es? How does language direction impact the performance?
>
Our method has demonstrated effective performance on both language pairs, with a notable 2 BLEU point improvement for En-Fr, which is considered substantial. However, the more dramatic improvement observed in the En-Es pair can likely be attributed to specific characteristics of the dataset. Three key factors may contribute to this significant enhancement:
1. **Dataset Size**: The En-Es dataset is smaller, which means that the impact of normalization on the dataset is more pronounced, leading to more noticeable improvements.
2. **Speech Length**: Spanish speech is, on average, longer than French speech, as indicated in Table 1. Longer sequences could be more susceptible to acoustic multimodality, which our normalization method effectively mitigates.
3. **Background Noise**: The Spanish speeches from the CommonVoice dataset contain background noise. Our proposed normalization technique is capable of removing such noise, thereby enhancing the clarity and quality of the speech data.
> Is there way to visualize the noise added on the latent vectors? Like what does it mean for the audio?
>
For visualization, please refer to Figure 5, which displays the log-mel spectrogram of the reconstructed speech. The extent of corruption in the speech varies based on the level of noise injection. As the noise injection level changes, the speech feature can be quantized into speech units that produce sounds ranging from completely unrecognizable noise to those closely resembling the original audio.
> experiments in the paper are only conducted on En-X, but not reverse. Also, it would be interesting to see how DiffNorm works on non-European languages like Chinese and Japanese.
>
To address the concern regarding the X-En translation direction, we have conducted additional experiments for the Fr-En pair. We invite you to refer to our general response to all reviewers, where the results are detailed in the accompanying table. We observe consistent improvements in the Fr-En direction with DiffNorm, which we hope will further convince reviewers of the effectiveness of our proposed strategy.
We agree that extending our method to non-European languages would be intriguing, and our self-supervised approach should be readily applicable to monolingual Chinese or Japanese speech features. However, we are currently unaware of a suitable speech-to-speech translation (S2ST) dataset for English-Chinese or English-Japanese pairs. If you could point us to such datasets, we would be eager to include them in our future studies!
------
References:
[1] Lee et al., (2022). Textless speech-to-speech translation on real data.
[2] Huang et al., (2023). TranSpeech: Speech-to-Speech Translation With Bilateral Perturbation
[3] Ghazvininejad et al., (2019). Mask-Predict: Parallel Decoding of Conditional Masked Language Models
[4] Gu et al., (2018). Non-Autoregressive Neural Machine Translation
---
Rebuttal 3:
Title: Thanks for the author response
Comment: I will keep my score. | Summary: The authors introduce a process aiming to simplify the target distribution of speech-to-speech translation. This process uses a VAE model to map features to a latent space, followed by a diffusion model to normalize the features in the latent space. The authors use the generated dataset to train a non-autoregressive CMLM model on the CVSS-C dataset to validate its effectiveness.
Strengths: 1. The authors propose a novel speech normalization method using Denoising Diffusion Probabilistic Models.
2. The quality gain in En-Es direction is impressive.
Weaknesses: 1. I am confused by the rationale behind using a diffusion model to normalize the speech representation. The authors first add noise to the speech representation (Forward Process) and then remove the noise (Backward Process). This design seems awkward to me. Why do you think this process of adding and then removing noise can help with normalization?
2. Using a Variational Autoencoder to map features to a latent space seems contradictory to the motivation of reducing data multimodality. The VAE provides an indefinite mapping, which may hinder efforts to reduce multimodality. Additionally, $z_0 = f(h; \theta_{enc})$ is not correct; the VAE provides a distribution over the latent space. A more suitable expression would be $z_0 \sim p(z; f(h; \theta_{enc}))$.
3. The experiments are only conducted on synthesised dataset. As the paper's contribution is to alleviate the multi-modality problem in data, conducting experiments on a real speech-to-speech dataset, like [1], is much better to support the major claims.
4. The baseline results (Conformer in En-Es) seem quite low compared to the reported results in the literature.
[1] Lee et al., Textless speech-to-speech translation on real data.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. A simple CG method in En-Es can bring an improvement of about 4.5 BLEU (Table 2, Line 7); any reasons behind it?
2. In Table 4, Acc-Rec, BL-Rec, and BL-Dn seem highly correlated when applying your method. BL-Dn performs well with minor perturbation and vice versa. However, when there is no perturbation, BL-Dn degrades dramatically. What is the reason for this? Is there a borderline value of $T$ that separates these two regions? I think this exploration is important for understanding the effectiveness of using the diffusion process to normalize speech.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The authors have addressed the limitations in Appendix E.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the time and effort you've invested in reviewing our paper. We are grateful for your recognition of our approach's novelty and the significant improvements it offers. Below are our responses to your comments:
> Why do you think this process of adding and then removing noise can help with normalization?
>
By injecting noise and training models to denoise, Diffusion Models are capable of recovering noisy features. Due to the acoustic multimodality issue, where the same speech under varying acoustic conditions can exhibit slightly different speech features (refer to Figure 1(b) in [2] for examples), these models are particularly useful. Once well-trained, a Diffusion Model uses the prior information from the training data to reconstruct speech features. This is visually represented in Figure 5, where the more noise is injected, the more pronounced the "smooth" effect becomes. Thus, by corrupting the original audio and reconstructing it, we can mitigate its acoustic variation. Additionally, this process helps remove background noises when the original audio is noisy. Lastly, the self-supervised nature of diffusion-based normalization makes it widely applicable, as it only requires monolingual speech data and avoids the need for manual rules. This advantage was also acknowledged by reviewer AuR5.
Beyond the normalization use case, the forward-backward process of the Diffusion Model has shown great potential in producing more robust and consistent features for other tasks as well, such as adversarial robustness [3], speech enhancement [6], and more.
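The forward (noising) half of this add-then-remove process can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's implementation: the linear beta schedule, the number of steps, and the latent size are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# illustrative linear beta schedule (an assumption, not the paper's values)
T_max = 1000
betas = np.linspace(1e-4, 2e-2, T_max)
alpha_bars = np.cumprod(1.0 - betas)

def forward_noise(z0, t):
    # q(z_t | z_0): partially corrupt a latent with Gaussian noise;
    # a small start time t keeps most of z0, a large t approaches pure noise
    eps = rng.standard_normal(z0.shape)
    return np.sqrt(alpha_bars[t]) * z0 + np.sqrt(1.0 - alpha_bars[t]) * eps

z0 = rng.standard_normal(256)          # toy "VAE latent"
z_mild = forward_noise(z0, 10)         # mild corruption (cf. the T=10 ablation)
z_heavy = forward_noise(z0, 500)       # heavy corruption
```

Normalization then runs the learned reverse process from $z_t$ back to a denoised estimate of $z_0$: the larger the start time $t$, the more acoustic detail is discarded and replaced by the model's prior.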
> The VAE provides an indefinite mapping, which may hinder efforts to reduce multimodality.
>
Thank you for the insightful discussion on VAEs! We agree that VAEs transform inputs into a probability distribution over a latent space which provides an indefinite mapping. However, such latent space is smoother and more regularized due to the Gaussian constraints. In the end, whether VAEs help reduce the multimodality depends on the nature of input data. We believe your point is valid that VAE does not guarantee the reduction of multimodality **given very complex data distribution.** However, in the context of speech translation, where prosody information is generally consistent, we utilize the regularized latent space to capture linguistic information and filter out unwanted details, thereby mitigating acoustic multimodality.
Additionally, we are utilizing a combination of Variational Autoencoders (VAEs) and a Diffusion Model to achieve our objectives. The denoised features produced by the Diffusion Model are cleaner and more robust. Our empirical results have further verified the effectiveness of our DiffNorm strategy, demonstrating its capability to enhance the quality and consistency of the processed speech data.
> A more suitable expression would be $z_0 \sim p(z; f(h; \theta_{enc}))$
>
Thanks for pointing out the notation issue, we have corrected the formula in the draft.
> conducting experiments on a real speech-to-speech dataset, like [1], is much better to support the major claims
>
Thank you for the suggestion. Due to resource constraints, we are unable to conduct further experiments on the larger-scale data used by Meta AI's researchers in [1]. However, we would like to highlight the following points:
1. We have benchmarked our method against the speech normalization method proposed in [1] using the same dataset (CVSS). Our results, as shown in Table 2, demonstrate that our method is more effective.
2. A significant portion of the data in [1] is mined S2ST data following [4], sourced from the same CommonCrawl project as the French and Spanish data used in our study. This similarity in data sources leads us to anticipate that our diffusion-based normalization will also be effective on the dataset used in [1].
3. Additionally, we have conducted extra experiments for the French-English (Fr-En) translation direction and have provided additional data points that further validate the effectiveness of our method. For detailed results on Fr-En, please refer to our general response to all reviewers.
> The baseline results (Conformer in EnEs) seem quite low compared to the reported results in the literature
>
As indicated in the caption of Table 2, our baseline results for the Conformer-based model are sourced from the original TRANSPEECH paper (https://arxiv.org/pdf/2205.12523), specifically corresponding to the Basic Conformer Model (ID=3 in Table 1 of the TRANSPEECH paper). **Could you please provide clarification regarding the differing results you have observed in the literature?**
---
Rebuttal 2:
Title: Rebuttal by Authors (Cont'd)
Comment: This comment follows from **Rebuttal by Authors**.
> A simple CG method in En-Es can bring an improvement about 4.5 BLEU (Table 2, Line 7), any reasons behind it?
>
We are also intrigued by the significant improvement from CG on the En-Es dataset. We suspect this may be due to two factors: (1) the relatively small size of the En-Es dataset, which contains fewer than 80k data points for training, and (2) the longer sequence lengths of Spanish speech, averaging 256 tokens in the training set and 308 in the test set, which worsen the acoustic multimodality issue. CG operates by constraining the model to learn exclusively from the distribution of the target (Spanish) speech units, leading to a more consistent distribution for translation during inference. **This is particularly beneficial for longer target sequences.**
To further support our claim, we recently extended our exploration of CG to English-German translation using the WMT14 dataset. Following the same settings as the CMLM model [5] and incorporating our proposed CG for the MT task, we observed that while the CMLM+CG model does not show significant improvement using the full WMT14 dataset, it does enhance performance by approximately 1 BLEU point when the training data is filtered to only include sequences longer than 60 tokens and the test data to only include sequences longer than 30 tokens, as demonstrated below:
| Model | T=5 | T=10 | T=15 |
|-------------------|------|------|-------|
| CMLM | 19 | 20.2 | 20.5 |
| CMLM+CG | 19.8 | 20.8 | 21.04 |
For speech-to-speech translation, which involves much longer unit sequences, the effect of CG is more noticeable, therefore achieving a large improvement over the baseline.
> …when there is no perturbation, BL-Dn degrades dramatically. What is the reason for this? Is there a borderline value of $T$ that separates these two regions?
>
We believe it is dataset-dependent to judge whether there is a borderline value of T that has drastic improvement. We performed more ablation studies using T=10 and T=30 as start time for noise injection and the result is shown below:
| Start Time | Acc-Rec | BL-Dn |
| --- | --- | --- |
| T=10 | 93.8 | 15.98 |
| T=30 | 91.6 | 17.29 |
As indicated in the result table, with a very small T=10, the reconstruction accuracy is approximately 0.94, leading to a downstream ASR-BLEU score of 15.98. This represents a significant improvement compared to the baseline CMLM result of 12.58. The accuracy of 0.94 suggests that around 6% of tokens differ after reconstruction, which is likely due to the regularization effect of the VAE model, as T is too small for the diffusion model to cause reconstruction errors. This implies that the Spanish speech dataset may contain significant acoustic variations and noisy information, which the VAE model can already partially remove. With a larger T, the diffusion model further cleans up the representation, resulting in even higher downstream results (as observed from T=10 to T=100, where there is an increase in ASR-BLEU score).
-----
References:
[1] Lee et al., (2022). Textless speech-to-speech translation on real data.
[2] Huang et al., (2023). TranSpeech: Speech-to-Speech Translation With Bilateral Perturbation
[3]Chen et al., (2024). Robust Classification via a Single Diffusion Model
[4] Duquenn et al., (2021). Multimodal and Multilingual Embeddings for Large-Scale Speech Mining
[5] Ghazvininejad et al., (2019). Mask-Predict: Parallel Decoding of Conditional Masked Language Models
[6] Richter et al., (2023). Speech Enhancement and Dereverberation with Diffusion-based Generative Models
---
Rebuttal Comment 2.1:
Title: Invitation for Comments and Clarifications
Comment: Dear Reviewer g4Z7,
We greatly value your feedback and have provided clarifications and additional experiments and analysis. To ensure that we have properly addressed your concerns, we would greatly appreciate it if you could review our responses and provide any further comments. We are looking forward to engaging with you before the discussion period ends.
Thank you for your time and consideration. | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
DAT: Improving Adversarial Robustness via Generative Amplitude Mix-up in Frequency Domain | Accept (poster) | Summary: Motivated by observations of the different influence of the amplitude and phase of adversarial examples, this paper proposes a framework to generate better adversarial examples for adversarial training. Experiments verify the effectiveness of the proposed approach.
Strengths: 1. The motivation illustrated in the figure is clear and solid.
2. The experimental improvements are clear and consistent.
Weaknesses: 1. The proposed framework contains many modules, making it complex and difficult to implement.
2. Some descriptions are unclear and some details are missed.
See questions for details.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. In Figure 1, the authors claim that “The adversarial perturbation severely damages phase pattern and the frequency spectrum, while amplitude patterns are rarely impacted.” However, this comparison is not clear. It seems the amplitude patterns still change a lot in Figure 1. Both the patterns and the spectrum change. Could you provide more proof of this observation (e.g., results on more images and more datasets)?
2. In Line 88, the features induced from the amplitude and phase patterns of x are denoted as $h_a(\mathbf{x})$ and $h_p(\mathbf{x})$ respectively.
Does it indicate an assumption that some parts of the learned $h$ correspond to the amplitude features? Do we need to carefully design the network to explicitly accomplish this? It also makes the theoretical analysis in Sec. 3.4 difficult to understand.
3. As the proposed framework contains multiple modules and optimization steps, it is surprising that the additional time consumptions are small. Could you please kindly provide more explanation on this? Do we need any relatively complex optimization tricks? Furthermore, what is the memory cost during training and could you please present the corresponding comparison with the baseline methods?
4. It seems the mix-up operation is important in the whole framework. The adversarial amplitude and original amplitude are mixed by linear combination and the proportion is sampled from a uniform distribution. How do you make such a design? Have you ever tried other possible methods?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors discuss the limitations of this work in Sec. F.9 and say that the verification may be insufficient. No limitation of the proposed method itself is discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review. We have provided our detailed responses below.
---
**W1: Complexity**
**R1:** Our DAT comprises **only 2** modules: the trained model and the adversarial amplitude generator (AAG). Due to the AAG's simple four-layer linear architecture, DAT's time consumption remains comparable to existing methods. Most importantly, as shown in **Table C.1**, DAT significantly enhances model robustness against various adversarial attacks.
**Table C.1: Time Consumption and Memory Cost on CIFAR-10 with ResNet-18**
||PGD-AT|TRADES|ST|SCARL|DAT (Ours)
-|:-:|:-:|:-:|:-:|:-:
AE Generation Time (s)|155|155|275|166|157
Optimization Time (s)|32|32|45|55|61
Memory (MB)|2589|3586|4589|5836|5763
---
**W2: Unclear Descriptions and Missing Details**
**R2:** Please see the answers below.
---
**Q1: Impact of Adversarial Perturbation on Amplitude and Phase**
**A1:** In Figure 1, the phase (3rd column) and amplitude (5th column) spectra are affected by adversarial perturbation. Moreover, the phase patterns (2nd column), particularly within the red rectangle, are damaged, while the amplitude patterns (4th column) are rarely affected. Since models rely on patterns, especially semantics in phase patterns [i], to make predictions, AEs with damaged phase patterns hinder accurate prediction. Further results can be found in **Figures 1-4** of the **attached PDF** within the global **Author Rebuttal**.
[i] Yin et al. A Fourier perspective on model robustness in computer vision. NeurIPS 2019.
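As a side note on the decomposition referred to throughout this discussion: the amplitude and phase patterns come from a 2-D discrete Fourier transform, and recombining them recovers the image exactly. A minimal numpy sketch (illustrative only, not the authors' implementation):

```python
import numpy as np

def amplitude_phase(x):
    """Decompose an image (H, W) into amplitude and phase spectra via 2-D DFT."""
    spec = np.fft.fft2(x)
    return np.abs(spec), np.angle(spec)

def recombine(amplitude, phase):
    """Rebuild an image from an amplitude and a phase spectrum (inverse DFT)."""
    spec = amplitude * np.exp(1j * phase)
    return np.real(np.fft.ifft2(spec))

x = np.random.rand(32, 32)
amp, pha = amplitude_phase(x)
x_rec = recombine(amp, pha)
assert np.allclose(x, x_rec)  # the decomposition is lossless
```

Because the decomposition is lossless, swapping or mixing the amplitude spectrum while keeping the phase spectrum changes only the amplitude-carried information (color/style), which is the property the rebuttal's argument relies on.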
---
**Q2: Learned Features and Theoretical Analysis**
**A2:**
* **Learned Features:** Indeed, we assume $h$ can extract features from amplitude and phase patterns, respectively, since this is a **widely accepted assumption within feature learning theory [j, k, l]**. In practice, $h$ can extract phase patterns (semantics) and amplitude patterns (color and style) respectively [i, l], which can be readily **achieved using general networks in conjunction with the discrete Fourier transform, eliminating the need for specifically designed networks.** Such decoupling methods are commonly adopted. For instance, [i] represents Fourier information as amplitude and phase patterns, while [m] has demonstrated that CNNs (e.g., ResNet-50) can inherently capture these amplitude and phase patterns without meticulously designed components.
* **Sketch of Theoretical Analysis:** Given that DAT utilizes AAG to generate adversarial amplitude, which is then combined with the benign image's phase to augment data, it naturally follows that during AT, amplitude differences between augmented and original datasets are more pronounced than phase differences (Assumption 3.1). As a result, to achieve convergence during empirical risk minimization, the model regularizes the weights corresponding to amplitude features (Theorem 3.2), thereby reducing their influence on predicted labels (Corollary 3.3), and consequently shifting the focus towards phase features. We hope this explanation addresses your questions.
[j] Ilyas et al. Adversarial examples are not bugs, they are features. NeurIPS 2019.
[k] He et al. Data augmentation revisited: Rethinking the distribution gap between clean and augmented data. arXiv 2019.
[l] Xu et al. A Fourier-based framework for domain generalization. CVPR 2021.
[m] Chen et al. Amplitude-phase recombination: Rethinking robustness of convolutional neural networks in frequency domain. ICCV 2021.
---
**Q3: More Explanation on Time and Memory Costs**
**A3:** For clearer comparison, we separately analyze the time consumption for AE generation and model optimization (updating the model's parameters), and present the memory costs in **Table C.1**. As shown, the majority of time consumption in DAT is attributed to AE generation. Thanks to our efficient AE generation strategy, DAT exhibits comparable time consumption to baselines. Additionally, the AAG, a four-layer linear model, demands little training time. Since both benign and recombined samples, along with their AEs, are used in model optimization, DAT takes roughly twice the time for optimization compared to baselines, e.g., PGD-AT and TRADES.
---
**Q4: Mix-up Operation Design**
**A4:**
* **Linear Mix-up:** The linear mix-up operation is designed to preserve the energy of the amplitude spectrum and maintain the original amplitude information, since linear mix-up is a simple and effective method for augmenting data with less impact on the sample's original information [n, o]. Otherwise, the original amplitude could be compromised, thereby hindering accurate model predictions.
* **$\lambda$ Distribution:** To investigate the influence of $\lambda$, we conduct experiments on CIFAR-10 using ResNet-18 with $\lambda$ sampled from $\mathrm{Uniform}(0,1)$, $\mathrm{Beta}(1,1)$, and $\mathcal{N}(0,1)$. Given that $\mathcal{N}(0,1)$ can yield negative values, we transform $\lambda$ into $\mathrm{sigmoid}(\lambda)$. **Table C.2** suggests that the distribution type of $\lambda$ has little impact on the model's robust and natural performance.
**Table C.2: DAT with Different $\lambda$ Distributions**
Distribution|Natural|AA
-|:-:|:-:
Uniform|84.17%|51.36%
Beta|84.06%|51.24%
Gaussian|84.09%|51.28%
[n] Yun et al. Cutmix: Regularization strategy to train strong classifiers with localizable features. CVPR 2019.
[o] Berthelot et al. Mixmatch: A holistic approach to semi-supervised learning. NeurIPS 2019.
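The three $\lambda$ distributions compared in Table C.2 can be sketched as follows; `sample_lambda` and `mix_amplitude` are hypothetical helper names of ours, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_lambda(dist="uniform", rng=rng):
    """Sample the mix-up proportion under the three distributions of Table C.2."""
    if dist == "uniform":
        return rng.uniform(0.0, 1.0)
    if dist == "beta":
        return rng.beta(1.0, 1.0)        # Beta(1, 1) coincides with Uniform(0, 1)
    if dist == "gaussian":
        z = rng.normal(0.0, 1.0)
        return 1.0 / (1.0 + np.exp(-z))  # sigmoid maps N(0, 1) samples into (0, 1)
    raise ValueError(dist)

def mix_amplitude(amp_benign, amp_adv, dist="uniform"):
    """Linear mix-up of benign and adversarial amplitude spectra."""
    lam = sample_lambda(dist)
    return lam * amp_benign + (1.0 - lam) * amp_adv
```

All three variants keep $\lambda \in (0, 1)$, so the mixed spectrum stays a convex combination of the two amplitudes, consistent with the energy-preservation argument above.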
---
**Discussion of Limitations**
**R3:** Due to robust overfitting, DAT without AWP needs to be trained for a limited number of epochs, which restricts its performance. Moreover, while DAT primarily focuses on defending against adversarial attacks, other forms of image corruption (e.g., Gaussian noise and defocus blur) can also affect the model's performance. DAT requires further development to strengthen its protection against a broader spectrum of image corruptions. A comprehensive discussion will be included in the final version of the paper.
---
---
Rebuttal 2:
Comment: The rebuttal mostly addresses my concerns. Therefore, I decide to increase the score.
---
Rebuttal 3:
Comment: Dear Reviewer,
We greatly appreciate your responses and score improvements. Also, the final version of the paper will be revised based on your comments. | Summary: The paper introduces Dual Adversarial Training (DAT). This method enhances deep neural network resilience against adversarial attacks by employing generative amplitude mix-up in the frequency domain, focusing the model on phase patterns less impacted by such perturbations, and presenting an optimized adversarial amplitude generator for balancing robustness improvements with phase pattern retention. Experiments validate DAT's effectiveness against diverse adversarial attacks.
Strengths: 1. The paper is well-organized and easy to follow.
2. The experimental results show the effectiveness of the proposed method.
Weaknesses: 1. In the experiment, only decision space attacks are used to test the defense capability. Compared with decision space attack, feature space attack can better destroy the semantic information of benign images and obtain more powerful adversarial samples. It is recommended that some feature space attacks be added to further demonstrate the effectiveness of the proposed method, like [a] and [b].
[a] StyLess: Boosting the Transferability of Adversarial Examples, CVPR 2023
[b] Enhancing the Self-Universality for Transferable Targeted Attacks, CVPR 2023
2. The types of models used in the experiment are not rich enough. Line 258 of this paper states that "ResNet-18, WideResNet-34-10 (WRN-34-10), and WideResNet-28-10 (WRN-28-10) are used as the backbones". Line 269 says "Table 1 displays the results on CIFAR-10, CIFAR-100, and tiny-ImageNet using ResNet-18". These models are all based on ResNet architecture and have high structural similarities. To show that the method proposed in this paper is generalizable, the proposed method should be able to achieve good results on different surrogate and target models. Therefore, it is recommended to add more surrogate and target models, such as Inception_v3, ViT, etc.
3. This paper uses CIFAR-10, CIFAR-100, and Tiny ImageNet for experiments, but does not use ImageNet. However, ImageNet is the most widely used dataset in the CV field. The authors should provide additional experimental results on ImageNet for this paper's method.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Can the authors provide some results against feature space attacks?
2. Can the authors provide some results on other models and datasets, like ImageNet and ViTs?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have discussed the limitations of the method proposed in this paper. It is suggested that the authors provide some results on other models and datasets, like ImageNet and ViTs.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thorough and detailed reviews. Please find our responses below.
---
**W1: Additional Experiments against Feature Space Attacks**
**R1:** In **Table B.1** (more results in **Tables 1-4** of **attached PDF**), we present results assessing the defense capability against feature space attacks [a, b]. We select StyLess+MTDI (SMTDI) and StyLess+MTDSI (SMTDSI) from [a], along with DTMI-CE-SU (DCS) and DTMI-Logit-SU (DLS) targeted to class 1 from [b], and perform experiments on ImageNette as in [c]. We adopt surrogate ResNet-50 $\rightarrow$ targets [DenseNet-121, ViT-S] and surrogate DenseNet-121 $\rightarrow$ targets [ResNet-50, ViT-S] to execute black-box transfer attacks. AT on DenseNet-121 and ResNet-50 follows the same settings as Tiny-ImageNet in the paper, while ViT-S is trained adversarially as in [c]. **For the challenging SMTDSI attack, compared to the previous SOTA ST, our proposed DAT achieves an average robustness improvement of approximately 4.9% across all four scenarios**. Moreover, since DAT performs AT in pixel space, its performance against feature space attacks is currently less satisfactory [d], which we plan to improve in the future.
**Table B.1: Model's Performance against $\ell_{\infty}$ Threat with $\epsilon=\frac{16}{255}$ of Feature Space Attacks**
Method|SMTDI|SMTDSI|DCS|DLS
-|:-:|:-:|:-:|:-:
**ResNet-50$\rightarrow$DenseNet-121**
ST|36.46%|31.68%|76.61%|73.67%
SCARL|36.15%|31.31%|76.23%|73.46%
DAT (Ours)|**41.62**%|**37.61**%|**79.12**%|**76.41**%
**ResNet-50$\rightarrow$ViT-S**
ST|46.42%|42.34%|86.27%|76.53%
SCARL|46.32%|42.31%|86.71%|77.26%
DAT (Ours)|**51.83**%|**46.76**%|**88.45**%|**80.41**%
**DenseNet-121$\rightarrow$ResNet-50**
ST|38.67%|33.29%|78.93%|66.83%
SCARL|38.17%|33.13%|78.70%|65.93%
DAT (Ours)|**43.38**%|**38.27**%|**82.12**%|**67.49**%
**DenseNet-121$\rightarrow$ViT-S**
ST|47.53%|43.07%|87.83%|78.41%
SCARL|47.24%|42.95%|87.40%|77.58%
DAT (Ours)|**52.06**%|**47.43**%|**89.12**%|**79.41**%
[a] Liang and Xiao. StyLess: boosting the transferability of adversarial examples. CVPR 2023.
[b] Wei et al. Enhancing the self-universality for transferable targeted attacks. CVPR 2023.
[c] Mo et al. When adversarial training meets vision transformers: Recipes from training to architecture. NeurIPS 2022.
[d] Xu et al. Towards feature space adversarial attack by style perturbation. AAAI 2021.
---
**W2: Additional Experiments with More Backbones**
**R2:** In **Table B.2**, we provide results on CIFAR-10 with Inception\_v3, DenseNet-121, and ViT-S. The settings for Inception\_v3 and DenseNet-121 remain consistent with those mentioned in the paper, while ViT-S follows [c]. Compared to the previous SOTA ST, **DAT achieves robustness improvements of 0.63%, 0.90%, and 0.61%** against AA across various backbones, demonstrating its versatility and effectiveness on a range of deep models.
**Table B.2: Experimental Results with More Backbones against $\ell_{\infty}$ Threat with $\epsilon=\frac{8}{255}$**
||PGD-AT||TRADES||MART||ST||SCARL||DAT (Ours)||
-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:
Backbone|Natural|AA|Natural|AA|Natural|AA|Natural|AA|Natural|AA|Natural|AA
Inception\_v3|85.26%|48.83%|86.38%|49.74%|83.41%|48.75%|86.75%|51.18%|84.43%|50.98%|**87.74**%|**51.81**%
DenseNet-121|86.34%|51.24%|86.92%|52.03%|84.11%|50.92%|58.63%|53.26%|85.15%|53.12%|**88.51**%|**54.16**%
ViT-S|81.86%|47.33%|81.95%|48.45%|80.31%|47.13%|82.42%|49.71%|80.85%|49.67%|**83.15**%|**50.32**%
---
**W3: Additional Experiments on ImageNet**
**R3:** In **Table B.3**, we show results on ImageNet-1K, adopting the settings and baseline results from [h]. It is worth noting that existing AT methods with multiple iteration steps for AE generation do not provide results on ImageNet-1K, since AT with a 10-step AE generation on ImageNet-1K needs **approximately one week**. Because single-step AE generation uses much less time, some fast AT methods based on FGSM [e, f, g, h] typically perform experiments on ImageNet-1K. Consequently, considering the time limitation of the rebuttal, to address your suggestion of providing results on ImageNet-1K, we combine DAT with several fast AT methods [e, f, g, h] and compare its performance with these methods. **Compared to the previous SOTA FGSM-PGK, DAT combined with FGSM-PGI obtains robustness improvements of 2.1% and 1.4% against PGD-10 and PGD-50**, respectively.
**Table B.3: Experimental Results on ImageNet-1K with ResNet-50 against $\ell_{\infty}$ Threat with $\epsilon=\frac{4}{255}$**
Method|Natural|PGD-10|PGD-50
-|:-:|:-:|:-:
PGD-AT|59.19%|35.87%|35.41%
Free-AT (m=4) [e]|63.42%|33.22%|33.08%
FGSM-RS [f]|63.65%|35.01%|32.66%
FGSM-PGI [g]|64.32%|36.24%|34.93%
FGSM-PGK [h]|66.24%|37.13%|35.70%
DAT+Free-AT (m=4) (Ours)|**66.36**%|**36.27**%|**36.12**%
DAT+FGSM-RS (Ours)|**66.49**%|**37.43**%|**35.76**%
DAT+FGSM-PGI (Ours)|**67.12**%|**39.24**%|**37.13**%
[e] Shafahi et al. Adversarial training for free!. NeurIPS 2019.
[f] Wong et al. Fast is better than free: Revisiting adversarial training. ICLR 2020.
[g] Jia et al. Prior-guided adversarial initialization for fast adversarial training. ECCV 2022.
[h] Jia et al. Improving fast adversarial training with prior-guided knowledge. TPAMI 2024.
---
**Q1: Results against Feature Space Attacks**
**A1:** Please refer to **Table B.1** in **R1** and **Tables 1-4** in the **attached PDF**.
**Q2: Results on Other Models and Datasets**
**A2:** For results on additional backbones, please see **Table B.2** in **R2**. For experiments on ImageNet-1K, kindly refer to **Table B.3** in **R3**.
---
---
Rebuttal Comment 1.1:
Title: Follow-up on Submission1111 Rebuttal - Urgent Request for Feedback
Comment: Dear Reviewer,
We are truly sorry to trouble you again, but with **less than 8 hours** left before the rebuttal discussion period ends at ***11:59 PM AoE today***, we are writing with a humble and heartfelt request for your feedback on our rebuttal. We have put our utmost effort into addressing your invaluable comments and suggestions. We would be deeply grateful if you could take a few moments to review our responses and share any further thoughts. If our responses are satisfactory, we would be sincerely thankful if you might consider reflecting this in your review score.
We greatly value your guidance, and we are genuinely appreciative of your time and consideration.
Thank you so much for your continued support.
Best regards,
Submission1111 Authors
---
Rebuttal Comment 1.2:
Comment: Thank you for your responses!
For W1. could you please explain why you use a different dataset and a different threat model (a black-box setting) than those of Table 1 in the main paper?
For W3, could you please explain why you use different attacks than those of Table 1 (FGSM, PGD-20, PGD-100, C\&W, and AA) in the main paper?
---
Rebuttal 2:
Title: Follow-up on Submission1111 Rebuttal Response
Comment: Dear Reviewer,
We hope you are doing well. We sincerely appreciate the detailed feedback and the time you have devoted to reviewing our submission.
In our recent rebuttal, we carefully address your concerns and provide detailed responses:
- **Additional Experiments on Feature Space Attacks**: We conduct experiments on several **feature space attacks**, specifically including methods like **SMTDI/SMTDSI [a] and DCS/DLS [b]** as you suggested, to demonstrate the robustness of our approach against more challenging adversarial attacks. These results show that our method performs better than previous SOTA techniques in most scenarios, despite being originally designed for pixel space attacks.
- **Incorporating Diverse Backbones**: We expand our experiments to include **additional backbones such as Inception_v3 and ViT-S**, demonstrating that our method generalizes well across different network architectures. The results confirm that our approach maintains its effectiveness and robustness across these diverse models.
- **Extensive Experiments on ImageNet-1K**: We also provide **extensive results on the ImageNet-1K dataset**, a critical benchmark in the CV field. Given the time constraints, we combine our method with fast adversarial training techniques like FGSM-PGK and FGSM-PGI, achieving significant improvements in robustness compared to the baseline methods.
These comprehensive experiments are conducted with great care to address your concerns about the scope and applicability of our approach, and to demonstrate its generalization and robustness across different models and datasets.
With the rebuttal discussion period closing tomorrow, ***Aug 13, 11:59 PM AoE***, we would deeply appreciate your input if there are any further questions or points that you feel need clarification. If our rebuttal has satisfactorily addressed your concerns, we humbly request that you consider reflecting this positively in your rating score.
Thank you very much for your continued support and thoughtful consideration.
Best regards,
Submission1111 Authors
---
Rebuttal Comment 2.1:
Title: Follow-up on Submission1111 Rebuttal - Request for Timely Feedback
Comment: Dear Reviewer,
We hope this message finds you well. We apologize for reaching out again, but with the rebuttal discussion period **ending in less than 24 hours** on ***Aug 13, 11:59 PM AoE***, we kindly request your timely feedback on our rebuttal.
To briefly restate our previous points in our rebuttal:
- We conduct additional experiments on **feature space attacks**, including the SMTDI/SMTDSI [a] and DCS/DLS [b] methods that you suggested. These experiments demonstrate the robustness of our approach against more challenging adversarial attacks.
- We expand our experiments to incorporate **diverse backbones**, such as Inception_v3 and ViT-S, showing that our method generalizes well across different network architectures.
- We provide **extensive results on the ImageNet-1K dataset**, which is a critical benchmark in the CV field. Our results show significant improvements in robustness compared to baseline methods.
These comprehensive efforts are made with great care to thoroughly address your concerns and demonstrate the generalization and robustness of our approach across different models and datasets.
If our rebuttal has satisfactorily addressed your concerns, we humbly request that you consider reflecting this positively in your rating score. We would be deeply grateful for your timely input.
Thank you very much for your continued support and thoughtful consideration.
Best regards,
Submission1111 Authors
---
Rebuttal 3:
Comment: Dear Reviewer,
We are truly grateful for your continued time and effort in further discussing our submission. Below is our detailed response to the two questions you raised.
---
> ***"For W1. could you please explain why you use a different dataset and a different threat model (a black-box setting) than those of Table 1 in the main paper?"***
+ **Dataset:**
The training dataset used in [a] and [b] is ImageNet-1K. We understand your concern and would like to clarify that existing adversarial training methods with multiple iteration steps for AE generation do not report results on ImageNet-1K, since AT with a 10-step AE generation on ImageNet-1K requires **approximately one week**. Additionally, these feature space attacks need to be performed on a real-world dataset, e.g., ImageNet. Given these considerations, we choose ImageNette, a widely used subset of ImageNet-1K in adversarial training [c], for the experiments in **W1**.
+ **Backbones:**
In [a] and [b], multiple backbones are adopted, including ResNet-50 and DenseNet-121. To better demonstrate the robust generalization of our method against feature space attacks, we select ViT-S, which **you mentioned in W2**, for the experiments in **W1**. It is important to note that the baselines in **W1** do not provide experimental results and settings for so many backbones, meaning we would need significant time to explore these methods' settings across different backbones. Furthermore, given the rebuttal's **6000-character** limit, it is challenging to present results for many backbones. Consequently, to balance these factors and provide a clear comparison, combining the considerations of both **W1** and **W2**, we select **ResNet-50, DenseNet-121, and ViT-S** for **W1**.
---
> ***"For W3, could you please explain why you use different attacks than those of Table 1 (FGSM, PGD-20, PGD-100, C&W, and AA) in the main paper?"***
The adversarial attacks we evaluate in **W3** follow [h]. To ensure a fair and consistent comparison on ImageNet-1K, we use these specific adversarial attacks to provide the results in **W3**.
---
We sincerely hope these clarifications address your concerns. If there are any further questions or if you require additional explanations, we are more than happy to provide them. We also hope that if our responses sufficiently address your concerns, you might consider reflecting this positively in your review score.
Title: Re: Official Comment by Reviewer KucH | Summary: This paper investigates a novel approach to improving adversarial training by performing data augmentation in the frequency domain. The authors propose a unique pipeline that jointly optimizes a classification network and a generator network. The generator is used to create adversarial noise, which is added to the amplitude component of the input to encourage the model to more accurately capture phase patterns. The authors conduct a comprehensive evaluation of their approach across multiple datasets and architectures, demonstrating its competitive performance.
Strengths: Comprehensive experiments on several datasets with different architectures.
A novel approach that improves adversarial training by performing data augmentation using a generator network in the frequency domain.
Weaknesses: In Table 1 and Table 2, the authors have only compared their approach with a few weak baselines. Since the proposed approach is based on data augmentation, the authors should consider comparing it with other data augmentation-based approaches, such as DAJAT[1] and [47]. Although DAJAT’s results are presented in Table 3, the table does not show the clean accuracy of the models, making it difficult to assess the accuracy-robustness tradeoff. In fact, DAJAT’s clean accuracy is significantly higher than that of the proposed approach. For instance, DAJAT’s clean accuracies on CIFAR-10 and CIFAR-100 (with ResNet-18) are 86.67% and 66.96%, respectively, while the proposed approach’s are 84.17% and 63.28%. [37]’s clean accuracy and AA robust accuracy with WRN-34-10 are 86.18% and 58.09%, respectively, while the proposed approach’s with WRN-34-10 are 86.78% and 56.46%.
The proposed pipeline is complicated, with a multi-part loss function and a requirement for joint optimization with a generator network. This complexity may necessitate extensive hyperparameter tuning to achieve satisfactory performance, making it a much more complex error-prone solution compared to data augmentation approaches such as [1] and [47].
Technical Quality: 3
Clarity: 3
Questions for Authors: No questions
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. Please find our responses below.
---
**W1: Natural Accuracy Compared to DAJAT [1]**
**R1:** Our DAT utilizes **only 1** augmentation per benign training sample to prioritize speed and simplicity in this paper, while **DAJAT applies 2 and 3 data augmentations, integrating AWP, SWA, and variable $\epsilon$ and $\alpha$**, to achieve the natural performance you mentioned. Moreover, DAT exhibits superior robustness relative to DAJAT, albeit with a trade-off in natural accuracy. To ensure a fair comparison under identical training settings, we employ the adversarial amplitude generator in DAT to produce 3 adversarial amplitudes, resulting in 3 recombined data samples for each benign sample. Using these 3 recombined samples, we extend our DAT to include 2 and 3 augmentations, integrating variable $\epsilon$ and $\alpha$, AWP, and SWA as in DAJAT [1]. Furthermore, to illustrate the trade-off between natural and robust accuracy, we select the checkpoint with the best performance on natural validation data. As illustrated in **Table A.1**, where DAJAT's performance is taken from [1], with 110 and 200 training epochs, we present a comparison of natural accuracy and robustness (AA) between the two methods on CIFAR-10 and CIFAR-100 with ResNet-18. When evaluated under the same augmentation conditions and experimental settings, our DAT outperforms DAJAT in both robustness and natural accuracy, with a smaller increase in time consumption per training epoch.
**Table A.1: Comparison of DAJAT and DAT with the Same Settings**
|||DAJAT|||DAT (Ours)|||
-|:-:|:-:|:-:|:-:|:-:|:-:|:-:
Dataset|#Aug.|Natural|AA|Time (s/epoch)|Natural|AA|Time (s/epoch)
**110 training epochs**
CIFAR-10|2|85.99%|51.48%|295|**86.38**%|**51.92**%|318
CIFAR-10|3|86.67%|51.56%|383|**86.81**%|**52.13**%|407
CIFAR-100|2|66.84%|27.32%|300|**67.12**%|**27.95**%|323
CIFAR-100|3|66.96%|27.62%|407|**67.43**%|**28.16**%|429
**200 training epochs**
CIFAR-10|2|85.71%|52.50%|295|**86.18**%|**53.16**%|318
CIFAR-10|3|86.24%|52.66%|383|**86.63**%|**53.31**%|407
CIFAR-100|2|65.45%|27.69%|300|**66.32**%|**28.35**%|323
CIFAR-100|3|65.63%|27.92%|407|**66.74**%|**28.66**%|429
---
**W2: Robust Accuracy Compared to [47]**
**R2:** We infer that you are referring to Rebuffi et al. [47] instead of ST [37]. In our DAT, to achieve a better trade-off between the model's performance and training time, for the results of DAT in Table 2 we use **only 5** iteration steps for AE generation of both benign and recombined samples, whereas [47] employs **10** iteration steps for AE generation of both benign and augmented samples with SWA. As reflected in **Table A.2**, when integrated with AWP and SWA, DAT delivers superior natural and robust performance compared to [47] with a 5-step iteration, using about half the training time of [47], as shown in **Table A.3**. Moreover, to facilitate a fair comparison with [47], we conduct experiments on CIFAR-10 using a 10-step iteration for DAT's AE generation. With identical iteration steps, **our DAT achieves an approximate 0.32\% improvement in robustness over [47]. For DAT with AWP and SWA, extending the iteration steps to 10 secures approximately a 1\% improvement in robustness and 2.1\% in natural accuracy over [47]**, thus highlighting a more favorable trade-off between generalization and robustness.
**Table A.2: Comparison between [47] and DAT with WRN-34-10**
Method|#Iter. Step|Natural|AA
-|:-:|:-:|:-:
[47]|10+10|86.18%|58.09%
DAT (Ours)|5+5|**86.78**%|**56.46**%
DAT (Ours)|10+10|**86.23**%|**58.41**%
DAT+AWP+SWA (Ours)|5+5|**88.65**%|**58.12**%
DAT+AWP+SWA (Ours)|10+10|**88.28**%|**59.07**%
**Table A.3: Comparison of Time Consumption per Epoch with WRN-34-10 on CIFAR-10**
Method|#Iter. Step|Time (s)
-|:-:|:-:
[47]|10+10|2884
DAJAT [1]|5+5|1532
DAT (Ours)|5+5|1551
---
**W3: Complexity**
**R3:**
* **For DAJAT [1]:**
Our proposed DAT involves **only 2** hyperparameters, $\beta$ and $\omega$, which need to be explored through experiments.
In contrast, **DAJAT involves additional complexity beyond the balance parameters $\beta$ and $\omega$: it requires careful schedules for the adversarial perturbation $\epsilon$, the step size $\alpha$, and the number of iteration steps, since it trains with variable $\epsilon$, $\alpha$, and iteration steps**. Moreover, these schedules must be adjusted continuously throughout training in DAJAT.
* **For [47]:**
In the case of [47], a combination of data augmentation strategies is employed alongside the traditional TRADES method with **10** steps for AE generation and variable $\alpha$, whereas our proposed DAT requires **only 5** steps per sample to generate AE. Consequently, as shown in **Table A.3**, **[47] demands nearly double the training time compared with our DAT when augmentations are applied to benign samples**.
Despite its multi-component structure and joint optimization, our method remains simpler than DAJAT and [47] in terms of both hyperparameter tuning and computational complexity.
---
---
Rebuttal Comment 1.1:
Title: Follow-up on Submission1111 Rebuttal Response
Comment: Dear Reviewer,
We hope this message finds you well. We sincerely appreciate the time and effort you invest in reviewing our submission.
In our recent rebuttal, we carefully address your feedback and provide detailed responses to your concerns:
- **Expanded Comparison with Data Augmentation-Based Approaches**: We conduct additional experiments to ensure a **fair comparison** with more data augmentation-based methods, specifically including **DAJAT and Rebuffi et al. [47]**. We provide an in-depth analysis demonstrating that our approach not only **enhances robustness** but also achieves **comparable or better natural accuracy** under similar conditions.
- **Comprehensive Analysis of Accuracy-Robustness Trade-off**: We offer a detailed examination of both **natural and robust accuracy**, showing how our method **balances these two metrics effectively**. For instance, our method **outperforms DAJAT in robustness** while maintaining **competitive natural accuracy**, as outlined in our comparison tables.
- **Clarification on Pipeline Complexity**: We clarify concerns regarding the **complexity of our pipeline**, explaining that while our method involves multiple components, it is designed to require **fewer hyperparameters and simpler optimization steps** than comparable methods. We also demonstrate that our approach is **more efficient** in terms of **training time and computational cost**, particularly when compared to methods like **[47]**, which require longer iteration steps for adversarial example generation.
As the rebuttal discussion period ends tomorrow, ***Aug 13, 11:59 PM AoE***, we would be truly grateful if you could let us know if there are any remaining questions or concerns regarding our submission. If our responses have satisfactorily addressed your concerns, we kindly hope you might consider reflecting this in your review score by raising it.
Thank you once again for your thoughtful feedback and for considering our request.
Best regards,
Submission1111 Authors
---
Rebuttal Comment 1.2:
Title: Type of augmentations
Comment: Thank the authors for the additional experiments. Could the authors clarify what they mean by "DAJAT applies 2 and 3 data augmentations"? Additionally, in the statement "we extend our DAT to include 2 and 3 augmentations", what are the specific types of augmentations used?
---
Rebuttal 2:
Title: Re: Type of augmentations
Comment: Dear Reviewer,
Thank you very much for your prompt response and for your continued engagement with our submission. We appreciate the opportunity to clarify our approach, and below we provide detailed responses to the two points you raised.
---
> ***"DAJAT applies 2 and 3 data augmentations"***
Regarding your question on DAJAT [1], the natural accuracy that you referenced is achieved by generating 2 or 3 augmented data samples for each benign sample using the **AutoAugment** strategy. These benign and augmented samples, along with their corresponding adversarial examples, are then utilized in the training process.
---
> ***"we extend our DAT to include 2 and 3 augmentations"***
In our Dual Adversarial Training (DAT) approach, the recombined data with mixed amplitude spectra serve as augmented samples for each benign sample. **To ensure a fair comparison with DAJAT [1], we employ the adversarial amplitude generator to produce 3 distinct adversarial amplitude spectra. These spectra are subsequently mixed with the amplitude spectrum of the benign sample. By combining the phase spectrum of the benign sample with each of these mixed amplitude spectra, we generate 3 recombined data samples.** These are considered as 3 augmentations of the original benign sample. For a more detailed explanation of the augmentation process, including formulas, please refer to the subsequent comment titled **(Continued) Re: Type of augmentations**.
---
Using these recombined data, we conduct the experiments detailed in **Table A.1**, providing a comparison with DAJAT using 2 and 3 augmentations. Our results indicate that our DAT method offers a superior accuracy-robustness trade-off under identical experimental conditions.
We hope this explanation clarifies your concerns. We greatly appreciate your valuable feedback and look forward to any further questions or comments you may have.
---
---
Rebuttal Comment 2.1:
Comment: Thank the authors for the clarifications which have addressed my concerns. I have adjusted my rating accordingly.
---
Rebuttal 3:
Title: (Continued) Re: Type of augmentations
Comment: To provide a clearer understanding of the statements "DAJAT [1] applies 2 and 3 data augmentations" and "we extend our DAT to include 2 and 3 augmentations", we offer a detailed explanation using a benign sample $(\mathbf{x},y)\in\mathcal{D}$ as an example.
---
### **For adversarial training with model $f$ using DAJAT [1], augmented data is generated as follows:**
+ **2 augmentations (#Aug. 2)**:
+ Generate augmented data via AutoAugment:
$\mathbf{x}_1=\mathrm{AutoAugment}(\mathbf{x}), \quad \mathbf{x}_2=\mathrm{AutoAugment}(\mathbf{x})$.
+ Generate adversarial examples for $\mathbf{x}$, $\mathbf{x}_1$, and $\mathbf{x}_2$ as $\mathbf{x}^{\prime}$, $\mathbf{x}_1^{\prime}$, and $\mathbf{x}_2^{\prime}$, respectively.
DAJAT then uses $\mathbf{x}$, $\mathbf{x}_1$, $\mathbf{x}_2$, $\mathbf{x}^{\prime}$, $\mathbf{x}_1^{\prime}$, and $\mathbf{x}_2^{\prime}$ for training.
+ **Similarly, DAJAT with 3 augmentations (#Aug. 3) follows the same procedure.**
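For concreteness, the "generate adversarial examples" step above can be sketched as an $L_\infty$ PGD attack. This is a minimal illustration under stated assumptions, not DAJAT's actual configuration: the model $f$ is replaced by a toy logistic classifier with a hand-written input gradient, and `eps`, `alpha`, and `steps` are placeholder values.

```python
import numpy as np

def pgd_attack(x, y, grad_fn, eps=0.05, alpha=0.02, steps=20):
    """L_inf PGD: repeatedly step along the sign of the loss gradient,
    projecting back into the eps-ball around the benign input x."""
    x_adv = x + np.random.uniform(-eps, eps, size=x.shape)  # random start
    for _ in range(steps):
        g = grad_fn(x_adv, y)                      # gradient of loss w.r.t. input
        x_adv = x_adv + alpha * np.sign(g)         # ascent step on the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project into the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)           # stay in the valid input range
    return x_adv

# Toy stand-in for model f: logistic regression with an analytic input gradient.
w = np.array([1.0, -2.0, 0.5])

def toy_grad(x, y):
    p = 1.0 / (1.0 + np.exp(-x @ w))   # sigmoid prediction
    return (p - y) * w                 # d(cross-entropy)/dx for this linear model

x = np.array([0.2, 0.7, 0.5])          # benign "sample"
x_adv = pgd_attack(x, 1.0, toy_grad)   # adversarial counterpart x'
```

The same attack is applied to each augmented view to obtain $\mathbf{x}_1^{\prime}$ and $\mathbf{x}_2^{\prime}$.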
---
### **For our DAT with IDFT $\mathcal{F}^{-1}(\cdot)$, model $f$, and adversarial amplitude generator $G$, recombined data (which can be regarded as augmented data) is generated as follows:**
+ **2 recombined data (#Aug. 2)**:
+ Generate adversarial amplitude:
$\mathcal{A}_1(\mathbf{x})=G(f(\mathbf{x}), \mathbf{z}_1), \quad \mathrm{where}\ \mathbf{z}_1 \sim \mathcal{N}(0,1)$,
$\mathcal{A}_2(\mathbf{x})=G(f(\mathbf{x}), \mathbf{z}_2), \quad \mathrm{where}\ \mathbf{z}_2 \sim \mathcal{N}(0,1)$.
+ Obtain recombined data:
$\hat{\mathbf{x}}_1=\mathcal{F}^{-1}(\lambda_1 \cdot \mathcal{A}_1(\mathbf{x})+(1-\lambda_1)\cdot \mathcal{A}(\mathbf{x}),\mathcal{P}(\mathbf{x})), \quad \mathrm{where}\ \lambda_1\sim \mathrm{Uniform}(0,1)$,
$\hat{\mathbf{x}}_2=\mathcal{F}^{-1}(\lambda_2 \cdot \mathcal{A}_2(\mathbf{x})+(1-\lambda_2)\cdot \mathcal{A}(\mathbf{x}),\mathcal{P}(\mathbf{x})), \quad \mathrm{where}\ \lambda_2\sim \mathrm{Uniform}(0,1)$.
+ Generate adversarial examples **with the same settings as DAJAT** for $\mathbf{x}$, $\hat{\mathbf{x}}_1$, and $\hat{\mathbf{x}}_2$ as $\mathbf{x}^{\prime}$, $\hat{\mathbf{x}}_1^{\prime}$, and $\hat{\mathbf{x}}_2^{\prime}$, respectively.
Our DAT then uses $\mathbf{x}$, $\hat{\mathbf{x}}_1$, $\hat{\mathbf{x}}_2$, $\mathbf{x}^{\prime}$, $\hat{\mathbf{x}}_1^{\prime}$, and $\hat{\mathbf{x}}_2^{\prime}$ for training, **following the settings of DAJAT**.
+ **Similarly, DAT with 3 recombined data (#Aug. 3) follows the same strategy.**
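The amplitude-mixing and recombination steps above can be sketched in NumPy. This is a simplified illustration under stated assumptions: the adversarial amplitude $G(f(\mathbf{x}), \mathbf{z})$ is replaced by a randomly perturbed copy of the benign amplitude, since the generator $G$ itself is not reproduced here, and an 8x8 random array stands in for an image.

```python
import numpy as np

rng = np.random.default_rng(0)

def recombine(x_benign, adv_amplitude, lam):
    """Mix a (hypothetical) adversarial amplitude spectrum with the benign one,
    then pair the mixed amplitude with the benign phase via the inverse FFT."""
    spec = np.fft.fft2(x_benign)
    amplitude = np.abs(spec)    # A(x): benign amplitude spectrum
    phase = np.angle(spec)      # P(x): benign phase spectrum
    mixed = lam * adv_amplitude + (1.0 - lam) * amplitude
    # The mixed amplitude may break conjugate symmetry, so keep the real part.
    return np.real(np.fft.ifft2(mixed * np.exp(1j * phase)))

x = rng.random((8, 8))  # toy stand-in for a benign image
# Stand-in for the generator output G(f(x), z): a perturbed benign amplitude.
adv_amp = np.abs(np.fft.fft2(x)) * (1.0 + 0.1 * rng.standard_normal((8, 8)))
lam = rng.uniform(0.0, 1.0)          # lambda ~ Uniform(0, 1)
x_hat = recombine(x, adv_amp, lam)   # one recombined (augmented) sample
```

Calling `recombine` with two independently generated adversarial amplitudes yields $\hat{\mathbf{x}}_1$ and $\hat{\mathbf{x}}_2$; with `lam = 0` the recombination reproduces the benign sample exactly.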
---
We hope this explanation provides greater clarity on the augmentations used in our approach. If there are any further questions or points that require clarification, we would be most grateful for your feedback. If our rebuttal has satisfactorily addressed your concerns, we humbly request that you consider reflecting this positively in your rating score.
Thank you once again for your time, thoughtful consideration, and continued support.
---
---
Rebuttal 4:
Comment: Dear Reviewer,
We are truly grateful for your prompt response and for adjusting your rating. Your thoughtful feedback has been invaluable in refining our work, and we sincerely appreciate the time and effort you have dedicated to our submission. We will incorporate your suggested experiments and further clarifications in the final version.
Thank you once again for your support.
---
---
Rebuttal 1:
Rebuttal: Dear Reviewers and AC,
We sincerely appreciate the time and effort you have dedicated to evaluating our manuscript. The concerns and feedback raised during the initial review have significantly contributed to enhancing the quality of our paper. Below, we provide a summary of our key responses to the reviewers' suggestions and questions.
---
### **To Reviewer 8ttu**
1. **Performance Comparisons:**
To address concerns regarding experimental results, we conduct additional experiments comparing the performance of our DAT framework with DAJAT [1] and [47].
> See **R1** about DAJAT and **R2** about [47] in our response to Reviewer 8ttu.
2. **Complexity Analysis:**
To clarify the complexity comparison between our DAT, DAJAT [1], and [47], we offer a detailed explanation of the hyperparameter settings and present the training time consumption analysis.
> See **R3** to Reviewer 8ttu.
---
### **To Reviewer KucH**
1. **Feature Space Attacks:**
In response to Reviewer KucH’s suggestions, we conduct experiments involving feature space attacks using various surrogate and target models. Detailed experimental comparisons are presented in **Tables 1-4** of the attached **rebuttal PDF**.
> See **R1** to Reviewer KucH.
2. **Backbone Diversity:**
As per Reviewer KucH’s recommendations, we perform experiments using a variety of backbones, including ViT, Inception, and DenseNet, and provide a comparative analysis between our DAT and existing methods.
> See **R2** to Reviewer KucH.
3. **ImageNet-1K Experiments:**
For ImageNet-1K, we conduct additional experiments to compare the performance of our DAT with current methods.
> See **R3** to Reviewer KucH.
---
### **To Reviewer FTsi**
1. **Clarification on Figure 1:**
Following Reviewer FTsi's suggestions, we further explain the impact of adversarial perturbations on phase and amplitude patterns. Additionally, we include more experimental results in **Figures 1-4** of the attached **rebuttal PDF**, demonstrating the influence of adversarial perturbations on both phase and amplitude, including their patterns and spectra.
> See **A1** to Reviewer FTsi.
2. **Theoretical Analysis Clarification:**
To clarify any confusion regarding learned features, we elaborate on our theoretical analysis, providing additional details and relevant citations.
> See **A2** to Reviewer FTsi.
3. **Time Consumption Explanation:**
To better illustrate the time consumption of DAT, we conduct experiments to provide a detailed analysis of the time and memory costs in comparison with other methods.
> See **A3** to Reviewer FTsi.
4. **Discussion of Limitations:**
In line with the suggestion, we outline additional limitations of DAT, which will be incorporated into the final version of the paper.
> See **R3** to Reviewer FTsi.
---
Pdf: /pdf/e119875f090ed3ff4fae6927b725ca6da3dcfcc1.pdf