title string | paper_decision string | review_1 string | rebuttals_1 string | review_2 string | rebuttals_2 string | review_3 string | rebuttals_3 string | review_4 string | rebuttals_4 string | global_rebuttals string | dataset_source string | conference_year int64 | review_5 string | rebuttals_5 string | review_6 string | rebuttals_6 string | review_7 string | rebuttals_7 string | review_8 string | rebuttals_8 string |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
UniDSeg: Unified Cross-Domain 3D Semantic Segmentation via Visual Foundation Models Prior | Accept (poster) | Summary: This paper presents UniDSeg, a universal approach that enhances the adaptability and generalizability of cross-domain 3D semantic segmentation. To achieve this, the authors propose a learnable-parameter-inspired mechanism for off-the-shelf VFMs with frozen parameters. This mechanism maximally preserves the pre-existing target awareness in VFMs, thereby further enhancing their generalizability.
Strengths: 1. The motivation is clear.
2. The proposed method is intuitive, and the experiments have validated their contributions.
Weaknesses: 1. Since the backbone is modified from xMUDA, the oracle performance should also be provided.
2. The task settings in this study are derived from xMUDA (TPAMI), while the prompt tuning and corresponding experimental settings are based on VPT. Therefore, this proposed method can be considered somewhat incremental.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Please explain why the best performance is obtained when the token length is set to 100. However, it is important to note that the prompt-tuning process may not be entirely stable. To address this, all experiments should be conducted multiple times, and the average performance along with the standard deviation should also be computed and reported.
2. Please explain the main contribution compared with “Learning to Adapt SAM for Segmenting Cross-domain Point Clouds”.
3. Since a VFM and ViT are utilized as the encoder, the model parameters and computational costs should be comprehensively reported.
4. Did the authors notice that when the point labels are projected onto 2D images, there is a slight mismatch between these labels and the 2D image pixels? Please discuss the potential reasons.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: 1. The authors have addressed the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer T5Kw,
We thank you for the precious review time and valuable comments. We have provided corresponding responses and results, which we believe have covered your concerns.
We hope to further discuss with you whether your concerns have been addressed or not. If you still have any unclear parts of our work, please let us know. Thanks.
Best,
Authors
**Q1**: Since the backbone is modified from xMUDA, the oracle performance should also be provided.
**A1**: "Oracle" means training solely on the target domain, except in the "Day/Night" case where it uses 50%/50% source/target batches to prevent overfitting due to the small target size; it thereby serves as the upper bound for DA3SS.
| | USA/Sing. | | | Day/Night | | | vKITTI/sKITTI | | | A2D2/sKITTI | | |
|-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| | 2D | 3D | Avg | 2D | 3D | Avg | 2D | 3D | Avg | 2D | 3D | Avg |
| Source-only | 58.4 | 62.8 | 68.2 | 47.8 | 68.8 | 63.3 | 26.8 | 42.0 | 42.2 | 34.2 | 35.9 | 40.4 |
| UniDSeg | 67.2 | 67.6 | 72.9 | 63.2 | 71.2 | 71.2 | 60.5 | 50.9 | 62.0 | 50.7 | 55.4 | 57.5 |
| Oracle | 75.4 | 76.0 | 79.6 | 61.5 | 69.8 | 69.2 | 66.3 | 78.4 | 80.1 | 59.3 | 71.9 | 73.6 |
**Q2**: The task settings in this study are derived from xMUDA (TPAMI), while the prompt tuning and corresponding experimental settings are based on VPT. Therefore, this proposed method can be considered somewhat incremental.
**A2**: Our method is groundbreaking in introducing the prompt-tuning concept into the **universal model for DG3SS and DA3SS**. In VPT-Deep, each prompt is a randomly initialized learnable token directly inserted into the input space of all ViT layers. In contrast, **our method first leverages sparse depth as point-level prompts inserted into each ViT layer**. It not only learns spatial distance perception prompts from point clouds but also learns invariance to sample perturbations. Secondly, **our method uses the learnable token as a query for seeking matched prompting** after encoding in the ViT layer. This bridges the discrepancy between the information of the pre-training dataset and the target scene.
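To make the contrast with VPT-Deep concrete, here is a minimal NumPy sketch of the idea described above: patches of a sparse depth map are linearly embedded into point-level prompt tokens and prepended to a ViT layer's token sequence. All names, dimensions, and the zero-initialized projection are hypothetical illustrations, not the paper's actual implementation.

```python
import numpy as np

def depth_to_prompts(depth_map, patch=16, d_model=8, W=None):
    """Hypothetical sketch: embed each patch of a sparse depth map into one prompt token."""
    h, w = depth_map.shape
    blocks = depth_map.reshape(h // patch, patch, w // patch, patch).transpose(0, 2, 1, 3)
    flat = blocks.reshape(-1, patch * patch)        # (num_patches, patch*patch)
    if W is None:                                   # stand-in for a learnable embedding
        W = np.zeros((patch * patch, d_model))
    return flat @ W                                 # (num_patches, d_model) point-level prompts

def insert_prompts(vit_tokens, prompts):
    """Prepend the depth-derived prompts to a layer's token sequence (VPT-Deep-style insertion)."""
    return np.concatenate([prompts, vit_tokens], axis=0)

depth = np.zeros((32, 32))                          # toy sparse depth map
prompts = depth_to_prompts(depth)                   # 4 patches of 16x16 -> 4 prompt tokens
seq = insert_prompts(np.zeros((10, 8)), prompts)    # 10 patch tokens + 4 prompts = 14 tokens
```

In an actual deep-prompting setup this insertion would be repeated at every ViT layer, each layer receiving its own depth-derived prompts.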
**Q3**: Please explain why token length is equal to 100, the best performance can be obtained. The average performance along with the standard deviation also should be computed and reported.
**A3**: Consistent with all competitors, for each run we select the best model based on the validation-set results (via trial and error, when the token length is set to 100, the result is stable and peaks), and use it for the final inference on the test set. All experimental settings are run at least 3 times and we report the best performance over all runs. The table below reports the 2D performance with standard deviation over 4 runs when the token length is set to 100.
| | USA/Sing. | A2D2/sKITTI |
|-|:-:|:-:|
| 2D mIoU | 68.100±0.071 | 43.975±0.453 |
**Q4**: Please explain the main contribution compared with "Learning to Adapt SAM for Segmenting Cross-domain Point Clouds".
**A4**: The above work leverages the whole SAM to generate instance masks, guiding the alignment of features from diverse 3D data domains into a unified domain. By contrast, our proposed UniDSeg does not directly use instance masks generated offline; instead, it uses only the pre-trained image encoder of VFMs (e.g., CLIP and SAM) to extract visual prior information. UniDSeg is groundbreaking in introducing the prompt-tuning concept into the universal model for DG3SS and DA3SS. We propose a learnable-parameter-inspired mechanism for the off-the-shelf image encoder of VFMs, which maximally preserves pre-existing target awareness in VFMs to further enhance generalizability. As the following A5 response shows, our method not only requires only about 2% of the visual backbone's parameters for fine-tuning but also achieves superior segmentation performance.
**Q5**: Since VFM and ViT are utilized as encoder, the model parameters and computational costs should be comprehensive reported.
**A5**: Thank you for your suggestion. We have reported the model parameters of CLIP: ViT-B, CLIP: ViT-L, and SAM: ViT-L for the VFM-based encoder along with all trainable parameters. Note that the entire ViT backbone in our VFM-based encoder is frozen during downstream training for DA3SS and DG3SS; only two layer-wise learnable blocks, MTP and LST, are trainable. Hereby, we provide supplementary information on the trainable parameters for MTP and LST in the following table, where "Cost" denotes the percentage of trainable parameters relative to fine-tuning the whole encoder.
| Visual Backbone | VFM-based Encoder | All trainable Params | **Cost** | MTP | LST |
|-|:-:|:-:|:-:|:-:|:-:|
| CLIP: ViT-B | 86.9M | 1.82M | **2.09%** | 0.48M | 1.34M |
| CLIP: ViT-L | 305M | 4.70M | **1.54%** | 1.78M | 2.92M |
| SAM: ViT-L | 307M | 4.34M | **1.41%** | 1.42M | 2.92M |
**Q6**: Did the authors notice that when the point labels projected onto 2D images, there are slight mismatch between these labels and 2D image pixels? Please discuss the potential reasons.
**A6**: The mismatch can be caused by a variety of factors, including calibration error, geometric error, image resolution, etc. These factors are beyond the scope of this work. Following xMUDA, we assume that the LiDAR-camera calibration is available for both domains and does not change over time. Since images and point clouds are heterogeneous, according to the extrinsic matrix from the dataset, only the points falling into the intersected field of view are geometrically associated with the multi-modal data (i.e., 3D-to-2D projection).
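As an illustration of the 3D-to-2D projection described above, the following NumPy sketch projects LiDAR points through an assumed extrinsic matrix `T` and intrinsic matrix `K`, keeping only points that fall into the camera's field of view. The calibration values are made up for the example and are not from any of the datasets.

```python
import numpy as np

# Hypothetical calibration: K = camera intrinsics, T = LiDAR-to-camera extrinsics.
K = np.array([[720.0, 0.0, 640.0],
              [0.0, 720.0, 360.0],
              [0.0, 0.0, 1.0]])
T = np.eye(4)

def project_points(points_xyz, K, T, h, w):
    """Project LiDAR points onto the image plane; keep only points in the intersected FOV."""
    n = points_xyz.shape[0]
    homo = np.hstack([points_xyz, np.ones((n, 1))])  # homogeneous coordinates (n, 4)
    cam = (T @ homo.T).T[:, :3]                      # points in the camera frame
    in_front = cam[:, 2] > 0                         # discard points behind the camera
    uvw = (K @ cam.T).T
    uv = uvw[:, :2] / uvw[:, 2:3]                    # perspective division -> pixel coords
    in_img = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    mask = in_front & in_img                         # points usable for 2D label projection
    return uv[mask], mask

pts = np.array([[0.0, 0.0, 10.0], [0.0, 0.0, -5.0]])  # one point ahead of, one behind the camera
uv, mask = project_points(pts, K, T, 720, 1280)
```

Any error in `K` or `T` shifts all projected pixels, which is one concrete way the label/pixel mismatch raised in the question can arise.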
---
Rebuttal Comment 1.1:
Title: Reviewer response
Comment: I've reviewed the authors' responses and appreciate their engagement. I will stay in touch for further discussion as we approach the final rating. | Summary: This paper introduces the prompt-tuning concept into DG3SS and DA3SS, and proposes a learnable-parameter-inspired mechanism for the off-the-shelf VFM. Modal Transitional Prompting is proposed to capture the 3D-to-2D transitional prior and task-shared knowledge from the prompt space. Learnable Spatial Tunability is constructed to refine the representation of distinct instances, driven by prompts in the query space. Extensive experimental results demonstrate the effectiveness of the proposed method on widely recognized tasks and datasets.
Strengths: 1. The proposed method of improving cross-domain 3D semantic segmentation based on visual foundation models is interesting and innovative.
2. The proposed Modal Transitional Prompting (MTP) and Learnable Spatial Tunability (LST) explore and utilize 2D prior knowledge from VFMs.
3. Experiments on multiple datasets verify the effectiveness of the method.
Weaknesses: 1. The method lacks theoretical analysis. In Section 3.2, Modal Transitional Prompting, line 193, the author mentioned that "PG is designed to capture 3D-to-2D transitional prior and task-shared knowledge from the prompt space." However, the designed method only uses learnable parameters, which is not related to the 3D-to-2D transitional prior and the promotion of 3D domain adaptation. This section only explains the steps of the method and lacks theoretical explanation of the method.
2. In Section 3.2, Learnable Spatial Tunability, the author mentioned that it was inspired by Rein [1]. However, it seems that the proposed LST has a similar design to Rein, which casts doubt on the novelty of LST. The modification of Rein lacks innovation and theoretical analysis. This inevitably makes people think that the good performance of LST is due to the effectiveness of Rein-like components.
3. In the experimental section, VFM is only verified based on the classification model CLIP [2]. However, the CLIP paper [2] states that “Additionally, CLIP is not designed for common surveillance-relevant tasks like object detection and semantic segmentation. This means it has limited use for certain surveillance tasks”. The proposed method is dedicated to segmentation tasks, but it is only verified on the classification model CLIP. As a pioneer method, it is recommended to verify its effectiveness on VFMs designed for segmentation tasks such as SAM [3] and SEEM [4].
[1]Wei, Zhixiang, Lin Chen, Yi Jin, Xiaoxiao Ma, Tianle Liu, Pengyang Ling, Ben Wang, Huaian Chen, and Jinjin Zheng. "Stronger Fewer & Superior: Harnessing Vision Foundation Models for Domain Generalized Semantic Segmentation." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 28619-28630. 2024.
[2]Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In ICML, pages 8748–8763, 2021.
[3]Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. Berg, Wan-Yen Lo, Piotr Dollar, and Ross Girshick. Segment anything. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 4015–4026, 2023. 1, 2, 3, 6, 11
[4]Zou, Xueyan, Jianwei Yang, Hao Zhang, Feng Li, Linjie Li, Jianfeng Wang, Lijuan Wang, Jianfeng Gao, and Yong Jae Lee. "Segment everything everywhere all at once." Advances in Neural Information Processing Systems 36 (2023).
Technical Quality: 4
Clarity: 2
Questions for Authors: 1. In Section 3.2, Modal Transitional Prompting, line 199, the author mentioned "From the view of deep encoding, it focuses on the scope of scenes at different scales, so that their corresponding features have different content representations when constructed." Why does sparse depth have scenes at different scales?
2. Why does the model require multiple layers of PG and TB? Experiments are needed to verify the impact of the number of layers on the results.
3. How long does it take from the start of training to completion?
4. In Table 1, the caption states: "Avg" is obtained by averaging the predicted probabilities from the 2D and 3D networks. However, "Avg" is ambiguous here: it suggests a simple average of the 2D and 3D mIoU results, which may not be the case in reality. It is recommended to replace "Avg" with another notation.
Confidence: 4
Soundness: 4
Presentation: 2
Contribution: 2
Limitations: As above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer 4rFB,
We thank you for the precious review time and valuable comments. We have provided corresponding responses and results, which we believe have covered your concerns.
We hope to further discuss with you whether your concerns have been addressed or not. If you still have any unclear parts of our work, please let us know. Thanks.
Best,
Authors
**Q1**: This section only explains the steps of the method and lacks theoretical explanation of the method.
**A1**: Firstly, a sparse depth map, as a perspective-projection representation of point clouds, presents an unnatural image that can be processed by the image encoder in a VFM. This is an additional prior derived from the point cloud. Lee et al. [Ref1] have shown that source data internally knows a lot more about the world and how the scene is formed, which they call Privileged Information. This Privileged Information includes physical properties (e.g., depth) that might be useful for cross-domain learning. Secondly, though depth information is easy to access and tightly coupled with semantic information, Hu et al. [Ref2] argue that depth clues that complement colors are hard to deduce from color images alone, and thus direct deep encoding fails to capture valid geometric information. To this end, we utilize depth as point-level prompts inserted into ViT layers, enabling the model to learn spatial distance perception. This complements 2D representations, as LiDAR-derived depth information is less affected by domain variations than images, which can easily be influenced by changes in lighting and other factors. Experimental results in Table 8 further demonstrate the effectiveness of our method. For convenience, we reshow the table below:
| Role of Depth | Params | 2D | 3D | Ens|
|-|:-:|:-:|:-:|:-:|
| (a) Deep Encoding | 86.9M | 66.1 | 67.7 | 73.0 |
| (b) Point-level Prompts | 0.48M | 67.8 | 67.9 | 73.8 |
[Ref1] Lee K H, Ros G, Li J, et al. Spigan: Privileged adversarial learning from simulation. arXiv preprint arXiv:1810.03756, 2018.
[Ref2] Hu S, Bonardi F, Bouchafa S, et al. Multi-modal unsupervised domain adaptation for semantic image segmentation. Pattern Recognition, 2023.
**Q2**: The novelty of LST.
**A2**: The innovation of our method lies in introducing a learnable-parameter-inspired mechanism into the off-the-shelf VFMs, guided by point-level prompts derived from 3D information. To achieve this, we place layer-wise MTP and LST blocks to take full advantage of semantic understanding across diverse levels and modalities. Among them, the affinity matrix computed in LST is a common solution to capture the associations between learnable tokens and 2D representations. The insight of the learnable token originally comes from Visual Prompt Tuning (VPT) [Ref3].
[Ref3] Jia M, Tang L, Chen B C, et al. Visual prompt tuning. European Conference on Computer Vision, 2022: 709-727.
**Q3**: As a pioneer method, it is recommended to verify its effectiveness on VFMs designed for segmentation tasks such as SAM [3] and SEEM [4].
**A3**: We have supplemented the experimental results for DA3SS and DG3SS under another visual backbone, SAM: ViT-L. Notably, in this work, we only utilize the off-the-shelf image encoder of VFMs (e.g., CLIP (w/o text encoder) and SAM (w/o prompt encoder and mask decoder)). As shown in the table below, SAM-based UniDSeg exhibits better performance on the "USA/Sing." scenario. Due to the rebuttal time limitation, we will supplement all SAM-based experimental results subsequently.
| Task | Method | Visual Backbone | 2D | 3D | Ens |
|-|:-:|:-:|:-:|:-:|:-:|
| DG | UniDSeg | CLIP: ViT-L | 66.5 | 64.5 | 72.3 |
| | | **SAM: ViT-L** | 66.8 | 64.7 | **72.6 (+0.3)** |
| DA | UniDSeg | CLIP: ViT-L | 67.2 | 67.6 | 72.9 |
| | | **SAM: ViT-L** | 67.8 | 68.8 | **73.3 (+0.4)** |
**Q4**: Why does sparse depth have scenes at different scales?
**A4**: We apologize for the confusion. Here, "different scales" refers to the different receptive fields of adjacent pixels. The complementarity of the features extracted by the 2D and 3D branches is also tightly correlated with the different information-processing machinery, i.e., 2D and 3D convolutions, which makes the networks focus on different areas of the scene with different receptive fields. When sparse depth is projected onto the image plane and processed via a 2D network, it can focus on the scope of scenes at different scales. For example, adjacent pixels in the image may have similar pixel values, but their corresponding depth values (distances) in the depth map can differ significantly.
**Q5**: Experiments are needed to verify the impact of the number of layers of MTP and LST.
**A5**: In the following table, "Half layers" indicates that the MTP and LST blocks are inserted every other layer. Due to the rebuttal time limitation, we will supplement experiments with other layer combinations subsequently.
| Visual Backbone | MTP | LST | 2D | 3D | Ens|
|-|:-:|:-:|:-:|:-:|:-:|
| CLIP: ViT-L | All layers | All layers | 66.5 | 64.5 | 72.3 |
| | Half layers | Half layers | 66.1 | 64.3 | 71.9 |
**Q6**: How long does it take from the start of training to completion?
**A6**: All experiments are conducted on one NVIDIA RTX 3090 GPU with 24GB RAM.
| Task | USA/Sing. | Day/Night | vKITTI/sKITTI | A2D2/sKITTI |
|-|:-:|:-:|:-:|:-:|
| DA | ~22h | ~37h | ~21h | ~48h |
| DG | ~12h | ~21h | ~11h | ~27h |
**Q7**: "Avg" is ambiguous. This leads to misleading averages of 2D and 3D mIoU results.
**A7**: Initially, xMUDA used "2D+3D" to denote the final results, but we found that "2D+3D" might be misleading as it implies an average of 2D and 3D. Therefore, we follow MM2D3D and VFMSeg in using "Avg" to denote the final result. Perhaps "Ensemble Result" (short for "Ens") would be more appropriate. Notably, in all our answers, we have changed "Avg" to "Ens".
---
Rebuttal Comment 1.1:
Comment: Q2 mentioned that the design of LST is similar to Rein [1]. Please explain the differences between them.
---
Reply to Comment 1.1.1:
Title: Reply to Q2
Comment: **Q**: Q2 mentioned that the design of LST is similar to Rein [1]. Please explain the differences between them.
**A**: Due to character-limit constraints, we regretfully overlooked this question. The following is an explanation:
**Similarity**: Both of them use the learnable tokens and calculate affinity matrices between visual inputs and tokens.
**Difference**: Structure. Rein generates a low-rank token sequence, while our method additionally processes the visual features through a down-projection linear layer followed by an up-projection linear layer, which are then element-wise added to the residual-connected features.
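Under stated assumptions (the layer shapes, initializations, and softmax normalization here are ours, not the paper's), the structural difference can be sketched in NumPy: a token-affinity term as in Rein, plus the extra down-/up-projection branch element-wise added to the residual-connected features.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n_tok, n_vis = 64, 8, 100, 196     # feature dim, low-rank dim, #tokens, #patches (assumed)

tokens = rng.standard_normal((n_tok, d))            # learnable tokens (fixed here for the demo)
W_down = rng.standard_normal((d, r)) / np.sqrt(d)   # down-projection linear layer
W_up = rng.standard_normal((r, d)) / np.sqrt(r)     # up-projection linear layer

def lst_block(feats):
    """Sketch of an LST-style step: token affinity (as in Rein) plus a low-rank
    down-/up-projection branch, added to the residual-connected features."""
    aff = feats @ tokens.T                          # (n_vis, n_tok) affinity matrix
    aff = np.exp(aff - aff.max(axis=1, keepdims=True))
    aff /= aff.sum(axis=1, keepdims=True)           # softmax over tokens
    prompted = aff @ tokens                         # token-guided refinement
    low_rank = (feats @ W_down) @ W_up              # the extra projection branch
    return feats + prompted + low_rank              # residual connection

out = lst_block(rng.standard_normal((n_vis, d)))
```

Dropping the `low_rank` term recovers a Rein-like update, which is the structural difference being described.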
The following table shows a simple experiment on the "USA/Sing." scenario. Due to the rebuttal time limitation, we will subsequently supplement a full analysis between Rein and our method, including the algorithm, diagram, and experimental results.
| Task | Method | Visual Backbone | 2D | 3D | Avg |
|------|:-----:|:-----:|:-----:|:-----:|:-----:|
| DG | Use Rein | CLIP: ViT-B | 63.3 | 64.6 | 71.2 |
| DG | Use LST | CLIP: ViT-B | 63.8 | 64.7 | 71.5 | | Summary: This paper proposes a cross-domain 3D semantic segmentation model which utilizes off-the-shelf visual foundation models to boost adaptability and generalizability. Two key designs help the cross-domain task, i.e., visual prompt learning and deep query learning. Extensive experiments are reported in the paper to illustrate its strength.
Strengths: 1. Experiments demonstrate that the proposed model indeed outperforms compared to baseline methods.
2. Appropriately freezing and finetuning strategy seems reasonable.
3. Considering prompt tuning in the domain adaptation task is novel, as it can exploit the capability of VFMs via prompt engineering.
Weaknesses: 1. The improvement in both quantitative and qualitative results does not seem to be strong enough.
2. Does not analyze the reasons behind the SOTA performance, especially compared to models like VFMSeg which also exploit the power of VFM. I hope more detailed comparison and insight can be given.
3. The motivation is not clear enough. Though the paper points out two natural questions when considering using VFM, some works have already used VFM to help with segmentation. Then the motivation should include how to improve upon these works.
4. As the model uses CLIP as the backbone, why not try a more advanced model?
5. The writing needs to be improved. For example, in line 29, the last sentence is not finished. Line 37, DG3SS does not require accessing target domain data, so the logic here is not convincing. Line 42, what do you mean by “source-domain data discrimination power”? what is the relation to your method? Line 53, what is “spatial-temporal synchronized prompt space”? Line 65, what is “pre-existing target awareness”?
Some detailed explanations could be seen in the Questions.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. I wonder about the motivation behind exploring the segmentation task in DA. As the paper has mentioned foundation models like SAM, I want to ask whether it is still meaningful to research the segmentation task in DA. A foundational segmentation model like SAM seems general enough (what is SAM's performance on the datasets mentioned in the paper?). Even if it is not powerful enough, for segmentation, which is a kind of "high-level" task, it seems easier to directly improve the foundation model to achieve a truly general segmentation model without any adaptation.
2. Other works like VFMSeg produce more accurate pseudo-labels to help the training, but in the limitation mentioned in this paper, producing pseudo-labels cannot achieve many performance improvements. I wonder what is the difference here.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: As mentioned in the limitation of the paper, not being able to exploit the ability to produce accurate pseudo labels of VFM would constrain the capacity of the model when it comes to a larger scale.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer KKSy,
We thank you for the precious review time and valuable comments. We have provided corresponding responses and results, which we believe have covered your concerns.
We hope to further discuss with you whether your concerns have been addressed or not. If you still have any unclear parts of our work, please let us know. Thanks.
Best,
Authors
**Q1**: The improvement does not seem to be strong enough.
**A1**: Our method enhances the performance of both DG3SS and DA3SS. Notably, all loss functions are the same as those in xMUDA, and no typical data augmentation or style transfer methods are introduced to address the domain-shift issue. The following table shows that UniDSeg, when combined with the cross-domain distillation loss from Dual-Cross, achieves the best performance compared to the SOTA MM2D3D. We believe that more cross-domain learning constraints can yield a better model.
| DA3SS Method | 2D | 3D | Avg |
|-|:-:|:-:|:-:|
| xMUDA | 55.5 | 69.2 | 67.4 |
| Dual-Cross | 58.5 | 69.7 | 68.0 |
| MM2D3D | 70.5 | 70.2 | 72.1 |
| UniDSeg | 63.2 | 71.2 | 71.2 |
| **UniDSeg + Dual-Cross** | 64.5 | 71.6 | **72.5 (+1.3)** |
As for MM2D3D, it adds a parallel reinitialized encoder to the 2D backbone to process the sparse depth; we have shown the comparisons in Table 8. On the Sing./USA scenario, compared to (a), (b) not only requires only 0.6% of the 2D backbone's parameters for fine-tuning but also achieves superior segmentation performance, making it flexible to expand to various 3D semantic segmentation tasks with multi-modal learning. For convenience, we reshow the table below:
| Role of Depth | Params | 2D | 3D | Avg |
|-|:-:|:-:|:-:|:-:|
| (a) Deep Encoding | 86.9M | 66.1 | 67.7 | 73.0 |
| (b) Point-level Prompts | 0.48M | 67.8 | 67.9 | 73.8 |
**Q2**: Analyze the reasons behind the SOTA performance.
**A2**: VFMSeg essentially utilizes SAM to generate instance masks for both the source and target images, applying mixing augmentation to the source and target domain data. This is a typical and useful method for addressing domain shift in DA3SS. In contrast, our proposed UniDSeg exploits the image encoder of VFMs (e.g., CLIP and SAM) to extract robust visual prior information. UniDSeg proposes a learnable-parameter-inspired mechanism for the off-the-shelf image encoder of VFMs, which maximally preserves pre-existing target awareness in VFMs to further enhance its generalizability.
**Q3**: The motivation is not clear enough.
**A3**: Thank you for your suggestion. The motivation of this work is to study a **universal framework based on VFMs** that enhances the generalizability and adaptability of cross-domain 3D semantic segmentation, demonstrating the effectiveness of visual foundation model priors. This differs from VFMSeg, which utilizes SAM to generate instance masks for both source and target images and applies mixing augmentation to the source and target domain data. Our proposed UniDSeg is groundbreaking in introducing the **depth-guided prompt-tuning** concept into the image encoder of VFMs, and it can be applied to various downstream tasks, including DA3SS, DG3SS, SFDA3SS, and fully-supervised 3SS.
**Q4**: Why not try a more advanced model?
**A4**: We supplement the experimental results for DA3SS and DG3SS under SAM: ViT-L. As shown in the table below, SAM-based UniDSeg exhibits better performance on the "USA/Sing." scenario.
| Task | Method | Visual Backbone | 2D | 3D | Avg |
|-|:-:|:-:|:-:|:-:|:-:|
| DG | UniDSeg | CLIP: ViT-L | 66.5 | 64.5 | 72.3 |
| | | **SAM: ViT-L** | 66.8 | 64.7 | **72.6 (+0.3)** |
| DA | UniDSeg | CLIP: ViT-L | 67.2 | 67.6 | 72.9 |
| | | **SAM: ViT-L** | 67.8 | 68.8 | **73.3 (+0.4)** |
**Q5**: Some detailed explanations could be seen.
**A5**: (1) "source-domain data discrimination power" refers to the DG3SS scenario where learning is conducted with access solely to source domain data, enabling the model to develop the ability to discriminate domain-specific and domain-agnostic features; (2) Spatial: introducing spatial information (depth) into the prompt space to supplement 2D images; Temporal: since most scenes are captured smoothly, we learn temporal invariance from the correlations between adjacent frames; (3) "pre-existing target awareness": in the fine-tuning stage, the weights of VFMs are conventionally used to initialize source models and subsequently discarded. However, VFMs possess diverse features important for generalization, and fine-tuning on source data can overfit to the source distribution and potentially lose pre-existing target information.
**Q6**: What's SAM's performance on the datasets?
**A6**: In Table 2 of our paper, the experimental results demonstrate that, whether the image encoder of the VFM is frozen or fine-tuned, it cannot directly solve the domain-shift issue present in DG3SS and DA3SS. We supplement the experimental comparison between fine-tuning and our method, implemented with the SAM image encoder serving as the visual backbone. For the motivation, please refer to **A3**. Other high-level tasks are not the focus of this work.
| Task | Visual Backbone | Strategy | 2D | 3D | Avg |
|-|:-:|:-:|:-:|:-:|:-:|
| DG | SAM: ViT-L | Fine-tuning | 65.9 | 64.3 | 70.8 |
| | | Ours | 66.8 | 64.7 | 72.6 |
| DA | SAM: ViT-L | Fine-tuning | 66.5 | 67.9 | 71.4 |
| | | Ours | 67.8 | 68.8 | 73.3 |
**Q7**: Pseudo-label difference and improvement compared to VFMSeg.
**A7**: Based on the VFMSeg experimental results, that method did not provide helpful pseudo-labels in the "Day/Night" scenario; there was even a decrease (-0.4) in performance. VFMSeg averages the probabilistic predictions of a pretrained 2D network and SEEM to generate pixel-wise pseudo-labels, while our method follows the common practice adopted by most DA3SS methods of using offline 2D pseudo-labels.
| DA3SS Method | 2D mIoU |
|-|:-:|
| xMUDA | 55.5 |
| xMUDA+PL | 57.5 |
| VFMSeg+PL | 57.1 (-0.4) |
| UniDSeg | 63.2 |
| **UniDSeg+PL** | 64.5 (+1.3) | | Summary: The manuscript proposes a universal method, dubbed UniDSeg, that leverages off-the-shelf Visual Foundation Models (VFMs) to boost the adaptability and generalizability of cross-domain 3D semantic segmentation. The proposed method focuses on learning visual prompts for the 3D-to-2D transitional prior and deep query.
Strengths: 1.This work is the first to introduce prompt-tuning concept into the universal model for DG3SS and DA3SS.
2.The authors propose a learnable-parameter-inspired mechanism to the off-the-shelf VFMs for enhancing generalizability of VFMs.
3.The proposed method achieves state-of-the-art results on DG3SS and DA3SS tasks.
Weaknesses: 1. What does "Samp." in Figure 1 mean? The corresponding explanation is not provided in the manuscript.
2. Why choose CLIP as the 2D backbone? Since there is no language information for the task, other Visual Foundation Models (e.g., Swin Transformer, SAM, …) seem more suitable for 2D information extraction.
3. In lines 148-152, the authors mention: "The main insight of UniDSeg is to provide a universal framework that enhances the adaptability and generalizability of cross-domain 3D semantic segmentation." Though SparseConvNet is examined as the 3D network, the claim of being "universal" would be more convincing if the authors could provide results utilizing other 3D backbones.
Technical Quality: 4
Clarity: 3
Questions for Authors: Please refer to the Weaknesses Section.
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The authors have clearly discussed the potential limitations of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer DeKz,
We thank you for the precious review time and valuable comments. We have provided corresponding responses and results, which we believe have covered your concerns.
We hope to further discuss with you whether your concerns have been addressed or not. If you still have any unclear parts of our work, please let us know. Thanks.
Best,
Authors
**Q1**: What does Samp. in Figure 1 means, the corresponding explanation is not provided in the manuscript.
**A1**: "Samp." means sampling of 2D features. Only the points falling into the intersected field of view are geometrically associated with the multi-modal data (i.e., 3D-to-2D projection). After the "Samp." operation, we obtain point-wise 2D features for cross-modal learning with 3D features.
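A minimal sketch of the "Samp." operation described above, assuming projected pixel coordinates `uv` are already available: gather the 2D feature vector at each point's nearest pixel. The shapes and nearest-pixel scheme are our assumptions; an actual implementation might use bilinear interpolation instead.

```python
import numpy as np

def sample_point_features(feat_map, uv):
    """Gather point-wise 2D features at projected pixel locations (nearest-pixel sampling)."""
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, feat_map.shape[1] - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, feat_map.shape[0] - 1)
    return feat_map[v, u]                            # (n_points, channels)

feat_map = np.arange(12, dtype=float).reshape(3, 4, 1)   # toy (H=3, W=4, C=1) feature map
uv = np.array([[1.0, 2.0], [3.0, 0.0]])                  # (u, v) pixel coordinate per point
point_feats = sample_point_features(feat_map, uv)        # one 2D feature vector per 3D point
```

The resulting point-wise 2D features can then be paired with 3D features for cross-modal learning.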
**Q2**: Why choose CLIP as 2D backbone, since there is no language information for the task, other Visual Foundation Models (e.g. Swin Transformer, SAM, …) seem more suitable for the 2D information extraction.
**A2**: Thank you for your reminder. The motivation of this work is to study a **universal framework based on VFMs** that enhances the generalizability and adaptability of cross-domain 3D semantic segmentation, demonstrating the effectiveness of visual foundation model priors. To this end, we utilize the off-the-shelf image encoder of VFMs as our visual backbone. UniDSeg **can be applied to various downstream tasks, including DA3SS, DG3SS, SFDA3SS, and fully-supervised 3SS**. Following your suggestion, we have supplemented the experimental results for DA3SS and DG3SS under another visual backbone, SAM: ViT-L. As shown in the table below, SAM-based UniDSeg exhibits better performance on the "USA/Sing." scenario.
| Task | Method | Visual Backbone | 2D | 3D | Avg |
|-|:-:|:-:|:-:|:-:|:-:|
| DG | UniDSeg | CLIP: ViT-L | 66.5 | 64.5 | 72.3 |
| | | **SAM: ViT-L** | **66.8** | **64.7** | **72.6 (+0.3)** |
| DA | UniDSeg | CLIP: ViT-L | 67.2 | 67.6 | 72.9 |
| | | **SAM: ViT-L** | **67.8** | **68.8** | **73.3 (+0.4)** |
Furthermore, we supplement the experimental comparison between fine-tuning and our method, implemented with the SAM image encoder serving as the visual backbone. Due to the rebuttal time limit, we will supplement the remaining SAM-based experimental results subsequently.
| Task | Visual Backbone | Strategy | 2D | 3D | Avg |
|-|:-:|:-:|:-:|:-:|:-:|
| DG | SAM: ViT-L | Fine-tuning | 65.9 | 64.3 | 70.8 |
| | | Ours | 66.8 | 64.7 | 72.6 |
| DA | SAM: ViT-L | Fine-tuning | 66.5 | 67.9 | 71.4 |
| | | Ours | 67.8 | 68.8 | 73.3 |
**Q3**: Though SparseConvNet is examined as 3D network, it would be more convincing as "universal" if authors could provide results utilizing other 3D backbones.
**A3**: Thank you for your suggestion. We have supplemented the DA3SS experimental results for xMUDA and our proposed UniDSeg under another 3D backbone network, MinkowskiNet. Due to the rebuttal time limit, we will supplement the remaining MinkowskiNet-based experimental results subsequently.
| 3D Backbone | DA3SS Method | 2D | 3D | Avg |
|-|:-:|:-:|:-:|:-:|
| SparseConvNet | xMUDA | 64.4 | 63.2 | 69.4 |
| | UniDSeg | 67.2 | 67.6 | 72.9 |
| MinkowskiNet | xMUDA | 65.9 | 64.0 | 69.7 |
| | UniDSeg | 67.5 | 68.6 | 73.1 |
---
Rebuttal 2:
Title: Response to Authors
Comment: The authors' response has clearly addressed my concerns. After checking the peer review comments and the author's responses, I decided to raise the given score for this work. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
UDC: A Unified Neural Divide-and-Conquer Framework for Large-Scale Combinatorial Optimization Problems | Accept (poster) | Summary: UDC focuses on divide-and-conquer-based NCO methods, proposing a novel framework that does not require heuristics and adopts an efficient training approach, DCR. Compared to existing methods, UDC offers significant improvements in effectiveness and applicability.
Strengths: 1. This paper conducts extensive experiments. UDC exhibits effectiveness in 10 different CO problems, improving the existing divide-and-conquer methods in both effectiveness and applicability.
2. This paper validates the importance of unified training. The idea of DCR is quite novel.
3. The article is easy to follow and experiments give detailed explanations on the setting reason for hyperparameters and components.
Weaknesses: There are no major weaknesses in the article, my questions are presented in the Question section.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. UDC shows wide applicability compared to existing NCO methods. What is the reason for its significant advantage in applicability compared to existing divide-and-conquer methods?
2. The solution representation in the upper part of Figure 2 is not clear enough, please add illustrations in the Figure caption.
3. On TSPLib and CVRPLib, UDC seems to perform less prominently. Please provide the result of UDC on benchmark instances of other CO problems.
4. Please add more clear indexes in the main part for contents in the Appendix.
5. Should GLOP for TSPLib in Table 4 be GLOP more revision?
6. Do BS and bs in Table 3 represent beam search? Please provide a detailed introduction.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: Section 3.3 discusses the unavailability of UDC on certain CO problems, such as TSPTW. This should also be included in the discussion of limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your time and effort in reviewing our work. We are glad to know that you find the idea of DCR quite novel, UDC exhibits effectiveness in 10 different CO problems, the article is easy to follow, and experiments give detailed explanations on the setting reason for hyperparameters and components.
We address your concerns point-by-point as follows:
>**Question 1. Reason for applicability**: UDC shows wide applicability compared to existing NCO methods. What is the reason for its significant advantage in applicability compared to existing divide-and-conquer methods?
The wide applicability of UDC to large-scale CO problems mainly stems from two factors: **1)** UDC gets rid of heuristic components in both the dividing policy and the conquering policy (as shown in Table 1 of the original paper). Heuristic components often require expert knowledge to redesign when handling unseen instances and new CO problems, which undermines a method's applicability. **2)** The efficient training method DCR enables a unified training framework for all CO problems. This unified framework does not rely on separate pre-training procedures or tunings for specific CO problems, making UDC easy to apply to new CO problems.
>**Question 3. Benchmark experiments**: Please provide the result of UDC on benchmark instances of other CO problems.
Thank you for your suggestion. We further evaluate the proposed UDC on the benchmark dataset of min-max mTSP, which contains instances with 500 to 1,000 nodes: the u574, p654, and rat783 instances (selected in the Equity-Transformer paper [1]; we use the results reported in [1]). The objective values are shown in the table below. Compared to Equity-Transformer and the heuristic methods LKH-3 and OR-Tools, UDC-$\boldsymbol{x}_{50}$ ($\alpha=50$) leads by a significant margin.
| Instance | $M$= | LKH-3 | OR-Tools | Equity-Transformer | UDC-$\boldsymbol{x}_{50}$ ($\alpha=50$) |
|-----------|------|-------|----------|---------------------|-----------------------------------------|
| | 30 | 8800 | 19391 | 6642 | 6641 |
| u574 | 40 | 8051 | 15924 | 6642 | 6641 |
| | 50 | 7733 | 14192 | 6642 | 6641 |
| | 30 | 13317 | 25552 | 12795 | 12380 |
| p654 | 40 | 13668 | 25547 | 12795 | 12303 |
| | 50 | 13188 | 25547 | 12795 | 12270 |
| | 30 | 2217 | 5105 | 1272 | 1243 |
| rat783 | 40 | 1872 | 5105 | 1272 | 1232 |
| | 50 | 1640 | 4005 | 1272 | 1232 |
>**Question 2,4. Writing clarity.**
Thanks for your valuable suggestion. The upper part of Figure 2 shows the pipeline of the training method DCR. In the DCR-enabled training process, there are two conquering stages, the Conquer step and the Reunion step, where the Reunion step is introduced to eliminate the negative impact of a sub-optimal dividing policy. The lower part provides a local view of a solution fragment to demonstrate our motivation: DCR has the potential to correct wrong dividing results by generating the solution of the connection part in the Reunion step.
We will include this illustration in the caption of Figure 2, use more appropriate colors for better expression, and add more indexes in the main part for contents in the Appendix of the original manuscript.
>**Question 5,6. Abbreviation in Tables.**
Thank you for your comments. Yes, in Table 4 of the original paper, the GLOP entry for TSP should be GLOP with more revisions rather than GLOP-LKH3. We will change the index in Table 4 to GLOP and provide an additional description of the settings.
Yes, both BS and bs in Table 3 of the original manuscript represent beam search. Beam search is a commonly used decoding strategy for autoregressive sequence models and is a form of breadth-first search (BFS) [2]. The Attention Model [3] first introduced this technique to the field of NCO. For a beam limited to $k$ (noted as bs$k$ in Table 3 of the original manuscript, e.g., bs1024 or bs16), beam search constructs the $k$ solutions with the highest probabilities under the model policy. Models decoded with beam search obtain better results at the cost of more solving time.
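As an illustration of the decoding strategy described above, here is a minimal generic beam-search sketch. The `step_log_probs` interface is a hypothetical abstraction standing in for the sequence model, not the paper's implementation:

```python
import math

def beam_search(step_log_probs, beam_size):
    """Generic beam search over a fixed-horizon sequence model.

    step_log_probs(prefix) -> dict mapping next tokens to log-probabilities,
    or an empty dict when the prefix is complete.
    Returns the beam_size highest-scoring complete sequences as
    (sequence, cumulative log-prob) pairs.
    """
    beams = [((), 0.0)]  # (prefix, cumulative log-probability)
    while True:
        candidates = []
        done = True
        for prefix, score in beams:
            expansions = step_log_probs(prefix)
            if not expansions:            # prefix already complete
                candidates.append((prefix, score))
            else:
                done = False
                for tok, lp in expansions.items():
                    candidates.append((prefix + (tok,), score + lp))
        # Keep only the top-k prefixes by cumulative log-probability
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_size]
        if done:
            return beams
```

With `beam_size=1` this reduces to greedy decoding; larger beams trade solving time for solution quality, matching the bs$k$ notation above.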
>**Limitation**: Chapter 3.3 discusses the unavailability of UDC on certain CO problems, such as TSPTW. This should also be included in discussing the limitation.
Thank you for pointing this out, we will include the content discussed in Chapter 3.3 in the illustration on limitations.
***
>**References**
[1] Son, Jiwoo, et al. "Equity-Transformer: Solving NP-Hard Min-Max Routing Problems as Sequential Generation with Equity Context." AAAI 2024, 2024.
[2] Meister, Clara, Tim Vieira, and Ryan Cotterell. "If beam search is the answer, what was the question?." arXiv preprint, 2020.
[3] Kool, Wouter, Herke Van Hoof, and Max Welling. "Attention, learn to solve routing problems!." arXiv preprint, 2018.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response, which has clearly addressed my concerns. I have raised my score to 7. I appreciate the novelty and effectiveness of UDC, especially its wide applicability. This work is inspiring for NCO research, so I think it should be accepted.
---
Reply to Comment 1.1.1:
Comment: Thanks for your support and raising the score to 7. We are delighted to hear your appreciation for the novelty, effectiveness, and wide applicability of UDC. | Summary: The paper introduces a novel Unified Neural Divide-and-Conquer (UDC) framework designed to address large-scale CO problems. The UDC framework leverages a Divide-Conquer-Reunion (DCR) training method that uses GNN for global instance division and a constructive neural solver for sub-problem solutions. This unified approach aims to improve both efficiency and applicability across various CO problems by combining the training process of the dividing stage and the conquering stage.
Strengths: 1. The DCR method proposes a view of unifying the training of dividing and conquering stages for solving large-scale CO problems.
2. The UDC framework is designed with applicability to a wider range of CO problems, without heavy reliance on problem-specific heuristics like in the previous works.
3. The idea of considering interdependencies between dividing and conquering stages is interesting as a neural CO training scheme.
4. The authors conduct extensive experiments where the proposed framework's performance is shown to be generally good on certain problems.
Weaknesses: 1. The novelty of the proposed framework requires further explanation. The current UDC/DCR seems to be a combination of existing techniques in either the dividing or conquering stage. From a local perspective, 1) the sub-problem dividing policy is simply a conventional continuous node split based on the initial solution, 2) the conquering phase directly exploits existing problem-specific models such as POMO, ICAM, and MatNet to tackle the small-scale sub-problems, and 3) the technique in the reunion part resembles prevalent tricks in the neural CO literature, such as POMO/random-start/multi-sampling/random augmentation. Therefore, the proposed unified framework is more of a mathematical or conceptual summarization of the current RL-based divide-and-conquer pipeline than a novel methodology for tackling various CO problems with a unified architecture.
2. The training procedure is not stated clearly enough. As two different networks are adopted in the two phases, how are the gradients in equation (6) updated simultaneously during a given epoch? If the initial solution is generated using a heatmap and some decoding scheme while the sub-solutions are computed auto-regressively with another model, how the REINFORCE algorithm is implemented requires further presentation to resolve ambiguity about implementation details.
3. The solving time of UDC is longer than that of several compared neural methods, which calls its efficiency and practicality into question. Also notable is how rapidly the time consumed increases with the number of conquering stages, to which the solution quality is, on the other hand, highly related. Thus, further experiments are necessary to explicitly illustrate the relationship among solution quality, the number of conquering stages, and solving time.
4. Would it benefit from using supervised learning over the current RL method for sub-problem conquering? Note that SL-based approaches achieve better performance on problems like TSP in previous problem-specific research.
5. Can the framework generalize to a wider range of CO problems? Of the ten problems the authors evaluate in this paper, most have a setting or formulation similar to routing tasks such as TSP, VRP, or their variants.
6. Experiments on different initial solutions are missing. To what extent does the effectiveness of the UDC framework rely on the quality of the initial solution generated during the dividing stage? Given that a poor initial solution may lead to sub-optimal conquering solutions (as mentioned in Appendix D.4), more empirical studies or ablations should be conducted on the methods and quality of initial-solution generation.
7. Unsatisfactory readability and inelegant comparison. A great many experiments are conducted, yet the tables demonstrating results are dispersed in pieces and hard to follow. Different problems/scales/settings use different compared methods and reporting manners. A unified format with as many commonly comparable and explicitly categorized methods as possible would be better organized. Names like AM-bs1024 require explanation. An introduction or classification of methods like SO-mixed is missing. The meaning of the asterisk mark is unclear until Appendix D. Minor points: e.g., in line 167, the second $x_{1,0}$ should likely be $x_1$? Furthermore, the results of T2T are not reported.
Technical Quality: 3
Clarity: 2
Questions for Authors: See weakness
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your time and effort in reviewing our work. We are glad to know that you find the proposed UDC framework is designed with applicability to a wider range of CO problems, and its performance is shown to be generally good on certain problems.
>**Weakness 1. Novelty of UDC.**
Thanks for your insightful comments. Neural divide-and-conquer methods are indeed a combination of existing methods in terms of network structure and solving process. However, we would like to emphasize that the significant novelty of UDC lies in its **training approach**.
Considering the **training approach**, we do not agree that the technique in the Reunion step resembles prevalent tricks in the neural CO literature. The training methods you mention (i.e., POMO/Sym-NCO, etc.) are all designed to obtain the RL baseline for single-stage constructive solvers [1]. In contrast, the DCR training method proposed in this article is designed to handle the negative impact of sub-optimal dividing results in neural divide-and-conquer methods (as shown in Table 1 of the original manuscript), which is novel among NCO methods.
As demonstrated in the paper, the DCR training method can significantly facilitate the applicability and effectiveness of UDC.
>**Weakness 2. Training process.**
Thanks for your suggestion. UDC employs a neural network for the dividing policy and another network for the conquering policy. In a Divide-Conquer-Reunion (DCR) enabled training process, the model for conquering policy will successively update its parameters twice according to the loss $\mathcal{L} _ {c1}(\mathcal{G})$ and $\mathcal{L} _ {c2}(\mathcal{G})$ after the Conquer step and the Reunion step, respectively. Finally, we will calculate the dividing loss function $\mathcal{L} _ {d}(\mathcal{G})$ based on the solution $\boldsymbol{x}_ 2$ and the dividing policy will update the model parameters according to the dividing loss.
A pseudo-code of the training process is provided in Algorithm 1 in Appendix C.6. Please refer to ``Common Concern 2`` for MDP \& Training objectives and we will provide more implementation details to increase the clarity of illustrations.
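As a purely schematic illustration of the update order described above (the closures and scalar "parameters" below are hypothetical stand-ins for the actual dividing/conquering networks, rewards, and gradients, not the paper's implementation), one DCR training step might look like:

```python
def reinforce_update(theta, log_prob_grad, reward, baseline, lr):
    """One REINFORCE step: theta <- theta + lr * (reward - baseline) * grad log pi."""
    return theta + lr * (reward - baseline) * log_prob_grad

def dcr_training_step(theta_d, theta_c, sample_division, conquer, reunion, lr=0.1):
    """Schematic DCR step: the conquering policy is updated twice
    (once after the Conquer step, once after the Reunion step), and the
    dividing policy is then updated from the final solution x2."""
    division, grad_d = sample_division(theta_d)
    x1, r1, grad_c1 = conquer(theta_c, division)        # Conquer step
    theta_c = reinforce_update(theta_c, grad_c1, r1, 0.0, lr)
    x2, r2, grad_c2 = reunion(theta_c, x1)              # Reunion step
    theta_c = reinforce_update(theta_c, grad_c2, r2, 0.0, lr)
    # Dividing loss is computed from the final solution x2's reward
    theta_d = reinforce_update(theta_d, grad_d, r2, 0.0, lr)
    return theta_d, theta_c
```

The point of the sketch is only the ordering: two successive conquering updates bracket the Reunion step, and the dividing policy is updated last, based on the solution $\boldsymbol{x}_2$.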
>**Weakness 3. Solving time.**
Thanks for your constructive comments. For a better discussion, we respond to the two concerns in Weakness 3 in reverse order.
**The number of conquering stages**: Both the solving time and the solution quality of UDC are related to the number of initial solutions $\alpha$ and the number of conquering stages $r$. Therefore, we conduct ablation experiments on these two factors in Appendix E.1 and Appendix E.2 of the original paper, respectively. When the value of $r$ is changed to adjust the solving time, the results in ``Figure 5 of the original paper`` demonstrate that UDC has an efficiency advantage: its variants generally need less time to achieve better performance than the SOTA neural divide-and-conquer method GLOP and the SL-based sub-path solver LEHD.
**Solving time of UDC**: Based on the results above, together with Table 3 and Table 17 of the original manuscript, when generating solutions with $\alpha$=1, UDC can generate acceptable solutions with a significant advantage in solving speed.
Additionally, we are sorry for an index mistake in line 816 where ``Figure 6`` should be ``Figure 5``.
>**Weakness 4. Supervised learning.**
Although SL-based approaches achieve better performance on some CO problems like TSP and CVRP, we believe SL is not suitable for sub-problem conquering due to its **poor applicability**: using SL for sub-problem conquering requires a large number of high-quality heuristic solutions as training labels. However, for most CO problems, heuristic algorithms are either not outstanding (e.g., PCTSP) or too time-consuming (e.g., min-max mTSP). SL-based sub-problem conquering would therefore compromise the applicability of UDC to these problems.
>**Weakness 5. Applicability.**
Yes, according to the discussion in Section 3.3, UDC can handle combinatorial optimization problems that meet the conditions.
This paper chooses these 10 CO problems to evaluate UDC mainly because they have **established datasets, baseline methods, and constructive solvers**. Once a new CO problem has established datasets, baseline methods, and constructive solvers, it can also be incorporated into the evaluation of the proposed UDC framework.
>**Weakness 6. Initial solution.**
Thanks for your valuable suggestion. We further conduct experiments on TSP with different initial solutions, and the results are shown in ``Table 4 of the one-page PDF``. Results demonstrate the contribution of a good initial solution to the final solution. The variants with a random or nearest greedy initial solution cannot converge to good objective values even with 50 UDC conquering stages. Moreover, from TSP500 to TSP2,000, the dividing policy of UDC produces similar results to the heuristic algorithm random insertion, which verifies the quality of the dividing policy.
>**Weakness 7. Readability.**
Thank you for your suggestion. We will unify the reporting methods, add footnotes, clarify all method names and references, and provide more instructions for appendices accordingly to improve readability. In Table 2 in the original manuscript, we will supplement the reported results of T2T [4].
***
>**References**
[1] Berto, Federico, et al. "Rl4co: an extensive reinforcement learning for combinatorial optimization benchmark." arXiv preprint, 2023.
[2] Drakulic, Darko, et al. "Bq-nco: Bisimulation quotienting for efficient neural combinatorial optimization." Advances in Neural Information Processing Systems 36, 2024.
[3] Bertazzi, L. et al. Min–max vs. min–sum vehicle routing: A worst-case analysis. European Journal of Operational Research, 2015.
[4] Yang Li, Jinpei Guo, Runzhong Wang, and Junchi Yan. From distribution learning in training to gradient search in testing for combinatorial optimization. Advances in Neural Information Processing Systems, 36, 2024.
---
Rebuttal Comment 1.1:
Comment: Thanks for your rebuttal. I would like to raise my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for raising the score. | Summary: The paper proposes a Divide-Conquer-Reunion training approach for solving multiple "large"-scale COPs.
Strengths: 1- Handling several problems under the same framework, with possibly different components per CO problem.
2- When compared to learning-based methods, in terms of testing time and solution sizes, the proposed approach achieves competitive results.
3- The Divide-Conquer-Reunion training algorithm.
Weaknesses: [Major Comments]
1- Large- and small-scale instances are not properly defined. Large-scale cannot simply mean instances larger than the relatively small instances previously tested by learning-based methods. Scalability is the main bottleneck in many COPs. For example, ILPs are very efficient on small-scale instances of the MIS problem and are also efficient on larger sparse graphs.
2- Furthermore, any learning-based CO solver (or NCO method) must undergo generalization testing/analysis for every problem. While proposing a method that solves many CO problems is the main goal of the paper, investigating generalization can differ significantly for each COP. Doing so leads to an improved understanding of the limitations/capabilities of the proposed framework. For TSP, the weights and size of the graph are the two parameters to consider. However, for MIS, there are additional parameters such as graph density and degree distribution. The authors only consider ER700 with p=0.15 (probability of edge creation per node).
3- The proposed solver should tackle what heuristics and exact solvers cannot handle. Comparing run-time results with heuristics is not valid, as learning-based methods require datasets and training time (~10 days for the proposed method). For example, the LKH3 results in Table 8 indicate that this heuristic requires only 6 seconds (unclear whether this is an average or total time) to solve 16 instances of OVRP2000 (which the authors consider large-scale), whereas the proposed approach takes 20 minutes, excluding training time.
4- The availability of datasets is a challenging problem. For example, if at testing time, a real-world graph from the SNAP dataset (https://snap.stanford.edu/data/) is presented to UDC. How is it handled? The SNAP graphs do not follow a generative function in NetworkX which means training instances are not available. A discussion is needed here.
5- No details about graph sparsification or why it is needed. In Section 3.1, it is briefly described for TSP and MIS. How about other problems? The paper considers 10 problems. Furthermore, if the edges of the original graph in MIS are maintained, how is that graph sparsification?
6- In the second paragraph of Section 3.1, the write-up covers VRP and TSP, but no discussion is given of the other problems.
7- For the previous two points, Appendix B covers some of the details, but not all. For example, how is the sub-problem preparation done for the MIS?
8- In line 150, if the dividing and conquering stages can be defined as MDP, can you define the MDP explicitly? For example, what are the States set, Transition function, and Reward function?
9- What is “i” in \Omega_i in (5)? If it is from the set defined in line 132, shouldn't it be k?
10- What are the objective function(s) for training DCR? Since training is the main contribution of the paper, this needs to be explained properly. Saying "we use the REINFORCE algorithm" is not clear! Algorithm 1 outlines the procedure but does not explain this point. I think the training algorithm should be placed in the main body of the paper, while the gradients in (6) should be placed in the Appendix.
11- What are the functions Rank and Top-k in line 190? A citation or brief description is needed.
12- How are the constraints in Appendix B being handled?
13- Generally, there are many unclear descriptions of the proposed methodology. Even if the proposed approach consists of many components, they need to be properly described and defined. Examples include points 5, 6, 7, 8, 10, 11, and 12.
14- The last three rows of the sub-tables in Table 7 (Appendix D) indicate that the proposed approach is not fully unified across different COPs. They not only suggest that different components are enabled or disabled for every COP, but also the use of different strategies/architectures for obtaining the policies. For example, DCR (the main contribution of the paper) is disabled for the KP problem.
[Minor]
There are many writing issues and typos in the paper. Few examples are:
1- "exp" in (3) and (4) are denoted using different fonts.
2- The subscripts of $\mathcal{H}$ are not defined in (4).
3- The title of Section 3.3 can be "Applicability in General CO Problems".
4- "Experiments" in line 199.
5- "Definitions" in line 539.
Technical Quality: 2
Clarity: 1
Questions for Authors: See Weaknesses.
Confidence: 4
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: See Weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your time and effort in reviewing our work. We are glad to know that you find the proposed UDC achieves competitive results and can handle several problems under the same framework with possible different components per CO problem.
We sincerely appreciate the effort you have put into reviewing this article and we address your concerns as follows.
>**Weakness 1. Definition of ``large-scale``.**
We agree that the definition of large-scale cannot consider only the size of instances. Following the definition in [1], a large-scale CO problem has more data, decision variables, or constraints (large sparse graphs may have relatively more decision variables and data but fewer constraints). We will provide a clearer definition of large-scale CO problems.
>**Weakness 2 & 4. Generalization.**
We deeply agree that generalization performance on instances with different parameters is essential for testing NCO methods. However, we must note that for some CO problems, there are no established datasets for evaluating generalization. When generalizing the model trained on ER graphs with p=0.15 to the ``Social circles: Facebook`` graph in the SNAP dataset, we find that UDC can still output legal solutions with seemingly good objective values, but there is no MIS ground truth for this graph, so we cannot verify UDC's ability on it.
We evaluate the generalization performance of UDC on established out-of-domain datasets, e.g., vehicle routing problem (VRP) instances with coordinates sampled from non-uniform distributions. As exhibited in Table 13 in Appendix D.3, the generalization ability of UDC on TSP and CVRP surpasses that of existing neural solvers. Such an advantage may be attributed to the normalization process in the conquering stage.
>**Weakness 3. Solving time.**
Firstly, we apologize for the typo in the time results of OVRP and we will revise it to the content shown in ``Table 3 of the one-page PDF``, where LKH3 takes 16 minutes for the OVRP500 dataset, 32 minutes for OVRP1,000, and 27 minutes for OVRP2,000.
We agree that neural methods should tackle what heuristics and exact solvers cannot handle, especially when heuristics achieve only low performance or require unacceptable time to generate solutions. For certain CO problems, UDC compensates in both of these situations. Considering **performance**, UDC produces better results than all available heuristics on (S)PCTSP and min-max mTSP. Considering **solving time**, we acknowledge that NCO methods require additional, time-consuming training. However, well-trained models, such as UDC for large-scale OP and TSP, can generate good solutions within several seconds. Compared to heuristics, the online-solving characteristic of UDC may be valuable for practical applications that require fast response.
>**Weakness 5-7, 13. Pipeline settings.**
Thanks for your comments. We will improve the expression in Section 3 and Appendix B based on your suggestions to eliminate ambiguity, and we will also publish all the code after the paper is published to show the detailed implementation.
For weakness 5, in non-graph-input CO problems, especially VRPs, constructing sparse graphs aims to remove unhelpful edges and thereby reduce time complexity; it is a general step in existing heatmap-based solvers. By processing only sparse graphs, each GNN layer achieves a time complexity of $O(KN)$ ($K$ is the neighbor size, usually 100 in UDC), which is helpful for large-scale VRPs. In addition, this procedure constructs sparse graphs for instances rather than performing ``graph sparsification``. For MIS, the ER graph with p=0.15 is itself sparse, so no additional step is needed to construct a sparse graph. For weakness 7, a sub-MIS instance is generated from randomly selected nodes of the original MIS instance.
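A minimal sketch of such a $K$-nearest-neighbour sparse-graph construction (illustrative only; the brute-force distance matrix here stands in for whatever neighbour search the actual implementation uses, and `knn_sparse_graph` is a hypothetical name):

```python
import numpy as np

def knn_sparse_graph(coords, k):
    """Build a sparse k-nearest-neighbour edge list from node coordinates,
    so each GNN layer processes O(k*N) edges instead of O(N^2)."""
    n = len(coords)
    # Pairwise Euclidean distances (fine for a sketch; use a KD-tree at scale)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude self-loops
    nbrs = np.argsort(d, axis=1)[:, :k]  # k nearest neighbours per node
    edges = [(i, int(j)) for i in range(n) for j in nbrs[i]]
    return edges
```

With $k=100$ as in UDC, the edge count grows linearly with $N$, which is what makes message passing feasible on large-scale VRP instances.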
>**Weakness 8 & 10. MDP & training.**
Please refer to ``Common Concern 2`` for the MDP definition & objective functions.
>**Weakness 11: Line 190.**
Only CO problems with decomposable aggregate functions [2] can be solved by the divide-and-conquer method, so UDC cannot be applied to CO problems with indecomposable aggregate functions, such as minimizing the visiting rank of given nodes or the top-k longest route in a graph.
>**Weakness 12. Constraint handling.**
Feasibility masks will mask out illegal actions when generating both initial solutions and sub-solutions (you can also refer to ``Question 1 of reviewer Eauz`` for details).
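For illustration, such feasibility masking is typically implemented by setting the logits of illegal actions to $-\infty$ before the softmax, so they receive zero probability. A minimal sketch (our own illustration, not the authors' code):

```python
import numpy as np

def masked_softmax(logits, feasible):
    """Zero out infeasible actions before normalising, so the policy can
    never sample an action that would violate a constraint."""
    masked = np.where(feasible, logits, -np.inf)
    z = masked - masked.max()          # shift for numerical stability
    e = np.exp(z)                      # exp(-inf) evaluates to 0
    return e / e.sum()

# e.g. in a routing problem, already-visited nodes are marked infeasible
logits = np.array([1.0, 2.0, 0.5, 3.0])
feasible = np.array([True, False, True, False])  # nodes 1 and 3 visited
probs = masked_softmax(logits, feasible)
```

The same mechanism covers both decoding stages: the mask simply encodes whichever constraint (capacity, visit-once, time window, ...) applies to the problem at hand.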
>**Weakness 14. Unified configuration.**
We would like to clarify that 'unified' in this article means that UDC adopts a unified training framework for the dividing and conquering stages. For the 10 CO problems involved, there is no single conquering policy that can solve all problems (please refer to ``Common Concern 1``), so UDC adopts different conquering policies for different CO problems.
**The third-to-last** and **second-to-last** rows of Table 7 relate to the environment or constraints of different CO problems. For KP, the sub-optimal connection shown in Figure 2 of the original paper cannot occur, so there is no difference in optimality whether or not DCR is enabled. We provide the KP results with DCR enabled in ``Table 2 of the one-page PDF``, and these results support the above conclusion.
The two-side conquering baseline may not be applicable to certain CO problems, such as ATSP and CVRP, where reversing a sub-solution would be illegal, so we disable this baseline for those problems. We will add an explanation summarizing all the settings in Table 7.
>**Minor Weakness & Weakness 9. Typos.**
Yes, the $\Omega_i$ should be $\Omega_k$. Thanks for pointing typos out, we will fix all of them accordingly.
***
>**References**
[1] Gervet, Carmen. "Large scale combinatorial optimization: A methodological viewpoint." DIMACS series, 2001.
[2] Jesus, Paulo, Carlos Baquero, and Paulo Sérgio Almeida. "A survey of distributed data aggregation algorithms." IEEE Communications Surveys & Tutorials, 2014.
---
Rebuttal 2:
Title: My concerns are partially addressed.
Comment: Definition of large Scale instance:
- CO problems have been well studied, with a history dating back to the seventies (see "Reducibility among combinatorial problems" for an example). Understanding the difficulty of each problem should give the authors a better sense of which instances are more challenging to solve and, therefore, how to evaluate them accordingly. Harder instances are not necessarily larger instances. For the MIS example, mid-density graphs (relative to complete graphs) with n~2000 are much harder to solve with KaMIS and ILP solvers than the very large, very sparse SNAP graphs (with millions of nodes).
Generalization:
- Generalization in COPs shouldn't rely on the availability of a dataset. Most datasets in COPs were specifically created for ML4CO methods, but that doesn't mean datasets will always be available in practice. For example, in the case of MIS, apart from the SATLIB dataset (which represents hard, sparse instances compared to ER), there are no other datasets with accurately known MISs.
- The generalization test can be as simple as the following: Evaluate your ER-trained model with different values of p and n. Does your model require different graph embeddings for larger graph sizes?
- For SNAP, you can compare with SOTA heuristics (KaMIS) unless the proposed model requires different embeddings or needs to be trained from scratch (or possibly fine-tuned). If any of these conditions apply, they should be stated and discussed.
- Prior to the development of DNNs, we didn't know how to classify images. This is not the case for COPs as they are not typical supervised ML problems. For COPs, we already have (1) well-performing problem-specific heuristics and/or ILP solvers, and (2) a large body of previous problem-specific studies that investigate the hardness of every problem.
- Please note that I am not saying that the proposed method should obtain SOTA results (in terms of both solution sizes and run-time) on all CO problems on all instances. I am saying that the proposed framework should be evaluated such that its limitations/capabilities are clear and understood, with the main target: can the proposed framework, by any metric, outperform SOTA problem-specific heuristics that do not require any data on specific instance(s)? For MIS, the results in Table 9 say otherwise if we account for the nearly 10 days of training. Even if we exclude the training time and observe only the inference time (21.05m), can the proposed method outperform KaMIS when p or n change for ER? These are the types of questions that should be accounted for when evaluating the 10 problems.
- I acknowledge the authors' results for TSP in Tables 3 and 4 of the one-page PDF in the rebuttal. This is a good example of evaluating the performance of UDC for a CO problem.
Overall:
- My comments on solving time, pipeline settings, the description of the MDP and the training algorithm, and constraint handling are well explained by the authors. I highly recommend integrating these into subsequent versions of the paper.
- I think the advantage of this proposed framework is obtaining solutions (not necessarily the best in terms of solution quality and run-time) to some instances of 10 different CO problems.
- I thank the authors for their efforts and the additional experiments. As such, I will increase my score to 5 as I still believe that the capabilities and limitations of the proposed framework of most problems are not well-understood.
---
Rebuttal Comment 2.1:
Comment: Thank you very much for your valuable comments and for raising your score to 5. We are glad to hear that some of your concerns have been resolved and that you recognize the advantage of UDC in applicability. We will include these details in our manuscript and provide a formal definition of large-scale accordingly.
Thank you very much for your helpful suggestions on the generalization study. We will try our best to accomplish these experiments as soon as possible and provide the results before the end of the rebuttal period.
By the way, the score in the system seems to remain unchanged. Could you please double-check it?
---
Rebuttal 3:
Comment: Thank you for changing the score. Sorry, we're a little confused about your question. Does the graph embedding refer to the embedding itself or its size?
If it is the former, different graph inputs are bound to get different embeddings.
If it is the latter, the answer is no. The method for computing embeddings is unchanged regardless of the size of the input graph. The graph embedding obtained at each vertex is always a d-dimensional vector in $\mathbb{R}^{d}$, and no new parameters are introduced.
When AGNN computes the graph embedding of a vertex, it only needs the neighborhood information of that vertex, not the full graph. Therefore, the computation of embeddings is independent of the graph size.
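This locality can be illustrated with a minimal sketch (our own illustration, not the authors' AGNN implementation): each vertex aggregates only its neighbors' $d$-dimensional embeddings, so the update rule and the embedding dimension are independent of the number of vertices.

```python
import numpy as np

def neighborhood_update(h, adj_list):
    """One illustrative message-passing round: vertex v is updated using
    only the embeddings of its neighbors, so the same rule applies to any
    graph size n and the embeddings stay d-dimensional."""
    n, d = h.shape
    out = np.zeros_like(h)
    for v in range(n):
        neighbors = adj_list[v]
        msg = h[neighbors].mean(axis=0) if neighbors else np.zeros(d)
        out[v] = h[v] + msg  # still a vector in R^d
    return out

# 4-vertex path graph with d = 3 embeddings
h = np.ones((4, 3))
adj = [[1], [0, 2], [1, 3], [2]]
h_new = neighborhood_update(h, adj)  # shape (4, 3)
```

The same function handles a larger graph without any change to its parameters, which is the point being made about graph-size independence.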
---
Rebuttal 4:
Comment: >**Further experiments to validate the capabilities and limitations**:
Thank you for your valuable suggestions on generalization experiments. In addition to TSP and CVRP, we conduct generalization experiments on the MIS problem. We perform zero-shot generalization from the model trained on ER graphs with $n\sim U(700,800)$ and $p=0.15$ to five other datasets, including:
* A random ER graph dataset with n=720 and p=0.05. (average 12,960 undirected edges)
* A random ER graph dataset with n=720 and p=0.25. (average 64,800 undirected edges)
* A random ER graph dataset with n=2,000 and p=0.05. (average 100,000 undirected edges)
* The ego-Facebook data in SNAP. (4,039 nodes, 88,234 undirected edges)
* The feather-lastfm-social data in SNAP. (7,624 nodes, 27,806 undirected edges)
The first three ER random datasets each contain 128 graph instances, and we treat each of the last two SNAP datasets as a single non-connected graph. The tables below compare UDC with the SOTA heuristic KaMIS (implementation from https://github.com/KarlsruheMIS/KaMIS) on the original in-domain dataset (ER-[700-800], p=0.15) and the five out-of-domain datasets listed above. Obj. is the objective function value, Gap is the performance gap relative to the best algorithm, and Time is the time required to solve the whole dataset.
*Left columns: ER, n=720, p=0.05; right columns: ER-[700-800], p=0.15 (in-domain).*

| Method | Obj. | Gap | Time | Obj. | Gap | Time |
|---|---|---|---|---|---|---|
| KaMIS | 99.5 | - | 40.6m | 44.9 | - | 52.13m |
| UDC-$x_0$ ($\alpha=50$) | 79.5 | 20.10% | 8s | 41.0 | 8.62% | 40s |
| UDC-$x_{50}$ ($\alpha=50$) | 89.5 | 10.07% | 9.03m | 42.9 | 4.44% | 21.05m |
| UDC-$x_{250}$ ($\alpha=50$) | 94.5 | 5.03% | 44.84m | 43.8 | 2.41% | 1.73h |

*Left columns: ER, n=720, p=0.25; right columns: ER, n=2,000, p=0.05.*

| Method | Obj. | Gap | Time | Obj. | Gap | Time |
|---|---|---|---|---|---|---|
| KaMIS | 28.2 | - | 1.4h | 133.9 | - | 3h |
| UDC-$x_0$ ($\alpha=20$) | 21.6 | 23.46% | 58s | 116.9 | 12.72% | 2m |
| UDC-$x_{50}$ ($\alpha=20$) | 25.3 | 10.54% | 21m | 119.1 | 11.05% | 21m |
| UDC-$x_{250}$ ($\alpha=20$) | 26.4 | 6.59% | 63m | 122.9 | 8.25% | 90m |

*Left columns: SNAP-ego-Facebook; right columns: SNAP-feather-lastfm-social.*

| Method | Obj. | Gap | Time | Obj. | Gap | Time |
|---|---|---|---|---|---|---|
| KaMIS | 1052.0 | - | 155s | 4177.0 | - | 0.1s |
| UDC-$x_0$ ($\alpha=1$) | 767.0 | 27.09% | 6s | 3622.0 | 13.29% | 2s |
| UDC-$x_{250}$ ($\alpha=1$) | 901.0 | 14.35% | 11s | 3751.0 | 10.20% | 13s |
| UDC-$x_{2500}$ ($\alpha=1$) | 1009.0 | 4.09% | 60s | 4067.0 | 2.63% | 80s |
Compared to KaMIS, UDC variants exhibit relatively stable performance gaps across datasets with various numbers of nodes (n), sparsity levels (p), and generation methods, demonstrating good generalization ability. Moreover, except on the extremely sparse graph SNAP-feather-lastfm-social, UDC variants have significant advantages over KaMIS in terms of time.
As for learning-based methods, we are currently unable to run the released code of any learning-based algorithm following its instructions, so we are temporarily unable to compare generalization ability against other learning-based algorithms. We will try to include these learning-based baselines in future versions of the manuscript. In addition, beyond MIS, we will also strive to conduct generalization tests on more CO problems.
We hope the above discussion could address your remaining concerns. Please let us know if you have any further concerns.
Title: Further experiments to validate the capabilities and limitations. | Summary: The paper introduces a Unified Neural Divide-and-Conquer (UDC) framework designed to tackle large-scale combinatorial optimization problems by leveraging a novel training methodology called Divide-Conquer-Reunion (DCR). This framework employs graph neural networks for the division of problems and utilizes established constructive solvers for conquering sub-problems, aiming to optimize the entire process. UDC has been evaluated across a range of benchmark datasets and demonstrated superior performance compared to existing neural and heuristic methods, particularly in scalability and solution quality.
Strengths: 1. **Innovative Framework**: UDC integrates a novel training methodology that effectively addresses the challenge of sub-optimal divide policies, improving overall solution quality.
2. **Extensive Applicability**: The framework demonstrates broad applicability across various large-scale combinatorial problems, confirming its versatility and effectiveness.
3. **Strong Empirical Results**: UDC outperforms several baseline methods on a diverse set of problems, which strongly supports the utility and efficiency of the approach.
Weaknesses: **Dependence on Specific Architectures**: The success of the method heavily relies on the choice of graph neural networks and the specific configurations of constructive solvers, which might not generalize across all types of combinatorial optimization problems.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How does the UDC framework ensure that feasible initial solutions are generated during the division stage, particularly when traditional heuristics are not applicable?
2. Does the UDC framework require extensive expert intervention for tuning neural network parameters and training regimens to achieve optimal performance, especially when adapting to new or unseen problem types?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: **Constraint Handling**: UDC may not perform well on problems with intricate constraints (e.g., time windows in routing problems) that require more than just feasibility checks.
This is an interesting field, and if the authors can provide sufficiently detailed responses to the questions raised, I would be willing to consider revising my score.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your time and effort in reviewing our work. We are glad to know that you find the proposed UDC integrates a novel training methodology, demonstrates broad applicability across various large-scale CO problems, and outperforms several baseline methods.
We address your concerns as follows.
>**Weakness 1. Dependence on specific architectures:** The success of the method heavily relies on the choice of graph neural networks and the specific configurations of constructive solvers, which might not generalize across all types of combinatorial optimization problems.
Thanks for your insightful comments. However, we respectfully disagree with you that the success of UDC heavily relies on the choice of graph neural networks and the specific configurations of constructive solvers. Our reasons are as follows:
**Constructive solvers**: As illustrated in ``Common Concern 1`` of the general response, UDC involves 10 CO problems and it is impossible to employ the same constructive solver for all CO problems [1]. However, UDC exhibits outstanding flexibility in employing different constructive solvers as the conquering policy. Experiments mentioned in ``Common Concern 1`` show that the final performance of UDC does not rely on a specific constructive model for the conquering policy, which means that any available constructive solver with good performance can be chosen.
**Graph neural networks (GNNs)**: The current UDC framework only evaluates the AGNN for the dividing policy across all 10 CO problems. However, the use of AGNN [2] in UDC is not a specific choice either. **1)** UDC adopts a GNN instead of other network structures because of its lightweight memory and time consumption; **2)** AGNN is one of the most basic forms of message-passing neural network [3], which only calculates the bidirectional edge information and bidirectional node information within neighborhoods; and **3)** when using AGNN to process the embeddings of all 10 CO problems, UDC follows a consistent configuration.
>**Question 1. feasible initial solution**: How does the UDC framework ensure that feasible initial solutions are generated during the division stage, particularly when traditional heuristics are not applicable?
For involved CO problems, UDC autoregressively constructs the initial solution node-by-node based on the heatmap generated by GNN. At every step, it employs a **feasibility mask mechanism** to ensure the feasibility of the final generated solution. This mechanism is fundamental in both heatmap-based solvers [4] and constructive solvers [5]. It prevents the selection of nodes that would lead the next partial solution outside the set of all feasible solutions $\Omega$. For example, the feasibility mask mechanism for TSP will mask the selection probabilities of all visited nodes to 0.
With the feasibility mask, the dividing policy can construct valid solutions for most CO problems. UDC cannot be applied to the relatively rare CO problems in which the feasibility mask mechanism cannot ensure a valid initial solution.
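As a concrete illustration (a hypothetical sketch, not the authors' code), the TSP feasibility mask mentioned above can be written as follows: selection probabilities of already-visited nodes are set to 0 and the remaining mass is renormalized over feasible nodes.

```python
import numpy as np

def mask_and_renormalize(probs, visited):
    """Hypothetical TSP feasibility mask: zero out already-visited nodes
    so only feasible next nodes can be sampled, then renormalize."""
    masked = probs.copy()
    masked[list(visited)] = 0.0     # infeasible actions get probability 0
    return masked / masked.sum()    # redistribute mass over feasible nodes

# toy heatmap row over 4 nodes; nodes 0 and 2 are already visited
p = mask_and_renormalize(np.array([0.4, 0.3, 0.2, 0.1]), {0, 2})
```

Sampling the next node from `p` can then never revisit a node, which is exactly how the mask keeps the partial solution inside $\Omega$.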
>**Question 2. Requirement on tuning**: Does the UDC framework require extensive expert intervention for tuning neural network parameters and training regimens to achieve optimal performance, especially when adapting to new or unseen problem types?
UDC does not conduct special tuning for specific CO problems. As shown in Appendix D.1, we use the same learning rate and AGNN structure across all 10 CO problems.
When applying to a new problem type, UDC requires:
**1)** Employing an available constructive solver for conquering policy (Please refer to ``Common Concern 1`` \& ``Weakness1``).
**2)** Setting up new RL environments and feasibility masks based on the constraints.
**3)** Changing $\alpha$ and $\beta$ to maximize CUDA memory usage. Note that this change does not affect solution quality; it only improves training efficiency.
***
>**References**
[1] Kwon, Yeong-Dae, et al. "Matrix encoding networks for neural combinatorial optimization." Advances in Neural Information Processing Systems 34, 2021.
[2] Xavier Bresson and Thomas Laurent. "An experimental study of neural networks for variable graphs." In ICLR 2018 Workshop, 2018.
[3] Gilmer, Justin, et al. "Message passing neural networks." Machine learning meets quantum physics, 2020.
[4] Ruizhong Qiu, Zhiqing Sun, and Yiming Yang. DIMES: A differentiable meta solver for combinatorial optimization problems. In Advances in Neural Information Processing Systems 35, 2022.
[5] Zhou, Changliang, et al. "Instance-Conditioned Adaptation for Large-scale Generalization of Neural Combinatorial Optimization." arXiv preprint, 2024.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: Thank you for your detailed and thoughtful rebuttal. Your explanations have addressed my concerns, particularly regarding the flexibility of the UDC framework in terms of architecture and solver selection. I now have a better understanding of how the framework ensures feasible solutions during the division stage and the level of tuning required for different problem types.
This is indeed an impressive framework with broad applicability across various combinatorial optimization problems. I am satisfied with your responses and will maintain my positive score.
---
Reply to Comment 1.1.1:
Comment: Thanks for your support and positive score. We are glad to hear your appreciation of the broad applicability of UDC and that our explanation has addressed your concerns. | Rebuttal 1:
Rebuttal: We sincerely thank all reviewers for their constructive comments and valuable suggestions. These suggestions greatly help us to improve our manuscript. The current manuscript has received extensive positive evaluations regarding its novelty, effectiveness, applicability, and impact:
* **Reviewer Eauz**: UDC integrates a novel training methodology that effectively addresses the challenge of sub-optimal divide policies. It demonstrates broad applicability across various large-scale combinatorial problems. UDC outperforms several baseline methods on a diverse set of problems, which strongly supports the utility and efficiency of the approach.
* **Reviewer h7Dg**: When compared to learning-based methods, in terms of testing time and solution sizes, the proposed UDC achieves competitive results.
* **Reviewer kAJo**: The UDC framework is designed with applicability to a wider range of CO problems, without heavy reliance on problem-specific heuristics like in the previous works. The authors conduct extensive experiments where the proposed framework's performance is shown to be generally good on certain problems.
* **Reviewer Uqww**: This paper conducts extensive experiments. UDC exhibits effectiveness in 10 different CO problems, improving the existing divide-and-conquer methods in both effectiveness and applicability. The idea of DCR is quite novel and experiments give detailed explanations to the setting reason for hyperparameters and components.
***
We address some common concerns shared by different reviewers in this response.
>**Common Concern 1**: Explanation of choosing different constructive solvers (i.e., models for conquering policy) for different CO problems. (Reviewer Eauz, Reviewer h7Dg)
Using the same model for conquering policy is an ideal choice. However, the experiment of UDC involves 10 CO problems, and some CO problems, like min-max mTSP [1] and ATSP [2], require specific model structures to process their inputs and constraints. Consequently, there is no constructive solver that can be used for all 10 involved CO problems. For each CO problem, UDC adopts the best-performance constructive solver for the conquering policy (i.e., summarized in Appendix Table 7).
UDC exhibits flexibility in choosing the constructive solver for the conquering policy. For well-developed CO problems like TSP and KP, there are multiple constructive solvers available. Table 5 in the original paper presents an ablation study on the selection of these constructive solvers for TSP. In rebuttal, we conduct a similar ablation study on KP, and the results are displayed in ``Table 1 of the appendix one-page PDF``. Both sets of results indicate that the final performance of UDC does not rely on a specific constructive model for the conquering policy.
***
>**Common Concern 2**: Presentation of training objective and their Markov Decision Process (MDP) (Reviewer h7Dg, Reviewer kAJo)
In addition to Algorithm 1 in Appendix C.6, we will include descriptions of the training objective of both policies and the MDP for the two stages.
**Objective functions of optimizing the two networks**: For a CO problem (or a sub-CO problem) with an objective function $f(\cdot)$, the objective of optimizing both networks is to maximize the reward functions of their corresponding MDP. The reward is $-f(\boldsymbol{x}_2)$ for the GNN, $-f(\boldsymbol{s}^1)$ for the constructive solver in the Conquer step (i.e., the first conquering stages), and $-f(\boldsymbol{s}^2)$ in the Reunion step.
**MDP for the dividing stage**: The MDP $\mathcal{M}_d=${$\mathcal{S}_d,\mathcal{A}_d,\boldsymbol{r}_d,\mathcal{P}_d$} of the dividing stage can be represented as follows:
* State. The state $st_d \in \mathcal{S}_d$ represents the current partial solution. The state at the $t$-th time step is the current partial solution with $t$ nodes, $st_{d,t}=\boldsymbol{x}_{0,t}=(x_{1},x_2,\ldots,x_t)$; $st_{d,0}$ is empty and $st_{d,T}=\boldsymbol{x}_0$.
* Action \& Transition. The action is to select a node at time step $t$, i.e., $a_{d,t}=x_{t+1}$. The chosen node must keep the partial solution $st_{d,t+1}=(x_1,\ldots,x_t,x_{t+1})$ valid (i.e., $st_{d,T}\in \Omega$).
* Reward. Every time step has the same reward $r_{d,t}=-f(\boldsymbol{x}_2)$, the negative objective function value of the final solution $\boldsymbol{x}_2$ after the whole DCR process.
* Policy. The policy $\pi_d$ is shown in Eq. (4) of the original paper.
**MDP for the conquering stage**: The MDP $\mathcal{M}_c=${$\mathcal{S}_c,\mathcal{A}_c,\boldsymbol{r}_c,\mathcal{P}_c$} of any single conquering stage is represented similarly as follows:
* State. Each state $st_c\in \mathcal{S}_c$ represents a partial solution of a sub-CO problem. The state at the $t$-th time step is the current partial (sub-)solution with $t$ nodes, $st_{c,t}=\boldsymbol{s}_{t}=(s_{1},s_2,\ldots,s_t)$; $st_{c,0}$ is empty and $st_{c,T}=\boldsymbol{s}$.
* Action \& Transition. The action is likewise to select a node at time step $t$, i.e., $a_{c,t}=s_{t+1}$.
* Reward. The reward at each time step is the negative objective value of the sub-CO solution $\boldsymbol{s}$, i.e., $r_{c,t}=-f(\boldsymbol{s})$.
* Policy. The policy $\pi_c$ is shown in Eq. (5) of the original paper.
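The dividing-stage MDP above can be summarized in a minimal sketch (illustrative names and structure of our own devising, not the authors' implementation): the state is the growing partial solution, an action appends one feasible node, and every time step shares the terminal reward $-f(\boldsymbol{x}_2)$.

```python
class DividingMDPSketch:
    """Illustrative sketch of the dividing-stage MDP described above."""

    def __init__(self, n_nodes):
        self.n_nodes = n_nodes
        self.state = []  # st_{d,0} is the empty partial solution

    def feasible_actions(self):
        # feasibility mask for a TSP-like problem: unvisited nodes only
        return [v for v in range(self.n_nodes) if v not in self.state]

    def step(self, action):
        assert action in self.feasible_actions(), "infeasible action"
        self.state = self.state + [action]  # st_{d,t+1} = (x_1, ..., x_{t+1})
        return self.state

    def rewards(self, f_x2):
        # every time step gets the same shared reward r_{d,t} = -f(x_2)
        return [-f_x2] * self.n_nodes


mdp = DividingMDPSketch(4)
for node in (2, 0, 3, 1):
    mdp.step(node)
```

The shared terminal reward is what couples the dividing policy to the quality of the final solution after the whole DCR process.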
***
Point-to-point responses can be found below. We are also glad to continually improve our work to address any further concerns.
Best Regards,
Paper12083 Authors
***
>**Reference**
[1] Son, Jiwoo, et al. "Equity-Transformer: Solving NP-Hard Min-Max Routing Problems as Sequential Generation with Equity Context." AAAI 2024, 2024.
[2] Kwon, Yeong-Dae, et al. "Matrix encoding networks for neural combinatorial optimization." NeurIPS 2021, 2021.
Pdf: /pdf/7b47234a8702c530ffbf0b7093d4e68d141bc625.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Probabilistic size-and-shape functional mixed models | Accept (poster) | Summary: The paper deals with estimation for functional data, i.e., time-series represented as functions, under a specific observation model that aims to separately account for a fixed effect (modeled as a mean function) and other variations (modeled as noise functions), with the added confounding variable being a norm-preserving warping function that leads to the observed data.
The paper proceeds to develop a Bayesian approach to estimating the model parameters, under certain assumed distributions for the noise terms, and certain basis functions to represent the functions in a finite-dimensional space.
Evaluation is shown both on simulated data and some real functional data examples.
Strengths: The paper does a good job of introducing various temporal warping models, and their intuition on many example data. I found the mathematical and visual explanations of the differences between value preserving and norm preserving warps illuminating. The paper is well positioned to advance the field of functional data analysis as more and more applications get impacted by these methods.
The paper tries to motivate the presented mixed effects models by highlighting historical examples based on 2D shape analysis, which provides good context for the proposed developments.
The paper also has good strategies for actually implementing the infinite dimensional computations using projections onto basis functions such as B-splines.
Weaknesses: Comparisons are limited to a publicly available implementation of the technique known as warpMix. Default parameters are used as given in the R implementation. At the very least, some attempt at matching the noise variance assumed in the warpMix model and the proposed model should be made. The default noise variance parameter in warpMix is at the level of 10^-3, whereas the noise variances for the proposed model are much higher at sigma_c^2 = 0.25, sigma^2 = 1, etc. Are these comparable in any sense, and if not, how should this comparison be made more fairly?
Further, the paper cites a wealth of literature that uses a model similar to the one used, at least as far as the assumed norm preserving action model. There are papers by the cited authors that use these models for estimating mean functions from a stack of observed functions, mainly by Kurtek, Srivastava and colleagues. Many of these references are also cited in the paper, but some attempt should be made to compare to these approaches.
Beyond simulation, for the task of recovering functions (and functional parameters) under an assumed observation model, evaluation should include some real downstream task. Otherwise, we are left with only visual assessments of quality. For instance, the results shown in Figure 4 are claimed to be better in the sense that the estimate is 'as desired and contains sharp features that are representative of the observed data' (line 351, pg 9). These are at best subjective assessments, and it is unclear if such an estimate is useful in some quantifiable way. There are many ways to take this to the next step of evaluation, e.g., to see if some downstream task such as classification shows improvement after recovery of the needed parameters.
Technical Quality: 3
Clarity: 3
Questions for Authors: I would like to know how well the method could be implemented without using basis function approximations – i.e. representing functions simply by their samples. What gets simplified and what gets complicated. What are the practical implications of assuming a specific family of basis functions such as B-Splines. I did notice the data-driven functional PCA basis was tried as well, in the appendix, which seemed to perform worse than B-splines.
Additional questions are on evaluation, which are described in the experiments section of the review.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The paper has a reasonable description of the limitations of the model and its assumptions in section 3.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Noise Variance** The warpMix default noise variance parameter, "sigmaEpsilonTilde", which is set to $10^{-3}$ in the R implementation, is the variance of $\theta_i$, a set of parameters in the model for the phase functions (Equation (6) in Claeskens et al., Nonlinear Mixed Effects Modeling and Warping for Functional Data, 2021). It is not the noise variance or the variance of the random effect ($\sigma^2$ and $\sigma^2_c$ in our model). In Simulated Example 2 in Sec. 4.1, we compare the proposed Bayesian model to warpMix. There, we actually simulate data using the warpMix model (with default parameter values). Note that, as in the warpMix model, the data is generated using the value-preserving action and not the norm-preserving action used in our model. Despite this, our model recovers the underlying fixed effect function $\mu$ with higher accuracy than the warpMix model (see Table 1 in the manuscript and attached file). In this regard, all of our comparisons to warpMix are fair.
**Novelty and Comparison to Existing Methods** To the best of our knowledge, the Bayesian functional mixed effects model proposed in our manuscript is the first to use the norm-preserving action as a size-and-shape preserving random effect on the original function space. The cited papers use the norm-preserving action on a transformed space, the space of square-root velocity functions, which corresponds to the value-preserving action on the original space (see Chapter 4 in Srivastava et al., Functional and Shape Data Analysis, 2016). In addition, none of those models incorporate size-and-shape altering random effects. The table in the attached file includes a quantitative comparison of the proposed modeling framework to the model of Cheng et al., Bayesian Registration of Functions and Curves, 2016 which is a mean+noise model and utilizes the aforementioned square-root velocity representation. As can be seen in this table, our model outperforms their approach, labeled BRFC, in terms of recovery of the mean (fixed effect) function $\mu$ on all simulated data examples.
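For context, the two group actions being contrasted can be written out explicitly (our summary of the standard formulation in the functional data analysis literature, e.g., Srivastava et al., Functional and Shape Data Analysis, 2016; not notation lifted from the paper). For a function $f$ and a warping function $\gamma$ with $\dot\gamma > 0$:

```latex
% Value-preserving action: function values are unchanged, only re-timed.
(f, \gamma) \mapsto f \circ \gamma

% Norm-preserving action: the Jacobian factor preserves the L^2 norm,
% by the change of variables u = \gamma(t):
%   \|(f \circ \gamma)\sqrt{\dot\gamma}\|_2^2
%     = \int f(\gamma(t))^2 \, \dot\gamma(t) \, dt
%     = \|f\|_2^2 .
(f, \gamma) \mapsto (f \circ \gamma)\sqrt{\dot\gamma}
```

Under the square-root velocity transform, the value-preserving action on the original space corresponds to the norm-preserving action on the transformed space, which is the distinction the response draws.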
**Evaluation of Estimation Results and Novelty** The question of whether the fixed effect function $\mu$ can be reliably recovered is a highly studied problem within the functional data analysis literature, and sufficient conditions have only recently been identified. In this context, our conclusion that the geometric size-and-shape of $\mu$ is recoverable is novel. This finding has clear implications in many machine learning applications with functional data as input to neural networks, which additionally incorporate a random effect in its architecture (Simchoni et al., Integrating Random Effects in Deep Neural Networks, 2023). However, incorporating downstream tasks, e.g., a classification layer, into the proposed model is something of interest and we plan to pursue this in future work. Our quantitative evaluations are based on simulated datasets where the true underlying fixed effect function $\mu$ is known. In these cases, we outperform state-of-the-art competitors including warpMix and BRFC as seen in the table in the attached file. As mentioned by the reviewer, we rely on qualitative evaluations for the real data examples. Nonetheless, in most real data scenarios it is fairly clear that the fixed effect function estimated using the proposed model is "better" or more representative of patterns in the observed data than that recovered using warpMix; see Sec. 4.2 and App. H. For the PQRST complexes, the warpMix model fails to yield an estimate, and for many of the other datasets it tends to oversmooth expected prominent features of $\mu$.
**Alternative Model Formulation** The fixed effect function and size-and-shape altering random effect components of our probability model are specified using linear combinations of basis functions, *but not the discretized data itself*. The main motivation behind this choice is dimension reduction (sample size $n$ is typically smaller than number of time points per function), and this is the most common approach in specifying models for functional data. That said, an alternative approach would be to specify the model pointwise using parameter function evaluations as suggested by the reviewer. This, however, would drastically increase the dimension of the parameter space for the fixed effect function, complicating inference. The basis functions are only used to specify the prior distributions for the fixed effect function and the size-and-shape altering random effect. In that sense, basis functions allow us to enforce desired smoothness in the estimate of $\mu$, which would be much more difficult if the fixed effect function was modeled pointwise. As mentioned by the reviewer, we tried to use a data-driven FPCA basis in our model; see App. F. However, we found that this basis contained small scale variation, even in the leading components, making it ineffective at modeling the fixed effect.
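The dimension-reduction point can be illustrated with a small sketch (our own example, not the paper's code, using piecewise-linear "hat" basis functions, i.e., degree-1 B-splines, as a stand-in for the cubic B-splines discussed above): a function observed at many time points is parameterized by only a handful of basis coefficients.

```python
import numpy as np

def hat_basis(t, centers):
    """Piecewise-linear 'hat' basis functions (degree-1 B-splines) on a
    uniform grid of centers; illustrative stand-in for cubic B-splines."""
    width = centers[1] - centers[0]
    # column j holds the hat function centered at centers[j], evaluated at t
    return np.maximum(0.0, 1.0 - np.abs(t[:, None] - centers[None, :]) / width)

centers = np.linspace(0.0, 1.0, 10)   # 10 basis functions
coefs = np.sin(2 * np.pi * centers)   # the only free parameters
t = np.linspace(0.0, 1.0, 200)
B = hat_basis(t, centers)             # (200, 10) design matrix
mu_vals = B @ coefs                   # mu(t) = sum_j c_j B_j(t)
```

Here a fixed-effect function evaluated at 200 time points is governed by only 10 coefficients, and smoothness is controlled through the basis rather than through 200 pointwise parameters.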
---
Rebuttal Comment 1.1:
Comment: I am satisfied by the rebuttal and have upgraded my decision to Accept. I suggest summarizing the key clarifications in the camera-ready draft if the final decision is indeed an Accept.
---
Reply to Comment 1.1.1:
Comment: Thank you again for providing constructive comments during the review period and for considering our rebuttal. If accepted, we will make sure to provide the necessary clarifications in the camera-ready version of the manuscript. | Summary: The paper considers uncertainty quantification for one-dimensional regression tasks where the observations are noisy. A rather advanced additive model considering invariances under space-time unitary transformations is proposed. Numerical experiments demonstrate the superiority of the approach over other state-of-the-art methods.
Strengths: - Due to the strong modeling assumptions, on synthetic datasets the method can recover the true signal even in situations where there is lots of noise and classical methods (such as GPs) may fail.
- The method seems to be useful for analyzing the Berkeley growth spurt dataset.
Weaknesses: - It is unclear whether the work will be of interest to the wider machine learning audience and the considered topic and datasets feel more of a niche.
- Despite the well-written introduction, the paper was quite difficult to understand due to the advanced math and notation -- after reading it I am still quite unsure why I should really care about functional mixed models and their Bayesian inference.
Technical Quality: 3
Clarity: 2
Questions for Authors: I am slightly puzzled as to why NeurIPS was chosen as a venue for this paper - the work may not get the appreciation it deserves here. I spent quite some time trying to understand the paper, but its motivation and applications to wider machine learning remain unclear. Perhaps the paper may be better suited to a statistics journal, such as the ones referenced often in the paper? If more general motivations are clarified, I am considering changing my score, but accessibility issues remain.
Confidence: 1
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: All limitations are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's concern regarding accessibility to the broader machine learning community. If the manuscript is accepted, we plan to simplify notation as much as possible and provide more intuitive descriptions of some of the mathematical concepts.
There were several factors that motivated us to choose NeurIPS as the outlet for this work.
**1.** Functional data is arising as a common object in various applications, including computer vision and biomedical imaging, and functional data models are becoming increasingly important in machine learning, see e.g., Rao et al., Modern Non-linear Function-on-function Regression, 2023 and Rao et al., Nonlinear Functional Modeling Using Neural Networks, 2023. Furthermore, employing mixed models with random effects to better model correlated input data in neural networks is fast gaining traction, see e.g., Simchon et al., Integrating Random Effects in Deep Neural Networks, 2023.
**2.** The field of shape analysis and more broadly geometric data analysis on quotient spaces, which provides part of the motivation behind the proposed modeling framework, falls under the umbrella of machine learning with broad applications in computer vision, graphics and biomedical imaging.
**3.** In both functional data analysis and shape analysis, geometry and invariance to nuisance variation play an important role and aid in model formulation and estimation. Indeed, these ideas can be applied more broadly in machine learning, and there is increasing interest in the use of geometry and invariance in neural networks; see Bronstein et al., Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges, 2021 and references therein.
**4.** Our motivation stems from understanding the type of signal that can be recovered by a neural network or any probabilistic model from complex input data in the presence of nontrivial symmetries and geometric information. We have considered one specific instance of this, versions of which have been extensively studied in the statistics community. | Summary: For the problem of the reliable recovery of a fixed effect function $\mu$, this paper focuses on sampling from and summarizing the posterior distribution of a fixed effect function $\mu$ in a functional mixed model with random object-level phase and amplitude components, without a finite-rank covariance assumption on the error process.
---
Post-rebuttal: The rebuttal addressed most of my concerns and I would like to raise my score.
Strengths: 1) This paper tries to solve a challenging task, recovering the size-and-shape of an unknown function by sampling.
2) The theoretical analysis is interesting, e.g., the derivation of the Metropolis–Hastings acceptance ratio.
3) The authors also provide a convergence analysis via numerical experiments.
Weaknesses: 1) It seems that we should give different prior models for different functions.
2) The parameter sensitivity of $\theta$ and the running time of the proposed method should be given.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1) Neural networks can also recover functions by sampling. What are the advantages of the proposed method compared to neural networks?
2) How can we recover arbitrary functional data with a fixed model in Eq. (3)?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Prior Models** We want to address this comment from two different perspectives. First, depending on the real data scenario, the choice of the type/number of basis functions in the model for the fixed effect function $\mu$ and the size-and-shape altering random effect $v_i$ can be different, e.g., for growth rate functions we used the modified Fourier basis to model the fixed effect $\mu$, while for the PQRST complexes we used B-splines. This choice is data dependent. At the same time, different types of basis functions can be used for the fixed effect function and the size-and-shape altering random effect. In general, we recommend the use of B-splines for the size-and-shape altering random effect since they have compact support and are able to capture local variation around the fixed effect function. In the end, the prior distributions are specified for the basis function coefficients.
**Computational Cost and Sensitivity Analyses** Lines 366-370 provide information about the computational cost of MCMC sampling for the proposed Bayesian model. In short, we require ~111 minutes to yield 100,000 posterior samples for inference. The figure in the attached file contains sensitivity analyses with respect to (i) the number of basis functions used to model the fixed effect $\mu$, (ii) the number of basis functions used to model the size-and-shape altering random effect $v_i$, and (iii) the value of the concentration hyperparameter $\theta_\gamma$ in Prior Model 2 for the phase functions (size-and-shape preserving random effect). Through these experiments, we show that posterior inference is robust in almost all scenarios, except when the number of basis functions to model the fixed effect is under-specified (this is not unexpected since the ground truth $\mu$ does not lie in the span of the basis functions used in the model).
**Neural Networks** We agree that neural networks can be used to approximate posterior distributions and their functionals; in other cases, they can aid in designing more efficient MCMC samplers by optimizing the proposal distribution such as in Li et al., A Neural Network MCMC Sampler That Maximizes Proposal Entropy, 2021. We have not explored these directions and leave them for future work. The primary goal of the current paper is to introduce a Bayesian functional mixed model, motivated by size-and-shape models in the context of shape analysis, that accounts for size-and-shape altering as well as size-and-shape preserving random effects. The novelty is in treating phase variation in functional data as a size-and-shape preserving random effect. We use standard MCMC for posterior inference, but plan to consider other approaches for approximating the posterior distribution under this Bayesian model in the future.
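For readers unfamiliar with the MCMC machinery referenced in this exchange, a minimal random-walk Metropolis–Hastings sketch is given below. This is purely illustrative and not the paper's sampler: the target log-posterior is a stand-in Gaussian over two hypothetical basis coefficients, and the step size is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(b):
    # Stand-in log posterior over basis coefficients: an isotropic Gaussian
    # centered at (1, -2). The paper's actual posterior is far more complex;
    # this only illustrates the Metropolis-Hastings mechanics.
    return -0.5 * np.sum((b - np.array([1.0, -2.0])) ** 2)

def metropolis_hastings(n_iter=20000, step=0.8):
    b = np.zeros(2)                  # initial coefficients
    lp = log_post(b)
    samples = np.empty((n_iter, 2))
    for i in range(n_iter):
        prop = b + step * rng.standard_normal(2)   # symmetric random-walk proposal
        lp_prop = log_post(prop)
        # With a symmetric proposal, the acceptance ratio reduces to the
        # posterior ratio (log scale here for numerical stability).
        if np.log(rng.uniform()) < lp_prop - lp:
            b, lp = prop, lp_prop
        samples[i] = b
    return samples

draws = metropolis_hastings()
print(draws[5000:].mean(axis=0))     # should be near (1, -2) after burn-in
```

Approaches such as the cited neural-network samplers effectively learn a better proposal distribution than the fixed random walk used here.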
**Generative Model (3)** In (3), we specify a generative model for functional data that incorporates a fixed effect function $\mu$, a noise process $\epsilon_i$, a size-and-shape altering random effect $v_i$, and a size-and-shape preserving random effect $\gamma_i$. The ability to simulate arbitrary functional data based on the model in (3) depends on the assumptions, e.g., smoothness, etc., on each of these model components. That said, this is a very general model, and to the best of our knowledge, the first one to consider both, size-and-shape altering and size-and-shape preserving random effects. | Summary: The paper studies the problem of recovering a fixed effect function µ in functional mixed models, where measurement errors and object-level phase variations make the task difficult. It focuses on disentangling the size-and-shape characteristics of µ, which remain invariant under certain transformations. The authors hypothesize that it is feasible to reliably recover the size-and-shape of µ using a Bayesian functional mixed model framework. The formulation is justified by numerical experiments on synthetic and real data.
Strengths: 1. The presentation overall is clear, self-contained, and precise. While this might have sounded standard in the past, I find these properties rare in papers these days, so I appreciate them even more.
2. The formulation is carefully backed up by justifications for the choice of components (e.g., basis functions, prior distributions) as well as numerical experiments.
Weaknesses: ## Significance:
1. The main proposal of this paper is to integrate phase variations into the model in a phase-preserving manner (equations 3 and 4). Nonetheless, I am struggling to understand what is the significance of such a model. More precisely,
1. Is this a reasonable model for real applications? Namely, are there some practical applications in which we have reasons to believe the phase functions act on the fixed effect mu in a phase-preserving way?
2. What are the main conceptual benefits and drawbacks of phase-preserving compared to value-preserving actions? The only answer I seem to find is in lines 142-147, but I do not understand the statement there as I detail in “Clarity”.
2. The comparison of the proposed method with warpMix on real data (Figure 4) seems limited, and this comment relates to the one above on real applications. It is hard to say which method is better when there is no ground truth - the two methods give different estimates of mu, but it is hard to say which one makes more sense.
## Clarity:
1. Lines 142-147: Why is “The equivalence class of functions having the same size-and-shape under the norm-preserving action is in an appropriate sense ‘larger’ than the one under the value-preserving action”?
1. The explanation following the colon is not sufficient, since the same could be said for value-preserving action: for any g and gamma(t), one can find f(t) := g(gamma^{-1}(t)) such that f(gamma(t)) = g(gamma^{-1}(gamma(t))) = g(t).
2. The explanation in footnote 1 is a bit hand-wavy and it would be great if rigorous proof could be provided or referred to.
## Novelty:
The idea that the parameterization of the model is not minimal and therefore there is the ambiguity of having an equivalence class of solutions has been around for a long time, as the authors also point out in equation (2). Since (2) is an example in geometric vision, which I am a bit more familiar with, I would like to point out more high-level connections here.
1. A basic idea to resolve the ambiguity is to enforce constraints that favor certain elements in each equivalence class over others. This is more or less what the authors are doing near line 76. Note however that: while two sets of parameters can be equivalent in the sense that if there were no noise then the two sets of parameters explain the observation equally well, in the noisy case the two sets of parameters could handle noise very differently. Therefore, one needs to be smart on which elements in each equivalence class to pick. For example, the work of [A] argues that picking a particular transformation (normalization) in the parameters improves numerical stability of the estimation in the noisy case.
2. Following the same line, two sets of “equivalent” parameters could also be favored differently when the data are corrupted by outliers. An alternative idea to resolve the ambiguity is to estimate the entire equivalent class (rather than finding an element of it), which is for example the path explored in the work of [B].
So far I do not see a strong need for the current paper to exhaust all options, but it would be great if the authors could discuss how optimal the choice of line 76 is, what might be potential alternatives, etc.
[A] R. I. Hartley, “In defense of the eight-point algorithm,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 1997.
[B] Ding, T., Yang, Y., Zhu, Z., Robinson, D. P., Vidal, R., Kneip, L., & Tsakiris, M. C, “Robust homography estimation via dual principal component pursuit”, In the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020.
Technical Quality: 3
Clarity: 2
Questions for Authors: ## Minor comments / questions:
1. Could you comment on how using MCMC to get parameters from the posterior compares to using optimization, e.g., by writing down the log likelihood and then maximizing it over the parameters?
2. The introduction was somewhat confusing, as illustrated below.
3. The first paragraph motivates the reader with the Berkeley growth data example, saying that a problem for modeling is that the sample size is smaller than the dimension of the observations, so one must assume priors/constraints on the hypothesis class. Yet the model in equation (1) is rather general and does not assume much prior structure for recovery.
4. Now the second paragraph makes the model even more complicated and harder to estimate by introducing phase variability. In particular, why isn’t model (1) sufficient and why does phase variability make sense?
To sum up, I think there are multiple problems motivated and highlighted along the way which somehow buries the key issue of phase variability.
3. The reviewer is grateful that some basics on phase functions and size-and-shape-preserving transformations are provided in section 2. Nonetheless, when I read Section 2 (before going through the rest of the paper) it is unclear why are these properties important - are they going to contribute to the solution? Which key properties allow you to do something that previously was not done? This for example can be said in the preamble of section 2 to make the flow smoother.
4. Line 119: gamma in Gamma?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: See above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Significance, Size-and-Shape Preserving Transformations** Broadly, for *any* application involving functional data, our model has a two-fold motivation: (i) the norm-preserving action $D_\gamma$ on the fixed effect $\mu$ and the size-and-shape altering random effect $v_i$, expressed as linear combinations of orthonormal bases, allows us to rotate the basis systems to better align with observed data, and (ii) isometric group actions play a key role in finite and infinite-dimensional shape analysis to account for nuisance variation (Srivastava et al., Functional and Shape Data Analysis, 2016). For (i), the modeling flexibility enabled by $D_\gamma$ results in more parsimonious models, especially since $\mu$ is represented via a finite, and potentially low-dimensional, linear combination of basis functions. This has applied implications since the type of basis is chosen a priori. The viewpoint in (ii) allows us to formulate models and carry out computations in a geometric fashion.
**Comparisons** Quantitative comparisons to warpMix on synthetic data generated via the warpMix model in Sec. 4.1 show that our model performs better in terms of recovery of $\mu$. In Sec. 4.2, we consider two real datasets: growth rate functions and PQRST complexes. In the first, warpMix only recovers a single growth spurt while our model recovers two growth spurts, a smaller initial one and later pubertal one. In the second, warpMix fails to produce an estimate. We provide results for more real datasets in App. H. In general, warpMix oversmooths the estimate of $\mu$ resulting in fewer geometric features than expected. Finally, posterior MCMC samples from our model can be used to quantify uncertainty, which is an important applied consideration (warpMix provides a point estimate).
**Equivalence Classes** The example you provide demonstrates surjectivity of the operator from a value-preserving action, but such an operator is not unitary since it does not preserve the $\mathbb L^2$ norm. Operator $D_\gamma$, on the other hand, is unitary for every $\gamma \in \Gamma$. Let $[f]_n:=\{f(\gamma)\sqrt{\dot \gamma}:\gamma\in\Gamma\}$ be the equivalence class of a function $f$ under the norm-preserving action and $[f]_v:=\{f(\gamma):\gamma\in\Gamma\}$ be the equivalence class under the value-preserving one. Class $[f]_v$ contains functions that have the *same* sequence of ordinate values as $f$ - the image $t \mapsto f(t)$ is preserved. On the other hand, $[f]_n$ contains functions with new ordinate values (due to the scaling factor $\sqrt{\dot \gamma}$) that are originally not in $f$, and new extrema may be created. Intuitively, thus, $[f]_n$ is "larger" than $[f]_v$. A rigorous proof that compares sizes of two open sets (e.g., open balls) in two norm topologies induced by two quotient metrics (w.r.t. the equivalence relation) can be provided. However, the following simple example with Brownian motion $W(t)$ illustrates the above point. Let $X(t)=W(c t)$ with $c>0$ be a time-changed process, corresponding to a value-preserving action by scaling of time on some interval in $(0,\infty)$. Then, the law of $X$ is singular w.r.t. the law of $W$ for every value of $c$ except $c=1$. However, under the norm-preserving map, $X(t)=W(\gamma(t))\sqrt{\dot \gamma(t)}$ is absolutely continuous w.r.t. $W(t)$ for any $\gamma \in \Gamma$ (Example 6.5.2 in Bogachev, Gaussian Measures, 1998).
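The unitarity claim in this response is easy to check numerically. The following sketch (our own illustration; the warp $\gamma$ and the test function are arbitrary choices) verifies on a fine grid that the norm-preserving action $f \mapsto f(\gamma)\sqrt{\dot\gamma}$ leaves the $\mathbb L^2$ norm unchanged, while the value-preserving action $f \mapsto f(\gamma)$ does not:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 20001)           # fine grid on [0, 1]
f = np.sin(2 * np.pi * t) + 0.5 * t        # arbitrary test function

# Boundary-preserving diffeomorphism gamma of [0, 1] and its derivative;
# gamma'(t) = 1 + 0.2*pi*sin(2*pi*t) stays positive since 0.2*pi < 1.
gamma = t + 0.2 * np.sin(np.pi * t) ** 2
dgamma = 1.0 + 0.2 * np.pi * np.sin(2 * np.pi * t)

def l2_norm(g):
    # Trapezoidal approximation of the L2([0, 1]) norm on grid t.
    w = np.diff(t)
    return np.sqrt(np.sum(w * (g[:-1] ** 2 + g[1:] ** 2) / 2))

f_value = np.interp(gamma, t, f)           # value-preserving: f(gamma(t))
f_norm = f_value * np.sqrt(dgamma)         # norm-preserving:  D_gamma f

print(l2_norm(f), l2_norm(f_norm))         # these two agree (change of variables)
print(l2_norm(f_value))                    # this one differs
```

The agreement of the first two norms is exact in the continuum by the change of variables $u = \gamma(t)$; the small residual here is purely discretization error.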
**Equivalent Solutions** Indeed, line 276 introduces a post-hoc constraint to choose an appropriate element of the equivalence class of $\mu$ in a data-driven manner, ensuring that the resulting $\mu$ is associated with identity phase $\gamma_{id}(t)=t$; this step is necessary since functional data comes under arbitrary phase variation. This is commonly referred to as orbit centering where the average of phase functions is used to choose an orbit representative (Chapter 4 of Srivastava et al., Functional and Shape Data Analysis, 2016). *This choice is optimal with respect to minimizing the extrinsic Frechet variance of the $\gamma_i$s.* An alternative that can be used in the presence of phase outliers is the Frechet median. We thank the reviewer for pointing us to the two references. For [A], we agree that choosing an appropriate equivalence class representative is difficult in the presence of varying levels of noise. In our model, we use finite-dimensional priors on phase, which help us constrain the size of equivalence classes. For [B], it is an interesting idea to recover the entire equivalence class instead of a representative. However, the parameter spaces for our model are (high or) infinite-dimensional (even though some dimension reduction is enforced via prior formulations). Under the one-dimensional Prior Model 1 on phase, one can potentially explore the entire equivalence class of the posterior mean of $\mu$.
**Frequentist Inference** Maximum likelihood estimation of parameters would yield the maximum a posteriori estimate corresponding to our model under appropriate non-informative prior choices. Despite the higher computational cost, we favor Bayesian inference since it allows us to easily quantify posterior uncertainty.
**Introduction** We agree that, while important, the main issue motivating our framework is not the data dimension. Instead, the motivation stems from the fact that children can undergo different numbers of growth spurts of different magnitudes at different times. This implies that the data contains phase variation, which is not accounted for in model (1) (an implicit assumption in (1) is that there is no time variation in growth spurts). Phase variation in our model acts as a size-and-shape preserving random effect that allows better alignment of the magnitude and timing of growth spurts across observations. If accepted, we will clarify our motivation. Methodological motivation of our model, as opposed to one such as warpMix with value-preserving phase variation, is given in lines 31-45.
We will use $\gamma\in\Gamma$ in line 119.
---
Rebuttal Comment 1.1:
Title: Thank you for your rebuttal
Comment: Thanks to the reviewers for the response! I am clear on all except the following concerns:
**Clarity 1.1**, **Significance 1.2** / **Equivalence Classes**: I am unsure how this intuition helps conclude that $[f]_n$ is larger than $[f]_v$. From your argument, I can agree that functions in $[f]_v$ satisfy the property you mentioned (i.e. same sequence of ordinate values), and that functions in $[f]_n$ do not satisfy that property. However, functions in $[f]_n$ could a priori satisfy some other properties that functions in $[f]_v$ do not satisfy, making $[f]_n$ small.
- I do not have any clue if that is indeed the case, but your argument does not prevent such cases from happening (if I'm understanding it correctly), which is why I could not understand the intuition. Do I miss something here?
**Significance 1.1**: I guess I can agree on what (i) and (ii) do, i.e., what norm-preserving action and isometric group actions do, but that's not what I was trying to ask. Maybe I am not understanding your response. My original questions are regarding concrete applications, in which we know the better way to model the signals is by norm-preserving actions rather than the alternatives (e.g., value-preserving actions or area-preserving actions)
- Are there such applications? Why are norm-preserving actions a better model (other than empirical numbers showing they are better)?
- Maybe you already had them in the paper/response: are you saying that equation (2) is an example? Or time warping?
---
Reply to Comment 1.1.1:
Comment: Thank you for your additional questions.
**Clarity 1.1, Significance 1.2 / Equivalence Classes**: You are indeed right that $[f]_n$ need not be larger than $[f]_v$ for every function $f$ - that necessarily depends on the function $f$. Our claim on sizes of equivalence classes, however, is with respect to the measure induced on the quotient $\mathbb{L}^2/\sim_n$ compared to the quotient $\mathbb{L}^2/\sim_v$ starting from a non-degenerate measure on $\mathbb{L}^2$; here $\sim_n$ is the equivalence relation on $\mathbb{L}^2$ induced by the norm-preserving action, and similarly $\sim_v$ for the value-preserving action. The example in our response illustrates this when the measure on $\mathbb{L}^2$ is the Wiener measure (with support on continuous functions): its push forward under $\sim_v$ is effectively singular with respect to the Wiener measure, and thus prescribes zero mass to $[f]_v$; in contrast, the push forward of the Wiener measure under the quotient map corresponding to $\sim_n$ is absolutely continuous with respect to the Wiener measure, and prescribes positive mass to $[f]_n$ for every $f$, since the norm-preserving action is an isometry of $\mathbb{L}^2$. Indeed, this does not preclude the possibility of constructing a measure on $\mathbb{L}^2$ that violates the preceding claim; but, in our opinion, such measures are not “natural” since they are not compatible with isometries of $\mathbb{L}^2$, which in this case is the norm-preserving action.
Further, ignoring the random effect, if the fixed effect function is mis-specified with fewer basis functions than required to capture the correct number of extrema, the value-preserving action cannot rectify this, whereas the norm-preserving one can. This has modeling significance since, in a way, the norm-preserving action makes the model more robust. Of course, the size-and-shape altering random effect can introduce new extrema, but only on the individual level and not the population level.
The discussion has brought to light the need to be clearer with the claim on sizes of the equivalence classes in the paper, and we thank you for this.
**Significance 1.1**:
1. Using the Hilbert space $\mathbb L^2([0,1],\mathbb{R})$ as the representation space for the observed functional data is popular, mainly due to the availability of an orthonormal basis that enables working with the basis coefficients and facilitates dimension reduction. Given this, modeling the presence of phase variability via the $\mathbb L^2$-norm preserving action of $\gamma$ allows us to use the geometry of the quotient space to develop computational tools; the value-preserving action, in contrast, preserves the uniform $\mathbb L^\infty$-norm $\sup_t|f(t)|$. Moreover, an action preserving any $\mathbb L^p$ norm for $1 \leq p <\infty$ can be converted to an action that preserves the $\mathbb L^2$-norm via the pointwise transformation $f(t)\mapsto \text{sgn}(f(t))|f(t)|^{p/2}$ for every $t \in [0,1]$. The norm-preserving action allows more flexibility in modeling the fixed effect function $\mu$ as compared to the value-preserving action since time warping is accompanied by local rescaling. While a definitive answer to your question is difficult, in many applications including biology, medicine and biometrics, the size-and-shape of $\mu$ is of primary interest for inference. One application that motivates our model, as opposed to one that utilizes a value-preserving action such as warpMix, is in recovery of the underlying number and magnitude of growth spurts based on a sample of growth rate curves (from the Berkeley growth data). It is well known in this context that children undergo different numbers of growth spurts with different magnitudes occurring at different times. This implies that the data contains phase variation, and the primary task of interest is recovery of the size-and-shape (magnitude and number) of the population-level fixed effect function $\mu$.
Our estimate, which contains two growth spurts, a smaller initial one and a larger pubertal one later on, agrees with expectations, since it has been shown that children undergo more than one growth spurt during the growth process. The warpMix estimate, on the other hand, contains only a single growth spurt.
2. It is commonplace in geometric statistics and statistical shape theory to incorporate the action of a nuisance transformation as an isometric group action (see, e.g., the books Shape and Shape Theory by Kendall et al. and Functional and Shape Data Analysis by Srivastava et al.). For example, when the size-and-shape of a landmark configuration is of interest, since rotations and translations do not change the configuration's size-and-shape, they are nuisance transformations, and, due to the use of the Euclidean norm on the landmarks, act isometrically on the configuration. Model (2) in our paper uses this in its formulation, and inspires our model for functional data in the presence of phase variation. | Rebuttal 1:
Rebuttal: We thank all reviewers for their careful consideration of our manuscript and constructive comments.
**Significance and Motivation** Employing mixed models with random effects to better model correlated input data in neural networks is fast gaining traction (e.g., Simchon et al., Integrating Random Effects in Deep Neural Networks, 2023). In this context, when the inputs are functional data observed with *arbitrary* phase variability, it is important to understand what feature/property of the fixed effect function $\mu$ may be reliably recovered. Our main contribution is the proposal, and investigation, of a geometric reformulation of the problem: we use the phase component in the probabilistic model as a space-time unitary transformation, and then determine a data-driven optimal rotation of the model's coordinates to recover the size-and-shape of $\mu$, with corresponding uncertainty estimates. The property of the norm-preserving action being an infinite-dimensional rotation of $\mathbb{L}^2$ is key in the formulation of the proposed model; this action is an isometry under the $\mathbb{L}^2$ metric. Since the fixed effect function (and the size-and-shape altering random effect) are eventually represented using a finite-dimensional basis set, such rotations allow us to align the $\mathbb{L}^2$ coordinate system of model components to the coordinate system of each observation, thus resulting in a more parsimonious model. Quantitative and qualitative evaluations on simulated and real datasets confirm these claims.
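As a concrete illustration of the generative structure described above, the following sketch simulates functional observations as a fixed effect plus a size-and-shape altering random effect, transported by a random phase function through the unitary action, plus noise. This is our own toy rendering under invented assumptions: the bases, warp family, and variance values are illustrative choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 501)

# Fixed effect mu: a small Fourier-type combination (illustrative choice).
mu = 1.5 * np.sin(2 * np.pi * t) + 0.8 * np.cos(4 * np.pi * t)

def unitary_action(f, gamma, dgamma):
    """Norm-preserving action D_gamma f = f(gamma) * sqrt(gamma')."""
    return np.interp(gamma, t, f) * np.sqrt(dgamma)

def simulate(n=5, sigma_v=0.3, sigma_eps=0.05):
    data = []
    for _ in range(n):
        # Size-and-shape altering random effect v_i (crude random smooth bump).
        v = sigma_v * (rng.standard_normal() * np.sin(np.pi * t)
                       + rng.standard_normal() * np.sin(3 * np.pi * t))
        # Size-and-shape preserving random effect: a random warp gamma_i;
        # |a| < 1/pi keeps gamma'(t) positive, so gamma_i is a diffeomorphism.
        a = rng.uniform(-0.15, 0.15)
        gamma = t + a * np.sin(np.pi * t) ** 2
        dgamma = 1.0 + a * np.pi * np.sin(2 * np.pi * t)
        f = (unitary_action(mu + v, gamma, dgamma)
             + sigma_eps * rng.standard_normal(t.size))
        data.append(f)
    return np.array(data)

obs = simulate()
print(obs.shape)   # n curves, each observed on the 501-point grid
```

Because the phase acts unitarily, each warped curve has (up to noise) the same $\mathbb{L}^2$ norm as $\mu + v_i$, which is exactly the "size-and-shape preserving" property the model exploits.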
**Implementation** Our main contribution is a novel geometric perspective on the problem of reliably recovering the size-and-shape of a fixed effect function $\mu$ using a Bayesian functional mixed model with *unconstrained* (infinite-dimensional) size-and-shape altering and preserving random effects, in the presence of noise. As such, the computational algorithm used for posterior inference via MCMC serves more as a proof of concept than a definitive (optimal) computational tool, but nevertheless outperforms the current state-of-the-art (SOTA). There are many alternative algorithms that can be used to approximate the posterior distribution including variational approaches, and neural networks for intractable likelihoods and MCMC (e.g., Li et al., A Neural Network MCMC Sampler That Maximizes Proposal Entropy, 2021).
**Additional Results in Attached File** The attached PDF file contains a figure and a table. In the figure, we show results of more extensive sensitivity analyses to misspecification of three hyperparameters: $B_f$ (number of basis functions used to model the fixed effect function $\mu$), $B_r$ (number of basis functions used to model the size-and-shape altering random effect $v_i$) and $\theta_\gamma$ (concentration hyperparameter in Prior Model 2 (PM 2) on the size-and-shape preserving random effect or phase function $\gamma_i$). Each plot shows the centered estimate of the posterior mean (red), 95% credible interval (dashed blue) and ground truth $\mu$ (black). Rows 1-3 show results for under-specified, correctly specified and over-specified values of hyperparameters, respectively. Posterior inference, in terms of the posterior mean for $\mu$ and its uncertainty as ascertained via the 95% credible interval, is very robust to (i) over-specification of $B_f$, (ii) under or over-specification of $B_r$, and (iii) under or over-specification of $\theta_\gamma$. When $B_f$ is under-specified, we are unable to reliably recover the ground truth $\mu$ since it does not lie in the subspace spanned by the specified basis functions. Thus, in general, we recommend specifying a larger number of basis functions to model the fixed and size-and-shape altering random effects.
The table reports a more comprehensive quantitative evaluation, in terms of estimation accuracy for the fixed effect function $\mu$, on five simulated datasets. Rows 1-2 consider data simulated from our model under Prior Model 1 (PM 1) and PM 2 for phase functions. Rows 3-5 consider data simulated using the warpMix model with default parameter values. We compare estimation results produced using our model (columns Model 1-F through Model 2-B where the number refers to the prior model on phase and the letters F or B refer to the Fourier or B-spline bases) to those produced using warpMix (Claeskens et al., Nonlinear Mixed Effects Modeling and Warping for Functional Data Using B-splines, 2021) and the mean+noise model proposed in Cheng et al., Bayesian Registration of Functions and Curves, 2016 (labeled as BRFC). As seen in the table, our model outperforms warpMix and BRFC in all of these simulation scenarios.
Pdf: /pdf/c4efb561ecc8b8bd3c35b4d88a6e0aa6f7ec6a05.pdf | NeurIPS_2024_submissions_huggingface | 2024 | Summary: This paper proposes a mixed model in a functional Hilbert space for the size-and-shape (a type of geometric property) of a square-integrable fixed effect. To this end, the authors consider an isometric action of the infinite-dimensional group of phase functions. Synthetic and real experiments, with comparisons to another method in the literature, are reported, demonstrating how the posterior mean under the proposed model captures the main properties of the function.
Strengths: + A Bayesian functional mixed effects model with an unrestricted form of phase variation is presented.
+ A novel perspective of inferring a size-and-shape function, a type of geometric constraint.
+ In general, the paper is concise and direct. Most explanations and discussions are provided. It is easy to follow, up to some points in formulation.
Weaknesses: - I think the notation could be improved, representing, for example, the use of vectors, matrices and scalars differently. In some cases, including dimensionality may help readers. Moreover, some symbols are different along text (see the transpose operator).
- A couple of linear combinations of basis functions are assumed, with the ranks of the two subspaces being Bf and Br. This type of formulation has been proposed in previous approaches, following an LDA style, where the rank of every subspace is known a priori.
- The authors could include a practical motivation of the study, where they could apply their approach and why it is important.
Technical Quality: 3
Clarity: 3
Questions for Authors: How is the rank of both subspaces fixed? By hand? How sensitive is the solution with respect to these parameters? Could Bf and Br be inferred automatically? This seems to be a key factor in the model, but the authors do not provide any discussion in this line.
The authors propose to use a B-spline basis, so a pre-defined one. Why this type of function? Note that we can find many piecewise polynomials in literature.
Could both basis functions be learned jointly with weight coefficients?
In real data, n=93, but when the authors use synthetic data, n=100,000. I cannot understand why the difference needs to be so big. In my opinion, if the goal is to sort out the problem in scenarios where the amount of data is limited, this should also be considered in the synthetic scenario.
Figure 1 (b) and (c) are not cited in the introduction.
The method consistently outperforms warpMix on synthetic data. While I like this comparison, I think the authors should consider more datasets for this evaluation. In addition to that, what about computational cost? I would like to see the trade-off accuracy vs. cost for both methods.
What about failure cases?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors do not analyze properly the limitations of the paper. In any case, I do not think the work in this paper can produce a negative social impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Notation, Figure 1, Sample Sizes** We agree that the notation is dense in certain sections. If accepted, we will try to simplify notation. We denote vectors and matrices in bold and functions and scalars in regular font. We will include dimensionality when appropriate. Figure 1 (b-c) shows the effect of the norm-preserving action $D_{\gamma}$ on $f$ (referred to on line 26 in Section 2). These images are provided in Figure 1 due to the page limit. In all simulations, we use $n=30$ (line 289). Thus, sample sizes in simulations and real data examples are comparable. The $N = 100,000$ is the number of posterior samples that are used for inference (line 280).
**Motivation** Incorporating random effects in neural networks is a popular technique to account for correlations in input data (e.g., Simchoni et al., Integrating Random Effects in Deep Neural Networks, 2023). In this context, our main motivation stems from ascertaining what may be reliably recovered by a Bayesian model when inputs are functions, and the objective is to estimate a population-level fixed effect function $\mu$ in the presence of two different types of random effects: size-and-shape altering one ($v_i$) and size-and-shape preserving one ($\gamma_i$, phase variation). Simulated and real data examples show that our model better recovers geometric features of $\mu$ than the SOTA model warpMix. We expect the understanding gained via the simple Bayesian model to have important consequences when overparameterized neural networks are used with input functional data. We discuss and analyze two motivating datasets in Sec. 1 and 4.2. The proposed framework is general with many applications; results for other real datasets are in App. H.
**Type of Basis and Joint Estimation** Our model allows use of *any* basis for the fixed effect $\mu$ and the size-and-shape altering random effect $v_i$. *We clarify that, despite choosing the type of basis apriori, the norm-preserving action $D_{\gamma}$ rotates the basis system toward the data, allowing us to learn a data-driven basis for $\mu$. Thus, theoretically, the apriori choice of basis is unimportant for recovery of size-and-shape of $\mu$, a crucial implication of our geometric formulation of the problem.* Below, we provide practical guidance on this issue.
Orthonormality: The basis should be orthonormal since our model is motivated by viewing the norm-preserving action as a rotation of the basis system.
Data complexity, fixed effect: Choice of basis for $\mu$ should be motivated by the structure of given data, e.g., we use a Fourier basis for growth rate data, but B-spline basis for PQRST complexes; this is based on prior belief of whether $\mu$ contains periodicity (growth rate data) or sharp geometric features (PQRST complexes). We found that a functional PCA (FPCA) basis, estimated using the data, is ineffective in modeling $\mu$ as it captures noise and small-scale variation (App. F).
Random effect: We prefer a basis with compact support, e.g., B-splines, to capture local variation and finer features as compared to the fixed effect.
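The orthonormality requirement above can be checked numerically. Below is a minimal sketch (not the authors' implementation) that builds the standard orthonormal Fourier basis on [0, 1] and verifies that its Gram matrix is close to the identity; the basis size of 5 is an illustrative choice:

```python
import math

def fourier_basis(b, t):
    # phi_0 = 1; phi_{2k-1} = sqrt(2) sin(2 pi k t); phi_{2k} = sqrt(2) cos(2 pi k t)
    if b == 0:
        return 1.0
    k = (b + 1) // 2
    if b % 2 == 1:
        return math.sqrt(2.0) * math.sin(2.0 * math.pi * k * t)
    return math.sqrt(2.0) * math.cos(2.0 * math.pi * k * t)

def inner(b1, b2, n=20000):
    # L2 inner product on [0, 1], approximated with the midpoint rule
    h = 1.0 / n
    return h * sum(fourier_basis(b1, (i + 0.5) * h) * fourier_basis(b2, (i + 0.5) * h)
                   for i in range(n))

# Gram matrix of the first 5 basis functions; should be close to the identity
gram = [[inner(i, j) for j in range(5)] for i in range(5)]
```

The same check applies to any candidate basis (e.g., orthonormalized B-splines); a Gram matrix far from the identity signals that the rotation-of-basis interpretation of the norm-preserving action no longer holds.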
We appreciate the comment on joint estimation, which can be quite challenging: one needs to disentangle MCMC errors related to estimating the functional subspace (what is the optimal basis and how many?) from that of estimating the size-and-shape of $\mu$. We will pursue this in the future for the fixed effect, which is the primary object of interest for inference, in two ways: by (i) placing a prior on the Stiefel manifold of orthonormal frames, or (ii) using ideas from Matuk et al., Bayesian Modeling of Nearly Mutually Orthogonal Processes, 2023.
**Number of Basis Functions** This is chosen apriori, and can be informed by exploratory analyses, e.g., FPCA. We found that posterior inference is robust to (i) under- and over-specification of the number of basis functions for the size-and-shape altering random effect, and (ii) over-specification for the fixed effect. It is somewhat robust to under-specification for the fixed effect; see figure in the attached file. Thus, one may choose a large basis for the fixed and random effects. Alternatively, we could treat $B_f$ and $B_r$ as random and infer them. We leave this for the future since posterior inference becomes more complex: we may have to use Reversible Jump MCMC to allow the parameter space to change dimension.
**warpMix Comparisons** We carefully chose simulations to ensure fair comparisons to warpMix. For quantitative evaluation, we use the criterion used in their work (line 318). Simulated Example 2 is particularly illuminating wherein the datasets are generated using the warpMix model, which uses a value-preserving action on $\mu$ and $v_i$ rather than a norm-preserving one. Our model qualitatively/quantitatively outperforms warpMix in all considered simulations. We provide comparisons to warpMix on real data in Sec. 4.2 and App. H; our model recovers a fixed effect with more pronounced geometric features. Also, since we use a Bayesian model, we are able to quantify uncertainty via 95% credible intervals; this is not straightforward in warpMix (result is only a point estimate).
Computationally, warpMix is more efficient: we rely on MCMC while warpMix uses iterative likelihood optimization. We discuss computational cost of our method in Section 5, lines 366-370 and mention that there is room for improvement in MCMC efficiency (most efficient MCMC was not our primary focus). For comparison, 10 iterations in warpMix take ~5 minutes as compared to ~111 minutes needed by MCMC for 100,000 posterior samples for inference.
**Failure Cases/Limitations** As mentioned earlier, we cannot reliably recover the true fixed effect when the number of basis functions is under-specified. This is not unexpected since the ground truth $\mu$ does not lie in the span of the basis functions used in the model. We discuss limitations in Section 5, including lack of theoretical support and computational cost.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications!
I am happy with the answers that have been provided. In any case, although recovering the rank is a complex problem, this point should not be hidden and must be included as part of the future work.
---
Reply to Comment 1.1.1:
Comment: Thank you for considering our rebuttal. If accepted, we will make sure to list rank/joint estimation as future work.
ControlSynth Neural ODEs: Modeling Dynamical Systems with Guaranteed Convergence | Accept (poster) | Summary: The paper introduces an extension to Neural ODEs. At its core this is an architectural change to Neural ODEs, adding more important structure to the dynamics function through a control signal $u(t)$. The paper shows that this can lead to rich non-linear dynamics with convergence guarantees. Extensive experiments on various synthetic and real systems show that structuring the dynamics in this way leads to faster training, better performance and reliable scaling.
Strengths: This is a really solid paper in my opinion. The paper identifies a problem with Neural ODEs and proposes a theoretically grounded solution. As far as I can tell the theory is correct. The evaluation is extensive, generally exploring all important aspects of the work.
The appendix is rich with important details, showing further results.
Weaknesses: The paper would be improved most by restructuring the writing. I believe the related work section would be better after the introduction rather than before the conclusion. Section 2 needs to be slightly improved. The description jumps straight into the mathematical detail without very much explanation of $u(t)$ or the subnetworks. I believe the paper would be improved significantly by reducing Section 3 by moving details to the appendix and giving a more intuitive explanation of the main result. Additionally use the space to give a more accessible explanation of CSODE in Section 2.
There should be some sort of comparison to Neural CDE, since it also uses control signals and is considered SOTA for Neural ODE methods. Other methods that could be compared against to improve the evaluation are Latent ODE and ODE-RNN.
Section 6 on scaling is good but should involve comparisons to other models. Since a claim of the paper is that CSODE has guarantees especially for larger models, this should be shown against NODE and NODE variants. Additionally, for the scatter plot in Figure 6, as far as I can tell this is about including more subnetworks, but this is not clear from the plot legend or figure caption.
Another claim is that this method inherently works at different spatial scales. There does not appear to be an experiment testing this, is this possible?
Technical Quality: 3
Clarity: 3
Questions for Authors: - Why is the Euler method used for ODE integration and not Dopri5? This is the standard ODE solver used in Neural ODE works.
- Is it possible to include results for Neural CDE/ODE-RNN/Latent ODE?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: - Limitations are well addressed in the conclusion.
- A thorough broader impact statement is provided in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
We sincerely appreciate your thorough review and valuable feedback on our paper. Your insights are crucial for improving the quality of our work. We have dealt with each of your comments or suggestions carefully.
## 1. Suggestions on the Organization of Paper
We are grateful for your constructive suggestions. We restructured the paper to enhance its readability as follows:
- Move the Related Work section to follow the Introduction.
- Strengthen the explanation in Section 2, providing a more intuitive and comprehensible introduction to CSODE.
- Simplify the content in Section 3, moving some technical details to the Appendix.
- Present Figure 6 more clearly to explicitly show the changes in the number of subnetworks.
## 2. Comparison with Neural CDE and ODE-RNN
We acknowledge the importance of Neural CDE and its excellent performance in learning and prediction.
In the revision, we discuss Neural CDE in more detail in the Related Work section. Additionally, we have conducted supplementary experiments (CharacterTrajectories and PhysioNet Sepsis Prediction) with CSODE-Adapt, following the experimental setup in the Neural CDE paper and maintaining similar network structures, parameter counts, and optimization methods. Overall, CSODE performs slightly worse than Neural CDE in irregular-observation experiments but better in other time-series-related tasks. We have also run Neural CDE on our main experiments. The corresponding results are as follows:
### CharacterTrajectories (Irregular Observations):
With 30%, 50%, and 70% data loss, CSODE-Adapt achieved Test Accuracies of 97.3%, 97.8%, and 96.8% respectively, outperforming ODE-RNN but slightly underperforming Neural CDE.
### PhysioNet Sepsis Prediction:
Without considering Observational Intensity (OI), CSODE achieved a test AUC of 0.868, surpassing both ODE-RNN and Neural CDE. When considering OI, we emulated the Neural CDE approach by training the control variable u in CSODE using OI observations; the test AUC reached 0.881, very close to Neural CDE and superior to ODE-RNN.
### Dynamic System Modeling:
Taking the Reaction-Diffusion model task as an example, Neural CDE performed with MSE 7.1e-3, MAE 0.60, and Chamfer Distance 0.945, very close to CSODE but slightly inferior to CSODE-Adapt.
## 3. Comparison with Latent ODE
CSODE, as a generalized extension of Neural ODE, can indeed be combined with the Latent ODE framework. In Preliminary Experiments of our original manuscript, inspired by the Neural ODE paper, we compared the original Neural ODE and CSODE for dynamical systems based on the Latent ODE framework.
However, in our main experiments, we found that Neural ODE itself can model long-sequence dynamical systems. To obtain more general conclusions, we chose to directly compare Neural ODE and its variants with CSODE, eliminating the influence of how well the other components of Latent ODE (such as encoders and decoders) perform.
## 4. Model Scalability in Section 6
CSODE enhances the scalability of Neural ODE, theoretically guaranteeing convergence while increasing parallel subnetworks. Section 6 was originally intended to observe whether increasing subnetworks could effectively maintain high performance.
Following your suggestion, we conducted supplementary experiments comparing CSODE, original Neural ODE, and Augmented Neural ODE when achieving the same parameter count by increasing width. Taking the Reaction-Diffusion task as an example, when the number of CSODE subnetworks is 2, increasing the width (128, 256, 512, 1024, 2048):
- CSODE losses (as shown in Figure 6 of the initial submission): 0.060, 0.056, 0.045, 0.037, 0.030
- Neural ODE losses: 0.083, 0.063, 0.058, 0.061, 0.063
- Augmented Neural ODE losses: 0.078, 0.061, 0.055, 0.057, 0.057
The results indicate that the improvement in the learning ability of these two models with increasing width is not as significant as CSODE. We added detailed experimental data and more details to the Appendix of the revised version.
## 5. Ability to Adapt to Different Spatial Scales
We apologize for any confusion caused by our previous explanation. We intended to provide an intuitive explanation. In practice, we found that CSODE learns more robustly when dealing with dynamical system datasets with different parameter settings, with a smaller standard deviation percentage in experimental results, compared to Neural ODE and Augmented Neural ODE.
For example, when we alter the spatial scale of the Reaction-Diffusion model task:
- In the original setting: The standard deviation percentages for Neural ODE, Augmented Neural ODE, and CSODE were approximately 5.3, 4.6, and 5.3 respectively.
- Expanding the spatial domain to 5 times of the original: The standard deviation percentages for Neural ODE, Augmented Neural ODE, and CSODE increased to 7.8, 7.1, and 5.5 respectively. At 10 times the original, the percentages were 8.8, 7.3, and 5.5.
This indicates that CSODE has advantages in adapting to different spatial scales. We included more complete information in the Appendix of the revised version.
## 6. Choice of Solver
We chose the Euler method mainly based on the following considerations:
- In practice, we found that more accurate solvers can indeed improve network performance to some extent, but significantly increase training and inference time.
- The Euler method is the most basic and intuitive integration method, which lowers the barrier to understanding and provides the most essential comparison.
In response to your suggestion, we have supplemented experiments using the Dopri5 solver. The results show that changing the solver does not affect the relative performance between models. For example, in the Reaction-Diffusion model task, NODE, ANODE, and CSODE-Adapt performed with MAE of 0.073, 0.063, and 0.033 respectively, consistent with the original conclusions. These supplementary results were added to the Appendix of the revision.
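The accuracy-versus-cost trade-off mentioned above can be illustrated on a toy scalar ODE. The sketch below uses classical fixed-step RK4 as a stand-in for a higher-order solver such as Dopri5 (which would require an external library); the equation and step count are illustrative only:

```python
import math

# Integrate dx/dt = -x, x(0) = 1 on [0, 1]; exact solution is e^{-1}.
def euler(f, x0, t1, n):
    h, x = t1 / n, x0
    for _ in range(n):
        x = x + h * f(x)
    return x

def rk4(f, x0, t1, n):
    # Classical 4th-order Runge-Kutta: ~4x the f-evaluations per step of Euler
    h, x = t1 / n, x0
    for _ in range(n):
        k1 = f(x)
        k2 = f(x + 0.5 * h * k1)
        k3 = f(x + 0.5 * h * k2)
        k4 = f(x + h * k3)
        x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

f = lambda x: -x
exact = math.exp(-1.0)
err_euler = abs(euler(f, 1.0, 1.0, 20) - exact)
err_rk4 = abs(rk4(f, 1.0, 1.0, 20) - exact)
```

At the same number of steps, the higher-order method is far more accurate but costs roughly four times as many vector-field evaluations, which is the trade-off behind choosing Euler as the common baseline solver.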
---
Rebuttal Comment 1.1:
Title: Score stays the same, confidence stays the same
Comment: Thank you to the authors for the detailed response. I have read through the other reviews and all responses. I am satisfied with this paper and my score remains the same.
I would ideally like to see the new figure 6, but there has been no pdf uploaded.
Please make sure to carry out the polishing of the writing, because it is very important. I appreciate we cannot see a revision, so I will trust the authors. Maybe it's possible to give some of the changes made to section 3 in a reply?
---
Reply to Comment 1.1.1:
Title: The Changes Made to Section 3 and Figure 6
Comment: Dear Reviewer,
Thank you very much for your detailed feedback and continued support. Your suggestions are crucial to improving the quality of our paper. We have made significant changes to Figure 6 and Section 3 based on your comments. The following are specific improvements:
## About New Figure 6
Thank you for your valuable feedback on Figure 6 and your efforts to improve the readability of our paper. We deeply appreciate your trust in our work and your commitment to helping us enhance its quality.
We sincerely apologize for the lack of clarity in our original presentation and the absence of the new detailed figure in our initial global response. We regret that our previous global response focused more on explaining the paper's approach, motivation, and additional experimental details, without including the PDF of the improved figure. Unfortunately, we are no longer able to edit the global rebuttal and upload a PDF.
We recognize the importance of clarity and precision in conveying the contribution. As such, we have made significant improvements to Figure 6:
1. We've reduced the number of compared models from five to three for better clarity.
2. The color scheme has been changed to use more distinguishable colors for each model.
3. In addition to colors, we now use different marker shapes for each model configuration.
4. The legend no longer obscures scatter points.
We've also revised the caption to be more explicit:
> "Figure 6: Performance comparison of CSODE models with varying numbers of sub-networks. The scatter plot illustrates the relationship between training and validation losses for three distinct sub-network configurations (e.g., 1, 3, and 5 sub-networks) at a fixed width of 512. Each configuration is represented by a unique color and marker shape. This visualization demonstrates how increasing the number of sub-networks affects the model's performance and generalization capability."
These changes are part of our ongoing effort to improve the clarity and accessibility of our work. We are grateful for your insights, which have helped us identify points for improvement. Your feedback is invaluable in our pursuit of presenting our research in the clearest and most understandable manner possible.
We appreciate your patience and the opportunity to refine our work. Thank you again for your thoughtful review and constructive suggestion.
## About New Section 3
We have significantly revised the theoretical results section to enhance its readability and intuition while maintaining rigor. Key changes include:
1. Structure and Intuition: We reorganized the chapter, focusing on core results and adding intuitive explanations. For example, we now explain how LMI conditions relate to the system's stability through the concept of "generalized energy" decreasing monotonically. We also provide more context for the role of compensating matrices in balancing linear and nonlinear terms.
2. Assumptions and CSODE Structure: We simplified the presentation of assumptions, clarifying how they differ from traditional Lipschitz continuity requirements. We have highlighted the flexibility that the CSODE framework provides in constructing Lyapunov functions. Particularly, we emphasize how Assumption 3 allows for more adaptable stability analysis, capturing the local behavior of the error $\xi$ rather than relying solely on global properties.
3. Technical Details and Activation Functions: Complex derivations were moved to appendices, improving the readability of the main text. We simplified the activation function discussion, retaining key points like the potential relaxation to "non-decreasing" functions. This expansion to include non-smooth functions like ReLU illustrates the broader applicability of our analysis.
These changes aim to make theoretical results more accessible while showcasing the CSODE framework's potential in analyzing complex nonlinear systems. We welcome your feedback on these points and are prepared for further refinements to ensure clarity and accuracy.
We will attempt to incorporate the revised Section 3 into the next official comment as a reply.
---
Rebuttal 2:
Comment: Dear Reviewer,
To demonstrate the more specific effects after adjustment, please allow us to provide the complete Theoretical Results Section for your reference. Due to length constraints, we have divided it into two parts, each presented in separate comment boxes.
(Part 1):
## Theoretical Results: Convergence Analysis
Consider the CSODEs given in Equation (1), where $f_j: \mathbb{R}^{k_j} \to \mathbb{R}^{k_j}$ are nonlinear activation functions. For them, an imposed condition on $f_j^i$ (the $i$-th element of the vector-valued $f_j$) is presented as follows:
**Assumption 1:** For any $i \in \{1,\dots, k_j\}$ and $j \in \{1,\dots,M\}$, $s f^i_j(s) >0$ for all $s \in \mathbb{R} \backslash \{ 0 \}$.
**Remark 1:** Assumption 1 applies to many activation functions, such as $\tanh$ and parametric ReLU. It selects activation functions that pass through the origin and lie in quadrants I and III. For more explanations, the reader is referred to Appendix B.1.
In this study, to analyze the convergence property of the NN (1), we first define the concept of *convergence*:
**Definition 1:** The model (1) is convergent if it admits a unique bounded solution for $t \in \mathbb{R}$ that is globally asymptotically stable (GAS).
In order to investigate the convergence, two properties have to be satisfied, that is, the boundedness and the GAS guarantees of the solution $x^{*}$ for Equation (1). In this respect, two assumptions are given as follows.
**Assumption 2:** Assume that the functions $f_j^i$ are continuous and strictly increasing for any $i \in \{1,\dots, k_j\}$ and $j \in \{1,\dots,M\}$.
Assumption 2 aligns with CSODE's structure, reflecting continuity and monotonicity of activation functions. This relates to model dynamics and is satisfied by most common activations.
In the analysis of convergence, one needs to study two models in the same form but with different initial conditions and their contracting properties. To that end, along with Equation (1), we consider the model
$\dot{y}(t)=A_{0}y(t)+\sum_{j=1}^{M}A_{j}f_{j}(W_jy(t))+g(u(t))$ with the same input but different initial conditions $y(0)\in\mathbb{R}^{n}$.
Let $\xi:=y-x$. Then the corresponding error system is
$$\dot{\xi}=A_{0}\xi+\sum_{j=1}^{M}A_{j}p_{j}(x,\xi),$$
where $p_{j}(x,\xi) =f_{j}(W_j(\xi+x))-f_{j}(W_jx)$. Note that for any fixed $x\in\mathbb{R}^{n}$, the functions $p_j$ in the variable
$\xi \in\mathbb{R}^{n}$ satisfy the properties formulated in Assumptions 1, 2. The following assumption is imposed for analyzing the contracting property of Equation (2).
**Assumption 3:** Assume that there exist positive semidefinite diagonal matrices $S_{0}^{j},S_{1}^{j},S_{2}^{j},S_{3}^{j,r},H_{0}^{j},H_{1}^{j},H_{2}^{j},H_{3}^{j,r} \left(j,r\in \{1,\dots,M\}\right)$ with appropriate dimensions such that
$$\begin{aligned}
p_{j}(x,\xi)^{\top} p_{j}(x,\xi) & \leq \xi^{\top} W_j^\top S_{0}^{j} W_j \xi+2\xi^{\top} W_j^\top S_{1}^{j}p_{j}(x,\xi) +2\xi^{\top} W_j^\top S_{2}^{j}f_{j}(W_j\xi) \\
& +2\sum_{r=1}^{M}p_{j}(x,\xi)^{\top} W_j^\top W_j S_{3}^{j,r} W_r^\top W_r f_{r}(W_r \xi)
\end{aligned}$$
and
$$\begin{aligned}
f_{j}(W_j \xi)^{\top}f_{j}(W_j \xi) & \leq \xi^{\top} W_j^\top H_{0}^{j} W_j\xi+2\xi^{\top} W_j^\top H_{1}^{j}p_{j}(x,\xi)+2\xi^{\top} W_j^\top H_{2}^{j}f_{j}(W_j \xi) \\
& + 2\sum_{r=1}^{M}p_{j}(x,\xi)^{\top} W_j^\top W_j H_{3}^{j,r} W_r^\top W_r f_{r}(W_r \xi)
\end{aligned}$$
for all $x,y\in\mathbb{R}^{n}$ and $\xi=y-x$.
Notice that Assumption 3 is at least more relaxed than Lipschitz continuity (see Appendix B.2 for an intuitive example of activation functions satisfying Assumption 3).
...
Title: Revised Section 3 (Part 1)
---
Rebuttal 3:
Title: Revised Section 3 (Part 2)
Comment: ...
(Part 2):
### Convergence Conditions
We are now ready to show the convergence conditions for the CSODEs:
**Theorem 1:** Let Assumptions 1-3 be satisfied. If there exist positive semidefinite symmetric matrices $P, \tilde{P}$; positive semidefinite diagonal matrices $\Lambda^j, \tilde{\Lambda}^j$ for $j = 1, \ldots, M$, $\Xi^s$ for $s = 0, \ldots, M$, $\Upsilon_{s,r}$ for $0 \leq s < r \leq M$, $\tilde{\Upsilon}_{j,j'}$ for $j, j' = 1, \ldots, M$, $\Gamma_j, \Omega_j$ for $j = 1, \ldots, M$, $\tilde{\Xi}^0$; positive definite symmetric matrix $\Phi$; and positive scalars $\gamma, \theta$ such that the linear matrix inequalities (LMI) in Appendix B.3 hold true. Then, a forward complete system (1) is convergent.
Proof in Appendix C.3. Note that the conditions imposed on $f_j^i$ in Assumption 2 can be relaxed to "non-decreasing", which enlarges the scope of admissible activation functions to include non-smooth ones such as ReLU; the resulting modifications to the formulation of Theorem 1 can be readily obtained, highlighting the CSODE framework's adaptability.
These LMI conditions ensure system convergence. From an energy perspective, they indicate that the error system's generalized energy (represented by the energy, or Lyapunov, function) is monotonically non-increasing, leading to convergence towards the equilibrium point at the origin. These conditions can be easily verified thanks to CSODE's structural characteristics and the highly adjustable elements of the LMIs.
The matrices, such as $\tilde{\Xi}^0$ and $\tilde{\Upsilon}_{j,j'}$, in the LMIs act as compensation terms balancing the effects of linear and nonlinear terms, ensuring the derivative
of the energy function $\tilde{V}$ remains non-positive. Properties of $f_j$ (Assumptions 1 and 2) provide facilitation in constructing these matrices. Assumption 3 allows for non-restrictive conditions on activation functions, avoiding strong global Lipschitz continuity assumptions and providing precise local asymptotic stability characterization.
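The contraction behavior that such conditions certify can be illustrated on a toy instance of the error system: two trajectories of $\dot{x}=A_0 x + A_1 \tanh(W_1 x)$ (with $W_1 = I$ and no input) started from different initial conditions, whose difference $\xi$ should shrink. The matrices below are illustrative choices with $A_0$ stable and $A_1$ small, not certificates obtained from the LMIs of Theorem 1:

```python
import math

def rhs(x):
    a0 = [[-2.0, 0.5], [-0.5, -2.0]]   # stable linear part A0 (symmetric part -2*I)
    a1 = [[0.3, 0.0], [0.0, 0.3]]      # small nonlinear gain A1, W1 = I
    fx = [math.tanh(x[0]), math.tanh(x[1])]
    return [a0[i][0] * x[0] + a0[i][1] * x[1] + a1[i][0] * fx[0] + a1[i][1] * fx[1]
            for i in range(2)]

def step(x, h=0.005):
    d = rhs(x)
    return [x[0] + h * d[0], x[1] + h * d[1]]

x, y = [1.0, -1.0], [-0.5, 2.0]
xi0 = math.hypot(y[0] - x[0], y[1] - x[1])    # initial error norm
for _ in range(2000):                          # explicit Euler up to t = 10
    x, y = step(x), step(y)
xi_T = math.hypot(y[0] - x[0], y[1] - x[1])    # error norm after contraction
```

Since tanh is 1-Lipschitz, the vector field here is contracting with rate roughly $-2 + 0.3$, so the error norm decays exponentially regardless of the two initial conditions, which is exactly the "generalized energy decreasing" picture described above.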
---
Rebuttal 4:
Comment: We greatly appreciate your thorough review and positive feedback. Thank you for your time and valuable insights, which have helped improve our manuscript. We're glad that our responses have addressed your concerns satisfactorily. | Summary: The authors present a new class of continuous-time neural networks, ControlSynth ODEs. This new class of ODEs are able to learn the dynamics of physical systems faster and more precisely. In addition the authors provide theoretical convergence guarantees for these new models, and demonstrate their effectiveness compared to traditional Neural ODEs.
Strengths: Although the analysis is difficult to follow, the theoretical convergence guarantees make these an attractive model.
In the benchmark experiments, adaptive CSODEs achieve superior performance compared to the other models.
Weaknesses: The explanation of the model is far too condensed and unclear.
Technical Quality: 2
Clarity: 1
Questions for Authors: It would be beneficial to outline what architectural changes are made from standard NODEs to CSODEs.
In addition, what hypotheses motivated these changes?
Why are they termed ControlSynth ODEs? The name is uninformative.
What are the inductive biases of these models?
Confidence: 2
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: The limitations of the method are unclear.
What are the inductive biases of these models?
How does sensitivity to hyperparameters emanate during training?
Why is this specific architecture more difficult to train than the alternatives?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
Thank you very much for your time and effort. We highly value your invaluable suggestions and sincerely apologize for the lack of clarity in certain parts of the manuscript. Please allow us to further elaborate and analyze the key issues you have raised.
**Regarding the Concept and Motivation for Introducing Control Terms in CSODE:**
In the field of Control Theory, control terms are typically used to regulate system states to achieve desired performance. Inspired by this, we introduced an additional control term to the traditional NODE to enhance the model’s ability to handle complex multi-scale dynamics. Specifically, the control term is generated by an independent sub-network, which can adaptively regulate the evolution of the system state. This design endows CSODE with greater flexibility and expressive power when learning and predicting the behavior of complex physical systems, such as those described by partial differential equations. Therefore, we named the model CSODE to highlight this inspiration. We have provided a more in-depth explanation of the motivation and mechanism for introducing the control term in the revised manuscript's introduction and methods sections to underscore its importance and novelty.
**Regarding the Inductive Bias Analysis of CSODE:**
Inductive bias reflects the constraints or preferences of a model in function space. NODE, based on a simple but general differential equation form, has a strong inductive bias, which limits its ability to fit complex nonlinear dynamics to some extent. CSODE, by introducing control terms, expands the function space, enabling the model to learn a broader range of dynamics. This smaller inductive bias gives CSODE stronger expressive power, making it advantageous in learning and predicting complex system behaviors. By introducing Lyapunov functions and providing verifiable linear matrix inequality conditions, we established sufficient conditions for the convergence of CSODE states. This stability theory-based inductive bias ensures that the solutions generated by CSODE are more in line with the characteristics of physical systems. We have added a new section in the Appendix of the revised manuscript to specifically discuss the inductive bias of CSODE and highlight its advantages through a comparison with NODE.
**Regarding the Architectural Evolution from NODE to CSODE:**
CSODE made two main improvements on the basis of NODE: first, the introduction of an independent control term, and second, the allowance for different, more complex nonlinear terms. Specifically, we used a parallel neural network to generate control inputs, creating two pathways for information transmission in CSODE: one responsible for the active evolution of states and the other for their adaptive regulation. This dual-path structure enables CSODE to handle multi-scale effects. Additionally, CSODE expanded the form of the nonlinear functions, allowing for more general nonlinear combinations; this creates **a more generalized and extensible structure** for Neural ODEs, which can flexibly combine various models to parameterize the internal dynamics, thereby enhancing the model's capacity and flexibility. In the revised manuscript, we have added a schematic diagram to visually compare NODE and CSODE and provide a detailed explanation of the motivation and significance of each improvement in the text to highlight the novelty and superiority of CSODE.
**Regarding Hyperparameter Sensitivity:**
We have conducted supplementary experiments and have included more hyperparameter sensitivity analysis results in the Appendix of the revised version.
Our analysis focused on the impact of learning rate and batch size on CSODE performance. Results show that CSODE exhibits strong robustness to changes in these two hyperparameters. Specifically, within the learning rate range of [0.0005, 0.005] and batch size range of [32, 256], CSODE's performance fluctuations were controlled within 5\%.
It is noteworthy that we observed some interesting phenomena. For instance, when the batch size is fixed at 128, CSODE's performance is most stable when the learning rate varies within [0.0005, 0.005] with a standard deviation of only 1.2\%.
However, because CSODE has a deeper structure than the original Neural ODE, appropriate learning-rate selection is crucial for keeping gradients within a reasonable range. In our main experiments, learning rates around the 1e-2 level were prone to gradient vanishing or explosion.
Regarding the impact of network depth and width, we conducted a systematic exploration in Chapter 6: Model Scaling Experiment. Results show that increasing CSODE's network width and the number of sub-networks steadily improves model performance. Specifically, when network width increased from 128 to 2048 and the number of sub-networks increased from 1 to 5, CSODE's performance on the test set steadily improved, with an average increase of 23\%.
We believe that by supplementing these detailed experimental results and discussions in the revised manuscript, we can help readers gain a more comprehensive understanding of CSODE's hyperparameter characteristics, providing valuable insights for future applications and improvements.
Once again, we sincerely thank you for your valuable comments. These suggestions not only help improve the quality of the paper but also provide excellent inspiration for our subsequent research. In the revised manuscript, we carefully address every detail to improve the clarity of our work and accurately convey the innovations and contributions to the readers. If you have any further suggestions or questions, we are more than willing to listen and respond positively.
---
Rebuttal Comment 1.1:
Comment: I have read the response and I have updated my score.
---
Rebuttal 2:
Comment: We sincerely hope that our responses have addressed your concerns and questions. If you require any further clarification or have additional inquiries, we would be more than happy to provide a more detailed explanation. Your feedback is invaluable to us, and we are committed to ensuring that all aspects of our work are thoroughly understood. Please don't hesitate to reach out if you need any additional information or if there's anything else we can assist you with regarding our manuscript.
---
Rebuttal 3:
Comment: We are deeply grateful for your review and feedback. Your comments and the time you've invested have been instrumental in enhancing the quality of our manuscript. We sincerely thank you for your valuable contributions to this work. We also greatly appreciate your recognition of our efforts. | Summary: The paper introduces ControlSynth Neural ODEs (CSODEs), a novel approach to modeling dynamical systems with neural ordinary differential equations (NODEs). The proposed models constrain the system to be a convergent one. The CSODE framework incorporates an additional control term to ensure the existence of solutions. The authors present theoretical guarantees for convergence and demonstrate the superior performance of CSODEs compared to other NODE variants in their experiments.
Strengths: * **Experiments**: The authors validate their model on learning dynamical systems. This is a welcome change from several papers testing neural ODEs on problems like image classification and such, where neural ODEs tend not to be the best solution. Hopefully more papers do the same!
* **Technical Rigor**: The paper provides detailed theoretical analysis and proofs for the convergence of CSODEs.
Weaknesses: Overall the paper looks solid. One question regarding the training is pointed out in the next section.
Technical Quality: 3
Clarity: 3
Questions for Authors: In the preliminary experiments section, neural ODEs seem not to converge properly. Was an alternate optimizer like L-BFGS or BFGS tried? In most cases where vanilla neural ODEs tend not to converge, it is more often than not an optimizer issue rather than a model issue.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
We sincerely appreciate your positive evaluation and valuable feedback on our work. Your insights are crucial in enhancing the quality and rigor of our research. We are particularly grateful for your observation regarding the convergence issues of Neural ODEs, which prompted us to conduct in-depth supplementary studies.
We agree with your assessment that applying Neural ODEs to understand physical models is indeed a significant and promising direction. As you pointed out, while previous works often focused on tasks like image classification where Neural ODEs may not be the optimal solution, we believe that leveraging Neural ODEs for modeling dynamic systems is not only more natural but also holds profound implications. The continuous-time representation of Neural ODEs aligns well with many physical processes, and the ODE formulation often allows for better interpretability of learned dynamics, bridging the gap between data-driven and physics-based modeling.
## Addressing the Optimizer Issue
Our initial choice of the Adam optimizer was based on its versatility, aiming to provide a universally applicable baseline. In response to your suggestion about the optimizer, we have conducted comprehensive additional experiments:
### 1. Optimizer Comparison
Following your recommendation, we expanded our experiments to include the L-BFGS optimizer in addition to the original Adam optimizer. The results indeed show that L-BFGS significantly improves the convergence performance of Neural ODEs and their variants.
### 2. Experimental Setup
We performed comparative experiments on Neural ODE, Augmented ODE, and our proposed CSODE across all main experimental tasks.
### 3. Key Findings
Using the Reaction-Diffusion model task as an example, the MSE results after employing the L-BFGS optimizer are given as follows:
| Model | Original MSE | L-BFGS MSE | Improvement |
| --- | --- | --- | --- |
| Neural ODE | 1.134e-2 | 9.536e-3 | 15.9% |
| Augmented ODE | 1.095e-2 | 9.153e-3 | 16.4% |
| CSODE | 6.365e-3 | 5.321e-3 | 16.4% |
Similar improvements were observed in other tasks.
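To illustrate the first-order-versus-quasi-Newton comparison concretely, here is a minimal, self-contained sketch. It is a toy one-parameter regression standing in for ODE-parameter fitting (not our actual training pipeline): we recover `theta` in dx/dt ≈ -theta·x from noisy one-step data, once with plain gradient descent as a first-order baseline and once with SciPy's L-BFGS-B.

```python
import numpy as np
from scipy.optimize import minimize

# Toy data: dx/dt = -theta_true * x plus small observation noise
rng = np.random.default_rng(1)
theta_true = 0.7
x = rng.uniform(0.5, 2.0, size=200)
dxdt = -theta_true * x + 0.01 * rng.standard_normal(200)

def mse(theta):
    # Mean-squared error of the one-parameter dynamics model
    return np.mean((dxdt + theta[0] * x) ** 2)

# First-order baseline: plain gradient descent with a fixed step size
theta_gd = np.array([0.0])
lr = 0.05
for _ in range(200):
    grad = np.mean(2 * (dxdt + theta_gd[0] * x) * x)
    theta_gd = theta_gd - lr * grad

# Quasi-Newton: L-BFGS-B via SciPy
res = minimize(mse, x0=np.array([0.0]), method="L-BFGS-B")
theta_lbfgs = res.x
```

On this convex toy problem both optimizers reach the same minimizer; the point of the rebuttal experiments is that on the nonconvex Neural ODE losses, the quasi-Newton curvature information of L-BFGS helped convergence noticeably more.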
### 4. Result Analysis
Although all models benefited from the use of L-BFGS, CSODE maintained its advantage over other methods. This not only validates your insight but also further confirms the effectiveness and robustness of CSODE.
The results suggest that the quasi-Newton characteristics of L-BFGS may be more suitable for addressing the gradient flow issues in Neural ODEs, and L-BFGS might be more effective in avoiding local optima compared to Adam in training Neural ODEs.
### 5. Revision Details
Based on these new findings, we have:
- Briefly discuss the impact of optimizer choice on model performance in the main text;
- Provide complete supplementary experimental results and detailed analysis in the Appendix;
- Add a discussion about the importance of optimization strategies for Neural ODE-type models.
## Regarding the Concern that Neural ODEs Seem Not to Converge Properly
We acknowledge that the curves presented in our paper indeed show some minor fluctuations towards the end, which may be due to insufficient epoch settings in our original setup. This observation reminds us to pay more attention to the stability of experiments and the presentation of results in future work.
To validate our results, we conducted additional experiments by extending the training epochs to 1.5 times the original number while continuing to use the original Adam optimizer. The results show that the Neural ODE model ultimately achieved training and test losses within 10% of those obtained under the original epoch settings. Moreover, the performance fluctuations in the additional epochs were contained within 5%.
These new experimental results indicate that our original experimental results are representative and the model is in a relatively converged state. Although the fluctuations at the end of the original curves might give the impression of incomplete convergence, the extended training time experiments confirm that the model indeed reached a stable performance level.
## Conclusion
We sincerely thank you for your suggestion, which has not only helped us improve our current research but also provided valuable insights for future Neural ODE research. We believe these additions will significantly enhance the depth and impact of the paper.
If you have any further questions, or suggestions, or need clarification, we welcome your feedback and look forward to an in-depth discussion.
Thank you again for your valuable time and insightful opinion.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for running additional experiments. My main concern with the paper was indeed the optimization issue and it seems that even after "better convergence" of the Neural ODEs, ControlSynth NODEs maintain an edge. I believe this is a valuable contribution and I will revise my score to 8!
---
Rebuttal 2:
Comment: Dear Reviewer,
Thank you for your positive feedback and for recognizing the value of our contribution! We are grateful for your time and effort in reviewing our work and are pleased to hear that the additional experiments addressed your concerns about optimization. Your support and revised score are greatly appreciated. | Summary: In this paper, a method called ControlSynth Neural ODEs is proposed. This method is essentially defined as a neural ordinary differential equation with a control input. In particular, the authors focus on the convergence, of which definition requires the existence of a solution and also the asymptotic stability of the solution. In this paper, theoretical conditions to satisfy the convergence are presented. Some numerical experiments are also provided.
Strengths: In my opinion, the main contribution is the theoretical analysis of convergence. In fact, stability is very important for the use of neural ordinary differential equations, which are often unstable when used for modeling.
Weaknesses: In this paper, conditions for convergence are presented. Thus, it is expected that the control input and the neural ordinary differential equations should be trained so that these conditions are satisfied. However, I could not find any description of how to achieve this in the paper. In fact, the conditions for the convergence assume the existence of several matrices, and it seems difficult to design an optimization algorithm for training while satisfying these conditions.
Technical Quality: 3
Clarity: 3
Questions for Authors: As it is stated that the convergence is guaranteed in this paper, I imagine that there must be some tricks in designing the control input and the learning algorithm so that the conditions for the convergence are always satisfied. How are they designed?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: There seems to be no problem.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
Thank you for your thoughtful reviews of our work and for recognizing our study on convergence analysis. We sincerely appreciate your review comments and have carefully analyzed and responded to them, conducting a relevant experiment:
1. **Regarding the existence of special optimization algorithms for training:**
We acknowledge your concerns about satisfying convergence conditions in training algorithms. Our paper's main focus is proposing the CSODE models based on theoretical convergence analysis, rather than designing new learning algorithms. For fair comparisons, we used the widely adopted Adam optimizer with MSE loss. This uniform approach highlights the strengths and weaknesses of different model structures, avoiding bias from varied optimization algorithms. We believe this comparison method is fair and reasonable.
We agree designing new learning algorithms is a promising future research direction, but it is beyond our current paper's scope. Our focus is on model theory and structure, instead of algorithm improvement. In the Conclusion, we suggest this as a future research topic.
2. **Regarding the design of control inputs:**
In our work, we used the state variable $x_0$ as a placeholder for $u_0$ to ensure fair comparison with baselines, as original neural ODEs and variants lack control inputs. We believe this is a reasonable approach. In future works, more refined methods can be considered under the existence of control inputs in the CSODE framework.
From a theoretical perspective, the design or the form of control input $u$ does not affect convergence property. In the error dynamics equation (2), the term $u$ is eliminated. Theorem 1 also does not restrict $u$. The only restriction for $u$ is given after Equation (1), not in Theorem 1. In the current version, we do not include the design of inputs as our main attention is on the convergence guarantees under the most basic form.
3. **Regarding the way of applying the convergence conditions:**
First, Theorem 1 directly relates model parameters to convergence through matrix inequalities. Inserting only the trained parameters into those inequalities verifies convergence without checking assumptions or matrix inequalities in the training processes. This provides a more concrete, operational convergence criterion. Note that the three assumptions only have a claim on the activation functions of CSODE, but convergence is assured as long as the matrix inequalities hold (solutions of the matrix inequalities are only relevant to the weight matrices $A_0, A_1, \dots, A_M, W_1, \dots, W_M$).
Secondly, we discuss how each assumption naturally holds:
**Assumption 1:**
Assumption 1 selects the activation functions passing through the origin and the quadrants I and III. Remark 1 actually states all cases of the boundedness of the activation functions: none of, part of, or all of the nonlinearities $f_s^i$ are bounded. Also, both the unboundedness and boundedness of $\lim_{\nu\rightarrow\pm\infty} f_s^i(\nu)$ may lead to the unboundedness of $\lim_{\nu\rightarrow\pm\infty}\smallint_{0}^{\nu}f^{i}_{s}(r)dr$.
**Assumption 2:**
Assumption 2 aligns with CSODE's structure, reflecting continuity and monotonicity of activation functions. This relates to model dynamics and is satisfied by most common activations.
**Assumption 3:**
We describe the rationality of Assumption 3 from three perspectives:
- The positive semi-definite diagonal matrices in Assumption 3 are mathematical solutions used to derive system stability conditions and do not need to appear explicitly in the model design.
- The inequalities in Assumption 3 resemble local Lipschitz continuity or local quadratic boundedness. In practice, they can be verified based on chosen $f_j$ (e.g., ReLU, Sigmoid, $\tanh$). For the $\tanh$ function,
$$
f_j(x) = \frac{e^x - e^{-x}}{e^x + e^{-x}}, \quad |f_j(x)| < 1, \quad |f_j(x)-f_j(y)| \leq |x-y|.
$$
It is Lipschitz continuous with constant $1$ and satisfies:
$$
p_j(x,\xi)^\top p_j(x,\xi) \leq \xi^\top W_j^\top W_j \xi, \quad f_j(W_j\xi)^\top f_j(W_j\xi) \leq \xi^\top W_j^\top W_j \xi
$$
In this case, we can choose:
$$
S_0^j = H_0^j = I, \quad S_1^j = S_2^j = S_3^{j,r} = H_1^j = H_2^j = H_3^{j,r} = 0.
$$
If $S_1^j, S_2^j, S_3^{j,r}, H_1^j, H_2^j, H_3^{j,r} >0$, the restrictions are relaxed, making Assumption 3 less strict than Lipschitz continuity.
- To address your concerns about the actual existence of matrices in **Assumption 3**, we designed **a supplementary experiment** to validate the trained CSODE models, involving three dynamical system time series prediction tasks in the main experiments.
Experimental Steps: Retrained CSODE models (3 FC layers, 128 units, Softplus; Adam optimizer, lr=1e-3, batch=64, 500 epochs, MSE loss). We randomly selected 500 sample pairs $(x^i, y^i)$ from each task's test set, calculated the difference $\xi^i = y^i - f(x^i)$ where $f(\cdot)$ is the trained CSODE model, and used YALMIP to define optimization variables and constraints. We constructed 500 matrix inequalities $\xi^{iT} P \xi^i \preceq 0$ and added the semi-definite constraint $P \succeq 0$. Using the Sedumi solver (precision 1e-6, max 5000 iterations), we solved the optimization problem and recorded the solving time and iteration count. We then performed individual validation for each sample pair to calculate the satisfiability ratio of Assumption 3.
Experimental Results (average of 3 independent experiments): For the Hindmarsh-Rose model, solving time was 15.3 minutes with 2,684 iterations and 99.8\% (499/500) satisfiability. The Reaction-Diffusion System task took 18.7 minutes with 3,217 iterations and 99.6\% (498/500) satisfiability. The Shallow Water Equations required 24.2 minutes with 4,105 iterations and 99.4\% (497/500) satisfiability. The solver found solutions satisfying Assumption 3 quickly, with >99\% satisfiability across tasks, supporting our theoretical assumptions for trained models.
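As a lightweight numerical companion to the tanh example above (this is only an illustrative random-sample check, not a substitute for the YALMIP/Sedumi LMI verification in the supplementary experiment), the following numpy sketch checks the Lipschitz inequality $|f_j(x)-f_j(y)| \leq |x-y|$ and the quadratic bound $f_j(W_j\xi)^\top f_j(W_j\xi) \leq \xi^\top W_j^\top W_j \xi$ for $f_j = \tanh$. The matrix sizes are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)

# 1) Lipschitz continuity of tanh with constant 1:
#    |tanh(x) - tanh(y)| <= |x - y| on random scalar pairs
xs, ys = rng.standard_normal(1000), rng.standard_normal(1000)
lip_ok = np.all(np.abs(np.tanh(xs) - np.tanh(ys)) <= np.abs(xs - ys) + 1e-12)

# 2) Quadratic bound from Assumption 3 (tanh case):
#    f(W xi)^T f(W xi) <= xi^T W^T W xi, since |tanh(v)| <= |v| elementwise
W = rng.standard_normal((4, 6))
quad_ok = True
for _ in range(1000):
    xi = rng.standard_normal(6)
    f = np.tanh(W @ xi)
    quad_ok &= f @ f <= xi @ (W.T @ W) @ xi + 1e-9
```

Both bounds hold identically for tanh, which is why the choice $S_0^j = H_0^j = I$ with the remaining matrices zero satisfies Assumption 3 in this case.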
---
Rebuttal 2:
Comment: We sincerely hope that our responses have addressed your concerns and questions. If you require any further clarification or have additional inquiries, we would be more than happy to provide a more detailed explanation. Your feedback is invaluable to us, and we are committed to ensuring that all aspects of our work are thoroughly understood. Please don't hesitate to reach out if you need any additional information or if there's anything else we can assist you with regarding our manuscript.
---
Rebuttal Comment 2.1:
Comment: Thank you very much for the detailed reply. If I understand it correctly, the conditions in Theorem 1 are checked numerically after training, and the trained models should be retrained if the conditions are not satisfied. Is this understanding correct?
---
Reply to Comment 2.1.1:
Comment: Dear Reviewer,
We are deeply grateful for your insightful question. Your question not only demonstrates your meticulous review of our research, but also provides us with a great opportunity to clarify and deepen our perspectives.
We would like to emphasize that Theorem 1 and its conditions are not intended to be checked numerically after training, nor do we propose retraining if the conditions are not satisfied. This is because such a process was never the motivation behind our model training design. While we do not recommend it as a standard practice, we acknowledge that in certain scenarios, incorporating such examinations, followed by appropriate sub-network additions and optimizer modifications, could potentially serve as a prescription. However, this was not the focus of our current study.
It's worth noting that such sufficient conditions are very common in the control theory community and are frequently used to analyze system stability. Our approach was indeed inspired by these concepts. Please allow us to elaborate further on our approach:
1. **Relationship between Theoretical Foundation and Practical Application:**
Theorem 1 indeed provides the theoretical basis for CSODE's convergence, but this does not mean that we expect or require strict verification of these conditions during actual training. This theorem serves more to prove that the CSODE structure has the potential to reach a convergent state theoretically.
2. **Structural Advantages of CSODE:**
The primary intention of CSODE's design was to provide greater flexibility and a broader parameter search space for the model through its unique structure (including additional control terms, sub-networks and matrices). This structure allows the model to naturally find convergent solutions more easily during training, without explicitly enforcing the conditions in the theorem.
3. **Relaxation of Conditions:**
When deriving Theorem 1, we deliberately designed relatively relaxed conditions. The purpose was to make these conditions more easily satisfied naturally in regular learning processes, rather than serving as strict constraints.
4. **Experimental Validation:**
Our experimental results strongly support our theoretical insights. The results show that CSODE indeed outperforms other models in terms of convergence and performance, validating the practical value of our theoretical analysis.
5. **Clarification of the Term "Guaranteed Convergence":**
We acknowledge that using the term "Guaranteed Convergence" may have caused misunderstanding, and we apologize for this. More accurately, this concept is more like providing a theoretical "safety net" or "insurance policy" for the model's convergence, rather than a guarantee that needs to be strictly checked after training.
6. **Bridge between Theory and Practice:**
The true value of Theorem 1 lies in providing a theoretical explanation for CSODE's superior performance. It helps us understand why CSODE performs well in practice, rather than serving as a constraint or checkpoint during the training process.
7. **Clarification of Research Motivation:**
Our main motivation was to design a model that is structurally more conducive to reaching a convergent system. Through the design of CSODE, we hoped to create a model that naturally tends towards convergence, rather than one that requires external enforcement or frequent checks to ensure convergence.
8. **Future Research Directions:**
Your question inspires us to consider how we can more closely integrate these theoretical insights into actual training processes in future research, possibly through some soft constraints or regularization techniques.
In summary, Theorem 1 should be viewed as the theoretical cornerstone of CSODE's performance, not as a practical constraint in the training process. While it could potentially be utilized as a constraint, our current work does not recommend this approach so far.
Once again, thank you for your valuable feedback, which helps us communicate our research ideas more clearly. If you have any further questions or need additional clarification, we are more than happy to continue the discussion. | Rebuttal 1:
Rebuttal: Dear Program Chairs and Reviewers,
We sincerely thank you for your thorough review of our paper. We have carefully considered each comment and conducted extensive supplementary experiments and analyses. Below are our responses to the main concerns raised:
## 1. CSODE as a Generalized Extension of Neural ODE
CSODE is not only an improvement on Neural ODE but also takes a more general form. Our model has the following characteristics:
1. **Universality**: CSODE can be viewed as a superset of Neural ODE. When the control term is zero, CSODE degenerates to the standard Neural ODE. This design allows CSODE to encompass all functionalities of Neural ODE while providing greater modeling flexibility.
2. **Flexibility**: By adjusting the structure and strength of the control term, CSODE can smoothly transition between Neural ODE and more complex dynamical systems. This flexibility enables CSODE to adapt to a wider range of important domains.
3. **Compatibility**: CSODE maintains the same basic structure as Neural ODE, allowing it to be seamlessly integrated into existing Neural ODE-based frameworks, such as Latent ODE.
## 2. Innovations and Contributions of CSODE
Based on the above generality, the main innovations of CSODE include:
1. Introduction of an independent control term, enhancing the model's ability to handle complex multiscale dynamics. This design allows CSODE to adaptively regulate the evolution process of system states.
2. Extension of nonlinear function forms, improving the model's expressiveness and flexibility. CSODE allows for more general nonlinear combinations, enabling it to capture more complex dynamical features.
3. Establishment of a convergence analysis framework based on Lyapunov stability theory, providing theoretical guarantees for model convergence. This analysis not only applies to CSODE but also provides a methodological basis for studying more general forms of Neural ODE.
## 3. A New Paradigm for Understanding Physical Models
CSODE provides a powerful tool for understanding and modeling complex physical systems:
1. The continuous-time representation naturally aligns with physical processes, allowing CSODE to more accurately capture the dynamic characteristics of systems.
2. The introduction of the control term enables CSODE to simulate physical systems with external inputs or internal regulation mechanisms, which is challenging to achieve in standard Neural ODEs.
3. The ODE form of CSODE facilitates the interpretation of learned dynamics, bridging the gap between data-driven modeling and physical modeling.
## 4. Generality and Scalability
CSODE provides a highly general and scalable Neural ODE framework:
1. By adjusting the complexity of the control term, CSODE can smoothly transition between simple and complex models, adapting to problems of varying complexity.
2. CSODE's parallel subnetwork structure allows for increasing model capacity by adding subnetworks while maintaining convergence.
3. Our experiments show that as the number of subnetworks and network width increase, CSODE's performance steadily improves, demonstrating excellent scalability.
## 5. Natural Integration of Convergence Conditions with Model Structure
CSODE's convergence conditions are mathematically closely connected to the model structure:
1. Our theoretical analysis draws on Lyapunov stability theory, constructing sufficient conditions for convergence.
2. These conditions not only provide theoretical guarantees for the model but also guide model design, ensuring that the solution generated by CSODE better conforms to the state of physical systems.
3. The establishment of the convergence analysis framework provides a theoretical foundation for studying more general forms of Neural ODE.
## 6. Summary of Supplementary Experiments
1. **Optimizer Experiments**: We compared the performance of Adam and L-BFGS optimizers. Results show that L-BFGS can significantly improve the convergence performance of Neural ODE and its variants, but CSODE maintains its advantage.
2. **Comparison with Neural CDE and ODE-RNN**: In the irregular-observation experiments, CSODE performed slightly worse than Neural CDE but performed better in other time-series-related tasks.
3. **Model Scalability Experiments**: By increasing network width, we compared the performance of CSODE, original Neural ODE, and Augmented Neural ODE with the same parameter count. Results show that CSODE's learning ability improves more significantly with increasing width.
4. **Ability to Adapt to Different Spatial Scales**: Experiments show that when changing the spatial scale of dynamical systems, CSODE demonstrates stronger robustness, with a smaller standard deviation percentage in experimental results.
5. **Solver Selection**: We supplemented experiments using the Dopri5 solver. Results indicate that changing the solver does not affect the relative performance between models.
These supplementary experiments further validate the superiority of CSODE and the correctness of our theoretical analysis.
Through these innovations and improvements, CSODE not only extends the capabilities of Neural ODE but also provides new insights into the design and analysis of continuous-time deep learning models. We believe that CSODE, as a more general and flexible framework, will bring new opportunities and challenges to fields such as dynamical system modeling and time series analysis.
Once again, we appreciate your valuable comments. We have comprehensively revised and improved the paper based on your feedback. If you have any further questions, we welcome continued discussion and exchange. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Learning Linear Causal Representations from General Environments: Identifiability and Intrinsic Ambiguity | Accept (spotlight) | Summary: This paper considers the task of learning linear causal representation with data collected from general environments. Authors show that there exists a surrounded-node ambiguity (SNA) which is basically unavoidable in their setting. On the other hand, identification up to SNA is possible under mild conditions in their considered setting. An algorithm, LiNGCReL, is further proposed to achieve such identifiability guarantee.
Strengths: - Learning linear causal representation using soft interventions seems to be new.
- The surrounded-node ambiguity (SNA) identifiability is new.
Weaknesses: - The placed assumptions do not seem to be weaker, compared with existing ones.
- Many places/claims are vague/incorrect, and presentation is poor.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. what does "general environments" mean? I don't think it is a widely used concept, so perhaps describe it more explicitly and accurately.
2. Definition 2: this definition is questionable, or at least inaccurate.
- Line 105: here "$i \in S$" and then "for all $i\in [d]$"? what is $S$ and what is the difference between $S$ and $[d]$?
- Further, when you write $<=>$ , what does $i\notin S$ indicate?
3. Similarly in Def. 4, line 121: it is difficult to interpret what you mean by <=>. Can you give a complete sentence of the condition?
4. More importantly, I find several motivations/claims are not convincing/accurate, which is particularly critical to a theory-heavy paper:
1) a motivation of considering soft interventions is that, "the latent variables are unknown and need to be learned from data, it is unclear how to perform interventions that only affect one variable.", but how do soft interventions avoid this concern?
2) lines 123-124: "if there exists some i ∈ surG(j), then ambiguities may arise for the causal variable at node j, since any effect of j on any of its child k can also be interpreted as an effect of i." in pearl's book or the topic of mediation analysis, you can define clearly the effect of $j$ and the effect of $i$ on child $k$ , and why would I interpret the effect of $j$ as that of $i$? That is not sound.
3) I also have a question with the example with three causal variables in Appendix E, which aims to show the difference between hard and soft interventions. It seems that one can distinguish the two even with soft interventions, e.g., by setting $z_3=z_2+\epsilon_3$, $v_3=v_2+\epsilon_3$; then $z_2$ and $z_3$ (or $x_2, x_3$) would still be dependent, while on the right, $v_2$ and $v_3-v_2$ (or $x_2, x_3$) are independent.
4) lines 139-142: "in contrast with existing literature on single-node interventions, we impose no similarity constraints on the environments.": however, assumptions 4 and 5 seem to assume the environments are sufficiently diverse, which are not weaker.
I will stop here, since the presentation and rigor issues have led to my decision on the submitted version. While I tend to believe the paper may contribute some interesting ideas and results, at least a major revision is required.
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: See above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for providing insightful feedback and suggestions. Below are our responses to the reviewer’s questions and concerns about our paper.
**Q1: What does "general environments" mean? I don't think it is a widely used concept, so perhaps describe it more explicitly and accurately.**
As the reviewer notices, the “general environments” setting is relatively new in the literature. As a result, we include discussions of this concept in several parts of our paper. The most notable one might be at the beginning of Section 4, where we wrote: “In this section, we consider learning causal models from general environments. Specifically, we assume that the environments share the same causal graph, but the dependencies between connected nodes (latent variables) are completely unknown, and, in contrast with existing literature on single-node interventions, we impose *no* similarity constraints on the environments.” The motivation for considering such a setting is discussed in Section 1, lines 41-51. We thank the reviewer for pointing this out, and we will improve the exposition by moving the definition from Section 4 to the introduction so that the reader has a more precise understanding of the term earlier on.
**Q2: Definition 2: this definition is questionable, or at least inaccurate. Line 105: here "$i\in S$" and then "for all $i\in[d]$"? What is $S$ and what is the difference between $S$ and $[d]$? Further, when you write $\Leftrightarrow$, what does $i\notin S$ indicate?**
Thank you for pointing out potential sources of confusion and readability issues. Below we clarify the accuracy of the statements and the ways in which we plan to address the readability issues.
"i\in S" is part of the sentence “......a subset of latent variables $z_i,i\in S$”, so it describes for which zi’s we are defining the soft intervention. On the other hand, "if for all $i\in[d]$" belongs to the sentence that follows: if $\forall i\in[d]$, $\forall E_1, E_2 \in \hat{\mathfrak{E}}, E_1 \neq E_2$, we have $p_i^{E_1}=p_i^{E_2} \Leftrightarrow i\notin S$. So these two $i$’s are essentially the subscripts of different objects: latent variables $z_i$ and distribution $p_i$, not contradictory statements or typos. We will use different subscripts in the two sentences for improved readability in a revised version.
The notation $\Leftrightarrow$ stands for “if and only if”. We will replace the arrow with the if and only if text to make it clearer in a revised version.
So, to summarize, Definition 2 can be rephrased as follows: “We say that a collection of environments $\hat{\mathfrak{E}}$ is a set of (soft) interventions on a subset of latent variables $z_i, i\in S$ if the following statements hold for any $i\in[d]$: $\forall E_1, E_2 \in \hat{\mathfrak{E}}, E_1 \neq E_2$, we have $p_i^{E_1}=p_i^{E_2}$ if and only if $i\notin S$.”
**Q3: Similarly in Def. 4, line 121: it is difficult to interpret what you mean by <=>. Can you give a complete sentence of the condition?**
Similar to Q2, we use the mathematical convention that <=> means “if and only if”. The sentence reads: “For all $i,j\in[d]$, $i\in pa_{G}\left(j\right)$ if and only if $\pi(i)\in pa_{G}\left(\pi(j)\right)$.” We will replace the arrow with the "if and only if" text for improved readability.
**Q4.1: a motivation of considering soft interventions is that, ”the latent variables are unknown and need to be learned from data, it is unclear how to perform interventions that only affect one variable. “, but how do soft interventions avoid this concern?**
We are afraid that the reviewer has mixed up the following two settings: one is the “general environments” setting, which is the main focus of our paper (Theorems 1 and 3 and the LiNGCReL algorithm); the other is the “soft intervention” setting, for which we only include a negative result (Theorem 2) to highlight its difference from the general-environments setting. In fact, soft interventions exactly inherit the limitation that the reviewer quoted, and this is the main motivation for us to consider general environments, which do not have this limitation. We included the soft-intervention result because it only makes the negative result stronger: the impossibility result obviously holds for general environments too, but proving it already for soft interventions strengthens it. All of our positive and constructive results are for general environments, which exactly bypass the requirement that the reviewer describes as a problem.
**Q4.2: lines 123-124: "if there exists some i ∈ surG(j), then ambiguities may arise for the causal variable at node j, since any effect of j on any of its child k can also be interpreted as an effect of i." in pearl's book or the topic of mediation analysis, you can define clearly the effect of 𝑗 and the effect of 𝑖 on child 𝑘, and why would I interpret the effect of 𝑗 as that of 𝑖? That is not sound.**
We thank the reviewer for pointing to the references. Note that this sentence is only an intuitive description and serves as a supplement to Definition 3, which is mathematically rigorous.
In this section, the word “interpret” is used in the following sense: from the learner’s perspective, the task is to find a suitable data model that matches the data. Intuitively, there would be two indistinguishable but non-equivalent data-generating processes; in other words, there are two distinct causal mechanisms that match the observed data. So one can of course define these theoretical causal quantities, but given the data one cannot conclude which of the two causal effects is at play. We hope this explanation makes things clear, and we point the reviewer to Definition 3 for a more rigorous mathematical formulation.
Due to space limits, the remaining parts of the rebuttal can be found in the official comment that follows this post.
---
Rebuttal 2:
Title: Rebuttal continued
Comment: **Q4.3: I also have a question with the example with three causal variables in Appendix E, which aims to show the difference between hard and soft interventions. It seems that one can distinguish the two even with soft interventions, e.g., by setting $z_3=z_2+\epsilon_3$, $v_3=v_2+\epsilon_3$, then $z_2$ and $z_3$(or $x_2,x_3$) would still be dependent while in the right, $v_2$ and $v_3-v_2$(or $x_2,x_3$) are independent.**
We thank the reviewer for raising this question. The reason why the two models in Example 3 cannot be distinguished is that, given any soft intervention on the first model, there always exists a soft intervention on the second one that yields the same data distributions. This is not to say that any two soft interventions on the two models would yield the same data distribution – which is actually impossible. Thus, by constructing two soft interventions separately for the two candidate models and proving that they are different, one cannot conclude that the two models are distinguishable. In practice this translates to: "given the data that I have, I cannot distinguish whether it was generated by a true structural causal model SCM1 under an intervention SoftIntervention1 or by SCM2 with intervention SoftIntervention2." The statement of Theorem 1 includes a precise definition of what “indistinguishable models” means in our paper (a formal definition also used in prior work).
Returning to the reviewer’s example, a soft intervention $z_3=z_2+\epsilon_3$ on the first model is equivalent to $v_3=2v_2+\epsilon_3$ on the second one. The key point here is that we are not assuming that the experimenter knows the specific form of the interventions, nor the nodes on which the interventions are performed (a commonly considered setting in CRL, see e.g. [3,4]). Thus, from the experimenter’s perspective, these two causal models cannot be distinguished.
[3] Squires, Chandler, et al. "Linear causal disentanglement via interventions." International Conference on Machine Learning. PMLR, 2023.
[4] Varıcı, Burak, et al. "Linear Causal Representation Learning from Unknown Multi-node Interventions." arXiv preprint arXiv:2406.05937 (2024).
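To make the notion of indistinguishability concrete, here is a small numpy sketch (illustrative only; it is not the Appendix E construction, and it uses observational rather than interventional data): two structurally different latent models, one with edge $1\to 2$ and identity mixing, the other with independent latents and a non-trivial mixing, generate exactly the same observations, so a learner who sees only $X$ cannot tell them apart.

```python
import numpy as np

rng = np.random.default_rng(0)
# Shared non-Gaussian noise (uniform), reused so both models are
# compared as functions of the same exogenous randomness.
eps = rng.uniform(-1, 1, size=(10_000, 2))

# Model A: causal graph 1 -> 2, identity mixing.
#   z1 = e1, z2 = z1 + e2, X = (z1, z2)
z1 = eps[:, 0]
z2 = z1 + eps[:, 1]
X_A = np.stack([z1, z2], axis=1)

# Model B: empty causal graph (v1, v2 independent), mixing X = (v1, v1 + v2).
v1 = eps[:, 0]
v2 = eps[:, 1]
X_B = np.stack([v1, v1 + v2], axis=1)

# Both latent models induce the identical observation X = (e1, e1 + e2).
assert np.allclose(X_A, X_B)
```

The two models differ in both graph and mixing, yet the induced observational distributions coincide; distinguishing them requires extra information, which is exactly what the intervention/environment assumptions are meant to supply.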
**Q4.4: lines 139-142: "in contrast with existing literature on single-node interventions, we impose no similarity constraints on the environments.": however, assumptions 4 and 5 seem to assume the environments are sufficiently diverse, which are not weaker.**
Our Assumptions 4 and 5 can easily be satisfied by single-node soft interventions. Indeed, a soft intervention on node $i$ is equivalent to changing the entries of the $i$-th row of $B_k$. As a result, as long as there exist at least $i$ soft interventions in general position for node $i$ (which is $\Theta(d^2)$ interventions in total), our assumption is automatically satisfied. Hence, while our assumption allows for identification from diverse environments, it is not restricted to this case; it also covers soft interventions. On the other hand, we also show in Theorem 2 that the number of interventions $\Theta(d^2)$ cannot be reduced, thereby providing a clear picture of the soft-intervention setting.
We would like to stress, however, that the main motivation for considering general environments is that single-node soft interventions are too restrictive for applications. Hence, the focus of our paper is the setting of general environments, and we do not compare with the restrictive setting of soft interventions explicitly in the main text. We will add this comparison in a revised version of the paper.
We hope that the above explanations are helpful, and we are also happy to answer further questions that the reviewer has.
---
Rebuttal Comment 2.1:
Comment: Thanks for your response, which addressed many of my questions. Some of my feedback and remaining questions are:
- regarding Q2 and Q3: to clarify, I know "A <=> B" means A is equivalent to B. The thing here is, when you use this notation, please be careful to make A and B crystal clear. If you look at the writing, A is a very long sentence with many commas. And $S$ shall be defined first and explicitly.
- regarding Q4.2: "Note that this sentence is only an intuitive description and serves as a supplement to Definition 3". No, I cannot agree. Every sentence, even if it's an explanation, shall be made accurate and serious. My question is about this statement: "since any effect of j on any of its child k can also be interpreted as an effect of i."
- regarding Q4.4: from my understanding, these two assumptions either require sufficiently diverse environments (which is contradictory to the statement "... no similarity constraints on the environments"), or additional interventions, right? Then how are these soft interventions conducted?
---
Reply to Comment 2.1.1:
Comment: We would like to thank the reviewer for providing invaluable feedback and for letting us know the remaining concerns.
**For Q2 and Q3:** We thank the reviewer for the clarification. We agree with the reviewer that confusion in math notations should be avoided, and we will definitely make these statements clearer in future versions.
**For Q4.2:** We thank the reviewer for pointing out this issue. We realize that this sentence is not accurate; the statement should be "since any effect of j on any of its child k can also be interpreted as a mixture of the effect of i and j." We will make this modification in the revised version.
**For Q4.4:** It seems that there are some misunderstandings here. In the statement "... no similarity constraints on the environments" we are comparing our work to existing works that assume access to environments that differ only at one node. By the phrase "similarity constraint" we are referring to constraints that require different environments to be similar to some extent. The reviewer seems to treat diversity constraints as a form of similarity constraint too. It is fine to understand the assumption in this way, but we would like to point out that 1) one has to make *some* assumptions on the environments to ensure identifiability; 2) the main contribution of our paper is that we completely remove the single-node intervention assumption of existing identifiability theory, replacing that very special assumption with Assumptions 4/5, which basically hold with probability $1$ if the weights are sampled from some continuous distribution; 3) our Theorem 1 naturally implies that having $O(d^2)$ single-node soft interventions ($O(d)$ for each node) is sufficient for identifiability, and we show in Theorem 2 that this is also necessary.
We hope that the above explanations resolve the reviewer's concerns, and please feel free to let us know if the reviewer has more questions. | Summary: The paper is about causal representation learning (i.e., learning the latent causal graph and the unmixing function) from high-dimensional observations in the case of linear SCMs where the mixing function is also linear. The paper defines the notion of surrounded-node ambiguity (SNA) and then studies CRL under assumptions of soft interventions (where there are K environments that share the causal graph). Theoretical analysis is performed, and the authors also introduce a method, LiNGCReL, that can perform CRL up to SNA and is provably identifiable.
Strengths: 1. The authors introduce the idea of SNA and identifiability up to SNA, which is novel and also important for understanding ambiguities in the CRL setup, even in the simple case of linear SCMs with a linear mixing function.
2. LiNGCReL, a practical method, is also proposed to perform CRL in such a setting, provably identifying up to SNA. The paper is also about soft single-node interventions, in comparison to previous work which has primarily dealt with hard interventions.
Weaknesses: Since I am not aware of many theoretical results and proofs for CRL, which seems to be the main contribution of the paper (apart from LinGCRel), I do not have particular weaknesses to state.
Technical Quality: 3
Clarity: 3
Questions for Authors: * Figure 2e looks like an undirected graph but on zooming looks like there might be directed edges. If so, this figure can be corrected.
* Font size for the axis labels is too small to read in Fig 2a-2d
* Apart from the interventional CRL works already cited which mainly focuses on identifiability, the authors could also cite BIOLS [1] which studies CRL from a more empirical perspective and proposes an algorithm to learn linear latent causal graphs from high-dimensional data under hard, multi-node interventions.
[1] Subramanian, Jithendaraa, et al. "Learning latent structural causal models." arXiv preprint arXiv:2210.13583 (2022).
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback and comments. In the following, we respond to the questions raised by the reviewer.
**Q1: Figure 2e looks like an undirected graph but on zooming looks like there might be directed edges. If so, this figure can be corrected.**
That was indeed the case – the graph is directed. We thank the reviewer for pointing out this issue, and will correct the figure in a revised version.
**Q2: Font size for the axis labels is too small to read in Fig 2a-2d**
We thank the reviewer for pointing out this issue. We will update these labels to make them bigger.
**Q3: Apart from the interventional CRL works already cited which mainly focuses on identifiability, the authors could also cite BIOLS [1] which studies CRL from a more empirical perspective and proposes an algorithm to learn linear latent causal graphs from high-dimensional data under hard, multi-node interventions.**
We thank the reviewer for pointing to this super interesting and closely related paper, and we will definitely cite it in the revision.
We thank the reviewer again for appreciating our work and are happy to address any other questions that the reviewer might have. | Summary: This paper investigates causal representation learning from low-level observed data across multiple environments. The authors address the surrounded-node ambiguity (SNA) in linear causal models and propose the LiNGCReL algorithm, which achieves identifiability up to SNA without relying on single-node interventions. Experiments on synthetic data show the effectiveness of LiNGCReL in the finite-sample regime.
Strengths: 1. It addresses the limitations of previous methods that rely on single-node interventions, providing a more practical approach to causal representation learning.
2. The proposed LiNGCReL algorithm achieves identifiability up to SNA under mild conditions.
Weaknesses: 1. Could you provide an intuitive explanation or motivations for the assumptions? What do they represent in real-world data scenarios?
2. The authors mentioned that this work differs from Xie et al. [54, 55] and Dong et al. [11] because the latter requires structural assumptions. However, it seems that the proposed model in this paper also inherently assumes that there are no direct causal edges between observed variables.
3. In the first step of the proposed algorithm, "any identification algorithm for linear ICA is used to recover the matrix $M_k$". This may introduce some errors, as perfect identification cannot be achieved. These methods typically assume that the dimensions of $X$ and $Z$ are the same and other restrictions.
4. In Figure 2(e), the arrows on the edges could be made larger, as they currently look like undirected edges.
5. How would the proposed method perform when applied to real-world data?
6. The authors only provided the results of the LiNGCReL algorithm on simulated data. It would be more objective and validate the effectiveness of the proposed method if results of other baselines or classical methods on the same data were also provided for comparison.
Technical Quality: 2
Clarity: 2
Questions for Authors: The paper considers the problem of learning linear causal representations from general environments. Is the information about these different environments known? If so, it indeed provides some additional information for structure learning.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for providing insightful comments and feedback. Below are our responses to the reviewer’s questions and concerns about our paper.
**Q1: Could you provide an intuitive explanation or motivations for the assumptions? What do they represent in real-world data scenarios?**
Certainly. Our paper considers the task of learning linear representations from general environments. Assumption 1 states that all environments share the same mixing function, which is a standard assumption in the CRL literature [1,2]. For example, the generating process of images can be thought of as first generating a few causally related features (scene, location, weather, etc.), which are then transformed by some mixing function into high-level representations (pixels in images). Assumption 1 basically states that the relationship between features and the final image remains the same for all environments, while those features are generated with different probabilities. Assumption 2 requires that the mapping from features to images is injective, i.e. there cannot be two different feature vectors that correspond to the same image. Assumption 3 requires the noise variables to be non-Gaussian and have different distributions; the non-Gaussianity assumption is relatively standard in causal graph discovery, while requiring different distributions is not restrictive, since real-world noise distributions are seldom identical. Assumptions 4 and 5 are the main assumptions in our theory and intuitively state that the environments should contain enough information about all latent variables. Notably, they do not require single-node soft interventions, a widely adopted assumption [1,4,5] that is questionable in real-world applications such as genomics (see e.g. Figure 1 of [3]), where interventions typically affect multiple nodes.
[1] Squires, Chandler, et al. "Linear causal disentanglement via interventions." International Conference on Machine Learning. PMLR, 2023.
[2] von Kügelgen, Julius, et al. "Nonparametric identifiability of causal representations from unknown interventions." Advances in Neural Information Processing Systems 36 (2024).
[3] Tejada-Lapuerta, Alejandro, et al. "Causal machine learning for single-cell genomics." arXiv preprint arXiv:2310.14935 (2023).
[4] Wendong, Liang, et al. "Causal component analysis." Advances in Neural Information Processing Systems 36 (2024).
[5] Varici, Burak, et al. "Score-based causal representation learning with interventions." arXiv preprint arXiv:2301.08230 (2023).
**Q2: The authors mentioned that this work differs from Xie et al. [54, 55] and Dong et al. [11] because the latter requires structural assumptions. However, it seems that the proposed model in this paper also inherently assumes that there are no direct causal edges between observed variables.**
We thank the reviewer for pointing out the possible confusion here, and we will modify the sentence in a revised version. Our point is that since these works rely on observational data only, the underlying causal models may at best be recovered up to Markov equivalence. If we hope to establish stronger identification guarantees, more structural assumptions must be made.
**Q3: In the first step of the proposed algorithm, "any identification algorithm for linear ICA is used to recover the matrix $M_k$". This may introduce some errors, as perfect identification cannot be achieved. These methods typically assume that the dimensions of $X$ and $Z$ are the same and other restrictions.**
As the reviewer noticed, standard linear ICA only applies to the case where $X\in\mathbb{R}^n$ and $Z\in\mathbb{R}^d$ have the same dimension. If $n>d$, then this means that our observation $X$ contains redundant information, and we can naively ignore the last $n-d$ components of $X$. Alternatively, we can run PCA to identify the top $d$ components of observation $X$ and use the new $d$-dimensional vector to run linear ICA. We will make this point more precise in a revised version of the paper.
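The PCA reduction step can be sketched as follows (a numpy-only illustration with hypothetical dimensions; the ICA step itself is omitted): when $n>d$, project $X$ onto its top-$d$ principal directions and verify that the reduced observations remain an exact, invertible linear function of $Z$, so linear ICA can be applied to them.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, N = 3, 8, 5000          # latent dim, observed dim, sample size

Z = rng.laplace(size=(N, d))  # non-Gaussian latent variables
H = rng.normal(size=(n, d))   # injective (full-column-rank) mixing
X = Z @ H.T                   # observations live in R^n but have rank d

# Top-d principal directions of X via SVD of the centered data.
U, S, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
X_red = X @ Vt[:d].T          # reduced d-dimensional observations

# X_red = Z M for an invertible d x d matrix M, so no information is
# lost: regressing X_red on Z leaves (numerically) zero residual.
M_hat, *_ = np.linalg.lstsq(Z, X_red, rcond=None)
assert np.allclose(Z @ M_hat, X_red, atol=1e-8)
```

Here the check confirms that the $d$-dimensional projection is still an exact linear image of the latents, which is the property the first-stage ICA needs.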
**Q4: In Figure 2(e), the arrows on the edges could be made larger, as they currently look like undirected edges.**
We thank the reviewer for pointing out this issue and will make the arrows larger in an updated version.
**Q5: How would the proposed method perform when applied to real-world data?**
The experiments conducted in this paper mainly aim to verify the correctness of our theory and algorithm. We have not tested the method on real-world data, because it is unclear how to evaluate performance: the underlying causal structure is generally unknown, and there is no benchmark in our setting. We agree with the reviewer that dealing with real-world data is an important future direction of CRL, and we will definitely explore it in future studies.
**Q6: The authors only provided the results of the LiNGCReL algorithm on simulated data. It would be more objective and validate the effectiveness of the proposed method if results of other baselines or classical methods on the same data were also provided for comparison.**
To the best of our knowledge, LiNGCReL is the first algorithm that can handle CRL problems given data from multiple environments. We are not aware of any other algorithms that work for this task.
**Q7: The paper considers the problem of learning linear causal representations from general environments. Is the information about these different environments known? If so, it indeed provides some additional information for structure learning.**
We assume that the environments are unknown; the weights in the linear causal models can be arbitrarily different across different environments.
We hope that the above explanations have properly addressed the reviewer’s questions, and feel free to let us know if the reviewer has any other questions.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. But I have the following concerns:
1. Although the authors explain the definition of "general environment", they mention that "all environments share the same causal graph", which is a very strong assumption. Moreover, intervening in an arbitrary number of nodes in real-world scenarios is challenging.
2. For Q3, the performance of the ICA technique relies heavily on the variance of the noises. I am concerned that it is not a suitable method to recover the matrices $M_k$.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for letting us know the concerns.
1. The reviewer is right that "all environments share the same causal graph" might be restrictive in some cases. However, as we explained in the rebuttal, our results actually apply to cases where some causal edges are absent. Moreover, most existing literature assumes single-node soft interventions, which are special cases of environments with the same causal graph, and our main contribution is that we remove the single-node assumption.
In practice, we often do not know how interventions change the underlying latent variables. Our results apply to interventions that may affect one or more latent variables. Please note that we do not need to perform interventions on a chosen set of latent variables; this set is actually unknown to us in the learning process.
2. To verify that the first-stage ICA estimation is accurate, we ran experiments on randomly generated models of size $d=5$ and $d=10$. We found that with $N=3000$ samples, the recovered matrix has average error $<10^{-3}$ compared with the ground truth (up to row permutations) for $d=5$, and around $2\times 10^{-3}$ for $d=10$. Currently, we are not aware of methods better than ICA that can be used in our context, but it is definitely a promising direction and we leave it to future work.
* Identifying an intrinsic surrounded-node ambiguity (SNA) that exists when the causal model and mixing function are linear. This ambiguity is unavoidable without hard interventions.
* Proving that identification up to SNA is possible for linear models under reasonable conditions with data from $O(d)$ diverse environments, where $d$ is the number of latent variables.
* Proposing an algorithm called LiNGCReL that provably achieves identification up to SNA in the linear case.
* Showing that $\Omega(d^2)$ single-node soft interventions would be required to achieve the same identification guarantee, highlighting the benefit of diverse environments.
* Demonstrating LiNGCReL's effectiveness on synthetic data in recovering the true causal model up to SNA.
The paper provides some new theoretical insights into the identifiability limits of CRL and an algorithm to achieve those limits in the linear case with general environments.
Strengths: * New theoretical insights on identifiability limits for CRL, particularly connecting the SNA concept.
* The authors propose an algorithm which provides a concrete way to achieve the theoretical guarantees.
* Experimental results validate the theoretical findings on synthetic data.
Weaknesses: * Limited to linear causal models and mixing functions. Nonlinear extensions not discussed.
* All environments share the same causal graph. My understanding on this is that soft interventions considered in this work do not allow for removal of a subset of the parents.
* Experiments only on synthetic data; real-world applications or datasets are not tested. Perhaps a semi-synthetic experiment following Squires et al. 2023 could be added. Especially when a key point is that the assumptions are less stringent than in prior work, an experiment like this might help.
* Computational complexity of LiNGCReL not thoroughly analyzed.
* Implications of SNA for downstream tasks not explored.
Technical Quality: 3
Clarity: 3
Questions for Authors: I overall think that the paper would be a nice contribution to NeurIPS. Here are some moderate/minor comments:
* Regarding my comment above on "all environments share the same causal graph", if my understanding is correct, what's the main challenge on allowing partial removal of parents during a soft intervention?
* L113-114: "One may expect that identifiability with soft interventions is not much different from hard interventions, since soft interventions can approximate hard interventions with arbitrary accuracy". I am not really sure about this sentiment on soft interventions; I particularly would expect the identifiability to be different.
* In many places I see "for $\forall$", I think either use "for all" or simply "$\forall$".
* In line 133, it reads "each latent variable $v_i$" but $z_i$ was used to denote latent variables.
* In Theorem 1, $v$ is used to denote the candidate/hypothetical latent variables; wouldn't it be easier to use $\hat z$? This would keep the consistency with $\hat H$, $\hat A$, etc.
* In the model setup in eq.(3), you might want to index the variables $z$ per environment too since these are not exactly the same r.vs.
* I think you want to cite the proceedings version of Squires et al 2023 and not the arxiv version from 2022.
* In Definition 10 in the appendix, what is $\phi$? I also think there might be a better way to state Definition 4 instead of relying on a definition in the appendix as early as in page 3.
* L627 in appendix, typo "must stronger".
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The natural limitations that come from the assumptions are discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for appreciating our work and for giving insightful comments. Below are our responses to the questions and weaknesses mentioned by the reviewer.
**Q1: All environments share the same causal graph. My understanding of this is that soft interventions considered in this work do not allow for removal of a subset of the parents…… Regarding my comment above on "all environments share the same causal graph", if my understanding is correct, what's the main challenge of allowing partial removal of parents during a soft intervention?**
We are sorry for the confusion here. There are actually three different settings that existing works consider:
1) Single-node, hard interventions. A hard intervention removes the edges between the intervened node and all of its parents.
2) Single-node, soft interventions. A soft intervention changes the weights of the edges between the intervened node and its parents.
3) General environments (i.e., soft interventions that may simultaneously intervene on an arbitrary number of nodes). This is more general than 2) and is the setting this paper focuses on.
While we highlight in our paper that we can deal with the case where the weights are nonzero, implying that the causal graph does not change, our setting does allow some weights to be zero. This is because our Assumption 4 (and 5) only requires a non-degeneracy condition, which does not necessarily require all weights to be nonzero. For example, let there be a three-node graph with edges $1\to 3$ and $2\to 3$, and suppose we have access to three environments $E_i, i=1,2,3$, where $E_1$ has nonzero weights for both $1\to 3$ and $2\to 3$, $E_2$ has a nonzero weight only for the former edge, and $E_3$ only for the latter. Then Assumptions 4 and 5 are satisfied at node $3$, since the third rows of the $B_i$’s are of the form $(x_1,x_2,1),(y_1,0,1),(0,z_2,1)$ (where $x_1,x_2,y_1,z_2$ denote nonzero numbers) and they span $\mathbb{R}^3$. We will make this point more explicit in a revised version of our paper.
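The spanning condition in this three-node example can be checked numerically; below is a small numpy sketch with hypothetical nonzero values standing in for the weights $x_1, x_2, y_1, z_2$ (for generic nonzero weights the rank condition holds, though isolated degenerate choices exist).

```python
import numpy as np

# Hypothetical nonzero weights for the third rows of B_1, B_2, B_3:
# E_1 keeps both edges 1->3 and 2->3, E_2 keeps only 1->3, E_3 only 2->3.
x1, x2, y1, z2 = 1.0, 1.0, 2.0, 3.0
rows = np.array([
    [x1, x2, 1.0],   # environment E_1
    [y1, 0.0, 1.0],  # environment E_2 (weight of 2->3 is zero)
    [0.0, z2, 1.0],  # environment E_3 (weight of 1->3 is zero)
])

# The three rows span R^3, so the non-degeneracy condition holds at
# node 3 even though some individual edge weights are zero.
assert np.linalg.matrix_rank(rows) == 3
```

This is exactly the sense in which the assumption tolerates absent edges: what matters is that the rows collected across environments are in general position, not that every weight is nonzero in every environment.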
**Q2: L113-114: "One may expect that identifiability with soft interventions is not much different from hard interventions, since soft interventions can approximate hard interventions with arbitrary accuracy". I am not really sure about this sentiment on soft interventions, I particularly would expect the identifiability to be different.**
We thank the reviewer for pointing this out, and we will delete this sentence to avoid unnecessary confusion.
**Q3: In line 133, it reads "each latent variable $v_i$" but $z_i$ was used to denote latent variables. L627 in appendix, typo "must stronger". In the model setup in eq.(3), you might want to index the variables z per environment too since these are not exactly the same r.vs. I think you want to cite the proceedings version of Squires et al 2023 and not the arxiv version from 2022.**
We are sorry for these issues and will fix them in a revised version.
**Q4: In Theorem 1, v is used to denote the candidate/hypothetical latent variables, wouldn't it be easier to use z^? This would keep the consistency with H^, A^, etc.**
The reviewer is correct about this. We will change the notations in the revision.
**Q5: In Definition 10 in the appendix, what is $\phi$? I also think there might be a better way to state Definition 4 instead of relying on a definition in the appendix as early as in page 3.**
We are very sorry for the typo. We restate this definition below:
We write $(h,G)\sim_{CRL}(\hat{h},\hat{G})$ if there exist a permutation $\pi$ on $[d]$ and a diffeomorphism $\psi:\mathbb{R}^d\to \mathbb{R}^d$, where the $j$-th component of $\psi$, denoted by $\psi_j(z)$, is a function of $z_j$ for all $j\in[d]$, such that the following holds:
For all $i,j\in[d]$, $i\in pa_{G}\left(j\right) \Leftrightarrow \pi(i)\in pa_{\hat{G}}\left(\pi(j)\right)$,
$P_{\pi}\circ h=\psi\circ\hat{h}$, where $P_{\pi}$ is the permutation matrix satisfying $(P_{\pi})_{ij}=1 \Leftrightarrow j=\pi(i)$.
Intuitively, this means that one can find a permutation of the nodes such that the two models are in node-wise correspondence.
We thank the reviewer again for the positive feedback and are happy to address any other questions that the reviewer might have.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response. I will keep my score for now. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Towards Understanding How Transformers Learn In-context Through a Representation Learning Lens | Accept (poster) | Summary: The paper explores the ICL capabilities of LLMs, focusing on understanding the mechanisms underlying ICL. The authors aim to investigate the ICL process using representation learning principles. They mainly use kernel methods to develop a dual model for one softmax attention layer, demonstrating that the ICL inference process of this layer aligns with the training procedure of its dual model.
* Extend theoretical analysis to more complex scenarios with multiple attention layers
* Propose potential modifications to the attention layer to enhance ICL capabilities
Experiments also support the theoretical findings and improvements on ICL. This paper may give a deeper understanding of ICL and suggests practical approaches for improving the ICL capabilities of LLMs in real-world applications.
Strengths: * While previous studies focus mainly on linear attention, this work extends the analysis to a more realistic case, softmax attention, including its training dynamics.
* Based on the dual gradient descent process, the authors also propose potential attention modifications, which could be beneficial to real-world downstream applications.
Weaknesses: * The work may ignore the format of the task input/output. Some recent work, such as (Theoretical Understanding of In-Context Learning in Shallow Transformers with Unstructured Data, Xing et al.), suggests the success of ICL may also be due to the alignment of the attention mechanism, especially in real-world noisy cases.
* The experimental section is not well organized or self-explanatory; readers may need to refer to additional papers to understand the protocols.
Technical Quality: 3
Clarity: 2
Questions for Authors: Theoretical Section
* Will the ratio of the number of negative samples have an impact on generalization bound?
Experimental Section
* As mentioned in the weaknesses part, the authors may need to add more experimental details to make the section more self-explanatory.
* Since the authors propose three attention modifications, which one is the best, and do we need to add all three together? Why or why not?
* Does the conclusion hold the same for both one-layer transformers and multi-layer transformers?
* The dynamics of token interactions may not be well supported in the experiments.
Conclusion Section?
* The conclusion section seems to be missing due to the page-size restriction.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors have clearly stated the limitations of this work, the selected transformer components of this work, and selected task settings.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your recognition of the novelty of our paper. We have carefully addressed each of your concerns; given the rebuttal word limit, and noticing that some issues overlap, we have organized our responses as follows.
> **Weakness 1** (The work may ignore the format of task input/output. And some of recent work...)
> **& Question 5** (The dynamics of token interactions may not be well supported in the experiments.)
The mentioned work [1] extends the original structured input format [2,3] to the unstructured setting and then explores the impact of various factors (one/two layers, look-ahead attention mask, PE) when learning linear regression tasks both theoretically and experimentally. The explanation for ICL derived from these interesting findings is more aligned with real-world scenarios (where the examples are usually stored in different tokens).
Although our work also relates ICL to GD, our focus is on how we can view the ICL inference process as a GD process on a dual model from a representation learning perspective. We do not make assumptions about the input format, such as $[x; y]$ or $[x; 0], [0; y]$, as we do not target specific tasks (e.g. linear regression tasks) but aim to provide more general explanations. To simplify the analysis, our input tokens can be viewed as embeddings of demonstration sentences (or intermediate states among multi layers).
Correspondingly, in the experimental section, our focus is not on the dynamics of token interactions as specific tasks are not being targeted. Instead, we are more concerned with whether the result of ICL inference $h'\_{T+1}$ is equivalent to the test prediction $\hat{y}_{test}$ of the trained dual model, as illustrated in the left part of Figure 3. This result aligns with our analysis in Theorem 3.1.
[1] Theoretical Understanding of In-Context Learning in Shallow Transformers with Unstructured Data
[2] Transformers learn in-context by gradient descent
[3] Trained transformers learn linear models in-context
> **Weakness 2** (The experimental section is not well-organized and self-explained...)
> **& Question 2** (As mentioned in weeknesses part...)
Thank you for your careful suggestions! Due to the restriction of page sizes, we simplified the description of the experiments. In fact, more details of our experiments are presented in Appendix E. We will reorganize our experimental part and provide more details in a future revision.
> **Question 1** (Will the ratio of the number of negative samples have an impact on generalization bound?)
The answer is yes. After introducing negative samples, we consider the following representation loss:
$$
\mathcal{L}(f) = \mathbb{E}_{x\sim\mathcal{D}\_{\mathcal{T}}} \left[- \frac{1}{K} \sum\_{j = 1}^{K} \left( W\phi(W_Kx) \right)^T\left(W_Vx - W_Vx^{-}\_{j} \right)\right],
$$
where $K$ is the number of negative samples for each token and $x\_{j}^-$ denotes the $j$-th negative sample for token $x$. The empirical loss is modified correspondingly. Then, by the other definitions in Section 3.3, we can obtain the generalization bound as
$$
\mathcal{L}(\hat{f})\le \mathcal{L}(f) + O\left(w\rho d\_o \sqrt{\mathrm{Tr}(K\_S)\left(\frac{5}{N^2} + \frac{1}{rN^3}\right)} + \sqrt{\frac{\log\frac{1}{\delta}}{N}}\right),
$$
where $r = \frac{K}{N}$ is the ratio of the number of negative samples. It can be observed that as this ratio increases, the generalization error decreases. However, we also notice that $\frac{5}{N^2} > \frac{1}{rN^3}$, so the reduction in generalization error due to an increased proportion of negative samples is limited. The proof is similar to that of Theorem 3.2 (see Appendix C); the main difference is that we need to define the new function classes $G$ and $F$ according to the above definition. Nevertheless, we do not rule out the possibility of a tighter generalization bound using better theoretical tools.
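As a concrete illustration of the negative-sample loss above, here is a minimal numeric sketch; the dimensions, the choice $\phi=\tanh$ as a stand-in feature map, and the random weights are our illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, N = 8, 4, 32                  # token dim, negatives per token, number of tokens
W   = rng.normal(size=(d, d))       # dual-model weight (output dim taken equal to d
W_K = rng.normal(size=(d, d))       #   so the inner product below is well defined)
W_V = rng.normal(size=(d, d))
phi = np.tanh                       # stand-in for the feature map

X     = rng.normal(size=(N, d))     # tokens
X_neg = rng.normal(size=(N, K, d))  # K negative samples per token

def token_loss(x, x_negs):
    """-(1/K) * sum_j (W phi(W_K x))^T (W_V x - W_V x_j^-)"""
    rep   = W @ phi(W_K @ x)                       # (d,)
    diffs = (W_V @ x)[None, :] - x_negs @ W_V.T    # (K, d), row j is W_V x - W_V x_j^-
    return -np.mean(diffs @ rep)

# Empirical loss: average of the per-token losses over the N tokens.
empirical_loss = np.mean([token_loss(X[i], X_neg[i]) for i in range(N)])
print(empirical_loss)
```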
> **Question 3** (Since the authors ... which one could be the best ... need add these three together? why...?)
The answer depends on the specific task, but generally speaking, using all three methods together is not necessary, especially when the design of the augmented or negative modification is not effective enough. We conduct experiments with their combinations on linear, trigonometric, and exponential tasks. **The results are shown in Figure 1 of the PDF** in `To AC and All Reviewers`. For the latter two tasks, the results are actually worse than using the regularized or augmented modification individually (compared to Figures 11 and 12 in Appendix E). Therefore, when the design of the augmented or negative modification is not effective, we recommend using a single modification method.
> **Question 4** (Does the conclusion hold the same for both one-layer transformer and multi-layer transformer?)
For more scenarios, the conclusion still holds: using all three methods simultaneously is not necessary. We supplemented our experiments using BERT (a multi-layer Transformer); see `(To AC and All Reviewers)`. From Table 1 in the PDF, the combined models do not outperform the augmented version alone. We also observed that the improvement from negative samples remains quite limited, indicating that our method for selecting negative samples is not effective enough. In such cases, using the augmented modification alone is a better choice.
> **Question 6** (seems missing a conclusion section)
We will streamline our paper's content and supplement the conclusion section based on the valuable questions raised by all the reviewers in the future revision. Thank you very much for your reminder!
**Final Note:** We want to thank you again for all the questions you have provided. If there are any remaining questions, please do not hesitate to let us know.
---
Rebuttal Comment 1.1:
Title: Looking forward to your reply
Comment: We sincerely appreciate the time and effort you have dedicated to reviewing our manuscript and providing us with your valuable feedback!
As the author-reviewer discussion phase is drawing to a close, we kindly wish to confirm whether our response has adequately addressed your concerns. A few days ago, we submitted a detailed response addressing your concerns and hope that they have adequately resolved any issues. If there are any remaining questions, please do not hesitate to let us know.
We would greatly appreciate any additional feedback you may have! | Summary: The paper explores the in-context learning (ICL) abilities of Transformer-based models. The authors propose an interpretation of ICL through the lens of representation learning. They establish a connection between the inference process of softmax attention layers in Transformers and the gradient descent process of a dual model, providing theoretical insights and generalization bounds. The paper also suggests potential modifications to the attention mechanism inspired by contrastive learning techniques and supports its findings with experiments.
Strengths: (Reminder) I'm not a researcher working in the theory field, so I couldn't recognize this paper's theoretical contributions well.
- The paper provides a fresh perspective on understanding ICL by connecting it to gradient descent and representation learning, which is a novel and insightful contribution to the field.
- The authors extend their theoretical findings to various Transformer settings, including single and multiple attention layers, enhancing their conclusions' generalizability.
Weaknesses: (Reminder) I'm not a researcher in the theory field, so I also could not fully recognize this paper's limitations from that perspective.
- The paper relies on several assumptions and simplifications, such as ignoring the impact of layer normalization and residual connections. These might limit the applicability of the findings in more complex Transformer architectures.
- Lack of discussions towards some related works that argue against the equivalence between ICL and gradient descent [1,2].
[1] In-context Learning and Gradient Descent Revisited, 2023
[2] Do pretrained Transformers Learn In-Context by Gradient Descent, 2023
Technical Quality: 3
Clarity: 3
Questions for Authors: - How would the inclusion of layer normalization and residual connections affect the theoretical framework and findings?
- How do the proposed methods perform on more complex and diverse tasks beyond the linear regression tasks used in the experiments?
Confidence: 1
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The experiments are primarily focused on linear regression tasks, and it remains to be seen how well the proposed methods generalize to a wider range of tasks and datasets.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for acknowledging the novel perspective on understanding ICL in our paper. Given the rebuttal word limit, and noticing that some issues overlap, we have organized our responses as follows.
> **Weakness 1** (The paper relies on several assumptions and simplifications...)
> **& Question 1** (How would the inclusion of layer normalization and residual connections affect the theoretical framework and findings?)
As for residual connection, intuitively, our conclusions can be naturally extended to versions with residual connections, which are analogous to introducing a residual connection structure in dual models, i.e., $\mathrm{Atten}(x) + x = \hat{W}\phi(W_Qx) + x$, where $\hat{W}$ is the weight in the dual model $f(x) = W\phi(x)$ trained under the original representation loss in Eq (9).
This modification does not have a significant impact on the specific representation loss or generalization bound, but its effect on the GD process of the dual model still requires more detailed analysis.
As for layer normalization, from the representation learning perspective of the dual model, it intuitively normalizes $x$ onto a scaled unit sphere, making the process more stable. However, we acknowledge the analytical challenges posed by the nonlinearity of layer normalization, and its more detailed effects still warrant further investigation. This may require more powerful theoretical tools, e.g. [1], which is also a direction for our future research.
[1] On the Nonlinearity of Layer Normalization.
> **Weakness 2** (Lack of discussions towards some related works that argue against the equivalence between ICL and gradient descent...)
Following the previous work [1], the first mentioned work [2] explores potential shortcomings in the previously used evaluation metrics and points out that the ICL output depends more on the earlier, lower layers, unlike fine-tuning, which relies on all model parameters. Inspired by this, it introduces a new fine-tuning method called LCGD and validates the claim with new metrics.
Unlike previous works including [3,4], which studies ICL and Gradient Descent (GD) under strong constraints (specific regression tasks and weight constructions, etc.), the second mentioned work [5] highlights significant gaps between these strong constraints and real-world LLMs. They observe that there are differences in how ICL and GD modify the output distribution and emphasize the distinction between naturally emergent ICL (pretrained on natural text data) and task-specific ICL (such as in [3,4]). It provides a clearer definition and experimental results in more realistic settings.
Although our work also relates ICL to GD, our focus is on how we can view the ICL inference process as a GD process on a dual model from a representation learning perspective:
- To simplify theoretical analysis, we concentrate on analyzing the core component of the Transformer architecture—the attention layer—while neglecting residual connections and layer normalization. This does introduce some divergence from real-world large language models;
- In terms of research approach, we focus on the GD process of dual model under representation learning loss, rather than analyzing GD directly on the original model as in [2,5]. This is also a significant departure from the mentioned works.
- Additionally, the representation learning process of the dual model resembles a self-supervised process, which we believe is reasonable: unlike fine-tuning, ICL inference is not explicitly provided specific metrics to adapt to the target task and this also aligns with certain aspects of previous work [5]. And we also compare this process with existing self-supervised learning methods in our work.
- Inspired by this representation learning perspective, we propose modifications to the attention mechanism, which were not sufficiently explored in previous work.
Thank you for your suggestion and we will add more supplementary information on the relevance between our work and the mentioned works in our future revision.
[1] Why Can GPT Learn In-Context? Language Models Implicitly Perform Gradient Descent as Meta-Optimizers
[2] In-context Learning and Gradient Descent Revisited
[3] Transformers Learn In-context by Gradient Descent
[4] Trained Transformers Learn Linear Models In-context
[5] Do pretrained Transformers Learn In-Context by Gradient Descent?
> **Question 2:** How do the proposed methods perform on more complex and diverse tasks beyond the linear regression tasks used in the experiments?
For more complex tasks, considering the limited computational resources and time constraints, we select the pre-trained BERT-base-uncased model to apply attention modifications and validate the results on part of the GLUE datasets. **More experimental settings are detailed in the (global) author rebuttal and the results are presented in Table 1 of the PDF** `(To AC and All Reviewers)`.
**For the regularized models**, we select different values $\alpha$ and conclude that smaller absolute values of $\alpha$ are recommended. **For the augmented models**, we adopt a parallel MLP approach for data augmentation $g_1/g_2$, that is, $g_1(W_Vx) = W_Vx + c W_2\sigma(W_1x)$. The results suggest that using $g_1$ alone may be a more effective choice under our settings. **For the negative sample models**, we continued to select tokens with lower attention scores as negative samples. The final results showed limited improvement in model performance. We think that this may be due to the ineffective selection of negative sample tokens, and better methods for selecting negative samples would be interesting to explore in the future.
In conclusion, these results validate the potential of our proposed methods.
**Final Note:** We want to thank you again for all the questions you have provided. If there are any remaining questions, please do not hesitate to let us know.
---
Rebuttal Comment 1.1:
Title: Looking forward to your reply
Comment: We are truly grateful for the time and effort you have invested in reviewing our manuscript and offering your valuable feedback.
As the author-reviewer discussion phase is drawing to a close, we kindly wish to ensure that our recent detailed response, submitted a few days ago, has effectively addressed all of your concerns. If any issues remain or if further clarification is needed, please do not hesitate to reach out.
We would greatly appreciate any additional feedback you may have! | Summary: The authors present a new way of linking in-context learning (ICL) to gradient descent.
The authors are able to demonstrate that the ICL of a (simplified) transformer decoder layer is indeed equivalent to "representation learning".
Using the theoretical findings the authors are also able to propose extensions to the attention mechanism.
The paper shows theoretical proofs for these claims as well as additional experiments to demonstrate the claim.
Strengths:
1. Excellent theoretical contribution to understanding in-context-learning
2. Adding relevant proofs
3. Using theoretical results to motivate better attention mechanisms
Weaknesses:
1. A slightly simplified transformer architecture is used (it would be interesting to see the skip-connection version), as recent work, such as "mechanistic interpretability", often relies on these skip connections.
2. The experimental setup and results are a bit limited. Slightly more description of the task would be good; a somewhat "realistic" task would also be interesting to see, along with what the results mean qualitatively.
3. A small section on how these results can be applied for bigger models in practice.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How could a "realistic" task look like for your evaluation?
2. How could one practically extend your augmentations to, e.g., modern LLMs (even of 'moderate' size, e.g. 7B)?
- I.e., what would these "data augmentations" or "negative samples" really look like?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for acknowledging the theoretical contribution of our paper. We have carefully addressed each of your concerns; given the rebuttal word limit, and noticing that some issues overlap, we have organized our responses as follows.
> **Weakness 1**: Slightly simplified transformer architecture is used (it would be interesting to see the skip-connection version) ...
The attention module, as one of the core components of the transformer, is crucial for extracting contextual information. Therefore, for the sake of analytical simplicity, we focus on comparing pure attention modules/mechanisms. Intuitively, our conclusions can be naturally extended to versions with residual connections, which are analogous to introducing a residual connection structure in dual models, i.e., $\mathrm{Atten}(x) + x = \hat{W}\phi(W_Qx) + x$, where $\hat{W}$ is the weight in the dual model $f(x) = W\phi(x)$ trained under the original representation loss in Eq (9). Although it is relatively straightforward to connect residual connections with dual models in form, the role of such residual connections in the learning process of the dual model's weight from a representation learning perspective still requires further investigation. Therefore, a more reasonable explanation is currently lacking. We believe this is an interesting direction for future research.
> **Weakness 2** (The experimental setup and results are a bit limited ... as well as a somewhat "realistic" task would be interesting to see - and what the results mean qualitatively. )
> **& Question 1** (How could a "realistic" task look like for your evaluation?)
Due to page-size constraints, we simplified the description of the experiments for the simulation tasks in the main body. In fact, more detailed experimental setups are presented in Appendix E. We will provide more detailed supplementary information on the experimental setups if space permits in future revisions.
For the evaluation on more realistic tasks, we have supplemented our work with additional experiments. **More experimental settings are detailed in the (global) author rebuttal and the results are presented in Table 1 of the PDF** `(To AC and All Reviewers)`.
Regarding the qualitative results: **For the regularized models**, smaller absolute values of $\alpha$ are recommended. We interpret this as adding a small scaled identity matrix to the attention score matrix helps achieve full rank, thereby better preserving information. **For the augmented models**, to better retain the information from the pre-trained $W_V$ and $W_K$ weights, we adopted a parallel MLP approach for data augmentation $g_1$ and $g_2$. For example, $g_1(W_Vx) = W_Vx + c W_2\sigma(W_1x)$. The results suggest that enhancing the value mapping may be a more effective choice. **For the negative sample models**, we continued to select tokens with lower attention scores as negative samples while the final results showed limited improvement in model performance. We believe this may be due to the ineffective selection of negative sample tokens, and better methods for selecting negative samples would be interesting to explore in the future.
> **Weakness 3** (A small section on how these results can be applied for bigger models in practice.)
> **& Question 2**(How could one practically extend your augementations to e.g. modern LLMs ...)
We believe that our supplementary experiments on pre-trained BERT provide more insights into this issue (see the global rebuttal `(To AC and All Reviewers)`). Although the scale of the selected model and datasets is relatively small due to our limited computational resources and time constraints, we believe similar approaches would naturally extend to larger-scale experiments.
Take the augmented models as an example: we also considered using more complex data augmentations for the linear key/value mappings ($g_1(W_Vx)$ and $g_2(W_Kx)$). However, unlike previous methods used in the simulation tasks, we do not choose $g_1/g_2$ as $g_1(W_Vx)=W_2\sigma(W_1W_Vx)$, because our experiments showed that this design struggles to fully leverage the pre-trained weights $W_V$ and $W_K$. Thus, we adopted a parallel approach, that is, $g_1(W_Vx) = W_Vx + c W_2\sigma(W_1x)$, where $c$ is a hyperparameter that controls the influence of the new branch ($g_2$ is similar). This approach introduces nonlinear "augmentation" while preserving the knowledge from the original pre-trained weights, making it easier to train. The results in Table 1 also prove the effectiveness of this method, especially using $g_1$ alone, which leads to the most significant improvement in model performance.
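A minimal sketch of this parallel design; the random stand-in weights, ReLU for $\sigma$, and zero-initializing $W_2$ are our illustrative assumptions rather than the rebuttal's exact training setup:

```python
import numpy as np

rng = np.random.default_rng(0)
d, d_h, c = 16, 32, 0.1                       # hidden size, MLP width, branch scale

W_V = rng.normal(size=(d, d)) / np.sqrt(d)    # stand-in for pretrained value weights
W_1 = rng.normal(size=(d_h, d)) / np.sqrt(d)  # new parallel-MLP weights
W_2 = np.zeros((d, d_h))                      # zero-init: the new branch starts as a no-op

def g1(x):
    """Augmented value mapping: W_V x + c * W_2 sigma(W_1 x), with sigma = ReLU."""
    return W_V @ x + c * (W_2 @ np.maximum(W_1 @ x, 0.0))

x = rng.normal(size=d)
# With W_2 zero-initialized, the augmented branch leaves the pretrained
# mapping untouched, so training can depart smoothly from it.
print(np.allclose(g1(x), W_V @ x))  # True
```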
Additionally, we do not rule out the possibility of more powerful and efficient augmentation methods. The parallel MLP approach is chosen primarily to make better use of the pre-trained weights $W_V$/ $W_K$ and this design is quite general and does not consider specific characteristics of particular tasks. We still encourage the design of more task-specific augmentation format for different tasks. For example, in CV tasks, it might be natural to incorporate CNN-extracted features into the $g_1$/$g_2$ in ViTs.
Similarly, our discussion on the negative sample models further supports the need for more task-specific designs. As shown in Table 1, selecting tokens with low attention scores as negative samples is a rather crude approach, leading to relatively limited performance improvements. Exploring how to select or construct higher-quality negative samples is also an interesting direction. These detailed considerations go beyond the scope of this paper, and these interesting directions will be the focus of our future work.
**Final Note**: We are excited that you acknowledge our theoretical contribution. If there are any remaining questions, please do not hesitate to let us know. Thank you once again for your insightful comments and for your encouraging feedback!
---
Rebuttal Comment 1.1:
Comment: Thank you for your rebuttal and additional comments.
At this stage there are no questions on your rebuttal to my questions.
With regards to a response to another reviewer (k2cV).
> Weakness 2 (Lack of discussions towards some related works that argue against the equivalence between ICL and gradient descent...)
The answer is not very clear. Could you elaborate on:
1. A 1-1 (quick) comparison against each of the five papers. [1-5].
2. A more detailed explanation why using the dual is important. (As the dual is simply a mathematical tool).
---
Reply to Comment 1.1.1:
Title: Replying to Official Comment by Reviewer cRiK
Comment: Thank you sincerely for your feedback. Here is our response:
> **Question 1:** A 1-1 (quick) comparison against each of the five papers. [1-5].
- Oswald et al. [1] focus on the ability of the linear attention layer to perform gradient descent during ICL when faced with linear regression tasks. Their observations are based on specific assumptions, including the constructed forms for $W_Q, W_K, W_V$ and the input tokens (concatenated $[x,y]$). Our work does not target specific tasks like linear regression; therefore, we do not make detailed assumptions about the model weights (simply treated as weights after pre-training) or construct specific input forms. Instead, we aim to view the ICL inference process from the perspective of representation learning in the dual model.
- Zhang et al. [2] also theoretically analyze the gradient flow of linear self-attention modules under certain initialization conditions when dealing with linear regression tasks and provide the forms of weights at the global minimum as well as the prediction error. Similar to the comparison above, the aim of our work is not to investigate the expressive capabilities of the model's structure for these specific tasks but to interpret the ICL inference process from a representation learning lens in a more general setting, thus we do not use these assumptions.
- Shen et al. [3] experimentally illustrates that the assumptions used in previous works including Oswald et al. [1] and Zhang et al. [2] may be too strong in real LLMs. They analyze the differences between ICL inference in LLMs and the fine-tuned models in real-world scenarios from various perspectives, including weight sparsity, order stability, and output distribution. In comparison, our work: (1) for the sake of theoretical analysis, considers not a "complete" real model but a simplified one that omits structures like residual connections; and (2) our interpretation of the ICL inference is not linked to fine-tuning on the original model but rather to training on the dual model.
- Dai et al. [4] use the dual form to interpret ICL as an implicit fine-tuning (gradient descent) of the original model under the linear attention setting and this alignment is ambiguous as the specific details of the gradient descent process is not clear. Thus, our work extends this analysis to the nonlinear attention setting and delve deeper into more details of this process (exploring the specific form of the loss function). The main difference is that we consider this as training the dual model under a self-supervised representation learning loss, rather than performing supervised fine-tuning on the original model.
- Natan et al. [5] investigate potential shortcomings in the evaluation metrics used by Dai et al.[4] in real model assessments and propose a layer-causal GD variant that performs better in simulating ICL. However, similar to the comparison above [3], their study discusses complex gradients in the original model in real scenarios, whereas we turn our attention to the gradient descent of the dual model of the attention layer.
> **Question 2:** A more detailed explanation why using the dual is important. (As the dual is simply a mathematical tool.)
By using the dual, or rather, by analyzing the potential gradient descent process of the dual model, we can gain new insights into the model mechanisms in reverse. Specifically, through the dual perspective,
- we transform the forward inference process into an optimization process. Since optimization processes are well-known and have established theoretical tools (for example, generalization error as mentioned in our work), this transformation can provide reverse insights into analyzing the model mechanisms.
- we can clearly observe that the dual model involves a self-supervised representation learning process. Considering that there is a wealth of mature work in this area, we can draw on it to reflect on the attention mechanism, which has also inspired the attention modifications illustrated in our work.
Finally, as for "the dual is simply a mathematical tool", we want to clarify that the term "dual" we use is different from the one in optimization within the mathematical field. Instead, it follows the terminology used in previous work [4], where the forward process of the attention layer and backward process on some model are referred to as a form of "dual". We are not sure if this has also led to some misunderstandings. If there are any remaining questions, please do not hesitate to let us know!
[1] Von Oswald J, et al. Transformers learn in-context by gradient descent.
[2] Zhang R, et al. Trained transformers learn linear models in-context.
[3] Shen L, et al. Do pretrained Transformers Really Learn In-context by Gradient Descent?
[4] Dai D, et al. Why can GPT learn in-context? Language models implicitly perform gradient descent as meta-optimizers.
[5] Nathan T B, et al. In-context learning and gradient descent revisited | Summary: The paper investigates the in-context learning (ICL) capabilities of Transformers, explaining it through a representation learning lens. It establishes a theoretical connection between ICL and gradient descent, deriving a generalization error bound tied to demonstration tokens. The authors also suggest modifications to the attention mechanism inspired by theory and support their findings with experiments.
Strengths: * **Theoretical Depth**: It provides a rigorous theoretical analysis, including the derivation of a generalization error bound, which contributes to the theoretical foundation of Transformer models.
* **Insights into Attention Mechanism**: The paper provides potential modifications to the attention layer, inspired by theory, which could potentially improve the learning capabilities of Transformers.
* **Good Writing**: Clear Structure and Presentation: The paper is well-structured, with a clear abstract and a good introduction. It is easy for readers to understand.
Weaknesses: * **Generalization to Other Tasks**: The paper's findings are based on specific tasks. It's unclear how well these insights would generalize to general language tasks.
* **Empirical Validation**: While the paper includes experiments, the extent of empirical validation is limited. We may need more experiments on real-world language tasks to verify the modification of the attention mechanism.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see the weakness section.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper has discussed the limitations sufficiently.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the encouraging feedback, especially for recognizing the theoretical contribution and insights of our paper. Our response is detailed below.
> **Weakness 1**: Generalization to Other Tasks: The paper's findings are based on specific tasks. It's unclear how well these insights would generalize to general language tasks.
In fact, our findings can be naturally extended to more realistic tasks. We supplement our experiments on more general language tasks. **More experimental settings are detailed in the (global) author rebuttal, and the results are presented in Table 1 of the PDF** `(To AC and All Reviewers)`. Considering the limited computational resources and time constraints, we choose the BERT-base-uncased model and select a subset of the GLUE datasets to validate the effectiveness of the modifications to the attention mechanism. More discussion of the results is provided in our response to Weakness 2.
> **Weakness 2**: Empirical Validation: While the paper includes experiments, the extent of empirical validation is limited. We may need more experiments on real-world language tasks to verify the modification of the attention mechanism.
To answer the questions, below we discuss the main experimental results presented in the Table 1 of PDF `(To AC and All Reviewers)`.
**For the regularized modification**, we consider different values of $\alpha$. As can be observed in Table 1, except for RTE, the best regularized models outperform the original model on the other three datasets. We also note that when the absolute value of $\alpha$ is too large, the model's performance declines significantly, so smaller absolute values of $\alpha$ are recommended.
**For the augmented modification**, we also consider applying more complex ''augmentation'' functions to the linear key/value mappings. Specifically, we adopt a parallel approach, i.e., $g_1(W_Vx) = W_Vx + c W_2\sigma(W_1x)$, where $c$ is a hyperparameter controlling the influence of the new branch, $\sigma$ is the GELU activation function, and the hidden layer dimension is set to twice the original size of $W_Vx$. The key-side mapping $g_2(W_Kx) = W_Kx + c W_2\sigma(W_1x)$ follows the same form. Experimental results show that the best augmented models achieve better performance than the original model across all four datasets. Notably, using $g_1$ alone proves more effective than the other methods, while using both $g_1$ and $g_2$ introduces more parameters, which is particularly significant for larger models. Thus, under the augmentation methods and experimental settings we selected, using $g_1$ alone is recommended.
**For the negative modification**, we continue to select tokens with lower attention scores as negative samples. The parameter $r$ represents the ratio of tokens used as negative samples, while $\beta$ indicates the overall reduction in attention scores. The results show that the best negative models only outperform the original model on CoLA and STS-B, whereas their performance on MRPC and RTE is worse than that of the original model. This suggests that our simple approach of treating tokens with low attention scores as negative samples might be too coarse. A more effective method for constructing negative samples should be designed, which is a direction worth exploring in the future.
We also consider **combining different modification methods**. The results indicate that under our settings, the combination of augmented and negative modification achieves the best performance on CoLA, MRPC, and RTE, while the combination of regularized and augmented modification achieves the best performance on STS-B. However, their optimal performance is slightly inferior to the best performance achieved with augmented models alone. Therefore, we conclude that using all three modifications simultaneously is not necessary. With appropriate hyperparameter choices, using augmented modification alone or in combination with one other modification is sufficient.
In conclusion, the experimental results show the potential of our approach of improving the attention mechanism from a representation learning perspective. However, due to time and computational resource limitations, our experiments are conducted on a limited set of tasks and a single model size. More detailed parameter searches, validation across additional tasks and models, and the development of task-specific augmentation and negative sampling methods are all interesting directions worth exploring in the future.
**Final Note**: Thank you for your detailed review. We are glad that you found our insights into the attention mechanism valuable. If there are any remaining questions, please do not hesitate to let us know.
---
Rebuttal 2:
Title: Looking forward to your reply
Comment: We sincerely appreciate your time and effort in reviewing our manuscript and providing valuable feedback!
As the author-reviewer discussion phase is nearing its end, we would like to confirm whether our response has effectively addressed your concerns. We provided a detailed response a few days ago and hope that it has adequately addressed them. If there are any remaining questions, please do not hesitate to let us know.
We would greatly appreciate any additional feedback you may have!
---
Rebuttal Comment 2.1:
Comment: Thank you for your detailed responses, my concerns have been addressed and I will maintain my score.
---
Reply to Comment 2.1.1:
Title: Replying to Official Comment by Reviewer HnPE
Comment: We have received your feedback and would like to express our sincere gratitude for the time you dedicated to the review and for the valuable suggestions you shared with us! | Rebuttal 1:
Rebuttal: ### **To AC and All Reviewers**
We thank the reviewers for providing valuable suggestions that help us improve our paper.
We are particularly encouraged that the reviewers appreciated (i) the fresh perspective on understanding attention mechanisms `(HnPE, cRiK, k2vC, 8J5b)`, (ii) the thorough theoretical analysis `(HnPE, cRiK, 8J5b)`, (iii) the potential of the proposed attention modifications `(8J5b, HnPE, cRiK, k2vC)`, and (iv) the good writing `(HnPE)` of our work.
In response to the feedback, we've done our best to address each concern and have added new experiments and theoretical results.
We notice that the reviewers share a common concern regarding the generalizability of our findings to broader NLP tasks. Below, we describe in detail the additional experiments we've conducted:
- **Basic Experiment Setting**: We supplement our experiments on more realistic NLP tasks. Considering the limited computational resources and time constraints, we choose the BERT-base-uncased model (hereafter referred to as BERT) to validate the effectiveness of modifications to the attention mechanism. As for datasets, we select a subset of GLUE (CoLA, MRPC, STS-B, RTE). We load the checkpoint of the pre-trained BERT model (where the classifier is newly initialized) and then fine-tune the model to explore the performance of the three modifications as well as their combinations. We set the batch size to 32, the learning rate to 2e-5, and the number of epochs to 5 for all datasets. All experiments are conducted on a single 24GB NVIDIA GeForce RTX 3090. **All experimental results are presented in Table 1 of the PDF**. Below, we discuss the settings of the various modifications and their performance.
- **For the regularized modification**, we consider different values of $\alpha$. As can be observed in Table 1, except for RTE, the best regularized models outperform the original model on the other three datasets. However, we also note that when $|\alpha|$ is too large, the model's performance declines significantly, so we recommend using smaller $|\alpha|$.
- **For the augmented modification**, we also consider applying more complex ''augmentation'' functions to the linear key/value mappings. However, unlike the methods used previously in the simulation tasks, we do not simply select $g_1$ and $g_2$ as MLPs, i.e., $g_1(W_Vx)=W_2\sigma(W_1W_Vx)$, because doing so could undermine the effort made during pre-training to learn the weights $W_V$ and $W_K$, leading to difficulties in training and challenges in comparison. Instead, we adopt a parallel approach, i.e., $g_1(W_Vx) = W_Vx + c W_2\sigma(W_1x)$, where $c$ is a hyperparameter controlling the influence of the new branch, $\sigma$ is GELU, and the hidden dimension is set to twice the original size of $W_Vx$. The key-side mapping $g_2(W_Kx) = W_Kx + c W_2\sigma(W_1x)$ follows the same form.
Experimental results show that the best augmented models achieve better performance than the original model across all four datasets. Notably, augmentation of the value mapping alone (i.e., using only $g_1$) proves more effective than the other methods, both in terms of performance and the number of additional parameters introduced. Using both $g_1$ and $g_2$ introduces more parameters, which is particularly significant for larger models. Thus, under the augmentation methods and experimental settings we selected, using $g_1$ alone is recommended.
In addition, we do not rule out the possibility of more powerful and efficient augmentation methods. Our choice of $g_1$ and $g_2$ is primarily motivated by the desire to make better use of the pre-trained weights $W_K$ and $W_V$. This design is relatively general and does not take the specific characteristics of individual tasks into account. We still encourage the development of augmentation strategies tailored to specific tasks.
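As a rough sketch of the parallel augmentation above, the following toy code (our own illustrative names and dimensions, not the actual implementation) shows how the new branch leaves the pre-trained value mapping intact when $c = 0$:

```python
import numpy as np

def gelu(x):
    # tanh approximation of the GELU activation
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def augmented_value(x, W_V, W_1, W_2, c):
    # g_1(W_V x) = W_V x + c * W_2 gelu(W_1 x): a parallel branch that
    # preserves the pre-trained value mapping W_V rather than replacing it
    return W_V @ x + c * (W_2 @ gelu(W_1 @ x))

rng = np.random.default_rng(0)
d = 8                                  # toy model dimension
h = 2 * d                              # hidden dim: twice the original size
x = rng.standard_normal(d)
W_V = rng.standard_normal((d, d))
W_1 = rng.standard_normal((h, d))
W_2 = rng.standard_normal((d, h))

# with c = 0 the new branch is inactive and the original mapping is recovered
assert np.allclose(augmented_value(x, W_V, W_1, W_2, c=0.0), W_V @ x)
```

The key-side augmentation $g_2$ would follow the same pattern with $W_K$ in place of $W_V$.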
- **For the negative modification**, we continue to select tokens with lower attention scores as negative samples. The parameter $r$ represents the proportion of tokens used as negative samples, while $\beta$ indicates the overall reduction in attention scores. The best negative models only outperform the original model on CoLA and STS-B, whereas their performance on MRPC and RTE is worse than the original one. This suggests that our simple approach of considering tokens with low attention scores as negative samples might be too coarse. A more effective method for constructing negative samples should be designed, which is a direction worth exploring in the future.
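A minimal sketch of this negative-sample heuristic, assuming post-softmax scores and our own toy values of $r$ and $\beta$ (the actual training-time implementation may differ):

```python
import numpy as np

def negative_modification(scores, r, beta):
    # treat the fraction r of tokens with the lowest attention scores as
    # negative samples and reduce their scores by beta (a coarse heuristic)
    scores = np.asarray(scores, dtype=float)
    k = max(1, int(r * scores.size))   # number of tokens taken as negatives
    neg_idx = np.argsort(scores)[:k]   # indices of the lowest-scoring tokens
    out = scores.copy()
    out[neg_idx] -= beta
    return out

attn = np.array([0.50, 0.30, 0.15, 0.05])   # toy attention scores
# with r = 0.5, the two lowest-scoring tokens are each penalized by beta = 0.1
assert np.allclose(negative_modification(attn, r=0.5, beta=0.1),
                   [0.50, 0.30, 0.05, -0.05])
```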
- We also consider **combining different modification methods**. The results indicate that under our settings, the combination of augmented and negative modification achieves the best performance on CoLA, MRPC, and RTE, while the combination of regularized and augmented modification achieves the best performance on STS-B. However, their optimal performance is slightly inferior to the best performance achieved with augmented models alone. Therefore, we conclude that using all three modifications simultaneously is not necessary. With appropriate settings, using augmented modification alone or in combination with one other modification is sufficient.
Overall, the experimental results show that our modifications inspired by the representation learning process help enhance performance even with rough parameter selections. This further validates the potential of our approach of reasoning about and improving the attention mechanism from a representation learning perspective. However, due to time and computational resource limitations, our experiments are conducted on a limited set of datasets and a single model size. More detailed parameter searches, validation across additional tasks and models, and the development of task-specific augmentation and negative sampling methods are all interesting directions worth exploring in the future.
Pdf: /pdf/04dc2ccd55a3d2c97b48b5c1bea1fa7c064dd422.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
SpaFL: Communication-Efficient Federated Learning With Sparse Models And Low Computational Overhead | Accept (poster) | Summary: The authors consider distributed training of sparse models, by optimizing over the thresholds used to prune the models.
Strengths: I am not an expert of deep learning but the results look convincing enough.
Weaknesses: I would be interested in a discussion and comparison with other approaches that train sparse models, such as
* Meinhardt et al. "Prune at the Clients, Not the Server: Accelerated Sparse Training in Federated Learning," preprint arXiv:2405.20623, 2024.
* Yi et al. "FedComLoc: Communication-Efficient Distributed Training of Sparse and Quantized Models," preprint arXiv:2403.09904, 2024.
and references therein.
The experiments should investigate more the influence of the level of heterogeneity, because this is what makes it difficult to identify the sparsity pattern.
Technical Quality: 3
Clarity: 3
Questions for Authors: No question
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The technical limitations are discussed at the end of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the constructive comments. We provide our response for each question below.
>Q1:I would be interested in a discussion and comparison with other approaches that train sparse models, such as
Meinhardt et al. "Prune at the Clients, Not the Server: Accelerated Sparse Training in Federated Learning," preprint arXiv:2405.20623, 2024.
Yi et al. "FedComLoc: Communication-Efficient Distributed Training of Sparse and Quantized Models," preprint arXiv:2403.09904, 2024.
and references therein.
**A1:** We compare SpaFL with two more baselines: FedP3 [1] and FedSpa [2]. Specifically, FedP3 communicates a subset of sparse layers that are pruned by the server and personalizes the remaining layers. FedSpa trains personalized sparse models while fixing the model density during training. For FedP3, we set the global pruning ratio to 0.5 and use the OPU2 method (which overlaps two layers) as done in [1]. We use the same data distribution and model architecture as in our experiments. For CIFAR-10, we set the learning rate to 0.01. For CIFAR-100, we set the learning rate to 0.1 and decayed it by 0.997 at each communication round. We set the initial pruning ratio of FedSpa to 0.5 with cosine annealing as done in [2]. We set the learning rate to 0.01 and 0.1 for CIFAR-10 and CIFAR-100, respectively.
|Algorithm | Accuracy (CIFAR-10) | Accuracy (CIFAR-100)|
| --- | --- | ---|
| SpaFL | $\mathbf{69.75 \pm 2.81}$ |$\mathbf{40.80 \pm 0.54}$|
|FedP3 | $67.54 \pm 0.52$ | $37.73 \pm 0.42$|
| FedSpa | $67.03 \pm 0.63$ | $36.32 \pm 0.35$|
>Q2: The experiments should investigate more the influence of the level of heterogeneity, because this is what makes it difficult to identify the sparsity pattern.
**A2:** As the data distribution becomes iid, the sparsity pattern will be more similar. Due to the limited time frame, we will update the result shortly.
**References**
[1] Yi, Kai, et al. "FedP3: Federated Personalized and Privacy-friendly Network Pruning under Model Heterogeneity." The Twelfth International Conference on Learning Representations, 2024
[2] Huang, Tiansheng, et al. "Achieving personalized federated learning with sparse local models." arXiv preprint arXiv:2201.11380 (2022).
---
Rebuttal Comment 1.1:
Comment: I have read the rebuttal. I think the paper contains interesting insights on the difficult problem of sparse training so I am keeping my score.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer for the constructive comment. We will update the revised version with newly added baselines and sparsity patterns with decreasing data heterogeneity. | Summary: This paper introduces SpaFL, a federated learning framework that enhances communication efficiency and minimizes computational overhead by optimizing sparse model structures. They achieve this goal by defining a trainable threshold which leads to structured sparsity. Since the server and clients only exchange thresholds, SpaFL is able to reduce communication overhead. Furthermore, these trainable thresholds also lead to low computing costs.
Strengths: 1. The paper is generally well-written and easy to follow
2. The idea of using trainable thresholds is new and effective for achieving both communication efficiency and low computational overhead.
3. They perform valid experiments with the baseline methods and some image datasets. The accuracy performance of their method is better than others.
Weaknesses: 1. I recommend the authors to do additional experiments with NLP tasks.
2. In Figure 8 of the FedPM paper, the accuracy achieved by FedPM is higher than what is reported in this paper. Could you clarify the reason for this discrepancy? Is it due to differences in experimental setup and hyperparameters? If so, could you ensure that all conditions are consistent for the CIFAR-100 experiments and compare your method to the original accuracy reported in the FedPM paper?
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. If you apply this method to the transformer based models, how can you define trainable thresholds? To be specific, you gave an explanation for a convolutional layer in section 3.1, and can we just similarly define for self attention modules?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the constructive comments. We provide our response to each comment below.
>Q1: In Figure 8 of the FedPM paper, the accuracy achieved by FedPM is higher than what is reported in this paper. Could you clarify the reason for this discrepancy? Is it due to differences in experimental setup and hyperparameters? If so, could you ensure that all conditions are consistent for the CIFAR-100 experiments and compare your method to the original accuracy reported in the FedPM paper?
**A1**: In Figure 8 of the FedPM paper, the authors distributed the IID CIFAR-100 dataset over 10 clients, and every client is sampled at each communication round. Meanwhile, in our setting, we distributed the non-iid CIFAR-100 dataset with a Dirichlet distribution of 0.1 over 100 clients and only sampled 10 clients at each communication round. We would like to note that every algorithm has different well-working configurations. In the implementation of FedPM, the authors used the Adam optimizer. Since we were not able to reproduce their results with the SGD optimizer, we used the Adam optimizer for FedPM in Table 2. Next, we compare the result of SpaFL on the IID CIFAR-100 dataset over 10 clients with the accuracy reported in the FedPM paper. We sampled every client at each communication round, set the learning rate to 0.05, and decayed it by 0.993 at each round. We set the number of local epochs to 7 and averaged over 3 runs.
| Algorithm |Accuracy |
| --- | --- |
| SpaFL | $\mathbf{45.32 \pm 0.3}$ |
| FedPM | 42 |
>Q2:I recommend the authors to do additional experiments with NLP tasks.
**A2:** Due to the limited time frame, we will update the result with a transformer model shortly.
>Q3: If you apply this method to the transformer based models, how can you define trainable thresholds? To be specific, you gave an explanation for a convolutional layer in section 3.1, and can we just similarly define for self attention modules?
**A3:** A transformer model mostly consists of multi-head attention modules and feed-forward network modules. In SpaFL, we already provided pruning methods for feed-forward networks such as Linear and Convolutional layers. We can use a similar pruning method for multi-head attention modules as we did for Linear layers. Specifically, we defined a trainable threshold for each column of a matrix in a linear layer. We pruned a whole column if its average magnitude was smaller than its threshold, thereby achieving structured sparsity. Since multi-head attention modules also consist of multiple matrices, we can use the same pruning method. For each column of the query, key, value, and projection matrices, we can define a trainable threshold. If the average magnitude of the parameters in a column is smaller than its threshold, we can prune the entire column. Due to the limited time frame, we will update the result with NLP tasks shortly.
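For illustration, the column-wise thresholding described above could be sketched as follows (function and variable names are ours, and the gradient-based update of the thresholds during training is omitted):

```python
import numpy as np

def prune_columns(W, thresholds):
    # zero out an entire column j of W when the average parameter magnitude
    # of that column falls below its trainable threshold thresholds[j],
    # yielding structured sparsity
    W = np.asarray(W, dtype=float)
    thresholds = np.asarray(thresholds, dtype=float)
    col_mag = np.abs(W).mean(axis=0)        # average |w| per column
    keep = col_mag >= thresholds            # columns that survive pruning
    return W * keep, keep

# toy 2x3 weight matrix: the middle column has a small average magnitude
W = np.array([[0.9, 0.01, 0.5],
              [0.7, 0.02, 0.3]])
thresholds = np.array([0.1, 0.1, 0.1])      # one trainable threshold per column

W_pruned, keep = prune_columns(W, thresholds)
assert keep.tolist() == [True, False, True]   # middle column is pruned
assert np.allclose(W_pruned[:, 1], 0.0)
```

The same routine would apply unchanged to the query, key, value, and projection matrices of an attention module, since each is a plain weight matrix.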
---
Rebuttal Comment 1.1:
Comment: As the reviewer mentioned, the current algorithm can be applied to transformer models. Since multi-head attention modules consist of multiple linear matrices (query, key, and value), we can use the same pruning approach as we did with linear layers in SpaFL. For each column of the query, key, and value matrices, we can define a trainable threshold. If the average magnitude of the parameters in a column is smaller than its threshold, we can prune the entire column. We tested this with a vision transformer with a depth of 6, 8 heads, an embedding dimension of 128, and a linear layer dimension of 256 on the non-iid CIFAR-10 dataset distributed over 100 clients with a Dirichlet distribution of 0.1. We used the same hyperparameters as in the CIFAR-10 experiment in our manuscript without hyperparameter optimization and averaged the results over three random seeds.
| Algorithm | Accuracy (CIFAR-10) | Density|
| -------- | ------- | --------|
| SpaFL | $68.6 \pm 1.5$ | 54.7%|
| FedAvg | $59.2 \pm 0.4$ | 100%|
Hence, the proposed algorithm can be applied to transformer models. | Summary: This paper suggests communicating the threshold instead of the model parameters in federated learning. Through empirical validations on popular benchmarks, the proposed method, SpaFL, is shown to have lower computational overhead and achieve relatively good results.
Strengths: Communicating the threshold instead of the model weights seems to be interesting. Preliminary empirical results on popular benchmarks seem to have shown that this idea works with lower communication overhead and relatively good performance.
Weaknesses: 1. The main figure, Figure 1, is not informative and very confusing. I would expect to see a high-level explanation of how the threshold for each client is selected, whether they are the same for each layer, and how the selected thresholds are combined. At this moment, I can only see that all local models have the same threshold and they are directly combined at the server. Due to the misalignment between the main figure and the illustration in section 3.1, the necessary details to fully understand the main idea of this paper are not easy to follow. The writing in this section could also be improved.
2. In Theorem 1, the authors provided a generalization bound. However, the intuition and analysis for this bound are missing. At this moment, I do not have a good sense of, in a practical or special case, how large the right-hand side probability of Equation 14 is? How large must the minimal training data size $D$ be? Is it practical? Also, this bound only measures the distance between the empirical risk and the expected risk; how can this be regarded as a generalization bound? To my understanding, a generalization bound should be between the training set and the testing set. Furthermore, could you please provide the convergence rate and communication complexity of the proposed method? These are very important for the theoretical analysis of FL.
3. The baselines are outdated, such as Fjord (NeurIPS21) and HeteroFL (ICLR21). Can you please compare more recent methods, which can be easily found, such as [FedPAC](https://arxiv.org/abs/2306.11867) and [FedCR](https://proceedings.mlr.press/v202/zhang23w.html)? I would expect to see more comparisons with newer methods. They do not need to be the ones mentioned above, but I believe it is critical to determine whether the proposed method is still interesting to explore at this moment.
Technical Quality: 2
Clarity: 1
Questions for Authors: I would expect a README file to better understand your code efficiently.
Confidence: 4
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: The major limitations include the unclear presentation of the proposed method, lack of analysis of the generalization bound, absence of convergence and communication efficiency analysis, and the use of outdated empirical baselines.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the constructive comments. We provide our response to each comment below.
>Q1: I would expect to see a high-level explanation of how the threshold for each client is selected, whether they are the same for each layer, and how the selected thresholds are combined.
**A1:** In SpaFL, all thresholds are initialized to zero. Each threshold is then updated by (6) during training, so each threshold ends up with a different value. We average the selected clients' thresholds to generate the global thresholds for the next round, as shown in (7). We will clarify Figure 1 and Section 3.1.
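A simplified sketch of the server-side step described above (our own stand-in for Eq. (7); client sampling and per-layer structure are omitted):

```python
import numpy as np

def aggregate_thresholds(client_thresholds):
    # the server averages the thresholds received from the sampled clients
    # to form the global thresholds used in the next communication round
    stacked = np.stack([np.asarray(t, dtype=float) for t in client_thresholds])
    return stacked.mean(axis=0)

# three sampled clients, each sending one threshold per column of some layer
tau_next = aggregate_thresholds([[0.1, 0.3],
                                 [0.2, 0.1],
                                 [0.3, 0.2]])
assert np.allclose(tau_next, [0.2, 0.2])
```

Since only these threshold vectors are exchanged rather than full weight matrices, the per-round communication cost scales with the number of prunable columns instead of the number of parameters.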
>Q2: This bound only measures the distance between the empirical risk and the expected risk; how can this be regarded as a generalization bound? To my understanding, a generalization bound should be between the training set and the testing set.
**A2:** It is our understanding that the generalization bound is the difference between the empirical and expected risks, as shown in [1, 2].
>Q3: Is it practical? how large the right-hand side probability of Equation 14 is? How large must the minimal training data size D be?
**A3**: Theorem 1 provides a probabilistic guarantee given a certain amount of data and communication rounds, under the assumption of a bounded loss function. As shown in the result and the proof, the bound depends on theoretical quantities such as the $(\epsilon, \delta)$ differential privacy parameters, the Gaussian noise $\sigma$, and the maximum diameter of the gradient space $M_g$. To calculate the right-hand side probability of (14), we first assume $\epsilon = 4$, $\delta = 1/10^6$, $\sigma = 2$, and $M_g = 1$ from [1, 2]. We also assume a mini-batch size of $\xi = 64$ and $T = 500$ communication rounds with an average model density of $\rho = 0.5$. Then, we obtain a right-hand side probability of 0.61 to bound the generalization error at around 0.25, and the minimum required number of training samples $D$ is around 10000.
>Q4: Furthermore, could you please provide the convergence rate and communication complexity of the proposed method?
**A4**: We provided the training curves in Figure 2, which show that the proposed approach converges relatively faster than the baselines.
>Q5: The baselines are outdated, such as Fjord (NeurIPS21) and HeteroFL (ICLR21). Can you please compare more recent methods, which can be easily found, such as FedPAC and FedCR? I would expect to see more comparisons with newer methods. They do not need to be the ones mentioned above, but I believe it is critical to determine whether the proposed method is still interesting to explore at this moment.
**A5**: We compare SpaFL with two more baselines: FedP3 [5] and FedSpa [6]. Specifically, FedP3 communicates a subset of sparse layers that are pruned by the server and personalizes the remaining layers. FedSpa trains personalized sparse models while fixing the model density during training. For FedP3, we set the global pruning ratio to 0.5 and use the OPU2 method (which overlaps two layers) as done in [5]. We use the same data distribution and model architecture as in our experiments. For CIFAR-10, we set the learning rate to 0.01. For CIFAR-100, we set the learning rate to 0.1 and decayed it by 0.997 at each communication round. We set the initial pruning ratio of FedSpa to 0.5 with cosine annealing as done in [6]. We set the learning rate to 0.01 and 0.1 for CIFAR-10 and CIFAR-100, respectively.
|Algorithm | Accuracy (CIFAR-10) | Accuracy (CIFAR-100)|
| --- | --- | ---|
| SpaFL | $\mathbf{69.75 \pm 2.81}$ |$\mathbf{40.80 \pm 0.54}$|
|FedP3 | $67.54 \pm 0.52$ | $37.73 \pm 0.42$|
| FedSpa | $67.03 \pm 0.63$ | $36.32 \pm 0.35$|
**References**
[1] Dupuis, Benjamin, George Deligiannidis, and Umut Simsekli. "Generalization bounds using data-dependent fractal dimensions." International Conference on Machine Learning, 2023.
[2] Chu, Yifeng, and Maxim Raginsky. "A unified framework for information-theoretic generalization bounds." Advances in Neural Information Processing Systems 36 (2023): 79260-79278.
[3] Abadi, Martin, et al. "Deep learning with differential privacy." Proceedings of the 2016 ACM SIGSAC conference on computer and communications security. 2016.
[4] Balle, Borja, Gilles Barthe, and Marco Gaboardi. "Privacy amplification by subsampling: Tight analyses via couplings and divergences." Advances in neural information processing systems 31 (2018).
[5] Yi, Kai, et al. "FedP3: Federated Personalized and Privacy-friendly Network Pruning under Model Heterogeneity." The Twelfth International Conference on Learning Representations, 2024
[6] Huang, Tiansheng, et al. "Achieving personalized federated learning with sparse local models." arXiv preprint arXiv:2201.11380 (2022).
---
Rebuttal Comment 1.1:
Comment: Thank you for your rebuttal and for clarifying the threshold settings and the generalization bound in your paper. However, I still have concerns regarding the theoretical analysis and comparison with baseline methods.
**Convergence Rate and Communication Complexity**: My concern pertains to the lack of theoretical analysis concerning the convergence rate and communication complexity of your proposed method. Currently, your paper and theorem do not include an analysis or theoretical comparison with existing works. I am looking for a more comprehensive analysis and a detailed theoretical comparison with established studies.
**Baseline Comparison**: While I appreciate the inclusion of baselines such as FedSpa (2022) and FedP3 (ICLR24) in your rebuttal, the main comparison with Fjord (NeurIPS21) and HeteroFL (ICLR21) in your paper and FedSpa (2022) still seems outdated for a NeurIPS24 submission. Regarding FedP3, I understand the rationale behind selecting a global pruning rate of 0.5. However, I must point out a discrepancy. FedP3 employs personalized model aggregation not for enhancing performance but for significantly reducing communication costs and maintaining privacy. Comparing this with your method, which uses full-layer aggregation, to FedP3’s partial-aggregation is *potentially unfair and misleading*. It is essential to clarify this point. Since the FedP3 study demonstrated that reducing the global pruning rate from 0.9 to 0.5 significantly impacted performance, the results you presented are not surprising. I would like to see a comparison that includes trends in communication costs between FedP3 and your method. Additionally, what would the performance of FedP3 look like with a higher global pruning rate? Alternatively, as suggested in my initial review, consider comparing your method with more recent baselines to clearly demonstrate the effectiveness of your method.
---
Reply to Comment 1.1.1:
Comment: Thank you for the constructive feedback. We first provide the convergence analysis. The derivation requires a few conventional theoretical assumptions following [1, 2].
**A1**: The loss function $F_k(\cdot)$ is $M$-smooth in $\tau$ for each client $k$.
**A2**: The stochastic gradient $h_k$ is an unbiased estimator of the gradient $\nabla_{\tau} F_k$ for each client $k$, i.e., $\mathbb{E}[h_k(w_k)] = \nabla_{\tau} F_k(w_k, \tau), \forall k$.
**A3**: There exists $G \geq 0$ such that $\mathbb{E}||g_k(w_k)||^2 \leq G^2, \forall k$.
In SpaFL, both parameters and thresholds are updated simultaneously, but only the thresholds are communicated between clients and the server. Following the notation of the paper, let $N$ be the number of clients, $\eta$ the learning rate, $T$ the number of communication rounds, and $\alpha$ the sparsity-regularizer coefficient, with the threshold update rule $\tau\_k(t) \leftarrow \tau(t) - \eta(t) h\_k(\tilde{w}\_k(t)) + \alpha \exp(-\tau(t))$. Then, we have the following convergence rate.
For $\gamma(t) = \eta(t) \left(1 - \frac{\alpha (1 - M \eta(t))}{2} \right)$ and the largest number of parameters connected to a filter/neuron $n\_{in}^{max} > 0$ in a given model, we have
$\frac{1}{NT} \sum\_{t=0}^{T-1} \mathbb{E} || \sum\_{k=1}^N \nabla\_{\tau} F\_k (\tilde{w}\_k(t), \tau(t)) ||^2$
$\leq \sum\_{t=0}^{T-1} \sum\_{k=1}^N \frac{ \mathbb{E} || \nabla\_{\tau} F\_k (\tilde{w}\_k(t), \tau(t)) - \nabla\_{\tau\_k} F\_k (\tilde{w}\_k(t), \tau(t)) ||^2}{MNT\gamma(t)}$
$+ \sum\_{t=0}^{T-1} \frac{ 2\alpha \eta(t) }{T \gamma(t)} (1 - M \eta(t)(1 - \alpha)) || \exp(-\tau(t)) ||^2$
$+ \sum\_{t=0}^{T-1} \frac{M^2 \eta(t)^2 n\_{in}^{max}}{2 T \gamma(t)} G^2 + \sum\_{t=0}^{T-1} \sum\_{k=1}^N \frac{ ||\tau(t) - \tau\_k(t)||^2 }{NT\gamma(t)}$.
In SpaFL, we enforce sparsity through the sparsity regularizer with coefficient $\alpha$. We can see that as $\alpha$ increases, the convergence rate can be damaged due to the second and last terms on the right-hand side. Hence, a large $\alpha$ can induce more sparsity in models, but it can degrade performance. We can also see that the convergence rate depends on the difference of gradients between the global and local thresholds, as shown in the first term, thereby capturing the impact of non-IID data. We will provide the detailed proof in the appendix of the revised version.
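For concreteness, the threshold update rule quoted above can be sketched in a few lines. All names here are illustrative and not taken from the SpaFL implementation; `grad_tau` stands in for the stochastic threshold gradient $h_k(\tilde{w}_k(t))$.

```python
import math

# Sketch of one local threshold update in SpaFL, following the rule
# quoted above: tau_k(t) <- tau(t) - eta(t) * h_k + alpha * exp(-tau(t)).
# Names are illustrative; grad_tau is the stochastic threshold gradient.
def local_threshold_update(tau_global, grad_tau, eta, alpha):
    return [t - eta * g + alpha * math.exp(-t)
            for t, g in zip(tau_global, grad_tau)]
```

Note how the regularizer term `alpha * exp(-t)` pushes thresholds upward, which is consistent with the observation that a larger $\alpha$ induces more sparsity.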
[1] Li, Xiang, et al. "On the convergence of fedavg on non-iid data." arXiv preprint arXiv:1907.02189 (2019).
[2] Yi, Kai, et al. "FedP3: Federated Personalized and Privacy-friendly Network Pruning under Model Heterogeneity." arXiv preprint arXiv:2404.09816 (2024).
---
Rebuttal 2:
Comment: Thank you for your follow-up response. I appreciate the additional comparison with FedP3. I would suggest that the authors incorporate the points we've discussed into the paper, as this could significantly enhance both the clarity and contribution of the work. I also want to acknowledge the effort put into the rebuttal and the follow-up response.
I would like to highlight a few important points:
a) Assumption 3 regarding constant bounded gradient is quite strong and may not be feasible in the context of FL. It only seems reasonable when considering differential privacy to achieve privacy guarantees (perhaps future work on differential privacy could also help alleviate this.) This could be a potential limitation of your work, and it would be valuable to see a proper discussion of this in your future revisions. Additionally, directly assigning a complex term, such as A, B, etc., to the final convergence rate—especially when it depends on many variable factors, including the function itself—is not a reasonable way unless you have thoroughly examined its characteristics.
b) The suggested theoretical comparison with existing methods is still missing. This will likely require substantial effort to carefully examine each variable in your results to ensure a fair comparison. Including this will greatly improve the soundness of the work.
c) The calculation of uplink and downlink communication costs remains unclear. For example, you might follow Eqn (15) in [1] to provide an explicit computational analysis of the cost, which would be more convincing in demonstrating the potential benefits.
d) While Fjord remains a popular method for comparison, I look forward to seeing comparisons with more recent approaches. I’m glad you included a comparison with FedP3 in the rebuttal, even though your focuses differ. If possible, I suggest incorporating additional comparisons with recent baselines, which would more convincingly demonstrate the merits of this work.
Overall, while I believe there is substantial room for improvement in terms of clarity and completeness, I see this work as interesting and promising. I have decided to raise my score.
[1] Malinovsky, Grigory, Kai Yi, and Peter Richtárik. "Variance reduced proxskip: Algorithm, theory and application to federated learning." Advances in Neural Information Processing Systems 35 (2022): 15176-15189.
---
Rebuttal Comment 2.1:
Comment: We appreciate the reviewer for the constructive comments. We provide our response to each comment below.
>Q1: Assumption 3 regarding constant bounded gradient is quite strong and may not be feasible in the context of FL
**A1:** In our Theorem 1, we derived the generalization guarantee through differential privacy. Specifically, we showed that the proposed algorithm satisfies differential privacy at each communication round by following [1]. Although our work does not focus on privacy, we will discuss privacy guarantees in the revised version.
>Q2: The suggested theoretical comparison with existing methods is still missing.
**A2**: We compare our convergence analysis with [2], which provided the convergence rate of FL with arbitrary time-varying pruned models. Specifically, clients train pruned models, whose binary masks can change over time, and communicate the pruned models with the server to generate a global model. The authors in [2] also made the same Assumptions 1 and 3. They further assumed that $||\theta_q - m_q \odot \theta_{q, n}||^2 \leq \delta^2 ||\theta_q||^2$, where $\theta_q$ is the global model at round $q$, $m_q$ is a binary mask, $0 \leq \delta < 1$ is the pruning-induced noise, and $\theta_{q, n}$ is the local model of client $n$. The Theorem in [2] then provides
$\frac{1}{T} \sum_{q=1}^{T} \mathbb{E} || \nabla F(\theta_q)||^2 \leq \frac{G_0}{\sqrt{T}} + \frac{V_0}{\sqrt{T}} + \frac{H_0}{T} + \frac{\delta^2 I_0}{\Gamma\_{min}} \sum_{q=1}^{T} \mathbb{E} ||\theta_q||^2$, where $\Gamma\_{min}$ measures the minimum occurrence of each parameter in the local models across all rounds.
Now we compare our convergence rate with that of [2]. Although both algorithms converge to a stationary point at a rate of $1/\sqrt{T}$, we note that [2] has a non-vanishing term $\frac{\delta^2 I_0}{\Gamma\_{min}} \sum_{q=1}^{T} \mathbb{E} ||\theta_q||^2$ due to the noise from pruning. We also have a similar term, $\frac{\sum\_{t=0}^{T-1} 4\alpha || \exp(-\tau(t)) ||^2}{T(1-\alpha)}$, due to the regularization. The main difference is that the convergence analysis of [2] cannot effectively capture the impact of sparse models because $\delta$ is an uncontrollable theoretical quantity, whereas our analysis explicitly shows the impact of the regularization coefficient $\alpha$ on the convergence rate.
>Q3: The calculation of uplink and downlink communication costs remains unclear.
**A3:** We calculated the uplink and downlink communication costs from the amount of data physically communicated between clients and the server over a fixed number of rounds. Specifically, each parameter is represented with 32 bits. For the uplink cost, we measure how many parameters are transmitted from clients to the server and multiply that count by 32 bits; the downlink cost is computed analogously from the number of parameters sent to clients. As the reviewer mentioned, calculating the communication cost from the convergence rate, as done in [3], can provide a better understanding of the communication complexity. We will clarify how we calculated the uplink/downlink communication costs and also provide the result following [3] in the revised version.
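The accounting described above amounts to a one-line computation. The helper below is a hypothetical sketch of that bookkeeping, not code from the paper; the example numbers are purely illustrative.

```python
# Hypothetical bookkeeping for the communication-cost accounting above:
# each transmitted parameter/threshold is a 32-bit value, so the total
# cost is (# transmitted values per round) x (# rounds) x 32 bits.
def comm_cost_bits(values_per_round, num_rounds, bits_per_value=32):
    return values_per_round * num_rounds * bits_per_value

# E.g., sending only 1,000 thresholds per round for 100 rounds costs
# 3.2 Mbit, versus 3.2 Gbit for a hypothetical full 1M-parameter model.
```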
>Q4: While Fjord remains a popular method for comparison, I look forward to seeing comparisons with more recent approaches.
**A4**: In the revised version, we will include the comparison with [4] and [5], which studied structured sparsity in FL.
[1] Rong Dai, Li Shen, Fengxiang He, Xinmei Tian, and Dacheng Tao. DisPFL: Towards communication-efficient personalized federated learning via decentralized sparse training. In International Conference on Machine Learning, pages 4587–4604. PMLR, 2022.
[2] Zhou, Hanhan, et al. "Every parameter matters: Ensuring the convergence of federated learning with dynamic heterogeneous models reduction." Advances in Neural Information Processing Systems 36 (2024).
[3] Malinovsky, Grigory, Kai Yi, and Peter Richtárik. "Variance reduced proxskip: Algorithm, theory and application to federated learning." Advances in Neural Information Processing Systems 35 (2022): 15176-15189.
[4] Dongping Liao, Xitong Gao, Yiren Zhao, and Cheng-Zhong Xu. Adaptive channel sparsity for federated learning under system heterogeneity. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20432–20441, 2023.
[5] Chen, Daoyuan, et al. "Efficient personalized federated learning via sparse model-adaptation." International Conference on Machine Learning. PMLR, 2023.
---
Rebuttal 3:
Comment: We appreciate the reviewer for the constructive comments. We will update the convergence analysis, communication costs, and the comparisons with more recent baselines in the revision | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
One-Shot Safety Alignment for Large Language Models via Optimal Dualization | Accept (spotlight) | Summary: The paper studies the safety alignment of language models using constrained Reinforcement Learning from Human Feedback (RLHF). The main contribution of the paper is deriving a closed-form solution of the dual function of a constrained RLHF problem. This closed-form solution reduces solving a constrained RLHF problem to an unconstrained RLHF problem.
Strengths: This is a superbly written paper studying a highly important problem: the safety alignment of language models. The main contribution is a novel closed-form solution of the dual function, which allows for a new two-step algorithm for constrained RLHF. This presents a significant improvement over earlier algorithms for constrained RLHF as it (probably) greatly reduces the computational burden.
Weaknesses: ## No Details on Compute Given
The main weakness of the paper is that compute resources and computation times are not stated in the paper (I searched extensively but couldn't find it; sorry if I missed it). This makes the sentence from the abstract: "[...], thus greatly reducing the computational burden [...]" essentially unjustified. It makes perfect sense to me that the two-stage approach in the paper is significantly cheaper than performing gradient ascent/descent or other constrained optimisation algorithms. However, I could also imagine that the situation might be different in practice. For example, I could imagine that:
1. Starting with a larger value of $\boldsymbol{\lambda}$ than $\boldsymbol{\lambda}^\ast$ and reducing the value of $\boldsymbol{\lambda}$ to $\boldsymbol{\lambda}^\ast$ during optimisation might accelerate convergence to the feasible set.
2. Or the opposite: Starting with a smaller value of $\boldsymbol{\lambda}$ than $\boldsymbol{\lambda}^\ast$ and increasing the value of $\boldsymbol{\lambda}$ to $\boldsymbol{\lambda}^\ast$ during optimisation might lead to better optimisation conditions initially, thereby accelerating convergence.
Both could occur when applying gradient descent/ascent. Therefore, to justify the claims in the abstract, the paper at least needs to provide details on the computation time of their experiments. However, to make the claim well-founded, the paper should actually provide a comparison of the computation times with the baseline approach and should discuss computation times in the main part of the paper.
## Experiment Evaluation
The model-based evaluation of the experiments makes sense from an optimisation point of view. However, it does not say so much about the quality of the obtained language models since overoptimisation [17] might have occurred, for example. The GPT evaluation helps somewhat in this regard but could still be flawed. I understand that this is difficult to address since a statistically significant human evaluation would be expensive.
## Limitations
The discussion of social impact is very brief; see the "Limitations" section of this review.
Technical Quality: 2
Clarity: 4
Questions for Authors: ## Questions
### Experiments
- What is the computational budget of your experiments (see "Weaknesses")?
- Line 295: How was the grid of safety margins chosen?
### Code Availability
- I would like to have a look at the code during the rebuttal, as offered in the answer to question 5 of the NeurIPS paper checklist.
- Are there any plans for publishing the code?
## Appendix N: Sample Responses
I would be interested in the helpfulness and safety scores of the sample responses. Additionally, I would be interested in an interpretation of the sample outputs of the PeCAN-aligned language model. For example, the answer in Table 6 is partially non-sensical and unrelated, and the answer in Table 8 contains typos and grammar errors ("I don against advise"). Do these answers also lose in helpfulness against the baseline or is there maybe an overoptimisation of the helpfulness model taking place?
## Typos and Other Minor Suggestions
- Line 100: preference-based *safety* alignment?
- Line 181: I assume this accuracy notion is from the literature, since line 187 states that [11] proves something using this accuracy notion. I would suggest using a different wording than "we introduce" if this is the case. For example, "To quantify the level of estimation error, we consider the accuracy notion of Chang et al. [11]", or something along these lines.
- Line 221: with *an* existing dataset
- Line 229: If *the* size
- Line 409: Conference name capitalised as in line 418?
- Line 242: conference name also capitalized?
Confidence: 3
Soundness: 2
Presentation: 4
Contribution: 4
Limitations: The paper sufficiently discusses limitations.
In my opinion, the discussion of social impact is too brief. As far as I can see, it is limited to this sentence: "Our methods can benefit researchers in building safer language models." However, what is proposed in the paper is really a general-purpose method of solving contained RLHF problems. I think it should be acknowledged that there are also harmful dual-use applications for this general-purpose tool (as there are for most other general-purpose tools, e.g. [A]). Examples include minimising safety while maintaining a level of helpfulness which could be relevant to malignant online communities, such as cyberbullying communities or troll networks.
[A]: Fabio Urbina, Filippa Lentzos, Cédric Invernizzi, Sean Ekins: Dual use of artificial-intelligence-powered drug discovery. Nat. Mach. Intell. 4(3): 189-191 (2022)
**Post-Rebuttal**: Since the authors have addressed my concerns, I raise my score to 8 (Strong Accept).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive evaluation and the valuable feedback. We have answered all questions to the best of our ability. We are glad to address any further questions you might have.
**1. Computing resources and running time.**
Our experiments are conducted on a single 48G NVIDIA A6000 GPU, taking about 15 hours to align each model. For comparison, constrained RLHF [R4] reports 20 hours per run on a more powerful NVIDIA A100 GPU. The computing resources and running time are not reported for safe-RLHF [12]. However, since safe-RLHF uses PPO for policy updates, like constrained RLHF, on a much larger dataset, we expect its running time to scale 2-3x (i.e., 40-60 hours) relative to constrained RLHF. Constrained DPO [22] uses a single 40G NVIDIA A100 GPU without reporting the running time. Therefore, our methods reduce running time by at least 25% compared to the aforementioned methods while using a much cheaper GPU. We will include this discussion in future revisions.
**2. Monotonically tuning the dual value.**
Although monotonically tuning the dual variable simulates gradient descent/ascent updates to some extent, its efficacy highly depends on the unknown optimization landscape. To our understanding, the suggested strategy implicitly involves three undefined factors: (i) the optimal dual variable $\lambda^\star$; (ii) whether the constraint is satisfied or violated; and (iii) the annealing step size. Adjusting the dual variable depends on whether the primal variable (i.e., the parameters of an LM) violates the constraint. However, an LLM policy updated via sophisticated optimizers (e.g., DPO) does not necessarily satisfy (or violate) the constraint after every update. Hence, gradually reducing or increasing the dual variable, even starting from a sufficiently large or small initial value, may not converge to the optimum.
Moreover, small step sizes slow down the convergence speed of the dual variable, while large step sizes often result in severe oscillation of policy iterates. Therefore, iterative primal-dual algorithms often suffer from high computational burden and instability issues, as reported in Figure D.2 and the conclusion section of [R1], and Figure 2 of [22].
We would also like to note that the heuristic tuning strategy does not scale well to multi-constrained cases as the search space grows exponentially. In contrast, our method provides a principled and guaranteed methodology for multi-constraint cases.
**3. Insufficiency of model-based evaluation.**
We agree on the limitations of model-based evaluation, which is why we have the GPT-based evaluation. While human evaluation can be ideal, it is costly and subject to biases, as you mentioned. It seems there is no gold-standard evaluation for language models. We believe that human and AI evaluations should be complementary [R1].
**4. Choices of safety margins**
We chose the set of safety margins to achieve a diverse range of safety improvement levels. These margins were made primarily to better visualize the trend of safety improvement versus the dual variable in Figure 2 (left).
**5. Code Availability**
According to the rebuttal policy, we have shared our source code with the AC in an anonymized link. Please feel free to contact the AC for access to our code. We plan to officially release our code to the public after cleaning it up and adding detailed instructions.
**6. Details on sample Responses**
Thank you for your great interest in our experiments. Below, we provide the GPT-evaluated safety scores of the sample outputs for the malicious red-teaming prompts on Pages 29--31 of the submitted manuscript. We observe noticeable improvements brought by MoCAN- and PeCAN-alignment. However, we would like to remark that these sample prompts were handcrafted in the literature [12] mainly to test safety performance; the helpfulness scores evaluated on these prompts may not fully indicate the ground-truth helpfulness of each LM.
Table 1. GPT-evaluated safety levels for sample responses in the submitted manuscript.
| Model | SFT | Safe-RLHF | DPO$_{\rm H}$ | DPO$_{\rm S}$ | MoCAN | PeCAN |
|--------:|:--------------------|:--------------------|:--------------------|:--------------------|:--------------------|:-----------------------|
| Table 5 | 5 | 8 | 9 | 8 | 7 | 8 |
| Table 6 | 0 | 10 | 0 | 10 | 10 | 10 |
| Table 7 | 2 | 9 | 4 | 10 | 10 | 9 |
| Table 8 | 0 | 10 | 10 | 10 | 10 | 10 |
Table 2. GPT-evaluated helpfulness levels for sample responses in the submitted manuscript.
| Model | SFT | Safe-RLHF | DPO$_{\rm H}$ | DPO$_{\rm S}$ | MoCAN | PeCAN |
|--------:|:---|:---|:---|:---|:---|:---|
| Table 5 | 6 | 8 | 8 | 8 | 8 | 7 |
| Table 6 | 1 | 10 | 1 | 10 | 10 | 10 |
| Table 7 | 2 | 7 | 7 | 10 | 9 | 9 |
| Table 8 | 1 | 10 | 9 | 10 | 9 | 9 |
**7. Typos and Other Minor Suggestions**
Thank you for your careful reading of our paper and for catching typos. We will fix these typos and double-check the paper's writing in revisions.
**8. Social impact.**
Thank you for bringing our attention to the broader scope of social impact and for pointing out an excellent reference on the dual use of AI. We will acknowledge the dual use of constrained alignment methods and remark on potential applications that could be negative to our society, such as dialogue systems with gender biases [R3].
*References*
[R1] Confronting Reward Model Overoptimization with Constrained RLHF
[R2] Complementarity in Human-AI Collaboration: Concept, Sources, and Evidence
[R3] GenderAlign: An Alignment Dataset for Mitigating Gender Bias in Large Language Models
---
Rebuttal Comment 1.1:
Comment: Thank you for your answer. My questions and criticisms are fully addressed except regarding Code Availability/Reproducibility. Thank you for providing the code. Could you point me to the method/lines where step 4 of MoCAN takes place? I think I understand where the 5th takes place, but I haven't found the 4th so far.
Additional remark: the setup procedure described in the Readme is incomplete: I also had to run `pip install -r requirements.txt` after creating a conda environment using `conda env create --file conda-recipy.yaml`.
---
Reply to Comment 1.1.1:
Comment: First, we are glad that most questions and criticism have been addressed. We are also grateful to reviewer TvNM for carefully checking our implementation and identifying the missed step in the setup process. We will incorporate this step into our README file for the released version.
Regarding step 4 of MoCAN, please refer to *OneShot/safe_rlhf/trainers/model_based_dual_trainer.ipynb*.
Should you have any further questions or need clarification regarding our implementation, please feel free to reach out. We are more than happy to provide additional information or address any concerns. | Summary: This paper introduces a novel approach to aligning large language models (LLMs) with safety constraints using a dualization perspective. The key contributions are:
1) A method to reduce constrained alignment to an equivalent unconstrained problem by pre-optimizing a dual function.
2) Two practical algorithms, MOCAN and PECAN, for model-based and preference-based scenarios respectively.
3) A theoretical analysis of the dual function's properties and the stability of the approach.
4) Extensive experiments demonstrating the effectiveness of the proposed methods in improving both helpfulness and safety of LLMs.
Strengths: Strengths:
- Novel theoretical approach to constrained LLM alignment with strong mathematical foundations
- Practical algorithms that reduce computational burden compared to iterative primal-dual methods
- Comprehensive experimental evaluation across multiple tasks and baselines
- Flexibility to work with both model-based and preference-based scenarios
- Theoretical guarantees on the stability and effectiveness of the approach
Weaknesses: Weaknesses:
- Limited to single safety constraint in experiments due to dataset limitations
- Assumes Bradley-Terry preference setup, which may not always hold in practice
- Potential sensitivity to the quality of pre-trained reward and safety models in MOCAN
- PECAN slightly underperforms MOCAN, suggesting room for improvement in preference-based scenarios
- Limited discussion of potential negative effects or failure cases
Technical Quality: 4
Clarity: 3
Questions for Authors: Questions:
1. How does the computational complexity of MOCAN and PECAN compare to iterative primal-dual methods?
2. Have you explored the effectiveness of your approach with multiple simultaneous safety constraints?
3. How sensitive are MOCAN and PECAN to the quality of pre-trained reward and safety models?
4. Could you provide more insight into why PECAN slightly underperforms MOCAN?
5. Have you considered extending the approach to more general preference models beyond Bradley-Terry?
6. How does the performance of your methods scale with larger language models (e.g., 13B or 70B parameters)?
7. Have you explored the potential for using your dualization approach in other constrained optimization problems in machine learning?
8. How robust is the method to potential adversarial attacks or attempts to circumvent the safety constraints?
Confidence: 2
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Limitations:
The authors have addressed some limitations, but there is room for improvement:
1. Addressed:
- Experiments limited to single safety constraint due to dataset availability
- Assumption of Bradley-Terry preference setup
- Slight underperformance of PECAN compared to MOCAN
2. Could be better addressed:
- Scalability to multiple simultaneous safety constraints
- Sensitivity to quality of pre-trained reward and safety models
- Robustness to potential adversarial attacks or constraint circumvention
3. Missing:
- Discussion of potential biases introduced by the method
- Analysis of computational requirements compared to baseline methods
- Exploration of failure cases or scenarios where the method might not perform well
- Consideration of privacy implications when using human preference data
Suggestions for improvement:
1. Conduct experiments with multiple simultaneous safety constraints, if possible
2. Explore the method's effectiveness with more general preference models
3. Provide a more detailed analysis of computational requirements and scalability
4. Investigate potential failure cases and limitations of the approach
5. Discuss potential biases and privacy implications of the method
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the very positive evaluation and the valuable feedback. We have answered all questions to the best of our ability, and we are glad to address any further questions you might have.
**1. Experiments are limited to a single safety constraint.**
As stated in the limitations section, our experiments use a single safety constraint due to the lack of suitable datasets. We are eager to test our method with multi-safety datasets once they become available. To the best of our knowledge, all existing works on safety alignment, such as [12, 22, 35], involve only one constraint, even in theory. In contrast, our algorithms and theoretical guarantees are developed to handle multiple constraints.
**2. Restriction of the Bradley-Terry preference setup.**
We agree that the Bradley-Terry model may not reflect all ground-truth human preferences. However, we want to remark that our two-stage strategy is orthogonal to the preference setup. In fact, our two presented algorithms readily adapt to more general preference setups by generalizing the DPO optimization backbone to the more generic $\Phi$-PO (see Equation (6) of [R1]). Specifically, we can consider $r(x,y) = \mathbb{E}\_{y'\sim\mu}[\Phi(\mathcal{P}\_r(y\succ y'\vert x))]$ and $g(x,y) = \mathbb{E}\_{y'\sim\mu}[\Phi(\mathcal{P}\_g(y\succ y'\vert x))]$, where $\mathcal{P}\_r$ is the preference for helpfulness and $\mathcal{P}\_g$ is the preference for safety, $\Phi$ is a preference-based utility function, and $\mu$ is an underlying behavior policy. When $\Phi(p) = \log (p/(1-p))$, it recovers the standard Bradley-Terry preference setup (e.g., Proposition 1 of [R1]). We have remarked on the general preference setup in the conclusion section and left the extension for future work.
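As a sanity check on the claim that $\Phi(p) = \log(p/(1-p))$ recovers the Bradley-Terry setup, the following sketch (all names hypothetical, not from the paper's code) verifies that applying this $\Phi$ to a Bradley-Terry preference probability returns the underlying reward gap.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def phi_logit(p):
    """Phi(p) = log(p / (1 - p)), the utility that recovers the
    Bradley-Terry setup in the Phi-PO generalization above."""
    return math.log(p / (1.0 - p))

# Under Bradley-Terry, P(y > y' | x) = sigmoid(r(x, y) - r(x, y')),
# so applying phi_logit to the preference returns the reward gap.
reward_gap = 1.7
assert abs(phi_logit(sigmoid(reward_gap)) - reward_gap) < 1e-9
```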
**3. Potential sensitivity to pre-trained reward and safety models.**
In practice, pre-trained reward and safety models capture the true reward and safety only up to some estimation error, as characterized by Definition 1. Theorem 3 shows that our strategy finds a nearly optimal LLM policy up to these estimation errors. Therefore, in theory, MoCAN enjoys stability against perturbations in the reward/safety models.
Experimentally, we supplemented new experiments of MoCAN using the beaver-7b-v3.0-reward/cost models (see Table 3 in the PDF attached to the global response). It is observed that MoCAN-aligned LMs attain reasonably good performance under the new reward/cost models.
**4. PECAN slightly underperforms MOCAN.**
Please refer to the first point in our **global response**.
**5. Potential negative effects or failure cases.**
Potential negative effects can arise from misusing our constrained alignment method in applications that harm society, such as promoting gender biases [R2]. Our method assumes IID preference datasets and potential failures can be caused by out-of-distribution datasets [R3]. We will discuss potential negative effects and failures in future versions.
**6. The comparison of computational complexity.**
Our alignment methods reduce to solving a one-shot unconstrained problem, while iterative primal-dual algorithms [12, R4, 22] must solve an unconstrained optimization problem for each round of dual updates. Moreover, these algorithms need to generate a large batch of on-policy responses to evaluate the dual-variable update, which is computationally expensive.
In practice, our experiments are conducted on a single 48G NVIDIA A6000 GPU, taking about 15 hours to align each model. For comparison, constrained RLHF [R4] reports 20 hours per run on a more powerful NVIDIA A100 GPU. The computing resources and running time are not reported for safe-RLHF [12]. However, since safe-RLHF uses PPO for policy updates, like constrained RLHF, on a much larger dataset, we expect its running time to scale 2-3x (i.e., 40-60 hours) relative to constrained RLHF. Constrained DPO [22] uses a single 40G NVIDIA A100 GPU without reporting the running time. Therefore, our methods reduce running time by at least 25% compared to the aforementioned methods while using a much cheaper GPU. We will include this discussion in future revisions.
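To illustrate the one-shot reduction described above, the sketch below (our illustrative reading, not the paper's implementation) shows how fixing the optimal dual variable turns constrained alignment into unconstrained alignment against a mixed reward.

```python
# Illustrative sketch (not the paper's code): with the optimal dual
# variable lam_star fixed by the pre-optimized dual function, the
# constrained problem reduces to unconstrained alignment against the
# mixed reward r + lam_star * g (helpfulness plus weighted safety).
def combined_reward(r, g, lam_star):
    return [ri + lam_star * gi for ri, gi in zip(r, g)]
```

A standard unconstrained trainer (e.g., DPO-style) can then be run once on this mixed reward, instead of alternating primal and dual updates.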
**7. Scaling with larger language models.**
Due to resource and compute limits, we cannot experiment with our alignment methods on larger language models. To our knowledge, 7B models are commonly used to evaluate generation performance in recent NeurIPS papers (e.g., [R5, R6]). We believe experimenting with 7B models is acceptable for a fair comparison.
**8. Exploration in other constrained ML problems.**
Our dualization approach also applies to MinMax-RLHF [10] and alignment with $f$-divergence [R7]. Please see Appendix I and Appendix A for more details.
**9. Resistance to adversarial attacks.**
We thank the reviewer for highlighting this interesting direction. We are not aware of any common attackers that target safety constraints. However, one could naturally formulate the improvement of adversarial robustness as new constraints. Please refer to the second point in our global response.
**10. Suggestions for improvement.**
Thank you for carefully reviewing our paper and suggesting several important points for improvement. We will address them point-by-point in revisions.
*References*
[R1] A General Theoretical Paradigm to Understand Learning from Human Preferences. Azar et al., 2023.
[R2] GenderAlign: An Alignment Dataset for Mitigating Gender Bias in Large Language Models. Zhang et al., 2024.
[R3] Align Your Prompts: Test-Time Prompting with Distribution Alignment for Zero-Shot Generalization. Hassan et al., 2023.
[R4] Confronting Reward Model Overoptimization with Constrained RLHF. Moskovitz et al., 2023.
[R5] Training language models to follow instructions with human feedback. Ouyang et al., 2022.
[R6] RRHF: Rank Responses to Align Language Models with Human Feedback. Yuan et al., 2023.
[R7] Aligning Language Models with Preferences through $f$-divergence Minimization. Go et al., 2023.
---
Rebuttal Comment 1.1:
Title: response to authors
Comment: I thank the authors for their detailed response. I now have a much clearer picture of my concerns and would like to keep my score. I look forward to seeing all of the changes in the next version. | Summary: This paper aims to address the issue that Lagrangian-based primal-dual policy optimization methods are computationally expensive and often unstable in conventional RLHF. The authors improve stability by pre-optimizing a smooth and convex dual function in closed form, eliminating the need for cumbersome primal-dual policy iterations. Experiments show that the proposed approach yields a good trade-off between helpfulness and safety improvements, and the proposed method achieves better safety performance than the existing DPO algorithm.
Strengths: 1. The paper is logically clear and easy to follow.
2. The motivation of the paper is clear.
3. Very detailed mathematical formulations.
4. The proposed method is promising in enhancing a model's safety and helpfulness.
Weaknesses: 1. The existing benchmark Alpaca-eval mainly assesses the model's instruction-following capability. To confirm that this method does not undermine the model's overall performance, additional benchmarks, such as MMLU and TruthfulQA, are suggested to be considered.
2. Additional testing on benchmarks such as AdvBench or PKU-SafeRLHF Evaluation is suggested to evaluate the safety performance of the model's open-ended generation.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see the weaknesses above.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: YES
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive evaluation and the valuable feedback. We have answered all questions as best we could. We are glad to address any further questions you might have.
**Additional benchmarks.**
We have conducted additional experiments on the benchmark datasets TruthfulQA and AdvBench. Please refer to Tables 1 and 2 in the PDF attached to the global response for detailed results. These tables show that our aligned models achieve higher safety scores with increased $\lambda$ values, even when evaluated on these out-of-distribution prompts.
---
Rebuttal 2:
Comment: Dear Reviewer EtW9,
We are grateful for your insightful comments and feedback. In our detailed responses, we have carefully addressed each of your concerns. Given the impending rebuttal deadline, we kindly request that you review our responses to ensure that all issues have been adequately resolved. If further clarification or elaboration is required, we would be happy to provide additional explanations. Thank you for your time and consideration.
Best regards,
The authors of Submission 13052
---
Rebuttal 3:
Title: response
Comment: I would like to thank the authors for addressing my concerns, and I raised the acceptance rate from "weak accept" to "accept".
---
Rebuttal Comment 3.1:
Comment: Thank you so much for taking the time to review our paper and for kindly raising your rating in support of our paper. We appreciate the valuable feedback you shared. Thank you again! | Summary: The paper proposes a novel dualization-based method to convert a constrained alignment of a LLM to an unconstrained alignment. The proposed two-stage policy learning method, CAN, eliminates the need for cumbersome primal-dual iterations with theoretical analysis. Based on CAN, two practical algorithms, MOCAN and PECAN, are compatible with cases when reward/safety models are known and when they are unknown, e.g., human preference, respectively. The paper also presents strong empirical results that support the claims.
Strengths: - By and large, the paper is well written. I especially appreciated the discussion comparing with existing works and the demonstration of the importance of the reduction to unconstrained alignment, which comes with a stability analysis.
- The idea of using a dualization perspective to build a two-stage approach that reduces constrained LM alignment to unconstrained LM alignment is novel and useful.
- The section on the two practical algorithms based on CAN is well written, and the two proposed algorithms cover the cases where reward and safety models are known or unknown.
- The observation on the influence of offline data, regarding the difference between the number of prompts and responses, is also very helpful.
- The theoretical results appear to be correct.
Weaknesses: - It would have been interesting to see how this method could be combined with LM adversarial attack techniques to improve model robustness.
- Some additional commentary on how to improve the pre-alignment mentioned in the MOCAN vs PECAN comparison would have been very helpful.
Technical Quality: 4
Clarity: 4
Questions for Authors: - Could you comment on how LM adversarial attack techniques can be used with CAN to improve model robustness?
- Could you comment on directions that may help with pre-alignment to improve the PECAN performance?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Yes, the author sufficiently addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the very positive evaluation and the valuable feedback. We have answered all questions as best we could, and we are glad to address any further questions you might have.
**1. Incorporating adversarial robustness.**
Please refer to the second point in our **global response**.
**2. Ways to improve pre-alignment.**
There are several ways to improve pre-alignment and, consequently, PeCAN's performance. As clarified in the global response, models can always be pre-aligned over larger datasets or using stronger reference LMs. With the same experimental resources, properly regularizing the pre-alignment process, as done in the training of reward/cost models in reference [12] (see Equations (13) and (14)), could enhance the reliability of the pre-aligned pseudo-preference labelers. Additionally, integrating more safeguard measures into the vanilla log-probability computation, as demonstrated in our supplementary experiments, could significantly improve the quality of pseudo-preference labeling and PeCAN's overall performance. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their careful review and valuable feedback. We have addressed all questions raised by the reviewers in the separate rebuttals below and are glad to address any further concerns. Multiple reviewers have brought up several matters, so we present a global response to these concerns below.
**1. The empirical gap between MoCan and PeCan.**
First, we want to clarify that we did not intend to claim that PeCAN outperforms MoCAN throughout the manuscript. The main motivation for having both PeCAN and MoCAN is to facilitate flexible working scenarios, specifically model-based and model-free scenarios. Since PeCAN requires pre-aligned models and uses log-probabilities to indicate ground-truth helpfulness and safety, its empirical performance compared to MoCAN is heavily tied to the quality comparison of the pre-trained reward/safety models (used in MoCAN) and the pre-aligned helpfulness-only/safety-only LMs (used in PeCAN). If the pre-alignment and downstream log-probability computation do not sufficiently indicate the ground-truth helpfulness/safety levels, PeCAN can underperform MoCAN.
Second, we would like to highlight the differences in training details between the beaver reward/cost models [12] and our pre-aligned DPO models. The first difference is in the size of the training data. The beaver reward/cost models are trained on the full Safe-RLHF dataset, which contains roughly 1 million data points (please refer to https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF). In contrast, the pre-alignment in our paper, due to resource limits, is conducted over a smaller dataset containing approximately 30K data points (please refer to https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF-30K), as practiced in references [22,35]. The second difference lies in the training loss objective. While both the literature [12] and our work assume the Bradley-Terry preference setup, they additionally impose regularization upon the DPO-type loss function (see Equations (13) and (14) in [12]), which may greatly boost the robustness and empirical performance of the reward/cost models. Conversely, we pre-align models by faithfully following the DPO loss without regularization. The lack of regularization may result in less robust log-probabilities as proxies for ground-truth helpfulness and safety, as partly manifested in our calibration results in Figure 6.
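For concreteness, the vanilla (unregularized) DPO loss we use for pre-alignment can be sketched as follows. This is a minimal single-pair illustration in plain Python; the log-probability values and the choice of `beta` are placeholders, not values from our experiments:

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Vanilla DPO loss for a single preference pair. Reference [12]
    additionally regularizes an objective of this type when training
    its reward/cost models; we faithfully use the unregularized form."""
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # -log sigmoid(margin), written out explicitly
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# The loss is small when the policy prefers the chosen response...
low = dpo_loss(-1.0, -5.0, -3.0, -3.0)
# ...and large when it prefers the rejected one.
high = dpo_loss(-5.0, -1.0, -3.0, -3.0)
print(low < math.log(2.0) < high)  # True; log(2) is the loss at zero margin
```

The absence of an explicit regularizer in this objective is exactly the difference we highlight above, and it may make the resulting log-probabilities less robust as proxies for ground-truth helpfulness and safety.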
Additionally, log-probabilities, by definition, can be sensitive to special tokens such as termination and new-line tokens, as well as a few potential outlier tokens (e.g., those caused by hallucination). Therefore, using the plain log-probabilities faithfully, as in the PeCAN algorithm, can be less indicative than pre-trained scalar reward/cost models for the ground truth. To address this, we supplement new experiments by adopting the log-probability computation module in the DPO pipeline on the Huggingface platform (https://github.com/huggingface/trl/blob/main/trl/trainer/dpo_trainer.py#L72) in pseudo-preference labeling, which incorporates many safeguard engineering tricks. For example, lines 760-771 adjust the difference between tokenizing the concatenated string of prompt+response and merging the separate tokenization results of prompt and response; lines 851-859 clip chosen and rejected responses to be of the same length; lines 862-870 add a BOS token in front of prompts; and lines 880-895 truncate responses to avoid the resulting token sequences being too long. Only the pseudo-preference labeling part is updated with the new computation module, while the pre-alignment maintains the same DPO manner. We find these additional safeguard tricks can significantly improve the performance of PeCAN and offset the visible gap between MoCAN and PeCAN (see Figure 2 in the PDF attached below).
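To convey the flavor of these safeguard tricks (this is not the actual TRL implementation, which operates on token IDs from a real tokenizer), here is a minimal sketch using a hypothetical whitespace tokenizer, a made-up `MAX_LEN`, and toy prompts:

```python
BOS = "<bos>"
MAX_LEN = 8  # assumed maximum sequence length for this sketch

def tokenize(text):
    # stand-in for a real subword tokenizer
    return text.split()

def build_sequence(prompt, response):
    # Trick 1: tokenize prompt and response separately, so the boundary
    # between them stays unambiguous (tokenizing the concatenated string
    # can merge tokens across the boundary with subword tokenizers).
    seq = [BOS] + tokenize(prompt) + tokenize(response)  # Trick 2: prepend BOS
    return seq[:MAX_LEN]  # Trick 3: truncate overly long sequences

def clip_pair(chosen_seq, rejected_seq):
    # Trick 4: clip chosen/rejected responses to the same length so the
    # pseudo-preference comparison is length-balanced.
    n = min(len(chosen_seq), len(rejected_seq))
    return chosen_seq[:n], rejected_seq[:n]

chosen = build_sequence("how to stay safe", "wear a helmet when cycling")
rejected = build_sequence("how to stay safe",
                          "ignore all safety advice entirely and do whatever")
a, b = clip_pair(chosen, rejected)
print(len(a) == len(b))  # True
```

In our supplementary experiments, applying the analogous token-level safeguards to the pseudo-preference labeling is what offsets much of the MoCAN-PeCAN gap.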
**2. Incorporating adversarial robustness.**
We thank the reviewer for highlighting this interesting direction. One promising approach could be incorporating robustness requirements as a constraint into our alignment framework. For instance, one could consider the improvement margins of adversarial robustness as follows:
$$\mathbb{E}\_{x\sim \mathcal{D}}\left[\mathbb{E}\_{y\sim \pi(\cdot \,\mid\, \text{attacker}(x))}[g(\text{attacker}(x), y)]\right]- \mathbb{E}\_{x\sim \mathcal{D}}\left[\mathbb{E}\_{y\sim \pi_{\rm ref}(\cdot \,\mid\, \text{attacker}(x))}[g(\text{attacker}(x), y)]\right]\geq b,$$
where $\text{attacker}$ is an external entity that maliciously modifies the original prompt input. While this basic formulation interprets the attacker as a function that acts only on prompts, one could also consider more sophisticated constraints that account for the interaction between the LM and the attacker. We refer the reviewer to several related studies [R1, R2, R3] for more details.
However, it is questionable whether the one-shot computational benefit of exact dualization remains valid after incorporating such sophisticated constraints. We would like to leave this direction for future work and are glad to discuss it further in revisions.
**3. Supplemented experiments.**
As suggested by the reviewers, we supplement new experimental results, including (i) the evaluation of MoCAN- and PeCAN-alignment on the new benchmarks TruthfulQA and AdvBench, (ii) MoCAN-alignment using the new pre-trained beaver-7b-v3.0-reward/cost models, (iii) the model-based evaluation scores of the DPO and safe-RLHF models, and (iv) PeCAN-alignment using the Huggingface log-probability computing module in the pseudo-preference labeling. Please refer to the attached PDF for more details.
*References*
[R1] Representation Engineering: A Top-Down Approach to AI Transparency. Zhou et al., 2023.
[R2] Open the Pandora's Box of LLMs: Jailbreaking LLMs through Representation Engineering. Li et al., 2024.
[R3] Uncovering Safety Risks in Open-source LLMs through Concept Activation Vector. Xu et al., 2024.
Pdf: /pdf/daf729eb1eadb45b3d3540d630e8f20487d9259d.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper proposes a new constrained optimization formulation to ensure the safety of language models (LMs). Based on the proposed constrained optimization problem, the authors derive the dual formulation and provide the closed-form solution to this dual formulation. Through theoretical derivations, they show that the dual function has four desirable properties, including smoothness and local strong convexity in $\lambda$. Then, the authors optimize the Lagrange multiplier $\lambda$ and update an LM using this $\lambda$.
Based on these theoretical derivations, the authors propose two practical algorithms: Model-based Constrained Alignment via dualizatioN (MoCAN) and Preference-based Constrained Alignment via dualizatioN (PeCAN).
First, MoCAN assumes that reward model $r$ and safety model $g$ are given. Then, it estimates $h$ for each offline data $(x,y)$, optimizes $\lambda$ using the offline data and estimated $h$, and finally updates the LM with pseudo-preferences constructed with $r\_\lambda$.
Second, PeCAN trains $m+1$ pre-aligned LMs $\pi\_{\theta\_r}$ and $\lbrace\pi\_{\theta\_{g\_j}}\rbrace_j$. Using the pre-aligned LMs, PeCAN generates a dataset, optimizes $\lambda$ using the collected dataset, and updates LM with pseudo-preferences constructed with an implicit reward constructed by the pre-aligned LMs instead of $r\_\lambda$.
In the experiments section, the authors demonstrate that the proposed algorithms successfully enhance the safety and helpfulness of LMs. Notably, the proposed algorithms provide Pareto frontiers in terms of helpfulness and safety win rates with respect to changes in $\lambda$.
Strengths: The proposed algorithm is based on the rigorous theoretical derivations.
Weaknesses: The experiments are not sufficient to fully support the authors’ claims.
Firstly, there is a notable gap between the empirical results of MoCAN and PeCAN. In practice, MoCAN utilizes reward and cost models pre-trained on a pre-collected dataset, while PeCAN relies on pre-aligned LMs $\pi\_\theta$ and $\pi\_{\theta\_g}$. This means PeCAN replaces the pre-trained reward and cost models with pre-aligned LMs. However, MoCAN shows better performance than PeCAN. Therefore, contrary to the authors' claim, it seems that training reward and cost models is more effective than using pre-aligned LMs.
Secondly, based on the empirical results of MoCAN, changes in $\lambda$ do not show a clear trade-off between safety and helpfulness (as shown in the middle of Figure 3). In addition, it is hard to say that PeCAN outperforms DPOs.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Difference between MoCAN and PeCAN: In theory, MoCAN and PeCAN are not significantly different, as $r$ and cost $g$ can be replaced by $\pi\_\theta$ and $\pi\_{\theta\_g}$, respectively, based on $r=\frac{\pi\_\theta}{\pi\_{ref}}$ and $g=\frac{\pi\_{\theta\_g}}{\pi\_{ref}}$. Then, why are there such different empirical results between MoCAN and PeCAN?
2. Using closed-form solution $\pi\_\lambda$ to update the policy: Why not use the closed-form solution to update the policy? The authors have already derived $\pi\_\lambda$. Then, it seems straightforward to update $\pi$ to be proportional to $\pi\_\lambda$. However, the authors adopt preference-based approaches, which are sparser than the value of $h$.
3. Model-based evaluation of DPOs and RLHF: In Figure 3 (left), there are no empirical results for DPOs and RLHF. Are there any results for model-based evaluation?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: -
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback. We have addressed all your comments/questions to the best of our ability; see our detailed responses below. We hope our response encourages you to re-evaluate our paper. We will be happy to address any further questions you might have.
**1. The empirical gap between MoCan and PeCan.**
Please refer to the first point in our **global response** to all the reviewers above.
**2. No clear trade-off in MoCAN.**
We note that the trade-off in MoCAN is evident in the model-based evaluation in Figure 3 (left), which is consistent with our theoretical findings. The less clear trade-off in the pairwise win-rate comparison in Figure 3 (middle) is somewhat understandable, as the win-rate performance does not directly correspond to the associated model-based performance. The model-based evaluation and the MoCAN algorithm are associated with proxy reward/safety models, while the pairwise win-rate evaluation is more closely tied to the ground-truth reward/safety and the specific baseline LM (i.e., the reference policy). Additionally, the GPT-based evaluation uses handcrafted prompts from [12] for the safety study and prompts from the Alpaca-eval dataset [20] associated with the "helpful_base" category, following the practice in [22]. The model-based evaluation uses prompts from the SafeRLHF-30K test set. These prompts come from different sources, leading to expected distributional shifts. In particular, the GPT-based evaluation result can be viewed as out-of-distribution performance, and thus, it may differ from the in-distribution performance seen in model-based evaluation. We also note that in related literature, such as [35], the helpfulness-safety trade-off under pairwise win-rate evaluation is unclear.
**3. Infeasibility of a straightforward policy update.**
There seems to be a misunderstanding regarding the alignment of language models. Sampling responses from a fixed and known policy is generally infeasible, even if the policy can be expressed in a closed form. In alignment, we aim to fine-tune language models, which contain billions of weights/parameters. Any change in the sampling policy for an LM must be implemented by updating its internal weights. Moreover, the input space (prompt tokens) and the output space (response tokens) of an LM policy can be viewed as infinitely large. Sampling in a large discrete space is notoriously difficult [R1, R2]. There is no direct way to force a language model to generate responses by exactly following a mathematically known policy. This practical impossibility is ubiquitous in LM alignment, even in unconstrained problems, departing from the common belief in classic reinforcement learning.
Regarding the reviewer's follow-up comment that "preference-based approaches are sparser than the value of $h$", we do not have a clear understanding. We kindly ask the reviewer for more clarification on this comment so that we can respond accurately.
**4. Model-based evaluation of DPOs and RLHF.**
We have supplemented the model-based evaluation results of DPOs and Safe-RLHF (see Figure 1 in the PDF attached to the global response). Our findings indicate that the model-based scores of DPO LMs align closely with those in Figure 3 (middle). Notably, Safe-RLHF exhibits better helpfulness vs. safety tradeoff, supporting our claim that there can occasionally be a discrepancy between model-based and GPT-based evaluations. The surprisingly good safety level of Safe-RLHF under in-distribution model-based evaluation may be attributed to the large dataset used in training (nearly one million data points, see https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF).
*References*
[R1] Oops I Took A Gradient: Scalable Sampling for Discrete Distributions. Grathwohl et al., 2021.
[R2] A Langevin-like Sampler for Discrete Distributions. Zhang et al., 2022.
---
Rebuttal Comment 1.1:
Title: Need more clarifications?
Comment: Dear Reviewer wcCD,
We are grateful for your insightful comments and feedback. In our detailed responses, we have carefully addressed each of your concerns. Given the impending rebuttal deadline, we kindly request that you review our responses to ensure that all issues have been adequately resolved. If further clarification or elaboration is required, we would be happy to provide additional explanations. Thank you for your time and consideration.
Best regards,
The authors of Submission 13052
---
Rebuttal Comment 1.2:
Comment: Thank you for your response.
However, I still have a question regarding the policy update. Specifically, I’m curious why we don't use the closed-form solution of $\pi\_\lambda$, which allows us to express $\pi\_\lambda$ as a function of $\pi\_{\theta\_r}$ and $\pi\_{\theta\_g}$. Why not use this function instead of explicitly learning $\pi_\lambda$?
This is a minor question, and I will raise my score.
---
Rebuttal 2:
Comment: We thank the reviewer for responding to our rebuttal and for being willing to raise the score. We would like a few more clarifications on your concern.
First, we would like to remark on the relation between $\pi_{\lambda}$, $\pi_{\theta_r}$, and $\pi_{\theta_g}$:
$$ \ln\left(\frac{\pi\_{\lambda}(y_1 | x)}{\pi\_{\rm ref}(y_1 | x)}\right) - \ln\left(\frac{\pi\_{\lambda}(y_0 | x)}{\pi\_{\rm ref}(y_0 | x)}\right) = \ln\left(\frac{\pi\_{\theta_r}(y_1 | x)}{\pi\_{\rm ref}(y_1 | x)}\right) - \ln\left(\frac{\pi\_{\theta_r}(y_0 | x)}{\pi\_{\rm ref}(y_0 | x)}\right) + \lambda \left(\ln\left(\frac{\pi\_{\theta_g}(y_1 | x)}{\pi\_{\rm ref}(y_1 | x)}\right) - \ln\left(\frac{\pi\_{\theta_g}(y_0 | x)}{\pi\_{\rm ref}(y_0 | x)}\right) \right).$$
However, this relation does not give an explicit formula for $\pi\_{\lambda}(y \mid x)$ due to the missing normalization factors $Z_{r}(x)$ and $Z_{g}(x)$ of $\pi\_{\theta_r}(\cdot \mid x)$ and $\pi\_{\theta_g}(\cdot \mid x)$, where $Z_{r}(x) =\int_{y}\pi_{\rm ref}(y \mid x)\exp(r(x, y)/\beta)\,{\rm d} y$ and $Z_g(x)$ is defined similarly. Moreover, such normalization constants are intractable to compute due to the high-dimensional sampling and input spaces (i.e., response tokens and prompt tokens).
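As a toy illustration of why the missing normalizers matter, consider a hypothetical discrete space with only four responses, where $Z(x)$ reduces to a finite sum and the tilted policy $\pi_{\lambda} \propto \pi_{\rm ref}\exp((r + \lambda g)/\beta)$ can be written out exactly (all numbers below are made up for illustration):

```python
import math

# Hypothetical 4-response space: pi_ref, reward r, and safety g per response.
pi_ref = [0.4, 0.3, 0.2, 0.1]
r = [1.0, 0.5, 0.2, 0.0]
g = [0.0, 0.5, 1.0, 1.5]
lam, beta = 0.5, 1.0

# Unnormalized weights of the closed-form tilted policy.
weights = [p * math.exp((ri + lam * gi) / beta)
           for p, ri, gi in zip(pi_ref, r, g)]
# Z is computable here only because the space has 4 elements; for an LM the
# response space is effectively infinite, so Z is intractable.
Z = sum(weights)
pi_lam = [w / Z for w in weights]
print(abs(sum(pi_lam) - 1.0) < 1e-12)  # normalized in the toy case
```

In the LM setting, this normalization step is exactly what cannot be carried out, which is why the closed-form expression cannot be used directly as a sampling policy and the model must instead be fine-tuned toward it.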
Second, even assuming the probability oracle $\pi_{\lambda}(\cdot \mid x)$ were available, sampling from it remains highly challenging. The reasons are manifold, including the high-dimensional sampling and input spaces and the complex, hard-to-characterize landscape of the probability distribution $\pi_{\lambda}(\cdot \mid x)$, unlike sampling from a common distribution such as a normal or uniform distribution.
We hope this response can clarify the reviewer's questions. We look forward to the follow-up discussion and are happy to address any further comments or questions.
---
Rebuttal Comment 2.1:
Comment: Thank you for your response!
---
Reply to Comment 2.1.1:
Comment: We thank the reviewer for the quick response. Since your previous comment noted that **"This is a minor question, and I will raise my score"**, we were wondering whether there are any remaining clarification issues we could address better. Many thanks. | Rebuttal 1:
Classification Diffusion Models: Revitalizing Density Ratio Estimation | Accept (poster) | Summary: This paper introduces a class of generative models based on neural networks which learn the conditional probability distribution of the noise level given a noisy image. It is trained by combining a maximum-likelihood (cross-entropy) objective with a denoising score matching objective. The resulting generative model can be sampled from by iterative denoising like in diffusion models, and also allows for direct likelihood evaluation (in principle).
Strengths: - The proposed model is original and very interesting. Such an approach has the potential to significantly improve upon diffusion models.
- The experimental results are very convincing (except for NLLs, see below) and show great promise.
- The analysis of the shortcomings of density ratio estimation is also on point and very clear. Although these issues were already evidenced in the previous work of Choi et al., their results were limited to MNIST while CDMs achieve satisfying results on CIFAR10 and CelebA 64x64, and have the added potential benefit of direct likelihood evaluation.
Weaknesses: The biggest weakness of the paper to me is the method used for likelihood evaluation, which has several issues:
- An implicit assumption in the paper is that the marginal distribution of the noise level learned by the model matches the correct one used during training, that is, $p_\theta(t) = p(t)$. Although reasonable, it is very difficult to verify, as it requires marginalizing over the high-dimensional variable $x$.
- There is no guarantee that the resulting density function (eq. (10)) is normalized (i.e., integrates to 1). This can lead to arbitrarily small _reported_ NLLs (potentially even smaller than the entropy of the data). Although the soundness of the approach is verified in a synthetic but high-dimensional example, I think it would be necessary to verify at least this property on a natural image dataset, though this is also very hard to do (but see below).
- There is also no guarantee that the density function in eq. (10) is the density of samples generated by the reverse diffusion (Algorithm 2). CDMs thus contain a priori two models of $p(x)$, one for sampling and one for likelihood evaluation, which could be different. I would also like to see in the final version of the paper (not for the rebuttal) a comparison between the NLLs of these two models on natural images (the latter can be evaluated with an integral along the sampling trajectory using the change of variable formula, as pointed out in the text). If they match, this also verifies that the reported densities are correctly normalized (see previous point).
- Finally, restricting NLL comparisons to "models where likelihood can be evaluated in single shot" seems slightly arbitrary, especially since here sampling is not in a single shot. It would make sense to allow a similar computational complexity for sampling from the model and evaluating the log-density at a data point.
I might be missing subtle details in my assessment though, and I would be happy to update my score if the authors point out misunderstandings.
Technical Quality: 3
Clarity: 4
Questions for Authors: - In the proposed approach, $t = T+1$ plays a special role, which seems slightly unsatisfying to me. Having correct log-density estimates using eq. (10) requires an accurate estimation of $\log p(T+1|x)$ for $x$ a clean image, which is very challenging as natural images are very far from typical realizations of Gaussian noise (unless the explicit form of the Gaussian density is enforced here?). However, the MSE loss (which indirectly involves $\nabla_x \log p_\theta(T+1|x)$) seems to fix this problem. Still, is there a way to avoid it altogether with a more symmetric formulation? I liked how the approach of Choi et al. elegantly solved it by considering a path $(x_t)$ of typical realizations of each $p(x|t)$, though I understand that in their setting log-density estimation requires computing a (one-dimensional) integral.
- Out of curiosity, what does Figure 3 look like when the model is trained with MSE loss only? A first observation is that one can add an arbitrary function of $t$ only to the model outputs without changing the MSE loss, so the predicted conditional distributions cannot be right. However, I wonder if this is the only degree of freedom: does the difference between the "true" and predicted $p(t|x)$ depend on $x$? If true, it would show that the CE objective just fixes this constant and is thus not so important.
- I am very puzzled by the DDM denoising result at high noise in Figure 4. Why would they be so bad? Predicting the mean of the dataset is really easy, so this is not a capacity issue but rather a training issue? And why CDM would be specifically better at large noise levels? Is this because $T+1$ plays a special role?
Minor suggestion:
I understand that the "classification" term comes from historical considerations in the density ratio estimation problem. However, it seems ill-adapted here: $t$ is really a continuous variable (which has been discretized, but this does not seem necessary), and therefore $p(t|x)$ is simply a probabilistic model of a conditional distribution that is trained by maximum-likelihood. The title of the paper also sets the incorrect expectation that it is about unifying diffusion models with image classifiers.
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: As mentioned in the discussion, an important aspect that is not studied in this paper is the architecture of the network to approximate $\log p(t|x)$. The community has extensively explored architecture to model scores $\nabla_x \log p(x|t)$, but architectures for energies have been much less studied. As rightfully pointed out, here $f$ is a UNet, so that $\nabla f$ is the gradient of a UNet and thus significantly differs from a UNet (which would be the goal to emulate score-based approaches). Let me suggest a few references that consider this problem:
- Tim Salimans and Jonathan Ho. Should EBMs model the energy or the score? In Energy Based Models Workshop-ICLR, 2021.
- Samuel Hurault, Arthur Leclaire, and Nicolas Papadakis. Gradient Step Denoiser for convergent Plug-and-Play. In International Conference on Learning Representations, 2021.
- Regev Cohen, Yochai Blau, Daniel Freedman, and Ehud Rivlin. It has potential: Gradient-driven denoisers for convergent solutions to inverse problems. In Advances in Neural Information Processing Systems, 2021.
All in all, I very much like this paper, and strongly recommend acceptance. It proposes novel ideas of interest to the generative modeling community, and the strong numerical results demonstrate that this approach has promising potential.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful and interesting comments.
**Comparing the one step likelihood computation to an integral along the sampling trajectory**
In theory Eq. (10) should be normalized, but this is a great idea. We’ll definitely add it to the final version. We’d just like to note that while our one-step likelihood computation depends only on the model's output, the integral approach depends also on the sampling method. Specifically, different noise schedulers correspond to different ODE/SDE discretizations and may affect the quality of the generated samples, even if each denoiser by itself is optimal. For a scheduler that leads to low quality samples, the likelihood of the generated images obtained with our one-step approach would be low (high NLL). For example, as we show in Sec. 4.3, we can choose a scheduler that improves the NLL over the test set but leads to a worse FID of the generated samples. We argue that this doesn’t imply the model learned a worse approximation of $p(x)$, but only that the sampling trajectory is worse than that of DDPM.
The DDPM scheduler leads to good generated samples, though. This suggests that the discretization associated with this scheduler leads to an accurate solution of the ODE. And therefore, in this case, we agree that your suggested experiment is interesting and we will include it in our final version. Thanks!
**Likelihood evaluation is performed in a single step, but sampling is not**
We completely agree. However, there may be applications that only require likelihood evaluation for real images, and not sampling of synthetic images. This is the case e.g. in out-of-distribution detection. Therefore, we believe there is merit in separately reporting the NFEs required for likelihood evaluation and for sampling.
The ability of our method to evaluate the likelihood of real images in one step stands in contrast to methods that calculate the likelihood by solving an ODE through a sampling-like process (with repeated evaluations of the network), as in DDMs.
In any case, note from Table 3 that our method is competitive also with methods that evaluate the likelihood using many NFEs.
**The role of t=T+1**
This is a very interesting question. Please note that both in Equation (3) in [34] and in Proposition 1 in Choi’s work [7], the likelihood is extracted from the ratio $\log\frac{p(x|0)}{p(x|T+1)}$, since $p(x|T+1)$ is known (a Gaussian in this case). In order to achieve a correct likelihood estimation, all the intermediate ratios need to be accurate for the same input $x$, and specifically, the ratios between densities which represent high noise levels need to be accurate when the input is a clean image. For this reason, DRE methods failed to learn distributions of datasets which are more complicated than MNIST. When you incorporate the MSE loss using our theorem, it enforces the log ratio $\log\frac{p(t|x)}{p(T+1|x)}$ to be accurate for any $t$, which solves this problem. However, it would be very interesting in future work to try solving it in different ways.
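To spell out the structure under discussion, here is a schematic of the telescoping decomposition underlying DRE methods (our own sketch, assuming a uniform timestep grid $t = 0, \dots, T+1$ and the notation of this rebuttal, with $p(x \mid T+1)$ a known Gaussian):

```latex
\log \frac{p(x \mid 0)}{p(x \mid T+1)}
  \;=\; \sum_{t=0}^{T} \log \frac{p(x \mid t)}{p(x \mid t+1)}
```

Every intermediate ratio on the right must be accurate at the *same* input $x$; in particular, the high-noise ratios must be accurate on clean images, which is exactly where classical DRE training breaks down and where enforcing accuracy of $\log\frac{p(t|x)}{p(T+1|x)}$ via the MSE loss helps.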
**Using only the MSE loss**
When training the model using only the MSE loss, the CE is very high (please see Table 1) and Figure 3 is very noisy, which indicates that the model fails to learn the distribution $p(t|x)$ in this case. Your observation is correct and very interesting. The MSE indeed doesn’t change if we add a function of $t$ to the model’s output. This is why the CE loss is important for estimating $p(t|x)$. Without the CE loss, the model can function as a denoiser but is useless for the purpose of outputting the likelihood in a single step. This indeed seems to be the only degree of freedom that the CE loss fixes because the MMSE predictor corresponds to $\nabla_x \log p(t|x)$, which uniquely defines $\log p(t|x)$ up to an additive constant that may depend on $t$.
Note that the difference between the true and predicted $\log p(t|x)$ may depend on $x$. However, in this case it is the same function of $x$ for all $t$. This is true because this function should cancel out after the softmax and also when we take the difference between $\log p(T+1|x)$ and $\log p(t|x)$ in Equation (8).
To conclude, the MSE loss alone is invariant to adding a function of $t$ to the model’s output, and the CE loss removes this degree of freedom. In addition, both losses are invariant to adding a function of $x$ to the model’s output, but the softmax removes this degree of freedom. Importantly, when we calculate the NLL using Equation (10), this function of $x$ is canceled out even before the softmax, since it is added both to $\log p(t|x)$ and $\log p(T+1|x)$.
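The two invariances can be summarized compactly (our paraphrase of the discussion above, writing $f_\theta(x)_t$ for the model's $t$-th output):

```latex
% Identifiability of the classifier output up to two nuisance terms (sketch):
f_\theta(x)_t \;=\; \log p(t \mid x)
  \;+\; \underbrace{g(t)}_{\text{fixed by the CE loss}}
  \;+\; \underbrace{h(x)}_{\text{harmless}}
% The MSE loss only sees \nabla_x \left[ f_\theta(x)_t - f_\theta(x)_{T+1} \right],
% so g(t) drops out; the softmax (and the difference in Eq. (10)) cancels h(x).
```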
**Why is the prediction of DDM poor for large $t?$**
We don’t know why diffusion models fail here, and it would be very interesting to explore it in the future. However, we do know why CDM is particularly good at the higher noise levels. Please note that in Theorem 3.1, we show that the optimal denoiser depends on the model's prediction for $p(t|x)$ and $p(T+1|x)$. At the high noise levels, $t$ is close to $T+1$, and as we show in Fig. 3, in this case the model’s prediction for $p(t|x)$ and $p(T+1|x)$ is accurate even without incorporating the MSE loss. This accuracy is further improved when we include the MSE loss.
---
Rebuttal 2:
Comment: I am happy that you find my suggestions helpful, and that they might improve the paper. I reiterate that this paper should be accepted for the reasons outlined above.
**t = T+1**
In Choi et al., I was referring to what the authors of this paper call the "pathwise method" (section 5.2). The resulting integral has the property that the (infinitesimal) density ratios (as well as the scores) need only be accurate on typical samples from $p(x|t)$ for their respective timestep $t$. That said, it seems that they evaluate this more sophisticated method only on Gaussian data.
**Using only the MSE loss**
The reason I asked this question is because this statement is deceptive: "$\nabla \log p$ uniquely defines $\log p$ up to an additive constant". This is true in a strict sense, but not in an approximate sense. In particular, one might have a good _approximation_ of the score _on the support_ of $p$ without having a good approximation of the energy on said support, _when the support is disconnected_. E.g., for a mixture of two well-separated Gaussians, the relative probability of each mode is not captured by the score (it is encoded in the value of the score in-between the two modes, where there is no training data). This is known as the lack of a Poincaré inequality in the maths literature. So this means that in practice, only using the MSE loss might lead to estimated values of $\log p(t|x)$ whose error also depends on $x$.
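A concrete instance of this point (our own sketch, not from the paper): consider a well-separated two-component mixture with mode weight $w$,

```latex
p(x) \;=\; w\,\mathcal{N}(x; -a, 1) \;+\; (1-w)\,\mathcal{N}(x; a, 1),
  \qquad a \gg 1.
% On the support of either mode the score is essentially that of a single
% Gaussian, independent of w:
\nabla_x \log p(x) \;\approx\; -(x + a) \ \text{near } -a, \qquad
\nabla_x \log p(x) \;\approx\; -(x - a) \ \text{near } a,
% while the log-densities of the two modes differ by \log\tfrac{w}{1-w}, a
% quantity encoded only in the score between the modes, where training data
% is absent.
```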
---
Rebuttal Comment 2.1:
Comment: **$t=T+1$**
Thanks for the clarification. We indeed missed the "pathwise" variant in Choi's paper. This is a clever idea, and we'll definitely mention this in our final version. But, as you note, this method was illustrated only on Gaussian distributions, which suggests that it still fails to solve the problem for complex high-dimensional distributions. This is probably due to the accumulation of many small errors along the path, which our method avoids by the incorporation of the MSE loss.
**MSE**
Thanks again for the clarification. We absolutely agree. In practice, learning the score by itself can lead to errors in the likelihood, which depend on $x$ (especially for multimodal distributions with well separated modes). In those cases, the CE loss may indeed help fix those errors. We'll clarify this in the final version. | Summary: A common approach for likelihood estimation is the density ratio estimation (DRE). DRE is a method of modeling the density of a target distribution (or data distribution) in the form of a density ratio with a known reference distribution. The standard normal distribution is often chosen as the reference. To train the density ratio models, a classification-based method called noise contrastive estimation (NCE) is generally used.
However, in high dimensions, the difference between the reference and the data distribution is large, making the classification problem very easy. Consequently, the model does not learn accurate density ratios. To overcome this saturation problem, some have proposed to create intermediate distributions between the target and reference, similar to diffusion models. For example, the telescoping ratio estimation (TRE) first creates a bridge between the target and the reference distribution through their linear combinations. The density ratio between two adjacent timesteps—the $t$-th and $(t+1)$-th—is trained using the NCE method. However, despite the diffusion-like modifications, TRE has failed to resolve the saturation problem in high-dimensional data.
Therefore, to overcome the limitation of the previous approaches and estimate the log-likelihood efficiently for the high-dimensional datasets, the paper proposes a novel DRE-based diffusion-based generative model. Most importantly, the paper proposes to predict the timestep $t$ for a given observation $x$—$p(t | x)$—when $x$ is sampled uniformly from all intermediate distributions in the bridge. It means that the model predicts how noisy the given input is, unlike the classification problems proposed in TRE. The paper highlights two important benefits of the proposed method.
First, the authors draw an analytical connection between the noise-predicting classifiers and the Bayes-optimal denoisers at each timestep. This connection means that conventional parameterizations in diffusion models, such as epsilon/score/denoiser, can also be written in terms of the noise-predicting classifiers. Thus, the noise-predicting classifiers can be trained via denoising score matching (DSM) as well as the cross-entropy loss. This training overcomes the saturation problems that originate from classification-based training.
Second, even compared to TRE, setting aside its scalability issue, the noise-predicting classifiers provide efficiency in log-likelihood evaluation. The proposed method can estimate the log-likelihood of the (clean) data in one go, while TRE requires $T$ model evaluations.
In addition, the paper mentions a concurrent work that proposes a similar noise-predicting classifier, but the authors emphasize that the concurrent work didn't incorporate the DSM loss, which is the key ingredient of the proposed approach's scalability.
Finally, the authors demonstrate the proposed method's effectiveness via various experiments, including generative modeling of image generation benchmark datasets.
------------------------
Update the rating from 7 to 8 after the authors' rebuttal
Strengths: One of the paper's key contributions is the introduction of a novel solution for likelihood estimation. The proposed method addresses significant challenges in the density ratio estimation and offers a fresh perspective on the parameterizations in diffusion-based models.
Moreover, the authors effectively motivate the use of DSM by establishing the connection between the time-prediction classifiers and the conventional parameterizations. This training overcomes the limitations of previous DRE-based approaches.
Furthermore, the proposed method exhibits comparable performance and scalability compared to popular diffusion-based generative models, further underscoring the practical effectiveness of the proposed DRE-based method.
Weaknesses: While the proposed method's originality and novelty are clear, its significance is still doubtful. While achieving efficient likelihood estimation is the primary motivation, the paper doesn’t provide sufficient discussion about it—instead, it focuses on demonstrating the improved generation performance of the proposed DRE-based method. Thus, the proposed method would be much more impactful if the authors could demonstrate what becomes available when the likelihood estimation for high-dimensional data works.
Additional discussion about the importance of the classification loss seems necessary. It is clear that the time-prediction classifier links to the conventional denoiser parameterization, and thus this connection enables the use of DSM to train the classifiers, which makes the proposed model applicable to high-dimensional data. However, this also implies the possibility of training the proposed models without any classification objective.
Technical Quality: 4
Clarity: 4
Questions for Authors: N/A
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Likelihood estimation is the primary motivation; the paper lacks illustrations of its use**
Our primary motivation was to analyze why existing DRE methods fail to capture distributions of complex high-dimensional data, and to develop theory and a practical DRE method that doesn’t suffer from this limitation. The single-step likelihood evaluation is actually a by-product of our analysis. Nevertheless, we agree with the reviewer that it would be interesting to explore the single-step likelihood computation in the context of downstream tasks. We see this as an intriguing direction for future work. For example, [R2] showed that a generative model that is able to calculate the likelihood can be used for zero-shot classification. However, in their case they need to perform many NFEs in order to calculate the likelihood. These applications are out of scope for our paper, but we believe that our method has the potential to be used for solving these tasks in future work.
**Additional discussion about the importance of the classification loss may seem required**
That's an important point. As shown in Table 1, when training using only MSE loss, the CE is very high. In this case, we can use the model as a denoiser, but it will not function as a noise level classifier and specifically will be useless for calculating the likelihood of a given input using Theorem 3.2. The explanation is that the MSE is invariant to an addition of a function of $t$ to the classifier’s output (since when we take the derivative with respect to $x$ in Eq. 8, it zeros out). Therefore, we must combine the CE loss to remove this degree of freedom.
[R2] Zimmermann, Roland S., et al. Score-based generative classifiers. In: NeurIPS Workshops ‘21. | Summary: This paper introduces a clever connection between Density Ratio Estimation (DRE) and Diffusion Models (DM), showing that optimal denoisers are also optimal noise classifiers. Doing so allows them to construct a new type of loss based on noise classification. This allows DRE methods to inherit the benefits of diffusion models, like strong sampling, while adding new capabilities like single step likelihood estimation.
Strengths: - The mathematical connection is clever and elegant.
- I liked the framing / connections to DRE literature.
- The single step likelihood calculation gives a distinguishing feature that sets the method apart from diffusion models.
- Reasonable ablations were considered, and clear and readable notation was used.
- The method requires an additional backward pass for each denoising/sampling step. In the spirit of the NeurIPS guidelines, I count this clearly labeled limitation as a strength. It's intellectually interesting and also clearly distinguishes that the computation done by their model is different from regular diffusion models in a non-trivial way.
Weaknesses: The proposed approach is functionally equivalent to a diffusion model (using Eq. 8 and with the necessary incorporation of the MSE loss that matches typical diffusion models), so you might hope to see that you can substantially improve on diffusion models by adding the CE loss as a kind of regularizer. But there are several reasons why it isn't clear that this gives a substantial improvement.
- The architecture has to be changed. There's no obvious way to get around that, but it does marginally affect the ease of use for these ideas, and the directness of comparisons (though on this point I agree with the authors' statement that they chose a minimal intervention).
- MSE improvements (Fig. 4) are only marginally different in absolute terms, which is what matters for log likelihood. See some questions about this result.
- The authors only used / compared with relatively old diffusion models, so it's not clear if modest benefits to MSE or FID in some cases would still be seen in more SOTA models. I would expect much more comprehensive experiments to support the idea that using CE loss can improve sampling (Table 2).
The strongest claim of improvement is the single step likelihood calculation. However, there are three weaknesses associated with this result.
Table 3 is a bit of a dubious comparison for two reasons.
- Unlike the ELBO, it's not clear if the proposed estimator gives an upper bound on NLL. I suspect it doesn't. If there are errors in p(t|x), then because it shows up with both signs in Eq. 10, it could contribute to the error in either direction. So we are comparing bounds on NLL to an estimate of NLL which could look lower due only to error.
- The NFE comparisons are also a bit dubious. Function evaluation complexity varies wildly between methods. We should have a fair comparison to the associated diffusion model with the same architecture, but it wasn't clear which one that is in the table. Though it's a bit moot as I wouldn't trust the result anyway, as we don't have a bound here.
- A compelling application of the single step log likelihood evaluation was not discussed. If the log likelihood had been used and evaluated for some downstream application, it might have assuaged worries about the accuracy of the estimate.
Minor comments:
- I didn't like the abstract sentence "directly output likelihood...lacking in most generative techniques", since most common generative techniques today do output likelihood. You clarify later what you mean by "directly" so it is Ok. I just wanted to say that it was off-putting on first read.
Technical Quality: 3
Clarity: 3
Questions for Authors: I was really surprised by the fact that your architecture has no timestep conditioning, but matches or outperforms DDMs that do have timestep conditioning in denoising MSE. Have I understood that correctly (i.e., that the DDM in Fig. 4 does have timestep conditioning)? If so, that seems like a really interesting conclusion for diffusion research, as the timestep only shows up in a simple way in Eq. 8, potentially leading to a simplification of other architectures. It would be interesting to see if this effect persists for stronger architectures. On the other hand, maybe the right way to interpret this result is that your architecture essentially outputs the result for all time-steps at once: Eq. 8 selects the correct one, and a score is obtained through backprop. Still, that may be an interesting possibility to try to explore for standard diffusion models.
In Fig. 3, I can understand why the classifier is limited, but it wasn't totally clear to me why MSE helps. But there may not be an easy answer to that (besides the superficial one that it is a regularizer of some sort).
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations were well addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **The architecture has to be changed**
We agree with the reviewer that the architecture can't be exactly the same since DDM is a denoiser and CDM is a classifier, which may impact the comparisons a little bit. However, as the reviewer notes, the change in the architecture is minor – we replaced only the last layer of the DDM architecture by a convolution, followed by max-pooling and a linear layer. This has a negligible effect on the number of parameters. It is important to note that this architecture was optimized for DDPM and not for CDM, yet CDM still achieves comparable or better results with it. As we mentioned in the text, we believe that an important and interesting future direction would be to optimize the architecture for CDM.
**Old diffusion models**
We want to emphasize that our main goal is not improving denoising diffusion models, but rather making DRE-based methods work for the first time on datasets more challenging than MNIST. Therefore, we mostly focus on analyzing the reasons that DRE-based methods failed, and show how our theorem leads to a practical algorithm that overcomes these issues. Moreover, we used the same architecture of DDPM only for a fair comparison. In future work, it would be interesting to investigate different architectures which are more suitable for CDM (and in general may be different than the standard denoiser architectures).
**Upper bound on NLL**
Our method indeed doesn’t upper bound the NLL but rather estimates it. Yet, this is true for all methods in Table 3, which calculate the likelihood and not the ELBO. For example, DDPM-SDE and DDPM++ (which estimate the likelihood and not ELBO) numerically solve an ODE in which each step can accumulate errors.
As for the fact that $p(t|x)$ appears with both signs, this is a key problem with existing DRE based methods, which our method solves. Without incorporating the MSE loss, the model’s prediction is inaccurate for $t$ values that are far from the real $t$ as we show in Fig. 3. Specifically, when calculating the likelihood of a clean image $x$, the model needs to be accurate for both $p(0|x)$ and $p(T+1|x)$. This is the reason that DRE-based methods failed to date. In our case, the addition of the MSE loss solves this problem and makes the NLL calculation more accurate (see also the answer to the last question below). In App. D we show that our NLL calculation is exact on high-dimensional toy examples for which we can calculate the ground-truth NLL analytically.
**Function evaluation complexity varies wildly between methods**
While the evaluation complexity depends on the architecture, in all cases shown in the table, the number of NFEs is at least 100 and in some cases even thousands. In our case, the calculation requires only a single NFE which is much cheaper computationally. In addition, we use the same architecture as DDPM which appears in the table and requires approximately 200 NFEs, and as we show, we achieve a better NLL.
**Validating the log likelihood evaluation on a downstream task**
We agree with the reviewer that it would be interesting to validate the effectiveness of our single-step likelihood computation in the context of downstream tasks. We see the use of our method in applications as an intriguing direction for future work. For example, [R1] showed that a generative model that is able to calculate the likelihood can be used for zero-shot classification. However, they need to perform many NFEs in order to calculate the likelihood, whereas our method can do this with a single NFE.
Following the reviewer’s concern, to further validate our NLL evaluation, we will add to the final version more high-dimensional distribution examples (in addition to those already appearing in App. D), for which we can compute the ground-truth NLL analytically.
**The architecture has no timestep conditioning**
Exactly; you understood it correctly. Our theorem suggests that, given the optimal classifier that predicts the timestep (and therefore can't be conditioned on it), we can extract the optimal MMSE denoiser. This effect should persist independently of the architecture and specifically for stronger architectures. It is a unique property of CDMs that doesn't exist in DDMs, since in CDMs the time condition implicitly appears by taking the corresponding classifier's output (i.e., if we want to be conditioned on timestep $t$, we will take the derivative with respect to the $t$-th entry of the classifier outputs, as shown in Theorem 3.1).
Your interpretation that our architecture essentially outputs the result for all time-steps at once is correct, but note that it does so in a very efficient manner. Specifically, if we wanted to do so in a diffusion model, it would have to output $T$ predicted noise maps at once. Our model, on the other hand, only outputs $T$ scalars. It is the gradient of each output that gives the predicted noise map.
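To make this concrete, here is a small numerical sanity check (our own toy construction, not the paper's code; all symbols such as `abars` and `sigma0` are assumptions for this sketch, not the paper's exact parameterization): in a 1-D Gaussian setting where $p(t\mid x)$ is available in closed form, the denoiser recovered by differentiating the classifier-style log-output, in the spirit of the Theorem 3.1 identity described above, matches the analytic posterior mean $E[x_0 \mid x_t]$.

```python
# Toy 1-D check: a noise-level "classifier" log p(t|x), differentiated w.r.t.
# its t-th entry (minus the (T+1)-th), recovers the MMSE denoiser via Tweedie.
import math

sigma0 = 2.0                      # std of clean data x0 ~ N(0, sigma0^2)
abars = [0.9, 0.5, 0.1, 0.0]      # abar_t for t = 0..T+1; abar_{T+1}=0 -> reference N(0,1)

def var(t):
    # marginal variance of x_t = sqrt(abar_t) x0 + sqrt(1-abar_t) eps
    return abars[t] * sigma0**2 + (1.0 - abars[t])

def log_p_t_given_x(t, x):
    # uniform prior over t, so p(t|x) is a softmax over Gaussian log-densities
    logs = [-0.5 * x * x / var(s) - 0.5 * math.log(var(s)) for s in range(len(abars))]
    m = max(logs)
    z = m + math.log(sum(math.exp(l - m) for l in logs))
    return logs[t] - z

def denoiser_from_classifier(t, x, h=1e-5):
    # d/dx [log p(t|x) - log p(T+1|x)] via central finite differences
    Tp1 = len(abars) - 1
    g = lambda y: log_p_t_given_x(t, y) - log_p_t_given_x(Tp1, y)
    grad = (g(x + h) - g(x - h)) / (2 * h)
    score = grad - x              # add the known score (-x) of the reference N(0,1)
    return (x + (1.0 - abars[t]) * score) / math.sqrt(abars[t])  # Tweedie's formula

t, x = 1, 0.7
analytic = math.sqrt(abars[t]) * sigma0**2 * x / var(t)   # exact E[x0 | x_t = x]
assert abs(analytic - denoiser_from_classifier(t, x)) < 1e-6
```

Note that the time condition never enters the network input; it only selects which output entry to differentiate, exactly as described above.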
**Why does MSE help in Fig. 3?**
That is a very important point. When training the model using only the CE loss, the model’s prediction $p(t|x)$ may be accurate for the correct noise level but not for more distant noise levels (as shown in Fig. 3). This is why DRE-based methods have failed to capture the distribution of images to date. As Theorem 3.1 shows, the optimal denoiser for timestep $t$ depends on the model's prediction for both $\log p(t|x)$ and $\log p(T+1|x)$. Therefore, when we add the MSE loss on the derivative of the difference between them (using the formula from Theorem 3.1), we enforce the classifier’s output to be accurate not only in its $t$-th entry (as can be achieved using only CE) but also in its $(T+1)$-th entry. By enforcing this, we obtain an optimal classifier that predicts the correct probability $p(t|x)$ both locally around $t$ (thanks to the CE) and globally for distant $t$ values (thanks to the MSE).
[R1] Zimmermann, Roland S., et al. Score-based generative classifiers, NeurIPS Workshops ‘21.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you for the responses.
I will maintain my score - I think this is nice theoretical foundation work but as "the main goal is not improving [the state of the art models], but rather making DRE-based methods work for the first time on datasets more challenging than MNIST", it somewhat limits the impact.
FYI, while bidding for papers I noticed a similar concurrent work that you may want to check out/cite: "diffusion models as noise classifiers" or something similar.
---
Reply to Comment 1.1.1:
Comment: **Main goal**
Thanks. We want to emphasize that although improving diffusion models wasn't our main motivation, we succeeded in achieving comparable or better results than the base models using the same architectures (which are optimized for denoising and not for our task). As we mentioned, an important direction for future work would be to design architectures that are optimized for CDM and to scale our work to larger datasets. Given our preliminary results, we believe that CDM has the potential to improve upon diffusion models in large scale challenging settings.
**Concurrent work**
Thanks for letting us know. We'll look it up and cite accordingly. | Summary: This work develops a new generative framework called the classification diffusion models (CDMs) based on the density ratio estimation (DRE), by establishing an interesting connection between the DDPM's denoiser and noise-predictive classifier, which also helps the exact likelihood computation in a single pass. As is claimed by authors, the proposed method is the first DRE-based technique to successfully generate images of the CIFAR-10 dataset.
Strengths: 1. This work is well-written and clearly formulated.
2. The proposed algorithm is theoretically grounded.
3. The experiments are convincing, which verify the effectiveness of proposed methods.
Weaknesses: 1. The main concern is the capability or potential of DRE-based methods to successfully learn data distributions, particularly when applied to large-scale datasets in practice. As the authors claim, the current development of DRE-based modeling extends only up to the CIFAR-10 dataset (MNIST before), which is too small to establish its validity compared to more standard methods (e.g., DDPM).
2. Another major concern is that CDMs can be more computationally challenging than DDPMs in sampling. According to Algorithm 2, CDMs require an extra backward propagation (BP) pass compared to DDPMs for each timestep when denoising, which is quite expensive.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Please provide more details of questions raised in the weaknesses section above.
2. Check minor grammar typos, e.g. in Line 258, " trains a classifiers...".
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: As stated by the authors, it is worthwhile to explore BP-friendly architectures to alleviate the computational inefficiency of CDMs.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **The potential of DRE-based methods to learn data distributions of larger scale datasets**
We acknowledge that the Celeb-A 64x64 and CIFAR-10 32x32 datasets we experimented with are not very large and high-dimensional by today’s standards, and we certainly agree that extending the evaluation to datasets like ImageNet would be valuable. Unfortunately, however, we lack the computational resources to do so. For instance, as reported in [9], Table 10, achieving satisfactory results (in terms of FID) on ImageNet 128x128 requires a large amount of computation, reaching roughly 521 days on a single V100 GPU. We can run on eight A6000 GPUs, with which we estimate it would take us 56 days to train a standard DDM with the same architecture and number of iterations, using a batch size which is smaller by a factor of 2.
Nevertheless, we believe that the preliminary exploration we present on CIFAR-10 and Celeb-A is still valuable for the research community, as it analyzes why DRE-based methods have failed to capture distributions of high-dimensional data to date, and suggests a practical algorithm based on our theoretical result for overcoming this problem. Furthermore, we truly believe that with sufficient computational resources, future works will be able to train our method on larger-scale datasets.
**CDMs can be more computationally challenging than DDPMs in sampling**
We agree with the reviewer and have mentioned it as a limitation in the main text. Indeed, while a DDM requires a single forward pass for each denoising step, a CDM requires both a forward pass and a backward pass. Nevertheless, the computational cost of performing a forward and a backward pass through a network depends on its architecture. In this work, we chose to use the same architecture as that used by DDPM [19] to isolate the impact of our algorithmic approach from the choice of the model architecture when comparing it to DDMs. However, it is an important future direction to explore architectures that are particularly optimized for CDMs and that alleviate the gap in computational complexity. Such architectures should have the property that performing a forward pass and a backward pass through them is computationally similar to performing only a forward pass in a regular DDM. This could potentially be achieved e.g., by relying only on the encoder part of the UNet. However, we leave this exploration for future work.
**Minor grammar typos**
Thanks. We will fix them in the final version.
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply! I think the connection between the DDPM's denoiser and the noise-predictive classifier is interesting, but the large-scale and inefficiency concerns still hold. I will keep the score.
Context-Aware Testing: A New Paradigm for Model Testing with Large Language Models | Accept (poster) | Summary: This paper introduces Context-Aware Testing (CAT), a novel approach that uses context as an inductive bias to guide the search for meaningful model failures. Unlike previous methods that rely solely on data to find slices where a model's predictions underperform compared to average performance, CAT addresses the limitations of these data-only approaches. Traditional methods often fail to consider the problem context and can lead to multiple testing problems. To overcome these issues, the authors propose loosening the restrictive assumption of relying solely on data. They introduce CAT as a new testing paradigm that incorporates context to provide a more comprehensive evaluation of model performance. Additionally, the paper presents SMART Testing, which employs large language models (LLMs) to hypothesize relevant and likely failures. These hypotheses are then evaluated on data using a self-falsification mechanism, allowing for the automatic identification of more relevant and impactful failures compared to traditional methods.
Strengths: The topic is highly interesting, and I agree with the authors' motivation that diverse perspective testing is necessary for practical deployment in real-world cases. These dimensions can improve the practical utility of tabular prediction models. In particular, the core limitation of data-only testing—the lack of a principled failure mode discovery procedure that can incorporate prior knowledge to guide the search for meaningful errors—is also persuasive.
Weaknesses: Despite the importance of the topic and motivation, it is difficult to agree that the suggested method fully addresses all the issues. Most importantly, in the first step of hypothesis generation, we need to define the contextual information and dataset contextualization. If my understanding is correct, I am quite confused about how this contextual information differs from rule-based methods defined by human experts to use contextual information to find specific data slices. For example, in Table 1, if we have contextual information about age or comorbidities, we can directly utilize that information to find data slices without using LLMs or the operationalization step. How are these different, and how are steps 1 and 2 beneficial? I feel that utilizing LLMs is quite redundant for practical use. Additionally, too much prior knowledge is needed to utilize SMART testing, raising concerns about its practicality.
Technical Quality: 2
Clarity: 2
Questions for Authors: - Can you provide examples for step 1 like in Table 1? What specifically constitutes contextual information and dataset contextualization?
- In Section 5.1, how are the descriptions for synthetic features provided in SMART testing? If the context corresponds to "it is synthetic" or "it is independent from the other," I think the results are too trivial.
- If there are no accurate descriptions related to the features or datasets, how is the method applicable?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 1
Limitations: - As mentioned in the weaknesses, the reason why LLM is needed should be elaborated. As the authors mentioned in the manuscript, using LLMs could even hinder the total procedure of SMART testing if there is no clear reason for their use.
- While the motivation is quite practical, rather than research-friendly, which is valuable, it would be beneficial to provide "practical" examples to illustrate the application of the proposed methods.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear R-ctad,
Thank you for the feedback which helped to clarify our paper. We'll address misunderstandings about contextual information and LLM necessity.
We provide grouped answers A-D & highlight paper updates.
---
# (A) Clarification on contextual information
We apologize for any confusion caused by our use of the term 'context'. To clarify, we use 'context' in two distinct ways: (a) external input by a human (not strictly necessary for the framework) and (b) external knowledge (such as relevant background knowledge, domain understanding) within an external system (LLM). While we made a small footnote about this on page 5, we agree this is not enough and could have resulted in confusion about the nature of our work.
| Aspect | Context as input (external input) | Context as external knowledge |
|---|---|---|
| Definition | Additional problem/dataset information in the form of a textual string | Relevant background knowledge of a system (e.g. an LLM) that informs the generation of targeted hypotheses |
| Source | User input (optional) | An external system (e.g., GPT-4) which includes an understanding of the relationships between dataset features and their meaning|
| Example | `info = "this is a dataset on heart disease prediction"` | Use of a language model (e.g., GPT-4) to generate hypotheses or justifications by understanding of the data |
| Required? | Not necessary (can be left blank) | Required for CAT to work |
"Context-aware testing" refers to external knowledge use, not user input. SMART uses tabular context (which is inherently encoded in the feature names) for hypothesis generation without needing human input. For example, **Table 1** in the paper showcases the hypotheses that were generated by SMART leveraging the feature names without any external human effort.
## Need for prior knowledge & practical example.
Further, based on your suggestion to include a 'practical' example to illustrate the application of the proposed method, we thought it would be useful to show how easy it is to use SMART as well as the minimal contextual information and dataset contextualization needed.
As shown below, users do not need any prior knowledge to use SMART. Rather, we make use of the context inherently encoded in the dataset, feature names and task.
```python
import SMART
model_tester = SMART('gpt-4')
# Optional external input (can be left blank)
context = "Find where a model fails. This is a prostate cancer prediction task."
model = XGBoost()
description_dataset = X_train.describe()
model_tester.fit(X_train, context, description_dataset, model)
print(model_tester.subgroups)
```
This implements SMART with just a few lines, requiring no additional human effort beyond providing the dataset and model. In our implementations, we typically provide generic, easily accessible context that does not vary across tasks (see **Appendix C.3** for examples).
**UPDATE**: Add table to Sec. 3.3 to clarify the two uses of context and rename the *context input* as *external input* which we hope will disambiguate the two meanings.
---
# (B) Difference from Rule-Based Methods
CAT and SMART fundamentally differ from rule-based methods:
- No human input required: Unlike rule-based methods, CAT does not need experts to define rules based on domain knowledge.
- Automated hypothesis generation: SMART uses feature names as implicit context to guide hypothesis generation automatically.
- Scalability: This approach can be applied to any tabular dataset without human intervention.
**UPDATE**: Add a contrast to rule-based in Sec. 3.2
---
# (C) Why we need an LLM in SMART
We wish to avoid using humans for large-scale testing because human intervention is expensive, time-consuming and often unavailable. Instead, we desire an *automated* approach to testing ML models. For this, we propose employing LLMs. The use of an LLM is crucial because (as per L177-186):
- Contextual understanding: LLMs can interpret feature names and generate relevant hypotheses without human input
- Targeted sampling: LLMs limit the number of tests, focusing on relevant model failures without testing all possible combinations
**Benefits**. We show that by using LLMs to search for model failures, we are able to test ML models better. SMART finds data slices where models are unreliable much more often than data-only methods (see Sec. 5). Therefore, SMART offers a solution when human experts are unavailable or too expensive.
Ultimately, by using LLMs to *automatically* generate context-aware hypotheses, we can test models for model failures without succumbing to the issues that data-only testing methods face (see Sec. 3.2).
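To make the multiple-testing argument concrete, here is a small illustrative sketch (ours, not taken from the paper): exhaustively testing every conjunctive slice over even a modest number of binary features forces a punishing Bonferroni correction, whereas a handful of targeted hypotheses keeps the per-test significance threshold usable.

```python
# Illustrative sketch (not from the paper): why exhaustive slice testing
# loses statistical power relative to a few targeted hypotheses.

def num_conjunctive_slices(n_binary_features: int) -> int:
    # Each binary feature can be fixed to 0, fixed to 1, or left
    # unconstrained; subtract 1 to drop the fully unconstrained slice.
    return 3 ** n_binary_features - 1

alpha = 0.05
exhaustive = num_conjunctive_slices(10)  # data-only testing: test everything
targeted = 10                            # context-aware: a few LLM hypotheses

print(exhaustive)          # 59048 candidate slices
print(alpha / exhaustive)  # Bonferroni threshold ~8.5e-07 per test
print(alpha / targeted)    # 0.005 per test -- far easier to detect failures
```

With ten binary features alone there are tens of thousands of candidate slices, so an exhaustive tester either drowns in false positives or, after correction, loses almost all power; a context-guided shortlist avoids both failure modes.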
**UPDATE**: Enhance current description around L174-186.
---
# (D) Questions
Q1: What constitutes contextual info?
A1: See response (A) clarifying contextual information.
Q2: How are the synthetic features provided in SMART testing in Sec. 5.1?
A2: We add new random features to the dataset which have uninformative names, such as "feature1", "feature2". We show that data-only methods will test for *everything* which results in systematic errors. SMART avoids this by selectively (and automatically) testing for *important* data slices.
Q3: How is the method applicable if there are no descriptions of features?
A3: See response (C): (a) feature names provide sufficient context for hypothesis generation; (b) generic prompts are used that do not change across tasks (see **Appendix C.3** for examples). There is no need to provide additional descriptions.
Q4: How are steps (1) and (2) in SMART beneficial if we already have contextual information about age and comorbidities?
A4: They are beneficial because we do not assume prior access to any knowledge (see response (A)). These hypotheses are automatically generated by the system purely from tabular feature names & generic external input about the task.
---
# Thank you
Given our clarifications, we hope you consider revising the paper's evaluation ☺️
---
Rebuttal 2:
Comment: Dear reviewer ctad,
We are sincerely grateful for your time and energy in the review process.
We hope that our responses have been helpful. Given the limited time left in the discussion period, please let us know if there are any leftover concerns and if there was anything else we could do to address any further questions or comments :)
Thank you!
Paper authors
---
Rebuttal Comment 2.1:
Comment: Thank you for your detailed rebuttal. After careful consideration, I still believe that my initial assessment is accurate. The primary concern is that the proposed method seems more suited for research purposes rather than practical use, which is crucial for testing methodologies.
- Limited Applicability Without Feature Descriptions: The effectiveness of your approach is significantly limited when feature descriptions are unavailable. This restricts its utility in practical scenarios where such descriptions might not be provided.
- Cost of Utilizing LLMs: While LLMs are powerful, their use can be costly, both in terms of computation and resources. This raises concerns about the practicality of applying your method in real-world settings.
Moreover, from a research perspective, the novelty of the approach seems constrained. The example provided in the rebuttal merely demonstrates using an LLM to identify failure cases by inputting a dataset description. This does not appear to offer substantial insights for practitioners.
For instance, in the OpenML benchmark, datasets with the same data but different IDs can have vastly different descriptions. While the model fitting results would be identical, the CAT outcomes could vary significantly based on the description. Additionally, simply removing feature names could lead to different results despite the data being the same. Can we truly trust these outcomes?
I was hoping to see a response addressing these specific concerns, but I did not find a satisfactory explanation in the rebuttal.
---
Reply to Comment 2.1.1:
Comment: Dear Reviewer ctad
Thank you for your response. We feel there has been a major misunderstanding. Please find answers to your questions below, many of which have _already_ been answered in our earlier response or in the paper itself, which we re-clarify below.
---
**Comment 1**: Limited Applicability Without Feature Descriptions: The effectiveness of your approach is significantly limited when feature descriptions are unavailable.
**Answer 1**. There seems to be a misunderstanding, as this is *not true*. We have answered this question in **Response (D) A3** -- we do not require feature descriptions. Our inputs, apart from a one-sentence string describing the nature of the task, are equivalent to the inputs of data-only testing (despite showing superior performance).
To put this misclarification to an end: *Our method does not rely on any additional feature descriptions*. We *do* require interpretable feature names, just as any human requires interpretable feature names to understand what they are working with. Feature names (e.g. column labels such as sex, age, race etc) are present in almost all tabular datasets both in the research field and in the real world where data is stored in SQL tables with column names.
*This means that the utility of this method directly extends to all the users who work with such data*.
---
**Comment 2**: Cost of Utilizing LLMs: While LLMs are powerful, their use can be costly, both in terms of computation and resources.
**Answer 2**. There seems to be a misunderstanding about the costs associated with LLMs and the computational requirements. It is *not true* that our method is costly or computationally expensive. We refer the reviewer to our **global response (and our response pdf)**, where we address this exact point on cost, showing that SMART is cheap to use and hence very practical. For your convenience, see the snippet below:
*Table 1. Cost of SMART (USD). We quantify the usage of SMART in USD for different closed-source GPTs. SMART is extremely accessible and cheap to use. Generating 100 hypotheses with GPT-4o Mini costs less than 1 cent.* (GPT-4 < 0.3).
Finally, given the stark differences in effectiveness between SMART and data-only methods, we think it is vital for the community (both research and industry) to have access to improved testing methods for more reliable ML testing.
---
**Comment 3**. From a research perspective, the novelty of the approach seems constrained. The example provided in the rebuttal merely demonstrates using an LLM to identify failure cases by inputting a dataset description.
**Answer 3**. We'd like to address the misunderstanding you got from our rebuttal example. The code snippet we provided demonstrates how easy SMART is to use and that it requires minimal prior knowledge from the user. **This is a benefit --- we want research code to be usable by practitioners**. The ease of use does not negate the technical novelty of our approach.
To re-clarify, the core technical novelties of SMART from our paper:
- Reframing model testing as a frequentist hypothesis testing problem and giving a statistical explanation of the mechanisms by which current methods fail at testing ML models
- New paradigm for ML testing: using context as an inductive bias to guide the search for meaningful model failures.
- Showing that with context-aware testing, we can define and target task-relevant subgroups, limiting the number of tests conducted with better false positive control and greater statistical power.
It seems you have not acknowledged the reframing part of model testing as our contribution, even though it forms a significant part of our paper.
Additionally, our model reports provide insights to practitioners about the tests.
---
**Comment 4**: For instance, in the OpenML benchmark, datasets with the same data but different IDs can have vastly different descriptions. Can we truly trust these outcomes?
**Answer 4**. You're correct that different datasets might have different feature names. That's okay.
- SMART doesn't rely solely on dataset descriptions. It also leverages feature names and basic statistics of the data, which are consistent across different descriptions of the same dataset.
- Self-falsification: While SMART uses descriptions to generate hypotheses, they are validated with actual data via our self-falsification mechanism. This ensures that regardless of the description, only hypotheses supported by the data are retained.
We also ensure trustworthiness via the transparency and auditability provided by the model reports.
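As an illustrative sketch only (the paper's actual self-falsification mechanism is described at L209-224, and the details here are our assumptions), one minimal way to falsify a hypothesized failure slice is a one-sided two-proportion z-test comparing the model's error rate inside the subgroup against the rest of the data:

```python
import math

def two_prop_z(err_in, n_in, err_out, n_out):
    """One-sided z statistic: is the error rate inside the subgroup higher?"""
    p_in, p_out = err_in / n_in, err_out / n_out
    p_pool = (err_in + err_out) / (n_in + n_out)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_in + 1 / n_out))
    return (p_in - p_out) / se

def retain_hypothesis(err_in, n_in, err_out, n_out, z_crit=1.645):
    # Keep a hypothesized failure slice only if the data supports it.
    return two_prop_z(err_in, n_in, err_out, n_out) > z_crit

# Hypothesized slice with 30/100 errors vs 50/900 outside -> retained.
print(retain_hypothesis(30, 100, 50, 900))   # True
# A spurious hypothesis with similar error rates -> falsified.
print(retain_hypothesis(12, 100, 100, 900))  # False
```

Whatever the LLM proposes, a check of this kind ensures that only slices with empirically elevated error rates survive, which is what makes the generated hypotheses auditable.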
**Comment 5**: Removing feature names hurts performance.
We agree that removing feature names would affect SMART — as we’ve shown in **Fig 1 (Response pdf)**. This highlights that feature names play an important role in finding model failures. Hence, SMART should not be used in settings with no feature names (i.e. we provide guidance to practitioners).
---
*We hope these help clarify things. Please let us know.*
---
Rebuttal 3:
Comment: Thanks for your continued engagement. We're glad we've addressed prior concerns and reached a common understanding on some points. We clarify the remaining comments, which we hope persuades you to increase your score.
**a) Should we be using SMART without clear feature names?**
We're glad we're in agreement: we likewise do not recommend using context-aware testing, and SMART specifically, without feature names. While we see this reliance on the contextual nature of feature names as a benefit (it is the nature of context-aware testing), we acknowledge we can be more explicit about this. We will expand the discussion section to note that data-only methods should still be the go-to testing approach in the _rare cases in practice_ where feature names are uninformative.
**b) What about when feature names change?**
Regarding your OpenML benchmark example, SMART can handle variations in feature names as long as they remain informative (e.g., "age" vs "patient_age"). We have consistently demonstrated in the paper that SMART can generate meaningful hypotheses with diverse column names. Thus, changing the name would not affect the reliability. That said, we have another safety mechanism in place --- our *self-falsification mechanism* (L209-224) which uses empirical data to evaluate each hypothesis. *Finally, to clarify, in real applications in both research and industry, we find that the models are tested against a single dataset version containing informative feature names.*
**c) Is this a critical limitation in deep learning tabular problems?**
You also mention that this is a limitation with "LLMs in tabular deep learning problems". This is the first time you have raised this concern. However, this critique is simply not true. As discussed, both our framework context-aware testing and our method SMART specifically are model-agnostic (discussed in L140-162). We give both theoretical and empirical reasons for this.
- **We show theoretically why our framework is model agnostic**. We develop context-aware testing because we show that the alternative --- data-only testing --- implicitly falls into the 'multiple testing problem' (Sec. 3.1.). This issue of ML testing does not depend on the underlying model. We address these theoretical issues of data-only testing methods (Sec. 3.2.) *even in deep learning models*.
- **We provide empirical results to support this claim with deep learning methods**. In addition to our original evaluation which included MLPs (**Table 12 - main paper**), we have updated with two new deep learning tabular models (**Table 4 - response pdf**).
**d) What is the novelty of our study?**
We're quite surprised to see your comment that our novelty is limited.
In your 'summary' section of the original review, you summarized the following (taken as quotes):
- Unlike previous methods that rely solely on data to find slices where a model's predictions underperform compared to average performance, CAT addresses the limitations of these data-only approaches.
- Traditional methods often fail to consider the problem context and can lead to multiple testing problems.
- To overcome these issues, the authors propose loosening the restrictive assumption of relying solely on data
In addition, you highlighted our contributions in the strengths:
- 'the core limitation of data-only testing—the lack of a principled failure mode discovery procedure that can incorporate prior knowledge to guide the search for meaningful errors—is also persuasive.'
We are concerned that there may be some **misalignment between your original review and your current comments**.
To wrap up the discussion around our contributions, it seems that you acknowledge that we:
- **Identify a fundamental limitation in testing methods** which rely only on data (Sec. 3.1 in the paper).
- **Provide a theoretical mechanism explaining why data-only testing falls short**, relating it to the foundational view in statistics of multiple model testing in frequentist statistics. (Sec. 3.2)
- **Proposing a new paradigm which avoids the theoretical issues**: selectively testing the most relevant tests instead of testing for everything --- and falling into the multiple hypothesis testing problem again; (Sec. 3.3)
- **Building a concrete system - SMART** which is an auditable, easy-to-use, automated system as an alternative to data-only testing (Sec. 4)
- **Showing that this works empirically** on over 12 quantitative & 5 qualitative evaluations. (Sec. 5)
Therefore, we respectfully disagree with your most recent comment that our work "merely introduces an LLM to identify model failures" --- which also overlooks our clarification in our previous **Answer 3**
**e) Finding consensus**.
We hope our responses and clarifications have addressed your main questions and concerns. If so, we kindly ask that you consider increasing your score to better reflect this. If not, please let us know what specific unaddressed issues remain, and we'll do our utmost to resolve them. | Summary: The paper introduces Context-Aware Testing (CAT), a new method for testing machine learning (ML) models. Current ML testing methods rely only on data, which often leads to high false positive and false negative rates and misses meaningful failures. CAT improves this by adding external knowledge or context to the testing process. The paper presents SMART Testing, the first version of CAT, which uses large language models (LLMs) to identify potential failures and a self-falsification mechanism to test them. The results show that SMART Testing finds more relevant and impactful failures than traditional data-only methods.
Strengths: 1. The paper is well-written and easy to follow.
2. The concept of Context-Aware Testing represents a significant advancement in ML testing, offering a new perspective that goes beyond traditional data-only methods.
3. The SMART Testing framework effectively uses large language models (LLMs) and self-falsification, which demonstrates the practical application of CAT.
4. The paper presents valuable empirical evaluations that demonstrate the effectiveness of SMART Testing in identifying significant model failures.
Weaknesses: 1. The effectiveness of CAT depends on the quality and relevance of the contextual information. While context information may be available, effectively using it to generate meaningful hypotheses and justifications is not guaranteed, and there are cases where CAT may not be effective.
2. Using LLMs for hypothesis generation could introduce biases from the data used to train these models, potentially impacting the reliability of the testing results.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How can we identify the most relevant contextual information for specific applications? How do we know that one context information is more valuable than another?
2. Are there alternative methods or models that could complement or substitute LLMs for hypothesis generation in CAT?
3. Are there any strategies to mitigate biases inherent in LLMs when using them for hypothesis generation in CAT?
4. Can the SMART Testing framework be adapted to handle non-tabular data such as images?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: 1. The paper mainly concentrates on tabular data and does not investigate how CAT could apply to other data types like images.
2. The reliance on LLMs for hypothesis generation in CAT poses a constraint, particularly in scenarios where LLMs are not accessible or applicable. This dependency may restrict the method's usability and applicability across diverse settings and applications.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear R-H6db,
Thank you for your thoughtful comments to improve the paper. We provide answers (A)-(E) & highlight updates to the paper
---
# (A) Clarifying contextual information
We'd like to clarify that when we refer to "context", we mean the *relevant background knowledge of a system (e.g. an LLM) that informs the generation of targeted hypotheses*.
(i) Dependence on context: To clarify, we don't _require_ external contextual information or human expert input. For tabular data, context is inherently encoded in the feature names. Our focus on tabular data leverages this built-in context. For instance, in a medical dataset, features like age, sex, or patient features provide context to guide LLM hypothesis generation. This contrasts with data-only approaches which only use the numerical data values alone and ignore the context surrounding the feature names.
(ii) Identifying relevant context: We _do not_ require manual selection of contextual information. The relevance or value of the contextual information is automatically determined by the LLM: (a) when proposing hypotheses based on tabular data's inherent context and (b) via our self-falsification mechanism which empirically evaluates hypotheses and prunes those not supported by data.
**UPDATE**: We will update Sec. 4 using the extra camera ready page to clarify this concept of context and how it is used.
Finally, to highlight the minimal input needed, we thought it would be useful to show SMART's ease of use. i.e. additional contextual info input is not necessary but possible. If not provided the LLM leverages the context inherently encoded in the dataset `X`.
```python
import SMART
# Instantiate SMART
model_tester = SMART('gpt-4')
# Give desired context (this can be left as an empty string)
context = """Find where a model fails. This is a prostate cancer prediction task."""
# Load ML model
model = XGBoost()
dataset_description = X.describe()
# Test the model
model_tester.fit(X, context, dataset_description, model)
```
**UPDATE**: include the code snippet in the revision to show usage of SMART.
---
# \(B) Alternatives to the LLM for hypothesis generation
To the best of our knowledge, the main current alternative for generating targeted hypotheses would be human input. However, this lacks SMART's automation and scalability, which underscores the novelty of our LLM-based approach for targeted model testing. Additionally, this highlights exciting research opportunities to develop alternative targeted samplers for hypothesis generation within the proposed CAT framework.
---
# (C) LLM bias & mitigation strategies
We agree with you that it is important to look at directions for removing biases from SMART. Our discussion of precisely this issue can be found in **Sec 5.4**. We realize that the title of the section might not fully convey this and hence will rename it (see update below).
To summarize, in **Sec 5.4** we have outlined several strategies to mitigate biases inherent in LLMs.
- (a) Using the training dataset to guide hypothesis generation.
- (b) Use our self-falsification mechanism to empirically test hypotheses against the actual data, thereby filtering out spurious hypotheses.
- (c) Transparent Reporting: SMART generates comprehensive model reports (example in Appendix D.7) that provide clear justifications for each hypothesis, allowing for human oversight and intervention if needed.
Empirically, Sec. 5.4 (Table 4) demonstrates SMART's ability to correctly identify underperforming subgroups, even in scenarios where LLMs might have prior biases (e.g., ethnicity-related biases).
**UPDATE:** Rename section 5.4 as “Assessing and mitigating potential LLM challenges and biases” to better flag this.
---
# (D) CAT beyond tabular data - images
Our focus on tabular data is intentional: it is central to many high-stakes domains such as healthcare, finance, and criminal justice, where ML testing is particularly impactful and where current data-only testing methodologies leave a significant gap.
Additionally, tabular data inherently includes context through feature names and metadata. This context is not naturally present in images (tensor of pixel intensities). As discussed in Appendix A, extending CAT to other domains like images would require incorporating metadata to provide necessary context — which is often unavailable.
While extending CAT to other domains is an interesting future direction, it's non-trivial. We hope future work can build on our work and adapt the context-aware paradigm to images.
**UPDATE:** add this discussion to our discussion in Appendix A.
---
# (E) Constraint on LLM: might not be accessible or applicable in all scenarios
We wish to clarify the value of SMART even with possible constraints. First, LLMs are becoming increasingly accessible, especially from a cost perspective. As shown in **Table 1 (Response pdf)** it costs less than 0.10 USD for 5 hypotheses and less than 0.50 USD for 100 hypotheses for state-of-the-art LLMs.
Second, we show that SMART outperforms data-only methods even when using cheaper, less capable LLMs like GPT-3.5 (**Appendix D.3**) which are widely available to the public.
Third, research done within the ML community is not always accessible or applicable (as illustrated by the vast research agenda of deep learning, which presupposes access to expensive GPUs). Similarly, we see our research as important in pushing the frontier of ML testing despite presupposing access to an LLM.
Finally, given the stark differences in the effectiveness of data-only and context-aware methods (refer to the global response table), we think it is extremely important for the research community to have access to improved testing methods for more reliable ML testing in high-stakes domains.
---
# Thank you
Thank you for your engagement. Given these changes, we hope you might consider revising your assessment of the paper's impact.
---
Rebuttal 2:
Comment: Dear reviewer H6db
We are sincerely grateful for your time and energy in the review process.
We hope that our responses have been helpful. Given the limited time left in the discussion period, please let us know if there are any leftover concerns and if there was anything else we could do to address any further questions or comments :)
Thank you!
Paper authors
---
Rebuttal 3:
Comment: Thank you for your clarification. After reviewing your answer (A), I remain unclear about how the context information is generated. You mentioned that 'context is inherently encoded in the feature names' and that no external contextual information or human input is required. Does this imply that the LLM can automatically infer context purely from feature names?
Consider two identical tabular datasets with the same feature names—such as name, age, gender, occupation, and income—but with different prediction objectives: one for deciding bank loans and the other for predicting job promotions within five years. Both have binary outputs (yes or no). How would the LLM infer different context in these two different scenarios?
---
Rebuttal Comment 3.1:
Comment: Dear reviewer H6db,
Thank you for engaging with our work.
To clarify, we use 'context' in two distinct ways: (a) external input by a human (not strictly necessary but usually helpful for the framework for specifying the overall task) and (b) external knowledge (such as relevant background knowledge, domain understanding) within an external system (LLM). While we made a small footnote about this on page 5, we agree this is not enough and could have resulted in confusion about the nature of our work and will clarify this confusion.
SMART uses an LLM to generate hypotheses about what to test for, *assuming that the feature names indeed encode clear information*, such as age.
In your illustrative example where we have the same covariates but different meanings of the target label, it would be necessary to add external information/context about the nature of the task as a short input. The short description of the task is always easily available info when testing a model.
**Example when we are predicting whether to give bank loans**
```python
import SMART
# Instantiate SMART
model_tester = SMART('gpt-4')
# Give desired context <--- need to specify this in your example
context = """Find where a model fails. This is a prediction task about whether to give bank clients bank loan or not."""
# Load ML model
model = XGBoost()
dataset_description = X.describe()
# Test the model
model_tester.fit(X, context, dataset_description, model)
```
Notice the variable ``context`` is used as an input to SMART which provides required information about the nature of the task. In case this task is about job promotion within the upcoming five years, this would have to be changed as follows:
```context = """Find where a model fails. This is a dataset about whether people get promoted within a 5-year period. """```
You are correct that it would not be possible to identify what the task is if the target label (`y`) is not labeled in a clear manner (e.g. `5_year_job_promotion`) or if it is not given any additional context, such as the example provided in the variable `context`.
When we say that there is no need for manual human testing, we mean that there is no need to provide explicit information about *what to test for*; the only human input is the variable `context`, which provides the required information. Within SMART, we use a prompt template (provided in **Appendix C.3.**) that passes information about the task context, the description of covariates, and the feature names; this generates hypotheses about where the model is likely to fail, which are then evaluated.
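For concreteness, such a template could look roughly like the following (our own sketch; the actual template is given in Appendix C.3. and will differ in wording and structure):

```python
# Hypothetical sketch of a hypothesis-generation prompt template.
# The real template is in Appendix C.3.; the function and field names here
# are illustrative only.
def build_prompt(context: str, dataset_description: str, feature_names: list) -> str:
    return (
        f"{context}\n\n"
        f"Dataset summary:\n{dataset_description}\n\n"
        f"Features: {', '.join(feature_names)}\n\n"
        "Propose hypotheses (as data slices) where the model is likely to fail, "
        "with a short justification for each."
    )

prompt = build_prompt(
    "Find where a model fails. This is a bank loan prediction task.",
    "5000 rows, binary target.",
    ["name", "age", "gender", "occupation", "income"],
)
print(prompt)
```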
**UPDATE**: To address this confusion, we will rename the context input as external input which we hope will disambiguate the two meanings.
Does this help to clarify your question?
---
Rebuttal 4:
Comment: Dear reviewer H6db,
Thank you for the engaging exchange we had that helped to clarify our work. As the discussion period is coming to a close, we'd like to follow up on your prior concerns.
Have our responses and clarifications addressed your main questions and concerns? If so, we kindly ask that you consider increasing your score from a 5 to better reflect this. If not, please let us know what specific issues remain, and we'll promptly provide additional information to resolve them.
We appreciate your reconsideration and look forward to your feedback.
---
Rebuttal Comment 4.1:
Comment: Thank you for your efforts in developing SMART and for your thoughtful engagement in the rebuttal process.
Based on my latest understanding of the context, I believe it’s crucial to reconsider how the role of context in SMART is communicated. On one hand, overestimating the role of context could be misleading, particularly given the black-box nature of LLMs, where it’s not clear how they actually utilize contextual information. On the other hand, simply stating that "LLMs + context" magically works might understate the significance of your contributions, potentially giving the impression that the work lacks strong scientific insights.
I feel that the current version of the paper doesn’t fully address these concerns. Could you find a more balanced way to present this aspect of your work before the end of the discussion period? I would be inclined to raise my score if these issues are addressed effectively.
---
Reply to Comment 4.1.1:
Comment: Many thanks for the continued engagement to improve the paper. We'll consolidate everything we've discussed above into a single answer, which we hope addresses this concern (so that no one is left with the impression that our paper claims "LLMs + context" magically works ☺).
**Q1: How can the role of context in SMART be more accurately communicated?**
**A1**: To summarize and wrap up the changes we're making on context (valuable discussion -- thanks again):
- Emphasize that context-aware testing is a general framework and SMART is one implementation.
- **Change the way we use the word context**. Clarify that "context" in CAT refers to two aspects: (i) Minimal task description (to be renamed to external input); (ii) The LLM's inductive biases ('knowledge') used for hypothesis generation
- **We rely on both context and data**. Emphasize that SMART *does not* rely solely on context, but on a combination of hypothesis generation and data-driven evaluation (which forms a significant part of our evaluation framework).
- **Context-aware testing avoids the multiple testing problem prevalent in data-only testing**. Link clearly the context-aware mechanism with the problem with data-only testing to better motivate this design choice, explaining that context helps to avoid the multiple hypothesis testing problem implicit in current data-only ML testing (Sec. 3.1.).
- **Highlight ablation study**. Show with an ablation study that not using data results in much worse performance --- a context-guided sampling mechanism (LLM) is not sufficient (ablation study already performed -- **Appendix D.3.**).
**Q2: How do we address the black-box nature of LLMs?**
**A2**: While we answer this concern throughout the paper in separate sections, we'll use the additional page in the camera-ready version to summarize everything into a single paragraph:
- Using LLMs offers concrete insights into which hypotheses are tested and why (Table 1 in our paper). This is not the case for data-only methods which test for everything, resulting in the issues described in Sec 3.1-3.2.
- The testing procedure is fully auditable via model reports (unlike data-only methods), described in Appendix D.7.
- Our *self-falsification* module uses empirical data to validate the hypotheses on actual data.
**Q3: Can we strengthen scientific insights?**
**A3**: We firmly believe that strong foundational statistical intuition which explains complicated phenomena is extremely valuable as a scientific insight. That's why we employ a foundational view in statistics --- multiple hypothesis testing --- to explain for the first time _why_ and _when_ data-only methods fail in finding ML model failures (Sections 3.1-3.2).
We'll add more statistical intuition for why using a context-aware sampling mechanism (e.g. an LLM) resolves the problem of multiple hypothesis testing in ML evaluation (to be added after line 165). We'll further link it to our experimental confirmation of these insights, where we show that our theoretical analysis conforms with our experiments on false positives in Sec. 5.1. and false negatives in Sec. 5.2.
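For intuition, the family-wise error rate grows quickly with the number of hypotheses tested, which is why restricting attention to a handful of context-guided hypotheses reduces false positives. A minimal sketch (our illustration with generic `alpha` and `k`, not settings from the paper):

```python
# Illustrative sketch: family-wise error rate (FWER) under k independent
# tests, each run at significance level alpha.
def fwer(alpha: float, k: int) -> float:
    """Probability of at least one false positive among k independent tests."""
    return 1.0 - (1.0 - alpha) ** k

alpha = 0.05
for k in [1, 5, 100, 1000]:
    print(f"k={k:5d} tests -> FWER = {fwer(alpha, k):.3f}")
```

A data-only method that implicitly tests thousands of candidate slices operates at very large `k`, where a false positive is near-certain; generating a few context-guided hypotheses keeps `k` small.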
---
These should fully clarify the (i) meaning of context; (ii) value of context; (iii) mechanism why it works, in addition to the practical benefits.
Given the changes outlined above, we trust that future readers will find our work's contributions and scientific rigor to be more apparent. Thank you for your engagement, and with the above in mind we hope you will reassess the paper's impact on the field of ML testing---and the associated score for NeurIPS. ☺ | Summary: The paper offers a multiple hypothesis testing view of ML evaluation (in regard to test slice
finding). Authors identify problems with data-only approaches (like high amounts of false positive
and false negative model failure triggers). The paper proposes SMART, a context-aware LLM based ML
model testing methodology to mitigate said problems. SMART uses LLMs for hypothesis generation (data
slices where model could potentially underperform). The method is empirically evaluated and compared
to a set of data-only baselines, showing fewer false positives and recovering more model failure modes.
Strengths: The paper is well structured and well written, it is easy to follow and is enjoyable to read. It
tackles an important problem in model testing at an interesting angle, identifies an issue with
existing (and common) methodology. The paper presents a novel, clever and well motivated solution
that incorporates strong sides of LLMs (contextualization, hypothesis generation) with strong sides
of prior data-based solutions. The discussion on mitigating potential LLM challenges is important
and well considered.
Weaknesses: 1. *It is unclear how the method degrades with less informative context*.
- Most experiments validate the method in conditions well suited for it. e.g. easily
distinguishable irrelevant features (could you clarify what the feature names are in the
experiment, is it truly the case that the LLM could easily differentiate them from potentially
useful features?)
3. *Some details of the method are missing from the main text*
- What are the prompts (better to include some information on this in the main text, I had questions when exploring the experimental results)
- Not sure from which subset of the data the slices are selected (it is intuitive that slices are
subsets of the test data, but if I'm not mistaken it is not clearly defined in the text).
4. *Some related work may be missing*. I think that the subject of ML model testing in industry and production scenarios might be
related, while it is not discussed in the paper [1, 2].
Technical Quality: 3
Clarity: 4
Questions for Authors: - Is it possible to run the method when the data schema is limited (e.g. some feature names are unknown, and the context is rather shallow in general)?
- Could you provide experiments on more real world datasets with real known model failure cases? The
TableShift [3] benchmark could help with such an experiment as it already has subsets (OOD eval sets)
that could serve as ground-truth problematic data slices.
- In what form would the code be available? I suggest making a package with said methodology to increase adoption.
- Are open-weight, free LLMs able to generate sufficiently high-quality hypotheses for the method to work?
References:
- [1] Shankar, Shreya, et al. "Automatic and precise data validation for machine learning." Proceedings of the 32nd ACM International Conference on Information and Knowledge Management. 2023.
- [2] Polyzotis, Neoklis, et al. "Data validation for machine learning." Proceedings of Machine Learning and Systems 1 (2019): 334-347.
- [3] Gardner, Josh, Zoran Popovic, and Ludwig Schmidt. "Benchmarking distribution shift in tabular data with tableshift." Advances in Neural Information Processing Systems 36 (2024).
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Limitations are adequately addressed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer bfyv,
Thank you for taking the time to carefully review our work. We provide answers below (A-F) and highlight our paper updates.
---
## (A) Method degradation with less informative context
We appreciate your interest in SMART's performance under varying context quality. We've developed the context-aware framework primarily for the tabular domain because feature names encode useful information for reasoning about potential failures. Therefore, we *do not* recommend using CAT when feature names do not encode useful information.
We address your concern in two ways. First, we perform a qualitative study where we limit the data schema by hiding the feature names and inspect the hypotheses and justifications generated. We find that in the limited-schema case, SMART generates hypotheses based on inferences about the feature information (e.g. "the model might fail on feature_4 if feature_4 represents gender"). In contrast, informative names guide meaningful hypothesis generation. Such hypotheses and justifications are illustrated in **Table 3 in the response pdf**.
Second, we evaluate whether limiting the data schema by hiding some feature names and leaving minimal external context affects detection rates of model failures. We compare two versions of SMART, original and with corrupted feature labels, in identifying data slices with high performance discrepancies from average (**Fig. 1 in the response pdf**). We find that across two real-world private datasets, hiding the feature names hinders model evaluation. This highlights that feature names play an important role in finding model failures.
**UPDATE**: We'll discuss SMART's sensitivity to feature names in the appendix.
---
## (B) Moving content from the appendix to the main text
We agree with you that highlighting the specific prompt structure used would improve readability and we'll add more discussion in the camera-ready version.
We will also clarify our data subset selection methods (slices are always selected based on train data and evaluated on test data).
---
## (C) Related work
We appreciate you bringing attention to relevant related work. Both works [1] and [2] differ from our approach in that they focus on data validation techniques for general ML pipelines, while our context-aware testing framework specifically aims to use context to guide the search for model failures.
**UPDATE**: We will cite these works in the revision and provide a discussion of how context-aware testing differs from existing data validation techniques used in industry.
---
## (D) Comparison on TableShift benchmark
While TableShift [3] is a useful benchmark, it's not directly applicable to SMART. TableShift primarily focuses on OOD detection. In contrast, SMART searches for model failures within the existing data distribution (recall that the context-guided sampling mechanism in Def. 1 samples from the existing data distribution).
That said, we agree that addressing dataset shift would be a valuable extension of context-aware testing. We'll mention this as future work, highlighting TableShift as a potential benchmark for extending context-aware testing to the OOD domain.
---
## (E) Code release
We expect to release the code in the form of a package, mirroring your suggestion. We demonstrate its usage here and will expand on it in the appendix.
```python
import SMART
# Instantiate SMART
model_tester = SMART('gpt-4')
# Give desired context
context = """Find where a model fails. This is a prostate cancer prediction task."""
# Load a desired ML model
model = XGBoost()
# Test the model
model_tester.fit(X_train, context, model)
```
Once fitted, many attributes can be accessed, such as found subgroups ```model_tester.subgroups```, generated hypotheses ```model_tester.hypotheses```, justifications ```model_tester.justifications```, model report ```model_tester.model_report```, and more. We can also evaluate it: ```print(model_tester.top_underperforming_subgroup)```
```
Overall accuracy of the model: 85.1%
Accuracy on 'Age > 75' subgroup: 55.2%
Discrepancy of: 29.9%
```
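As a toy illustration of the kind of discrepancy computation reported above (our own sketch, with made-up variable names and data; this is not the package internals), one evaluates the model overall and on the boolean mask defined by the hypothesis:

```python
import numpy as np

def subgroup_discrepancy(y_true, y_pred, mask):
    """Accuracy gap between the full evaluation set and the subgroup selected by mask."""
    overall = np.mean(y_true == y_pred)
    subgroup = np.mean(y_true[mask] == y_pred[mask])
    return overall, subgroup, overall - subgroup

# Toy data: the model is wrong exactly on the 'Age > 75' subgroup.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 1])
age = np.array([40, 55, 30, 62, 48, 80, 77, 79])

overall, sub, gap = subgroup_discrepancy(y_true, y_pred, age > 75)
print(f"overall={overall:.3f}, subgroup={sub:.3f}, discrepancy={gap:.3f}")
# → overall=0.625, subgroup=0.000, discrepancy=0.625
```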
**UPDATE:** Add code usage to appendix.
---
## (F) Open-weight models for hypothesis generation
Before jumping in, we wish to highlight **Appendix D.3**, which notes our goal is not to exhaustively test the SMART framework with every LLM. Rather, the goal is to showcase that SMART is feasible with at least the capabilities of GPT-4. While we assessed SMART's performance with different LLMs, we caution against using it with less capable models.
That said, based on your comment we have conducted an experiment with Mistral-7b, Qwen-1.5-7b, Llama-3-8b, Llama-70b, where for the OULAD and SEER datasets we generate 5 hypotheses and assess overlap to the hypotheses generated by GPT-4. This is presented in **Table 2 in the response pdf**.
To summarize, the overlap between open-source models and GPT-4 is between 60-80%. We find that open-source models propose similar hypotheses, but they are not replacements for more capable models. This corresponds to our findings in **Appendix D.6** --- less capable models might propose similar hypotheses, yet they still catch fewer model failures.
**Closed-source models are cheap.** In case your worry is that closed-source models are expensive to run, we conduct an additional evaluation calculating the cost of running SMART with closed-source models (**Table 1 in the response pdf**). The cost does not scale with dataset size but with the number of tests conducted. We find that generating 100 hypotheses with a powerful closed-source model (GPT-4o-mini) costs less than 1 cent and using state-of-the-art models (GPT-4) costs about 0.30 USD.
---
## Thank you
Thank you for your engagement. Given these changes, we hope you might consider revising your assessment of the paper's impact.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response with clarifications and additional experiments. I believe my concerns and questions are adequately addressed. I intend to keep the score, as I already recommend acceptance.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer bfyv,
Thank you for your thoughtful consideration of our paper and your positive response to our rebuttal.
We appreciate your recommendation for acceptance and, if you feel it is appropriate, would be grateful if you could consider whether our paper might align more closely with the criteria for a score of 7 rather than 6.
We believe our work meets the criteria for a score of 7 for these reasons:
- High Impact: Our work improves ML testing by addressing limitations found in all of the current data-only methods. More effective ML testing enabled by SMART is impactful to improve the reliability, safety, and trustworthiness of ML models across various applications and industries.
- Potential for Wide Adoption: Our implementation of SMART as an accessible Python package (mirroring your suggestion) has the potential for widespread adoption and impact within the ML community.
- Thorough Evaluation: Our paper presents a comprehensive evaluation, including 11 quantitative and 2 qualitative experiments. We've also conducted 5 new experiments, providing additional insights into SMART's performance.
We see that these points align with the criteria for a score of 7: "Technically solid paper, with high impact on at least one sub-area of AI or moderate-to-high impact on more than one area of AI, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations."
We respectfully ask you to consider revising your assessment based on these alignments.
Regardless of your decision, we're deeply appreciative of your support and recommendation for acceptance. Your feedback has really helped us to improve the paper!
Thank you again for your time and consideration. | Summary: This paper introduces context-aware testing (CAT), which is a novel tabular model testing paradigm that uses context as an inductive bias to guide the search for meaningful model failures, and builds a CAT system named SMART Testing. In detail, SMART includes four steps: (1) use an LLM to generate hypotheses of failures based on a combination of contextual information and dataset contextualization, (2) transform natural-language-based hypotheses into a form that can directly operate on the training dataset, (3) discard “incorrect” hypotheses by testing out each hypothesis through a self-falsification mechanism, and (4) automatically generate a report of the overall performance of the model under varying hypotheses. Evaluation shows that SMART is more robust than data-only methods in terms of false positives and false negatives, and can target model failures more accurately.
Strengths: - The paper explores an important and interesting research direction.
- The improvements achieved by SMART shown in the evaluation are significant.
Weaknesses: - There is a lack of systematic comparison between the practicality of testing subgroups found by SMART and other data-only methods. While the paper includes some examples of the reports generated by SMART, it would be better for authors to provide both qualitative and quantitative comparison between SMART and data-only methods in terms of the practicality of testing subgroups.
- The tabular models included in the evaluation are outdated. In the paper, SMART testing is only applied to evaluate the performance of simple machine learning models such as Logistic Regression and SVM. It would be better to include more up-to-date deep learning models, such as transformer-based models.
Technical Quality: 3
Clarity: 3
Questions for Authors: - What is the cost of using GPT-4 to generate hypotheses in SMART and is it scalable to larger evaluation benchmarks?
- Since the idea of SMART is general, is it generalizable to other domains apart from tabular testing, such as testing NLP models?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: - It would be good to discuss the potential future directions of removing biases from SMART.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear R-6t3M,
Thank you for your thoughtful comments to improve the paper. We provide answers (A)-(F) & highlight updates to the paper.
---
## (A) Highlighting our systematic comparison of SMART vs data-only
We appreciate your suggestions and believe there may be a misunderstanding regarding the extent of our comparisons. We wish to clarify we provide in total **11 quantitative** and **2 qualitative experiments** (**see global response table above**), as well as **five new additional insight experiments** in the response pdf.
**UPDATE:** To help clarify, we add a **new summary table (Global response table**) with the main takeaway and references to the relevant sections, tables and figures.
We hope this addresses your concerns and demonstrates the thoroughness of our comparisons.
---
## (B) Clarifying tabular models used
We thank you for your comment on the choice of downstream tabular models. We clarify that a core contribution of SMART is its targeted sampling of hypotheses, which is entirely independent of the model used. As detailed in **Section 3.3 and Section 4**, SMART's context-guided slice sampling mechanism $\pi$ is used to generate hypotheses independently of the downstream model. This is why the specific tabular model used is not the primary focus of our paper.
Despite this, we demonstrate SMART's effectiveness across multiple models, including Logistic Regression, SVM, XGBoost, and MLP, showing the applicability of our findings.
While we understand the reviewer's concern about "outdated" models, we argue that our chosen models remain highly relevant for tabular data:
- Interpretability: Models like logistic regression offer crucial interpretability in domains such as healthcare and finance and are still widely used in industry.
- Performance: Recent work has shown that traditional ML models like tree-based boosting methods (XGBoost) often outperform deep learning approaches on typical tabular data [Grinsztajn et al., NeurIPS 2022] and we would argue that tabular deep learning has not been universally adopted for tabular data.
That said, we have also run additional results with two tabular deep learning methods: **TabPFN and TabNet (Table 4 in the response pdf)**. Across all the methods, SMART is the best at finding subgroups where the models are least reliable.
---
## (C) Cost of LLM hypothesis generation & scalability to larger datasets
We clarify the cost and scalability of SMART — demonstrating not only that SMART is cheap but also easily scalable to large datasets.
- Scalability: SMART's scalability depends on the number of hypotheses generated, not dataset size (unlike data-only methods). This allows SMART to easily scale to arbitrarily large datasets.
- Cost Analysis: In practical terms, cost then also scales primarily with the number of hypotheses generated, not dataset size. We provide a rough estimate based on token counts of input and outputs for 2 datasets (SEER and OULAD). This would be $<0.10$ USD for 5 hypotheses and $<0.50$ USD for 100 hypotheses for state-of-the-art models.
| Model | Cost SEER 5 Hypotheses (USD) | Cost SEER 100 Hypotheses (USD) | Cost OULAD 5 Hypotheses (USD) | Cost OULAD 100 Hypotheses (USD) |
|----|-----|----------|------|------|
| GPT-4 | 0.017 | 0.249 | 0.022 | 0.316 |
| GPT-3.5 | 0.004 | 0.050 | 0.005 | 0.064 |
| GPT-4o Mini | 0.0003 | 0.005 | 0.0005 | 0.006 |
| GPT-4o | 0.008 | 0.125 | 0.011 | 0.158 |
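Per-run figures of this kind follow from simple arithmetic over token counts (a sketch under assumed per-1K-token prices; actual prices change over time and should be checked against the provider's current pricing page, and these are not the paper's exact inputs):

```python
# Illustrative sketch: LLM API cost from token counts.
# Prices below are assumptions for illustration (USD per 1K tokens),
# not figures taken from the paper.
PRICES = {
    "gpt-4":   {"input": 0.030, "output": 0.060},
    "gpt-3.5": {"input": 0.0005, "output": 0.0015},
}

def run_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost of one API call given input/output token counts."""
    p = PRICES[model]
    return (input_tokens / 1000) * p["input"] + (output_tokens / 1000) * p["output"]

# e.g. a hypothesis-generation call with a 2K-token prompt and a 1K-token response:
print(f"${run_cost('gpt-4', 2000, 1000):.3f}")  # → $0.120
```

Because cost scales with tokens per hypothesis rather than with dataset rows, the number of hypotheses generated (not dataset size) is the cost driver.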
---
## (D) SMART beyond tabular data (e.g. NLP)
We appreciate your interest in the generalizability of SMART. Our current focus is on tabular data due to its importance in high-stakes domains like healthcare and finance, as well as context being naturally encoded in tabular data, which is used to guide the targeted sampling of failures. Extending SMART to other domains like NLP requires non-trivial adaptation, warranting dedicated investigation.
Applying SMART to text would require addressing the lack of explicitly interpretable features and developing new ways to operationalize hypotheses on unstructured data. In contrast, tabular data offers inherent context due to the interpretability of features, which is crucial for context-aware hypothesis generation and operationalization.
**UPDATE:** We'll use the extra camera-ready page to highlight this as an interesting future avenue for extending the context-aware testing paradigm.
---
## (E) Discussion on future bias mitigation when using SMART
We agree with you that it is important to look at directions for removing biases from SMART. Our discussion on precisely this issue can be found in **Sec 5.4**. We realize that the title of the section might not fully convey this and hence will rename it (see update below).
To summarize, in **Sec 5.4** we have outlined several strategies to mitigate biases inherent in LLMs.
- (a) Using the training dataset to guide hypothesis generation.
- (b) Use our self-falsification mechanism to empirically test hypotheses against the actual data, thereby filtering out spurious hypotheses.
- (c) Transparent Reporting: SMART generates comprehensive model reports (example in Appendix D.7) that provide clear justifications for each hypothesis, allowing for human oversight and intervention if needed.
**UPDATE:** Rename section 5.4 as “Assessing and mitigating potential LLM challenges and biases” to better flag this.
---
## Thank you
Thank you for your engagement --- we believe these changes should improve the paper's contribution and clarity. Given these changes, we hope you might consider revising your assessment of the paper's impact and the corresponding evaluation for NeurIPS.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed rebuttal. While I still think it is important to extend SMART to scenarios beyond tabular data for more comprehensive evaluation, I would like to adjust my rating from 5 to 6.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer 6t3M,
We are glad our response was helpful and would like to thank you for raising your score! Thanks again for your time and suggestions which have helped us to improve the paper.
Regards
Paper Authors
---
Rebuttal 2:
Comment: Dear reviewer 6t3M
We are sincerely grateful for your time and energy in the review process.
We hope that our responses have been helpful. Given the limited time left in the discussion period, please let us know if there are any leftover concerns and if there was anything else we could do to address any further questions or comments.
Thank you!
Paper authors | Rebuttal 1:
Rebuttal: We thank the reviewers for their insightful and positive feedback!
We are encouraged that the reviewers found our work on model testing "important" (**R-6t3M, R-bfyv**) and "interesting" (**R-6t3M, R-bfyv, R-ctad**). They agreed our motivation for diverse perspective testing is "necessary for practical deployment" (**R-ctad**). Our Context-Aware Testing concept, which incorporates prior knowledge to guide the search for meaningful errors, was deemed "persuasive" and recognized as "a significant advancement in ML testing" (**R-H6db**) by "identifying an issue with existing (and common) methodology" (**R-bfyv**) and addressing the "core limitation of data-only testing" (**R-ctad**). The reviewers appreciated our "novel, clever and well motivated solution" (**R-bfyv**), which is the SMART testing LLM instantiation deemed to effectively demonstrate "the practical application of CAT" (**R-H6db**). Our "valuable" (**R-H6db**) empirical evaluations showed the effectiveness of SMART Testing in identifying significant model failures (**R-H6db**), demonstrating "significant" improvements (**R-6t3M**), complemented by our "important and well considered" discussion on mitigating potential LLM challenges (**R-bfyv**).
---
# Summary of experiments and findings
We'd like to highlight the experiments we've conducted as a summary. We hope these will clearly showcase the value of context-aware testing.
| Type | Figure/Table | Purpose | Finding |
|------|--------------|---------|---------|
| Quantitative | Figure 4 | Assess robustness to irrelevant features | SMART consistently avoids irrelevant features, outperforming data-only methods. |
| Quantitative | Table 9 | Evaluate ability to satisfy testing requirements | SMART satisfies most requirements while maintaining statistical significance. |
| Quantitative | Table 2 | Assess ability to identify significant model failures | SMART discovers slices with larger performance discrepancies across models. |
| Quantitative | Table 3 | Measure false negative rates in identifying underperforming subgroups | SMART achieves the lowest false negative rates in all settings. |
| Quantitative | Table 4 | Assess robustness to potential LLM biases | SMART effectively mitigates biases in identifying underperforming subgroups. |
| Quantitative | Figure 8 | Assess how sample size affects irrelevant feature detection | SMART consistently avoids irrelevant features regardless of sample size. |
| Quantitative | Table 10 | Evaluate performance in different deployment environments | SMART identifies more significant failure slices in new environments. |
| Quantitative | Figure 9 | Assess impact of sample size on performance | SMART consistently outperforms data-only methods across all sample sizes. |
| Quantitative | Table 11 | Evaluate tendency to flag non-existent failures | SMART avoids spurious failures, unlike data-only methods. |
| Quantitative | Table 6 | Compare performance of GPT-3.5 and GPT-4 in SMART | Both GPT versions in SMART outperform benchmark methods. |
| Quantitative | Table 12 | Evaluate SMART across different tabular ML models | SMART identifies larger performance discrepancies across various models. |
| Qualitative | Table 7 | Compare hypotheses generated by GPT-3.5 and GPT-4 | Both GPT versions generate similar, relevant failure hypotheses. |
| Qualitative | Appendix D.7 | Showcase SMART's practical output | SMART generates comprehensive model reports, providing clear justifications for each hypothesis. |
---
# Response pdf
In addition to the above experiments, we provide new insights based on our discussions. In the response pdf, we provide the following:
- **Table 1. Cost of SMART (USD)**. We quantify the cost of using SMART in USD for different closed-source GPTs. SMART is extremely accessible and cheap to use. Generating 100 hypotheses with GPT-4o Mini costs less than 1 cent.
- **Table 2. Comparison hypotheses between GPT-4 and open-weight models**. We provide insights into whether smaller open-weight models can be used as an alternative to more capable closed-source models. While we do not recommend this, we find significant overlap between smaller and larger models.
- **Table 3. Example hypotheses and justifications when dataset column names are hidden**. This table showcases that SMART cannot generate meaningful hypotheses if the feature names are not interpretable.
- **Table 4. Identifying slices with the highest performance discrepancies across two deep learning tabular classifiers**. We additionally run how SMART compares to other approaches for two state-of-the-art deep learning classifiers.
- **Figure 1. Identifying the importance of feature names as a source of information**. We show that the ability to detect model failures is smaller if column names are hidden.
---
# Thank you
The review process has been extremely productive. We thank everyone for their help in shaping the paper into a better form.
Pdf: /pdf/10c3687db9042b7e50150d0f8328ff5ab15467ff.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
EPIC: Effective Prompting for Imbalanced-Class Data Synthesis in Tabular Data Classification via Large Language Models | Accept (poster) | Summary: The paper proposes a method by which large language models are utilized to generate synthetic tabular data in order to mitigate class imbalances in existing datasets. The authors provide some tips which they discovered to result in more reliable data being generated, such as enforcing a CSV format. Their methods are validated on 6 real world datasets and one toy dataset.
Strengths: 1. the justifications for CSV formatting, class presentation, variable mapping, and task specification are all clear and make sense
2. the choice of experimental results is motivated sufficiently well
3. the provided results are extensive, including the plethora of ablation studies in the appendix
Weaknesses: 1. In Table 1, outside of the Travel dataset, the other 5 datasets do not show a significant increase in F1 score or balanced accuracy between the proposed method and the next closest baseline. Given this, it is not clear how to interpret the utility of the proposed method.
2. The motivation for why a large language model would be used to generate synthetic data for tabular settings, at scale, is a bit lacking. In particular, there needs to be discussion about the reasons for why the use of an LLM (with all of the computational and prompt-related overhead) would be preferable to existing data augmentation techniques for tabular data. In particular, it is not clear that the stated results provide enough evidence to persuasively demonstrate the utility of using LLMs for this purpose
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. In Table 1, for the row "Ours" --- which LLM was used to generate the synthetic data? If it is one of the GPT-3.5 models, then that should be stated up front so as not to increase confusion.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The limitations are addressed in Appendix G. However, they should be moved up to the main paper for full transparency. Additionally, the provided limitations are quite sparse. It would be helpful if the authors could think a bit more about the real-world usability of their method, as well as the drawbacks associated with using potentially closed-source large language models for their synthetic data generation purposes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q1. In Table 1, outside of the Travel dataset, the other 5 datasets do not show a significant increase in F1 score or balanced accuracy between the proposed method and the next closest baseline. Given this, it is not clear how to interpret the utility of the proposed method.
A1. Please see the general response, titled “**General Response: 2**.”
***
> Q2. The motivation for why a large language model would be used to generate synthetic data for tabular settings, at scale, is a bit lacking. In particular, there needs to be discussion about the reasons for why the use of an LLM (with all of the computational and prompt-related overhead) would be preferable to existing data augmentation techniques for tabular data. In particular, it is not clear that the stated results provide enough evidence to persuasively demonstrate the utility of using LLMs for this purpose
A2. Please see the general response, titled “**General Response: 1**.”
***
> Q3. In Table 1, for the row "Ours" --- which LLM was used to generate the synthetic data? If it is one of the GPT-3.5 models, then that should be stated up front so as not to increase confusion.
A3. We appreciate the reviewer's attention to detail. The results labeled 'Ours' in Table 1 were generated using the GPT-3.5-turbo model. To enhance clarity and avoid any potential confusion, we will explicitly specify the model used in Table 1 in our paper.
***
> Q4. The limitations are addressed in Appendix G. However, they should be moved up to the main paper for full transparency. Additionally, the provided limitations are quite sparse. It would be helpful if the authors could think a bit more about the real-world usability of their method, as well as the drawbacks associated with using potentially closed-source large language models for their synthetic data generation purposes.
A4. We appreciate your suggestion and will address the limitations in the main manuscript for full transparency.
Regarding the real-world usability of our method, our initial explanation may have been unclear. When the training dataset is large and cannot be fully included in the LLM prompt due to token size limitations, only a subset can be used as examples for generating samples. If these prompt samples do not fully represent the original data distribution, the generated data may be incomplete and of low quality. As illustrated in Fig. 10, an LLM can only generate a half-circle when prompted with one, highlighting its inability to produce data beyond what is presented in the input.
To address this limitation, our method employs multiple rounds of random sampling with replacement to create a combined dataset that more accurately represents the original distribution, resulting in improved machine learning classification performance. However, this approach still carries the risk that the samples may not fully capture the original data distribution. Future research could focus on developing techniques to better identify and sample key examples that more accurately reflect the entire dataset.
The selection of input examples can either reinforce or mitigate existing data biases. We proposed methods such as balancing and grouping to provide examples aimed at reducing these biases. While our primary focus was on addressing class imbalance, this approach could be extended to feature grouping, allowing for the generation of less biased data across specific features.
Additionally, we would like to clarify that our method has been tested with both open-source LLMs (Llama2 and Mistral) and closed-source LLMs (GPT3.5-turbo). The underlying code, data, and algorithms of the open-source LLMs are publicly available and can be accessed, reviewed, and modified by the research community. Notably, Mistral generated data with feature correlations more closely aligned with the original data than GPT3.5-turbo, particularly for minor classes, as shown in Figs. 4 and 5.
**Our method is model-agnostic and can be effectively used with open-source LLMs, not just closed-source ones, thereby not being limited by the drawbacks associated with closed-source LLMs.** However, when using closed-source LLMs, it is important to consider potential risks such as data leakage via API calls when handling privacy-sensitive data and the inability to directly access and verify the model being used. Ensuring the quality and security of the generated data in these circumstances is crucial.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for their detailed feedback and responses. I have updated my review accordingly.
---
Reply to Comment 1.1.1:
Comment: Thank you for taking the time to review our work and for your valuable feedback. We appreciate your consideration and are pleased that our responses have been helpful in updating your review.
We are committed to making further improvements and are prepared to address any additional suggestions or clarifications that could strengthen our work. | Summary: This paper proposed an LLM-driven tabular data class balancing approach, adapting the in-context learning paradigm and trying various formats and templates to explore optimal prompts to mitigate imbalanced tabular data. Experimental results showed that the proposed method alleviated the CSV-format data imbalance.
Strengths: 1. The authors probed CSV-format data generation from different perspectives, including data format, class presentation, variable mapping, and task specification.
2. Experiments and data analysis of this paper were adequate for understanding the proposed data balancing approach, and data performance has been improved to some degree.
Weaknesses: 1. The research purpose of this paper covers merely CSV-format data, limiting the broader adaptability and impact of the proposed method.
2. This paper merely tried to find plausible optimal prompts by hand-crafting, without theoretical analysis, which leaves the prompting process a black box.
3. Empirical results are not consistent across all test datasets, especially in the specificity aspect, indicating the instability of the prompting approach.
4. According to the prompting design, the prompting process appears somewhat sophisticated; however, there are no computational cost comparison experiments.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. This paper proposed to mitigate the label imbalance issue for tabular data by generating data for scarce labels, but how can it be guaranteed that the generated data and labels are matched?
2. The presented method is basically an instruction prompting approach, and why didn’t the authors analyze the LLM during the prompting process to make it more traceable?
3. How did the proposed method handle missing data or noisy features in the original dataset?
4. Empirical results on the Travel data outperform those on the other datasets; does that mean such a prompt is more suitable for it, and that better prompts exist for the other datasets?
5. The overall performance of the proposed method achieved limited improvement; have the authors computed an improvement-to-computing-cost ratio to make the benefit more intuitive?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors have adequately discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Due to space constraints, we denote weaknesses and questions as W and Q, respectively.
> W1. Research scope
A. Focusing on tabular data does not limit the adaptivity or impact of our method. Tabular data, composed of mixed variable types such as numerical and categorical variables, represents a widely applicable and essential data format. It is the backbone of many datasets in various fields, including finance, healthcare, manufacturing, and natural sciences [1]. Enhancing performance on tabular data has a significant impact on real-world applications.
Substantial research has been dedicated to tabular data, such as data generation and classification [2]. By addressing class imbalance and enhancing classification outcomes, our work contributes to the advancement of this crucial field, significantly impacting various domains.
***
>W2, Q2. LLM analysis
A. With the remarkable advancements in LLMs, extensive research has been conducted in prompt engineering to maximize their potential, yielding significant value. Optimizing prompts for specific tasks is inherently a combinatorial challenge, and in the absence of established optimization principles, progress has often been driven by heuristic methods validated through rigorous empirical evaluations [3].
Our study adopts this empirical approach, conducting extensive experiments to develop effective prompts for synthetic tabular data generation. **Our work is distinguished by the thorough and comprehensive evaluations we performed across six real-world datasets from various domains and synthetic toy data, aiming to provide deeper insights into the prompting process.** We addressed experiments and analyses often overlooked in prior research on tabular data generation:
* We used a synthetic dataset to observe how different prompts influence the accuracy of generated data distributions (Figs. 1, 7). For example, multi-class prompts yielded more accurate distributions than single-class prompts, as they allow LLMs to contrast different classes (Fig. 1, 4th row).
* We explored the sampling of input examples and their corresponding outputs in LLMs using the toy set, providing insights into the variability and reliability of generated data (Figs. 8, 9, 10).
* We conducted three unique experiments to analyze the utility of our method in enhancing ML classification performance: augmenting the original dataset with generated data (Tables 1, 17), augmenting only the minority class similar to SMOTE (Table 6), and using only generated data (Table 7).
* We distinctively analyzed feature correlations by separately examining minor and major classes, comparing them across all prompt variations (Figs. 4, 5).
* We investigated how varying numbers of generated samples affect classification performance (Figs. 6, C).
* We ablated prompt elements to compare classification performance (Tables 3, 5) and conducted unique analyses of token usage and LLM generation efficiency (Tables 4, 8, 9).
* We conducted experiments using three LLMs: Mistral, Llama2, and GPT-3.5-turbo (Table 2, Figs. 4, 5).
These analyses demonstrate how specific prompt design components impact the quality of generated data, addressing concerns about prompt optimization being a black box. While our study provides substantial empirical insights, we acknowledge that some areas may need further exploration and welcome detailed feedback, with additional analyses available.
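For concreteness, the prompt-design elements discussed in this thread (CSV formatting, multi-class presentation, variable mapping, task specification) could be assembled roughly as sketched below. This is an illustrative reconstruction, not the authors' actual prompt or code; all function names and template wording are hypothetical.

```python
def map_variables(rows, header):
    """Variable mapping: replace raw categorical values with short unique
    tokens so the LLM sees compact, unambiguous symbols."""
    mapping = {}
    mapped = []
    for row in rows:
        out = []
        for col, val in zip(header, row):
            # keep numeric values as-is; map categorical values to V0, V1, ...
            if str(val).replace('.', '', 1).isdigit():
                out.append(str(val))
            else:
                key = (col, val)
                mapping.setdefault(key, f"V{len(mapping)}")
                out.append(mapping[key])
        mapped.append(out)
    return mapped, mapping

def build_prompt(examples_by_class, header, n_new=5):
    """Assemble a CSV-format, multi-class in-context prompt: task
    specification first, then examples of every class for contrast."""
    lines = ["Task: generate realistic new rows in CSV format.",
             "Columns: " + ",".join(header)]
    for label, rows in examples_by_class.items():  # multi-class presentation
        lines.append(f"Examples of class {label}:")
        lines += [",".join(r) for r in rows]
    lines.append(f"Generate {n_new} new rows for the minority class:")
    return "\n".join(lines)
```

Presenting all classes in one prompt mirrors the rebuttal's observation (Fig. 1, 4th row) that multi-class prompts let the LLM contrast classes and produce more accurate distributions.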
***
>W3. Performance
A. Please see General Response 2.
***
>W4. Cost
A. We compared the computational cost of our method and its ablated versions in Tables 4, 8, and 9. Given a fixed number of input samples, we evaluated (1) the number of input tokens required, (2) the number of valid generated samples, and (3) the generation success rate. Our proposed prompt demonstrates superior efficiency compared to other prompt designs.
***
>Q1. Label matching
A. We demonstrated that the data generated by our method aligns well with the labels through detailed experiments:
* Our method successfully produces data samples that match the classes, as denoted by color in Fig. 1, even when compared to fine-tuned GReaT.
* For a machine learning model to perform well in classification, the correlation between input data and labels in the training data must be precise. Adding our synthetic data to the original data consistently improved the F1 score and balanced accuracy across six datasets (Table 1).
* Our synthetic data exhibits the closest feature correlation with the original data for each class, outperforming the baselines (Figs. 4, 5).
These results indicate that our method generates data that is more accurately matched to the classes than the baselines.
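The per-class feature-correlation check referenced here (Figs. 4, 5) can be illustrated with a minimal sketch: compute pairwise Pearson correlations over the feature columns of the original and synthetic rows for one class, then compare the two matrices. This is a hypothetical illustration, not the paper's evaluation code.

```python
import math

def pearson(xs, ys):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def corr_matrix(rows):
    """Pairwise Pearson correlations between feature columns."""
    cols = list(zip(*rows))
    k = len(cols)
    return [[pearson(cols[i], cols[j]) for j in range(k)] for i in range(k)]

def corr_distance(real_rows, synth_rows):
    """Mean absolute difference between correlation matrices; lower means
    the synthetic data preserves feature relations better."""
    a, b = corr_matrix(real_rows), corr_matrix(synth_rows)
    k = len(a)
    return sum(abs(a[i][j] - b[i][j]) for i in range(k) for j in range(k)) / (k * k)
```

Running `corr_distance` separately on minor-class and major-class rows corresponds to the per-class analysis the rebuttal describes.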
***
>Q3. Preprocessing
A. Similar to GReaT, we retained the original data, including missing or noisy features, leveraging the capabilities of LLMs. The exception was the Sick dataset, where we followed the source’s method, replacing categorical missing values with "?" and imputing numerical values with the mean.
***
>Q4. Optimal prompt
A. We carefully selected key prompt design choices and validated our method through extensive ablation studies across **multiple** datasets from diverse domains, including social, healthcare, and marketing:
* ML classification performance on three datasets (Tables 3, 5)
* Feature correlations on two datasets (Figs. 4, 5),
* Generated data distribution on the toy set (Figs. 1, 7)
While it is possible that more advanced prompts could be developed in future work, our approach currently represents the state-of-the-art method for tabular data generation.
***
>Q5. Cost-benefit analysis
A. Please see General Response 1.
***
[1] Breugel et al., Why Tabular Foundation Models Should Be a Research Priority, ICML’24
[2] Fang et al., Large Language Models (LLMs) on Tabular Data: Prediction, Generation, and Understanding - A Survey, TMLR’24
[3] Sahoo, et al., A systematic survey of prompt engineering in large language models: Techniques and applications., arXiv’24
---
Rebuttal 2:
Comment: Thank you for the responses, some concerns were addressed, however, there are still some questions.
1. Since the proposed approach is a prompt-based generation, have the authors tried to optimize the fixed prompt with some prompt optimizing methods, such as the Self-discover [1] and OPRO [2]?
2. There was no detailed introduction of the various tabular datasets and no case study, which left me unsure about the generalizability of the proposed method.
3. For the label matching of generated data, the authors need to employ advanced tabular data classification algorithms to evaluate the quality.
[1]. Self-Discover: Large Language Models Self-Compose Reasoning Structures.
[2]. Large Language Models as Optimizers.
---
Rebuttal 3:
Comment: Thank you for your valuable comments and for taking the time to review our paper. We are glad that some of your concerns have been addressed, and we appreciate the opportunity to clarify and expand on the additional points.
***
> Q1. Prompt optimization
A1. We appreciate the suggestion, but there are significant differences between the typical applications of these methods and the challenges of synthetic tabular data generation.
Self-discover and OPRO are designed to enhance LLM accuracy in structured tasks like multiple-choice questions or math problems, where correctness is relatively explicit. However, synthetic tabular data cannot be directly evaluated for correctness, making these methods less applicable to our domain.
Recognizing the potential, we adapted OPRO, which provides official code, to our task by optimizing prompts based on the classification accuracy of a robust classifier, CatBoost. Using the Thyroid dataset with GPT-3.5-turbo, we applied OPRO to optimize our prompt. Unfortunately, **the optimized prompts failed to produce valid synthetic tabular data**. For instance, one generated prompt was:
`No,45,F,No,No,No,Euthyroid,Single nodular goiter-right,No,Papillary,Multi-Focal,Intermediate,T3a,N1b,M0,II,Excellent`
This is because OPRO is designed to optimize simple instructions, e.g., `Take a deep breath and work on this problem step-by-step` or `Break this down`, as shown in Table 1 of the OPRO paper. **While effective for question-answering and reasoning tasks, these types of prompts are unsuitable for constructing the complex structures required in synthetic tabular data generation.**
Our experiments indicate that our proposed approach remains the most effective method for generating high-quality synthetic tabular data. Future research could explore prompt optimization methods specifically designed for tabular data generation to further enhance performance.
***
> Q2. Dataset details & Generalizability
A2. As noted in Section 3, we provided dataset details in Appendix I.2. However, **we acknowledge that this information may not be sufficiently highlighted in the main text**.
To address this, **we will revise the manuscript to include a comprehensive introduction to these datasets within the main text, clearly demonstrating the broad applicability and generalizability of our method.** We will also emphasize the importance of tabular data research in enhancing decision-making and efficiency in various real-world applications, better communicating the impact and significance of our work in multiple fields.
***
> Q3. Advanced classifier for label matching
A3. We employed top-performing tabular classifiers, XGBoost, CatBoost, LightGBM, and Gradient boosting classifier, known for their strong performance, often surpassing recent deep learning models. These models have served as robust baselines in tabular classification (TabR, ICLR’24 [1]) and have been used to assess generated data quality in tabular generation studies (the benchmark paper, NeurIPS’23 [2] and TabDDPM, ICML’23). In our work, we evaluated label matching quality by averaging results across 20 runs with these models, five runs each, demonstrating the superiority of our method.
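The metrics referenced throughout this evaluation (sensitivity, specificity, F1, balanced accuracy) reduce to simple confusion-matrix arithmetic; a minimal pure-Python sketch for the binary case follows (hypothetical helper, not the authors' code — averaging over repeated runs, as described above, is just a loop over this function).

```python
def binary_metrics(y_true, y_pred, pos=1):
    """Sensitivity, specificity, F1, and balanced accuracy from raw labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == pos and p == pos)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != pos and p != pos)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != pos and p == pos)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == pos and p != pos)
    sens = tp / (tp + fn) if tp + fn else 0.0   # recall on the minority class
    spec = tn / (tn + fp) if tn + fp else 0.0
    prec = tp / (tp + fp) if tp + fp else 0.0
    f1 = 2 * prec * sens / (prec + sens) if prec + sens else 0.0
    bal_acc = (sens + spec) / 2                 # balanced accuracy
    return {"sensitivity": sens, "specificity": spec,
            "f1": f1, "bal_acc": bal_acc}
```

Balanced accuracy averages sensitivity and specificity, which is why the rebuttals track the balance between the two when class imbalance is severe.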
We conducted preliminary experiments using recent in-context learning-based tabular classification methods, TabPFN (ICLR’23) [3] and T-Table (KDD’24) [4], on Travel, as detailed in **Table B of the attached PDF**. We also tested TabR (ICLR’24) [1], an advanced deep-learning tabular classification model, using its official code.
|Model|Original|+Ours|+TabDDPM|+GReaT|
|-|-|-|-|-|
|TabR|46.41|**60.78**|44.88|32.41|
The F1 score results indicate that **while the advanced models like TabPFN, T-Table, and TabR did not outperform traditional classifiers, adding synthetic data generated by our method consistently led to significant performance improvements, even with advanced tabular classification models.** In contrast, baselines resulted in performance decreases. Our method uniquely and consistently enhanced the performance of different classifiers, demonstrating superior label matching quality.
Furthermore, as discussed earlier, our label matching quality is validated, regardless of the classifier, by:
* Distinct class distribution in the generated data (Fig. 1, Toy set)
* Feature correlation similarity with the original data across classes (Fig. 4, Travel & Fig. 5, Sick)
These findings affirm that our model produces data with the best label matching compared to other baselines.
***
[1] TabR: Tabular Deep Learning Meets Nearest Neighbors, ICLR’24
[2] Reimagining Synthetic Tabular Data Generation through Data-Centric AI: A Comprehensive Benchmark, NeurIPS’23
[3] TabPFN: A Transformer That Solves Small Tabular Classification Problems in a Second, ICLR’23
[4] From Supervised to Generative: A Novel Paradigm for Tabular Deep Learning with Large Language Models, KDD’24
[5] Self-consistency improves chain of thought reasoning in language models, ICLR’23
---
Rebuttal Comment 3.1:
Comment: Thank you for the reply, my concerns were addressed, and I will update the rating score correspondingly.
---
Reply to Comment 3.1.1:
Comment: We are pleased that our clarifications addressed your concerns, and we appreciate your decision to raise the score. Thank you for your insightful feedback and for taking the time to review our work.
We remain committed to addressing any further suggestions or clarifications that could strengthen our work. | Summary: The paper explores the domain of tabular data generation using in-context learning with LLM, in order to improve performance of an ML classifier, especially in imbalanced classes scenarios. The paper explores different prompting techniques, with detailed results on 6 datasets as well as a visualized study on a toy dataset. The paper also compares the result performance boost on several ML classifiers, which is indeed significant. There's a further discussion of efficiency and stability of the generation itself, and how it is affected given the different prompting methods proposed. Results using three different LLMs are presented.
Strengths: The paper is very clear and discusses almost all important points soundly. It can be used as a guide both for researches facing the problem of generating data for imbalanced class learning, and for researches who are planning to study on LLM prompting techniques in a methodical way. It also presents clear benefits to using the proposed method. I enjoyed reading this paper very much.
Weaknesses: The main discussion that I missed in the paper was regarding how much extra data ends up being generated, and whether generating more and more data using this method makes any sense. I understand that the method uses actual data points, and samples from the original dataset without replacement. This probably creates a limitation to data creation - a point which is not discussed in the main paper body, and should be added. Furthermore, it might make sense to scramble and re-sample the dataset to create even more samples. I would have liked to see a discussion of this point, both as to how much the datasets are actually enlarged using the no-replacement sampling, and also whether it makes sense to generate more data with replacement, with the limits of this method discussed - when do we reach repetition and duplication, without any further improvement to the classifiers?
I'm also missing the original dataset sizes and minority vs majority class distributions - to show more strongly the class imbalance. In fact, no dataset descriptions are given at all, which in my opinion takes away from the paper.
Technical Quality: 4
Clarity: 4
Questions for Authors: I'd add which model was used in table 1 (I'm assuming it's ChatGPT).
I would stress more clearly the counts of additional synthetic data and why you sometimes use 1K and sometimes 10K.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: Limitations are discussed in the appendix, which is okay. I would perhaps add further discussion on social implications, clarifying the meaning of Fig. 10 - this method might create more bias in already biased data.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q1. The main discussion that I missed in the paper was regarding - how much extra data ends up being generated, and does generating more and more data using this method make any sense? I understand that the method uses actual data points, and samples from the original dataset without replacement. This probably creates a limitation to data creation - a point which is not discussed in the main paper body, and should be added. Furthermore, it might make sense to scramble and re-sample the dataset to create even more samples. I would have liked to see a discussion of this point, both to how much the datasets are actually enlarged using the no-replacement sampling, and also if it makes sense to generate more data with replacement, with the limits of this method discussed - when do we reach repetition and duplication, without any further improvement to the classifiers.
A1. Thank you for your insightful comment. You are correct that sampling without replacement limits the amount of synthetic data that can be generated to the number of actual data points. However, we would like to clarify that our method uses sampling with replacement to generate synthetic data. While we ensure that there are no overlapping examples within each prompt to maintain diversity, each prompt is constructed using sampling with replacement. We will clearly state this in our main manuscript.
In response to your suggestion, we conducted **experiments to evaluate how much the datasets can be enlarged and how this impacts classification performance, comparing sampling with and without replacement**. As shown in **Fig. C of the attached PDF**, performance improves steadily in both scenarios as the volume of generated data increases. However, the improvement is constrained when sampling without replacement due to the limited number of possible samples.
When sampling with replacement, as the dataset size expands, there is a noticeable improvement in the balance between sensitivity and specificity, which contributes to enhanced overall performance, including gains in balanced accuracy (BAL ACC) and F1 score. Generating up to 40K synthetic data points resulted in even better performance than the 20K synthetic data points reported in Table 1 of our main manuscript. We also observed that as the volume of generated data continues to increase, the gains in balanced accuracy and F1 score eventually plateau, indicating diminishing returns and suggesting that further data generation beyond a certain point offers limited additional benefit.
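The sampling scheme described in A1 — with replacement across prompts, but no duplicate examples within any single prompt — might be sketched as follows (illustrative only; the function name and signature are hypothetical, not the authors' implementation).

```python
import random

def sample_prompt_examples(dataset, n_prompts, per_prompt, seed=0):
    """Draw example sets for each prompt: the same record may appear in
    multiple prompts (with replacement across prompts), but each prompt's
    examples are distinct (without replacement within a prompt)."""
    rng = random.Random(seed)
    prompts = []
    for _ in range(n_prompts):
        # random.sample draws without replacement, ensuring in-prompt diversity
        idx = rng.sample(range(len(dataset)), min(per_prompt, len(dataset)))
        prompts.append([dataset[i] for i in idx])
    return prompts
```

Because each prompt is drawn independently, the total number of synthetic samples is not capped by the dataset size, which is what allows the 40K-sample experiment described above.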
***
> Q2. I'm also missing the original dataset sizes and minority vs majority class distributions - to show more strongly the class imbalance. In fact, no dataset descriptions are given at all, which in my opinion takes away from the paper.
A2. We appreciate your observation. The dataset descriptions, including the original dataset sizes, are provided in Table 10 of Appendix I.2. However, we recognize that we omitted the class distribution information, which is now included in **Fig. D of the attached PDF**. Notably, the datasets where our method showed the most significant performance improvements (Travel, Income, and Sick) exhibit substantial class imbalances. We will incorporate this information into the main paper to enhance the clarity and impact of our findings.
***
> Q3. I'd add which model was used in table 1 (I'm assuming it's ChatGPT).
A3. We appreciate the reviewer's attention to detail. The results labeled 'Ours' in Table 1 were generated using the GPT-3.5-turbo model. To enhance clarity and avoid any potential confusion, we will explicitly specify the model used in Table 1 in our paper.
***
> Q4. I would stress better the counts of additional synthetic data and why do you sometimes use 1K and sometimes 10K.
A4. The number of synthetic data points we generated was based on the size of the original datasets. For datasets with fewer than 10K samples, we generated 1K synthetic data points; for larger datasets, we generated 10K.
***
> Q5. Limitations are discussed in the appendix, which is okay. I would perhaps add further discussion on social implications, clarifying the meaning of fig 10 - this method might create more bias in already biased data.
A5. Our initial explanation may have been unclear. When the training dataset is large and cannot be fully included in the LLM prompt due to token size limitations, only a subset can be used as examples for generating samples. If these prompt samples do not fully represent the original data distribution, the generated data may be incomplete and of low quality. As illustrated in Fig. 10, an LLM can only generate a half-circle when prompted with one, highlighting its inability to produce data beyond what is presented in the input.
To address this limitation, our method employs multiple rounds of random sampling with replacement to create a combined dataset that more accurately represents the original distribution, resulting in improved machine learning classification performance. However, this approach still carries the risk that the samples may not fully capture the original data distribution. Future research could focus on developing techniques to better identify and sample key examples that more accurately reflect the entire dataset.
The selection of input examples can either reinforce or mitigate existing data biases. We proposed methods such as balancing and grouping to provide examples aimed at reducing these biases. While our primary focus was on addressing class imbalance, this approach could be extended to feature grouping, allowing for the generation of less biased data across specific features.
Additionally, in Appendix F, we discuss the social implications, particularly the potential misuse of generated data to deceive systems or individuals. This highlights the importance of ethical considerations in applying our methods. | Summary: The paper investigates how to use LLMs to generate synthetic tabular data for mitigating class imbalance in machine learning tasks. By exploring various prompting methods, the authors aim to identify key design elements that optimize the generation performance. The paper shows that using GPT3.5/Mistral/LLaMA to balancing classes, and employing unique variable mapping produces realistic and reliable data, enhancing classification performance (XGBoost etc) for minor classes in imbalanced datasets.
Strengths: 1. The paper provides a detailed exploration of various prompt design elements, such as data format, class presentation, and variable mapping, offering valuable insights for future research in this direction.
2. The proposed methods are easy to implement and require minimal preprocessing, making them accessible for a wide range of applications in tabular data classification tasks.
3. The experimental results show that the proposed approach consistently improves machine learning classification performance, particularly for minor classes.
Weaknesses: 1. Why not use the LLM itself to perform classification: The designed method uses LLMs to generate data examples for imbalanced classes, which means the LLMs must already have a good ability in modeling the data distribution even for the imbalanced classes. If that's the case, why not use the LLM itself to perform classification directly with the help of in-context learning? I expect that it will be a strong baseline compared to the XGBoost classifiers. The only downside of LLM-based classifiers may be the inference cost, but we should include this baseline.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Could you provide the experiment results of in-context learning LLM-based classifiers on these tasks?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes in Appendix G
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q1. Why not use the LLM itself to perform classification: The designed method uses LLMs to generate data examples for imbalanced classes, which means the LLMs must already have a good ability in modeling the data distribution even for the imbalanced classes. If that's the case, why not use the LLM itself to perform classification directly with the help of in-context learning? I expect that it will be a strong baseline compared to the XGBoost classifiers. The only downside of LLM-based classifiers may be the inference cost, but we should include this baseline.
> Q2. Could you provide the experiment results of in-context learning LLM-based classifiers on these tasks?
A1. Thank you for your insightful suggestion. We concur that the demonstrated ability of LLMs to generate high-quality synthetic data for imbalanced classes suggests their potential for directly addressing classification tasks. Indeed, this represents a promising avenue for future exploration.
However, the challenge lies in **designing prompts to fully harness this potential.** The effectiveness of LLMs is heavily influenced by how the prompts are crafted. Numerous studies have shown that performance can vary significantly depending on prompt design, leading to ongoing research aimed at optimizing prompts for specific tasks [1,2]. Our work focuses on designing prompts that enable LLMs to efficiently generate high-quality tabular data, particularly to address class imbalance. While leveraging LLMs for direct classification is an intriguing direction to improve classification performance, it is beyond the scope of our present study.
We conducted **preliminary experiments using existing in-context learning-based tabular data classification methods**, such as TabPFN [3], which utilizes a pretrained transformer, and T-Table [4], which employs LLMs. Specifically, as detailed in **Table B of the attached PDF**, we evaluated classification performance on the Travel dataset using the original data and synthetic data generated by tabular data generation methods. For T-Table, we employed the GPT3.5-turbo model and, due to token limits, included all the original data but only 200 synthetic samples within each input prompt. Additionally, we used voting across five inferences to enhance performance [5].
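For concreteness, the voting step can be sketched as a simple majority vote over repeated inference runs (the function name and class labels below are illustrative assumptions, not part of our actual pipeline):

```python
from collections import Counter

def majority_vote(predictions):
    """Return the most common label across repeated LLM inferences.

    `predictions` is a hypothetical list of class labels, one per
    inference run (five runs in our T-Table setup).
    """
    return Counter(predictions).most_common(1)[0][0]

# Example: five inference runs on one test row
runs = ["churn", "stay", "churn", "churn", "stay"]
print(majority_vote(runs))  # -> churn
```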
The results indicate that while the newly introduced models did not outperform traditional classifiers, adding synthetic data generated by our method to the original data consistently led to the greatest performance improvements in both TabPFN and T-Table models. These findings underscore **the value of high-quality synthetic data in enhancing classifier performance across diverse models**, highlighting the importance of research on data generation as a distinct research area, separate from classifier development.
Tabular data generation is a critical area of research with significant implications [6,7,8]. This task serves two primary purposes:
* Enhancing classification performance in a model-agnostic manner through data augmentation, similar to SMOTE, as the results demonstrated in Tables 1, 2, and 6.
* Generating synthetic data to replace original data in security or privacy-sensitive contexts, as the results demonstrated in Table 7.
In fields such as healthcare, where obtaining new samples or achieving balanced class labels is challenging and where data may be noisy or incomplete, generating high-quality synthetic data is crucial.
In conclusion, even as more advanced classification models for tabular data are developed in the future, **our proposed method could continue to play a crucial role in enhancing performance by generating high-quality synthetic data**, thereby potentially making a significant impact on the tabular data classification community and related tasks.
***
[1] Kojima, et al., Large language models are zero-shot reasoners., NeurIPS’22
[2] Sahoo, et al., A systematic survey of prompt engineering in large language models: Techniques and applications., arXiv’24
[3] Hollmann, et al., TabPFN: A Transformer That Solves Small Tabular Classification Problems in a Second., ICLR’23
[4] Wen, et al., From Supervised to Generative: A Novel Paradigm for Tabular Deep Learning with Large Language Models., KDD’24
[5] Wang, et al., Self-consistency improves chain of thought reasoning in language models., ICLR’23
[6] Yang, et al., Language-Interfaced Tabular Oversampling via Progressive Imputation and Self-Authentication. ICLR’24
[7] Seedat, et al., Curated llm: Synergy of llms and data curation for tabular augmentation in ultra low-data regimes., ICML’24
[8] Borisov, et al., Language models are realistic tabular data generators., ICLR’23
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply! The response resolved my concerns. I will raise my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your valuable feedback and for taking the time to review our work. We are glad our clarifications addressed your concerns, and we appreciate your consideration in raising the score.
We remain committed to addressing any further suggestions or clarifications that could enhance the quality of our work. | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewers' valuable feedback and positive support, recognizing our method as
* Providing valuable guidance for researchers addressing class imbalance (zoZq, p2sf).
* Clear, well-articulated, and comprehensive in covering the method (zoZq, p2sf, 8bof, kZFF).
* Supported by extensive, well-motivated experiments (8bof, kZFF).
* Demonstrating clear benefits and consistent improvements, especially for minor classes (zoZq, p2sf).
**We will thoroughly incorporate their feedback into the camera-ready version.**
Please refer to **the attached PDF** for the updated figures and tables.
We hope our responses address all reviewers’ concerns and would greatly appreciate any further comments or clarifications.
***
# General response 1: Cost-benefit considerations of using LLMs (kZFF, 8bof)
A. The motivation for using LLMs for synthetic tabular data generation stems from the limitations of existing methods, particularly in addressing underrepresented classes within imbalanced datasets. Traditional approaches often replicate existing biases, which can degrade ML classification performance, sometimes resulting in worse outcomes than using the original data. Our approach overcomes these by using in-context learning to present balanced, grouped examples to LLMs, effectively leveraging their advanced pattern recognition to mitigate data imbalance.
While the costs associated with LLMs are a valid concern, our findings show that these costs are manageable. For example, **generating 1,000 new samples costs less than $1 and takes under 100 seconds** with GPT API calls:
* Sick (27 features): $0.71, 85.0 seconds
* Income (14 features): $0.45, 95.6 seconds
Additionally, using open-source LLMs like Mistral or Llama2 requires approximately 25GB of GPU memory.
**Our approach also offers significant reductions in various overheads**:
* **Minimal preprocessing**: Unlike existing methods that often require extensive data preprocessing and handling of noisy or missing data, our LLM-based approach minimizes these steps.
* **No training**: Our method eliminates the need for extensive model training and hyperparameter tuning.
* **Single model versatility**: Our approach uses a single model across multiple datasets, unlike traditional methods that require separate models for each dataset.
* **Optimized prompting**: We provide a prompt specifically designed for synthetic tabular data generation, adaptable across various datasets with minimal adjustments, significantly reducing prompt engineering efforts.
* **Open access**: Our provided code allows users to easily generate new samples using open-source LLMs via Hugging Face or the GPT API.
**Despite the reduced costs, our method consistently outperforms existing baselines, producing the highest quality synthetic data**:
* **Consistent performance improvement**: To our knowledge, our method is the first to consistently outperform using only original data or SMOTE among the baselines, achieving state-of-the-art results. This was validated across six real-world datasets from diverse domains, including marketing, medical, finance, and social sciences (Tables 1, 6, 17, **A**).
* **Effective class imbalance mitigation**: Our method excels in generating balanced and representative data for minor classes in imbalanced datasets, preserving feature correlations (Figs. 4, 5) and improving overall model performance. As shown in **Fig. A**, our method achieves balanced improvements across key metrics, underscoring its robustness and practicality for real-world applications.
Our LLM-based approach offers a cost-effective and high-quality solution for synthetic tabular data generation, demonstrating the significant advantages of using LLMs for a wide range of real-world applications.
***
# General Response 2: Significance of performance improvements (kZFF, 8bof)
A. The relatively modest improvements in F1 score and balanced accuracy are due to the challenges of imbalanced datasets. As shown in **Fig. C**, while the gains in F1 score and balanced accuracy between 0 and 40K samples may seem modest, the sensitivity actually increases significantly from around 50% to 70%. This improvement balances performance across all classes, leading to a more meaningful outcome and greatly enhancing model usability.
Baselines often learn biases in the training data, resulting in abnormally high specificity at the expense of sensitivity. This imbalance can inflate balanced accuracy or F1 scores, but such performance is ineffective for real-world tasks. In contrast, our method significantly enhances sensitivity with only a slight reduction in specificity. As a result, **our method achieves the most balanced performance across all four metrics**, as shown in **Fig. A**.
Our method is the only approach among the 7 baselines that consistently improves both F1 score and balanced accuracy compared to using only the original data (Table 17). Extensive experiments across six real-world datasets from diverse domains (finance, medical, marketing, and social) using four classifiers, each tested five times, demonstrate the state-of-the-art performance of our method, with significant gains in the highly imbalanced Travel, Sick, and Income datasets (**Figs. B, D**). **While GReaT and TabDDPM are the closest baselines, they exhibit inconsistent performance.** As highlighted in **Table A**, they reduce original data performance in more than half of the cases (red). For example, both underperform on HELOC and Diabetes.
In stark contrast, **our method consistently outperforms the original data across all datasets** (blue). In the challenging Diabetes dataset, only our method improves over the original data. While specificity decreases in all methods, our method still achieves a superior balance of metrics, leading to greater practicality.
These results underscore that our method offers the greatest practical utility and a meaningful performance advantage among the models tested.
Pdf: /pdf/d53750dca32c28866abb737249a5a6f00a93ee29.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
RankUp: Boosting Semi-Supervised Regression with an Auxiliary Ranking Classifier | Accept (poster) | Summary: The submission presents a new semi-supervised learning algorithm for deep regression models: in addition to the standard supervised regression head, an auxiliary classification head is trained using a semi-supervised learning algorithm for classification that learns to classify pairs of examples. A pair is classified correctly if the example with the larger target value is ranked above the example with the smaller target value. The loss for supervised regression is used in a weighted sum with the loss for semi-supervised classification. A further improvement to the algorithm is obtained by introducing an additional component in the loss that encourages the distribution of predicted target values to be aligned with the distribution of ground-truth target values. Experiments on three regression problems, one involving images, one involving audio, and one involving text, indicate that the proposed method outperforms semi-supervised regression algorithms from the literature.
Strengths: The submission presents a nifty idea for semi-supervised regression: formulate a label ranking problem using the regression task at hand to be able to apply a semi-supervised learning algorithm for classification. It additionally introduces a distribution alignment loss for regression that yields further improvements in performance.
Weaknesses: The only potential problem I see is the influence of augmentation on the results. The FixMatch method used to train the auxiliary classifier, shown in Algorithm 1, applies weak augmentation for the supervised component of the loss. For a fair comparison, it seems crucial to apply this weak augmentation also on the labeled data for all the other semi-supervised and purely supervised configurations evaluated in the empirical comparison. The submission does not state whether this was done.
Another, smaller issue is that the submission does not state exactly which operators were used for augmentation in the three different domains.
My negative ratings for soundness, contribution, and the overall quality are based purely on this concern regarding the use of augmentation. *** I HAVE INCREASED MY RATINGS AFTER THE REBUTTAL ***
Technical Quality: 3
Clarity: 3
Questions for Authors: Was weak augmentation on the labeled data applied consistently across all the semi-supervised and supervised learning algorithm configurations evaluated in the comparison?
Which forms of augmentation were applied in the three different domains?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The submission discusses limitations of the proposed approach.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and highlighting the strengths of our approach. We appreciate your careful consideration, particularly regarding augmentation. We'd like to address your concerns and questions:
### **W1) Influence of augmentation on results**
We confirm that **weak augmentation was applied** consistently to the labeled data across **all** semi-supervised and supervised learning algorithm configurations in our evaluation. This ensures a fair comparison between our proposed method and the baselines, ensuring that performance improvements are not due to differences in augmentation strategies. We will include these details in the revised paper for better clarity. Our codes along with implementation details will be published if the paper is accepted.
To further analyze the effect of weak augmentation, we conducted a **new ablation study** on the UTKFace dataset with 250 labeled samples, **without** weak augmentation applied to the labeled data. Below is a comparison of the results:
| | Weak Augmentation | MAE ↓ | $R^2$ ↑ | SRCC ↑ |
|:----------------------|:------------------:|:--------------:|:-----------:|:----------:|
| Supervised | Yes | 9.42±0.16 | 0.540±0.014 | 0.712±0.010 |
| **Supervised** | **No** | **11.73±0.17** | **0.315±0.012** | **0.556±0.020** |
| $\pi$ Model | Yes | 9.45±0.30 | 0.534±0.030 | 0.706±0.015 |
| **$\pi$ Model** | **No** | **11.97±0.39** | **0.319±0.029** | **0.567±0.026** |
| MixMatch | Yes | 7.95±0.15 | 0.692±0.013 | 0.832±0.008 |
| **MixMatch** | **No** | **10.80±0.09** | **0.446±0.021** | **0.716±0.003** |
| RankUp + RDA (Ours) | Yes | 6.57±0.18 | 0.782±0.012 | 0.856±0.005 |
| **RankUp + RDA (Ours)** | **No** | **7.16±0.25** | **0.742±0.022** | **0.843±0.010** |
| Fully-Supervised | Yes | 4.85±0.01 | 0.875±0.000 | 0.910±0.001 |
| **Fully-Supervised** | **No** | **5.58±0.02** | **0.837±0.002** | **0.888±0.000** |
This table illustrates the importance of weak augmentation. The performance metrics (MAE, $R^2$, SRCC) decrease when weak augmentation is not applied **(please note that weak augmentation was applied on all the baselines in the paper)**. Despite this drop, the order of effectiveness among the methods remains consistent, whether weak augmentation is used or not. This consistency in ranking highlights the robustness of our proposed method.
### **W2) Lack of specificity regarding augmentation operators**
We followed the settings in the USB \[1\] codebase for augmentation operators, with an adjustment to the strong augmentation for audio data. This adjustment was necessary because our task involves quality assessment, and the original strong augmentation method would have affected the quality of the data. Specifically, we used the following augmentation techniques:
1. Image:
* Weak augmentation: Random Crop, Random Horizontal Flip
* Strong augmentation: RandAugment \[2\]
2. Audio:
* Weak augmentation: Random Sub-sample
* Strong augmentation: Random Sub-sample, Random Mask, Random Trim, Random Padding
3. Text:
* Weak augmentation: None
* Strong augmentation: Back-Translation \[3\]
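As a hedged illustration, the image weak augmentation above could be sketched as follows (the crop size and flip probability are assumptions for this sketch, not the exact USB settings):

```python
import numpy as np

rng = np.random.default_rng(0)

def weak_augment(img, crop=28):
    """Illustrative weak augmentation for images: random crop plus a
    random horizontal flip. Parameters are assumed, not USB's exact
    configuration."""
    h, w = img.shape[:2]
    top = rng.integers(0, h - crop + 1)
    left = rng.integers(0, w - crop + 1)
    out = img[top:top + crop, left:left + crop]
    if rng.random() < 0.5:
        out = out[:, ::-1]  # horizontal flip
    return out

patch = weak_augment(np.zeros((32, 32)))
assert patch.shape == (28, 28)
```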
### **Q1) Consistent application of weak augmentation**
Yes, weak augmentation on the labeled data **was applied consistently** across all semi-supervised and supervised learning algorithm configurations in our evaluation. More information can be found in the response to W1.
### **Q2) Forms of augmentation in different domains**
As detailed above (W2), we used domain-specific augmentation techniques based on the USB \[1\] codebase, with an adjustment for audio data.
We appreciate you bringing these important points to our attention. Addressing these issues will improve the clarity and reproducibility of our work. We will revise our manuscript to incorporate these details.
---
\[1\] Wang, Yidong, et al. "Usb: A unified semi-supervised learning benchmark for classification." *Advances in Neural Information Processing Systems* 35 (2022): 3938-3961.
\[2\] Cubuk, Ekin D., et al. "Randaugment: Practical automated data augmentation with a reduced search space." *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops*. 2020.
\[3\] Xie, Qizhe, et al. "Unsupervised data augmentation for consistency training." *Advances in neural information processing systems* 33 (2020): 6256-6268.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you for your rebuttal. My concerns have been addressed, and I will raise my score accordingly.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful review and positive consideration! | Summary: This paper presents a simple yet effective approach that adapts existing semi-supervised classification techniques to enhance the performance of regression tasks. The perspective is novel and can effectively use existing technologies to solve regression problems.
Strengths: The article has novel ideas and solid experimental results.
Weaknesses: The method description needs to be improved.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. The quality of the figures in the article needs to be improved.
2. In the method, several variables are unclear. For example, ℓarc, ℓregression, etc.
3. The description of some processes in Section 3.3 is not very clear and needs to be improved.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 4
Limitations: as above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive comments on the novelty of our ideas and the strength of our experimental results. We appreciate your constructive feedback and will address your concerns as follows:
### **W1) The method description needs to be improved**
We will thoroughly revise Section 3 to enhance clarity by:
1. A more detailed step-by-step explanation of our approach
2. Clearer definitions of all variables and components
3. Illustrative examples to aid understanding
4. Revise the unclear sentences.
### **Q1) The quality of the figures in the article needs to be improved.**
We have thoroughly improved the quality of figures by:
1. Increasing the resolution of all figures
2. Enlarging the font size in all figures
3. Changing the font color to black for better readability
4. Adding more informative captions and labels
5. Unifying the size of subfigures in Figure 3
The revised figures are included in the uploaded PDF in the global response section for your review.
### **Q2) Several variables are unclear. For example, ℓarc, ℓregression, etc.**
We will provide clear definitions for all variables in the revised manuscript. To address the specific examples you mentioned:
1. ℓarc represents the loss of the auxiliary ranking classifier (ARC).
2. ℓregression denotes the regression loss.
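As an illustrative sketch of how these two terms could be combined (this shows only a supervised RankNet-style pairwise form in NumPy; in the paper, ℓarc is actually trained with FixMatch on unlabeled pairs, and `lam` here is an assumed weighting hyperparameter):

```python
import numpy as np

def arc_loss(scores_i, scores_j, y_i, y_j):
    """RankNet-style pairwise ranking loss for the auxiliary ranking
    classifier (ARC). A pair is labeled 1 when y_i > y_j.
    All names here are illustrative, not the paper's exact code."""
    target = (y_i > y_j).astype(float)
    diff = scores_i - scores_j
    prob = 1.0 / (1.0 + np.exp(-diff))  # P(i ranked above j)
    eps = 1e-12
    return -np.mean(target * np.log(prob + eps)
                    + (1 - target) * np.log(1 - prob + eps))

def total_loss(pred, y, scores_i, scores_j, y_i, y_j, lam=1.0):
    l_regression = np.mean(np.abs(pred - y))  # MAE regression loss
    l_arc = arc_loss(scores_i, scores_j, y_i, y_j)
    return l_regression + lam * l_arc
```

When the ranking scores agree with the target ordering, ℓarc is small; when the ordering is reversed, it grows, pushing the shared encoder toward order-preserving representations.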
### **Q3) The description of some processes in Section 3.3 is not very clear and needs to be improved**
We will thoroughly revise Section 3.3 to improve clarity by:
1. Providing a more detailed explanation of each process
2. Using consistent terminology throughout the section
3. Adding a step-by-step algorithm or flowchart to illustrate the processes
4. Including examples to demonstrate how each process works in practice
Thank you again for your constructive feedback. We believe these revisions will significantly improve the clarity and readability of our paper, making our method and contributions more accessible to readers.
---
Rebuttal Comment 1.1:
Comment: Thank you for your rebuttal. I have no questions.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful review and positive feedback! | Summary: The authors present two components to improve the problem of semi-supervised regression - 1) RankUp which considers the regression problem as a ranking problem and then adapts existing semi-supervised classification methods, and 2) Regression distribution alignment (RDA) which is to refine pseudo-labels. The experiments demonstrate the effectiveness on the problem.
Strengths: - The manuscript is clear and easy to follow;
- The method is technically sound;
- The experimental demonstration shows effectiveness over other baselines.
Weaknesses: I'm concerned about the technical novelty.
The proposed RankUp and RDA are combination from existing methods [1][2][3], and the core idea of adding ranking auxiliary objective to the regression object is not novel [4][5]. The author should include more discussion on the related work of using ranking auxiliary objective to improve regression problems.
The two proposed components seem to be orthogonal, especially for RankUp is directly designed to improving semi-supervised problems, but for more general regression tasks.
[1] Burges et al., Learning to rank using gradient descent. ICML 2005
[2] Sohn et al., Fixmatch: Simplifying semi-supervised learning with consistency and confidence, NeurIPS 2020
[3] Kim et al., Distribution aligning refinery of pseudo-label for imbalanced semi-supervised learning, NeurIPS 2020
[4] Rank-N-Contrast: Learning Continuous Representations for Regression, NeurIPS 2023
[5] Gong et al., RankSim: Ranking Similarity Regularization for Deep Imbalanced Regression, ICML 2022
Technical Quality: 3
Clarity: 3
Questions for Authors: To better understand the effectiveness of each component on the semi-supervised regression, it can be better if the authors add more discussion and ablation studies on RankUp and RDA individually - e.g. only with RDA.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and for bringing these important points to our attention. We appreciate your feedback on the clarity and technical soundness of our manuscript. Here are our responses to your concerns and questions:
### **W1) Regarding concerns about technical novelty**
We acknowledge that RankUp and RDA are developed based on existing methods. However, our work uniquely integrates and adapts these methods for **semi-supervised regression tasks.** Below, we highlight the key differences between our work and the relevant existing methods:
**\[1\] Burges et al., Learning to rank using gradient descent. ICML 2005:** While RankNet uses pairwise ranking loss for ranking the data, the original study focuses on information retrieval tasks. We adapt the concept of RankNet for semi-supervised regression tasks, transforming the regression problem into a classification problem. This allows us to apply existing **semi-supervised** classification models to **regression problems**, bridging a gap in the field.
**\[2\] Sohn et al., FixMatch: Simplifying semi-supervised learning with consistency and confidence, NeurIPS 2020:** FixMatch is a popular method for semi-supervised classification tasks. Our work first verified that FixMatch can also be used for semi-supervised **regression tasks.** Additionally, we would like to point out that we adopted FixMatch as a representative method to train RankUp's auxiliary ranking classifier. Our framework is flexible and can incorporate various semi-supervised classification methods, not just FixMatch.
**\[3\] Kim et al., Distribution aligning refinery of pseudo-label for imbalanced semi-supervised learning, NeurIPS 2020:**
1. Similarity: Align the pseudo-labels distribution with the labeled data distribution.
2. Difference: DARP can’t be applied to regression tasks, and thus we designed a fundamentally different approach called RDA that can be used for **regression tasks**. Additionally, we introduced techniques to **overcome computational bottlenecks** for RDA, as detailed in Section 3.3 of our paper.
**\[4\] Rank-N-Contrast: Learning Continuous Representations for Regression, NeurIPS 2023:**
1. Similarity: Uses ranking loss in regression tasks.
2. Difference: Rank-N-Contrast is designed for supervised settings and can’t directly apply to semi-supervised settings. The focus of our study is **semi-supervised regression tasks**. Rank-N-Contrast requires the true labels of the data, whereas in a semi-supervised setting, the majority of the data is unlabeled. Moreover, one of the motivations of our proposed method is to leverage existing semi-supervised classification techniques for regression tasks, which Rank-N-Contrast can't achieve since it's a contrastive learning approach, not a classification approach.
**\[5\] Gong et al., RankSim: Ranking Similarity Regularization for Deep Imbalanced Regression, ICML 2022:**
1. Similarity: Uses ranking loss in regression tasks.
2. Difference: RankSim is similar to Rank-N-Contrast, which can't directly apply to semi-supervised settings. It requires the true labels of the data, whereas in a semi-supervised setting, the majority of the data is unlabeled. More detail can reference the above response to Rank-N-Contrast.
We will expand our related work section to provide a more comprehensive discussion of existing ranking-based regression methods.
### **W2) Interconnection between RankUp and RDA**
Although RankUp and RDA may appear unrelated at first glance, they are suitably interconnected in our semi-supervised regression framework:
1. One of the key assumptions of RDA is reliant on the ranking of the pseudo-labels. The better the quality of the ranking of pseudo-labels, the better RDA will perform.
2. RankUp enhances the ranking of pseudo-labels by incorporating ranking information.
3. The enhanced pseudo-label quality from RankUp directly benefits RDA's effectiveness, creating a synergistic relationship.
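A minimal sketch of the alignment idea, assuming a simple quantile mapping (this is not the exact RDA procedure from Section 3.3, which includes additional techniques to avoid computational bottlenecks):

```python
import numpy as np

def align_to_label_distribution(pseudo, labeled):
    """Map each pseudo-label to the matching quantile of the labeled
    target distribution, preserving the pseudo-label ranking. This is
    an illustrative sketch of the distribution-alignment idea, not the
    paper's exact RDA algorithm."""
    order = np.argsort(pseudo)                 # rank pseudo-labels
    quantiles = np.linspace(0, 1, len(pseudo))
    targets = np.quantile(labeled, quantiles)  # labeled-data quantiles
    aligned = np.empty_like(targets)
    aligned[order] = targets                   # keep original ranking
    return aligned
```

Because the mapping only uses the ordering of the pseudo-labels, its quality depends directly on how well they are ranked, which is why RankUp's improved rankings benefit RDA.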
### **Q1) Add more discussion and ablation studies on RankUp and RDA**
We appreciate your suggestion for additional ablation studies. Here are the results comparing supervised learning, RDA only, RankUp only, and RankUp \+ RDA on the UTKFace dataset:
| | Labels = 50 | | | | Labels = 250 | | |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| | MAE ↓ | $R^2$ ↑ | SRCC ↑ | | MAE ↓ | $R^2$ ↑ | SRCC ↑ |
| Supervised | 14.13±0.56 | 0.090±0.092 | 0.371±0.071 | | 9.42±0.16 | 0.540±0.014 | 0.712±0.010 |
| **RDA (Ours)** | **14.34±1.27** | **0.060±0.125** | **0.442±0.104** | | **8.64±0.22** | **0.609±0.023** | **0.772±0.012** |
| RankUp (Ours) | 9.96±0.62 | 0.514±0.043 | 0.703±0.019 | | 7.06±0.11 | 0.751±0.011 | 0.835±0.008 |
| RankUp + RDA (Ours) | 9.33±0.54 | 0.552±0.041 | 0.770±0.009 | | 6.57±0.18 | 0.782±0.012 | 0.856±0.005 |
These results demonstrate that:
1. RDA alone may not improve upon the supervised baseline (MAE and $R^2$ of labeled = 50 setting).
2. RankUp significantly outperforms the baseline and RDA alone.
3. The combination of RankUp and RDA yields the best performance, showcasing their synergistic relationship.
We will include this ablation study and a detailed discussion of the results in our revised manuscript. This will provide a clearer understanding of each component's contribution and their combined effect on semi-supervised regression performance.
Thank you again for your insightful comments. We believe addressing these points will significantly strengthen our paper and provide a more comprehensive understanding of our method's contributions to semi-supervised regression.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for your time and effort in reviewing our work. We have addressed your comments and welcome any further feedback or questions you may have. We would be happy to discuss them with you. | Summary: The work introduces a novel SSL-regression method called RankUp. A a pairwise ranking loss enables the SSL-method FixMatch to utilize also the unlabled split of the data to learn a regression task. The addition of the Regression Distribution Alignment (RDA) loss enables the method to also take the overall distribution of the labeled samples into account. RankUp achieves SOTA on multiple SSL regression benchmarks.
Strengths: The paper is well written and easy to follow. Although simple, the proposed combination of pseudo-label based SSL approach FixMatch with a pairwise ranking loss is novel. The effect of RDA is plausible and its limitations are discussed. Both variants report promising performance gains in small scale regression benchmarks, especially when the number of labeled samples is small. The qualitative analysis via t-SNE visualization underlines the effect of RankUp on the learned representation.
Weaknesses: The scale of the experiments. Although this might be an issue of the SSL-regression domain in general.
Technical Quality: 3
Clarity: 3
Questions for Authors: Regarding the baseline methods: "Specifically, we adapt the popular semi-supervised learning codebase USB [20], modifying it for regression tasks." What modifications are necessary to turn e.g. the Mean Teacher approach into a regression method?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive comments and insightful questions. We're pleased you found our approach novel and well-supported. Regarding the weakness and questions:
### **W1) Scale of the Experiments**
We acknowledge that the scale of our experiments is limited, which is indeed a common challenge in semi-supervised learning for regression tasks. Traditionally, research in this area has focused on small-scale image datasets \[1\]\[2\]. To address this limitation and enhance the robustness of our findings, we have evaluated our methods not only on the image datasets but also across diverse datasets including image, audio, and text modalities. These varied experiments consistently demonstrated promising results, reinforcing the efficacy of our approach.
### **Q1) Modifications for Adapting USB Codebase for Regression Tasks**
The USB \[3\] codebase is designed for semi-supervised classification tasks. To adapt it for regression tasks, we made the following modifications:
1. We replaced the softmax loss function with the mean absolute error (MAE) loss function.
2. We adjusted the output layer to produce a single continuous output instead of multiple outputs used for multi-class classification.
3. For methods like Mean Teacher and $\pi$ Model, we didn't need to change the core algorithms as they are inherently applicable to regression tasks.
4. For MixMatch, we excluded components specifically designed for classification, such as sharpening and one-hot label encoding. We retained the input mixing and consistency regularization aspects, as these are valuable for regression tasks as well.
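Modifications 1 and 2 can be sketched as follows. This is only an illustrative sketch, not the actual USB code; the names `RegressionWrapper` and `feat_dim` are hypothetical:

```python
import torch
import torch.nn as nn

class RegressionWrapper(nn.Module):
    """Wrap a feature-extracting backbone with a single-output regression head."""

    def __init__(self, backbone: nn.Module, feat_dim: int):
        super().__init__()
        self.backbone = backbone
        # One continuous output instead of per-class logits (modification 2).
        self.head = nn.Linear(feat_dim, 1)

    def forward(self, x):
        return self.head(self.backbone(x)).squeeze(-1)

# MAE replaces the softmax/cross-entropy loss (modification 1).
criterion = nn.L1Loss()

# Toy backbone standing in for a USB feature extractor.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(8, 16), nn.ReLU())
model = RegressionWrapper(backbone, feat_dim=16)

x, y = torch.randn(4, 8), torch.randn(4)
loss = criterion(model(x), y)
loss.backward()
```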
We appreciate you highlighting the need for clarity on these modifications. We will include these details in the final paper to provide a clearer understanding of our experiments.
---
\[1\] Dai, Weihang, Xiaomeng Li, and Kwang-Ting Cheng. "Semi-supervised deep regression with uncertainty consistency and variational model ensembling via bayesian neural networks." *Proceedings of the AAAI Conference on Artificial Intelligence*. Vol. 37\. No. 6\. 2023\.
\[2\] Dai, Weihang, et al. "Semi-supervised contrastive learning for deep regression with ordinal rankings from spectral seriation." *Advances in Neural Information Processing Systems* 36 (2023): 57087-57098.
\[3\] Wang, Yidong, et al. "Usb: A unified semi-supervised learning benchmark for classification." *Advances in Neural Information Processing Systems* 35 (2022): 3938-3961.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their rebuttal. I keep my positive score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful review and positive feedback! | Rebuttal 1:
Rebuttal: We would like to thank all the reviewers for taking the time to review our work. We greatly appreciate the thoughtful feedback and insightful comments, which have significantly contributed to improving the quality of our paper. We have attached the revised figures in the PDF for reference.
Pdf: /pdf/7e5c490198ee3c406b8dc8f7598b05817481be33.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Knowledge Graph Completion by Intermediate Variables Regularization | Accept (poster) | Summary: This paper proposes a general model for regularizing a variety of tensor-decomposition-based knowledge graph completion models (based on variants of real / complex CP decomposition, Tucker decomposition, and others). The authors first observe that all of these tensor decomposition models can be expressed as a block-term decomposition of a general dense tensor, with various restrictions placed on the block term core tensors / factor matrices. The authors then use their general tensor model structure to determine necessary (but not sufficient) rank-based conditions on whether a particular knowledge graph model can express symmetry, antisymmetry, and inverse relations. The authors then propose a regularization strategy for this general tensor decomposition model based on the norms of the factor matrices and various intermediate terms that arise when contracting the tensor diagram corresponding to the block term decomposition. The authors evaluate their regularization strategy on three well-known knowledge graph datasets and six different knowledge graph completion models, showing accuracy improvements resulting from their regularization strategy.
Strengths: The rank-based justification of whether particular knowledge graph models can learn certain logical rules is interesting, since it explores these models from a linear algebraic perspective. The use of the hyperparameter tuner was a good idea as well, given that prior studies have established that careful hyperparameter tuning specifically for knowledge graph completion can significantly boost model performance (https://openreview.net/forum?id=BkxSmlBFvr). The knowledge graph regularization strategy appears to have non-negligible effects on accuracy, and the results are competitive with other state-of-the-art papers reporting results on the FB15K-237, YAGO, and WN18RR datasets. The experiments in this paper seem to suggest that many state-of-the-art knowledge graph models can benefit (in significant ways) from regularization beyond what the literature currently suggests.
Weaknesses: The utility of Theorem 1 is somewhat unclear to me, given that the proofs of the symmetry / anti-symmetry / inverse properties for individual models such as CP, ComplEX, etc. are sufficiently simple. For example, the fact that the CP decomposition can learn the symmetry rules follows in a single step from the commutativity of the three-vector dot product mentioned in this paper when any pair of arguments is exchanged. The symmetry / anti-symmetry proofs for ComplEX are similar. As the authors point out, the usefulness of Theorem 1 stems from its ability to give a unified proof and to prove whether other, more complex tensor decomposition models could learn such logical rules. Again, in this case, it’s not clear whether applying this rank-based theorem would be simpler than specialized proofs, or would provide any more insight into these models. Although this theorem is interesting by bringing a linear algebraic perspective to the problem, I would like to see a case where this theorem proves a property / gives a result that cannot be derived more simply through other means.
I have a second question about Theorem 1 as it relates to DistMult and TuckER; see below.
The proposed regularizer has theoretical justification, although it adds a significant number of terms to the loss function. I'm somewhat skeptical about the novelty of the regularization method that arises from tacking on the variety of intermediate terms (while leaving out others), but I also acknowledge that the experiments indicate that there is room for additional progress in knowledge graph model regularization.
The authors note that this does not increase the asymptotic complexity, but the number of terms added gives me some pause. I would like to see some data about the additional runtime, if any, required in practice by adding these regularization terms. The proposed regularizer appears to improve accuracy in most cases, although at what cost in additional computational runtime in practice?
Overall, the experiments in this paper are thorough. I am skeptical about the utility of Theorem 1 and the utility of viewing these models through the lens of a block-term decomposition. The experiments indicate that adding several regularization terms improves the accuracy of tensor completion models in general; this has some novelty in that it shows that knowledge graph models are heavily overparameterized. I would also like to see the runtime impact of adding several non-trivial terms to the loss function, even if the asymptotic complexity does not change.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. There is an application of Theorem 1 for ComplEX (P=2), and I was able to follow both the proof of the theorem and its application. Can the authors also provide the corresponding analysis for ANALOGY and QuatE? I would like to see this theorem applied for cases P > 2 (and even an example for the case P = 1 would be helpful. I feel as if this theorem becomes more difficult to apply and less practical as the permutation matrix becomes larger. For P = 1, I believe the condition in Theorem 1 is exactly that W is a symmetric tensor.
2. As the authors point out, DistMult forces H = T to enable the model to learn symmetric relations. TuckER is similar. Does Theorem 1 account for such restrictions? It would seem that Theorem 1 in its current form puts no conditions on H and T.
3. Line 197: When you refer to the “asymptotic complexity of Equation 4”, I presume this means the cost to compute the expression. Does the cost of computing the derivative of the expression with respect to a single (h, r, t) tuple also remain the same?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your careful and constructive comments. We have addressed the questions that you raised as follows. Please let us know if you have any further concerns.
$\textbf{Q1:}$ It’s not clear whether applying this rank-based theorem would be simpler than specialized proofs, or would provide any more insight into these models. Although this theorem is interesting by bringing a linear algebraic perspective to the problem, I would like to see a case where this theorem proves a property/gives a result that cannot be derived more simply through other means.
$\textbf{A1:}$ Theorem 1 transforms the problem of learning logical rules in the TDB model into a linear algebraic problem. By Theorem 1, we only need to calculate the ranks of several matrices. We can calculate ranks through software such as PyTorch and Matlab without manually calculating them.
Previous proofs of the ability to learn logical rules involved complex computations over algebraic expressions, such as those in the QuatE paper [1], which are particularly challenging for large $P$. In contrast, matrix operations are generally simpler and can be performed in parallel, making proofs that utilize Theorem 1 more straightforward and easier than previous proofs, especially for large $P$.
One potential application of Theorem 1 is to use these conditions as constraints in training to ensure that TDB models are capable of learning logical rules.
[1] Shuai Zhang, Yi Tay, Lina Yao, and Qi Liu. Quaternion knowledge graph embeddings. In Proceedings of the 33rd International Conference on Neural Information Processing Systems, pages 2735–2745, 2019.
$\textbf{Q2:}$ Can the authors also provide the corresponding analysis of Theorem 1 for ANALOGY and QuatE? I would like to see this theorem applied for cases $P>2$. For $P=1$, I believe the condition in Theorem 1 is exactly that $\mathbf{W}$ is a symmetric tensor. I feel as if this theorem becomes more difficult to apply and less practical as the permutation matrix becomes larger.
$\textbf{A2:}$
For $P=1$, $\mathbf{W}\in \mathbb{R}^{1\times 1\times 1}$ is a scalar, so we have that $\text{rank}(\mathbf{W}\_{(2)}^{T}-\mathbf{S}\mathbf{W}\_{(2)}^{T})=0<1=P$, $\text{rank}(\mathbf{W}\_{(2)}^{T}+\mathbf{S}\mathbf{W}\_{(2)}^{T})=1=P$, and $\text{rank}(\mathbf{W}\_{(2)}^{T})=\text{rank}([\mathbf{W}\_{(2)}^{T},\mathbf{S}\mathbf{W}\_{(2)}^{T}])=1$; thus a TDB model with $P=1$ is able to learn the symmetry rules and inverse rules.
For QuatE with $P=4$, $\mathbf{W}\in \mathbb{R}^{4\times 4\times 4}$, then we have that $\text{rank}(\mathbf{W}\_{(2)}^{T}-\mathbf{S}\mathbf{W}\_{(2)}^{T})=3<4=P$, $\text{rank}(\mathbf{W}\_{(2)}^{T}+\mathbf{S}\mathbf{W}\_{(2)}^{T})=1<4=P$, $\text{rank}(\mathbf{W}\_{(2)}^{T})=\text{rank}([\mathbf{W}\_{(2)}^{T},\mathbf{S}\mathbf{W}\_{(2)}^{T}])=4$; thus, QuatE is able to learn the symmetry rules, antisymmetry rules and inverse rules. ANALOGY is similar.
The permutation operation in Theorem 1 can be computed efficiently by permuting the dimensions of $\mathbf{W}$, e.g., with the `permute()` function in PyTorch.
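The rank conditions above can be checked mechanically in PyTorch. The sketch below is illustrative only: it uses a random stand-in core tensor $\mathbf{W}$ and a generic permutation matrix $\mathbf{S}$, and assumes the concatenation in the inverse-rule condition stacks $\mathbf{W}_{(2)}^{T}$ and $\mathbf{S}\mathbf{W}_{(2)}^{T}$ side by side; the actual $\mathbf{W}$ and $\mathbf{S}$ of a concrete model such as QuatE would have to be substituted to reproduce the ranks quoted in A2:

```python
import torch

def mode2_unfolding(W: torch.Tensor) -> torch.Tensor:
    """Mode-2 unfolding of a third-order tensor W (P x P x P) into a P x P^2 matrix."""
    P = W.shape[0]
    return W.permute(1, 0, 2).reshape(P, P * P)

torch.manual_seed(0)
P = 2
W = torch.randn(P, P, P)                      # stand-in core tensor
S = torch.eye(P * P)[torch.randperm(P * P)]   # stand-in P^2 x P^2 permutation matrix

W2T = mode2_unfolding(W).T                    # shape: P^2 x P

# Rank conditions of Theorem 1 (symmetry, antisymmetry, inverse rules).
r_sym = torch.linalg.matrix_rank(W2T - S @ W2T)
r_anti = torch.linalg.matrix_rank(W2T + S @ W2T)
r_inv = torch.linalg.matrix_rank(torch.cat([W2T, S @ W2T], dim=1))
```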
$\textbf{Q3:}$ As the authors point out, DistMult forces $\mathbf{H} = \mathbf{T}$ to enable the model to learn symmetric relations. TuckER is similar. Does Theorem 1 account for such restrictions? It would seem that Theorem 1 in its current form puts no conditions on $\mathbf{H}$ and $\mathbf{T}$.
$\textbf{A3:}$ Yes. Theorem 1 involves the constraint $\mathbf{H}=\mathbf{T}$, which mainly aims to reduce overfitting [2]. All existing TDB models for KGC except CP force $\mathbf{H}=\mathbf{T}$. The proof of Theorem 1 uses this constraint. We may have omitted some details, leading to this misunderstanding. We add the following steps to make the proof complete.
According to the symmetry rules, for any triplet $(i,j,k)$, we have that $f(i,j,k)=f(k,j,i)$, i.e., $f(\mathbf{H}\_{i:}, \mathbf{R}\_{j:}, \mathbf{T}\_{k:})= f(\mathbf{H}\_{k:}, \mathbf{R}\_{j:}, \mathbf{T}\_{i:})= f(\mathbf{T}\_{k:}, \mathbf{R}\_{j:}, \mathbf{H}\_{i:})$ (we use $\mathbf{H}=\mathbf{T}$ here). We replace $\mathbf{H}\_{i:}$ by $\mathbf{h}$, replace $\mathbf{R}\_{j:}$ by $\mathbf{r}$, replace $\mathbf{T}\_{k:}$ by $\mathbf{t}$, then we have $f(\mathbf{h},\mathbf{r},\mathbf{t})=f(\mathbf{t},\mathbf{r},\mathbf{h})$, i.e., the first row of the proof of Theorem 1. We will make the proof clearer in the revision.
For the CP model, which does not involve this constraint, we can easily prove that it is able to learn the symmetry rules, antisymmetry rules and inverse rules.
[2] Kadlec, R., Bajgar, O., and Kleindienst, J. Knowledge Base Completion: Baselines Strike Back. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pp. 69–74, 2017.
$\textbf{Q4:}$ I would like to see some data about the additional runtime, if any, required in practice by adding these regularization terms. The proposed regularizer appears to improve accuracy in most cases, although at what cost in additional computational runtime in practice?
$\textbf{A4:}$ The regularization terms only increase the training time. The training time per epoch and MRR metric are shown in the following table. IVR slightly increases the running time but enhances performance. We also provide the running time regarding hyperparameter $P$ on Page 22.
Table: The running time and MRR metric of ComplEx model on WN18RR dataset.
|Model|Time|MRR|
|-|-|-|
|ComplEx|36s|0.464|
|ComplEx-F2|39s|0.467|
|ComplEx-N3|39s|0.491|
|ComplEx-DURA|41s|0.484|
|ComplEx-IVR|44s|0.494|
$\textbf{Q5:}$ Line 197: When you refer to the “asymptotic complexity of Equation 4, I presume this means the cost to compute the expression. Does the cost of computing the derivative of the expression with respect to a single $(h,r,t)$ tuple also remain the same?
$\textbf{A5:}$ Yes. Since $\frac{\partial ||\mathbf{A}||\_F^{\alpha}}{\partial \mathbf{A}}=\alpha ||\mathbf{A}||\_F^{\alpha-2}\mathbf{A}$, the cost of computing the derivative remains the same.
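The gradient identity in A5 can be verified numerically against autograd. This snippet is an illustrative check, not part of the proposed method:

```python
import torch

torch.manual_seed(0)
alpha = 3.0
A = torch.randn(4, 5, dtype=torch.float64, requires_grad=True)

# Autograd gradient of the regularization term ||A||_F^alpha.
loss = torch.linalg.norm(A) ** alpha   # Frobenius norm for a matrix argument
loss.backward()

# Closed form: alpha * ||A||_F^(alpha - 2) * A.
with torch.no_grad():
    closed_form = alpha * torch.linalg.norm(A) ** (alpha - 2) * A
```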
---
Rebuttal 2:
Title: The final day for discussions
Comment: Dear reviewer:
Thank you once again for your meticulous comments. As today is the final day for discussions, we kindly request your feedback once more. We anticipate that our brief responses will not require much of your time. Your feedback is invaluable to us, and we eagerly await your response.
---
Rebuttal Comment 2.1:
Title: Response Acknowledged
Comment: I thank the authors for their response. I'm increasing my score to a 5, on the basis of the following: it does seem that even strong knowledge graph models can be improved by regularization, and the rank based analysis is intriguing. I echo the other reviewer's assessment that this framework can be viewed through the lens of a block term decomposition. | Summary: This paper proposes a general framework for tensor decomposition methods on knowledge graph completion. Based on the proposed framework, the authors further introduce a novel regularization method that regularizes the norms of intermediate variables in tensor decomposition. Theoretical analysis demonstrates that regularization on intermediate variables provably upper bounds the original loss, and empirical results on different data sets verify the theoretical analysis and show significant improvements of the proposed method when combined with various knowledge graph completion methods.
Strengths: - The proposed method is clearly introduced and easy to understand
- Theoretical analysis is sound and well supports the proposed method
- Empirical results on different data sets demonstrate that the proposed method improves upon existing KG completion methods by tensor decomposition models
Weaknesses: Several detail points about the proposed method are not clear enough
Technical Quality: 3
Clarity: 3
Questions for Authors: - Despite the Frobenius norm, can IVR also use other matrix norms (e.g., the spectral norm, though the computation cost may be a problem)? Some additional experiments may be useful to better understand how the proposed regularization affect model training.
- The captions of table 8 and 9 seem to contain an error. Should these tables be for different data sets?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: This paper does not have direct potential negative societal impact from my perspective.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your careful and constructive comments. We have addressed the questions that you raised as follows. Please let us know if you have any further concerns.
$\textbf{Q1:}$ Several detail points about the proposed method are not clear enough.
$\textbf{A1:}$ We will make our statements clearer in the revision.
$\textbf{Q2:}$ Despite the Frobenius norm, can IVR also use other matrix norms (e.g., the spectral norm, though the computation cost may be a problem)? Some additional experiments may be useful to better understand how the proposed regularization affect model training.
$\textbf{A2:}$ We choose the Frobenius norm because it can be computed efficiently and is conducive to theoretical analysis. In Section 3.3, we leverage the relationship between the Frobenius norm and the trace norm to demonstrate that IVR serves as an upper bound for the overlapped trace norm of the predicted tensor.
The spectral norm, with its computational complexity of $\mathcal{O}(n^3)$, poses challenges for practical computation. Establishing a theoretical framework for alternative norms may also be difficult.
We have conducted several experiments to show how IVR affects model training.
First, in Section 4.4, we verify that minimizing the upper bound IVR can effectively minimize the overlapped trace norm. Lower values of the overlapped trace norm encourage higher correlations among entities and relations, and thus impose a strong constraint during training.
Second, we analyze the impact of hyper-parameters on model performance in Appendix C. You can refer to the Paragraph "The hyper-parameter $\alpha$", "The hyper-parameter $\lambda_{i}$" and “The Number of Parts $P$” on Page 22.
Furthermore, we add an experiment to demonstrate the effect of IVR on training time. The regularization terms only increase the training time. The training time per epoch and MRR metric are shown in the following table. IVR slightly increases the running time but enhances performance.
Table: The running time and MRR metric of ComplEx model on WN18RR dataset.
|Model|Time|MRR|
|-|-|-|
|ComplEx|36s|0.464|
|ComplEx-F2|39s|0.467|
|ComplEx-N3|39s|0.491|
|ComplEx-DURA|41s|0.484|
|ComplEx-IVR|44s|0.494|
$\textbf{Q3:}$ The captions of table 8 and 9 seem to contain an error. Should these tables be for different data sets?
$\textbf{A3:}$ We will fix the errors in Tables 8 and 9 in the revision. Since similar phenomena are observed on other datasets, we only present the results for the WN18RR dataset.
---
Rebuttal 2:
Title: The final day for discussions
Comment: Dear reviewer:
Thank you once again for your meticulous comments. As today is the final day for discussions, we kindly request your feedback once more. We anticipate that our brief responses will not require much of your time. Your feedback is invaluable to us, and we eagerly await your response.
---
Rebuttal Comment 2.1:
Title: Acknowledging your responses
Comment: Thank you for your response which clarifies my previous concerns. After checking reviews from other reviewers as well as corresponding rebuttal, I would like to keep my score towards acceptance. | Summary: The paper addresses the challenges in Knowledge Graph Completion (KGC) using Tensor Decomposition-Based (TDB) models. The authors present a detailed overview of existing TDB models and establish a general form for these models, which is intended to serve as a foundational platform for further research and enhancement of TDB models. In addition to the new regularization technique, the paper also contributes a theoretical analysis that supports the effectiveness of their method and experimental results that demonstrate its practical utility.
Strengths: 1. The introduction of Intermediate Variables Regularization (IVR) is a significant original contribution. This new method addresses overfitting in tensor decomposition-based models for knowledge graph completion by focusing on the norms of intermediate variables, which is a novel angle in this field.
2. The paper also offers a theoretical analysis to support the effectiveness of IVR, presenting a comprehensive mathematical foundation that is not commonly found in many practical applications-focused papers.
3. The paper is well-organized, with clear sections dedicated to the introduction, methods, theoretical analysis, and experimental results. This structure facilitates easy understanding of the complex concepts discussed.
Weaknesses: 1. While the paper commendably unifies existing KGC methods through a comprehensive mathematical form, it significantly lacks an in-depth discussion on the rationale behind dividing the models into P parts, treating P more as a 'magic' parameter without a clear explanation of its theoretical or practical implications. This could leave readers questioning the basis of choosing a specific value of P and its impact on the model's performance and applicability.
2. Although the paper acknowledges the increase in parameters with the Tucker model, it does not provide a detailed theoretical or empirical analysis of these parameters. A deeper exploration into how these additional parameters affect the model’s complexity, training time, and potential for overfitting could enhance the paper's contribution and provide more actionable insights for readers looking to implement or extend the proposed methods.
3. Eq. 2 is essentially described as a special case of block-term Tucker decomposition where all Tucker's core tensors are consistent. This raises concerns regarding the originality of the design, as it may seem to be a slight variation of existing models rather than a fundamentally new approach. Expanding on how this adaptation provides unique benefits or differs in application from traditional Tucker decompositions could help in strengthening the originality aspect.
4. There is a notable absence of discussion regarding the computational efficiency of the proposed methods. Given that the introduction of IVR and the handling of increased parameters could potentially lead to higher computational costs, it would be beneficial for the paper to address these aspects. Analysis or benchmarks on the computational load compared to other models could provide a clearer picture of the practicality of implementing the proposed methods in real-world applications.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Could you provide a more detailed explanation or theoretical basis for the division of the tensor decomposition into P parts? What are the implications of different values of P on the model's performance and how do you recommend choosing P for different scenarios?
2. Given that Formula 2 closely resembles a block-term Tucker decomposition, can you elaborate on the specific innovations or unique advantages that your adaptation provides?
3. How generalizable is the IVR method to other types of tensor decomposition models or even outside tensor-based approaches? Are there limitations in its applicability that should be considered?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper introduces parameters such as the number of parts (P) in the tensor decomposition but lacks a thorough analysis of how these parameters impact the overall model performance and stability. A deeper examination of the sensitivity of the model to these parameters would strengthen the discussion on limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your careful and constructive comments. We have addressed the questions that you raised as follows. Please let us know if you have any further concerns.
$\textbf{Q1:}$ Could you provide a more detailed explanation or theoretical basis for the division of the tensor decomposition into $P$ parts? What are the implications of different values of $P$ on the model's performance and how do you recommend choosing $P$?
$\textbf{A1:}$ Due to space constraints, we have to put some content into the appendix. Several of your questions have been addressed there.
As mentioned from Line 132 to Line 133, the number of parts $P$ determines the dimensions of the dot products of embeddings. If we treat the dot product of three vectors of dimension $D/P$ as $D/P$ interactions between the three vectors, then Eq.(1) results in $P^3 \times D/P = DP^2$ interactions. Thus, $P$ can be considered a hyperparameter that controls the expressiveness of TDB models.
As discussed from Line 150 to Line 154, the number of parameters in $\mathbf{W}$ equals $P^3$, and the computational complexity of Eq.(1) is $\mathcal{O}(DP^2)$. Thus, $P$ is related to expressiveness and computation. We have provided an experiment on Page 22 Paragraph “The number of Parts $P$” to study the impact of $P$ on performance. The results show that the performance generally improves and the running time generally increases as $P$ increases.
Therefore, the choice of $P$ involves a trade-off between expressiveness and computation.
$\textbf{Q2:}$ Provide a detailed theoretical or empirical analysis of parameters in Eq.(2) and a deeper exploration into how the additional parameters affect the model’s complexity, training time, and potential for overfitting.
$\textbf{A2:}$ We have analyzed the impact of the parameter tensor $\mathbf{W}$ on model performance. As stated from Line 150 to Line 154, the number of parameters of $\mathbf{W}$ is equal to $P^3$ and the computational complexity of Eq.(1) is equal to $\mathcal{O}(DP^2)$. Table 10 on Page 21 shows that the performance generally improves and the running time generally increases as the size of $\mathbf{W}$ (or $P$) increases. TuckER is a special case of Eq.(2) with $P=D$. The results show that TuckER has the best performance but the longest running time.
$\textbf{Q3:}$ How Eq.(2) provides unique benefits or differs in application from traditional Tucker decompositions? Given that Eq.(2) closely resembles a block-term Tucker decomposition, can you elaborate on the specific innovations or unique advantages that your adaptation provides?
$\textbf{A3:}$ The differences between traditional Tucker decomposition models and block-term Tucker decomposition Eq.(2) have been stated in Section 3.1 Paragraph “TuckER and Eq.(2)”. TuckER does not explicitly consider the number of parts $P$ and the core tensor $\mathbf{W}$, which are pertinent to the number of parameters, computational complexity and logical rules.
Eq.(2) reveals the relationship between block-term Tucker decompositions and existing TDB models, which serves as a foundation for further analysis of TDB models. Although block-term Tucker decomposition has been proposed, it has not been introduced into the knowledge graph completion (KGC) field. Likewise, existing TDB models only introduce tensor decomposition models into the KGC field; they do not propose new tensor decomposition models. Eq.(2) presents a unified view of TDB models and helps researchers understand the relationships between different TDB models. Moreover, the general form motivates researchers to propose new methods and to establish unified theoretical frameworks that are applicable to most TDB models.
Most of your questions are about Eq.(2). We want to stress that the core contribution of our paper is the intermediate variables regularization rather than Eq.(2). Eq.(2) mainly serves as a foundation of IVR. Thus, we put more content on IVR.
$\textbf{Q4:}$ Provide an analysis on the computational load compared to other models.
$\textbf{A4:}$ We discuss the computational efficiency from three aspects.
First, the regularization terms only increase the training time. The training time per epoch and MRR metric are shown in the following table. IVR slightly increases the running time but enhances performance.
Second, we show an approach to choose the hyper-parameters in Paragraph “Hyper-parameters” on Page 20. Our approach requires only a few runs to determine optimal hyper-parameters.
Third, Table 10 on Page 21 shows that the performance generally improves and the running time generally increases as $P$ increases. Further analysis of other hyper-parameters is provided on Page 22 in Paragraph "The hyper-parameter $\alpha$" and Paragraph "The hyper-parameter $\lambda_i$".
Table: The running time and MRR metric of ComplEx model on WN18RR dataset.
|Model|Time|MRR|
|-|-|-|
|ComplEx|36s|0.464|
|ComplEx-F2|39s|0.467|
|ComplEx-N3|39s|0.491|
|ComplEx-DURA|41s|0.484|
|ComplEx-IVR|44s|0.494|
$\textbf{Q5:}$ How generalizable is the IVR method to other types of tensor decomposition models or even outside tensor-based approaches? Are there limitations in its applicability that should be considered?
$\textbf{A5:}$ As far as we know, all existing TDB models for KGC can be represented by Eq. (2). As stated in the Conclusion section, we intend to explore regularizations that are applicable to other types of KGC models.
Our regularization, IVR, can be directly extended to other types of KGC models such as translation-based models and neural network models by incorporating the intermediate variables in these models.
One limitation is the challenge of developing a theoretical framework for other types of KGC models. For TDB models, we offer a comprehensive theoretical analysis demonstrating that IVR serves as an upper bound for the overlapped trace norm. Establishing a similar theoretical foundation for other types of models remains challenging.
---
Rebuttal 2:
Comment: Thank you for your response. Based on your feedback, I have decided to raise my score. | Summary: The paper considers the problem of knowledge graph completion where the knowledge graph is encoded as a 3rd-order binary tensor. The authors provide an overview of existing tensor decomposition based models for KGC and propose a unifying general form that enables representing each of these models by choosing the partitioning $P$ and a core tensor $\boldsymbol{W}$ accordingly. Further, they introduce a novel regularization method called Intermediate Variables Regularization to handle overfitting in TBD models. In contrast to existing regularization approaches, the regularization term combines the norms of intermediate variables of different representations of $\boldsymbol{X}$ including its unfoldings w.r.t. each mode. Lastly, a theoretical analysis is provided followed by experiments highlighting the efficacy of IVR.
Strengths: - The paper is well written and organized.
- The paper addresses an important problem and gives a broad overview of existing solutions. I think the proposed general form will facilitate future research in this research area.
- It is an intriguing approach to include and combine the norms of several intermediate variables for TDB model regularization.
Weaknesses: - Currently, the paper starts directly with the main section. I think it would be beneficial to have a small (sub-)section either before or after related work to provide background on, e.g., tensors and CP/Tucker decomposition with one or two illustrating figures, and/or background on how the embedding matrices are usually obtained.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Does the choice of $P$ and $\boldsymbol{W}$ affect the performance of IVR? If yes, how?
- The paper states that IVR is applicable to most TDB models. Could you specify to which models it is not (directly) applicable?
- Are there (theoretical) cases in which IVR is expected to perform worse than existing regularization techniques?
- The justifications in the checklist are missing.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your careful and constructive comments. We have addressed the questions that you raised as follows. Please let us know if you have any further concerns.
$\textbf{Q1:}$ Currently, the paper starts directly with the main section. I think it would be beneficial to have a small (sub-)section either before or after related work to provide background on, e.g., tensors and CP/Tucker decomposition with one or two illustrating figures, and/or background on how the embedding matrices are usually obtained.
$\textbf{A1:}$ Thank you for your valuable suggestion. We will add the related background using the extra page in the revision. You can also refer to [1] for more background.
[1] Tamara G. Kolda and Brett W. Bader. Tensor decompositions and applications. SIAM Review, 51(3):455–500, 2009.
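To make the requested background concrete, a CP/DistMult-style toy factorization can be sketched as follows (an illustrative stand-in, not the paper's general TDB form; all sizes and names are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
n_entities, n_relations, rank = 5, 3, 4

# CP-style factorization: entities and relations are rank-dimensional embeddings.
E = rng.normal(size=(n_entities, rank))   # entity embeddings
R = rng.normal(size=(n_relations, rank))  # relation embeddings

def cp_score(h, r, t):
    """Score of triple (h, r, t): sum_k E[h, k] * R[r, k] * E[t, k]."""
    return float(np.sum(E[h] * R[r] * E[t]))

# The reconstructed 3rd-order tensor holds the score of every possible triple;
# KGC fits it to the observed binary tensor of known facts.
X = np.einsum('hk,rk,tk->hrt', E, R, E)
```

Choosing a non-diagonal core tensor instead of the implicit identity here would give Tucker-style models, which is the kind of generality the paper's unified form captures.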
$\textbf{Q2:}$ Does the choice of $P$ and $\mathbf{W}$ affect the performance of IVR? If yes, how?
$\textbf{A2:}$ Yes. We have provided an experiment in Page 22 Paragraph “The number of Parts $P$” to study the impact of $P$ on performance. The results indicate that the performance generally improves and the running time generally increases as $P$ increases.
It is difficult to theoretically demonstrate how the core tensor $\mathbf{W}$ affects the performance. When $\mathbf{W}$ is a predetermined tensor, as shown in Table 1, varying $\mathbf{W}$ leads to different performance outcomes. Establishing a clear relationship between IVR performance and $\mathbf{W}$ from Table 1 is not straightforward.
When $\mathbf{W}$ is a parameter tensor, as discussed in Page 22 Paragraph “The number of Parts $P$”, the performance of IVR generally improves as the size of $\mathbf{W}$ (or $P$) increases.
$\textbf{Q3:}$ The paper states that IVR is applicable to most TDB models. Could you specify to which models it is not (directly) applicable?
$\textbf{A3:}$ As far as we know, IVR is applicable to all existing TDB models in the knowledge graph completion field. However, we cannot guarantee that we know all TDB models, so we claim that IVR is applicable to most TDB models.
$\textbf{Q4:}$ Are there (theoretical) cases in which IVR is expected to perform worse than existing regularization techniques?
$\textbf{A4:}$ Since IVR can be reduced to F2 or N3 by setting suitable hyper-parameters, IVR performs at least as well as F2 and N3. IVR may perform worse than DURA on some datasets or some metrics. For example, the MRR of ComplEx-IVR is slightly worse than that of ComplEx-DURA on the FB15k-237 dataset.
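For reference, the two baseline regularizers that IVR reduces to can be sketched roughly as follows (a simplified reading of F2 and N3 as element-wise norms of the embedding factors; not the paper's IVR term itself):

```python
import numpy as np

def f2_reg(*factors):
    """F2: sum of squared Frobenius norms of the embedding factors."""
    return sum(float(np.sum(f ** 2)) for f in factors)

def n3_reg(*factors):
    """N3: sum of cubed element-wise L3 norms (Lacroix et al., 2018)."""
    return sum(float(np.sum(np.abs(f) ** 3)) for f in factors)

H = np.ones((2, 2))  # toy embedding factor
print(f2_reg(H), n3_reg(H))  # 4.0 4.0
```

IVR additionally regularizes the intermediate variables formed along the computation of the scores, which is why suitable hyper-parameter choices can recover either of these special cases.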
$\textbf{Q5:}$ The justifications in the checklist are missing.
$\textbf{A5:}$ We will add the justifications in the revision.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the response. After reading the rebuttal and the other reviews, I will keep my score. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
LeDex: Training LLMs to Better Self-Debug and Explain Code | Accept (poster) | Summary: The paper addresses the goal of self-debugging of generated code, while also explaining it.
The approach is to: (a) sample code outputs to natural language inputs, and keep only the wrong code outputs according to unit tests; (b) sample **refinements** ("fixes") to the wrong code outputs and test those refinements using unit tests, to get both correct and incorrect refinements; (c) the authors train an LLM using SFT and RL on those correct and incorrect refinements.
The resulting trained LLM is shown to provide empirical gains, especially when allowing it to self-refine its outputs at test time, with the test-time unit tests.
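The test-based filtering in steps (a)-(b) amounts to keeping or discarding samples based on unit-test execution; a toy sketch (the harness and the `solve` convention are hypothetical, not the paper's actual sandbox):

```python
def passes_tests(code, tests):
    """Toy harness: exec a candidate program and run `solve` on each case."""
    ns = {}
    try:
        exec(code, ns)
        return all(ns["solve"](*args) == expected for args, expected in tests)
    except Exception:
        return False  # crashes and syntax errors count as failures

tests = [((2, 3), 5), ((0, 0), 0)]
wrong_sample = "def solve(a, b): return a - b"   # step (a): kept as a "wrong" output
refinement   = "def solve(a, b): return a + b"   # step (b): a verified refinement

print(passes_tests(wrong_sample, tests))  # False
print(passes_tests(refinement, tests))    # True
```

Wrong outputs paired with verified refinements then form the SFT data, while failed refinements still contribute negative signal during RL.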
Strengths: * The proposed pipeline provides strong empirical gains
* The authors experiment with 3 base LLMs (StarCoder-15B, CodeLlama-7B, CodeLlama 13B) and 2 "teacher" LLMs (gpt-3.5-turbo, CodeLlama-34B)
* The authors experiment with multiple datasets: they train on MBPP, APPS, CodeContest, and test on HumanEval and MBPP.
Weaknesses: * The paper is mostly applicative, and includes a mix of known techniques. I feel that the paper is low on conceptual novelty. The concept of self-debugging and verifying correctness using unit tests was introduced by [Chen et al., 2023](https://arxiv.org/pdf/2304.05128) (although only using prompting), and RL using the signal coming from the unit tests was done in several papers (that the authors cite).
* It feels like there are so many different techniques and heuristics involved that it is hard to pinpoint the exact contribution of each of them. For example: the design of the explanation score as $CosSim(RoBERTa(e), RoBERTa(ec))$, the exact hyperparameters of the reward design (e.g., $\frac{50 \cdot S_{ex}(e) - 35}{3}$), and the exact design of $S_{cb}$ using CodeBLEU. The use of explanations before fixing the code is a form of Chain-of-Thought (Wei et al., 2022), or the "feedback" in Self-Refine (Madaan et al., 2023).
* Further, teaching the models to self-debug is done using **larger models**.
That is, the refinements are sampled from larger models such as gpt-3.5-turbo, CodeLlama-34B, and then these refinements are used to train the smaller StarCoder-15B, CodeLlama-7B, CodeLlama 13B models.
This adds a dimension of distillation (from large models to small models), and further makes it difficult to pinpoint the exact source of contribution.
* The approach is not compared to any related work, or to the teacher models themselves.
* Position in literature - Although many papers are cited, I feel that the paper does not position itself well in the literature. I do not remember exactly what did each related work do, but the paper does not help me understand the differences and its novelty compared to the related work. For example, what's the difference between this paper and [18] and [20]? They seem very similar, but the Related Work does not highlight the novelty over them.
* Another new paper that was not cited: [Ansong et al., NExT: Teaching Large Language Models to Reason about Code Execution, ICML'2024](https://arxiv.org/pdf/2404.14662). I am not interested in the authors just citing the paper as an additional number between brackets, I am interested in discussing the actual differences and novelty over that paper.
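For concreteness, the explanation score questioned above is just cosine similarity between two sentence embeddings; a minimal sketch with made-up vectors standing in for RoBERTa outputs:

```python
import numpy as np

def cos_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical stand-ins for RoBERTa(e) and RoBERTa(ec).
e_vec  = [0.3, 0.8, 0.5]
ec_vec = [0.3, 0.8, 0.5]
print(cos_sim(e_vec, ec_vec))  # ~1.0 for identical embeddings
```

The score is therefore a continuous proxy for explanation agreement, which is exactly why its sensitivity to the embedding model is worth probing.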
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. At test time, when evaluating the SFT/RL models: does the model see the execution results of its generated test code before refinement? That is, can the model use the unit tests of the test examples as well, or are unit tests used only at training time?
2. In the definition of $S_{cb}$ - if we rely on unit tests to verify correctness, why do we need to encourage the refinement to be similar (in terms of CodeBLEU) to **all** the possible correct refinements?
3. RL training on benchmark data may over-specialize on their specific domain, while degrading the general coding abilities of the LLM. After all, these benchmarks are only benchmarks, and over-specializing on them may hurt the usability of the model in practical use. For example, have the authors checked whether applying their SFT+RL hurts the perplexity on general code?
To summarize, I think that the paper presents strong empirical gains, but the scientific novelty is low, as the contribution is mostly applied. I thus vote for a borderline reject.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for insightful suggestions and questions.
## 1. Paper novelty
While there are several related works on self-debugging, our paper focuses on how to improve the model’s self-debugging capability, which is important but not yet extensively investigated.
We believe "NExT: Teaching Large Language Models to Reason about Code Execution" is a concurrent work with us. Yet there are still differences between the NExT paper and our paper:
1. One of the most notable differences is the proposed **RL training with explanation and execution reward**. RL training is important as it helps LLMs to learn from the failure generations as well, and the separation of explanation reward and execution reward helps LLMs learn differently about the explanations and fixes.
2. Differences also lie in how we synthesize explanations while they focus on traces reasoning, and how we **utilize much larger training datasets such as APPS and CodeContests** to improve generalization while they primarily focus on MBPP and HumanEval as the training data.
3. Also, they mainly conduct training experiments on PaLM 2, while we **train multiple open-sourced backbones to prove the generalizability of our approach**.
We will add a discussion on the differences with this paper in our draft, but we do not think that the contribution and novelty of our paper should be questioned based on this concurrent work.
## 2. Position in literature
As mentioned above, our paper focuses on how to improve the model’s self-debugging capability via training. Most existing works on self-debugging focus on prompting LLMs to do self-debugging, which does not work well on open-sourced smaller LLMs as we have shown.
A few related works that train LLMs as we cited in the paper:
1. ILF requires human-annotated explanations, while we train LLMs to generate bug explanations by themselves.
2. CYCLE and Self-Edit only train LLMs to generate refinement using SFT. We train LLMs to explain the bug which not only enhances LLMs’ reasoning but also helps developers to understand the wrong code (as our human evaluation shows). We also explored using RL to further improve the self-debugging performance.
3. NExT, a concurrent work from ICML this year that we have discussed above.
We also differ from all these works in our RL training design. The RL training brings improvement over the strong SFT models across several baselines, leading to higher Pass@K, a higher refinement rate, and better bug explanations.
## 3. Self-taught refinement
We provide the experiment results using the synthetic data generation from the model itself for CodeLlama 7B. We highlight some results here. Below is the CodeLlama-7B SFT/RL using the data collected from itself, evaluated on MBPP+ and HumanEval+.
| Approaches | | MBPP+ | | HumanEval+ | |
|-|-|-|-|-|-|
| | | Pass@1 | Pass@10 | Pass@1 | Pass@10 |
| | Init. | 37.18 | 61.23 | 27.40 | 60.81 |
| Prompt | Refine | 42.97 | 66.89 | 31.84 | 65.08 |
| | Expl. + Ref. | 42.46 | 67.41 | 32.49 | 66.58 |
|||||||
| | Init. | 41.78 | 61.77 | 33.25 | 61.50 |
| SFT | Refine | 46.26| **66.39** | 40.15 | 67.15 |
| | Expl. + Ref. | 45.94 | 65.77 | 39.10 | 67.33 |
|||||||
| | Init. | 41.61 | 61.29 | 33.66 | 62.17 |
| RL | Refine | **46.28** | 65.86 | **41.54** | 68.14 |
| | Expl. + Ref. | 46.10 | 65.99 | 40.79 | **68.50** |
The full results are in our attached author response PDF file, Tables 1 and 2. The results show that self-taught SFT and RL also achieve large improvements: the CodeLlama-7B SFT/RL models achieve up to 5% improvement in self-debugging compared with both the baseline prompting method and the model trained with code generation data only, though the improvement is smaller than in the experiments using data from CodeLlama-34B and GPT-3.5-Turbo.
## 4. Comparison to the teacher models
We provide the comparison with the teacher models CodeLlama-34B and GPT-3.5-Turbo on the self-debugging setup in the global rebuttal pdf. Comparing it with Table 2 and Table 10 in the paper, we see that with CodeLlama-34B as the teacher, CodeLlama 7B SFT/RL achieves close to CodeLlama-34B self-debugging performance, while CodeLlama-13B SFT/RL significantly outperforms the CodeLlama-34B teacher (e.g. in HumanEval+ pass@1 56.24% vs 48.51%).
## 5. Unit Test at test time
The model first generates an initial solution for a given problem description. The initial problem description contains one or more test case examples (example input and expected output).
Then the initial solution is tested against all the test cases provided for the problem. If one test case fails, the model will take the failed test case and the error message to generate refinement.
That is, at the test time, the generation of the initial solution will see some example test inputs and outputs. And the generation of refinement (and explanation) will see the exact failed test cases.
## 6. Reward Design
CodeBLEU: We find that only using binary execution feedback as a reward does not train the model properly as the reward is too sparse. That is, a completely wrong solution will get the same reward as an almost correct solution, which hurts the RL training. Although the CodeBLEU reward is weak, we find that having it helps stabilize the training by densifying the reward distribution. This is the main reason we introduce the CodeBLEU score in the reward.
Similarity: The formula for R(e) might look strange, but the main goal is to scale the majority explanation similarity (Figure 3 c in paper) to the range of [-5, 5] so that it is in the same value range as the code rewards (Figure 3 d in paper). Our reward design is based on the statistics of the training data.
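A sketch of what this rescaling looks like, assuming the affine form $(50 \cdot s - 35)/3$ quoted in the review (the endpoint values below follow from that assumption):

```python
def explanation_reward(sim):
    """Affine rescaling of an explanation-similarity score.

    Under the (50*s - 35)/3 form, similarities of roughly 0.4 and 1.0
    map to the endpoints -5 and 5, matching the range of the code rewards.
    """
    return (50.0 * sim - 35.0) / 3.0

print(explanation_reward(0.4))  # -5.0
print(explanation_reward(1.0))  #  5.0
```

Any affine map with the same endpoints would serve the stated goal; the particular constants reflect the similarity statistics of the training data.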
## 7. Generalization
To avoid overfitting, we use a large batch size (128) to only update the model for a few thousand steps. We test our models’ perplexity on 10000 samples from BigQuery Python code. The CodeLlama-7B pre-trained model’s perplexity is 1.457, the SFT/RL models' are 1.597 and 1.599, just slightly higher.
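The perplexity being compared here is the exponentiated average negative log-likelihood per token; a minimal self-contained sketch:

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the negative mean per-token log-probability."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# A model assigning probability 0.7 to each of 4 tokens -> ppl = 1/0.7.
lps = [math.log(0.7)] * 4
print(round(perplexity(lps), 3))  # 1.429
```

On this scale, the reported shift from 1.457 to about 1.6 corresponds to a modest drop in average per-token probability on the held-out code.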
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: Thank you for your response.
Can you please edit your response and mention which part of my review does each part in your response refer to?
For example, I did not ask for a *comparison* to the teacher models.
I also asked:
>>2. In the definition of $S_{cb}$ - if we rely on unit tests to verify correctness, why do we need to encourage the refinement to be similar (in terms of CodeBLEU) to **all** the possible correct refinements?
The authors' response seems to be answering a different question. My emphasis was on the word "all" - why does it make sense to encourage the refinement to be similar to **all** the possible correct refinements? What if there are multiple ways to solve the same problem?
Regarding generalization, the authors mention that the perplexity increases from 1.457 to 1.597, which might be evidence that the model's coding ability is indeed hurt. How can we be convinced that the model does not over-specialize on specific domains?
---
Reply to Comment 1.1.1:
Comment: Thank you for following up.
We re-organized the rebuttal so that the response to each question is clear. Due to character limits, we abbreviate the reviewer’s questions, and refer some questions to our first response. Below, we focus primarily on addressing the reviewer’s new questions.
## 1. Paper novelty
Reviewer: “The paper is mostly applicative, and includes a mix of known techniques. I feel that the paper is low on conceptual novelty.”
Response:
Please refer to the “Paper novelty” in our first response.
## 2. Position in literature
Reviewer: “Position in literature - Although many papers are cited, I feel that the paper does not position itself well in the literature.”
Response:
Please refer to the “Position in literature” in our first response.
## 3. Self-taught refinement
Reviewer: “This adds a dimension of distillation (from large models to small models), and further makes it difficult to pinpoint the exact source of contribution.”
Response:
Please refer to “Self-taught refinement” in our first response.
## 4. Comparison to the teacher models or related work
Reviewer: “The approach is not compared to any related work, or to the teacher models themselves.”
Response:
The prompting baseline mentioned in our paper refers to the related work [1], and we compare with this prompting method in our experiments.
We provide the comparison with the teacher models CodeLlama-34B and GPT-3.5-Turbo on the self-debugging setup in the global rebuttal pdf. Comparing it with Table 2 and Table 10 in the paper, we see that with CodeLlama-34B as the teacher, CodeLlama 7B SFT/RL achieves close to CodeLlama-34B self-debugging performance, while CodeLlama-13B SFT/RL significantly outperforms the CodeLlama-34B teacher (e.g. in HumanEval+ pass@1 56.24% vs 48.51%).
## 5. Unit Test at test time
Reviewer: “At test time, when evaluating the SFT/RL models: does the model see the execution results of its generated test code before refinement?”
Response: please refer to “Unit Test at test time” in our first response.
## 6. Reward Design
Reviewer: “why does it make sense to encourage the refinement to be similar to all the possible correct refinements?”
Response:
If we only considered one correct solution, there could be correct refinements that solve the problem in a different way but receive very low CodeBLEU scores, and it is also unreasonable to train the model to follow only one correct solution. Our design does not penalize the model much when it solves the problem differently, as long as there exist correct solutions using a similar approach.
Besides, the CodeBLEU score is mainly used to densify the reward distribution; the unit tests are more important for separating wrong and correct solutions.
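One plausible reading of this design, sketched with `difflib` standing in for CodeBLEU (an assumption; the paper uses CodeBLEU), is to reward similarity to the *closest* correct solution:

```python
from difflib import SequenceMatcher

def dense_code_reward(candidate, correct_solutions):
    """Similarity of a candidate refinement to its nearest correct solution.

    Taking the max over all correct solutions means a refinement that
    solves the problem differently is not penalized, as long as *some*
    correct solution takes a similar approach.
    """
    return max(SequenceMatcher(None, candidate, s).ratio()
               for s in correct_solutions)

solutions = ["def f(x): return x * 2", "def f(x): return x + x"]
print(dense_code_reward("def f(x): return x + x", solutions))  # 1.0
```

The unit-test outcome still dominates the reward; this similarity term only smooths the otherwise binary signal.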
## 7. Generalization
Reviewer: “RL training on benchmark data may over-specialize on their specific domain, while degrading the general coding abilities of the LLM.”
Response:
We have already considered the practical usage of the proposed training, and all of our experiments in the paper use rather comprehensive data: not only the self-debugging data we collected, but also the original code generation data provided in the MBPP, APPS, and CodeContests training sets and the Magicoder dataset [2], to avoid over-specialization. Besides, to avoid overfitting, we use a large batch size so the model weights are only updated for about 2000 steps.
We test our models’ perplexity on general code, e.g., 10000 samples from BigQuery Python code. The CodeLlama-7B pre-trained model’s perplexity is 1.457, the SFT model’s perplexity is 1.597, and the RL model’s perplexity is 1.599. Both are just slightly higher than the pre-trained model. **Code generation is one of the most important code tasks for LLMs and our trained model is much better than the pre-trained model on it.**
**The SFT/RL models’ perplexity on the pretraining data being higher than the pretrained model doesn’t mean they generalize worse. The SFT (or instruction-tuned) LLMs typically have higher perplexity than the pre-trained foundation model, since they learn to follow human instructions.** We also test the instruction-tuned CodeLlama-7B released by Meta (https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf), which gets a perplexity of 1.682, even higher than ours. We generally don’t see concerns regarding the higher perplexity caused by instruction-tuning, because instruction-tuned models follow users’ instructions better and are more useful in developing AI-assistant.
Reference:
[1] Xinyun Chen, Maxwell Lin, Nathanael Schärli, and Denny Zhou. Teaching large language models to self-debug, ICLR 2024.
[2] Wei, Y., Wang, Z., Liu, J., Ding, Y., & Zhang, L. (2024). Magicoder: Empowering code generation with oss-instruct. In Forty-first International Conference on Machine Learning. | Summary: This work proposes a novel framework to enhance the self-debugging capabilities of smaller language models that do not benefit much from self-refine or other prompt-based debugging approaches. Sampling incorrect code samples produced by LMs, they pass the execution feedback on these to GPT-3.5/4 and prompt it to explain the reason for the errors and propose code refinements. Accurate refinements are used to fine-tune and create a code-correcting model. This is further enhanced with a PPO based learning from a novel reward assignment mechanism accounting for both explainability and code refinement. Overall, they demonstrate the importance of having explanations for incorrect codes and how RL can be used to enhance the debugging ability of models to show superior performance across benchmark datasets.
Strengths: - Technically solid
- This work addresses a significant issue of little coding improvement from self-refinement prevalent in smaller LMs
- The reward setup incorporating both code refinement and explainability in PPO is novel and shows good gains
Weaknesses: Nothing major, future evaluation on datasets like APPS etc. could be beneficial to understand the impact on harder tasks
Technical Quality: 4
Clarity: 3
Questions for Authors: - Was there any reason behind choosing the range of score to be [-5,5]?
- It seems in the bigger models tested (\geq 13B) the refine itself is quite effective, any intuitions on this scale effect?
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive comments.
## 1. Evaluation of APPS and CodeContests
This is a good suggestion and we plan to add the results to the final version if accepted.
Below are the results on APPS and CodeContests. We test StarCoder on the full 5000 APPS test samples. However, due to the large number of APPS test samples, we only test CodeLlama-7B/13B on a subset of 200 samples. We will complete the evaluation and add it to the paper.
| Approaches | | APPS (5000) | | CodeContests (165) | |
|-|-|-|-|-|-|
| | | Pass@1 | Pass@10 | Pass@1 | Pass@10 |
| | Init. | 2.57 | 8.59 | 0.58 | **3.88** |
| Prompt | Refine | 2.84 | 9.10 | 0.65 | 4.23 |
| | Expl. + Ref. | 2.95 | 9.53 | 0.78 | 4.84 |
|||||||
| | Init. | 3.80 | 11.52 | **0.62** | 3.67 |
| SFT | Refine | 6.89 | 17.01 | 1.16 | 5.25 |
| | Expl. + Ref. | 6.86 | 17.18 | 1.48 | 5.94 |
|||||||
| | Init. | **4.50** | **13.62** | 0.44 | 2.19 |
| RL | Refine | **7.81** | **19.28** | **1.80** | **6.04** |
| | Expl. + Ref. | **8.10** | **19.75** | **1.80** | **6.37** |
On APPS and CodeContests, the prompting baseline also only refines very few solutions and the improvements brought by refinement are marginal. However, with our SFT and RL-trained models, we see significantly stronger self-refinement ability. The final Pass@k is also about doubled from the prompting baseline.
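For reference, Pass@k in tables like the one above is conventionally computed with the unbiased estimator from the Codex paper (Chen et al., 2021); the rebuttal does not restate the formula, so this is an assumption:

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased pass@k estimator: 1 - C(n-c, k) / C(n, k),
    given n sampled solutions of which c pass all unit tests."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

print(round(pass_at_k(10, 3, 1), 6))  # 0.3
```

Averaging this estimator over all problems gives the percentage numbers reported in the tables.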
## 2. Reward Range
Our RL algorithm is based on the PPO algorithm. According to the details in Appendix A3, the rewards of each token are either the code reward, explanation reward, or the KL divergence. From our calculation of the KL divergence distribution on a subset of the training data, the trajectory tokens’ KL divergences have a minimum value of -5.22 and a maximum value of 5.37. Thus, we scale our code and explanation rewards to the range of [-5, 5], which is similar to the KL divergence value range.
Actually, we think as long as the code/explanation rewards are not too large or too small to cause gradient explosion, the setup should be reasonable. For example, [-1, 1], [0, 1], are also common choices of reward range.
## 3. Scale Effect
We observed a similar scaling effect in Table 4 in Section 4.1.2 where we show the successful refinement rate of different models across prompting, SFT, and RL. We see that from CodeLlama 7B to CodeLlama 13B, the refinement rate improves by around 3% (absolute rate) across the board for all approaches. We think that larger models have better capabilities in general, and the trend applies to self-debugging capability as well. Our proposed training seems to help push the self-debugging performance closer to the model’s limit, and in general, a larger model should have a higher upper bound. | Summary: The authors teach language models to better self-debug and explain code. Particularly, they utilize code explanations in the repair process where explanations are generated before refining the programs. This is accomplished via training the models with SFT and RL on data curated from different sources and model generations with test case based rejection sampling.
Strengths: The paper is nicely written with enough details about experiments. Improving code repair capabilities of LLMs is an important problem and the authors propose a rejection-sampling-based technique to automatically curate repair trajectories from LLMs. The associated ablations are useful and convey useful insights.
Weaknesses: Weak results
* Benefit of explain + refine over just refine.
The proposed explain + refine approach and the associated loss on explanations do not seem to improve performance. In fact, the Expl.+Refine rows sometimes do worse than the Refine-only rows. These results highlight a lack of benefit from adopting this approach. While I understand that having associated explanations is appealing and perhaps might even provide models with more inference-time compute before performing refinements, I think the current approach and experiments do not convey that. The associated human evaluations (Table 7) also point to similar findings.
* Optimizing single-turn vs multi-turn performance.
Finally, a lot of the instruction-tuned variants of the models used in this work claim better single-turn HumanEval and MBPP performance. I wonder how the findings would change if they started from a strong instruction-tuned model and improved its repair performance. For instance, the OpenCodeInterpreter-CL-7B model shows performance improvements of 3 points from execution feedback on HumanEval (from 72 to 75) due to strong instruction tuning data pushing pass@1 to 72. Perhaps to a larger point, the effect of repair depends on the choice of the underlying model and broader data mixture which this paper does not study.
* Choice of evaluation datasets.
Since the authors use competition programs (like APPS or CodeContests) in their study, perhaps it is fair to also evaluate the models on competition programming datasets.
* Missing related works.
[1] is also pertinent to LLM reasoning. [2] and [3] also train and release open LLMs on repair trajectories. [3] in fact uses a similar explanation format.
[1] Reflexion: Language Agents with Verbal Reinforcement Learning
[2] OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement
[3] Advancing LLM Reasoning Generalists with Preference Trees
Technical Quality: 3
Clarity: 3
Questions for Authors: * Table 1 further details: I anticipate that many problems in the training sets remain unsolved post-refinement, particularly for APPS and CodeContests, which are more challenging benchmarks. Can the authors list details about the problems beyond the number of solutions?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are discussed adequately (besides some of the above-mentioned weaknesses)
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for insightful suggestions and questions.
## 1. Explanation improvement
The reviewer might have some misunderstanding of the explanation evaluation in Table 7. In Table 7, we performed both human evaluation and LLM Judge-based evaluation to evaluate the explanation quality before and after training. It clearly shows that the SFT/RL training significantly improves the explanation quality, by generating more correct and helpful bug explanations. Appendix A6 Table 13 lists the rubrics.
We also provide some case studies on human evaluation in Appendix A6 of the paper to show qualitatively how the finetuned model generates better explanations.
## 2.Comparison with Instruction-tuning
In Appendix A.4.1 of the paper, we have presented the comparison of SFT on strong code instruction data with SFT on the full data we collected. The code instruction data we used includes the Magicoder dataset [ref], similar to the OpenCodeInterpreter-CL-7B the reviewer mentioned. The results in Appendix A.4.1 show that even when training with strong code instruction data, the model does not obtain self-debug capability out-of-the-box and it only refines up to 3% of its wrong solutions. And this is one of the main motivations of the proposed approach.
## 3. Evaluation of APPS and CodeContests
This is a good suggestion and we plan to add the results to the final version if accepted.
Below are the StarCoder-15B’s results on APPS and CodeContests. Results of the other two backbones can be found in our attached author response PDF file (Tables 4 and 5). Due to the large amount of test samples in APPS (5000), we only test the other two backbones on a subset of 200 samples. We plan to complete the evaluation and add it to the paper.
| Approaches | | APPS (5000) | | CodeContests (165) | |
|-|-|-|-|-|-|
| | | Pass@1 | Pass@10 | Pass@1 | Pass@10 |
| | Init. | 2.57 | 8.59 | 0.58 | **3.88** |
| Prompt | Refine | 2.84 | 9.10 | 0.65 | 4.23 |
| | Expl. + Ref. | 2.95 | 9.53 | 0.78 | 4.84 |
|||||||
| | Init. | 3.80 | 11.52 | **0.62** | 3.67 |
| SFT | Refine | 6.89 | 17.01 | 1.16 | 5.25 |
| | Expl. + Ref. | 6.86 | 17.18 | 1.48 | 5.94 |
|||||||
| | Init. | **4.50** | **13.62** | 0.44 | 2.19 |
| RL | Refine | **7.81** | **19.28** | **1.80** | **6.04** |
| | Expl. + Ref. | **8.10** | **19.75** | **1.80** | **6.37** |
On APPS and CodeContests, the prompting baseline also only refines very few solutions, and the improvements brought by refinement are marginal. However, with our SFT and RL-trained models, we see significantly stronger self-refinement ability. The final Pass@k is also about doubled from the prompting baseline.
## 4. Related works
We thank the reviewer for pointing out missing related works and providing some discussions here.
In the Reflexion paper, an LLM verbally reflects on task feedback signals and generates improvements. It is similar to the prompting method.
OpenCodeInterpreter constructs a multi-turn interaction dataset that integrates execution and human feedback for code refinement using GPT-4, and shows that models on such dataset achieve good refinement performance.
In EURUS paper ([3] mentioned by the reviewer), it curates an UltraInteract dataset as an alignment dataset for complex reasoning tasks using GPT-3.5-Turbo and GPT-4, and includes multi-turn interaction trajectories with the environment and the critique.
The last two papers both obtain multi-turn feedback interaction datasets using the closed-source models GPT-3.5-Turbo and GPT-4, and then perform finetuning. Different from them, our paper shows that even without strong LLMs like GPT-3.5-Turbo and GPT-4, we can generate effective synthetic self-debugging data using much smaller open pretrained/instruct models, or even the same LLM itself (per the additional results in this rebuttal). We also propose a novel RL training scheme with explanation and execution rewards.
## 5. Details on training data
Here is a summary of the APPS dataset and the number of problems that has at least one correct solution (either correct initial solution or correct refinement from GPT-3.5-Turbo) based on difficulty levels:
| | Introductory | Interview | Competition |
|-|-|-|-|
| Count | 2410 / 2639 | 493 / 2000 | 306 / 361 |
For the CodeContests dataset, the data is from multiple sources and difficulty levels are not comparable. We give the sample distribution from CodeChef with difficulty levels 1-4:
| | EASY (1) | MEDIUM (2) | HARD (3) | HARDER (4) |
|-|-|-|-|-|
| Count | 19 / 86 | 65 / 330 | 18 / 90 | 1 / 6 |
Problems without any correct initial solutions or refinements are discarded from the training data.
---
Rebuttal 2:
Comment: Thanks for the response.
> The reviewer might have some misunderstanding of the explanation evaluation ... SFT/RL training significantly improves the explanation quality
From what I can see, the absolute ratings from developers are below 3 on average on a 1-5 scale. While the performance improvement over the untrained model baseline is considerable, the absolute scores are still not great.
I suspect a reason why absolute numbers are low is that not all explanations lead to refinement and the average rating might be higher in the second scenario.
> The results in Appendix A.4.1 show that even when training with strong code instruction data, the model does not obtain self-debug capability out-of-the-box and it only refines up to 3% of its wrong solutions
Perhaps I was not clear enough -- this work depicts considerable multi-turn improvements from RL training (48% to 57% pass@1 on HumanEval for CodeLLama-7B). My concern is: if the authors start their RL training with a stronger model (say OpenCodeInterpreter-CL-7B), which already has 70+ pass@1 on HumanEval, will the RL training be as effective? For example, OpenCodeInterpreter-CL, which trains models with a multi-turn SFT dataset, only achieves a 3% improvement, from 72 to 75. This makes it challenging to interpret the performance improvements achieved in this paper.
---
Rebuttal Comment 2.1:
Comment: Thank you for following up.
## Human Rating
Below is the breakdown of the number of explanations with a score in each range. Although the overall average score is below 3, SFT and RL generate 24 and 27 explanations, respectively, with scores of at least 3 (out of 50 samples in total).
| Score | Prompt | SFT | RL | GPT-3.5-Turbo |
|-|-|-|-|-|
| 4.5 <= score | 1 | 3 | 5 | 13 |
| 4 <= score | 1 | 10 | 11 | 21 |
| 3.5 <= score | 3 | 16 | 17 | 26 |
| 3 <= score | 7 | 24 | 27 | 35 |
If we look at poor explanations, the RL model generates a similar number of explanations with “score <= 1.5” compared to GPT-3.5.
| Score | Prompt | SFT | RL | GPT-3.5-Turbo |
|-|-|-|-|-|
| score == 1 | 19 | 6 | 4 | 6 |
| score <= 1.5 | 33 | 14 | 10 | 9 |
| score <= 2 | 39 | 21 | 21 | 14 |
| score <= 2.5 | 43 | 26 | 23 | 15 |
We also find that human annotators tend to be harsher and give lower scores than GPT-4. Figures 12 and 13 include examples of explanations with score 4, in which we think the model explains the bug quite well.
## SFT/RL Improvement
There seems to be some misalignment with the numbers you mentioned.
Looking at Table 2 (HumanEval Pass@1 with CodeLlama-7B): with code explanation and refinement (Expl. + Refine.), prompting's Pass@1 is 40.13%, SFT's Pass@1 is 52.98%, and RL's Pass@1 is 55.84%. So SFT increases it by about 13%, and RL (on top of SFT) further increases it by about 3%.
Also, by looking at Table 9, with only single-turn data (MBPP, APPS, CodeContests, MagiCoder's data), the Pass@1 on HumanEval is 43.88%. SFT (with multi-turn refinement data) can still get a 9% improvement.
For OpenCodeInterpreter, it has been trained on high-quality data collected using GPT-4, so the base Pass@1 is already high (72%), which may nearly reach the model's limit and leave less room for multi-turn SFT refinement to improve. That could be why multi-turn SFT only improves 3%. **We think this is not contradictory with our results.**
Our focus is on how to train LLMs to self-debug starting from standard code generation training data (MBPP, APPS, CodeContests). Thus we only use GPT-3.5 as the teacher to collect multi-turn data. We also test using CodeLlama-34B as the teacher (Table 5), and even using CodeLlama-7B itself to self-bootstrap multi-turn data (mentioned in the global response). Our method can work without relying on GPTs.
This work is not competing on data quality, so the final results are not expected to outperform OpenCodeInterpreter. But we would like to add OpenCodeInterpreter as related work and discuss it in our paper. | Summary: This paper proposes a pipeline to obtain code explanation and refinement data from a stronger model (mainly GPT-3.5, with CodeLLaMA as an ablation) to train weaker models (i.e., StarCoder-15B, CodeLLaMA-7B/13B) using SFT and RL methods. More specifically, it uses the weaker models to sample incorrect solutions and uses the stronger models to provide code refinements and explanations. It also designs reinforcement learning rewards specifically for the code refinement and explanation tasks. Experiments are conducted on MBPP and HumanEval, as well as their EvalPlus versions, and results show that both SFT and RL yield improvements over the baselines, while the improvements from RL are relatively marginal compared to those of SFT.
Strengths: S1. The tasks of code refinement and explanation are increasingly important due to the popularity of using language model agents for coding tasks. And this work shows a way to reliably improve the performance of LLMs in these two tasks, which may be useful for broader domains such as code editing and reasoning;
S2. The RL reward design and ablations could be useful for further research on using RL for code explanations and refinements;
S3. The paper is well-written, with clear motivations, details of methodology and comprehensive experiments.
Weaknesses: W1. Something slightly disappointing is that this work chooses to generate training data **from a stronger model**, which classifies it into the category of distillation, while the entire pipeline could have been done with the same LLM to explore the potential of self-improvement;
W2. While the paper focuses a lot on the design of the RL method (which seems to be a big part of the contribution from my understanding), the actual improvements yielded by the RL methods are quite marginal. However, I do note that at least the performance does not decrease in most cases;
W3. There are some baseline experiments and ablations that could be added to make the evaluation part stronger (see questions below).
Technical Quality: 3
Clarity: 4
Questions for Authors: Q1. According to Olausson et al., 2023 on self-repair, after $k$ rounds of self-refinement following the initial code generation, the success rate is often less than simply pass@$k+1$. Also note that the sampling can be done in parallel, while the iterative refinement can only be done sequentially. Have you compared the refinement results with pass@$k+1$ to see if self-refinement yields any benefit before / after the SFT/RL training?
Q2. From Fig. 3(a), it seems to me that the CodeBLEU score is not a good metric as it can barely separate the correct and wrong outputs distribution-wise, is there any reason for CodeBLEU to still be factorized into the reward function despite this?
Q3. Can you comment on the reliability of using the RoBERTa embedding for measuring the similarity of the explanations? Are there better ways to do this?
Q4. From Tab. 2, it seems that after SFT, the "Init." performance also significantly improved; does this mean that **only** training on code refinement and explanation can also improve code generation?
Q5. (Pertain to W3) Have you tried to use the same model as the LLM to create the training data?
Q6. The training data are created on top of APPS and CodeContest as well, but why are those two datasets not used in evaluation?
**References**
Olausson, Theo X., et al. "Is Self-Repair a Silver Bullet for Code Generation?." The Twelfth International Conference on Learning Representations. 2023.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for insightful suggestions and questions.
## 1. Data collected from the same model itself
We provide the experiment results using the synthetic data generation from the model itself for CodeLlama 7B.
We highlight some results here. Below is the CodeLlama-7B SFT/RL using the data collected from itself, evaluated on MBPP+ and HumanEval+.
| Approaches | | MBPP+ | | HumanEval+ | |
|-|-|-|-|-|-|
| | | Pass@1 | Pass@10 | Pass@1 | Pass@10 |
| | Init. | 37.18 | 61.23 | 27.40 | 60.81 |
| Prompt | Refine | 42.97 | 66.89 | 31.84 | 65.08 |
| | Expl. + Ref. | 42.46 | 67.41 | 32.49 | 66.58 |
|||||||
| | Init. | 41.78 | 61.77 | 33.25 | 61.50 |
| SFT | Refine | 46.26 | **66.39** | 40.15 | 67.15 |
| | Expl. + Ref. | 45.94 | 65.77 | 39.10 | 67.33 |
|||||||
| | Init. | 41.61 | 61.29 | 33.66 | 62.17 |
| RL | Refine | **46.28** | 65.86 | **41.54** | 68.14 |
| | Expl. + Ref. | 46.10 | 65.99 | 40.79 | **68.50** |
The full results are in our attached author response PDF file, Tables 1 and 2. Results show that self-taught SFT and RL also achieve large improvements. CodeLlama-7B SFT/RL models achieve up to 5% improvement in self-debugging, compared with the baseline prompting method and the model trained with code generation data only. Compared with the experiments using data from CodeLlama-34B and GPT-3.5-Turbo, however, the improvement is smaller.
## 2. Pass@K+1 versus refinement Pass@K
This is a very interesting point. We evaluate the Pass@2 of the initial solution and Pass@1 after one round of refinement.
Below is the result when the models are not trained (the prompting baseline). We do observe that Pass@2 is better than Pass@1 after refinement. **This shows that using a prompting approach to self-debug is not effective**.
| Before Training | | MBPP+ | HumanEval+ |
|-|-|-|-|
| StarCoder-15B | Pass@2 | **45.20** | **36.38** |
| | Expl. + Ref. Pass@1 | 39.27 | 30.09 |
| CodeLlama-7B | Pass@2 | **46.91** | **38.17** |
| | Expl. + Ref. Pass@1 | 42.46 | 32.49 |
| CodeLlama-13B | Pass@2 | **48.08** | **41.52** |
| | Expl. + Ref. Pass@1 | 45.77 | 38.36 |
However, after the models are trained using our pipeline, the refinement Pass@1 is clearly higher than the Pass@2 of the initial solutions. This shows that the model's self-debugging performance is poor without training. The prompting approaches proposed by existing works such as (https://arxiv.org/pdf/2306.09896 and https://arxiv.org/abs/2304.05128) are not as effective on open-sourced LLMs. **This experiment further supports our motivation to train LLMs to self-debug and proves the effectiveness of our approach.**
| After SFT | | MBPP+ | HumanEval+ |
|-|-|-|-|
| StarCoder-15B | Pass@2 | 51.19 | 39.29 |
| | Expl. + Ref. Pass@1 | **53.83** | **43.54** |
| CodeLlama-7B | Pass@2 | 50.93 | 40.95 |
| | Expl. + Ref. Pass@1 | **51.55** | **47.62** |
| CodeLlama-13B | Pass@2 | 50.93 | 44.78 |
| | Expl. + Ref. Pass@1 | **54.59** | **51.32** |
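For context, Pass@k numbers like those above are usually computed with the standard unbiased estimator of Chen et al. (2021); the rebuttal does not specify which estimator is used, so the sketch below is an assumption:

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased Pass@k estimator: 1 - C(n-c, k) / C(n, k),
    given n generated samples of which c pass all unit tests."""
    if n - c < k:
        return 1.0  # every size-k subset must contain a correct sample
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)
```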
## 3. CodeBLEU in RL reward
We find that only using binary execution feedback as a reward does not train the model properly as the reward is too sparse. That is, a completely wrong solution will get the same reward as an almost correct solution, which hurts the RL training. Although the CodeBLEU reward is weak, we find that having it helps stabilize the training by densifying the reward distribution. This is the main reason we introduce the CodeBLEU score in the reward.
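As a rough illustration of this densification idea (the weights and exact functional form below are assumptions, not the paper's actual reward), a shaped refinement reward might combine the binary execution signal with a weak CodeBLEU term:

```python
def refinement_reward(passed_all_tests: bool, codebleu: float,
                      w_exec: float = 1.0, w_sim: float = 0.1) -> float:
    """Hypothetical reward shaping: the sparse binary execution reward
    is densified with a weak CodeBLEU similarity term, so that an
    almost-correct fix scores above a completely wrong one even when
    both fail the unit tests."""
    exec_reward = 1.0 if passed_all_tests else -1.0
    return w_exec * exec_reward + w_sim * codebleu
```

With this shaping, a failing fix with high CodeBLEU still scores below any passing fix, which preserves the ordering the execution feedback dictates.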
## 4. Roberta embedding for text similarity
Judging the correctness of a code explanation is non-trivial. We use the RoBERTa model (https://huggingface.co/sentence-transformers/all-roberta-large-v1) that has been extensively fine-tuned for text similarity on 1B sentence pairs.
We try to analyze the reliability of this approach as shown in Figure 3(c) in the paper. The similarity can separate the explanations that lead to correct and wrong solutions most of the time.
A potential alternative approach could be using powerful LLMs such as GPT-4 to rate the code explanation; however, this is not scalable enough to handle our RL training data.
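The comparison itself presumably reduces to cosine similarity between the two explanation embeddings (the rebuttal does not state the exact metric, so this is an assumption); a dependency-free sketch:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two explanation embedding vectors.
    In practice the vectors would come from encoding the explanations
    with the fine-tuned RoBERTa sentence-similarity model."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```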
## 5. Code generation improvement
It should be noted that the SFT training includes code generation data (provided in the original MBPP/APPS/CodeContests) plus self-debug data. It could be that the code generation data improves the initial solution generation. We will make this clearer in our experiment setup.
We compare our approach with fine-tuning using purely code-generation data. The results are in Appendix A4.1 Table 9 in the paper.
## 6. Evaluation of APPS and CodeContests
This is a good suggestion and we plan to add the results to the final version if accepted.
Below are the StarCoder-15B's results on APPS and CodeContests. Results of the other two backbones can be found in our attached author response PDF file (Tables 4 and 5). Due to the large number of test samples in APPS (5000), we only test the other two backbones on a subset of 200 samples. We plan to complete the evaluation and add it to the paper.
| Approaches | | APPS (5000) | | CodeContests (165) | |
|-|-|-|-|-|-|
| | | Pass@1 | Pass@10 | Pass@1 | Pass@10 |
| | Init. | 2.57 | 8.59 | 0.58 | **3.88** |
| Prompt | Refine | 2.84 | 9.10 | 0.65 | 4.23 |
| | Expl. + Ref. | 2.95 | 9.53 | 0.78 | 4.84 |
|||||||
| | Init. | 3.80 | 11.52 | **0.62** | 3.67 |
| SFT | Refine | 6.89 | 17.01 | 1.16 | 5.25 |
| | Expl. + Ref. | 6.86 | 17.18 | 1.48 | 5.94 |
|||||||
| | Init. | **4.50** | **13.62** | 0.44 | 2.19 |
| RL | Refine | **7.81** | **19.28** | **1.80** | **6.04** |
| | Expl. + Ref. | **8.10** | **19.75** | **1.80** | **6.37** |
On APPS and CodeContests, the prompting baseline also only refines very few solutions, and the improvements brought by refinement are marginal. However, with our SFT and RL-trained models, we see significantly stronger self-refinement ability. The final Pass@k is also about doubled from the prompting baseline.
---
Rebuttal Comment 1.1:
Title: Thanks for the response
Comment: I'd like to thank the authors for the detailed response, especially the additional experiment results.
I find the additional results interesting and promising, and I think the argument on using CodeBLEU stabilizes training makes a lot of sense.
I think adding the results / discussions from at least 1+2+3 from above would make the paper more interesting and stronger, so I hope the authors would add them in the next version of the paper.
I have improved my score accordingly, good luck! | Rebuttal 1:
Rebuttal: # Global Response
## 1. Contribution and Novelty
While there are more and more works on self-debugging, most of the existing works focus on how to prompt existing LLMs to do self-debugging. Few works investigate the self-debugging capability of LLMs and how to improve it at the time of submission. Our paper primarily focuses on how to improve the self-debugging capabilities of LLMs via **training**.
We propose a systematic pipeline from synthetic data generation to SFT and RL training with novel explanation and execution rewards. This is the main contribution and novelty of our paper. RL training is important as it helps LLMs learn from both successful and failed generations, and the separation of the explanation reward and the execution reward helps LLMs learn differently about the explanations and the fixes. We also show that LLMs without the proposed training have poor self-debugging capability even when trained with strong code-instruction data, and that the proposed method is important in improving this capability.
While there seems to be a concurrent work from ICML 2024 on the same topic, there are significant differences and novelties in our paper, for example, the proposed RL training with explanation and execution rewards, how we synthesize explanations of bugs, our much larger-scale experiments on APPS and CodeContests, and the ablation studies we performed to showcase the importance and effectiveness of our method.
## 2. Self-taught Synthetic Data Generation
Many reviewers are interested in this. The synthetic data generation can also be done with the same model in a self-taught manner, instead of via "distillation" from stronger models. We provide additional experiment results of this kind in the attached PDF for CodeLlama-7B, where the synthetic data is generated from the model itself and used to fine-tune the same model.
Experiments show that self-taught synthetic data generation and training also achieve significant improvements. The CodeLlama-7B SFT model achieves up to 5% improvement in self-debugging using data generated from CodeLlama-7B, compared with the baseline prompting method and the code-instruction-only model. Compared with the experiments using data from CodeLlama-34B and GPT-3.5-Turbo, however, the improvement is slightly smaller.
## 3. Evaluation on APPS and CodeContests
Also many reviewers are interested in this. We provide evaluation results on the test set of APPS and CodeContests with StarCoder-15B below and the other two backbones in the attached PDF file (Tables 4 and 5). On APPS and CodeContests, the prompting baseline also only refines very few solutions, and the improvements brought by refinement are marginal (on APPS 2.57%->2.84%; on CodeContests 0.58%->0.78%). However, with our SFT and RL-trained models, we see significantly stronger self-refinement ability (on APPS 4.50%->8.10%; on CodeContests 0.44%->1.80%), about doubled from the prompting baseline.
| Approaches | | APPS (5000) | | CodeContests (165) | |
|-|-|-|-|-|-|
| | | Pass@1 | Pass@10 | Pass@1 | Pass@10 |
| | Init. | 2.57 | 8.59 | 0.58 | **3.88** |
| Prompt | Refine | 2.84 | 9.10 | 0.65 | 4.23 |
| | Expl. + Ref. | 2.95 | 9.53 | 0.78 | 4.84 |
|||||||
| | Init. | 3.80 | 11.52 | **0.62** | 3.67 |
| SFT | Refine | 6.89 | 17.01 | 1.16 | 5.25 |
| | Expl. + Ref. | 6.86 | 17.18 | 1.48 | 5.94 |
|||||||
| | Init. | **4.50** | **13.62** | 0.44 | 2.19 |
| RL | Refine | **7.81** | **19.28** | **1.80** | **6.04** |
| | Expl. + Ref. | **8.10** | **19.75** | **1.80** | **6.37** |
## 4. Comparison with teacher model
We provide the comparison with the teacher models in our attached author response PDF file.
Comparing it with Table 2 in the paper, with GPT-3.5 as the teacher, CodeLlama-13B SFT/RL cannot outperform GPT-3.5, which could be because GPT-3.5 is a much stronger teacher model than our backbones.
Comparing it with Table 10 in the paper, with CodeLlama-34B as the teacher, CodeLlama-13B SFT/RL **sometimes outperforms the CodeLlama-34B teacher** (e.g. in HumanEval+ pass@1 56.24% vs 48.51%, and in MBPP+ pass@1 56.60% vs 53.19%). It is non-trivial and surprising that our approach sometimes enables the smaller LLMs to outperform the teacher models.
## 5. Pass@K+1 versus refinement Pass@K
The reviewer mentions a very interesting point: generation followed by one round of refinement may not be better than simply regenerating one more solution. That is, is Pass@K of generation plus refinement better than Pass@K+1 of generation only?
We evaluate the Pass@2 of the initial solution and Pass@1 after one round of refinement. Below is the result when the models are not trained (the prompting baseline). We do observe that Pass@2 is better than Pass@1 after refinement. **This shows that using a prompting approach to self-debug is not effective**.
| Before Training | | MBPP+ | HumanEval+ |
|-|-|-|-|
| StarCoder-15B | Pass@2 | **45.20** | **36.38** |
| | Expl. + Ref. Pass@1 | 39.27 | 30.09 |
| CodeLlama-7B | Pass@2 | **46.91** | **38.17** |
| | Expl. + Ref. Pass@1 | 42.46 | 32.49 |
| CodeLlama-13B | Pass@2 | **48.08** | **41.52** |
| | Expl. + Ref. Pass@1 | 45.77 | 38.36 |
However, after the models are trained using our pipeline, the refinement Pass@1 is clearly higher than the Pass@2 of the initial solutions. This shows that the model's self-debugging performance is poor without training. The prompting approaches proposed by existing works such as (https://arxiv.org/pdf/2306.09896 and https://arxiv.org/abs/2304.05128) are not as effective on open-sourced LLMs. **This experiment further supports our motivation to train LLMs to self-debug and proves the effectiveness of our approach.**
| After SFT | | MBPP+ | HumanEval+ |
|-|-|-|-|
| StarCoder-15B | Pass@2 | 51.19 | 39.29 |
| | Expl. + Ref. Pass@1 | **53.83** | **43.54** |
| CodeLlama-7B | Pass@2 | 50.93 | 40.95 |
| | Expl. + Ref. Pass@1 | **51.55** | **47.62** |
| CodeLlama-13B | Pass@2 | 50.93 | 44.78 |
| | Expl. + Ref. Pass@1 | **54.59** | **51.32** |
Pdf: /pdf/699944a23822a34f1618eda351fcbca9fdaea0e8.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: Authors propose a framework to perform SFT and RL training to achieve superior performance in generating code with open code LMs on the self-debugging task. They leverage the test suite present in benchmarks like APPS, CodeContests to obtain execution feedback for model refinements on CodeLM generated code. The resulting dataset is then used for SFT training, followed by RL training that can also leverage failed trajectories besides the successful ones (where the model succeeded in generating a code that passes all unit tests). Authors show promising gains for open weight models in the model's capability to self-debug after training with their proposed methodology.
Strengths: - Authors present a strong motivation for this work (Lines 43-52) on achieving strong code generation performance with open weight models.
- This work makes a strong contribution in the form of a framework to construct data for SFT and RL training of a model that can perform self-debugging after explaining faulty code. The authors propose a clever way of leveraging execution feedback in constructing their datasets.
- The reward construction based on environment feedback is a particularly important contribution that significantly adds to the novelty of this work. To my knowledge prior work hasn't utilised environment feedback in this manner.
- Convincing results that confirm the utility of training open codeLMs on the task of self-debugging.
Weaknesses: - While the experiments and analysis of the results and datasets are fairly exhaustive, I believe the choice of RL algorithms should be justified by considering or eliminating alternatives like preference optimisation using DPO or KTO. I'd suggest at least adding a discussion on the pros and cons of preference learning compared to the RL setup that the authors advocate in this work.
- Some missing baselines: teacher model (GPT-4/3.5/CodeLlama-34B) performance is missing from Tables 2 and 5. The authors do not discuss the persisting gap in performance, if any, between the models used in creating the datasets and the performance their approach attains on the benchmarks used.
- A very relevant related work (Teaching LLMs to Self-Debug https://arxiv.org/abs/2304.05128) mentions gains in sample efficiency as one of the major benefits of performing self-debugging/refinement. I could not find a discussion or results on this aspect for the fine-tuned models presented in this paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: - APPS and CodeContests are used to train the CodeLMs, but I could not find evaluation on the APPS-test or CodeContests test set. Could you explain this choice?
- Can you provide details on the number of GPU hours required in the experiments?
- Have the authors considered training a separate model using the SFT and RL techniques to solely solve code refinement for code generated from the base model?
- What do the authors think about the generality of this method to improve on attributes beyond correctness in refining code? e.g. readability, performance and security of generated code.
- I'm curious how the proposed approach would compare against a simple baseline where the model is SFT-trained on the final refinement collected in your training set. The current setup involves using problem description $x$ and ground truth code $y$, test suite $T$ to generate a synthetic code solution $y'_l$ that fails and is explained by $x_l$ followed by a successful refinement $y'_w$. Your approach then trains the model to generate $x_l, y'_w$ given $x, y, y'_l$. A simple baseline to compare against could involve training the model to generate $y'_w$ given $x$. This would confirm the value in framing code generation as an explanation + refinement task.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Briefly discussed by the authors in Section 5. Could be expanded to include a discussion on other aspects of code refinement not covered in the paper, and acknowledging gaps if any in performance of open models trained with this method and closed models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for insightful suggestions and questions.
## 1. APPS and CodeContests Evaluation
This is a good suggestion and we plan to add the results to the final version if accepted.
Below are the StarCoder-15B’s results on APPS and CodeContests. Results of the other two backbones can be found in our attached author response PDF file (Tables 4 and 5).
| Approaches | | APPS (5000) | | CodeContests (165) | |
|-|-|-|-|-|-|
| | | Pass@1 | Pass@10 | Pass@1 | Pass@10 |
| | Init. | 2.57 | 8.59 | 0.58 | **3.88** |
| Prompt | Refine | 2.84 | 9.10 | 0.65 | 4.23 |
| | Expl. + Ref. | 2.95 | 9.53 | 0.78 | 4.84 |
|||||||
| | Init. | 3.80 | 11.52 | **0.62** | 3.67 |
| SFT | Refine | 6.89 | 17.01 | 1.16 | 5.25 |
| | Expl. + Ref. | 6.86 | 17.18 | 1.48 | 5.94 |
|||||||
| | Init. | **4.50** | **13.62** | 0.44 | 2.19 |
| RL | Refine | **7.81** | **19.28** | **1.80** | **6.04** |
| | Expl. + Ref. | **8.10** | **19.75** | **1.80** | **6.37** |
On APPS and CodeContests, the prompting baseline also only refines very few solutions, and the improvements brought by refinement are marginal. However, with our SFT and RL-trained models, we see significantly stronger self-refinement ability. The final Pass@k is also about doubled from the prompting baseline.
## 2. Preference learning (DPO/KTO) vs PPO
Preference learning like DPO or KTO has the advantage of its simplicity without the need for a reward function or reward model. In the setup of self-debugging with execution feedback in this paper, we could construct preference data in such a way: the fix that passes unit tests is preferred over the one that fails unit tests. Such preference data could also work, but might seem less direct as execution feedback/reward is easily obtained from the execution engine, unlike the human preference setup. One advantage of our RL training is that we can assign different rewards to different parts of the sequence, i.e. the explanation reward on the explanation part and the execution reward on the generated fix part.
In this work, our focus is on exploring how to train LLMs for self-debugging and bug explanation given the lack of training data. Our proposed data collection pipeline and SFT and RL (PPO-based) training do outperform existing prompting approaches significantly. Exploring alternative RL algorithms could be interesting future work in this domain.
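The preference-data construction mentioned above could be sketched as follows. This is a hypothetical illustration, not part of the paper's pipeline: a refinement that passes all unit tests becomes the chosen response, and one that fails becomes the rejected response.

```python
def build_preference_pairs(prompt, refinements, passes_tests):
    """Hypothetical DPO/KTO data construction from execution feedback:
    pair each passing refinement (chosen) with each failing refinement
    (rejected) for the same debugging prompt."""
    passing = [r for r in refinements if passes_tests(r)]
    failing = [r for r in refinements if not passes_tests(r)]
    return [{"prompt": prompt, "chosen": c, "rejected": r}
            for c in passing for r in failing]
```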
## 3. Teacher models performance
We provide the comparison with the teacher models in our attached author response PDF file.
Comparing it with Table 2 in the paper, with GPT-3.5 as the teacher, CodeLlama-13B SFT/RL cannot outperform GPT-3.5, which could be because GPT-3.5 is a much stronger teacher model.
Comparing it with Table 10 in the paper, with CodeLlama-34B as the teacher, CodeLlama-13B SFT/RL **sometimes outperforms the CodeLlama-34B teacher** (e.g. in HumanEval+ pass@1 56.24% vs 48.51%, and in MBPP+ pass@1 56.60% vs 53.19%).
## 4. Comparison with “Teaching LLMs to Self-Debug”
**The method in “Teaching LLMs to Self-Debug” is the prompting baseline that this paper refers to in Tables 2, 4, and 7.** We directly compare the results of our proposed method with this baseline in our experiments, and the results show that such a prompting approach cannot work as well on open-sourced LLMs.
The most notable difference is that “Teaching LLMs to Self-Debug” investigates the self-debugging of commercial LLMs by prompting, while our paper investigates how to improve open-source LLMs’ self-debugging capability via training.
Another difference is that our paper tries to **generate an explanation of the bug to help humans and LLMs better understand the reasoning**, while "Teaching LLMs to Self-Debug" generates an explanation of the code instead of reasoning about the bug.
## 5. GPU hours
The GPU hours for training:
| Models | SFT | RL |
|-|-|-|
| StarCoder-15B | 320h | 192h |
| CodeLlama-7B | 80h | 96h |
| CodeLlama-13B | 280h | 178h |
Experiments are conducted on 8 NVIDIA A100 GPUs, each with 40GB of memory.
## 6. Generating $y'_w$ given $x$
If we understand correctly, the suggestion is to generate the correct refinement ($y'_w$) given only the problem description ($x$). We think this is essentially fine-tuning with code generation data.
We compared our approach (framing code generation as an explanation + refinement) with fine-tuning with code generation data only, and the results are shown in Appendix Table 9.
We find that fine-tuning for code generation improves the initial solution, achieving comparable or sometimes higher Pass@1 than our approach. **But the model's self-debugging ability is not improved**, and the model still cannot benefit from self-debugging. The model trained only for code generation rarely self-debugs successfully and is eventually surpassed by our approach by a large margin. We hope this confirms the benefit of "framing code generation as an explanation + refinement task" over simply training a better code generation model.
## 7. Separate code generation and refinement models
Training a separate dedicated debugging/code-refinement model is definitely an option. The main motivation of the paper is to improve LLMs' self-debugging capability as one part of the capabilities of LLMs, rather than over-specializing, so that the training recipe can be directly incorporated in practice.
## 8. Generalize to other aspects
We think that the method can be generalized to improve other aspects of code such as readability and security. The key lies in how to properly design the reward with respect to each aspect. Readability and security are most likely to be assessed by a set of static-analysis rules. Combining all aspects together to obtain a final reward is necessary in order to provide feedback to the training.
Self-Consuming Generative Models with Curated Data Provably Optimize Human Preferences | Accept (spotlight) | Summary: The authors investigate the properties of self-consuming loops that arise in the training of generative models. In particular, they investigate the impact that data curation has on the iterative retraining performance of these models. The paper contains theoretical and empirical analysis of how model performance is affected by various data curation assumptions (only synthetic examples, or real data being injected at each step, or human-preference curated synthetic samples, etc).
Strengths: Originality: I believe this work is highly original, largely because it considers a new problem formulation--"what happens to self-consuming generative models when the synthetic data that they re-train one has been curated via human preferences?". To my knowledge, nobody has considered this problem, and it is an excellent and timely problem to consider.
Clarity: The authors motivated the problem very clearly.
Significance: many previous works have investigated self-consuming loops, but the area is still burgeoning, and right now, the area seems like "the wild west"--there are lots of papers out there with different assumptions, different results, no agreed-upon benchmarks tasks, etc. This paper is significant because it considers a more realistic setup than previous papers; it considers the setting where the web-scale data contamination happens because of human-curated data. This is an important case to consider, and represents a large step towards modeling the data contamination issue more rigorously. This more realistic setup comes with more mathematical overhead, which is challenging to deal with.
Weaknesses: Clarity: in my opinion, the presentation of statements of the theorems in the paper needs improvement in order to be useful to the community. Consider for example Theorem 2.1. The assumptions for this theorem are distributed in the section preceding it, which makes it much harder to understand and contextualize that theorem. I would strongly suggest summarizing the assumptions and key notations in the statement of that theorem (and likewise for all the other results). An excellent model for this would be the paper that the authors cited the most, Bertrand et al.'s "On the Stability of ...". That paper's Theorem 1 (from the latest arXiv version) begins--"Given theta^\star as defined in Equation (7) that follows assumptions 1 and 2....". Given that a large part of this paper's contribution is its theorems, I would be likely to raise my score, and champion this paper, if I could first see an updated draft which makes explicit every assumption in the statement of each theorem.
See also the limitations section.
Technical Quality: 3
Clarity: 2
Questions for Authors: I'm not sure I understand how the experiment supports the theory here, please help clarify this for me--at line 250, the authors say "applying theorem 2.1, the density will converge to a renormalized Gaussian distribution restricted to the ball centered at x_* of radius r_min". But this isn't what I see in Figure 4, it looks like there are a bunch of different Gaussian balls with different densities, corresponding to different intensities. Should these Gaussian balls all be the same density, or did I misunderstand something?
Some misc points that didn't affect my score/judgment of the paper, but the authors should consider fixing in the next draft:
- improper formatting in equation 8 (spacing/parentheses)
- awkward/grammatically incorrect wording in the sentence immediately after equation (7)
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 4
Limitations: The two main things preventing me from giving a higher score are the following two limitations. I'm looking forward to hearing the authors' rebuttal to these points (and hopefully seeing an updated draft, if possible).
1. Presentation of statements of theorems (what are the specific hypotheses? eg take a look at the latest version of the bertrand et al paper on the arXiv for a good way to do this, since that paper is structured in a similar way. They number their assumptions and make them more clear. See "Weaknesses" section for more details.) This is the primary limitation, from my perspective, as I think the usefulness of the paper is very limited by unclear theorem statements.
2. Proper contextualization of these results relative to the literature--namely, there's a difference between the work in Bertrand et al and Alemohammad et al, but the authors seem to be comparing them in an "apples to apples" way. Namely, the former work considers the case of iterative fine-tuning, whereas the latter considers the case of retraining from scratch. In the former case, it is strictly easier to avoid model collapse, since the model parameter update is "local", and in the latter case, the updates are "global." Specifically, on line 218 it says: "Alemohammad et al. (2024); Shumailov et al. (2023) first evidenced catastrophic degradation of the generated data in the fully synthetic loop. Bertrand et al. (2024) mitigate these conclusions in the setting where the model is retrained on a mixture of synthetic and real data and they show the stability of the process around the data distribution." I think this is a false statement, because Alemohammad et al considered re-training from scratch at each iteration, which wasn't considered in the Bertrand et al paper--but please correct me if I'm misunderstanding. And in that same vein, I think it is important to properly contextualize the present paper in light of that dichotomy--does this paper consider iterative fine-tuning, or iterative re-training from scratch? That is important information for the readers/the literature.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive comments and are pleased that they find the question raised "highly original" and "very clearly" motivated. We further appreciate that they find it "significant" and "more realistic" than previous works. We now address the key clarification points raised by the reviewer.
## Presentation of statements of the theorems
We acknowledge the reviewer's concern that the presentation of all of our assumptions may not have been sufficiently clear. Below, we present updated assumptions and theorems that we hope address the clarity issue.
We propose to restructure assumption 2.1 to sub-assumptions of increasing strength:
**Assump. 2.1-A:** The distribution $p \in \mathcal{P}(\mathbb{R}^d)$ has a density w.r.t. Lebesgue measure and $E_p[e^{r(x)}] < \infty$.
**Assump. 2.1-B:** The distribution $p\in \mathcal{P}(\mathbb{R}^d)$ has a density w.r.t. Lebesgue measure and there exists $r_* \in \mathbb{R}$ such that: (a) $p$-almost surely, $r(x)\leq r_*$ and (b) $p$ puts positive mass in a neighborhood of $r_*$ i.e., $\forall \epsilon > 0, P_0(r(x)\geq r_* - \epsilon) > 0$.
**Assump. 2.1-C:** Assump. 2.1-B and $P(r(x) = r_*) > 0$.
Using these assumptions, our results of Section 2 will be restated as follows:
> **Thm. 2.1:** For all $t\geq 0$, let $p_{t+1}$ be the distribution induced from a discrete choice model on $p_t$ (4), where $\mathcal{P}= \mathcal{P}(\mathbb{R}^d)$ is the set of probability distributions on $\mathbb{R}^d$. If $p_0$ satisfies Assump. 2.1-C, then we can define $p_*(x):=\frac{p_0(x)1_{r(x)=r_*}}{P_0(r(x)=r_*)}$ and the self-consuming loop on curated samples $p_t$ converges to $p_*$: $KL(p_*||p_t) \xrightarrow{t\rightarrow \infty} 0$.
The other results have been updated similarly. If the reviewer would like to see the other exact statements, we can share them in another response.
## Contextualization of our results in the literature on model collapse
We agree with the reviewer that better contextualization of our results on whether the retraining is performed from scratch or via fine-tuning can improve the clarity. **We will update the paper to clarify this and present below how we would proceed.**
## On retraining from scratch vs iterative fine-tuning:
**1.) Experiments:** All retraining steps in the experiments on mixtures of Gaussians (MoG) and two moons are performed from scratch, whereas in the case of the CIFAR dataset, due to the high compute cost of retraining the model from scratch (20 hours on an A100 GPU), we performed fine-tuning at each step. Fine-tuning is always performed on $10^6$ images. We use the same number of images for a fair comparison between different proportions of injected real data. In contrast, in [1], the collapse is shown when the model is retrained from scratch at each iteration. In [3], the experiments are performed using retraining from scratch for VAEs and GMMs and sequential fine-tuning for LLMs. In [2], toy experiments on two moons and MoG are performed by retraining from scratch, while experiments on CIFAR10 and FFHQ are performed using iterative fine-tuning. We agree with the reviewer's remark that stability using real data is easier to obtain in the iterative fine-tuning setting than when retraining from scratch, since the model parameters are initialized around a good potential set of parameters. However, we would also like to point out that model collapse occurs in the iterative fine-tuning setting as well, as shown in Figure 2 of [2] (red curves).
**2.) Theory:** Finally, regarding our theoretical results, only Thm. 2.2 is set in the iterative fine-tuning framework (since it uses the same setting as [2]). However, all our other results, and in particular Thm. 2.1, 2.3, and 2.4, do not assume a specific learning algorithm in parameter space. Instead, we consider a perfect learning model and study the evolution of the expected reward for such a model. In that sense, these results apply to retraining from scratch under the additional assumption that the learned model perfectly fits the curated distribution. We will make this point clear in the updated draft.
## On fresh real vs fixed real data:
[1] studies the self-consuming loop in three different settings where the model is retrained a) only on synthetic data, b) on a mixture of synthetic data and a fixed set of real data samples, and c) on synthetic data and a fresh set of real data samples at each step. In setting (a), the retraining loop collapses. In (b), it collapses too, but with a delay related to the amount of fixed real data. In (c), the retraining loop does not degrade performance provided there is enough fresh real data at each step. [2] proved stability in setting (b) under some theoretical assumptions and in the iterative fine-tuning framework. Comparatively, our experiments on MoG are performed using fresh real data at each step, while the CIFAR experiments are performed in the fixed real data framework.
## Experiment clarification:
We thank the reviewer for highlighting a relevant point on the presence of multiple spots with different densities in fig. 4 at iteration 4. These are residual artifacts of incomplete convergence. To improve clarity, we ran the experiment for more iterations (global response fig 3), in which case it is clearer that the final distribution is a unique renormalized Gaussian distribution restricted to the ball centered at $x_*$ of radius $r_{min}$.
If the reviewer finds our updates agreeable, we would appreciate it if the reviewer considers championing/upgrading their score for this paper, as mentioned in the original review. We are also happy to answer any further questions that the reviewer may have.
[1] Alemohammad et al Self-consuming generative models go mad. ICLR 2024
[2] Bertrand et. al On the Stability of Iterative Retraining of Generative Models on their own Data. ICLR 2024
[3] Shumailov et al. The curse of recursion: Training on generated data makes models forget. Nature 2024
---
Rebuttal 2:
Title: Response to rebuttal
Comment: I would like to thank the authors for the thoughtful reply to my review.
In particular, I appreciate the additional contextualization of works [1] and [2]; I think that the literature would greatly benefit from having that explanation from the section "On fresh real vs fixed real data". I also appreciate the more clear statement of Theorem 2.1.
Would it be possible to share here the other exact statements as well--namely, how the authors would propose to update the other exact statements of the other theorems, 2.2, 2.3, 2.4--in another response?
**I have updated my score to above the acceptance threshold, with the expectation that the authors will update the camera-ready version's main result statements with more clearly stated hypotheses and clearly referenced terms, similar to the rebuttal above.** Although my concern was mainly with Theorem 2.1 (it contained the least context out of all the other theorems), in my opinion, Theorems 2.2, 2.3, 2.4 should each be improved via clearer context, and I would want to see the updated statements if possible.
I strongly believe that having these statements stated that clearly would better allow the community to benefit from the authors' work.
---
Rebuttal Comment 2.1:
Title: Response to the Reviewer's Comment
Comment: We are grateful to the reviewer for their valuable feedback and for increasing their score. We will include in the updated manuscript the new presentation of the theorem statements with their assumptions, along with the contextualization of our results in the literature. We present below how we aim to update the statements of Assumption 2.2 and theorems 2.2, 2.3, and 2.4.
**Assumption 2.2:** For $\theta$ close enough to $\theta_*$, the mapping $x \mapsto \nabla^2_\theta\log p_\theta(x)$ is $L$-Lipschitz and the mapping $\theta \mapsto E_{p_{data}}\left[\log p_\theta(x) \right]$ is continuously twice differentiable with $E_{p_{data}}\left[\nabla^2_\theta\log p_\theta(x)\right] \preceq -\alpha I \prec 0$. Further suppose $W_1 (p_{\theta_*}, p_{data})\leq \epsilon$, i.e. $p_{\theta_*}$ is close to the data distribution $p_{\text{data}}$.
**Theorem 2.2:** Under Assumption 2.2, if $L \epsilon < \alpha$ and $\lambda<\frac{\alpha}{2L\epsilon}$, then there exists a neighborhood of the optimal distribution parameters $\theta_*$ such that for any initial parameters $\theta_0$ in that neighborhood, $p_{\theta_t}$ converges to $p_{\theta_*}$ exponentially fast:
$$ KL(p_{\theta_*}||p_{\theta_t}) = \tilde{\mathcal{O}}\left(\left(\frac{\lambda(\alpha+\epsilon L)}{\alpha+\lambda(\alpha-\epsilon L)}\right)^{2t}\right)$$
**Theorem 2.3:** Let $\lambda > 0$ and consider the process $(p_{t})$ defined in eq. 8, with $p_0 = p_{ref}$. If $p_{ref}$ satisfies Assumption 2.1 B, then for all $t\geq1$:
$$E_{p_t}\left[e^{r(x)}\right] \geq E_{p_{ref}}\left[e^{r(x)}\right] + \frac{\lambda}{(1+\lambda)^3}\frac{(K-1)Var_{p_{ref}}\left[e^{r(x)}\right]}{Ke^{r_*}}$$
**Theorem 2.4:** Let $\lambda > 0$ and $p_{ref}\in \mathcal{P}(\mathbb{R}^d)$ with a density w.r.t. Lebesgue measure. Consider the process $(p_{t})$ defined in Equation 8, with $p_0 = p_{ref}$. Suppose that $\lambda < \frac{1}{K-1}$, then, for all $t\geq1$:
$$KL(p_t||p_{ref})\leq -\log\left({1-\lambda(K-1)}\right)$$
We hope this answers the reviewer’s remaining concerns and are happy to provide clarification to any additional questions the reviewer may have. | Summary: This paper studies the impact of data curation on iterated retraining of generative models. Theoretical results are derived for the convergence state of the retraining loop when using a fraction of curated synthetic data or a mixture of real data and curated synthetic data at each step. Empirical experiments on both synthetic datasets and CIFAR-10 demonstrate that the proposed approach can bias the generative model to generate samples with higher reward.
Strengths: The problem of iterated retraining of generative models using curated data is important and interesting. The authors have made solid progress in this direction. The theoretical results presented in this paper are interesting and reasonable, especially in their connection to preference optimization. The toy experiments are consistent with the theoretical claims.
Weaknesses: The writing of this paper needs further improvement. Some notations are unsuitable, and certain mathematical notations are introduced without explanation. Some literature references are missing.
1. In Eq.2, $\mathcal{BT}\left(x_1, \ldots, x_K\right)$, where "BT" refers to the Bradley-Terry model, is typically used to model pairwise preferences, whereas the Plackett-Luce (PL) model is proposed for preferences involving more than two items.
Eq.5: If $p_{t+1}(x)$ is not a normalized density function, then neither is $p_{t+1}(x)$ in Eq.6.
2. There are already studies on deep generative models (GANs) that investigate the convergence state of iterative retraining on curated data from the perspective of preferences [1, 2]. The paper should include a discussion of these works.
3. The theoretical proofs, especially the part in Section 2.2, are heavily inherited from the previous work [3], which makes the theoretical contributions somewhat marginal.
[1] Gupta, A., Zou, J. Feedback GAN for DNA optimizes protein functions. Nat Mach Intell. 2019.
[2] Yao, Y., Pan, Y., et. al. Differential-Critic GAN: Generating What You Want by a Cue of Preferences. IEEE Transactions on Neural Networks and Learning Systems, 2022.
[3] Bertrand, Q., Bose, et. al. On the Stability of Iterative Retraining of Generative Models on their own Data. In The Twelfth International Conference on Learning Representations.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Eq.3: The term $p_{ref}$ is not explained. What distinguishes $p_{ref}$ from $p_{data}$?
2. In Theorem 2.2, deriving $\lambda < 0.5$ based on the assumptions $L \varepsilon < \alpha$ and $\lambda < \frac{\alpha}{2 L \varepsilon}$ appears to conflict with the claim in line 181.
3. The explanation of $r_{\ast}$ in lines 134-137 is somewhat complex. Adding some equations would clarify its meaning.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and constructive comments. We appreciate that the reviewer finds the problem we tackle "important and interesting" and that our theory in connection to preference optimization “interesting and reasonable”. We now address the key points raised in the review:
## Plackett-Luce model
We thank the reviewer for referring us to the Plackett-Luce model. We would like to mention that we were aware of the Luce choice rule model, which we cited on lines 87-90. We will change our notation from $BT$ to $PL$, referring to the Plackett-Luce model, since it provides a more unified framework consistent with the literature.
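For concreteness, the $K$-wise Plackett-Luce winner is a softmax-over-rewards draw; a minimal sketch (illustrative names, not the paper's code; $K=2$ recovers the Bradley-Terry pairwise model):

```python
import math
import random

def pl_winner(samples, reward):
    """Pick one of K candidates with probability proportional to exp(reward):
    the K-wise Plackett-Luce / Luce choice rule."""
    weights = [math.exp(reward(x)) for x in samples]
    return random.choices(samples, weights=weights, k=1)[0]

random.seed(0)
trials = 10_000
wins = sum(pl_winner([0, 1, 2], reward=lambda x: x) == 2 for _ in range(trials))
# Theory: P(winner = 2) = e^2 / (e^0 + e^1 + e^2), about 0.665
print(wins / trials)
```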
## Normalization of $p_t$
We agree with the reviewer on the fact that **if** $p_t$ is not normalized, then $p_{t+1}$ is not normalized either. However we do not foresee a problem since in practice we apply our results to an **initial normalized probability distribution**. We hope this answers the reviewer’s question and are happy to clarify further.
## Related Work
We are grateful to the reviewer for referring us to the two important references [1, 2] that we will add to our related work section. In [1], the authors tackle the problem of generating synthetic DNA sequences using GANs. They introduce an external function analyzer to rate synthetic samples from the generator and add the highest-scored ones into the discriminator training set. *This work [1] is mostly experimental and tied to the GAN architecture while we adopt a more theoretical framework to understand the self-consuming loop without specifying any architecture*.
Furthermore our study takes the point of view of the recently developed model collapse literature and relies only on $K$-wise preferences as in [2]. In [2], the authors propose a new GAN framework to incorporate user’s preferences in the training. They show state of the art results in generating the user-desired data distribution and theoretically prove the convergence of their method. *The key difference to our work is that they aim to generate a diversity of samples that are desired by users and their focus is therefore not on the collapse of their method to a maximal reward set.*
## Closeness with [3]
We acknowledge the reviewer’s comment that our proof of theorem 2.2 is adapted from the proof of [3], since our goal was to improve their paper’s main result in the same setting. Note that despite the similarity in the proof technique **the theoretical improvement we provide is significant**.
Moreover, we believe that it does **not** constitute our main theoretical contribution; it rather serves as an introduction to the rest of the section on how real data provides stability to the retraining loop. Instead, our major theoretical results, i.e. **theorems 2.1, 2.3 and 2.4, are proved in a different setting** and with proof techniques that completely differ from [3].
## Clarification of the $p_{ref}$ notation
We acknowledge the reviewer's concern about the notation $p_{ref}$ and we will clarify this notation in the updated draft. Namely, we denote $p_{ref}$ any probability distribution with density with respect to Lebesgue measure. It may indeed be of interest to apply thm 2.3 to other cases than retraining on a mixture of synthetic data at iteration $t$ and the data distribution. For example, it may be of interest to retrain on a mixture of synthetic data at iteration $t$ and of the synthetic data at initialization. In such a case, $p_{ref}$ would be identified to $p_0$. It bridges the gap with RLHF, as the KL regularization in the RLHF objective is done with respect to the supervised fine-tuned policy (see equation 3 in the DPO paper [5]). This makes our result more general as we show that stability occurs around any reference distribution that is injected in the retraining loop (either real data samples or samples from the initial model). It also provides an avenue to study the retraining process over a mixture of all previous iterations of the generative models.
## Clarification on the assumption on $\lambda$
We respectfully disagree with the reviewer that our assumptions $$L\epsilon < \alpha \quad \text{and} \quad \lambda < \frac{\alpha}{2L\epsilon}$$ are in conflict with the claim that previous work was restricted to $\lambda < \frac{1}{2}$. Indeed, the necessary conditions for theorem 1 of [3] are $$\lambda \leq \frac{1+ \frac{L\epsilon}{\alpha}}{2}\quad \text{and}\quad L\epsilon< \alpha$$ which necessarily requires $\lambda < \frac{1}{2}$. These conditions are more restrictive than ours. In particular, we see that for fixed $\alpha$, if $L\epsilon\rightarrow 0$ then both our conditions are satisfied (in particular $\lambda$ bigger than $\frac{1}{2}$ will be allowed).
## Clarification of $r_*$ notation
We agree with the reviewer's remark that our definition of $r_*$ could be clarified, and we will improve this paragraph in the updated paper. In a nutshell, $r_*$ should be thought of as the smallest number that upper-bounds the reward $r(x)$ with probability $1$ under $p_0$. For example, for a Uniform distribution on the interval $[0, 10]$ with $r(x) = x$, $r_*=10$, whereas for unbounded rewards such as $r(x) = x$ with $x \sim \mathcal{N}(0, 1)$, $r_*$ does not exist. As suggested by the reviewer, an equation that could clarify this is $r_* = \inf \\{r \in \mathbb{R} : \mathbb{P}(r(x)\leq r) = 1\\}$.
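A quick numerical sketch of this definition (illustrative only, taking $r(x)=x$): the empirical maximum stabilizes at $r_*$ for a bounded distribution but keeps growing when no finite $r_*$ exists.

```python
import random

random.seed(0)
# r_* is the essential supremum: the smallest r with P(r(x) <= r) = 1.
# Uniform[0, 10] with r(x) = x: empirical maxima approach r_* = 10 from below.
u = [random.uniform(0, 10) for _ in range(100_000)]
print(round(max(u), 3))  # close to, but never above, 10

# Standard Gaussian reward: the empirical maximum grows without bound as the
# sample size increases, so no finite r_* exists.
g = max(random.gauss(0, 1) for _ in range(100_000))
print(g)  # already well above any fixed level reached by small samples
```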
We thank the reviewer for their valuable feedback and great questions. We believe we have answered them to the best of our ability, and we kindly ask the reviewer to consider upgrading their score if they are satisfied with our responses. We are also more than happy to answer any further questions that arise.
[4] Gerstgrasser, Matthias, et al. "Is model collapse inevitable? breaking the curse of recursion by accumulating real and synthetic data." arXiv 2024
[5] Rafailov, Rafael, et al. "Direct preference optimization: Your language model is secretly a reward model." NeurIPS 2023
---
Rebuttal Comment 1.1:
Title: Thanks for the responses.
Comment: I appreciate the authors' clarification. I increased my score to "weak accept".
About $p_t$: usually, the probability is normalized to 1. If not, a clarification should be added.
Strengths: - The paper extends prior work studying self-consuming generative models to integrate preference learning, which is highly sensible
- The paper is well written
Weaknesses: - I do not think that Equation (3) faithfully models reality. Specifically, it assumes that the $t+1$-th model is fit to a mixture of (1) real data and (2) preference-filtered data from the $t$-th model. But realistically, synthetic data should amass over time, as I believe is the case with the datasets that the authors mention in their introduction, e.g., LAION-5B. This is a point made by "Is Model Collapse Inevitable? Breaking the Curse of Recursion by Accumulating Real and Synthetic Data." and I agree with them.
- I might be missing something, but intuitively, Theorem 2.1 seems wrong, specifically the claim “eventually converges to the highest level set of the reward reached **at initialization**”. For the simplest possible counterexample, suppose p_0 is a discrete Uniform distribution on integers 0 to 10 and the reward function is $r(x) = x$. Defining $r_* := 10$, we see that Assumption 2.1 is satisfied, including the variant where $\epsilon \geq 0$. It is hard to imagine that the iterative loop would concentrate on $10$ rather than something greater than $10$. The intuition is that sampling and filtering from the first model iteration might shift the distribution towards regions where it is possible to sample higher rewards.
- I think the experiments corresponding to Theorem 2.1(Figures 4 and 5) could be improved to better connect with the maths. Specifically: (1) Provide a heatmap of the level set of $r_*$ to show what $p_*$ is. (2) As I understand, the theorem doesn’t say that the distribution’s variance collapses, but rather, that the _reward variance_ collapses. In the figures, the reward functions are unimodal, and so we see the distributions converge towards unimodal behavior. You should modify the reward functions to be multimodal to demonstrate this distinction between the distribution's variance collapsing versus the reward's variance collapsing. One way to do this might be to define the reward function as 4 of the 8 MoGs (i.e. the reward is the max of the set of negative distances to the 4 chosen centroids). Then, we should see the model concentrate on those 4 chosen centroids.
- I feel like I don’t understand Theorems 2.3 or 2.4. In Theorem 2.3, the lower bound on the right hand side does not appear to depend on the model fitting iteration $t$, as best as I could tell, nor does the upper bound in Theorem 2.4. Here, I’m expecting the answer to depend heavily on the model fitting iteration. I read the adjacent discussion but didn't receive any clarity. I'm consequently not sure how to evaluate the significance of Section 2.2. Perhaps the authors could clarify?
- I think there are many additional papers you might want to cite. Some may be concurrent with yours (I intentionally did not search for a preprint in order to preserve double blind reviewing), and if so, that’s fine. Here are some suggestions on several different topics:
**On Model Collapse:**
Beyond Model Collapse: Scaling Up with Synthesized Data Requires Reinforcement. https://arxiv.org/abs/2406.07515
Is Model Collapse Inevitable? Breaking the Curse of Recursion by Accumulating Real and Synthetic Data.
https://arxiv.org/abs/2404.01413
**On mode collapse in RLHF:**
Understanding the Effects of RLHF on LLM Generalisation and Diversity. https://openreview.net/forum?id=PXD3FAVHJT
A Distributional Approach to Controlled Text Generation.
https://openreview.net/forum?id=jWkw45-9AbL
Red Teaming Language Models with Language Models
https://arxiv.org/abs/2202.03286
Improving alignment of dialogue agents via targeted human judgements
https://arxiv.org/abs/2209.14375
Aligning Language Models with Preferences through f-divergence Minimization
https://arxiv.org/abs/2302.08215
**On filtering data using reward models - often known by multiple names in the RLHF literature including “Best of N” or “rejection sampling” or “reranking” in the RLHF literature:**
Scaling Laws for Reward Model Overoptimization
https://arxiv.org/abs/2210.10760
Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback
https://arxiv.org/abs/2204.05862
There are many more
Technical Quality: 3
Clarity: 3
Questions for Authors: N/A
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their nuanced review and constructive feedback. We appreciate that the reviewer has found our paper "well-written" and the integration of preference learning to the model collapse literature to be "highly sensible". We took note of the reviewer's concerns and provide clarifications:
## Accounting for the accumulation of data
We thank the reviewer for referring us to the paper [1] and pointing out this interesting extension. We believe, together with the reviewer, that the setting of [1] could be adapted to show that accumulating data provides additional stability to the retraining loop and avoids collapse. However, we believe such a study is out of the scope of our work whose aim was to introduce and theoretically develop a new research question of model collapse from the view of preferences. We will, therefore, update our draft to provide additional clarification of this aspect and mention it as an exciting future direction.
## Clarification on Thm 2.1
We understand the reviewer's natural concern. The crucial point is that the curated distribution at time $t+1$, $p_{t+1}$, is obtained by learning a modified distribution constructed by **sampling from** $p_t$ and curating the samples using preferences. This implies that if a set has probability $0$ under $p_t$, it will never be sampled and therefore never be preferred over other samples. This means that this set will also have probability $0$ under $p_{t+1}$.
Finally, we note that the reviewer’s intuition is valid when the retraining step is not perfect (for example due to bounded expressivity of the class $\mathcal{P}$ involved in equation 4), or when noise is injected in the process. Then the support of $p_{t+1}$ is not necessarily included in the support of $p_t$ anymore. In that case, the self-consuming loop iteratively explores regions that had probability $0$ at initialization and converges to the maximal reward possible, validating the reviewer’s intuition.
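This imperfect-learning regime can be checked numerically; a minimal sketch (illustrative code, not from the paper, modeling the imperfect refit as Gaussian noise added to each kept sample):

```python
import math
import random

def imperfect_step(pop, reward, K, noise_sd):
    """Best-of-K softmax (Plackett-Luce) curation followed by an imperfect
    refit, modeled as Gaussian noise on each kept sample. The noise lets the
    support grow beyond its initial bounds, unlike the perfect-learning case."""
    new_pop = []
    for _ in range(len(pop)):
        group = random.sample(pop, K)
        weights = [math.exp(reward(x)) for x in group]
        winner = random.choices(group, weights=weights, k=1)[0]
        new_pop.append(winner + random.gauss(0, noise_sd))
    return new_pop

random.seed(0)
pop = [random.uniform(0, 10) for _ in range(2_000)]
for _ in range(30):
    pop = imperfect_step(pop, reward=lambda x: x, K=4, noise_sd=0.2)

# The population climbs past the initial maximum reward of 10: the loop
# explores regions that had probability 0 at initialization.
print(round(max(pop), 1), round(min(pop), 1))
```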
## Improvement of the experiments on MoGs
We thank the reviewer for their suggestion, which will help clarify the theoretical results. We provided in the additional pdf for figures a heat map of the level set of the reward when using $4$ centroids (Fig 1a). We additionally provided a heat map of the raw mixture of Gaussians (MoG) distribution (Fig 1b) and the corresponding limit distribution defined L154, as the renormalized MoG restricted to the set of maximal reward at initialization (Fig 1c). We also re-ran our experiments for this reward model and observed that the learned distribution converges as expected to $4$ Gaussians restricted to $4$ balls around the designed centroids (Fig 2a). We additionally plotted the reward variance and showed that it vanishes in the purely synthetic setting. This demonstrates that the reward variance can vanish independently of the overall distribution variance. We will clarify this in the updated manuscript as it is central to our analysis.
## Clarification of Thm 2.3, 2.4
The goal of these two theorems is respectively to provide a lower bound on the expected reward and an upper bound on the KL divergence of a self-consuming generative model loop **with respect to a reference distribution $p_{ref}$**. These theorems formally connect our results to the KL regularization in RLHF, which ensures that the KL divergence between the aligned model and the reference model is not too large. This is why only $p_{ref}$, and not $p_t$, appears on the right-hand side. The reviewer's observation is, however, insightful, as our proof proceeds by an induction argument in which each induction step uses Lemma A.1 (see line 563) and hence has a right-hand side dependent on $t$. We finally remove the dependence on $t$ by induction.
## Related work
We thank the reviewer for pointing us to these important references that we will cite and discuss in our revision. We will especially provide more discussion on the relationship between RLHF and rejection sampling fine-tuning. We now discuss the most relevant works mentioned by the reviewer (because of the space constraint we do not have space to discuss them all extensively here):
Note that [2,3] are concurrent work according to NeurIPS guidelines:
>papers that appeared online within two months of a submission will generally be considered "contemporaneous"
The key difference between [2] and our work is that they show the benefit of using feedback on the quality of a sample to prevent model collapse.
In [3], the authors investigate how the different stages of alignment affect a model’s generalization capabilities and output diversity. They empirically show that the output diversity of the RLHF policy is decreased w.r.t. the supervised finetuned policy, which is consistent with our theoretical insights (e.g. lem. 2.2, thm 2.1). However, there are major differences with our setting as their contribution is empirical, and we investigate the impact of iteratively retraining a model several times on synthetic samples while they study a single training round using RLHF.
[4] experimentally investigates the self-consuming loop, specifically in the case of LLMs, and evidences model collapse in that setting.
We thank the reviewer again for their review and detailed comments that helped strengthen the paper. We hope our answer here and in the global response allows the reviewer to consider potentially upgrading their score if they see fit. We are also more than happy to answer any further questions.
[1] Gerstgrasser, Matthias, et al "Is model collapse inevitable? breaking the curse of recursion by accumulating real and synthetic data" 2024
[2] Feng, Yunzhen, et al "Beyond Model Collapse: Scaling Up with Synthesized Data Requires Reinforcement" 2024
[3] Kirk, Robert, et al "Understanding the effects of RLHF on LLM generalisation and diversity" 2023
[4] Briesch, Martin, et al "Large language models suffer from their own output: An analysis of the self-consuming training loop" 2023
---
Rebuttal 2:
Title: Response to Authors' Rebuttal [Part 1]
Comment: Thank you to the authors for their response!
> Improvement of the experiments on MoGs
> We provided in the additional pdf for figures a heat map of the level set of the reward when using centroids (Fig 1a).
This is wonderful. Thank you for running these additional experiments - I really like them, and I think they'll improve the paper (in my opinion; if you disagree, you don't need to include them).
> Note that [2,3] are concurrent work according to NeurIPS guidelines:
Yes, then it's fine to not cite them. I tried to allude to that above ("Some may be concurrent with yours") but apparently didn't finish the sentence. I appreciate your care to the other citations.
---
Rebuttal 3:
Title: Response to Authors' Rebuttal [Part 2]
Comment: > We understand the reviewer's natural concern. The crucial point is that the curated distribution at time $t+1$, $p_{t+1}$, is obtained by learning a modified distribution constructed by sampling from $p_t$ and curating the samples using preferences. This implies that if a set has probability $0$ under $p_t$, it will never be sampled and therefore never be preferred over other samples. This means that this set will also have probability $0$ under $p_{t+1}$.
I'm not sure I buy this argument. I agree that if a set has probability $0$ for $p_t$, it will never be sampled and therefore will never be preferred over other samples. But you lose me for two reasons:
1. Depending on the choice of realizable distributions $\mathcal{P}$, the support might be the entire space of outcomes e.g., if $\mathcal{P}$ is the set of Gaussian distributions. In this case, no set would have probability $0$ for $p_t$. Your explanation then seems to hinge on a condition "if" that might not be applicable.
2. Even if the conditional statement is true i.e. there is some set with mass/density 0 under $p_t$, why is $p_{t+1}$ prohibited from extending its support to this set? In general, probabilistic models are often capable of placing mass/density on sets that were not in their training data.
Could the authors please clarify?
---
Rebuttal 4:
Title: Response to Authors' Rebuttal [Part 3]
Comment: > We thank the reviewer for referring us to the paper [1] and pointing out this interesting extension. We believe, together with the reviewer, that the setting of [1] could be adapted to show that accumulating data provides additional stability to the retraining loop and avoids collapse. However, we believe such a study is out of the scope of our work whose aim was to introduce and theoretically develop a new research question of model collapse from the view of preferences. We will, therefore, update our draft to provide additional clarification of this aspect and mention it as an exciting future direction.
I feel like the authors and I miscommunicated here. The point I was trying to raise is: what are realistic assumptions to make about how model-data feedback loops should be modeled? I wasn't so much interested in that other paper as much as I was interested in whether this paper faithfully captures the settings we care about ("I do not think that Equation (3) faithfully models reality. Specifically, it assumes that the -th model is fit to a mixture of (1) real data and (2) preference-filtered data from the -th model. But realistically, synthetic data should amass over time, as I believe is the case with the datasets that the authors mention in their introduction, e.g., LAION-5B.")
My thinking is that the assumptions of this paper are not especially realistic because synthetic data should increase over time and the total amount of data should increase over time too.
**TLDR: I think your assumptions are not realistic and I think this harms the significance & relevance of your paper.**
If I missed the response to this point by the authors, I apologize and I would appreciate being pointed in the correct direction. Thank you!
---
Rebuttal 5:
Title: Response to the Reviewer's Comment [Parts 1 and 2]
Comment: We are grateful to the reviewer for their time and engaging with us during this rebuttal. We answer below the reviewer’s additional questions.
## On whether the support of $p_{t+1}$ is included in the support of $p_t$
We acknowledge the reviewer's comments regarding the fact that the support of $p_{t+1}$ is included in the support of $p_t$. We understand the two points provided by the reviewer as follows:
>1) Depending on the choice of realizable distributions $\mathcal{P}$, the support might be the entire space of outcomes e.g., if $\mathcal{P}$ is the set of Gaussian distributions.
Note that the support being the entire space of outcomes is not a problem at all for our theory. Our only requirement is that **the reward is bounded over the support of $p_0$** (at initialization) by $r_*$ (Assumption 2.1), which implies that the reward is bounded for all timesteps (since the support of $p_t$ is included in the support of $p_0$).
There exist many bounded functions with unbounded support (e.g. the sigmoid function).
We believe that it is reasonable to assume that the intrinsic human reward is bounded.
>2) In general, probabilistic models are often capable of placing mass/density on sets that were not in their training data (because they generalize)
Our setting prevents this scenario from happening: we are working in a setting where we neglect the errors due to the finiteness of **model's capacity** and **training data**. In other words, we work in a setting where we have:
i. **infinite capacity regime**: *the set $\mathcal{P}$ of achievable distributions is the entire set of probability distributions* (lines 112-113 in lemma 2.1). In that case, equation 6 holds and hence the support of $p_{t+1}$ is included in the support of $p_t$.
ii. **infinite training data regime**: This comes from the fact that we are assuming that the next distribution minimizes the population likelihood (and not the empirical one) in equations 3 and 4 ($p_{t+1}$ is defined using an expectation on the distribution $p_t$ and $p_{data}$).
Finally, i), ii) together prevent the points 1) and 2) from happening in our setting.
Note that we mentioned i) explicitly in the statement of Lemma 2.1 “If $\mathcal P$ is the set of probability distributions on $\mathbb{R}^d$” and on Line 174.
The point ii) was indicated by the fact that we were using expectations and not finite sums in our paper.
We acknowledge that we should have made i) and ii) appear more clearly in our paper to avoid any confusion. **We will clarify and highlight our setting in the updated manuscript**. Current generative models are getting larger and larger and are trained on ever more data, with billions of parameters and datapoints. That is why we believe it is reasonable to neglect the errors due to the finiteness of **model's capacity** and **training data** in our theory, and that our results satisfyingly capture the reward-maximization phenomenon induced by human curation.
----------
edit: we updated this comment to enhance clarity of the response
---
Rebuttal Comment 5.1:
Title: Response to the Reviewer's Comment [Part 3]
Comment: We are grateful to the reviewer for this interesting discussion on whether our assumptions realistically reflect the practical setting in which today's large models are retrained on web-scale datasets.
We agree with the reviewer that the accumulation of data is arguably a realistic feature of web-scale datasets such as LAION-5B. In particular, it is expected that next-generation web-crawled datasets will incorporate synthetic images from previous years' generative models together with those from the current state-of-the-art generation. We did not incorporate such a feature in our setting as it would complicate the statements and the notations. The main focus of our work was on a new type of collapse centered on human preferences, which differs from the previous literature. However, we believe such an extension is relatively easy, and we now present how to address it:
Let $\\{\lambda_k^t\\}$ for $0 \leq k \leq t$ be a family of positive numbers such that, for all $k$, $\lambda_k^t$ is decreasing in $t$, normalized such that $\forall t\geq 0, \sum_{k=0}^t\lambda_k^t=1$. Consider that the data accumulates with proportions given by the family $\\{\lambda_k^t\\}$ for $0\leq k \leq t$. In that case, equation 6, which states $p_{t+1}(x) = p_t(x)\cdot H^K_{p_t}(x)$, becomes $p_{t+1}(x) = \sum_{k=0}^t \lambda_k^t p_k(x)\cdot H^K_{p_k}(x)$.
We believe that, similarly to our theory, it is straightforward to show that $E_{p_t}[e^{r(x)}]$ is increasing. Further assume that $\forall k, \lambda_k^t\overset{t\rightarrow \infty}{\rightarrow} 0$. In that case, the contribution of each $p_k$ to the retraining of $p_t$ at iteration $t$ decreases to $0$. We additionally believe that **in that case, the expected reward will converge to $r_*$, the maximal reward at initialization, and that $p_t$ will collapse to maximal-reward regions**.
We think that this case is realistic, as *the proportion of data on the web generated by any model is doomed to vanish as more models are trained and deployed*. In that setting, accounting for the accumulation of data would therefore not prevent the collapse of the self-consuming model to maximal reward regions. This constitutes an interesting extension and we will add to the updated manuscript precise statements, along with proofs if the reviewer believes it strengthens our work.
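A minimal numerical sketch of the accumulation update above, under illustrative assumptions (a toy discrete distribution, best-of-$K$ Bradley–Terry curation with $K=2$, and uniform weights $\lambda_k^t = 1/(t+1)$; none of these choices come from the rebuttal):

```python
import numpy as np

r = np.array([0.0, 0.5, 1.0, 2.0])        # illustrative bounded reward on 4 outcomes

def curation_factor(p, r):
    # H^K_p(x) for K=2, computed exactly by summing over the single competitor:
    # H(x) = sum_{x1} p(x1) * 2 e^{r(x)} / (e^{r(x)} + e^{r(x1)})
    e = np.exp(r)
    return np.array([np.sum(p * 2 * e[x] / (e[x] + e)) for x in range(len(p))])

history = [np.full(4, 0.25)]              # p_0: uniform
for t in range(50):
    lam = 1.0 / len(history)              # uniform accumulation weights lambda_k^t
    p_next = sum(lam * p * curation_factor(p, r) for p in history)
    p_next = p_next / p_next.sum()        # guard against numerical drift
    history.append(p_next)

rewards = [float(p @ r) for p in history]
# the expected reward increases monotonically despite mixing back old iterates
assert all(b >= a - 1e-12 for a, b in zip(rewards, rewards[1:]))
```

Even with all past iterates mixed back in, the expected reward keeps increasing in this sketch; since the weight of each fixed iterate vanishes, this is consistent with the convergence claim above.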
Finally, we mention that the work [1] is concurrent with ours, since it was first posted on arXiv on April 1st, 2024. This was an additional reason why we did not incorporate their setting in our study, as we became aware of such an extension only late in this project. | Summary: This paper explores the scenario where generative models are iteratively trained on self-generated data curated by human users with some implicit reward. The key idea is that each iteration of training on the self-generated data reweights the previous distribution based on the implicit reward, which converges to reward maximization as the number of iterations approaches infinity. The paper also studies the scenario where the curated self-generated data is mixed with natural data and analyzes its implications for the stability of iterative training. Experiments on synthetic data and CIFAR 10 validate the insights from the theoretical analysis.
Strengths: 1. This paper studies an interesting problem of generative models being iteratively trained on self-generated data curated by human users with some implicit reward. This is arguably an accurate description of what happens when new generative models are trained nowadays.
2. The theoretical framework analyzed the convergence and stability of iterative training with and without reference data.
3. Experiments on synthetic data and CIFAR 10 are interesting, especially the one on CIFAR with replay, which shows how bias amplification can be mitigated with natural data.
Weaknesses: 1. Human preference can be heterogeneous, so in some cases, Eq. (2) does not hold. It would be interesting to see if the theoretical analysis can be extended to a mixture of rewards.
2. This paper didn't provide experiments on realistic datasets and large models such as LAION and Stable Diffusion (SD). Thus, it's hard to map the theoretical insights to how we should train the next generation of SD. It would be really interesting if the experiments on CIFAR were reproduced on SD to see what realistic biases are picked up, how much replay is needed to mitigate that, etc.
3. Conceptually, retraining with data curation is related to iterative fine-tuning in language models (e.g., rejection-sampling-based SFT). I would like to see more discussion on that.
Technical Quality: 3
Clarity: 4
Questions for Authors: See Weaknesses.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: I think the authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for finding the problem we tackle “interesting” and “an accurate description of what happens when new generative models are trained nowadays”. We also appreciate that they find our experiments “interesting”. We now address below the key points raised in the review:
## Extension to mixture of reward
Extending our results to hold for a mixture of rewards is a very interesting point that cannot be straightforwardly tackled in our framework. We believe it is an exciting avenue for future work and refer the reviewer to our global response for a more detailed discussion. The crucial difference with our framework is that while we showed that the learned distribution concentrates around the maximal level set of the reward, in the presence of multiple rewards it is not clear to which level set the learned distribution will concentrate.
## Larger scale experiment
We acknowledge the reviewer's remark that experiments on larger-scale datasets would be interesting to shed light on the practical implications of such a collapse in terms of the expected reward. However, due to the computational cost of such an experiment, we could not perform it during the rebuttal period. We would, however, like to point out that existing studies already show how the style of a generative model's samples evolves when fine-tuned on synthetic and real data with preference optimization. For example, in [1] the authors retrain a generative model by iteratively fine-tuning with Direct Preference Optimization on a mixture of synthetic and real data, where the reward model favors real data. Figure 5 of [1] demonstrates that the style of large-scale vision models can be shifted by retraining on synthetic data using a preference model. In particular, the model's interpretation of “a very cute boy” drastically changes, illustrating how the reward influences the realistic biases that are picked up.
## Relationship with rejection sampling fine-tuning
We thank the reviewer for encouraging us to strengthen the discussion of the connection between iterative fine-tuning of language models and retraining with data curation. While there is already a large literature on RLHF for iteratively fine-tuning LLMs, the reviewer highlights rejection sampling as one way to view fine-tuning as a sampling problem amenable to probabilistic inference. Indeed, recent works [2,3] frame iterative fine-tuning as drawing samples—using rejection sampling, Twisted Sequential Monte Carlo, etc.—from the unnormalized posterior distribution $p(x) \propto e^{r(x)} p_0(x)$, where $p_0(x)$ is an initial generative model trained on real data. From this perspective, our framework studies the case where we curate data using a human reward and obtain $x \sim p(x)$, i.e. samples from the posterior, without access to the density. This allows us to then fine-tune $p_0(x)$ to approximate the posterior $p(x)$, which in our notation is one step of iterative fine-tuning on curated data. We will include a short discussion of this connection in our updated paper.
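As a hedged, self-contained sketch of this sampling view (a toy discrete $p_0$ and reward of our own choosing, not the actual setups of [2,3]): drawing $x \sim p_0$ and accepting with probability $e^{r(x)-r_{\max}}$ yields exact samples from $p(x) \propto e^{r(x)} p_0(x)$.

```python
import numpy as np

rng = np.random.default_rng(0)
p0 = np.array([0.4, 0.3, 0.2, 0.1])   # toy base model over 4 outcomes
r  = np.array([0.0, 0.3, 0.8, 1.5])   # illustrative bounded reward

# Rejection sampling: propose x ~ p0, accept with prob e^{r(x) - r_max} <= 1.
# Accepted draws are exact samples from p(x) ∝ e^{r(x)} p0(x).
proposals = rng.choice(4, size=200_000, p=p0)
accept = rng.random(200_000) < np.exp(r[proposals] - r.max())
samples = proposals[accept]

empirical = np.bincount(samples, minlength=4) / len(samples)
target = p0 * np.exp(r)
target = target / target.sum()
assert np.abs(empirical - target).max() < 0.01   # matches the tilted posterior
```

This is only the simplest member of the family; twisted SMC replaces the blind proposals with sequentially reweighted ones, but it targets the same unnormalized posterior.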
We thank the reviewer again for their valuable feedback and great questions. We hope our answer here and in the global response allows the reviewer to consider potentially upgrading their score if they see fit. We are also more than happy to answer any further questions that arise.
[1] H. Yuan, Z. Chen, K. Ji, and Q. Gu. Self-play fine-tuning of diffusion models for text-to-image generation, 2024.
[2] Zhao, Stephen, et al. "Probabilistic inference in language models via twisted sequential monte carlo.", 2024.
[3] Kong, Lingkai, et al. "Diffusion models as constrained samplers for optimization with unknown constraints.", 2024. | Rebuttal 1:
Rebuttal: We would like to thank all the reviewers for their detailed feedback, which has allowed us to strengthen the updated manuscript. In particular, we are heartened that all reviewers (WZgs, cUrK, ej3h, cgiY, orKH) found that our research question tackles an interesting, highly sensible, and timely problem. We are also glad that reviewers (WZgs, ej3h) appreciated the writing of the paper. We address below some key points that were raised in multiple reviews:
**Extension of our framework to using a mixture of rewards (reviewers WZgs, cUrK):**
We thank the reviewers for raising a very interesting point regarding the use of mixed rewards. Frameworks going beyond a single reward model are especially relevant in practical LLM alignment scenarios. An interesting reference on this topic is the recent work [1], which addresses such an extension by learning a preference model of samples given a prompt, $P(x ≻ x'|y)$ (as a function of the two variables $x, x'$), instead of the Bradley–Terry reward model $r(x)$ (less general when preferences are non-transitive); they refer to this approach as Nash Learning from Human Feedback.
We now outline how to extend our setting to a mixture of rewards:
First, we can introduce a new latent variable $u$ that describes the randomness in the reward used, which leads to the following expression of the curated distribution after one step of curation:
$$
p_{t+1}(x) = p_t(x) \cdot H^K_{p_t}(x) \quad \text{with} \quad H^K_{p_t}(x):= E_{x_1,\ldots,x_{K-1} \sim p_t, u} \left[\frac{K \cdot e^{r(x; u)}}{e^{r(x; u)} + \sum_{i=1}^{K-1}e^{r(x_i; u)}} \right]
$$
In our setting, we were able to prove that the expected reward increases and the distribution converges to the maximum level set of a unique reward (Lem 2.2, Thm 2.1). However, in the presence of multiple rewards, it is not straightforward that the rewards share the same maximal level sets. This may therefore yield interesting dynamics, and the convergence of $p_t$ may differ. We believe such an extension of our results is outside the scope of this work and think that it is a fascinating avenue for future work. For example, it may be interesting to study whether one reward component in the mixture dominates, thereby dictating the convergence, e.g. if it induces large differences between two samples. In that case, the distribution may converge to only one maximal level set, introducing a new model collapse behavior in which the mixture of rewards is dictated by a single reward. We added this discussion on the multiple-rewards setting and [1] to our revised manuscript.
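A toy numerical sketch of this dominance conjecture (a discrete state space, $K=2$, two illustrative rewards with disjoint maximizers, and $u$ uniform over them; all choices are ours, not from the paper): iterating the curated update with the mixture $H$ above, the larger-scale reward dictates where $p_t$ collapses.

```python
import numpy as np

r_a = np.array([4.0, 0.0, 0.0, 0.0])   # reward a: maximized at x=0, large scale
r_b = np.array([0.0, 0.0, 0.0, 1.0])   # reward b: maximized at x=3, small scale

def H(p, r):
    # exact H^K_p(x) for K=2 (a single competitor x1 ~ p)
    e = np.exp(r)
    return np.array([np.sum(p * 2 * e[x] / (e[x] + e)) for x in range(len(p))])

p = np.full(4, 0.25)
for _ in range(300):
    p = p * 0.5 * (H(p, r_a) + H(p, r_b))   # u uniform over the two rewards
    p = p / p.sum()

# the mixture collapses onto the maximizer of the dominating (larger) reward
assert p[0] > 0.99
```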
**Extended related work and contextualization of our results in the literature** We thank the reviewers for inviting us to develop our discussion of related works. We will include in the updated paper an in-depth discussion on rejection sampling finetuning (cUrK, ej3H, cGiY), RLHF (ej3H) and comparison of our setting with respect to the previous model collapse literature (orKH).
**Clarification and additional experiments** We thank reviewers orKH and ej3h for suggesting improvements to our experiments, which we believe will help clarify them and illustrate the theory better. In particular, our new experiment in Figures 1 and 2 of the figures pdf shows that the reward variance may vanish independently of the overall distribution variance. This underlines the crucial difference between model collapse from the viewpoint of previous works and the collapse of the reward variance that we introduce and develop in our work.
## References
[1] Munos, Rémi, et al. "Nash learning from human feedback." 2023.
Pdf: /pdf/e34cfb27ea97012773c6214f336a18d73c6f79b1.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: Self-consuming generative models are known to have collapse or stability problems, and the curation of synthetic data is often ignored. This paper theoretically studies the impact of data curation and proves that it optimizes the expected reward.
Strengths: 1. The paper is well-written, and works on synthetic data are extremely important to the field.
2. The connection between retraining with a mixture of curated data and original data and RLHF (Reinforcement Learning with Human Feedback) with KL regularization is novel.
Weaknesses: 1. The theoretical results are applied in a simplified setting, focusing on a distribution and reward on $x$ only, instead of considering a conditional distribution of $x$ given $y$ mimicking text-to-image generation. The theoretical results apply only to learning the distribution directly, without considering finite samples and optimization, and there is only one reward function. In recent works combining RLHF and diffusion models [1], multiple rewards are considered. How would the results hold with a (random) weighted sum of multiple rewards?
2. On the connection with model collapse: The authors motivate the results from previous literature on synthetic data leading to model collapse but do not discuss how and whether curation with a reward model will avoid model collapse. The reviewer thinks that improvements can occur when inconsistencies between text and image and implausible images are discarded through curation. However, other problems persist in model collapse, such as the inherent lack of diversity in synthetic data [2], and existing results show that human interaction with GPTs still produces data lacking diversity [3]. In this case, curation does not address the loss of diversity in synthetic data. The authors should discuss this clearly, especially since the paper is motivated by model collapse and spends considerable time discussing it.
3. Could the authors explain the contribution of Section 2.2.1? Comparing Sections 2.2.1 and 2.2.2, there seems to be no direct comparison to be made.
### Reference
[1] Liang, Youwei, et al. "Rich human feedback for text-to-image generation." *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*. 2024.
[2] Guo, Yanzhu, et al. "The Curious Decline of Linguistic Diversity: Training Language Models on Synthetic Text." *Findings of the Association for Computational Linguistics: NAACL 2024*. 2024.
[3] Padmakumar, Vishakh, and He He. "Does Writing with Language Models Reduce Content Diversity?." *The Twelfth International Conference on Learning Representations*.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weakness.
One related work: [4] proposes to use a correction function on the synthetic data to mitigate model collapse.
[4] Gillman, Nate, et al. "Self-Correcting Self-Consuming Loops for Generative Model Training." Forty-first International Conference on Machine Learning.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The limitations and societal impacts are well discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and review of our paper. We are glad that the reviewer finds our paper "well-written" and our connection with RLHF to be "novel" in the "extremely important" area of work on synthetic data. We address below the key points raised by the reviewer:
## Extension of the theory
### Theory for conditional distribution $p(x|y)$
As remarked by the reviewer, our theoretical results focus only on an unconditional distribution $p(x)$. However we believe that our theory can be easily applied to a conditional distribution $p(x|y)$. Indeed for any sampling of $x_1, \dots, x_K$ given a prompt $y$, and a subsequent curation by humans of these samples, we can apply our theoretical results conditioned on $y$ (especially because the reward $r(x,y)$ also depends on the prompt $y$). In particular, our results state that the expected reward conditioned on $y$, i.e. $E_{p_t(x|y)}[r(x,y)]$ will be maximized and converge to a variable $r^*(y)$ where $r^*(y)$ is given as in our Assumption 2.1 (but now conditionally on $y$). Note that this is valid for any $y$, hence our framework extends to conditional generation.
### Finite samples and optimization:
We agree with the reviewer that we work under the assumption of perfect learning. We believe that accounting for finite-sample optimization is a crucial extension, but it would necessitate a more involved framework beyond the scope of our paper, especially in the case of deep generative model optimization. However, we note that a starting point for such an analysis could be an assumption similar to Assumption 3 of [5], which incorporates finite-sample optimization into their framework.
### Mixture of reward
We thank the reviewer for referring us to an interesting related work [1], which investigates the use of rich human feedback to improve text-to-image models. This enhanced human feedback is no longer a single scalar score but consists of multiple evaluations, such as delimiting parts of an image with implausible content, or labels on prompt words that are misrepresented in the image. While we proved convergence of $p_t$ to the maximal level set in the presence of a single reward, this raises an interesting question about what dynamics would arise when the maximal level sets of the rewards in the mixture are disjoint. Please refer to our global response for more details on mixtures of rewards.
## Can curation prevent collapse?
The reviewer highlights an important point related to the difference between collapse of the distribution (i.e. the model only generates a single sample $x$ given $y$), and collapse of the expected reward to its maximum level set.
- Retraining with curated samples can avoid the former but not the latter, as mentioned on L71 of our paper:
> Retraining with curated samples both maximizes an underlying reward whose variance collapses and converges to maximum reward regions.
- We also refer the reviewer to the new experiments in the global response, especially Figs. 1 and 2, which illustrate how the reward's variance may vanish while the overall distribution's variance does not.
- However, such a collapse of the expected reward will induce **a decrease in diversity**, as samples with high but non-maximal reward will not be generated. This is additionally shown in our Fig. 7, where we show on CIFAR that, even as the average reward increases, the **FID consistently increases**.
Finally, we mention that we believe **both the reward and the quality are related, since it is expected that low-quality images would have low reward** following human standards, and understanding to what extent this holds is an interesting avenue for future work. We will make sure to clarify this point in the updated draft and discuss the two important works [2, 3] mentioned by the reviewer as follows:
In [2], the authors empirically investigate the linguistic diversity of generated text in the self-consuming loop. They consistently observe a decrease in diversity across the retraining iterations. However, unlike our work, they don’t study the impact of curation. In [3], the authors compare the diversity of essays written by humans, assisted either by GPT3, a feedback-tuned LLM (InstructGPT) or without LLM assistance. They found that when using InstructGPT, the overall diversity decreased. As pointed out by the reviewer, this shows that feedback-tuned models have a diversity decrease with respect to the original distribution and that human supervision is not sufficient to compensate for it. It is consistent with our theoretical insights: we show how retraining with curation incurs a collapse of the reward’s variance similar to feedback fine-tuning.
## Sec. 2.2.1
As rightfully pointed out by the reviewer, Sec. 2.2.1 revisits the framework proposed by [5] which differs from the rest of the section. We chose to incorporate this section as a preliminary to the rest of the section for the following reasons:
- It introduces the reader to how reusing real samples in the retraining loop can provide stability.
- It formulates the stability results of [5] using a Kullback-Leibler divergence term, which is new and is the same formulation as in Thm 2.4.
- It yields **a significant improvement on prior work** for the upper bound on the parameter $\lambda$ which [5] left as future work.
[5] Bertrand, Q., et. al. On the Stability of Iterative Retraining of Generative Models on their own Data. In ICLR 2024
We will make the setting distinction clearer in the updated draft to avoid confusion.
## Related work
We thank the reviewer for this reference to [4]. We note that this work is presently cited on line 47, but we are happy to provide more discussion.
We are grateful to the reviewer for their great questions and hope that our answers, in conjunction with the global response, have clarified them. We politely encourage the reviewer to ask any further questions they may have and, if they are presently satisfied with our responses, to consider a fresh evaluation of our paper.
---
Rebuttal Comment 1.1:
Comment: I appreciated the author's discussion on the multi-reward setting and on the difference between the collapse of distribution and the collapse of reward variance. I have increased my score. | null | null | null | null | null | null |
Can neural operators always be continuously discretized? | Accept (poster) | Summary: This paper studies the question of whether a neural operator (or a general diffeomorphism on an infinite-dimensional Hilbert space) can be continuously discretized, through the lens of category theory. It first proves that there does not exist a continuous approximation scheme for all diffeomorphisms. Then, it shows that neural operators with strongly monotone layers can be continuously discretized, followed by a proof that a bilipschitz neural operator can be approximated by a deep one with strongly monotone layers. Some further consequences are discussed.
Strengths: 1. The paper studies the discretization of a continuous operator, which is a source of error and an important issue in operator learning if one is not careful.
2. The paper is fairly comprehensive, encompassing both positive and negative results. The study of positive results contains theorems of different flavors.
3. Although the theory is based on the category theory, the presentation and explanation of the results are relatively clear and accessible for people who are unfamiliar with it.
Weaknesses: 1. Most results presented in the paper are purely theoretical and lack a quantitative or asymptotic estimate. For example,
* the notion of continuous approximation functor in Definition 8 does not care about the rate of convergence, and
* there is no estimate for the number of layers $J$ in Theorem 4.
2. No empirical results are there to support the theory. While this is a theoretical paper, some toy experiments that exemplify the theory would be very helpful.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In general, is there any assumption of the Hilbert space studied in this paper? For example, are the Hilbert spaces assumed to be separable?
2. What is the definition of the convergence of finite-dimensional subspaces used in this paper? Strong convergence of the projection operator? I do not think there is a standard definition for this in elementary functional analysis so it would be helpful to say it explicitly in the paper.
3. Have you studied the role of the bilipschitz constant in your theorems? For example, how does the number of layers $J$ in Theorem 4 depend on it?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the detailed comments and fair criticisms. We address all of these below:
1. "Most results presented in the paper are purely theoretical and lack a quantitative or asymptotic estimate."
The proof of Theorem 4 makes it possible to estimate the number $J$ of layers as a function of $\epsilon$, the Lipschitz constants of $F$ and $F^{-1}$, and the $C^2$-norm of $F:B(0,2r_1)\to X$ on a ball of double the radius $r_1$ on which we approximate $F$. More precisely, $$ J\leq C\epsilon^{-2}, $$ where $C$ depends on the radius $r_1$ of the ball where $F$ is approximated and on the $C^2$-norm of $F$ on the ball of radius $2r_1$. We will add an explicit formula for $C=C_0(r_1,\|F\|_{C^2(B(0,2r_1);X)})$ and its proof in the final version of the paper.
The main steps of the proof are the following: First, we use spectral theory of the compact operators $T_1$ and $T_2$ to find a finite dimensional subspace $W$ and the projection $P_W$ onto it so that $$\hat F(x)=(I-P_W)x+P_WF(P_Wx) $$ is a diffeomorphism that is close to the operator $F:X\to X$. Let $f:W\to W$ be the restriction of $\hat F$ to $W$. After this we deform $f$ to the invertible linear map $A_1:W\to W$, the derivative of $f$ at $x=0$, along the path $t\to f_t,$ $$f_t(x):=\frac 1t(f(tx)-f(0))+tf(0),\quad\hbox{for }t>0,$$ $$ f_t(x):=A_1x,\quad\hbox{for }t=0.$$ All operators $f_t:W\to W$ are bi-Lipschitz maps, $f_1(x)=f(x)$ and $f_0(x)=A_1x$. We consider the values $t_1=c_1\epsilon$ and $t_j=t_1+jh$, for $j=2,\dots,J_1$, and the operators $$ Id+B_j=f_{t_{j+1}}\circ f_{t_j}^{-1}\ : W\to W,\quad j=0,1,2,\dots,J_1.$$ We show that in the ball $B_W(0,2r_1)$ $$\hbox{Lip}(B_0)=\hbox{Lip}(f_{t_1}\circ A_1^{-1}-Id)\le C_2t_1=C_2c_1\epsilon.$$ Here, $C_2$ depends on $r_1$ and the $C^2$-norm of $F$ as well as on the Lipschitz constants of $F$ and $F^{-1}$, and $c_1$ is chosen to be sufficiently small.
Moreover, we show that for $j\ge 1$, $$\hbox{Lip}(B_j)\leq C\frac 1{t_j}(t_{j+1}-t_j)=C\frac 1{t_j}h,$$ where the factor $\frac 1{t_j}$ appears due to the multiplier $\frac 1t$ in the definition of $f_t$. To obtain $\hbox{Lip}(B_j)\le \epsilon$, we choose $h\le c_2\epsilon^2$. This causes the factor $\epsilon^{-2}$ in the bound for $J$.
In addition to this, we consider paths on the Lie group $O(n)$, $n=\dim(W)$, and show that there is a sequence of invertible matrices $A_j\in O(n)$, $j=1,2,\dots,J_2$, such that $$ Id+B_{J_1+j}=A_{j+1}\circ A_j^{-1}\ :W\to W$$ where $\hbox{Lip}(B_j)<\epsilon$ and $A_j$, for $j=J_2$, is either the identity operator $Id:W\to W$ or the reflection operator $B_e:x\to x-2\langle x,e\rangle e$. Here, we can choose the number of steps to be $J_2\leq c_3\epsilon^{-1}.$ Combining the operators $Id+B_j$ with $j=1,\dots, J_1+J_2$, we see that $F$ can be deformed to a linear operator or a reflection operator by composing $J=J_1+J_2$ operators of the form $Id+B_j$. This yields the bound for $J$.
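For the reader's convenience, the two step counts combine into the stated bound (a recap of the estimates above, with the constants $c_2$, $c_3$ from the proof and $C$ depending on $r_1$, the $C^2$-norm of $F$ on $B(0,2r_1)$, and the Lipschitz constants of $F$ and $F^{-1}$): $$ J_1\leq \frac{1}{c_2\epsilon^{2}},\qquad J_2\leq c_3\epsilon^{-1},\qquad J=J_1+J_2\leq C\epsilon^{-2}. $$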
2. "While this is a theoretical paper, some toy experiments that exemplify the theory would be very helpful."
We much appreciate the reviewer's comment. In Appendix A.1 we had given an example where we approximate the solution operator $x(t)\to u(t)$ of the nonlinear elliptic equation $$ \partial_t^2 u(t)-g(u(t))=x(t),\quad t\in (0,1), $$ with $u=0$ on the boundary, using a discretization that is based on the Finite Element Method. When $g$ is convex and the source term $x(t)$ is represented in the form $$x(t)=\frac {d^2}{dt^2}h(t),$$ the map $F:h\to u$ is a diffeomorphism in the Sobolev space $H^2$ with the Dirichlet boundary values. The approximation $F\to F_V$ can be obtained by the Galerkin method. We will expand upon this example in the final version of the manuscript. We will give an example of the no-go theorem using the elliptic (but not strongly elliptic) problem $$B_su:=-\frac d{dt}\bigg(\hbox{sign}(t-s)\frac d{dt} u(t)\bigg)=f(t),\quad t\in [0,1],$$ $$u(0)=0,\ \ \frac d{dt} u(1)=0.$$ For all $0\le s\le 1$ these equations are uniquely solvable, but we show that when we use FEM to approximate them, some of the obtained finite-dimensional problems have a zero eigenvalue and are not solvable.
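As an illustration of the positive example above, the following Python sketch (our own illustration, not code from the paper; the names `solve_bvp` and the choice $g(u)=u+u^3$ are ours) discretizes the boundary value problem $u''-g(u)=x$, $u(0)=u(1)=0$, with finite differences as a simple stand-in for the Galerkin scheme, and solves the discrete system by Newton's method. Note the rebuttal only assumes $g$ convex; here we take a monotone increasing $g$, which is what makes the discrete Jacobian invertible.

```python
import numpy as np

def solve_bvp(x_rhs, n=200, newton_iters=50, tol=1e-10):
    """Solve u'' - g(u) = x on (0,1) with u(0)=u(1)=0 and g(u)=u+u^3
    (monotone increasing), by Newton's method on a uniform grid."""
    h = 1.0 / n
    t = np.linspace(0.0, 1.0, n + 1)[1:-1]      # interior nodes
    x = x_rhs(t)
    # second-difference matrix D2; boundary values are zero (Dirichlet)
    D2 = (np.diag(-2.0 * np.ones(n - 1)) +
          np.diag(np.ones(n - 2), 1) +
          np.diag(np.ones(n - 2), -1)) / h**2
    g = lambda u: u + u**3
    dg = lambda u: 1.0 + 3.0 * u**2
    u = np.zeros(n - 1)
    for _ in range(newton_iters):
        R = D2 @ u - g(u) - x                   # residual of u'' - g(u) - x = 0
        J = D2 - np.diag(dg(u))                 # Jacobian; negative definite since g' > 0
        step = np.linalg.solve(J, -R)
        u += step
        if np.max(np.abs(step)) < tol:
            break
    return t, u
```

With a manufactured solution $u^*(t)=\sin(\pi t)$, the corresponding right-hand side is $x(t)=-\pi^2\sin(\pi t)-g(\sin(\pi t))$, and the computed solution agrees with $u^*$ up to the $O(h^2)$ discretization error.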
3. "Do the Hilbert spaces need to be separable?"
The no-go theorem applies also to non-separable Hilbert spaces. Naturally, for such spaces the partially ordered set $S(X)$ of finite dimensional linear subspaces of $X$, which is used as an index set, is huge. In most of our positive results on the existence of approximation operations we have assumed that the Hilbert space is separable, as we use finite-rank neural operators as approximators, and have used this in the orthogonal projectors $P_n$ from $X$ to $\hbox{span}(\phi_1,\dots,\phi_n)$, where $\phi_j,$ $j \in \mathbb Z_+$, is an enumerable orthonormal basis. However, it seems to us that our results can be generalized to non-separable Hilbert spaces $X$ that have a non-enumerable orthonormal basis. We will check carefully whether this generalization is possible.
4. "What is the definition of the convergence of finite-dimensional subspaces used in this paper?"
After Definition 7, line 223, we defined the limit $$ \lim_{V\to X} y_V=y. $$ This limit can also be defined by endowing the set $S_0(X)\cup \{X\}$ with the topology associated to the partial ordering of $S_0(X)\cup \{X\}$, that is, the topology generated by the sets $U_V:=\{W\in S_0(X):\ W\supset V\}\cup \{X\}$.
5. "Have you studied the role of the bi-Lipschitz constant in your theorems? For example, how does the number of layers $J$ in Theorem 4 depend on it?"
The bi-Lipschitz constraint (and form of neural operator layers) enable us to decompose the map into strongly monotone neural operator layers $G_k=Id+B_k$ (Theorem 4), where $\mathrm{Lip}(B_k)<\epsilon$. Here, $\epsilon \in (0,1)$ is arbitrary. The bi-Lipschitz constant appears in the proof of the estimate of $J$ (under 1.) We will mention this observation in the final version of the manuscript.
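The local inversion of such a layer $G = Id + B$ with $\mathrm{Lip}(B) < \epsilon < 1$ can be illustrated with a short Python sketch (our own illustration, not code from the paper): since $B$ is a contraction, the fixed-point iteration $x \mapsto y - B(x)$ converges to the unique $x$ with $G(x)=y$ by the Banach fixed-point theorem.

```python
import numpy as np

def invert_layer(B, y, iters=200):
    """Invert the layer G = Id + B at the point y, assuming Lip(B) < 1,
    via the Banach fixed-point iteration x <- y - B(x)."""
    x = np.array(y, dtype=float)
    for _ in range(iters):
        x = y - B(x)          # contraction: error shrinks by Lip(B) per step
    return x

# Example layer: B = 0.3 * tanh(A x), so Lip(B) <= 0.3 < 1
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
A /= np.linalg.norm(A, 2)     # normalize spectral norm to 1
B = lambda x: 0.3 * np.tanh(A @ x)
```

Applying `invert_layer(B, x0 + B(x0))` recovers `x0` to machine precision, since the error contracts geometrically with rate at most $0.3$.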
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. Since I am not absolutely familiar with category theory and other related work, I am unable to further raise my score, but I acknowledge that I have read through the rebuttal and it appears to be a nice paper overall.
---
Rebuttal 2:
Comment: Well, I felt sorry for the authors because a reviewer of NeurIPS, once well-known as a top theory-oriented conference of machine learning, **cannot raise his/her score** simply because he/she is **not familiar with category theory**. I even feel this is symbolic of the current state of a theory-oriented machine learning conference. It should not be the problem of the individual reviewer him/herself, but the problem of the conference's matching system that mistakenly assigns a complete amateur in category theory to review category-theoretic research.
This is a suggestion to the chairs for future avoidance of mismatches: reviewers should be examined on whether they have fundamental knowledge/background/understanding of the field. I am an expert in expressive-power analysis, but not at all in category theory or tropical geometry. Unfortunately, this kind of mismatch happens every year, so I am usually skeptical of any mathematical ''theorems'' published in machine learning conferences. | Summary: This paper focuses on the continuous discretization in operator learning. This is a very important question since it involves reducing the infinite-dimensional space to a finite-dimensional space in operator learning. The authors present cases where discretization is continuous and cases where it is not. The results are interesting and can be applied to design methods in operator learning.
Strengths: The proof is solid, and the paper is well-written and organized. I appreciate the results presented in this paper.
Weaknesses: Since this paper is submitted to NeurIPS and not a mathematical journal, I hope the authors can provide some practical examples, such as solving the Poisson equation \(\Delta u = f\) to learn the operator relationship between \(f\) and \(u\). By using methods like DeepONet and FNO, it would be beneficial to determine whether the discretization in these methods is continuous or not. I believe this could make the paper more accessible to a broader audience.
Technical Quality: 3
Clarity: 3
Questions for Authors: Mentioned in the Weakness.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: All right.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the suggestion to include an example based on the discretization of simple differential equations as it surely helps the readers to quickly understand the essential features of the no-go theorem of the approximation of invertible operators.
On the positive results, in Appendix A of our paper, we have considered nonlinear discretization of the operators $u\to -\Delta u+G(u)$; see the reply to reviewer mJde under point 3. To exemplify the negative result, we will add in the appendix of the paper the following example on the solution operation of differential equations and the non-existence of approximation by diffeomorphic maps: We consider the elliptic (but not strongly elliptic) problem (below called "PDE1")
$$
B_su:=-\frac d{dt}\bigg((1+t)\hbox{sign}(t-s)\frac d{dt} u(t)\bigg)=f(t),\quad t\in [0,1],
$$
with the Dirichlet and Neumann boundary conditions
$$
u(0)=0,\quad \frac d{dt} u(1)=0.
$$
Here, $0\le s\le 1$ is a parameter of the coefficient function, and $\hbox{sign}(t-s)=1$ if $t>s$ and $\hbox{sign}(t-s)=-1$ if $t<s$. We consider the weak solutions of PDE1 in the space
$$
u\in H^1_{D,N}(0,1):=\{v\in H^1(0,1):\ v(0)=0\}.
$$
We can write
$$
B_su=-D_t^{(2)}A_sD_t^{(1)}u,
$$
where
$$
A_sv(t)=(1+t)\hbox{sign}(t-s)v(t),
$$
parametrized by $0\le s\leq 1$, are multiplication operations that are invertible operators, $A_s:L^2(0,1)\to L^2(0,1)$ (this invertibility makes the equation PDE1 elliptic). Moreover, $D_t^{(1)}$ and $D_t^{(2)}$ are the operators $v\to \frac {d}{dt}v$ with the Dirichlet boundary condition $v(0)=0$ and $v(1)=0$, respectively. We consider the Hilbert space $X=H^1_{D,N}(0,1)$; to generate an invertible operator $G_s:X\to X$ related to PDE1, we write the source term using an auxiliary function $g$,
$$
f(t)=Qg:=-\frac {d^2}{dt^2}g(t)+g(t).
$$
Then the equation,
$$
B_su=Qg ,
$$
defines a continuous and invertible operator,
$$
G_s:X\to X,\quad G_s:g\to u.
$$
In fact, $G_s=B_s^{-1}\circ Q$ when the domains of $B_s$ and $Q$ are chosen in a suitable way. The Galerkin method (that is, the standard approximation based on the Finite Element Method) to approximate the equation PDE1 involves introducing a complete basis $\chi_j(t)$, $j=1,2,\dots$ of the Hilbert space $X$, the orthogonal projection
$$
P_n:X\to X_n:= \hbox{span}\{\chi_j:\ j=1,2,\dots,n\} ,
$$
and approximate solutions of PDE1 through solving
$$
P_nB_sP_nu_n=P_nQP_ng_n,\quad u_n\in X_n, \ g_n=P_ng.
$$
This means that the operator $B_s^{-1}Q:g\to u$ is approximated by $(P_nB_sP_n)^{-1}P_nQP_n:g_n\to u_n$, when $P_nB_sP_n:X_n\to X_n$ is invertible.
The above corresponds to the Finite Element Method where the matrix defined by the operator $P_nB_sP_n$
is $m(s)= [b_{jk}(s)]_{j,k=1}^n\in \mathbb R^{n\times n}$, where
$$
b_{jk}(s)=\int_0^1 (1+t)\hbox{sign}(t-s)\frac d{dt} \chi_j(t)\cdot \frac d{dt} \chi_k(t)\,dt,\quad j,k=1,\dots,n.
$$
Since we used the mixed Dirichlet and Neumann boundary conditions in the above boundary value problem, we see that for $s=0$ all eigenvalues of the matrix $m(s)$ are strictly positive, and when $s=1$ all eigenvalues are strictly negative. As the function $s \to m(s)$ is a continuous matrix-valued function, there exists $s\in (0,1)$ such that the matrix $m(s)$ has a zero eigenvalue and is not invertible. Thus, we have a situation where all operators $B_s^{-1}Q:g\to u$, $s\in [0,1]$, are invertible (and thus define diffeomorphisms $X\to X$), but for any basis $\chi_j(t)$ and any $n$ there exists $s\in (0,1)$ such that the finite-dimensional approximation $m(s):\mathbb R^n\to \mathbb R^n$ is not invertible. This example shows that there is no FEM-based discretization method for which the finite-dimensional approximations of all operators $B_s^{-1}Q$, $s\in (0,1)$, are invertible.

The above example also shows a key difference between finite- and infinite-dimensional spaces. The operator $A_s:L^2(0,1)\to L^2(0,1)$ has only continuous spectrum and no eigenvalues or eigenfunctions, whereas finite-dimensional matrices have only point spectrum (that is, eigenvalues). The continuous spectrum makes it possible to deform the positive operator $A_0$ to the negative operator $A_1$ in such a way that all operators $A_s$, $0\le s\le 1$, are invertible, but this is not possible for finite-dimensional matrices. We point out that the map $s\to A_s$ is continuous not in the operator norm topology but only in the strong operator topology; the fact that $A_0$ can be deformed to $A_1$ in the norm topology by a path that lies in the set of invertible operators is a deeper result. However, the strong operator topology is enough to make the FEM matrix $m(s)$ depend continuously on $s$.
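As a numerical sanity check of this construction (our own sketch, not code from the paper; names like `fem_matrix` are ours), the following Python snippet assembles $m(s)$ for piecewise-linear hat functions with $u(0)=0$ (Dirichlet) and a free right endpoint (Neumann), using the exact element integrals of $(1+t)\,\mathrm{sign}(t-s)$, and bisects on the sign of $\det m(s)$ to locate a parameter where $m(s)$ is singular:

```python
import numpy as np

def element_integral(a, b, s):
    """Exact integral of (1+t)*sign(t-s) over the element [a, b]."""
    F = lambda t: t + 0.5 * t * t          # antiderivative of 1 + t
    if s <= a:
        return F(b) - F(a)
    if s >= b:
        return -(F(b) - F(a))
    return (F(b) + F(a)) - 2.0 * F(s)      # sign of the coefficient flips at t = s

def fem_matrix(s, n=11):
    """Stiffness matrix m(s) for hat functions: u(0)=0 (Dirichlet), u'(1)=0 (Neumann)."""
    h = 1.0 / n
    I = [element_integral(e * h, (e + 1) * h, s) for e in range(n)]
    m = np.zeros((n, n))
    for j in range(n):                     # unknowns at nodes 1..n
        m[j, j] += I[j] / h**2             # element to the left of node j+1
        if j + 1 < n:
            m[j, j] += I[j + 1] / h**2     # element to the right of node j+1
            m[j, j + 1] -= I[j + 1] / h**2
            m[j + 1, j] -= I[j + 1] / h**2
    return m

def find_singular_s(n=11, iters=60):
    """Bisect on sign(det m(s)): det > 0 at s=0, det < 0 at s=1 for odd n."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if np.linalg.det(fem_matrix(mid, n)) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

At $s=0$ the matrix is positive definite and at $s=1$ negative definite, so by continuity the determinant vanishes for some intermediate $s$, where the smallest singular value of $m(s)$ is (numerically) zero, exactly as the argument above predicts.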
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply. I will keep my score. | Summary: This paper investigates theoretical limitations of discretizing neural operators on infinite-dimensional Hilbert spaces. The authors first prove a "no-go theorem" (Theorems 1,2) showing that diffeomorphisms between infinite-dimensional Hilbert spaces cannot generally be continuously approximated by finite-dimensional diffeomorphisms. Then, they provide positive results for certain classes of operators such as strongly monotone (Theorem 3) and bilipschitz neural operators (Theorem 4). They finally provide concrete example of approximation by finite residual ReLU networks (Theorem 5).
Strengths: - The universality of neural networks has been demonstrated in various settings. However, research on the approximation abilities of operators is relatively scarce. Particularly, the characterization of classes that cannot be approximated is intriguing. This study is important as it succinctly demonstrates the differences between finite-dimensional and infinite-dimensional properties in the manageable setting of Hilbert spaces.
- Moreover, the novel approach of expressing approximation sequences in terms of category theory is noteworthy.
Weaknesses: - On the other hand, the proofs are based on conventional analytical arguments rather than category-theoretic arguments. Therefore, the "category theory" framework might be somewhat exaggerated. It is expected that with refinement of notation and sentence structure, the description could become more perspicuous in the future.
- There is concern that the categorical description may have obscured the contributions typically seen in __traditional approximation theory__ papers. As the authors likely recognize, various topologies are used in function approximation, and this study focuses __only__ on approximation in the norm topology of Hilbert spaces, and does not negate "all considerable approximation sequences". So, the impossibility theorem presented here might simply be due to the norm topology being too strong. While the Hilbert structure sounds natural as a generalization of Euclidean structure, in reality, concepts like L2 convergence of Fourier series are quite technical and not necessarily an inevitable notion of convergence. It seems that in pursuit of an elegant categorical description, the diversity of function approximation may have been compromised.
Technical Quality: 3
Clarity: 3
Questions for Authors: In Definition 3, why $\sigma$ is imposed besides $G$?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors did not discuss the validity of assumptions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the detailed suggestions, criticisms and endorsement of the reviewer. We address all of these below:
1. "On the other hand, the proofs are based on conventional analytical arguments rather than category-theoretic arguments."
The proofs are indeed based on analytical arguments. We used category theory as a formalism (similar to the one for object-oriented programming) to describe approximation operations in all Hilbert spaces (including non-separable ones). As the collection of all Hilbert spaces cannot be considered as a set (cf. Russell's paradox) but can as a category, we chose to use the language of category theory. In the beginning of the paper, we considered an "approximation operation" to avoid difficulties related to formal category theory.
2. "The impossibility theorem presented here might simply be due to the norm topology being too strong."
We much appreciate the issue raised by the reviewer. We will include the analysis and discussion below in the revised manuscript, in an appendix on generalizations.
We formulated the approximation functor using norm topology as uniform convergence in compact sets is extensively studied in the theory of neural networks. Norm topology also makes it possible to consider quantitative error estimates. However, we agree with the referee that it is important to understand no-go results in weaker topologies. It turns out that our results can be generalized to a setting where the norm topology is partially replaced by the weak topology. Definition 7 is replaced by the following
Definition [Weak Approximation Functor]
When $S_0(X)=S(X)$ we define the _weak approximation functor_, which we denote by $\mathcal A: \mathcal D\to\mathcal B$, as the functor that maps each $(X,F)\in\mathcal O_{\mathcal D}$ to some $(X,S(X),(F_V)_{V \in S(X)})$ and has the following properties:
(A')
For all $r>0$, all $(X,F)\in \mathcal O_{\mathcal D}$ and all $y\in X$, it holds that $$\lim_{V\to X}\ \sup_{x\in B(0,r)\cap V} \ \langle F_V(x)-F(x),y\rangle_X=0.$$ Moreover, when $F:X\to X$ is the operator $Id:X\to X$ or $-Id:X\to X$, then $F_V$ is the operator $Id_V:V\to V$ or $-Id_V:V\to V$, respectively.
In (A') we added conditions on the approximation of the operators $Id$ and $-Id$. Similarly, the continuity of the approximation functors can be generalized to the case where convergence in the norm topology is replaced by the weak topology. The proof of the no-go theorem also generalizes to this setting; we will add in the Appendix a theorem which states that there are no weak approximation functors that are continuous in the weak topology.
3. In Definition 3, why $\sigma$ is imposed besides $G$?
This definition needs to be interpreted with care, which we will clarify in the revised manuscript. The appearance of the compact operators $T_1$ and $T_2$ makes the discretization of the activation function $\sigma$ and of the activation functions inside $G$ in Definition 3 different, and this is one reason why we have introduced both $\sigma$ and $G$. To consider invertible neural operators, we will below assume that $\sigma$ is an invertible function, for example, the leaky ReLU function. In the operation $$N:u\to u +T_2(G(T_1u)),$$ the nonlinear function $G$ is sandwiched between the compact operators $T_1$ and $T_2$. The compact operators map weakly converging sequences to norm-converging sequences. This is essential in the proofs of the positive results for approximation functors, as discussed in the paper. However, we do not have general results on how the operation $$u\to \sigma \circ u$$ can be approximated by finite dimensional operators in the norm topology, but only in the weak topology in the sense of the above
definition of the weak approximation functor. Nonetheless, one can overcome this difficulty, for example, by using the explicit form of the activation function and choosing different finite dimensional spaces $V_j$ in each layer of the neural operator.
We also address the question of whether the activation function $\sigma$ is relevant in universal approximation results. If the activation function $\sigma$ is removed, the operator $F$ becomes a sum of a (local) linear operator and a compact (nonlocal) nonlinear integral operator. Moreover, if we compose operators of the above form, the resulting operator, say $H$, is also a sum of a (local) linear operator, $W$, and a compact (nonlocal) operator, $K$. The Fr\'{e}chet derivative of $H$ at $u_0$ is equal to $W$ plus a compact linear operator. This means that the Fredholm index of the derivative of $H$ at $u_0$, equal to the index of $W$, is constant, that is, independent of the point $u_0$ where the derivative is computed. In particular, this means that one cannot approximate an arbitrary $C^1$-function $X\to X$ on compact subsets of $X$ by such neural operators. Indeed, for a general $C^1$-function, the Fredholm index may be a varying function of $u_0$. Thus, $\sigma$ appears to be relevant for obtaining universal approximation theorems for neural operators. Again, we will add this analysis to the final version of the manuscript.
4. "The authors did not discuss the validity of assumptions."
We appreciate this criticism and will address it in the final version of the manuscript. The key assumption is that the neural operator is bi-Lipschitz while being of the general form (4). The expressibility properties and the applicability in designing generative models are discussed in the global comments. We also point out that the strong monotonicity used as an assumption in several lemmas and theorems is an intermediate assumption that is absorbed in Theorem 4, where we consider approximation of bi-Lipschitz neural operators.
In Theorem 5, finite-rank residual neural operators appear as explicit natural approximators of bi-Lipschitz neural operators. Such a perspective has been empirically studied in [Behrmann et al., PMLR 2019, pp. 573-582], although in the finite-dimensional case.
---
Rebuttal Comment 1.1:
Comment: Thank you for detailed clarifications. I would like to keep my score as is.
> The proofs are indeed based on analytical arguments.
If so, I recommend the authors to reconsider the following phrases in the abstract and conclusion:
> Using category theory, we give a no-go theorem
> We used tools from category theory to produce a no-go theorem
It would be much impactful and significant if the authors could more directly point out any incorrectness of the proof or inappropriateness of the assumption in the previous studies.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response. We will address your points in the following way.
- If so, I recommend the authors to reconsider the following phrases in the abstract and conclusion: "Using category theory, we give a no-go theorem." "We used tools from category theory to produce a no-go theorem"
We appreciate the advice, and will follow it. We will replace
"Using category theory, we give a no-go theorem"
with
"Using analytical arguments, we give a no-go theorem framed with category theory."
and replace
"We used tools from category theory to produce a no-go theorem"
with
"We give a no-go theorem framed with category theory"
in the abstract and conclusion.
- It would be much impactful and significant if the authors could more directly point out any incorrectness of the proof or inappropriateness of the assumption in the previous studies.
There are several papers which use continuous functions (either as elements of infinite-dimensional function spaces or of metric spaces) to model images or signals and apply statistical methods and invertible neural networks or maps modeling diffeomorphisms. Often in these papers one derives theoretical results in the continuous models and presents numerical results using finite-dimensional approximations. In this process, errors are caused by the discretization and by the effect of changing the dimension of the approximate models. We believe that our work meaningfully addresses these questions as applied to injective/bijective neural operators, an important architecture. We hope that our paper inspires further study of these points. We can include citations to the following papers related to these issues.
The papers below combine neural networks and the approximation of diffeomorphisms, as applied to imaging:
- Elena Celledoni, Helge Glöckner, Jørgen N. Riseth, Alexander Schmeding. Deep neural networks on diffeomorphism groups for optimal shape reparametrization. BIT Numerical Mathematics (2023) 63:50.
- Lin Tian, Hastings Greer, François-Xavier Vialard, Roland Kwitt, Raúl San José Estépar, Richard Jarrett Rushmore, Nikolaos Makris, Sylvain Bouix, Marc Niethammer. GradICON: Approximate Diffeomorphisms via Gradient Inverse Consistency. CVPR 2023.
The papers below combine invertible neural networks and statistical models, especially for solving inverse problems (including imaging problems):
- Alexander Denker, Maximilian Schmidt, Johannes Leuschner, Peter Maass. Conditional Invertible Neural Networks for Medical Imaging. Journal of Imaging 2021, 7(11), 243.
- Ardizzone, L.; Kruse, J.; Rother, C.; Köthe, U. Analyzing Inverse Problems with Invertible Neural Networks. In Proceedings of the 7th International Conference on Learning Representations (ICLR 2019), New Orleans, LA, USA, 6–9 May 2019.
- Anantha Padmanabha, G.; Zabaras, N. Solving inverse problems using conditional invertible neural networks. J. Comput. Phys. 2021, 433, 110194.
- Denker, A.; Schmidt, M.; Leuschner, J.; Maass, P.; Behrmann, J. Conditional Normalizing Flows for Low-Dose Computed Tomography Image Reconstruction. In Proceedings of the ICML Workshop on Invertible Neural Networks, Normalizing Flows, and Explicit Likelihood Models, Vienna, Austria, 18 July 2020.
- Hagemann, P.; Hertrich, J.; Steidl, G. Stochastic Normalizing Flows for Inverse Problems: A Markov Chains Viewpoint. SIAM/ASA Journal on Uncertainty Quantification, Vol. 10, Iss. 3 (2022).
- Papamakarios, G.; Nalisnick, E.T.; Rezende, D.J.; Mohamed, S.; Lakshminarayanan, B. Normalizing Flows for Probabilistic Modeling and Inference. Journal of Machine Learning Research 22 (2021) 1-64. | Summary: The paper addresses the problem of discretizing neural operators, maps between infinite-dimensional Hilbert spaces that are trained on finite-dimensional discretizations. Using tools from category theory, the authors provide a no-go theorem showing that diffeomorphisms between Hilbert spaces may not admit continuous approximations by diffeomorphisms on finite-dimensional spaces. This highlights the fundamental differences between infinite-dimensional Hilbert spaces and finite-dimensional vector spaces. Despite these challenges, the authors provide positive results, showing that strongly monotone diffeomorphism operators can be approximated in finite dimensions and that bilipschitz neural operators can be decomposed into strongly monotone operators and invertible linear maps. Finally, they observe how such operators can be locally inverted through an iteration scheme.
Strengths: - The paper provides theoretical results addressing the challenging problem of discretizing inherently infinite-dimensional objects (neural operators)
Weaknesses: - The text and presentation require significant polishing. It contains numerous typos, poorly formulated sentences, and instances of missing or repeated words
- While the paper's theoretical focus is valuable, it lacks examples of specific neural operator structures that meet the theorems or remarks
- A more detailed discussion on the practical impact of this work, accompanied by examples, would be beneficial for the audience
Please note that my review should be taken with caution, as I am not familiar with category theory and did not thoroughly check the mathematical details. My feedback primarily focuses on the presentation and potential impact of the results rather than a rigorous validation of the theoretical content.
Technical Quality: 2
Clarity: 1
Questions for Authors: - Neural operators are typically defined between Banach spaces. Why does your theory focus on maps between Hilbert spaces instead?
- Comment: The work in [1] might have been relevant to cite as well.
- The main neural operator paper [2] develops theoretical results on the universal approximation theory of neural operators. How do your results relate to the ones in that paper?
[1] F. Bartolucci, E. de Bézenac, B. Raonić, R. Molinaro, S. Mishra, R. Alaifari, Representation Equivalent Neural Operators: a Framework for Alias-free Operator Learning, NeurIPS 2023.
[2] Nikola B. Kovachki, Zongyi Li, Burigede Liu, Kamyar Azizzadenesheli, Kaushik Bhattacharya, Andrew M. Stuart, and Anima Anandkumar. "Neural operator: Learning maps between function spaces with applications to PDEs," J. Mach. Learn. Res., 24(89):1–97, 2023.
Confidence: 2
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: The paper lacks examples of applications of the theorems to specific neural operator structures, and some further discussion on the practical impact of the results with examples
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the valuable comments and constructive feedback of the reviewer. We are pleased to address all of these below.
1. "The text and presentation require significant polishing."
We agree and sincerely regret this, and have already made many corrections to the manuscript.
2. "While the paper's theoretical focus is valuable, it lacks examples of specific neural operator structures that meet the theorems or remarks."
In our approximation results (Theorem 5 and Corollary 1), while we consider a large class of bijective neural operators, approximators can be obtained through finite rank neural operators (see Def. 10). Finite-rank neural operators are encountered, e.g., as FNOs [37], wavelet neural operators [Tripura and Chakraborty, Wavelet Neural Operator for solving parametric partial differential equations in computational mechanics problems, Comp. Meth. Appl. Mech. 2023], and Laplace neural operators [Chen et al, arXiv:2302.08166v2, 2023].
The neural operators $$F:u\to \sigma\circ (u+T_2(G(T_1u))),$$ studied in our paper include neural operators that are close to those introduced in Kovachki-Lanthaler-Mishra (KLM) [28, 31]. This is discussed in the comments for all reviewers.
3. "A more detailed discussion on the practical impact of this work, accompanied by examples, would be beneficial for the audience."
We much appreciate this suggestion. In Appendix A.1 we had given an example where we approximate the solution operator $x(t)\to u(t)$ of the nonlinear elliptic equation $$\partial_t^2 u(t)-g(u(t))=x(t),\quad t\in \Omega=(0,1),$$ with $u=0$ on $\partial \Omega$, using a discretization that is based on the Finite Element Method. In the case when $g$ is a convex function and the source term $x(t)$ is represented in the form $$x(t)=\frac {d^2}{dt^2}h(t),$$ the map $F:h\to u$ is a diffeomorphism in the Sobolev space $H^2$ with the Dirichlet boundary values $u(0)=0$ and $u(1)=0$. The approximation $F\to F_V$ can be obtained by the Galerkin method. We will expand upon this example in the final version of the manuscript.
4. "Neural operators are typically defined between Banach spaces. Why does your theory focus on maps between Hilbert spaces instead?"
Via our general framework, we found that strong monotonicity is one of the key ingredients to obtain a "positive" result, that is, preserving invariant discretization. Strong monotonicity is defined by using inner products, which is why we have focused on Hilbert spaces.
However, the no-go theorem which states that diffeomorphisms of Hilbert spaces cannot be continuously approximated by finite dimensional diffeomorphisms implies directly that the same "negative" result holds for general Banach spaces.
The main challenge in using general Banach spaces for the "positive" result is that the map $P_Y : X \to Y$, which maps a point to the closest point in the subspace $Y \subset X$, may be set-valued, that is, there may be several nearest points. Nonetheless, several of our results can be generalized to uniformly convex Banach spaces $X$. For these,
$$\|x\| = \|y\| = 1\hbox{ and }x\not = y\quad \implies \quad\|(x+y)/2\|<1.$$ In such a space, for a closed subspace $Y\subset X$ and $x \in X$, there is a unique closest point $y\in Y$ to $x$. (In fact, uniformly convex Banach spaces are strictly convex Banach spaces where the above inequality is given in a quantitative form.) This makes, e.g., the linear discretization $F \to F_V = P_V F|_V$ well defined. We will include a detailed discussion in the revision on generalizations to strictly convex spaces.
6. "Comment: The work in [1] might have been relevant to cite as well."
We thank the reviewer for bringing this paper to our attention. We will add [1] to our references.
6. "The main neural operator paper [2] develops theoretical results on the universal approximation theory of neural operators. How do your results relate to the ones in that paper?"
We agree with the reviewer that this is an important point. The approximation result in [2] is the universality of neural operators, i.e., to approximate any continuous map by a neural operator. Diffeomorphisms are contained in this result, which holds in the function space setting. However, even though a general diffeomorphism, $F :\ X \to X$, can be approximated by a neural operator, $F^{NO} :\ X \to X$, and the neural operator can be approximated by a finite dimensional operator, $F^{NO}_V :\ V \to V$, the proof of the no-go theorem implies that either the approximating infinite dimensional neural operators $F^{NO} : X \to X$ are not diffeomorphisms or that the approximation of neural operators by finite dimensional operators, that is, operation $F^{NO}\to F^{NO}_V$, is not continuous. Note that the present universal approximation results for neural operators have mainly analyzed the approximation of functions $F : X \to X$ by neural operators in norms of the spaces $C(K)$, where $K\subset X$ is compact, but not in the $C^1$-norms.
We will add this discussion to the Introduction.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response. As I mentioned in my initial review, my understanding of category theory is somewhat limited. My feedback has mainly focused on the presentation and potential impact of the results rather than an in-depth validation of the theoretical content. I am not in a position to increase my score. | Rebuttal 1:
Rebuttal: We thank the reviewers for their valuable comments and detailed questions. We will provide replies to the individual reviewers below, but first would like to make some general statements addressing a few issues raised by all the reviewers.
Common questions: Practical impact/examples of this work?
A practical implication of our result is the description of bi-Lipschitz neural operators as a composition of discretization invariant layers of invertible finite dimensional neural operators (i.e. neural networks). Such neural operators are useful in generative models where a probability distribution $\mu_0$ supported on a given model manifold $M_0$ is pushed forward by a map $F_\theta$ to a distribution that one would like to be close to an empirical target distribution $\mu_{data}$ supported on some submanifold $M_{data}$ of the Hilbert space $X$. (Here $\mu_{data}$ and $M_{data}$ are unknown and $\theta$ are optimized with samples from $\mu_{data}$). Suppose that we know a priori the topology of the data manifold $M_{data}$ and there is a diffeomorphism $f_0:M_0\to M_{data}.$ As all smooth finite dimensional submanifolds of a Hilbert space are close to some finite dimensional subspace $V$, one can start by assuming that there exists an embedding $f_1:M_0\to P_V(M_{data})$ that is close to $f_0$, where $P_V$ is a finite dimensional orthoprojection onto $V$. By considering the model manifold $M_0$ as a subset of $V$, we can extend the embedding $f_1:M_0\to V$ to a diffeomorphism $F_0:V\to V$. This can be done when the dimension of $V$ is sufficiently large [Puthawala et al., ICML 2022].
Furthermore, $F_0$ can be extended to a diffeomorphism $$F_{ext}=F_0\times Id_{V^\perp}:X\to X,$$ where $F_{ext}$ maps $x=v+w\in V\oplus V^\perp$ to $$F_{ext}(v+w)=F_0(v)+w.$$ The map $F_{ext}$ can be written as $$F_{ext}=Id+P_V\circ G\circ P_V,$$ where $P_V$ is a compact linear operator and $G=F_{ext}-Id$. By definition, the map $F_{ext}$ is a neural operator diffeomorphism. Thus, diffeomorphic neural operators can be used to obtain generative models. As the finite dimensional subspace $V$ is not a priori known, and its dimension depends on the accuracy required for the generative model, it is natural to consider infinite dimensional neural operators $F:X\to X$ and study their approximation properties.
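The identity $F_{ext}=Id+P_V\circ G\circ P_V$ can be sanity-checked numerically in finite dimensions (our toy sketch with a hypothetical diffeomorphism $F_0$, not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(0)
dim, dim_V = 6, 3                # ambient dimension and dim(V)

def P_V(x):
    """Orthoprojection onto V = span(e_1, ..., e_3)."""
    out = np.zeros_like(x)
    out[:dim_V] = x[:dim_V]
    return out

def F0(v):
    """A simple diffeomorphism of V (monotone, hence invertible)."""
    w = v.copy()
    w[:dim_V] = v[:dim_V] + 0.5 * np.tanh(v[:dim_V])
    return w

def F_ext(x):
    """F_ext(v + w) = F_0(v) + w for x = v + w in V + V_perp."""
    return F0(P_V(x)) + (x - P_V(x))

def G(x):
    """G = F_ext - Id."""
    return F_ext(x) - x

x = rng.standard_normal(dim)
# Verify F_ext = Id + P_V o G o P_V at a random point.
assert np.allclose(F_ext(x), x + P_V(G(P_V(x))))
```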
In our paper we show, in Theorem 3, that strongly monotone neural operators can be approximated continuously by finite dimensional neural operators that are diffeomorphisms (hence invertible). In Theorem 4 we show that any bi-Lipschitz neural operator (not necessarily strongly monotone) can locally be represented as a composition of strongly monotone neural operator layers. This implies that bi-Lipschitz neural operators can be approximated by a composition of invertible, finite dimensional neural networks in a continuous way. This makes invertible neural operators a class that behaves well under finite dimensional approximation. Our results can also be summarized by stating that neural operators conditionally serve as a class of diffeomorphisms of function spaces that are simple enough for well-working approximations but still sufficiently expressive (and may model a rich variety of deformations).
The neural operators $$F:u\to \sigma\circ (u+T_2(G(T_1u)))$$ that we study include neural operators close to those introduced in Kovachki-Lanthaler-Mishra (KLM) [28, 31]. We have assumed that the operators $T_1$ and $T_2$ are compact linear operators; in several cases these can be chosen to be identity embeddings between different function spaces, and such embeddings are compact.
Consider a KLM neural operator $F:X\to X$ of the form $$F:u\to \sigma\circ (u+S_2(H(S_1u))),$$ where $X=H^m(D)$ and $D\subset \mathbb R^d$ is a bounded set. Moreover, let $Y = C(\overline D)$ and $Z = C^{m+1}(\overline D)$, where $m>d/2$. Let $H:Y \to Z$ be a nonlinear (integral) operator, $$H(u)(x)=\int_D k_\theta(x,y,u(y))u(y)\,dy,$$ where $k_\theta$ is a kernel given by a neural network with sufficiently smooth activation functions $\sigma_j$ of the form $$k_\theta(x,y,t)=\sum_{j=1}^{J} c_{j}(x,y,\theta)\sigma_j(a_{j}(x,y,\theta)t+b_j(x,y,\theta)),$$ and let $S_1 : X \to Y$ and $S_2 : Z \to X$ be the identity embedding operators mapping between function spaces, $$S_j(u)=u.$$ Thus, $$F(u)(x)=\sigma\Big(u(x)+\int_{D}k_\theta(x,y,u(y))u(y)\,dy\Big).$$ The Hilbert spaces $X$, $Y$, and $Z$ are isomorphic, and by writing, e.g., $$S_2\circ H=T_2\circ G,$$ where $$T_2=S_2\circ J_Z^{-1},\qquad G=J_Z\circ H,$$ and $J_Z : Z \to X$ is an isomorphism, we can write $F$ in the form $$F(u)=\sigma\circ (u+T_2(G(T_1u))),$$ in which $T_1:X\to X$ and $T_2:X\to X$ are compact linear operators and $G:X\to X$ is a continuous nonlinear operator. In this way, the KLM operator $F$ can be written in the form studied in our paper.
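As an illustration of the integral operator above (our toy sketch, not the authors' implementation: a midpoint quadrature rule on $D=(0,1)$, with a simple stand-in kernel rather than the neural-network kernel $k_\theta$):

```python
import numpy as np

# Sketch: discretizing the nonlinear kernel integral operator
# H(u)(x) = \int_D k(x, y, u(y)) u(y) dy on D = (0, 1) by midpoint quadrature.

def H(u, grid, k):
    """Quadrature approximation of the integral operator at each grid point."""
    h = grid[1] - grid[0]                       # uniform grid spacing
    X, Y = np.meshgrid(grid, grid, indexing="ij")
    K = k(X, Y, u[None, :])                     # k(x_i, y_j, u(y_j))
    return h * (K * u[None, :]).sum(axis=1)     # sum_j k(x_i,y_j,u(y_j)) u(y_j)

n = 400
grid = (np.arange(n) + 0.5) / n                 # midpoint rule on (0, 1)
u = np.sin(np.pi * grid)

# Sanity check with the u-independent kernel k(x,y,t) = 1: H(u)(x) then
# reduces to the constant \int_0^1 sin(pi*y) dy = 2/pi.
out = H(u, grid, lambda x, y, t: np.ones_like(x * y * t))
assert np.allclose(out, 2 / np.pi, atol=1e-4)
```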
Furthermore, by choosing $k_{\theta}(x,y,u(y)) = k_{\theta}(x - y)$ and $D = \mathbb{T}^d$ as the convolutional kernel and the torus, the map $F$ takes the form of an FNO [37]. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Many-shot Jailbreaking | Accept (poster) | Summary: This paper studies the jailbreaking problem, where the goal is to obtain a harmful output from aligned language models. The paper addresses in-context-learning jailbreaking, where examples of malicious queries and answers are given before asking the desired question, and extends the previously studied few-shot jailbreaking to many-shot jailbreaking. Extensive experimental evaluation on their generated harmful questions and the standard HarmBench benchmark shows the effectiveness of the approach. Moreover, the authors warn about the dangers of long contexts and large model sizes with an empirical analysis of the scaling laws of the likelihood of harmful answers.
Strengths: - Clear writing, motivation and discussion of limitations of all the design decisions.
- Simple and effective approach.
- Extensive experimental evaluation.
- The scaling law analysis provides relevant insights, i.e., the increased jailbreaking success with larger context lengths and model sizes.
Weaknesses: I find some experimental aspects could be improved in the paper.
- **Missing experimental details and error bars:**
The authors report the negative log-likelihood (NLL) of harmful answers and the attack success rate (ASR). Nevertheless, they do not specify how many samples are taken to estimate the NLL. If the NLL is estimated as the average across different harmful answer targets, what is the standard deviation of the NLL? Do all harmful targets behave similarly to the average scaling law?
- **On the use suffixes in the MSJ + GCG attack:**
In Figure 3, I find the increase in NLL from 0-shot (standard GCG) to 1-shot very strange. The authors speculate that the GCG suffix is "heavily location-specific"; I have some questions about this.
The authors repeat the same suffix after each harmful question in the in-context demonstrations. Do the authors obtain the adversarial suffix in the 0-shot setup and then simply employ it in the multi-shot case?
If so, have the authors tried optimizing the adversarial suffix in the in-context setup? That is, appending the adversarial suffix to every question and optimizing it with GCG. To avoid problems with gradient estimation, the suffix could be put just in the last question. This approach has the same complexity as GCG, with the only disadvantage that long-context prompts take longer at inference.
Technical Quality: 3
Clarity: 3
Questions for Authors: See **Weaknesses**.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The main limitation of MSJ is that this attack can easily be detected by checking the Question-Answer format and rejecting the prompt if the number of such pairs is uncommonly high. Additionally, since Jailbreaking appears with very large context lengths, model servers could limit the context length to defend against such approaches. Authors accordingly discuss such limitations in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review!
We appreciate that you found the simplicity of MSJ to be a strength given its effectiveness, and highlighted the scaling laws as providing insight into the nature of the jailbreak, as well as mitigation attempts.
Here are our responses to some of your critique:
* **Augmenting experimental details and error bars:** Here's how we've tried to improve the paper accordingly:
* **Specifying number of samples:** We've added this information in the paper. All our measurements use dataset sizes of at least $512$, which ensures a standard deviation of at most approximately $5$ percent under a Bernoulli model (relevant for computing error bars on attack success rates).
* Concrete action taken: Paper edited with the requested information.
* **NLL information:** We've redone some of our plots with the error bars on the NLL values. You can find these in the attached. Please note the technique we used to reduce cross-datapoint variance between measurements within the same scaling law plots, described in Appendix C.2. This allows us to obtain clean scaling laws despite each individual measurement being relatively noisy. Note that the in-context power laws have been observed independently by Agarwal et al. as well, a work concurrent to ours.
* Concrete actions taken: Requested plots generated and provided in the supplementary rebuttal figures.
* **Question about GCG:**
* **New GCG results:** It is unfortunately difficult for us to run new GCG experiments in time for the rebuttal. However, we can still try to address a portion of the critique.
* **Mechanics of our GCG experiments:**
* The way we construct the GCG-augmented MSJ prompts is similar to what you describe: we first compute the GCG string that zero-shot jailbreaks the model on a variety of prompts, then stack [question + gcg-string + answer] pairs to construct the MSJ prompts. Just to be clear, we find a "universal" GCG string that works very well in the zero-shot setup, and use the same GCG string when we form the MSJ prompt.
* Our experiment attempts to answer whether one can stack MSJ on top of an existing GCG attack. Our result here is mostly negative — the zero-shot benefit of the attack doesn't translate to the many-shot setting.
* The opposite case that you've brought up (what if we take an MSJ prompt then optimize a GCG string on top of it) is also very interesting, and something we didn't try. A particularly interesting version of this could have been: optimize a GCG string for 10-shot MSJ prompts, then test it on 5 shot and 20 shot MSJ prompts. Our results on location specificity suggest that the benefit we get on 10-shot prompts will not transfer to 5-shot or 20 shot cases. We'll quite likely not have the opportunity to run this experiment by the end of the rebuttal phase, but with your permission we can mention this as a promising experimental extension in the paper.
[1] Agarwal, Rishabh, et al. "Many-shot in-context learning." arXiv preprint arXiv:2404.11018 (2024).
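As a side note on the error-bar arithmetic in the sample-size bullet above: under a Bernoulli model, the worst-case standard error of an attack-success-rate estimate from $n$ samples is $\sqrt{0.25/n}$. A quick sketch (ours, not the authors' code):

```python
import math

# Worst-case error bar on an attack success rate (ASR) estimated from n
# Bernoulli samples: the standard error sqrt(p*(1-p)/n) is maximized at p = 0.5.
def asr_standard_error(n, p=0.5):
    return math.sqrt(p * (1 - p) / n)

se = asr_standard_error(512)
print(f"standard error: {se:.3f}")   # prints: standard error: 0.022
# A ~95% confidence half-width is about 2*se, i.e. under 5 percentage points.
assert 2 * se < 0.05
```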
---
Rebuttal Comment 1.1:
Comment: Dear authors,
Thanks for your response. I appreciate the inclusion of the error bars in the NLL estimates.
Regarding the GCG experiment, I would suggest removing the experiment or doing it properly; the results are currently inconclusive. I do not believe that to "mention this as a promising experimental extension in the paper" is up to the NeurIPS 2024 quality standards. Even though the authors did not disclose the computational resources employed in the experimental evaluation, given the vast amount of experiments present in the paper, I believe they are more than capable of running this simple experiment before the camera-ready version.
I will maintain my score.
---
Reply to Comment 1.1.1:
Title: Thank you for your response.
Comment: Thank you for your response and engagement.
There might have been a misunderstanding regarding our response to the suggested GCG experiments.
**We can absolutely commit to adding the result of the experiment you suggested before the camera ready deadline** — just not by the end of the rebuttal, which was not possible due to the timing of the rebuttal period. The GCG results also aren’t load-bearing for the paper — we’d strongly argue that our submission would still be (hopefully comfortably) above the acceptance threshold even if we completely removed the GCG section as you suggest.
We’d like to also argue that the **existing results do deserve being shared with the adversarial robustness community as they are, provided we appropriately scope our claims such that they are supported by the evidence we give**. Our results show that composing MSJ and GCG such that MSJ is in the outer layer (i.e. MSJ(GCG(query))) does not result in a stronger attack — we believe this is an interesting finding that might be counterintuitive to some readers. You’re right that we don’t (yet) present results in the other order of composition (GCG(MSJ(query))). It’s important that we don’t over-claim here, and we’ve updated the language to make sure of this.
Here’s a summary of our claims:
* The results we presented at the time of submission are already worthwhile as they are, provided they are discussed appropriately (which, given our latest edits, they are).
* Adding the experiment you suggest would make the GCG section stronger, and we can commit to doing this before the camera ready deadline. Note that no other claim in the paper hinges upon how the result of this experiment turns out.
* Adding these results would only constitute a relatively minor revision of the paper, in comparison to the contributions elsewhere.
Thank you for your consideration. | Summary: In this paper, the authors investigated many-shot jailbreaking (MSJ), a jailbreaking method that exploits LLMs' ever-growing context window length by prefixing malicious requests with a large number of demonstrations of jailbroken dialogs. The constituents of MSJ prompts are relatively simple, but MSJ manages to breach the safety guardrails of various LLMs with substantially higher probability than existing jailbreaking methods as long as the number of demonstrations is large enough. The authors accordingly identified a possible power scaling relation between the likelihood of harmful responses and the number of demonstrations. The non-significant defense provided by different mitigation strategies further validated the difficulty of defending against such an attack.
Strengths: + The work reveals the length of the jailbreaking prompt as a novel attack surface (as well as places where positive controls can happen) which is inspiring.
+ This work comes with extensive experiments on a number of datasets.
+ The investigation in this paper is very comprehensive. It involves not only the effectiveness of a single jailbreaking attack, but also discusses the underlying pattern, potential fixes, extension to non-safety-related data, analysis about the influence of model size, etc.
Weaknesses: + The paper lacks numbers and tables. While it uses a lot of plots for visualization, numeric results are still valued.
+ MSJ features and requires a very long jailbreaking prompt to be effective. In Figure 18, where MSJ was compared against other jailbreaking methods, it uses 128 demonstrations; reading from Figure 1 and its caption, that means more than 4096 tokens. It might not be entirely fair to the other attacks, which use far fewer tokens. It would be nice to show the relation between the number of tokens and the ASRs.
+ MSJ requires access to a lot of jailbroken examples, which isn't readily available.
+ Consisting of so many harmful demonstrations, the jailbreaking prompt of MSJ is likely to be easily filtered.
Technical Quality: 4
Clarity: 3
Questions for Authors: + While it is mentioned in Appendix E that the LLM can learn the format in context, does this mean that MSJ can be formulated as a single-round conversation with an extra-long message instead of a dialogue with a long history? What is the difference between the two formulations, and do the results from Appendix E mean it is possible to hijack the special tokens in the conversational template through interactions like MSJ?
+ To what extent will the content of the demonstrations impact the jailbroken response? Appendix D.3 mentioned that the in-context demonstrations are expected to come from a sufficiently wide distribution, but what if the demonstrations share a clearly identifiable feature that is non-existent in the target domain? As excessive influence of the context on the jailbroken response might not be a desirable thing, is it possible that longer conversations suffer more from these issues?
+ Why is the investigation dedicated to many "shots" instead of just a long prefix? As there have not been many jailbreaking attacks that come without a moderate token budget, it is unclear whether the mere length of the MSJ prompt is contributing more to the success of the jailbreak. There have been studies which juxtapose obfuscated malicious requests with unrelated questions in a single utterance and also achieve promising results.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: The authors identified the need for the ability to set the conversation history and the possible lack of robustness under demonstration target-domain shift as the two major limitations of the work. However, the cost of collecting quality demonstrations, the chance of being filtered, cost effectiveness, etc. also limit the use of MSJ. The authors discussed the broader impact of this work in Appendix A. What MSJ reveals can draw people's attention to the context window length as a previously less-explored perspective for controlling model behaviors. A responsible disclosure meeting was also held among model providers to share the findings and attempts at mitigation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your encouraging review!
We appreciate that you highlighted our focus on context length as a novel attack surface, which was the inspiration behind our work. We also appreciate that you found our experiments comprehensive.
We've tried to address some of your feedback — some directly on the writeup. Please take a look!
Addressing critique:
* **Tables and numbers:** Thank you for this suggestion, we agree with you! We've added a portion of our results (especially the results on the Malicious Use-Cases dataset) in Appendix D.
* Concrete action taken: Requested tabular data provided. Please find two of the tables in the supplementary rebuttal figures.
* **Longer prompts:** Thanks for bringing this up! The effect of prompt length on the robustness of harmlessness training was studied in [1], Section 5.2. The authors find that prepending long conversations to the prompt and then asking adversarial questions does not manage to reduce the robustness of the Llama 2 models. This finding suggests that length alone is not a significant causal factor in jailbreaking success.
There do exist methods (such as Greedy Coordinate Gradient, which we discuss in the paper) that can find very short prompt suffixes that reach high degrees of jailbreaking success. A significant caveat is that this class of attacks uses gradient information, something MSJ doesn't make use of. There exist jailbreaks such as that discussed in [2] that are a lot more effective than MSJ in the bounded-context setup. We view the fact that the effectiveness of MSJ improves with context length as a strength: the performance of many jailbreaks such as [2] is constant in the number of tokens available; they don't get better with increasing context length. On the contrary, MSJ gets much better, and in a predictable power-law relation.
[3] links token length and steerability in an adversarial robustness context: this paper suggests that, as expected, having longer contexts makes it possible to steer model behavior more easily. (Note that this paper finds adversarial attacks on token activations.)
* **Already-jailbroken examples are (unfortunately) readily available:** Unfortunately, finding jailbroken examples via a Google search is trivial: simply look at the HarmBench dataset, which has publicly available question-harmful_response pairs. Also, with the release of Llama 3 400B, we will soon have open-source, fully jailbroken models from which it is trivial to generate harmful responses.
* **Data filtering:** Thanks for bringing this up! We are actively following promising leads on how Many-shot Jailbreaking interacts with filtering-based methods, but are not in a position to be able to share some of our results just yet.
Answers to questions:
* **Single conversation vs. dialogue:** This is a great question! The vanilla version of MSJ that we present in the paper does not extend to single-turn conversations; the many-shot structure is quite important for the attack to be effective. That being said, we're actively following some leads in this direction. For the attack to be effective on platforms such as ChatGPT and Claude.ai, not only does the attack need to be single-turn, but it also needs to be effective at circumventing other safety layers that detect jailbreak attempts. To do this effectively, one has to consider:
1. How the effectiveness of separate evasions techniques stacks with MSJ
2. Whether there could be MSJ specific evasion techniques
We hope that this presents a rich research agenda for the robustness community to pursue!
As an aside, all companies that are offering their language models via APIs allow for inserting faux steps in the dialogue history, making MSJ trivial to execute.
* **Discussion on diversity of in-context exemplar distribution:** This is an interesting question that touches upon the fundamentals of how in-context learning handles out-of-distribution datapoints. To our knowledge, there doesn't exist a set of general results that can help us here. We expect there to be a fair degree of domain-specific effects here — maybe certain OOD behaviors will be more difficult to elicit via MSJ. Empirically, we find a monotonic relationship between the prompt diversity and chance of transfer to OOD data.
* **Long prefix vs. many-shot structure:** We omitted this kind of analysis from the paper, as it has already been conducted by [1]. They tested the consequences of long, unstructured prompts on robustness and mostly did not identify any significant degradation in performance. This points to the importance of the many-shot structure of the prompt.
Thanks again for your review and insightful questions. Please let us know if you have any further questions, and (respectfully) consider updating your score if we have addressed your concerns.
[1] Xiong, W., Liu, J., Molybog, I., Zhang, H., Bhargava, P., Hou, R., Martin, L., Rungta, R., Sankararaman, K. A., Oguz, B., Khabsa, M., Fang, H., Mehdad, Y., Narang, S., Malik, K., Fan, A., Bhosale, S., Edunov, S., Lewis, M., Wang, S., and Ma, H. Effective long-context scaling of foundation models, 2023.
[2] Liu, Yi, et al. "Jailbreaking chatgpt via prompt engineering: An empirical study." arXiv preprint arXiv:2305.13860 (2023).
[3] Fort, S. Scaling laws for adversarial attacks on language model activations, 2023.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: I would like to thank the authors for the detailed response. The tabular results in the complementary document are very helpful. However, a number of concerns remain:
1. The authors referred to Section 5.2 of [1] and believed that it confirmed that longer context alone is not the major reason MSJ works. However, the experiment in [1] uses a single and elementary implementation which prepends a long benign context. It is questionable whether the results are sufficient for the authors to conclude that existing attacks "don't get better with increasing context length". Additionally, it is unclear whether the experiment in [1] uses a single-round or a multi-round setup. As the authors believe that the "many-shot structure is quite important", I recommend they conduct the experiments with a long dialog history instead of just a long context on their own.
2. The authors referred to HarmBench (and Google) as a potential source of jailbroken query-response pairs. However, HarmBench wasn't mentioned anywhere in the paper. What are the demonstrations used in the MSJ experiments? Is there a noticeable quality difference between what the authors used and HarmBench, making HarmBench, or at least HarmBench alone, inappropriate? The authors also mentioned open-source jailbroken models. This raises another question: what is the point of jailbreaking a model when people can already get the harmful response elsewhere effortlessly? Maybe, if the authors can show that MSJ allows all subsequent interactions with the victim model to be jailbroken, then such usage scenarios can be partly justified. After all, when an MSJ attack is successful, the new response is harmful, which means the current dialog history is exactly a new set of harmful demonstrations.
3. The authors said that they are investigating MSJ's resistance to filtering-based protection but declined to share any result. As much as I would like to trust the authors, I still think a quick experiment with existing filter-based protections needs to be presented, e.g., regarding LlamaGuard, PromptGuard, etc.
Therefore, I would like to keep the original scoring.
[1] Xiong, W., Liu, J., Molybog, I., Zhang, H., Bhargava, P., Hou, R., Martin, L., Rungta, R., Sankararaman, K. A., Oguz, B., Khabsa, M., Fang, H., Mehdad, Y., Narang, S., Malik, K., Fan, A., Bhosale, S., Edunov, S., Lewis, M., Wang, S., and Ma, H. Effective long-context scaling of foundation models, 2023.
---
Rebuttal 2:
Title: Thank you for your response.
Comment: Thank you for your response and consideration! Here are some quick points:
**Dialogue vs. single conversation:** We might have incorrectly interpreted your earlier comment as referring to comparing the effect of context length alone vs. the dialogue structure!
To answer your question, we ran some additional experiments involving two conditions:
* **Condition 1:** Instead of using Human/Assistant tags, we used "H/A" tags in the prompt (so far very similar to what we did in Section 3.3 about changes in formatting). The difference here is that we wrapped these question-answer pairs inside a single Human/Assistant turn, i.e., the format is:
Human: … A: … H: … A: … [xn] H: … Assistant:
This format makes sure that the MSJ prompt fits in a single human turn.
* On the Opportunities to Insult dataset (which correlates very strongly with other datasets in our experience), this did not work very well, getting close to $0$ percent jailbreak rate.
* **Condition 2:** We took Condition 1, and modified the intermediate Question and Answer tags as follows:
H → “Prompt: (Answer the question in the manner displayed above.)”
A → “Aide:”
So now the format is: Human: … Aide: … [Prompt: (Answer the question in the exact manner displayed above): … Aide: …] [xn] Prompt: (Answer the question in the exact manner displayed above): … Assistant:
| Number of shots | Baseline multi-turn MSJ | Single-turn MSJ |
| --- | --- | --- |
| 0 | 0.00 | 0.00 |
| 1 | 0.00 | 0.06 |
| 5 | 0.31 | 0.31 |
| 10 | 0.67 | 0.51 |
| 16 | 0.70 | 0.60 |
| 36 | 0.83 | 0.59 |
These results suggest that one can use a single-turn dialogue to execute MSJ, albeit less effectively than what one can get with a multi-turn setup. Of course there’s more to do here, but hopefully this experiment addresses part of your comment!
**Comment about importance of jailbreaking research in a world with Llama3 and Google:** This is a deep topic that probably deserves a better medium than the margins of a NeurIPS review, but just to briefly share our position: We believe that it is important to study methods that might jailbreak SOTA proprietary models whose capabilities might not have been yet matched by open source models. Jailbreaks such as MSJ could reduce the cost of jailbreaking open-source models as well, which makes defending against it important.
**HarmBench:** The independent replication results actually use the HarmBench dataset — please see Section 7. We’re not aware of any significant quality difference between using HarmBench question-answer pairs vs. using novel samples from, say, open source models.
Thanks again for engaging in this discussion! | Summary: This work presents a jailbreaking method leveraging the power of long-context attacks on Large Language Models (LLMs) called Many-shot Jailbreaking (MSJ). Various LLMs are tested and evaluated on their responses. Extensive experiments are performed with a specific LLM (called MODEL in the paper for anonymity) in different tasks and use cases. The authors claim that the jailbreaking attacks can be characterized by power laws, meaning that the effectiveness increases when the number of shots increases. The effectiveness of many-shot jailbreaking is discussed across tasks, models, and formatting. Moreover, various mitigation techniques based on supervised finetuning, reinforcement learning, and prompt-based defences are analysed and evaluated.
Strengths: - The paper is well-written and explained.
- Authors’ claims are well supported by extensive experiments and well-analysed results.
- The work presents interesting and useful results about jailbreaking attacks as the fact that they can be characterized by power laws and that they can be more effective in larger models.
Weaknesses: - The work seems incremental to the related work cited Agarwal et al., Many-shot in-context learning, 2024.
- Many-shot jailbreaking is assessed on all the presented LLMs only for the task of psychopathy evaluation. For the rest of the tasks, only a specific LLM (called MODEL for anonymity) is used.
- It would have been useful to provide the constructed dataset for reproducibility.
Technical Quality: 3
Clarity: 3
Questions for Authors: In functional form (1), do C and α have a particular meaning?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: One limitation is that most of the experiments were performed on a specific LLM. Only the psychopathy evaluation task run across different models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review!
We appreciate that you've found our results interesting and useful. The cleanness and reproducibility of the scaling laws genuinely surprised us when we first obtained the results.
Here's our attempt at addressing some of your critique:
* **Connection to Agarwal et al:**
* Agarwal et al. is concurrent work, as happens in a field that moves as fast as ours! All of our experiments had already finished, and the writing had been more or less locked, by the time Agarwal et al.'s work came out. In other words, Agarwal et al.'s results had no causal influence on our ideas, experiments, and presentation.
* Concrete action taken: Revise the sentence in which Agarwal et al. is cited, and emphasize that it's concurrent work.
* Additionally, we believe that (1) our focus on safety, (2) our detailed analysis of mitigations, and especially (3) our structural description of "what it means to address many-shot jailbreaking" are contributions exclusive to our submission. Elaborating on item 3, the concrete measurement we prescribe (the slope of the in-context learning curve) provides a very clean metric for measuring progress towards addressing MSJ.
* **Reason why all negative-log-likelihood experiments were done on the psychopathy dataset:** The simple reason is that we don't have full log-likelihood access to some of the proprietary models, and working with a yes/no dataset like the psychopathy dataset was the only way we could demonstrate that the power-law trend is a general phenomenon. Unfortunately, since we collected our results, some of the companies have completely shut down log-prob access, making this kind of experiment impossible even for the psychopathy dataset.
* **Releasing datasets for reproducibility:** Releasing datasets is somewhat tricky for a paper like ours: releasing the dataset immediately makes it possible to run our proposed attack on any model in the world! Luckily, HarmBench actually does have data of the form needed to reproduce our results. This is why our independent replication results in Section `Independent Replication on HarmBench` are so important.
Answers to questions:
* Disambiguating the equation: C is the offset, α is the slope and K controls the infinite-limit lower bound.
* Concrete action taken: Added this information in the writeup.
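For concreteness, a hedged illustrative sketch of a power law consistent with these symbol roles (the paper's exact Equation (1) is not reproduced here; this reconstruction is an assumption based on the descriptions above, with $n$ denoting the number of shots):

```latex
\mathrm{NLL}(n) \;\approx\; C \, n^{-\alpha} + K
```

Under this form, $C$ sets the offset (the intercept of $\log(\mathrm{NLL}(n) - K)$ against $\log n$), $\alpha$ is the slope of that log-log line, and $K$ is the lower bound approached in the infinite-shot limit $n \to \infty$.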
If you find our response and paper improvements satisfactory, please consider improving your score! In any case, we'd be happy to answer any follow-up question you might have.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed rebuttal response and additional results. My concerns are addressed and I am increasing my score. | Summary: This paper introduces a novel jailbreaking attack that exploits the extended context capabilities of the most advanced large language models. The authors conduct an in-depth analysis of various aspects of the attack, including its effectiveness across different models, the significance of turn formatting, its combination with other attack methods, and how important it is that the topic of the example matches with the target topic. Furthermore, the study explores the scaling laws for the attack, examining its efficacy in relation to model size and attack length. Additionally, the authors investigate potential mitigation strategies, including both fine-tuning and reinforcement learning approaches, as well as prompt-based techniques.
Strengths: **Effective attack**. The attack is very effective, with a high percentage of harmful responses generated across a number of victim models. The attack is also relatively effortless, requiring little manual work and, once the dataset of harmful response examples is generated, it requires only one query.
**Interesting analysis**. The analysis performed by the authors is extremely in-depth and covers interesting aspects of the attack.
**The attack is robust to formatting changes**. The attack is somewhat robust to changes in the formatting of the turns, which is a significant advantage for an adversary with limited access to, or knowledge of, the victim model.
**Mitigations**. The authors test a number of mitigations, including alignment fine-tuning, reinforcement learning, and prompt-based techniques, and show that they are not very effective and only change the scaling law's intercept.
**Independent replication work**. The authors had their discoveries independently replicated by an independent team on a slightly different set-up (benchmark and model), which somewhat compensates for the lack of experimental details.
Weaknesses: **Poor experimental details**. The experimental details are not very complete because they are run with "proprietary code and models". This is partially compensated by the independent replication work, but still not ideal. There are some experimental details in the appendix, but they are not referenced in the main text and not easy to navigate. I suggest the authors provide more details on the experimental setup in the main text.
**Not fully clear evaluation algorithms**. While the authors provide in the appendix pseudo-code to show how they evaluate the NLL of the harmful responses and the percentage of harmful responses, it would be helpful if they could give a high-level idea of the metrics, in prose, in the main text of the paper. Moreover, Listing 1 is incomplete on line 5. Is the percentage of harmful responses computed with the refusal classifier described in Appendix C.1.1? If so, it should be at least mentioned in the main text.
Technical Quality: 3
Clarity: 2
Questions for Authors: - How important is the quality of the examples generated by the helpful-only model? Would you expect this attack to work with a model such as Mistral-7B? This is important because an adversary might not have access to a helpful-only model that is as powerful as a proprietary model.
- How would the attack perform if the adversary created a false conversation as part of a user message, as if the adversary were using a platform such as ChatGPT or Claude.ai? Did you try this? I see you listed the fact that the attack would not work as one of the limitations, but at the same time the attack seems to be somewhat robust to different types of turn formatting, so the claim that the attack would not work *at all* comes as a surprise to me.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors discuss some of the limitations of their work, but do not list the lack of experimental details as one of them.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your critique and questions.
We appreciate that you found our scaling and mitigation analysis interesting, which we perceive to be some of the central contributions of our work. We also thank you for highlighting the importance of the independent replication.
OpenReview doesn't allow us to upload updated versions of the paper, so we'll instead explicitly mark all the changes we have already made in the draft in our response; please see the bullet points labelled "Concrete action taken".
**Regarding experimental details:**
* **Balancing anonymity and transparency:** We completely understand your concern and genuinely sympathize with it. A major reason we weren't able to be more transparent in our reporting is that doing so runs the risk of compromising the integrity of the anonymous review system: revealing too much identifying information might disqualify us from the conference. If the paper gets accepted, we'll be able to add more details to our experiments, such as model names, API endpoints, etc. As you've noted, the independent replication result is very valuable in this scenario, and gave us a lot more confidence that our results are quite general and setting-independent.
Relatedly, thank you for flagging that some of the sections in the Appendix are not being properly referred to! We've tried to fix all instances where we thought this happened.
**Concrete action taken:** Identified missing references to Appendix sections and added them.
- - - - - - - -
**Towards clearer evaluation algorithms:**
* **Further clarity on NLL computations:** Thank you for your feedback here. We've tried to address this critique with the following actions.
* **What Listing 1 and Listing 2 do:** Listing 1 arranges the question-answer pairs in a list and grabs consecutive slices of length num_shots from this list to form the MSJ prompts. Listing 2 arranges the question-answer pairs in a list, grabs a large subsection of length equal to the maximum number of shots, and constructs MSJ prompts by cropping this subsection from the left. The rationale for the latter procedure is explained below.
**Concrete action taken:** Explained, in prose, what both pieces of pseudocode are implementing in the relevant part of the Appendix.
* **Intuition behind Listing 1 and Listing 2:** The algorithms we provide involve a procedure that is aimed at reducing cross-datapoint variance. The key invariant of the algorithms is that for all in-context prompts of different lengths, the set of final queries are the same. This makes cross-datapoint variance much smaller.
**Concrete action taken:** Fixed the broken line in Listing 1.
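To make the two procedures concrete, here is a minimal Python sketch of the prompt-construction logic as described above. This is an illustrative reconstruction, not the paper's actual listings; the function names `sliding_prompts` and `left_cropped_prompts` are hypothetical.

```python
def sliding_prompts(qa_pairs, num_shots):
    # Listing-1 style: take consecutive slices of length num_shots
    # from the list of question-answer pairs to form MSJ prompts.
    return [qa_pairs[i:i + num_shots]
            for i in range(len(qa_pairs) - num_shots + 1)]

def left_cropped_prompts(qa_pairs, max_shots, shot_counts):
    # Listing-2 style: grab one subsection of length max_shots, then
    # build each shorter prompt by cropping that subsection from the
    # left, so prompts of every length end with the same final query.
    block = qa_pairs[:max_shots]
    return {n: block[max_shots - n:] for n in shot_counts}
```

The key invariant from the rebuttal is visible here: for every value of `n`, the cropped prompt ends with the same final question-answer pair, which keeps the set of final queries fixed across prompt lengths and reduces cross-datapoint variance.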
* **Referencing the refusal classifier at the right place:** We've also fixed this in the writeup!
**Concrete action taken:** Directly addressed the comment.
- - - - - - - -
**Questions:**
* **Dependence on quality of helpful-only models:** It's debatable how the size of the gap between open- and closed-source models will evolve over time. That being said, we believe that the recently released Llama 400B is a model strong enough to elicit a massive class of very problematic behaviors that can be directly used in the context of MSJ. While Llama 400B itself is not a helpful-only model, it is "trivial" to finetune it to make it so (e.g., Gade et al. [1]). As you mention above, MSJ is a relatively effortless attack, and a sufficiently motivated actor can be expected to have the resources to coax Llama 400B into constructing MSJ prompts, or even to release the weights of a helpful-only version of this model.
*What if the adversary is forced to use a much less capable model?* We ran a quick experiment to test this: We measured the performance of MODEL on the GPQA dataset [2] with 0-, 64-, and 128-shot MSJ prompts, where the question-answer pairs were generated using an earlier-generation model. We observed that moving from 0-shot to 64-shot prompts led to a decrease of ~5 percentage points (from 40% to 35%), and moving from 64-shot to 128-shot prompts didn't lead to any further degradation. We can reach the following tentative conclusions from this quick experiment:
1. Moving from zero-shot to few-shot setup leads to some degradation in performance.
2. Moving from few-shot to many-shot setup doesn't lead to any further degradation in performance.
These results paint a relatively optimistic picture for MSJ retaining most of its effectiveness even if the MSJ prompt is generated using a less intelligent model. To be able to claim this with the full confidence that's required of a scientific publication, we need to replicate this with a variety of strong-weak model pairs and tasks, which is something we don't have the bandwidth to do during the rebuttal phase.
* **Effectiveness on chat interfaces:** This is a great question, and mirrors a relevant question asked by Reviewer 3GmK. Proprietary chat interfaces like ChatGPT or Claude.ai can be expected to rely on layered, "defense-in-depth" approaches that go beyond the robustness of the models themselves. This means that a jailbreak attempt not only has to be effective on the model, but also has to evade any attempts at detecting jailbreaks. The key points are:
1. Whether MSJ stacks with other complementary attacks that are aimed at evading detection
2. Whether there are MSJ-specific evasion tactics
We are actively following leads on how Many-shot Jailbreaking interacts with detection methods (which can be viewed as a separate research endeavor that builds on our current submission), but are not in a position to share those results just yet.
If you find our improvements and response satisfactory, please consider improving your score! In all cases, we'd be happy to engage in any follow-up discussion.
[1] Gade, Pranav, et al. "Badllama: cheaply removing safety fine-tuning from llama 2-chat 13b."
[2] Rein, David, et al. "Gpqa: A graduate-level google-proof q&a benchmark."
---
Rebuttal Comment 1.1:
Comment: Thank you for your reply and for the clarifications. I trust that the improvements you will make to the camera ready will be sufficient, so I am raising my score. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their constructive feedback.
We appreciate the positive feedback we’ve received on the relevance of our contributions, the significance of the results and the extensiveness of our empirical evaluation. We have already incorporated the majority of the actionable feedback we received directly on the writeup.
We believe that Many-shot Jailbreaking is perhaps the conceptually simplest long context jailbreak that is still cheap, scalable and highly effective. We hope that our scaling analysis points to a concrete recipe for how to measure progress towards addressing MSJ, and our mitigations study sheds light on what approaches might be the most promising to expand on in the future to fix it.
We hope that Many-shot Jailbreaking can act as the “fruit fly” of long context jailbreaks and allow researchers to rapidly develop mitigations that will hopefully generalize against more sophisticated long-context attacks. Long-term reliable solutions to even the simplest form of MSJ still remain elusive today.
Based on the reviewers’ feedback, we’ve made some improvements to the submission. Since OpenReview won’t allow us to upload updated versions of the paper, we specifically noted down what changes we’ve made in our author response. We’ve also uploaded additional tables and figures to supplement our response.
Looking forward to answering any further questions the reviewers might have.
Pdf: /pdf/2e45b94307c33b4a55bd6da37a074b419c26ada4.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper proposes many-shot jailbreaking (MSJ). This jailbreak technique exploits longer context windows of modern large language models by providing hundreds of demonstrations of undesirable behavior. The authors demonstrate the effectiveness of MSJ follows a power law scaling with the number of demonstrations, which can be reduced when combined with other attacks. The paper analyzes potential mitigation strategies and finds that alignment techniques like supervised fine-tuning and reinforcement learning are insufficient to fully prevent MSJ at arbitrary context lengths.
Strengths: * This paper identifies and very thoroughly investigates a vulnerability in LLMs that exploits longer context windows, which is highly relevant given recent trends in model development.
* The empirical results are extensive, testing MSJ across multiple models, tasks, and settings. The authors provide a clear characterization of how the attack's effectiveness scales with context length and a scaling law
* The analysis of potential mitigations and their limitations is valuable, highlighting the challenges in addressing this vulnerability and providing direction for future work on AI safety.
Weaknesses: * The defenses the paper considers are simple, only at the prompt level. Would inference-based defenses such as prompt classification [1] be an effective defense?
[1] Inan, H., Upasani, K., Chi, J., Rungta, R., Iyer, K., Mao, Y., Tontchev, M., Hu, Q., Fuller, B., Testuggine, D., & Khabsa, M. (2023). Llama Guard: LLM-based Input-Output Safeguard for Human-AI Conversations. ArXiv, abs/2312.06674.
Technical Quality: 3
Clarity: 4
Questions for Authors: See weaknesses
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: Limitations are thoroughly discussed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your encouraging review!
We appreciate that you found our empirical foray into the scaling and mitigations of MSJ thorough and valuable.
**Inference-time defenses considered:** Thank you for bringing up inference-based defenses!
We are actively following promising leads on how Many-shot Jailbreaking interacts with these methods, but are not in a position to be able to share some of our results just yet.
One thing we'd like to note is that the version of MSJ described in the paper doesn't make any attempt at disguising itself as an attack. A human who takes a glimpse at the prompt will easily be able to tell that it's up to no good! The key questions here are, as you suggest:
1. Whether MSJ stacks with other complementary attacks that are aimed at evading detection
2. Whether there are MSJ-specific evasion tactics
This presents a rich research agenda that we were deliberate about not tackling in this submission, and one that we hope the adversarial robustness community will pursue. We're especially excited about our scaling law being useful for measuring the effectiveness of solutions and counter-attacks to those solutions.
**Happy to answer more!**
Please don't hesitate to ask any further questions! We'd love the chance to highlight strengths of our submission that might perhaps encourage you to further increase your score!
---
Rebuttal Comment 1.1:
Title: response to authors
Comment: Thank you for the response and clarification. I will keep my score. | null | null | null | null | null | null |
Is A Picture Worth A Thousand Words? Delving Into Spatial Reasoning for Vision Language Models | Accept (poster) | Summary: This work presents 3 new synthetically generated VQA evaluation benchmarks and conducts a comprehensive evaluation of the limitations of current VLMs in spatial reasoning tasks. These tasks include spatial relationships, navigation, position understanding, and counting. The authors demonstrate using their introduced benchmarks that VLMs have limited performance on tasks requiring detailed visual understanding and reasoning of images.
Strengths: Evaluating the spatial reasoning capabilities of SoTA VLMs is highly impactful and very relevant. I find this work novel, intriguing, and highly useful as it points to very important limitations in VLMs. The experimental analysis is very comprehensive, incorporating both several open-source VLMs and proprietary models. Additionally, the paper is well-written and structured, contributing to its overall readability and clarity.
Weaknesses: Spatial Understanding and Evaluation: The use of synthetic data in the evaluation, while valuable, may introduce confounding factors unrelated to the task of spatial understanding/reasoning.
* For instance, a vision-only model with limited OCR capabilities may perform poorly on the spatial-map task, regardless of its actual spatial reasoning ability. It is challenging to disentangle the contributions of OCR performance from spatial understanding in such benchmark.
* The synthetic maze-navigation and spatial-grid images might be out-of-distribution for some models, especially open-source ones. This could also explain why a noise image improves the accuracy of LLaVA-1.6 on maze-navigation tasks, while a similarly out-of-distribution maze image does not necessarily harm or consistently improve the performance. It is important to consider the potential impact of out-of-distribution data on open-source models. Looking at Fig. 12, the performance of the latest GPT-4o on vision-only appears similar to vision-and-text. That said, I still believe the work remains valuable and important.
Human Performance Comparison:
* The claim on line 172 that "the performance of these models still lags significantly behind human levels" would benefit from concrete numbers to substantiate it. Providing specific human performance metrics on these tasks, or citations, would make the claim more robust. Alternatively, softening the statement to reflect the need for further comparison could be considered.
Sampling Strategies and Prompting Techniques:
* Exploring different sampling strategies or reasoning/prompting techniques could yield valuable insights into the model's performance. It would be beneficial to include discussions on how these variations impact the results. To my understanding, only a simple prompting technique ("step-by-step explanation") is utilized. Was there any particular reason to append the selected prompt vs. others?
In line 157, the paper also mentions: "For each model, we adopt the default configurations and decoding strategies, e.g., argmax for deterministic decoding and top-p for non-deterministic decoding." It would be very useful to be more specific, e.g., what top-p and temperature were used. How "deterministic" was the deterministic decoding in API-based models?
VQA Evaluation Benchmark Details:
* The paper would benefit from clearly specifying the size of each VQA evaluation benchmark in terms of the number of samples or data points. This information is crucial for understanding the scope and scale of the evaluations conducted. I might have missed it, but I couldn't find it in the paper. I would encourage the authors to address this in particular in the rebuttals.
Technical Quality: 3
Clarity: 3
Questions for Authors: please see above
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes. It is discussed and properly addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for your support of our work and insightful comments!
> *Q1: The use of synthetic data in the evaluation, while valuable, may introduce confounding factors: (1) it is challenging to disentangle the contributions of OCR; (2) the synthetic data might be out-of-distribution for some models. It is important to consider the potential impact of out-of-distribution data... That said, I still believe the work remains valuable and important.*
Thank you for your insightful feedback, and we appreciate your recognition of the value and importance of our work! We agree that disentangling OCR capabilities from spatial reasoning abilities in VQA (Vision-only) input format for VLMs presents a significant challenge. This challenge motivated our development of the VTQA (Vision-Text) input format. In VTQA, we aim to enhance VLMs' OCR capabilities by providing a detailed textual description of the objects. This approach helps to mitigate the impact of OCR abilities and emphasizes spatial reasoning. Additionally, our benchmark avoids questions with direct textual answers provided in the prompt, requiring the model to engage in genuine reasoning to arrive at the correct answer.
Indeed, the synthetic datasets might be out-of-distribution for some models. We have intentionally included such synthetic examples in our benchmarks to test robustness and generalization ability. By doing so, we ensure that good performance is not merely a result of data memorization during web-scale pre-training. We hope our work will provide a foundation for future advancements in VLM development.
We further curate a real-world task, Spatial-Real, based on real images with dense captions [1]. Detailed results are shown in **Table 1** in the one-page PDF. We find that the trends (summarized comparisons VQA vs. VTQA, TQA (LLM) vs. VTQA, TQA (LLM) vs. VQA in **Table G.2**) stated in the paper still hold. The modality gap (accuracy difference between VTQA and VQA) even grows from 7.0% on synthetic benchmarks to 30.0% on Spatial-Real on average.
> *Q2: The claim on line 172 that "the performance of these models still lags significantly behind human levels" would benefit from concrete numbers to substantiate it.*
Thank you for your suggestion! We have conducted a human evaluation on a subset of 900 samples for the three tasks. This subset revealed an average human accuracy rate exceeding 96% (VQA). While these results suggest high human performance on the evaluated tasks, we recognize the limitation of not extending this comparison across all 13,500 data points due to budget constraints. We are committed to an extensive human evaluation on the full dataset and will include the results in the revised manuscript.
> *Q3: The specifics of decoding configs and the impact of prompting techniques.*
Thanks for the suggestion! We provide details for each below:
**Decoding configurations:** We have included detailed decoding configurations in Appendix B.2. Our primary approach is to use the default decoding strategies provided by each model to ensure fair comparisons and report the best performance for each model given the same input. Specifically, we use deterministic decoding (argmax) for the following models: Bunny-Phi-2-SigLIP, CogAgent, CogVLM, InstructBLIP-Vicuna-13B, and InstructBLIP-Vicuna-7B. For all other models, we employ non-deterministic decoding with Top-p set to 0.9 and temperature set to 0.2.
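For readers unfamiliar with the decoding strategies named above, a minimal illustrative Python sketch of argmax decoding, nucleus (top-p) sampling, and temperature scaling over a single next-token distribution. This is a generic sketch under our own assumptions, not the evaluated models' actual decoder implementations:

```python
import math
import random

def softmax_with_temperature(logits, t=1.0):
    # Lower temperature sharpens the distribution; t=1 leaves it as-is.
    scaled = [l / t for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

def argmax_decode(probs):
    # Deterministic decoding: always pick the most likely token index.
    return max(range(len(probs)), key=lambda i: probs[i])

def top_p_sample(probs, p=0.9, rng=random):
    # Nucleus (top-p) sampling: sample from the smallest set of tokens
    # whose cumulative probability reaches p, after renormalizing.
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    nucleus, total = [], 0.0
    for i in order:
        nucleus.append(i)
        total += probs[i]
        if total >= p:
            break
    weights = [probs[i] / total for i in nucleus]
    return rng.choices(nucleus, weights=weights, k=1)[0]
```

With top-p = 0.9 and temperature = 0.2 (our defaults), sampling is restricted to a small nucleus of high-probability tokens from a sharpened distribution, which is why responses are only mildly non-deterministic.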
**Impact of Temperature**: We conducted an ablation study to examine how different temperatures affect the model performance. A higher temperature allows for more diversity in model responses. Most models consistently underperform when the temperature is set to 1.0 compared to 0.2 (our default), as shown below. We have included the complete results of this study in Appendix E.
|Input modality|Model|Avg Acc (temperature=1)|Avg Acc (temperature=0.2)|
|-|-|-|-|
|Text-only|Mistral-7B| 0.62|0.62|
|Text-only|Vicuna-13B-1.5| 0.37|0.46|
|Text-only|Vicuna-7B-1.5| 0.36|0.41|
|Vision-only|LLaVA-1.6-Mistral-7B|0.37|0.47|
|Vision-only|LLaVA-1.6-Vicuna-13B|0.32|0.40|
|Vision-only|LLaVA-1.6-Vicuna-7B|0.26|0.31|
|Vision-text|LLaVA-1.6-Mistral-7B|0.46|0.59|
|Vision-text|LLaVA-1.6-Vicuna-13B|0.37|0.48|
|Vision-text|LLaVA-1.6-Vicuna-7B|0.33|0.46|
**Prompting techniques:** In line with our approach to sampling strategies, our primary goal in choosing prompting techniques is to report the best model performance given the same question. The prompting technique we use, which asks for a step-by-step explanation, was the most effective among those we tried in our initial studies.
As a concrete example, we compare the original prompting strategy, *"First, provide a concise answer in one sentence. Then, elaborate on the reasoning behind your answer in a detailed, step-by-step explanation"* (step-by-step explanation), with a simpler prompt, *Answer:* (completion).
The results are shown below. We can see that the simpler completion prompt consistently underperforms compared to the step-by-step explanation prompt.
|Input Modality|Model|Avg Acc (completion)|Avg Acc (step-by-step explanation)|
|-|-|-|-|
|Text-only|Mistral-7B|0.61|0.62|
|Text-only|Vicuna-13B-1.5|0.25|0.46|
|Text-only|Vicuna-7B-1.5| 0.31| 0.41|
|Vision-only|LLaVA-1.6-Mistral-7B|0.47|0.47|
|Vision-only|LLaVA-1.6-Vicuna-13B|0.25|0.40|
|Vision-only|LLaVA-1.6-Vicuna-7B| 0.26|0.31|
|Vision-text|LLaVA-1.6-Mistral-7B|0.59|0.59|
|Vision-text|LLaVA-1.6-Vicuna-13B|0.26|0.48|
|Vision-text|LLaVA-1.6-Vicuna-7B|0.30|0.46|
> *Q4: sample size of each VQA benchmark*
Thank you for pointing this out! In our benchmark, each task (Spatial-Map, Maze-Nav, Spatial-Grid) contains 4,500 samples, resulting in a total of 13,500 samples. Additionally, our benchmarks are designed to be easily scalable. We have included this information in the revised manuscript (Appendix B).
[1] Urbanek et al., A Picture is Worth More Than 77 Text Tokens: Evaluating CLIP-Style Models on Dense Captions, CVPR 2024.
---
Rebuttal Comment 1.1:
Title: Thanks for the response
Comment: I thank the authors for the very comprehensive rebuttal and new experiments added to the papers.
I have also read and appreciate other reviewers comments and the authors response. I think this is an important paper, technically solid with potential moderate-to-high impact. I keep my score of 6, and recommend acceptance.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer WNLP,
Thank you for your time and thoughtful feedback on our paper. We greatly appreciate your positive recommendation!
Best,
Authors | Summary: The authors make a benchmark to test the spatial reasoning of multimodal language models (MLMs) using synthetic data consisting mostly of diagrams, mazes, and charts. They benchmark a lot of existing MLMs. Their key findings are: 1) Spatial reasoning is limited in most MLMs. 2) MLMs rely heavily on textual information and not the visual, and 3) interestingly, the VLMs are better at utilizing spatial text information than LLMs suggesting multimodal training helps understand the visual natural of language even though VLMs tend to rely less on visual information during test time.
Strengths: The idea of stress testing MLMs with spatial and figure understanding has been a recent trend and is very useful to push these models to do real-world tasks.
The analysis done is interesting and provides some useful insights into how these models use visual or textual information.
Weaknesses: - Some of the claims may be too broad to make based on the results of just this dataset, for instance, that the models use more textual information than visual. This could simply be because the visual inputs in this benchmark are of a starkly different domain than what the models are trained on (real images instead of synthetic graphs, mazes, and charts), whereas the domain gap in language is much smaller since the language tokens remain the same. Hence, we can only make the statement that the model is using more textual cues for this kind of data. It is unclear if that is also the case on real image data.
- There are many ways to test spatial reasoning using real VQA-style datasets like BLINK, MMBench, etc. So, it is unclear why the authors came up with these complicated tasks. Is there any real-world relevance for such tasks?
- The authors mention they opt for synthetic data due to controllability, but all the analysis they do can be simply done on real image QA datasets like GQA. For instance, for checking vision vs textual reliance, just feed in scene graphs vs the image to an MLM from GQA. Or, for mismatched image and text, randomly pair image and QA. At least a discussion on why such a benchmark is needed is missing. What special analysis in the paper is enabled by this benchmark that couldn't be enabled by an already existing benchmark like Visual Genome, GQA, BLINK etc.
- Some settings could be explained more clearly in the paper. For instance, for the vision-only input - I am guessing the authors mean that they do not describe the image in text, but the question is still in text, correct?
- A nitpick, but some grammar could be improved, e.g., the first line of the intro (line 18): it should be "a transformational effect", not "affect". Other grammatical errors are also occasionally present in the paper.
- Another nit pick, but this paper seems more fitting for the datasets and benchmarks track.
Technical Quality: 2
Clarity: 3
Questions for Authors: See above concerns.
Overall I like the direction of the paper. But some analysis justifying the need of such type of synthetic data (mazes and custom diagrams) instead of real images or actual graphs (useful for scientific figures) would be nice.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The authors discuss some limitations of models, but not the benchmark. What can this benchmark power, and what are some analyses that the benchmark currently cannot power that would be good to have?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your feedback and comments!
> *Q1: Some claims may be too broad; we can only state the model uses more textual cues for such data*
Thanks for the comments and the suggestion! We acknowledge the visual domain gap and have added remarks in our revised manuscript. We intentionally include such synthetic data in our benchmarks to avoid potential data leakage from popular visual benchmarks, ensuring performance isn't due to data memorization during pre-training. Additionally, our benchmarks shift the focus from object recognition to evaluating spatial reasoning abilities with numerous objects. Similar paradigms have been explored in recent works evaluating visual diagram understanding for mathematical reasoning, such as [1].
> *Q2: A discussion of why such a benchmark is needed is missing. What analysis does it enable that existing benchmarks like Visual Genome, GQA, and BLINK do not?*
We would like to highlight some key rationales of our benchmark:
1. Existing benchmarks focus only on VQA, where the image is required but the text description is often omitted or optional. Our study further explores spatial reasoning across different settings: TQA (LLM), TQA (VLM), and VTQA, where images or texts can be optional, thereby broadening the scope of tasks.
2. We utilize synthetic data due to its controllability, scalability, and the ability to create highly specific scenarios with flexible, long and detailed captions that are not adequately covered by existing benchmarks such as Visual Genome, GQA, and BLINK. While these datasets are valuable, they do not consistently offer the level of complexity we require, such as numerous objects containing dense visual information along with detailed natural language captions that fully convey the image content.
3. The textual descriptions in these datasets are often too brief or directly imply the answers. In our work, we provide dense or detailed captions for each image. As a result, no answers can be easily inferred from a short caption. We also try to isolate object detection capability from spatial reasoning ability by simplifying objects to symbols.
4. Although our tasks seem complicated, humans can still solve them with near-perfect accuracy. This indicates that the tasks are within the realm of human cognitive capabilities and are therefore realistic for evaluating advanced AI models.
Given the increasing use of VLMs, it is crucial to have a diverse suite of benchmarks to assess their abilities.
We also believe these awesome VQA datasets are relevant and have cited them in our revised manuscript.
> *Q3: Real-world applications/relevance for such synthetic tasks?*
We would like to highlight some of the real-world applications:
- Diagram and Document Understanding: An increasingly relevant application for MLLMs is the interpretation of digital documents that include a dense array of visual elements. For businesses and educational sectors, the ability to understand symbols, figures, and diagrams within structured layouts is crucial.
- Map Understanding and Navigation: Our Spatial-Map and Maze-Nav benchmarks simulate map-like environments scattered with numerous objects, such as hotels and stores, each represented by distinct symbols. These configurations are typical in map apps.
- Warehouse Operations and Traffic Management: Autonomous robots navigate through grid-like storage layouts densely packed with objects, requiring precise spatial understanding for efficient item retrieval and restocking. Similarly, in urban traffic management, systems must accurately identify and count vehicles within structured scenes, such as busy intersections where vehicles are compactly arranged in lanes and rows. These capabilities are critical for the safe deployment of MLLM-based systems.
> Q4: Will the statements still hold on real image data?
Thanks for your comments and suggestions! Inspired by your suggestion to explore natural images, we found that a very recent work [2] released a Densely Captioned Images (DCI) dataset, featuring detailed captions with over 1000 words per image. However, this dataset lacks questions. Therefore, we carefully curated multiple-choice questions regarding spatial reasoning (object counting, relation, and position understanding) and annotated the answers. We name this new dataset **Spatial-Real**. Due to the rebuttal period's time constraints, we have created 200 TQA-VQA-VTQA pairs.
The evaluation results, shown in **Table 1** in the one-page PDF, indicate that the same trends still hold for real images (see VQA vs. VTQA, TQA (LLM) vs. VTQA, TQA (LLM) vs. VQA in **Table G.2**). In addition, compared to Fig 4 and 10 in the paper, overall accuracy increases across all three input modalities (Text-only, Vision-only, Vision-text) in the Spatial-Real benchmark. However, the modality gap (accuracy difference between VTQA and VQA) grows from 7.0% on synthetic benchmarks (avg) to 30.0% (avg) on Spatial-Real.
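As a concrete illustration of how such a modality gap is computed, a minimal sketch follows; the accuracy values below are hypothetical placeholders for illustration only, not numbers from the paper or rebuttal:

```python
# Hypothetical per-benchmark accuracies for one model (illustrative only).
acc_vtqa = [0.62, 0.55, 0.70]  # Vision-text (VTQA) input
acc_vqa = [0.32, 0.25, 0.40]   # Vision-only (VQA) input

# Modality gap: average accuracy difference between VTQA and VQA.
gaps = [vt - v for vt, v in zip(acc_vtqa, acc_vqa)]
modality_gap = sum(gaps) / len(gaps)
print(f"modality gap: {modality_gap:.1%}")
```

A large positive gap indicates that the model relies heavily on the textual description rather than the image itself.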
To ensure full reproducibility, we will open source this new dataset and extend the number of sample pairs to match Spatial-Map, Spatial-Grid, and Maze-Nav. The full results will be included in our revised manuscript.
> *W4: Some settings could be clearer (e.g., the meaning of vision-only input)?*
Thanks for the suggestion! We have clarified this in the revised manuscript. Your understanding is correct. We describe the Text-only, Vision-only, and Vision-text input modalities based on how we feed the image information to the models. Vision-only input means the image is fed directly to the models without textual description, while all questions are presented in text.
> *W5: Grammars and typos*
Thanks for the catch! We have carefully examined and fixed grammatical errors and typos throughout.
[1] Zhang et al., MathVerse: Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems?
[2] Urbanek et al., A Picture is Worth More Than 77 Text Tokens: Evaluating CLIP-Style Models on Dense Captions, CVPR 2024.
---
Rebuttal 2:
Title: Thanks for the response
Comment: These responses are helpful and the introduction of a split with real images also broadens the scope of the paper.
While some of the tasks seem contrived, performance on such tasks can be seen as basic cognitive tests, and as the authors point out, they can have real downstream applications, such as navigation given maps, etc. However, evidence that improving on this benchmark will actually improve navigation performance is missing.
Some of the concerns are that the Spatial-Real example in the rebuttal seems to go back to simple image-based QAs again, and it's unclear how it constitutes "spatial reasoning". It is also unclear how it differs from the QAs in existing datasets that also have spatial, counting, etc. QAs.
However, I still lean towards recommending accepting the paper since it will help drive research into the cognitive abilities of multimodal foundation models.
---
Rebuttal Comment 2.1:
Comment: Dear Reviewer JHoY,
Thank you again for your valuable feedback. We sincerely appreciate your support of our work.
Q: What is new in Spatial-Real compared to existing image-based datasets that also have QAs on spatial understanding and counting?
In Spatial-Real, the form of QAs on spatial reasoning (object counting, relation, and position understanding) is similar to those in existing VQA datasets. However, the novel difference is that our dataset construction allows for a comprehensive study of spatial reasoning QAs across different modality settings—TQA (LLM), TQA (VLM), VQA, and VTQA, where images or texts can be optionally provided. This is largely unexplored in the Vision and NLP communities. We aim to bridge this gap by offering a unique benchmark that we hope will be valuable and inspire future research, as new multimodal foundation models emerge that can handle interleaved text and image inputs with longer context lengths.
Best,
Authors | Summary: The paper proposes a set of synthetic tests to compare the spatial understanding in VLMs and LLMs. The tests include Spatial-Map, Maze-Nav, and Spatial-Grid, which all include an image, paired with text that describe the image, and a question. Using the synthetic data, the VLMs and LLMs are studied using different modalities, including VQA, TQA, and VTQA, where the models are asked to answer questions based on only the image, only the text, and both image and text. Findings include that the current VLMs are strongly biased towards texts (blind); VLMs outperform LLMs in TQA. Both open models and proprietary models have been tested.
Strengths: 1. The TQA, VQA, VTQA test is a smart way to reveal the modality bias in VLMs and LLMs, clearly showing the blindness of VLMs.
2. Intensive results and analysis are provided.
Weaknesses: 1. All datasets contain only 2D synthetic images, which is substantially different from real images. At least the images can be made more realistic using computer graphics, like [4,5].
2. The synthetic nature of the tasks introduces artifacts. Good spatial reasoning ability on this task may not generalize to real ones. The results may be highly dependent on whether the model has been trained on a similar task. For example, the performance of GPT-4o almost doubles that of Gemini in this paper, while the difference between these models on real images is not as significant.
3. The observation that VL models are highly biased towards text, and are “blind” visually, has been studied long in the VQA community, even before LLMs [1,2,3, etc.]. The related literature should be discussed.
[1] Eyes wide shut? exploring the visual shortcomings of multimodal llms
[2] Explicit Bias Discovery in Visual Question Answering Models
[3] Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering
[4] Clevr: A diagnostic dataset for compositional language and elementary visual reasoning
[5] Super-clevr: A virtual benchmark to diagnose domain robustness in visual reasoning
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How do the findings in this paper differ from previous literature like [1,2,3]? Are the findings different, or are they discovered in a new way as proposed in this paper?
2. Any discussions about the synthetic-real gap?
3. In Spatial-Map, without text descriptions, it looks like the VLMs may describe spatial relations using "left/right/up/down" instead of "west/east/north/south" - will this happen, and how is it addressed?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive feedback and insightful comments!
> *W1: All datasets contain only 2D synthetic images, which is substantially different from real images.*
We choose synthetic data due to its controllability, scalability, and the ability to create highly specific scenarios with flexible, long and detailed captions that are not adequately covered by existing VQA benchmarks.
In addition, this approach avoids potential data leakage and ensures that model performance is not merely the result of data memorization during web-scale pre-training. Moreover, our benchmarks shift the focus from object recognition to evaluating spatial reasoning abilities involving numerous objects. Similar paradigms have been explored in recent works evaluating visual diagram understanding [6].
> *Q1: The observation that VLMs are highly biased towards text, and are “blind” visually, has been studied long in the VQA community, even before LLMs [1,2,3, etc.]. How do the findings in this paper differ from previous literature like [1,2,3]?*
Thank you for the suggestion! We find these works intriguing and relevant, and have cited and discussed references [1-5] in our revised manuscript. We would like to highlight several key differences in our approach:
1. **Task Scope and Focus**: Previous works [1-5] primarily focus on the VQA task where images are required but text descriptions are often omitted or optional. In contrast, our study further explores spatial reasoning across different settings: TQA (LLM), TQA (VLM), and VTQA, where images or texts can be optional, thereby broadening the scope and complexity of tasks.
2. **Evaluation**: We primarily focus on multimodal language models (MLLMs). We treat these models as generative models which are required to "elaborate on the reasoning behind your answer in a detailed, step-by-step explanation" (L154). In contrast, the evaluation strategy is different in prior works [2-5], where the task is often discriminative with no explicit reasoning. Therefore, it remains unknown if previous observations can be naturally transferred to foundation models pre-trained on web-scale data. Concurrent work [1] also evaluates MLLMs but focuses on visual representations by curating pairs of images adversarial to CLIP-based models. In contrast, our data is unrelated to CLIP features and our questions are not designed to be adversarial. Humans can solve our tasks with near-perfect accuracy. This indicates that the tasks are within the realm of human cognitive capabilities and are realistic for evaluating modern MLLMs.
3. **The Textual Representation of Images**: The textual descriptions in prior VQA benchmarks are often too brief or directly imply the answers. In contrast, we provide dense or detailed captions for each image. Therefore, our benchmarks do not include questions where answers can be easily inferred from a short caption. We also try to isolate object detection capability from spatial reasoning ability by simplifying objects to symbols.
4. **Different Interpretation of Visual Blindness**: While the visual blindness of VLMs has been long studied, the context differs in our benchmark. In typical VQA tasks, bias might stem from the questions themselves [2]. For example, the answer to "What is the color of the grass?" is usually "Green", allowing models to answer the question correctly without seeing the image. Multiple prior works have focused on addressing such biases with new benchmarks [3]. In contrast, none of the questions in our benchmark can be answered without looking at the image and the blindness is unrelated to the question.
> *W2 & Q2: The synthetic nature of the tasks introduces artifacts. Good spatial reasoning ability on this task may not generalize to real ones. Any discussions about the synthetic-real gap?*
Thanks for your comments! It is valuable to extend the scope of our work and validate our results on real images. We found that a very recent work [7] released a Densely Captioned Images (DCI) dataset where each image has a detailed caption with more than 1000 words on average. However, this dataset does not include questions. We then carefully curated multiple-choice questions regarding spatial reasoning (object counting, relation, and position understanding) and annotated the answers. We name this new dataset **Spatial-Real**.
The evaluation results are shown in **Table 1** in the additional PDF. The same trends still hold for real images (see VQA vs. VTQA, TQA (LLM) vs. VTQA, TQA (LLM) vs. VQA in **Table G.2**). In addition, compared to Fig 4 and 10 in the paper, the overall accuracy increases across all three input modalities (TQA, VQA, VTQA) in the Spatial-Real benchmark. However, the modality gap (accuracy difference between VTQA and VQA) grows from 7.0% on synthetic benchmarks (avg) to 30.0% (avg) on Spatial-Real.
We will open source this new dataset and the full results will be included in our revised manuscript.
> *Q3: In Spatial-Map, will VLMs describe directions using “left/right/up/down”?*
Great point! We carefully examined the VLMs' responses and did not observe notable differences in the way they describe spatial directions. This assessment was done by manually checking all results.
Given that the options with directions (e.g., A. Northwest, B. Southwest, C. Southeast, D. Northeast) are included in the question prompt, we observed consistent adherence to these instructions across all models. Rather than using relative terms like "left/right/up/down," the models consistently responded with the specified cardinal directions.
Examples of typical responses include:
- "A. Northwest"
- "Children's Choice Toys is located to the northeast of Yak Yarns."
- "The correct option is B. Northeast."
[6] Zhang et al., MathVerse: Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems?
[7] Urbanek et al., A Picture is Worth More Than 77 Text Tokens: Evaluating CLIP-Style Models on Dense Captions, CVPR 2024.
---
Rebuttal Comment 1.1:
Title: Thanks for the response
Comment: I thank the authors for providing the rebuttal. The rebuttal discusses the differences with prior works. I also appreciate the effort for experimenting with real dataset - it is interesting to see that the modality gaps becomes even larger on real image.
I am still concerned about the artificially created toy tasks in the paper. I think more experiments with real data, or semi-real data like rendered images using graphics (e.g. CLEVR) that are 3D, will make the paper much stronger.
I will keep my score as 5.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer i8Xy,
Thank you for affirming our rebuttal, positive recommendation, and valuable suggestions! We acknowledge the importance of real datasets and are committed to expanding both the diversity and scale of Spatial-Real. While we recognize the value of natural data, we also believe that synthetic tasks serve an important role. As acknowledged by Reviewer JHoY, these tasks can serve as cognitive tests that evaluate basic capabilities, which are relevant to broader real-world applications. This approach has precedent in fields like IQ testing, where evaluating foundational cognitive skills is valuable.
We will open source our benchmark to facilitate future research in this area.
Best,
Authors | Summary: This paper develops a novel benchmark to understand spatial reasoning ability of LLM and VLM. Using such a benchmark, authors conduct experiments to evaluate models' performance, and reveal several results.
Strengths: 1. This paper is well-written and well-structured.
2. This paper conducts a series of experiments and include very recent LLMs and VLMs.
Weaknesses: 1. The design rationales of the three benchmarks are not included or discussed. "Spatial-Map" and "Maze-Nav" look reasonable but "Spatial-grid" seems ill-posed. Would we really meet any similar application scenario in the real world?
2. In the experiments on "The impact of input modality", LLMs and VLMs take very different inputs in the benchmarks, with no part overlapping. In this case, it is unclear whether such a comparison is fair or even valid.
3. It is unclear why the results of "Spatial Map" are not provided in main paper or appendix. In addition, the performance of some VLMs are also missing. Given that it can be already observed that there exist some outliers, it is important to show full results of the study to ensure the validity of observations.
4. LLaVA-1.6-34B is the most performant model in this model family. However, in Figures 7-9, it consistently under-performs compared to other LLaVA models on Spatial-Grid. I would like to see the authors' comments on these findings. What might be the cause of the weakness of LLaVA-1.6-34B?
5. This paper overall provides quantitative results of current VLMs without deeper analysis. It remains unclear what on earth causes the weakness of these VLMs. What are the insights or takeaways to improve VLMs?
Technical Quality: 2
Clarity: 3
Questions for Authors: See weakness.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: No negative societal impacts of the study.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your comments and questions, which we address in detail below.
> *Q1: Design rationales of the three benchmarks. Real world application scenarios for "Spatial-Grid"?*
Our benchmarks are designed to cover diverse aspects of spatial reasoning, such as spatial relationships, navigation, position understanding, and object counting (L34-42). We illustrate the design rationale of each benchmark in more detail:
- Spatial-Map: This benchmark resembles a map scattered with numerous objects (e.g., hotels and stores), each represented by distinct symbols. It evaluates the model's ability to identify and understand the spatial relationships between these dispersed objects.
- Maze-Nav: This benchmark simulates navigation scenarios. It evaluates the model's capability to understand and navigate through complex environments, akin to finding a path in a maze.
- Spatial-Grid: This benchmark reflects scenarios with dense visual information in structured, grid-like environments where understanding the **layout** is crucial.
We highlight several real-world applications for Spatial-Grid:
- Warehouse Manipulation: Autonomous robots in warehouses navigate through grid-like storage layouts where objects are densely arranged. These robots need precise spatial understanding to efficiently retrieve and restock items, making them highly dependent on reliable spatial reasoning capabilities similar to those tested in Spatial-Grid.
- Traffic Monitoring: Systems that identify and count vehicles within structured scenes, such as busy intersections with vehicles arranged compactly in lanes and rows, rely on accurate localization and counting. Such capabilities are critical for the safe deployment of VLM-based systems in traffic management.
- Diagram and Document Understanding: An emerging application for MLLMs is understanding documents that contain dense collections of visual elements arranged in structured layouts. As businesses and educational sectors increasingly rely on digital data, the ability to parse and understand complex documents becomes crucial.
> *Q2: Fairness of the experiment on "The impact of input modality" as "LLMs and VLMs take very different, non-overlapping inputs in the benchmarks"*
It is indeed challenging to directly compare LLMs and VLMs due to their difference in input modality. This is exactly the purpose of our benchmarks: for each sample, we create semantic overlap between the textual description and the image, where the input in each modality has similar information sufficient to answer the question.
Instead of comparing LLM vs. VLM, we compare TQA (LLM), TQA (VLM), VQA, and VTQA. We would like to clarify the terminologies and input modalities considered for different models, summarized in **Table G.1** in the General Response.
To investigate "The impact of input modality", we conducted multiple sets of controlled experiments between TQA, VQA, and VTQA. As acknowledged by R2 (i8Xy), "The TQA, VQA, VTQA test is a smart way to reveal the modality bias in VLMs and LLMs, clearly showing the blindness of VLMs."
Our comparisons and references in the paper are summarized in **Table G.2** in the General Response.
> *Q3: Missing results of Spatial-Map and some VLMs in the main paper*
Thanks for the question! The results of the "Spatial-Map" benchmark are provided in the main paper in Sec 4 (Fig 4 and 5), with further details available in the ablation studies in Sec 5.2 and 5.3 for both open-sourced and proprietary models (Fig 10, 11, and 12). Additional detailed results can be found in Appendix D.
In Sec 5.1, the ablation study comparing VTQA vs. TQA (VLM) does not include results for the InstructBLIP family because these models do not support using only their LLM backbone. We apologize for the oversight of not including the "Spatial-Map" results in Sec 5.1. This was an unintentional mistake. The full results have been included in the Appendix, and we will ensure they are added to the main paper in the revised manuscript.
> *Q4: Why LLaVA-1.6-34B underperforms other LLaVA models on Spatial-Grid?*
Great point! Fig 4, 5, 10, and 11 show that while LLaVA-1.6-34B consistently outperforms other LLaVA models on Spatial-Map and Maze-Nav, it lags in Spatial-Grid.
We conducted an ablation study, where we added three more questions Q4-Q6 (object in bottom-right, top-right, bottom-left corners) besides Q1-Q3 in Table 5 (Appendix D).
Detailed breakdowns can be found in **Table 2** in the additional one-page PDF. The results indicate that LLaVA-1.6-34B excels in counting (Q1) but struggles with layout and fine-grained visual interpretation (Q2-Q6), limiting its spatial reasoning in dense grid environments. However, we believe a deeper understanding of this phenomenon is beyond the scope of our work. We hope our benchmark serves as a springboard for further study on spatial reasoning and the design of more reliable MLLMs.
> *Q5: It remains unclear what on earth causes the weakness of these VLMs. Insights or takeaway to improve VLMs?*
Due to differences in model architecture, training pipelines, and the scale and diversity of training data, pinpointing the precise causes of weaknesses in modern VLMs is fundamentally challenging and beyond the scope of our study.
The primary purposes of this paper include: (1) Introducing a pioneering benchmark that evaluates diverse aspects of spatial reasoning with multimodal inputs. (2) Conducting a timely and comprehensive evaluation of a wide range of both open-sourced and proprietary LLMs and VLMs. (3) Highlighting the current limitations of VLMs in spatial reasoning, thereby setting the stage for further investigations.
In Sec 6, we proposed several hypotheses to explain the observed discrepancies among VLMs. While we acknowledge understanding the 'why' behind performance differences is crucial, we believe first systematically documenting and decomposing the observed phenomena is equally valuable to the research community. | Rebuttal 1:
Rebuttal: **Review summary** We sincerely appreciate all reviewers for their time and effort in providing valuable feedback and suggestions on our work. We are glad that reviewers recognize our work to be _novel_ (R1, R4), _highly impactful_, and _intriguing_ (R4). Additionally, reviewers found our results and analysis _intensive_ (R2) and _interesting_ (R3), with useful insights (R3, R4). We appreciate the positive remarks on our manuscript being well-written and well-structured (R1, R4), and the reviewers' appreciation for the direction of our research (R3).
We have addressed the comments and questions in individual responses to each reviewer. Below, we include two tables that are frequently referenced in our responses. The attached PDF contains an illustration of our new real-world benchmark **Spatial-Real**, evaluation results (Table 1), and an ablation study for the LLaVA family (Table 2, [R1]).
|Model|Input Modality|Term|Description|
|-|-|-|-|
|LLM| Text-only | TQA (LLM) | Input is purely textual and contains all necessary information of an image to answer the question.|
|VLM|Text-only| TQA (VLM) | Input is purely textual, but applied to VLMs (such as the LLaVA family). In the paper, this setting is called Text-only input with VLM (No Img) (Sec 5.1, Fig 7, Fig 11). We have renamed it TQA (VLM) in the revised manuscript for easier reference.|
|VLM|Vision-only|VQA| Image-only input without an equivalent textual description |
|VLM|Vision-text|VTQA| Input includes both an image and its textual description|
#### Table G.1: Terminology and input modalities for LLMs and VLMs.
|Comparison|Results and Analysis|Summary of Findings|
|-|-|-|
|TQA (LLM) vs. VQA| Sec.4 Figure 5| VLMs (with image-only input) rarely enhance the performance compared to their LLM counterparts (with text-only input).|
|VTQA vs. TQA (VLM)|Sec.5.1 Figure 7|VLMs exhibit improved performance in spatial reasoning tasks when the image input is absent.|
|VQA vs. VTQA|Sec.5.2 Figure 10|Given the same image input, additional textual description enhances VLM's performance.|
|TQA (VLM) vs. TQA (LLM)|Sec.5.2 Figure 11 |Multimodal finetuning enhances the LLM's spatial reasoning performance.|
|TQA (LLM) vs. VTQA |Appendix C Figure 15| No definitive winner. |
#### Table G.2: Summary of experiments on the impact of input modalities.
Pdf: /pdf/a693da4e32e84432779a98d7a4f6ac307458047e.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
An exactly solvable model for emergence and scaling laws in the multitask sparse parity problem | Accept (poster) | Summary: The paper presents a model capable of predicting the appearance of emergent abilities, using only information on the emergence of the first ability. The model successfully predicts emergence in a 2-layer MLP solving the multitask sparse parity problem, a toy-model problem constructed with a power-law skill distribution, to mimic the theorized distribution of language datasets. The model also exhibits scaling laws known to exist in this problem.
Strengths: The paper expands on recent works, in a significant and active field of research. The authors present convincing results when evaluating the proposed emergence model against MLP training data. Extensive derivations are provided in the appendix, although I did not carefully check them. The paper is written in a clear manner and results are visualized in clear formats.
Weaknesses: - The scope of the experiments is too small in my opinion, specifically Figure 1. It would be nice to see results for a lot more than 5 skills, since it seems possible that predicting the emergence of less-frequent skills becomes harder at higher orders of magnitude. My impression is that the experimental setup is light enough that scaling up will not require unreasonable resources.
- The general scope of the paper is a bit small, focusing only on the multitask sparse parity problem. It would have been nice to see results on more natural settings. That said, the current results on the chosen problem setting are still interesting on their own.
- Regarding the scaling laws presented in Table 2, glossing over the paper gives the impression that these are new scaling laws found by the authors, when in fact they are reproductions of the scaling laws calculated by Michaud et al. It might help to clarify the relation to Michaud et al. in the main section of the paper.
Technical Quality: 4
Clarity: 4
Questions for Authors: The main section of the paper does not explain why the input data is distributed as a power law (Zipf's law), making it hard to understand the justification for readers unfamiliar with previous literature, i.e., Michaud et al. I would suggest adding a short explanation when introducing the problem.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: Limitations are adequately discussed, including scope limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer, thank you for taking the time to review our paper. We appreciate your comments and suggestions.
* Scope of the experiments
As correctly spotted, learning additional skills requires another order of magnitude of computation, which strains our computational budget. The predictive power of our model indeed relies on the effective-decoupling assumption (second paragraph of Section 6 and Appendix D.3) of the skills in the MLP, which is weakened at larger $k$ because of the noise in SGD. Please see “Intuition and limitations of our model” in our global rebuttal for details.
* Scope of the paper
Indeed, we agree that we focused on a more idealized, abstract, and more feasible setup over more complicated but practical scenarios requiring significant computation budgets. We believe a more specific title correctly informs the scope and motivation of the paper. Please see “Scope of the paper and its title” in the global rebuttal.
* Scaling laws presented in Table 2
In fact, as described in the second paragraph of Section 4, we arrive at the same scaling law as Hutter 2021 for data and Michaud et al. 2023 for time, data, and parameters. Table 2 is intended to emphasize our novel contribution: how the proposed model leads to a rigorous derivation of the scaling laws (please see “Contributions beyond Michaud et al. 2023” in our global rebuttal for details). That said, we are happy to incorporate this information in the caption of Table 2 for clarification.
* Question: the power-law input data
We appreciate the reviewer's suggestion to provide more context on the power-law distribution of input data. We agree that a brief explanation in the main text would benefit readers less familiar with the literature. We will update our paper to include this explanation.
Regarding the justification for the power-law assumption: As the reviewer recognizes, real-world tasks often exhibit a spectrum of skill frequencies (such as Zipf's law). For instance, in language tasks, correctly placing common words like 'the' or 'is' in a sentence is required very frequently, while correctly using specialized technical terms or rare words is required much less often.
We again thank the reviewer for helpful comments and for taking the time to review our paper.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed response. In light of the changes proposed, I will change my score to accept (7).
One question about the experimental setup: What were the hardware requirements for the 2-layer MLP experiments in figure 1?
---
Reply to Comment 1.1.1:
Comment: Dear reviewer, we deeply appreciate your updating the score.
The specification of the setup is detailed in Appendix K.5. Each run of the experiment – one point in the figure – requires 2 to 5 hours for time emergence and 20 to 50 hours for the other experiments on a CPU. We ran the experiments on a CPU cluster because the small size of the MLP makes the difference between GPU and CPU runtimes less pronounced (running on an RTX 4090 GPU was typically only about 3x faster than an average CPU in the cluster), while we benefited from the parallelism of the larger CPU cluster.
The most demanding experiment was Fig.1(b) with data emergence. The figure has 30 different numbers of datapoints ($30$ runs), repeated $10$ times for the error bars. We ran for $5 \times 10^5$ steps for data and parameter emergence (compared to $3 \times 10^4$ for time emergence) to remove the potential effect from early stopping (i.e. to assure $T \gg D$).
To observe the emergence of an additional skill, we require an order-of-magnitude increase in $D$, which leads to an order-of-magnitude increase in $T$ – to ensure $T \gg D$ – and an order-of-magnitude increase in the batch size – to mitigate the SGD noise. | Summary: I’ll write two summaries: one to state my “moral” understanding of the work and another to state the specific contributions
**“Moral” Summary:** The authors propose an analytically solvable model to study scaling laws and emergent abilities by combining the problem of Barak et al 2022 + the data distributional assumptions of Michaud et al. 2023/2024 + the model of Saxe et al. 2024 + their own innovations.
**Specific Summary:**
- The authors study the multi-task sparse parity problem studied by Barak & Michaud
- The authors propose a multilinear model that identifies each sparse parity problem as a “skill” and then define the learned function as a “multilinear” (i.e. two independent linear parameters) function of the “skill” basis functions
- In this model, one can then exactly compute scaling and emergent abilities as a function of the typical scaling parameters (compute, data, parameters)
- The authors also train 2 layer MLPs and transformers to test how closely their maths match empirical results
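To illustrate the mechanism this summary describes — each product $a_k b_k$ undergoing logistic-like growth on a time scale set by its skill frequency — here is a minimal gradient-flow sketch. The population loss, the Zipf exponent, and all constants are our own illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

# Illustrative gradient flow of a multilinear model f(i,x) = sum_k a_k b_k g_k(i,x),
# assuming a per-skill population loss L = sum_k f_k * (S - a_k*b_k)^2 / 2 with
# Zipf-like skill frequencies f_k (our own assumed form, not the paper's).
S, n_skills, eps, dt, steps = 1.0, 5, 1e-3, 0.05, 20000
f = np.arange(1, n_skills + 1) ** -2.0
f /= f.sum()
a = np.full(n_skills, eps)  # small symmetric initialization
b = np.full(n_skills, eps)
emergence_step = np.full(n_skills, -1)
for t in range(steps):
    resid = S - a * b
    # Euler step of the coupled flow da_k/dt = f_k b_k resid_k, db_k/dt = f_k a_k resid_k
    a, b = a + dt * f * b * resid, b + dt * f * a * resid
    newly = (a * b > 0.5 * S) & (emergence_step < 0)
    emergence_step[newly] = t

# Rarer skills (smaller f_k) emerge later, each with an abrupt, sigmoid-like jump.
print(emergence_step)
```

With a symmetric start ($a_k = b_k$), the product $c_k = a_k b_k$ obeys logistic-type growth, so the emergence time scales roughly as $1/f_k$ — which is the staggered, sigmoidal pattern the summary refers to.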
Strengths: - Overall, I think this is a really well done paper (although I’m concerned about Figure 1 - see weaknesses). What it does, it does thoroughly.
- Table 1 is useful in helping explain the multi-task parity problem
- Table 2 is a great way to summarize and organize both the results as well as the conditions of each result
Weaknesses: - Emergence is a phenomenon studied at scale, and while I strongly support trying to find simplified models of large-scale phenomena, I feel that there needs to be an attempt to connect back to the original phenomena of interest. For instance, Michaud et al. 2023/2024 (Citation 17 in this paper) - on which this work most closely connects - at least attempts to look into real language model pretraining data. In contrast, this paper makes no such attempt, which I think strongly limits it. This is the #1 reason I don't feel comfortable giving a higher score.
- Following the above point, I consequently think a more focused title (e.g., “An exactly solvable model for emergence and scaling laws in the multitask sparse parity problem”) would better represent this paper.
- For an exactly solvable model, Figure 1 shows that the predictions only roughly match the experimental results. This becomes especially pronounced for higher k. For instance, look at Figure 1(c) orange. The prediction is that there should be a rapid leap from 0 to 1, but instead, there is a sigmoid-like transition with long tapering tails. Green, red and purple are all similar. The inability to predict 2-layer MLPs makes me think that this analytically solvable model is already only an approximation of incredibly simple networks.
Technical Quality: 4
Clarity: 4
Questions for Authors: Figure 4: What is the timescale of each skill’s emergence for the transformer? I can’t tell if the higher k lines are step functions or sigmoidal functions that are compressed by the log scaling of the x axis. To be clear, I’m not asking for when the skills emerge, but how long it takes for each skill to emerge.
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer, thank you for your comments on the paper and for appreciating our tables. Your moral summary represents our motivation in the paper well.
* Realistic setup and the title of the paper
We fully appreciate your comment on the scope/limitation of the paper and thank you for suggesting to change the title to specify its scope and motivation. Please see “Scope of the paper and its title” in the global rebuttal.
* Theory and experiment matching in Fig.1(c)
We appreciate the reviewer's careful observation of Figure 1. The discrepancies noted, particularly for higher $k$, are indeed due to the abstraction of our model. We expect the deviation to enlarge at higher $k$ as the SGD noise becomes more comparable to the difference in the skill frequencies and the effective-decoupling assumption (second paragraph of Section 6) weakens. Please see “Intuition and limitations of our model” in the global rebuttal.
We acknowledge that our model is an approximation as its purpose is to serve as a baseline for understanding more complicated dynamics in NNs. The strength of our approach lies in its ability to provide analytical predictions and an intuitive understanding of key phenomena, which would be challenging to achieve with more complex models. We believe this trade-off between simplicity and predictive power is valuable for advancing theoretical understanding in the field. Please see “Scope of the paper and its title” in the global rebuttal.
We discuss the limitations of our model in Section 5.5 and share our intuition on why it closely approximates an MLP in Section 6. Regarding the specific case of Figure 1(c), the discrepancy observed in MLPs is partly due to initialization-dependent learning outcomes, which our model doesn't explicitly incorporate. As discussed in the last paragraph of Section 5.5 and in Table 5 in Appendix I, the initialization-dependent learning failures in MLPs affect $\mathcal{R}_k$, especially when MLP has only a handful of hidden neurons to learn the skills: Even when an MLP can express all skill functions, it may fail to learn due to unfavorable initialization or inefficient use of its hidden neurons. We have argued how such outliers increase the standard deviation of the overall performance metric $\mathcal{R}_k$, but the argument also explains why the mean of $\mathcal{R}_k$ fails to saturate to $S$.
* Question: The time scale of each skill’s emergence in transformers
The timescale of the transformer is in optimization steps. Analogous to the assumption in the stage-like training (Figure 6 in Appendix D), the saturation time is typically significantly smaller compared to the emergence time (when it starts to emerge) even in linear time scale. Please see Figure 1 of the attached pdf in the global rebuttal.
The saturation time (how long it takes to emerge) is approximately 500 steps for the first skill and 5,000 for the fifth skill. All skills show a sigmoid-like saturation, though it is more distorted compared to that of the 2-layer MLP. Please see Figure 2 of the attached pdf in the global rebuttal. If the reviewer has additional comments or questions, please do not hesitate to let us know.
We again thank the reviewer for the suggestion of the title and for dedicating the time to review our work.
---
Rebuttal Comment 1.1:
Title: Response to Authors' Rebuttal
Comment: Thank you for your comments! I appreciate the additional figures in the global response and your answers to my questions.
I'm going to increase my confidence but keep the score. I don't feel comfortable increasing my score because while I feel this paper is very thorough, I feel it is limited in its general applicability without strong connections back to more realistic (larger, more data, real data) models.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer, we thank you for increasing confidence in your assessment. We are glad that additional plots have clarified your question. | Summary: This paper provides an in-depth study of a toy dataset and model, both in terms of scaling laws and emergent skills. They study the ‘multitask sparse parity’ synthetic problem introduced by Michaud et al. (target is the parity function of a string of random bits, each task indicates a different subset of bit ids, task frequency follows a power law). New in this work, they consider a ‘multilinear’ model (i.e. $y=ab x$ rather than simply $y=cx$) to incorporate the dynamics of a two layer neural net (as per Saxe et al. in a different setting).
The paper recovers scaling laws (which agree with Michaud et al.’s coefficients for T, D & N, and additionally adds for C=TN). The model also gives rise to emergence of each skill with a sigmoidal shape. Rarer skills are learned later on.
Strengths: - From a technical perspective, the paper is a very strong and complete piece of work. It exhaustively sweeps through results and surrounding analysis of the setup studied in the paper (even, for example, having proofs of the scaling laws in increasing resolution).
- I have confidence in the claims and analysis. I spot checked several parts in depth, however, I have not been able to interrogate all claims and analysis in the paper (the appendix pushes the paper to 54 pages).
- Providing a model that unites sigmoidal-shaped skill emergence with scaling laws, is a tantalizing prospect, as these are two major properties of LLMs. A theoretical model combining the two aspects would be of high interest to several parts of the community.
Weaknesses: - The paper contains a huge amount of material. A lot of the good stuff is buried in the appendix (related work is important, the contrast with a linear $y=cx$ model, the scaling law proof sketches, the stage-like training discussion). The reading experience suffers from this, and a conference paper struggles to do it justice. It's possible it would be better suited to a long-form journal format (and would allow reviewers more time to comb through the details).
- The scaling laws derived by the paper match the coefficients found in Michaud et al. (with the addition of $C$). There is a slight difference in that the setup now uses $y=abx$, but I generally felt that given the repeated outcome, providing scaling laws derived with varying resolutions need not be such a major focus of the paper and appendix (maybe some nuance has been lost on me?). Reducing this would free up bandwidth to allow readers to absorb the other more interesting parts.
- A concern I have is in how realistic the setup of the dataset and model is. The emergence and scaling laws come from a lightweight two-parameter linear regressor which receives ideal task indicators as input. The closest interpretation in a real setting I can think of, is as a two-layer linear MLP head, placed on top of a deep pretrained model that is frozen with powerful representations for each task already learned. This paper is still valuable, but I might suggest a more scoped title because of this – ‘an exactly solvable model for emergence and scaling laws’ is not inaccurate, but might be a little broad.
Technical Quality: 4
Clarity: 3
Questions for Authors: See weaknesses.
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 2
Limitations: Fine.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer, thank you for your kind comments and especially for dedicating the time to read through the appendix in depth.
* Material in the appendices
We appreciate your suggestion regarding the suitability of our paper for a journal format. We fully agree that a journal-style paper would allow for a more detailed narrative and potentially provide an easier reading experience. At the same time, we believe that presenting this work at a conference like NeurIPS is valuable: scaling laws and emergence are of great current interest to the broad ML community, and the conference format enables us to share our findings with a general audience and receive valuable community feedback, potentially inspiring further research.
We have strived to balance the conference format constraints with the depth of our work by including substantial appendices. These appendices serve to clarify our intuitions and make connections for readers from various backgrounds, while keeping the main paper focused and concise. We appreciate your thorough review of both the main paper and the appendices, and we thank you for recognizing the value of the additional material we provided.
* Focus on the scaling laws
Even though we wanted to put only the formal derivations in the appendix (mainly C, E, F, and J) and present all our intuition in the main text, some intuitive arguments had to be added to the appendix (mainly D, G, and H) for an integrated presentation of our contributions. That said, we still think there is room for further emphasizing the effective decoupled dynamics of MLPs and emergence in our work as suggested by the reviewer.
* Realistic setups
We appreciate your comment regarding the connection between our model and MLPs.
While our current paper provides a focused exploration of our theoretical model, we acknowledge that a comprehensive justification of its applicability to general setups, both theoretical and empirical, would constitute a significant body of work meriting its own dedicated study.
We, however, share our intuition on why a simple model with a prebuilt powerful representation approximates the properties of an MLP in the following places: the second paragraph of Section 6 in the main text, the discussion of effective decoupling in Appendix D.3, and an example MLP scenario in Appendix G. Please see “Intuition and limitations of our model” in the global rebuttal for details.
Regarding the scope and the title, we fully appreciate your concerns and plan to change the title of the paper as suggested. Please see “Scope of the paper and its title” in the global rebuttal.
We again thank the reviewer for investing the time to thoroughly read our work. | Summary: The paper proposes to use a certain generalization of the well-known sparse parity problem, as a theoretical framework for neural scaling laws.
The theoretical setup doesn't seem to make sense (I'm open to changing my mind). Indeed, equations (2) and (4) taken together give
\begin{equation*}
f^*(i,x) = S\sum_{k=1}^{n_s} g_k(i,x) = S\sum_{k=1} \delta_{ki}g_i(i,x) = S g_i(i,x).
\end{equation*}
This is definitely not "a sum of $n_s$ skills" (as claimed by the authors); it is a single skill. The same issue repeats itself in (9) when the authors introduced their so-called multi-linear model. Indeed, that model simplifies to (again thanks to (2))
\begin{equation*}
f_T(i,x) = \sum_{k=1}^{n_s} a_k(T)b_k(T)g_k(i,x) = \ldots = a_i(T)b_i(T)g_i(i,x).
\end{equation*}
From this point onward, it is not clear what the paper is trying to do.
Second, it is not clear what the paper is trying to achieve beyond what was already done in Michaud et al's "Quantization Hypothesis" paper (for the record, that paper proposed the multi-task sparse parity problem as an example exhibiting scaling laws w.r.t. sample size and model size, within the framework of their quantization hypothesis).
Finally, the paper seems to be missing some important literature, for example Cabannes et al. "Scaling Laws for Associative Memories" (ICLR 2024), which proposes finite-capacity extension of Hutter's model, and establishes an array of different scaling laws for different learning algorithms.
Strengths: As explained already, my low score is because I think the theoretical setup of the paper is unclear. I'm open to changing my mind.
Weaknesses: As explained already, my low score is because I think the theoretical setup of the paper is unclear. I'm open to changing my mind.
Technical Quality: 1
Clarity: 2
Questions for Authors: In what way do (3) and (9) represent "a sum of skills" ?
Confidence: 3
Soundness: 1
Presentation: 2
Contribution: 1
Limitations: Yes (in Section 5.4)
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer, thank you for being open to discussing what was unclear to you in the paper. Please see our response below.
* Theoretical setup clarification
Regarding the sum of skill functions, we believe the confusion arose from interpreting the input variable $i$ – which depends on the data points – as a fixed index such as $k$: note that $(i,x)$ as a pair form the input.
The $k^{th}$ skill function $g_k$ and the target function $f^*$ are different functions. For example, a datapoint $(i,x)$ with $i=k+1$ and any $x$ will result in $|g_k(k+1,x)| = 0$ (Eq. (2)) while $|f^*(k+1,x)| = |Sg_{k+1}(k+1,x)| = S$. Of course, a datapoint $(i,x)$ with $i=k$ will lead to $Sg_k(k,x) = f^*(k,x)$.
The Kronecker delta in your equation appears only if $i=k$, which is true only for datapoints from the $k^{th}$ skill (whose control bits are one-hot with the $k^{th}$ entry equal to 1), but not for other datapoints. We believe this clarification will also remove the confusion regarding Eq. (3) and Eq. (9).
If still in doubt, please look at our example below using Table 1.
* Contributions beyond Michaud et al.
Our goal was to provide the simplest quantitative model that captures the intuition of Michaud et al. 2023, to provide a rigorous derivation of the scaling laws, and to predict the emergence of an MLP. Please see “Contribution beyond Michaud et al. 2023” in the global rebuttal for details.
* Recent literature
Finally, we thank the reviewer for suggesting some related literature, which we will add to our related work section (in particular, the work: Cabannes et al. "Scaling Laws for Associative Memories" (ICLR 2024)).
* Table 1 Example
In our Table 1, note that the first column represents $i$ for a given datapoint, the second column represents $i$ in control bits, and the third column the skill bits $x$. The pair $(i,x)$ forms the input to the model. The fourth column, denoted $y$, is equivalent to the target function $f^*(i,x)$. The columns from the sixth to the last are the skill functions $g_1(i,x)$ through $g_{n_s}(i,x)$.
It is indeed true that for a single datapoint (a row), $f^*(i,x)$ (column 4) is a multiple of $g_i(i,x)$ (column $i+5$). However, no single skill function $g_k$ (a single column between $6$ and $n_s + 5$) can represent $f^*$ (column 4) for **all datapoints (all rows)**. Thus, we need the target function $f^*$ to be a sum of all skill functions (scaled by $S$) in Eq.(4).
In order for the relationship provided by the reviewer to hold, we must assume that the data distribution is generated by a single skill only (justifying that the Kronecker delta $\delta_{ik}$ holds for all datapoints), making the problem a sparse parity and not a `multitask’ sparse parity problem.
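To make the data-generating process in the Table 1 example fully concrete, here is a minimal sampling sketch. The skill count, bit width, subset size $k$, and Zipf exponent are our own illustrative choices, not the paper's exact constants:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_multitask_sparse_parity(n_samples, n_skills=8, n_bits=16, k=3, alpha=1.5):
    """Sample (control bits, skill bits, label) for multitask sparse parity.

    Illustrative assumptions: skill i is drawn with probability ~ i^(-alpha)
    (Zipf-like); each skill owns a fixed random subset of k bit positions; the
    label is the parity (+/-1) of those bits under the drawn skill.
    """
    freqs = np.arange(1, n_skills + 1, dtype=float) ** (-alpha)
    freqs /= freqs.sum()
    subsets = [rng.choice(n_bits, size=k, replace=False) for _ in range(n_skills)]

    skills = rng.choice(n_skills, size=n_samples, p=freqs)    # the index i per datapoint
    control = np.eye(n_skills, dtype=int)[skills]             # one-hot control bits
    x = rng.integers(0, 2, size=(n_samples, n_bits)) * 2 - 1  # skill bits in {-1, +1}
    y = np.array([x[j, subsets[i]].prod() for j, i in enumerate(skills)])
    return control, x, y

control, x, y = sample_multitask_sparse_parity(1000)
```

As in the Table 1 discussion, no single column (skill function) determines $y$ for all rows; the label depends on which control bit is active, so the target is a sum over all skills.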
If the reviewer finds our clarification insufficient or unclear, please do not hesitate to let us know.
---
Rebuttal 2:
Comment: - Thanks for the clarification; a notation problem indeed.
- I have a good understanding of the paper now. I think the contribution is interesting but still incremental (based on current literature on provable scaling laws).
I'm increasing my score to 6.
---
Rebuttal Comment 2.1:
Comment: Dear reviewer,
We are glad that we clarified the confusion and thank you for updating the score.
---
Rebuttal Comment 2.2:
Comment: Dear reviewer,
Thank you for your review and efforts. Please update your score on the original review as well. | Rebuttal 1:
Rebuttal: Dear reviewers and AC, we present a global rebuttal to address the overlapping reviews.
## Scope of the paper and its title
As most reviewers have pointed out, the primary strength of our paper is the detailed theoretical analysis of the scaling laws and emergence with a concrete model while the main weakness lies in the lack of empirical work on larger models in more natural setups.
Even though we fully appreciate the importance of gaining predictive power in larger models with real-world datasets, we mainly focused on theoretical tractability over realism, and believe the work is meaningful on its own by providing 1. clear isolation of key phenomena (emergence and scaling laws) without confounding factors present in more complex systems and 2. rigorous mathematical analysis and derivation from fundamental principles for future works.
On that note, we fully appreciate the comments on how the title is less focused and potentially suggests a scope beyond our intentions. As commented, a more detailed title such as **“An exactly solvable model for emergence and scaling laws in the multitask sparse parity problem”** better represents the scope and motivation of the paper. We thank the reviewers for pointing this out.
## Contributions beyond Michaud et al. 2023
As acknowledged by some reviewers, our work can be viewed (though is not limited to this view) as a theoretical formalization of the quanta hypothesis of Michaud et al. 2023, for which we wish to emphasize our contribution in more detail.
The quanta hypothesis from Michaud et al. 2023 lacks mathematical formalism (i.e., both quanta and skills are not mathematical objects) and lacks the derivation of scaling laws under “gradient descent dynamics” (it was stated that the skills were learned by some criteria, see e.g., the Quantization Hypotheses on page 3 of Michaud et. al, https://openreview.net/pdf?id=3tbTw2ga8K).
We provide a formal model trained on gradient flow that 1. analytically reproduces the scaling laws (including the prefactor constants) and 2. predicts emergence in 2-layer MLPs. The quantitative prediction of emergence in 2-layer MLPs is novel and no modeling or prediction for the emergence phenomenon was studied in Michaud et al. 2023. We believe such formalism serves as a baseline for future more complex models for understanding scaling laws and emergence in more practical NNs.
## Intuition and limitations of our model
Our work presents a simplified model that approximates the complicated feature-learning dynamics of an MLP with decoupled basis functions and a product of parameters. Even though our work suggests that our assumptions effectively capture the emergence in MLPs (e.g. Fig. 1), the exact conditions for the assumptions to hold, and whether they hold in larger models with realistic datasets, require further study.
We, however, extensively share our intuition on why our model approximates the emergence in MLPs (Second paragraph of Section 6, Appendix D.3, Appendix G, and Appendix I) and also the limits of our model of why it may diverge from the dynamics of MLPs (Section 5.5).
For example, we give some evidence that our model well-approximates the MLP because of the effective decoupling in MLPs, or that skills are feature-learned in stages (second paragraph of Section 6 and Appendix D.3). The large difference in skill frequencies, analogous to stage-like training (Appendix D) of parameters in multilinear models, allows each $g_k$ to be learned at a different time scale, justifying why the decoupled dynamics of our multilinear model is a good approximation. Unfortunately, the absolute difference in skill frequencies becomes comparable to SGD noise as $k$ increases, creating a larger divergence for larger $k$. The noise from SGD can be mitigated with a larger batch size, but we would then require another order of magnitude in training time and batch size, which challenges our limited computational budget.
Pdf: /pdf/8634d59565de2b49445ff29532603e449dd230b8.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Wasserstein Gradient Boosting: A Framework for Distribution-Valued Supervised Learning | Accept (poster) | Summary: This work proposes to perform boosting with base learner which are fitted to the Wasserstein gradient of a loss function on the space of probability distributions, which can be useful in particular to capture uncertainty of the models.
Several variants of the algorithm are discussed (by adding a diagonal Hessian preconditioner, and with different approximations of Wasserstein gradients for functionals not differentiable on discrete measures). The method is demonstrated on posterior regression tasks where the functionals are KL divergences with respect to some prior, and applied to different real-dataset benchmarks.
Strengths: This paper is well written and proposes a new interesting boosting method to minimize loss over probability distributions.
- The paper is well written
- A new boosting algorithm guided by functionals on probability distributions
- Application on real datasets outperforming baseline methods
Weaknesses: The paper is good overall in my opinion, but still has some weaknesses.
- Experiments focus on KL divergence functional.
- No theoretical analysis
Technical Quality: 4
Clarity: 4
Questions for Authors: Did you compare between the different algorithms (i.e. with and without Wasserstein Hessian preconditioner, and Kernel vs Langevin approximation) on the benchmark of Section 4.2?
Are there other possible functionals which could be interesting besides the KL or other divregences between probability distributions?
Could the method be used with input distributions, e.g. to do regression on probability distributions?
Typos:
- Line 145: "gradint"
- Line 520: "Wasserstien"
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your strong endorsement of our work. We are delighted that you have found our methodology well-written and intriguing.
> Did you compare between the different algorithms (i.e. with and without Wasserstein Hessian preconditioner, and Kernel vs Langevin approximation) on the benchmark of Section 4.2?
Thank you for raising this point. So far we have compared them only through the simulation study, to pick the most reasonable algorithm (i.e. the Wasserstein Hessian diagonal preconditioner) for the real-world application. We will add a further comparison of them on the real-world application.
> Are there other possible functionals which could be interesting besides the KL or other divregences between probability distributions?
Thank you for checking this point with us. Yes, there is ample room to investigate other interesting functionals to use with WGBoost. For example, the kernel Stein discrepancy is one functional for which we can perform the Wasserstein gradient flow. We will expand the discussion of other potential functional choices. Although our scope in this work is the KL divergence, developing further special cases of WGBoost with other functionals is important future work.
> Could the method be used with input distributions, e.g. to do regression on probability distributions?
Thank you for your interesting question. Our current thought is that, because WGBoost can use any base learner, we can do that if we use a base learner that can take a distribution as an input value. This sounds like another interesting use of WGBoost.
> Typos: ...
Thank you. We have corrected the pointed typos.
---
Rebuttal Comment 1.1:
Comment: Thank you for your answer and addressing my comments. I do not have further questions and I will keep my score at 7.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer
Comment: We appreciate your response and again all your efforts in reviewing our manuscript. | Summary: This paper proposed a new ensemble algorithm, called Wasserstein Gradient Boosting (WGBoost), which is a novel gradient boosting framework that leverages the Wasserstein gradient for probabilistic prediction. Specifically, WGBoost fits a new base learner to the Wasserstein gradient of a loss functional on the space of probability distributions. Since we cannot access the probability distribution at each iteration, nor evaluate the Wasserstein gradient exactly, the authors propose to implement it using a particle method and to approximate the functional gradient by a kernel method. The algorithm returns a set of particles that approximate a target distribution for each input. The main application demonstrated is posterior regression, where WGBoost provides a distributional estimate of output-distribution parameters.
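The "particle method + kernel method" combination described in this summary can be sketched as follows; for the KL functional, a kernel-approximated Wasserstein gradient coincides with the well-known SVGD direction. This is our own toy illustration under assumed constants (RBF kernel of bandwidth 1, a Gaussian target per input), not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def svgd_direction(particles, score):
    """Kernel estimate of the (negative) Wasserstein gradient of
    KL(rho || pi) at each particle -- the SVGD direction, using an
    RBF kernel of bandwidth 1 (an illustrative choice)."""
    diff = particles[:, None] - particles[None, :]  # diff[j, i] = z_j - z_i
    K = np.exp(-0.5 * diff ** 2)
    grad_K = -diff * K                              # d/dz_j K(z_j, z_i)
    return (K @ score(particles) + grad_K.sum(axis=0)) / len(particles)

# Toy posterior-regression flavour: suppose the target for input x is
# N(2x, 1), whose score is d/dz log pi(z) = 2x - z.  (Our own setup,
# not the paper's experiments.)
x = 1.5
particles = rng.normal(size=20)
for _ in range(200):
    # In a WGBoost-style scheme, this direction would be the fitting target
    # of a new base learner evaluated at input x; here we simply apply it
    # directly as a gradient-flow step on the particles.
    particles = particles + 0.3 * svgd_direction(particles, lambda z: 2 * x - z)

# The particles should now concentrate around the target mean 2x = 3,
# with the kernel-repulsion term keeping them spread out.
```

The key design point the summary highlights is that the base learner is fitted to this per-particle direction, so the ensemble outputs a set of particles for every input rather than a point prediction.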
Strengths: - WGBoost's ability to approximate target distributions with particles offers a robust approach to posterior regression, capturing predictive uncertainty effectively.
- The proposed method shows superior performance in empirical evaluations on real-world tabular datasets, both for regression and out-of-distribution detection tasks.
- The implementation seems easy, utilizing the particle method combined with the kernel approximation.
Weaknesses: The approach seems interesting, but the paper lacks a comparison with existing work. Moreover, there is no analysis or discussion of when this algorithm is useful, as I explain below.
- The proposed method seems almost identical to Stein variational gradient descent (SVGD), but there is no qualitative or quantitative comparison with it. Please explain the fundamental difference compared to SVGD.
- Since the proposed method is very similar to SVGD, I think a comparison with SVGD and its extensions, including [3, 5, 6, 7], is needed (there are many other variants of SVGD as well).
- It has been known that the posterior approximation quality strongly depends on the choice of kernel in SVGD [1, 2, 3, 4]. Since the proposed algorithm is almost identical to SVGD, the approximation quality of WGBoost should be strongly affected by the choice of kernel function. However, there is no discussion of this point.
- The numerical experiments only report final performance on benchmark datasets, and I cannot tell when, and for what kinds of problems, the proposed algorithm is suitable for approximating the posterior distribution. It is known that SVGD suffers from collapse phenomena.
- In addition to the above point, there is no discussion of the computational cost of the proposed algorithm, by which I mean both the cost per iteration and the convergence speed. How large is the computational cost compared to existing methods with respect to the number of particles and the training dataset size? Since the proposed algorithm is based on boosting, I suspect it suffers a large computational cost with respect to the training dataset size.
- As for the convergence speed, it has been known that the convergence of SVGD is slow and does not show linear convergence [4], and I suspect that the proposed method suffers from a similar problem. However, there is no discussion of the convergence speed and no numerical comparison with existing methods.
[1] Stein Points
[2] Measuring Sample Quality with Kernels
[3] Kernel Stein Discrepancy Descent
[4] On the geometry of Stein variational gradient descent
[5] FUNCTION SPACE PARTICLE OPTIMIZATION FOR BAYESIAN NEURAL NETWORKS
[6] Feature Space Particle Inference for Neural Network Ensembles
[7] Repulsive Deep Ensembles are Bayesian
Technical Quality: 2
Clarity: 3
Questions for Authors: I wrote questions in the Weakness.
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The limitation is unclear. As far as I read, no formal description is presented.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your assessment of our work. We are afraid that there seems to be a misinterpretation of our method in the reviewer's concerns: (a) that the proposed method is almost identical to Stein variational gradient descent (SVGD) and (b) that the paper lacks a comparison with existing work. Please let us recap the high-level setting of WGBoost in the response that follows. We hope this clarification will assist in the reassessment of our work and open up room for an upward revision of the score.
> the proposed method seems almost identical to SVGD / the paper lacks the comparison with existing work
**While SVGD is a sampling method, WGBoost is a new gradient boosting method**. Please note that our setting is **not** about sampling from a posterior over model parameters (e.g. BNN). To see this, we compare (i) learning of BNN and (ii) our setting here:
- **BNN Learning**: We have a dataset $( x_i, y_i )\_{i=0}^{n}$ of input vector $x_i$ and output vector $y_i$, by which we have a posterior $P(w \mid ( x_i, y_i )\_{i=0}^{n} )$ over the network parameter $w$. The goal is to approximate $P(w \mid ( x_i, y_i )\_{i=0}^{n} )$ well by sampling or VI;
- **Our Setting**: We have a dataset $( x_i, \pi_{x_i} )\_{i=0}^{n}$ where $\pi_{x_i}$ is a probability distribution given at each input $x_i$. Our goal is to have a learning algorithm that can predict the unseen distribution $\pi_x$ for any new input $x$.
Our setting is clearly a less common, challenging regression problem because our target output is now a probability distribution $\pi_{x_i}$ at each input $x_i$ rather than some finite-dimensional vector output $y_i$. WGBoost is designed to solve this problem. Please find *Section B in our global rebuttal* for more detailed recap.
The connection between WGBoost and SVGD is as follows. In Section 2, we described a general framework of WGBoost in Algorithm 1. To use WGBoost, we need to specify two components:
- (a) a loss functional $\mathcal{F}(\hat{\pi}\_{x_i} | \pi\_{x_i})$ between the WGBoost output $\hat{\pi}\_{x_i}$ and the target distribution $\pi\_{x_i}$ at each $x_i$;
- (b) how to estimate the Wasserstein gradient of $\mathcal{F}$ at each $x_i$.
In Section 3.3, we described that the kernel-based functional gradient used in SVGD is one way to approximate the Wasserstein gradient of the KL divergence (see [49]). We then described that we use this kernel-based approximation to specify item (b) for our main applications. In Appendix E, we compared WGBoost under four different choices of the estimate of the Wasserstein gradient: the functional gradient in SVGD, the one in the Stein Newton method with full Hessian, the one in the Stein Newton method with diagonal Hessian, and a stochastic alternative based on Langevin diffusion. We applied WGBoost to probabilistic classification/regression. Hence our main experiment was the place to compare WGBoost with relevant UQ competitors.
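For illustration, the kernel-based functional gradient of SVGD mentioned above can be sketched in a few lines of NumPy (a minimal sketch, not the paper's implementation; the function names and the RBF bandwidth `h` are our own choices):

```python
import numpy as np

def rbf_kernel_and_grad(particles, h=0.1):
    """RBF kernel matrix K[j, i] = k(theta_j, theta_i) and its gradient
    with respect to the first argument theta_j."""
    diff = particles[:, None, :] - particles[None, :, :]   # (N, N, d): theta_j - theta_i
    sq = np.sum(diff ** 2, axis=-1)                        # (N, N) squared distances
    K = np.exp(-sq / (2 * h ** 2))
    grad_K = -diff / h ** 2 * K[:, :, None]                # (N, N, d)
    return K, grad_K

def svgd_direction(particles, score, h=0.1):
    """Kernel-smoothed estimate of the (negative) Wasserstein gradient of
    KL(. || pi): phi(theta_i) = (1/N) sum_j [ k(theta_j, theta_i) score(theta_j)
    + grad_{theta_j} k(theta_j, theta_i) ], where score = grad log pi."""
    N = particles.shape[0]
    K, grad_K = rbf_kernel_and_grad(particles, h)
    S = score(particles)                                   # (N, d) array of grad log pi
    return (K.T @ S + grad_K.sum(axis=0)) / N
```

The first term transports particles along the kernel-smoothed score, and the second (kernel-gradient) term acts as a repulsive term that disperses the particles.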
> ... the quality of approximation by the WGBoost is strongly affected by the choice of the kernel function. ...
Thank you for raising this point. The choice of the kernel is indeed important for most kernel-related methods. Through a simulation study, we observed that the Gaussian kernel with a scale parameter of 0.1 works well. We will add a detailed discussion of the kernel choice and its sensitivity, with simulation studies.
> The numerical experiments are only conducted with respect to the final performance on benchmark dataset and I cannot understand when and what kind of problems the proposed algorithm is suitable to approximate the posterior distribution.
Thank you for the opportunity for us to clarify. Our proposed method is a classification/regression algorithm. For experiments using UCI data, reporting the total log-likelihood score or RMSE is a fairly standard practice. For other simpler datasets, we have visualisations of the output of WGBoost: Figure 2 with the toy target $\pi_{x_i} = \mathcal{N}(\sin(x_i), 1)$, Figure 1 for the data in Section 4.1, and Figure 5 in the Appendix for the second dataset in Section 4.1. However, we agree that having more visualisation even for the UCI datasets in Sections 4.2-4.3 and adding a discussion of the approximation property would be helpful.
For visualisation of the UCI datasets, we will pick a few examples of input $x_i$ and target density $\pi_{x_i}$ from the dataset. We will then show the output of WGBoost at $x_i$ together with $\pi_{x_i}$. The input $x_i$ is of arbitrary dimension, but the target density $\pi_{x_i}$ at each input $x_i$ is often a 1- or 2-dimensional unimodal density in our application, so the approximation at each $x_i$ should be less challenging. We will demonstrate this point through the additional visualisation.
> In addition to the above point, there is no discussion about the computational cost of the proposed algorithm. ... / As for the convergence speed, it has been known that the convergence of the SVGD is slow ... I suspect that the proposed method suffers similar problem. However, no discussion is present about the convergence speed or no numerical comparison ...
Thank you for the opportunity for us to clarify. As recapped at the top, WGBoost is not a sampling method such as SVGD. WGBoost is an algorithm that depends on a user specification of how to estimate the Wasserstein gradient. In Appendix E, we have a simulation study comparing both the loss-decay speed and the computation time of WGBoost under four different estimates of the Wasserstein gradient. We will make this clearer and further expand the discussion of the speed and computational order in the main text.
Section 3.4 clarified that, for our application, we ultimately employ the functional gradient used in the Stein Newton method (with diagonal Hessian approximation) in WGBoost rather than the one used in SVGD. It is a second-order version of the functional gradient in SVGD and more efficient. Our simulation study shows that this choice has better loss decay and computation time.
---
Rebuttal 2:
Title: Thank you for the reply
Comment: Thank you for your response to my questions.
However, I still have concerns. Specifically, the fundamental and mathematical differences between SVGD and your proposed method remain unclear. Although you mention that the purposes of the methods differ (sampling versus boosting), the core principle in both cases involves minimizing an objective functional, such as KL or L2, within an RKHS framework [1, 2]. This suggests that, despite the different objectives, the algorithms and their mathematical foundations might be essentially similar. Therefore, I would like to keep the score as is.
[1] Stein Variational Gradient Descent as Gradient Flow
[2] On the geometry of Stein variational gradient descent
---
Rebuttal 3:
Title: Response to Reviewer
Comment: Thank you for your response and the opportunity to clarify.
> the algorithms and their mathematical foundations might be essentially similar (to SVGD)
Please let us make our last attempt to clarify the difference. Here we will compare them from the viewpoint of sampling. While SVGD samples from one distribution, WGBoost samples from a **conditional** distribution as follows:
- **SVGD** gets samples $(\theta_1, \dots, \theta_N)$ from a distribution $P(\theta)$, where let's say $P(\theta)$ is an arbitrary distribution over an arbitrary variable $\theta$.
- **WGBoost** gets samples $(\theta_1(x), \dots, \theta_N(x))$ from a conditional distribution $P(\theta \mid x)$ for any input $x$. WGBoost learns the conditional distribution in the setting where we can know the form of $P(\theta \mid x_i)$ only at given data inputs $( x_i )\_{i=1}^{n}$. In the algorithm, WGBoost **predicts** the Wasserstein gradient used to sample $(\theta_1(x), \dots, \theta_N(x))$ for each input $x$ by tree-model.
SVGD does not have an input variable $x$, so it does not have the prediction stage of the Wasserstein gradient, unlike WGBoost. In this sense, WGBoost is a conditional version $P(\theta \mid x)$ of a Wasserstein gradient flow of $P(\theta)$, where the Wasserstein gradient of the particles $(\theta_1(x), \dots, \theta_N(x))$ is predicted for each input $x$, because the form of $P(\theta \mid x)$ can be known only at $x=x_i$ in our setting.
We hope this way of explanation concisely shows the difference. We are keen to hear whether the reviewer's score remains unchanged.
Strengths: Originality:
- I like the application of Wasserstein gradients and gradient flow to gradient boosting trees, don’t think I’ve really seen any paper like this before.
Quality:
- I didn’t check especially carefully but the machinery for the algorithm seems to be well explained and correct.
Clarity:
- The relevant machinery of Wasserstein gradients and gradient boosting is mostly well explained.
Significance:
- Making trees more probabilistic with limited modifications to their tabular capabilities would be a quite nice advance.
Weaknesses: Originality:
- Nothing really noted. Being a straightforward application of some machinery is totally fine.
Quality:
- For a paper on trees, I find the experiments quite small scale and limited. I would have expected NLL experiments on larger scale datasets, such as the xgboost and lightgbm papers.
- Comparisons to (approximate) Bayesian neural networks are quite weak – many other methods such as sgmcmc, etc. tend to outperform on datasets of these scales. references: https://arxiv.org/abs/1902.03932, https://arxiv.org/abs/1907.07504, https://arxiv.org/abs/2002.03704, amongst others
- The natural missing comparison here is to Gaussian processes and other kernel methods, which are naturally probabilistic and similarly nonparametric (like trees).
- Another missing set of experiments is comparison to quantile regression / pinball loss using trees, which is implemented directly in lightgbm. Quantile regression itself is also naturally nonparametric in at least some sense.
Clarity:
- My understanding of Wasserstein particle flows is that the particles should interact in some manner during the gradient step. However, the writing of Algorithm 1 makes this quite unclear. I think that the kernel smoothing in the gradient step for the approximate flow is what makes the particles interact, but it’s overall quite unclear.
  - The code doesn’t seem to provide any clarity here.
- Overall, it’s quite unclear which parameters in the loss are actually being estimated. If we’re only estimating uncertainty in regression parameters (or analogously classification), then there’s straightforward two stage approaches.
  - For example, one can easily fit a tree predicting the mean and then modify the loss function to predict its variance, or we can modify the loss in classification problems to do something analogous.
- The writing is extremely passive and non-specific. Suggestions below.
  - L150: “procedure of exact or approximate” Please use the algorithm box to specifically write or point out which algorithm is used in the experiments. The current algorithm is so unspecific it’s very hard to follow.
  - L113-115: “Although [32] … originally suggested…” rephrase to something like “Although Friedman [32] originally proposed using a line search … , Buhlmann and Hothorn [34] recommend against the line search …”
Significance:
- Part of the strength of trees in my experience is that they scale pretty well to large tabular datasets (e.g. n = 10 million). You also tend to need strong uncertainty quantification on these types of datasets, which is part of the reason why Bayesian neural nets became popular for a while. Yet, these large scale uncertainty quantification experiments are lacking from the paper.
  - Bayesian neural nets: https://proceedings.mlr.press/v115/izmailov20a.html, https://proceedings.mlr.press/v130/immer21a/immer21a.pdf,
  - Gaussian processes: https://arxiv.org/abs/1809.11165,
Technical Quality: 2
Clarity: 2
Questions for Authors: Algorithm 1:
- Is the Wasserstein gradient where the particles manage to interact? Otherwise, I see nowhere else the various particles / learners would interact.
Table 1: where do the error bars in the baseline methods come from?
- Was the same preprocessing done / e.g. same train / test split for these?
- In general, I’m pretty sure that MC Dropout is a quite weak baseline for a Bayesian neural net, and many other BNN approaches are much stronger than it at this point (see references above)
- the clear tree baselines here would be quantile regression (implemented in lightgbm) and fitting a mean model with MSE loss and then a second tree to predict the variance by optimizing $\max_s N(y \mid \hat y, \exp\{s\})$, which can be done in lightgbm with a manual loss function.
Section 3.3:
- so we’re just doing Wasserstein gradients on the mean and variance in a Gaussian regression problem? If so, then it seems like a mean/variance two-stage fit model would be the natural “base” method to compare to (and could still plug in the “prior” as regularization)
Section 3.1:
L190: “it depends only on the log gradient …”: yes, this is the requirement for most Bayesian (and probabilistic) inference, e.g. MCMC sampling, variational inference techniques.
Table 1: what are the error bars?
Table 2: I believe that MC Dropout is pretty well known for being poor at out of distribution detection [find reference], while the P Network is certainly a stronger baseline. What would entropies of a traditional multiclass classification tree perform like in terms of an out of distribution detector here?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive feedback on our work. We respond to each of your comments and concerns below. We hope that these responses will open up room for an upward revision of the score of our work. (In our responses below, citation numbers such as [9] and [10] correspond to references in our manuscript.)
> For a paper on trees, I find the experiments quite small scale and limited ... / ... large scale uncertainty quantification experiments are lacking ...
Thank you for the opportunity for us to strengthen our experiment. We will add new experiments on large-scale datasets, e.g. the HIGGS, KEGG, Buzz in social media, and Year Prediction MSD datasets, all of which are very large (n > 500,000). Meanwhile, the current set of experiments is a common benchmark in the UQ literature [13, 33, 9, 10] and includes medium-to-large datasets of size n ≈ 50,000 or n ≈ 10,000 (for example, a relevant gradient-boosted tree UQ method, natural gradient boosting [33] at NeurIPS, used only those datasets in its Section 4.2). We believe that adding the large-scale experiments to the current ones would make our benchmark strong in both the tree and UQ contexts.
> Comparisons to (approximate) Bayesian neural networks are quite weak ...
Thank you for raising the point together with the expert literature. We will include advanced approximate Bayesian methods beyond MC Dropout in our comparison. The previous manuscript followed the relevant UQ literature [33, 13, 14], which used MC Dropout and Deep Ensemble in comparison.
Here, we have checked the experimental results of the provided literature, Subspace Inference for Bayesian Deep Learning. Their Appendix Tables 2 and 3 show the Log-Likelihood (−1 × NLL) and RMSE scores of several advanced Bayesian methods on some datasets we used. Our method produces the best score in 8 of the 10 rows of Tables 2 and 3 combined.
> The natural missing comparison here is to Gaussian processes and other kernel methods, ... Another missing set of experiments is comparison to quantile regression / pinball loss using trees, ...
Thank you for your constructive feedback. We will also include kernel-based and quantile-based nonparametric methods in our experiments. With these methods added, our comparison covers a range of methods: common deep learning UQ methods, other approximate Bayesian methods, other gradient boosting UQ method, and nonparametric regression methods. We believe that this gives us a high-quality benchmark for our proposed method.
> The code doesn’t seem to provide any clarity here / The current algorithm is so unspecific it’s very hard to follow.
Thank you for the opportunity for us to clarify. First, we would like to recap our current paper structure. In Section 2, we first explained an abstract framework of WGBoost in Algorithm 1. To use WGBoost, we have to specify
- (a) a loss functional over probability densities;
- (b) how to estimate/approximate the Wasserstein gradient of the chosen loss.
These (a) and (b) are user-specified components of WGBoost. Then, in Section 3, we presented the specific setting for our application, where we use (a) the KL divergence and (b) the kernel-smoothing estimate of the Wasserstein gradient. So our intention was to describe the algorithm in a general manner in Algorithm 1 and then specify (a) and (b) in Section 3.
We will more strongly clarify that WGBoost in Algorithm 1 takes (a) and (b) as user-specified components. In the Appendix, we have Algorithm 3, which shows the explicit algorithmic table for the specific setting in Section 3. We will clarify this as well. Finally, the terminology "exact or approximate Wasserstein gradient" could be confusing, so we will change it to "estimated Wasserstein gradient".
> My understanding of Wasserstein particle flows is that the particles should interact in some manner during the gradient step. / Is the Wasserstein gradient where the particles manage to interact?
Thank you for checking this point with us. In principle, interaction is not a compulsory part of Wasserstein particle flows. Whether particle interaction happens depends on how the Wasserstein gradient is estimated. When it does happen, it is beneficial, providing some enforced dispersion of the particles.
For example, the JKO-scheme-based estimation of the Wasserstein gradient does not have an interaction term. Langevin Monte Carlo, a stochastic formulation of Wasserstein flow, does not have an interaction term either. On the other hand, our kernel-smoothing estimate in Section 3 does induce particle interaction. So interaction is not compulsory, but WGBoost under the setting of Section 3 for our application does have particle interaction thanks to this choice of estimate.
> Table 1: where do the error bars in the baseline methods come from? / Was the same preprocessing done / e.g. same train / test split for these?
We followed the de facto standard protocol of [53] to prepare 20 different patterns of training/test sets from the datasets in Table 1. The error bars were created by computing the test error 20 times using the 20 different splits.
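For reference, this kind of protocol can be sketched as follows (a hypothetical reconstruction: the 90/10 split ratio and the toy model below are our own assumptions for illustration, not taken verbatim from [53]):

```python
import numpy as np
from sklearn.model_selection import ShuffleSplit
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = X[:, 0] + 0.1 * rng.normal(size=200)   # toy regression target

# 20 random train/test splits; the test error is computed once per split,
# and the reported error bar is the mean +/- standard deviation over splits.
rmses = []
for train_idx, test_idx in ShuffleSplit(n_splits=20, test_size=0.1,
                                        random_state=0).split(X):
    model = DecisionTreeRegressor(max_depth=3).fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    rmses.append(np.sqrt(np.mean((pred - y[test_idx]) ** 2)))

error_bar = (np.mean(rmses), np.std(rmses))
```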
> it seems like a mean / variance two stage fit model would be the natural “base” method to compare to ...
WGBoost produces a particle-based "distributional" estimate of the mean and variance of the Gaussian-noise regression for each input. Indeed, a method that produces a point estimate of the mean and variance would be a good base model, and NGBoost is such a tree method in our comparison. We will further investigate the two-stage model too.
> What would entropies of a traditional multiclass classification tree perform like in terms of an out of distribution detector here?
We didn't include this in our manuscript, but we observed that entropies of a traditional classifier show poor OOD performance during the development of our method. We will add a discussion on this point.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their clarifications. However, without further experimental comparisons, my opinions on the paper are unchanged.
In terms of experimental benefits, I don't think there's a compelling application as described in the paper to use Wasserstein boosting for tree methods as compared to something like NGBoost (or other probabilistic baselines like 2 stage fits, quantile regression etc.). Wasserstein boosting provides non parametric uncertainty estimates, which NG Boost and quantile regression on trees also provide. I understand that Wasserstein boosting could be used on top of a quantile regressor or the two stage fitter, but these experiments just aren't done presently so there's, to my understanding, not as much of a compelling story here.
re " , Subspace Inference for Bayesian Deep Learning. " I believe the pre-processing may be slightly different (you can see by comparing their "SGD" baseline to your deep ensemble baseline), but this is good to understand that the Wasserstein boosting algorithm performs quite sensibly.
I also think that, even after the rebuttal, the comparison with SVGD is a bit confusing to me (as mentioned by reviewer q9Jy), and would like to see the authors further clarify here. The algorithm seems to be different primarily in its application to boosting, but in some sense this is quite similar to the application in parametric models as well.
---
Rebuttal 2:
Title: Response to Reviewer Comment
Comment: We thank the reviewer for providing further comments and the opportunity to discuss them. All your comments will be used and are helpful to improve our manuscript.
(PS: Please refresh this webpage in case the latex commands were not rendered as math equations properly.)
## Difference from NGBoost and Quantile Regressor
To illustrate, we would like to consider a classification problem, where we have input $x$ and output label $y$. We assume a categorical distribution $C(y \mid q)$ over the label $y$ given the class probability vector $q$. What NGBoost (and two-stage fit model) will provide is a **point estimate of the class probability** $q$ at each input $x$, that is
- $x \to \text{NGBoost} \to q(x) \to C(y \mid q(x))$
On the other hand, what WGBoost will provide is a particle-based **distributional estimate of the class probability** $q$:
- $x \to \text{WGBoost} \to ( q_1(x), \dots, q_N(x) ) \to (1 / N) \sum_{i=1}^{N} C(y \mid q_i(x))$
What we show in the paper is that (i) taking the average over the distributional estimate leads to better performance (in Section 4.2, with better performance than NGBoost for regression) and (ii) uncertainty in the distributional estimate can be used for OOD detection (in Section 4.3). These advantages also differ from quantile regressors, since quantile regressors provide only quantiles of the output variable $y$, i.e., neither a distribution on $y$ nor a distribution of $q$. (Such a distributional estimate of the class probability $q$ has been explored in Deep Evidential Learning (DEL) approaches. In this context, WGBoost is the first method that enables a tree version of DEL approaches.)
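The contrast can be made concrete with a small numeric sketch (the particle values below are made up for illustration):

```python
import numpy as np

# Hypothetical particle output of WGBoost at one input x: N = 4 estimates of
# the class-probability vector q over 3 classes.
q_particles = np.array([
    [0.7, 0.2, 0.1],
    [0.6, 0.3, 0.1],
    [0.8, 0.1, 0.1],
    [0.5, 0.4, 0.1],
])

# Predictive distribution: average the categorical distributions C(y | q_i)
# over particles, i.e. (1/N) sum_i C(y | q_i(x)).
predictive = q_particles.mean(axis=0)

# Epistemic signal for OOD detection: disagreement across particles, which a
# single point estimate of q (as in NGBoost) cannot provide.
disagreement = q_particles.var(axis=0).sum()
```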
## Difference from SVGD
Firstly, we would like to clarify difference in what WGBoost and SVGD can do:
- **SVGD**: Given one target density $\pi$, we can obtain samples $(\theta_1, \dots, \theta_N)$ from $\pi$.
- **WGBoost**: Given a set of several inputs and output-densities $( x_i, \pi\_{x_i} )\_{i=1}^{n}$, we can obtain **a map from input $x$ to particles $(\theta_1(x), \dots, \theta_N(x) )$** that approximates the given $\pi\_{x_i}$ for in-sample input $x_i$ and also predicts an unseen $\pi\_{x}$ for new out-of-sample input $x$.
Next, we would like to clarify how WGBoost and SVGD are applied for classification/regression. Let us consider the classification again, where we have an input/label dataset $(x_i, y_i)\_{i=1}^{n}$:
- **SVGD**: We have a model $f(x; w)$ with parameter $w$ that outputs the class probability $q$. We use a **posterior $Pos(w \mid ( x_i, y_i )\_{i=1}^{n} )$ of the model $f(x; w)$ using all data points**. SVGD produces samples $(w_1, \dots, w_N)$ from the posterior $Pos(w \mid ( x_i, y_i )\_{i=1}^{n} )$.
- **WGBoost**: WGBoost is a learning model to estimate $q$ distributionally, so there is no other model $f(x; w)$ like the SVGD case. We use a **posterior $\pi\_{x_i}$ of the categorical distribution $C(y \mid q)$ for each single data point** $(x_i, y_i)$ like done in some DEL approaches:
- $\pi\_{x_i}(q) \propto C(y_i \mid q) \times \nu\_{x_i}(q)$ for each $(x_i, y_i)$, where $\nu\_{x_i}$ is a prior over $q$ given at each $(x_i, y_i)$.
WGBoost produces a map from input $x$ to particles s.t. $x \to \text{WGBoost} \to ( q_1(x), \dots, q_N(x) ) \approx \pi_x$.
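As a concrete instance of such a per-point posterior (our own illustrative choice of prior, not taken from the manuscript): for binary labels $y_i \in \{0, 1\}$, with $q$ the probability of class 1 and a conjugate prior $\nu_{x_i} = \mathrm{Beta}(\alpha, \beta)$, the target density is available in closed form,

$\pi_{x_i}(q) \propto \underbrace{q^{y_i} (1-q)^{1-y_i}}_{C(y_i \mid q)} \times \underbrace{q^{\alpha-1} (1-q)^{\beta-1}}_{\nu_{x_i}(q)},$

which is the kernel of $\mathrm{Beta}(q;\, \alpha + y_i,\, \beta + 1 - y_i)$, so each training pair $(x_i, y_i)$ yields an explicit target $\pi_{x_i}$ for WGBoost to regress onto.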
Finally, please let us elaborate on the intuitive algorithmic idea of WGBoost. The challenge is how to produce particles for an unseen input $x$:
1. We have the given set $( x_i, \pi\_{x_i} )\_{i=1}^{n}$. First let's sample from every given $\pi\_{x_i}$ using any user-choice of Wasserstein gradient flow (WGF) such as SVGD;
2. The WGF for each $\pi\_{x_i}$ uses the Wasserstein gradient $g\_{x_i}$, so we get a set of the computed Wasserstein gradients $(x_i, g\_{x_i})\_{i=1}^{n}$. Since $g\_{x_i}$ is finite-dimensional, we can train a ML model that predicts the gradient $g_x$ for new $x$ by fitting it to the set $(x_i, g\_{x_i})\_{i=1}^{n}$.
3. The trained ML model gives us a prediction of the Wasserstein gradient $g_x$ for unseen input $x$. So let's perform the WGF with the predicted gradient and have predictive particles even for unseen input $x$.
This procedure uses a user-specified WGF (e.g. SVGD) for every $\pi\_{x_i}$ and trains an ML predictive model on the Wasserstein gradients computed at each $x_i$. So SVGD is simply used as an intermediate component of WGBoost. Rigorously formalising this idea requires careful academic work, for which we showed that it can be formalised as an extension of gradient boosting.
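The three steps can be sketched end-to-end on the toy target $\pi_x = \mathcal{N}(\sin(x), 1)$ from our Figure 2 (a simplified sketch, not our implementation: scikit-learn trees serve as base learners, and the per-particle score stands in for the kernel-smoothed gradient estimate, dropping the interaction term):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Toy target pi_x = N(sin(x), 1), whose score is grad_theta log pi_x(theta)
# = sin(x) - theta.
def score(x, theta):
    return np.sin(x) - theta

def train_wgboost(X, n_particles=3, n_rounds=100, lr=0.1):
    # Steps 1-2: compute a gradient estimate at every training input and fit
    # one regression tree per particle to it; step 3: move the particles by
    # the fitted tree's prediction.
    init = np.linspace(-1.0, 1.0, n_particles)   # deterministic init, replayable later
    particles = np.tile(init, (len(X), 1))
    ensembles = [[] for _ in range(n_particles)]
    for _ in range(n_rounds):
        for k in range(n_particles):
            g = score(X, particles[:, k])        # simplified per-particle gradient estimate
            tree = DecisionTreeRegressor(max_depth=3).fit(X[:, None], g)
            ensembles[k].append(tree)
            particles[:, k] += lr * tree.predict(X[:, None])
    return ensembles, init, lr

def predict_particles(ensembles, init, lr, X_new):
    # Prediction for unseen inputs: replay the boosted gradient predictions.
    particles = np.tile(init, (len(X_new), 1))
    for k, trees in enumerate(ensembles):
        for tree in trees:
            particles[:, k] += lr * tree.predict(X_new[:, None])
    return particles
```

Because the simplified gradient here is just the score, each particle collapses toward the conditional mode; the kernel-smoothed estimate of Section 3 adds the repulsion that keeps the particle ensemble spread over $\pi_x$.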
## Comment
Does this clarify the difference from NGBoost and SVGD? We have added these clarifications in comparison with SVGD to the main text. To date, adapting Wasserstein gradient flows to other gradient-related ML methods is still underdeveloped. We truly believe this work can bring new inspiration to the ML community, bridging Wasserstein gradient flows and gradient boosting for the first time. We are keen to hear any opinion from the reviewer.
---
Rebuttal Comment 2.1:
Comment: Thanks for your further responses .
> WGBoost is the first method that enables the tree version of DEL approaches.
Indeed, this is a selling point (and a drawback, as the inference should be more expensive..) of your approach as compared to other tree-based methods, from my understanding. However, it's not clear to me why a well-specified probabilistic model can't just be used for OOD, like the early deep-net-based approaches were - by the entropy of the multi-class predictive distribution (which NGBoost can surely gather).
> Difference between SVGD and WGBoost
My (somewhat non-rigorous) understanding is that with WGBoost you're essentially non-parametrically estimating the integral $p(\hat y | y, \mathcal{M}) = \int p(\hat y | f(x)) p(f(x) | x) df$ where $f$ is the (non-parametric) functional form induced by the model class. SVGD ends up being quite similar, but (in its [original definition](https://arxiv.org/pdf/1608.04471)) operates on a _parametric_ form of model class, estimating $p(\hat y | y, \mathcal{M}) = \int p(\hat y | f_\theta(x) ) p(\theta | x) d\theta \approx \int p(\hat y | f_\theta(x) ) q(\theta) d\theta$, where $q(\theta)$ is the approximation distribution learnt by SVGD. However, as SVGD is a gradient flow [paper](https://arxiv.org/abs/1704.07520), it is also possible to express it as a nonparametric algorithm in my understanding if we express it in terms of function space rather than parameter space.
I guess practically they end up being different algorithms, but the underlying theory is quite similar, if not exactly the same. And there are several different function-space algorithms that use Stein flows, as the other reviewer points out - training either neural networks or GPs with gradient flows like this is fairly well studied.
---
Rebuttal 3:
Title: Response to Reviewer
Comment: Thank you for your response and the additional opportunity to discuss.
> However, it's not clear to my why a well specfied probabilistic model can't just be used for OOD, like the early deep net based approaches were - by entropy of the multi-class predictive distribution (which NGBoost can surely gather).
Thank you for the suggestion. As in the previous comment, we observed that the entropy of a multi-class classifier didn't perform well, and we decided to focus only on the uncertainty measure used in the current paper due to the page limit. We will include the entropy of a multi-class classifier in the comparison. In our understanding, our experiment has already shown better performance of WGBoost on small to medium-large datasets, so adding large-scale datasets and other comparisons would complement our experiment at a sufficient level.
> underlying theory is quite similar (with SVGD)
Thank you for the discussion about this point. Infinite-dimensional SVGD has been studied, and its algorithm/theory is different [1]. Please let us attempt to clarify the difference again. What WGBoost learns is a conditional distribution $P(y | x) = \int P(y | q) P(q | x) dq$, where let's say $P(y | q)$ is a categorical distribution and $P(q | x)$ is a conditional distribution of the class probability vector $q$ given $x$. In this view, WGBoost is a sampler from a conditional distribution with a tree model incorporated in the procedure, as follows:
- **SVGD**: Let's say $P(\theta)$ is an arbitrary distribution over an arbitrary variable $\theta$. SVGD gets samples $(\theta_1, \dots, \theta_N)$ from the distribution $P(\theta)$ (regardless of whether $\theta$ is finite-dimensional or replaced with an infinite-dimensional variable $f$);
- **WGBoost** gets samples $(\theta_1(x), \dots, \theta_N(x))$ from a conditional distribution $P(\theta \mid x)$ for any input $x$. WGBoost learns the conditional distribution in the setting where we can know the form of $P(\theta \mid x_i)$ only at given data inputs $( x_i )\_{i=1}^{n}$. In the algorithm, WGBoost **predicts** the Wasserstein gradient used to sample $(\theta_1(x), \dots, \theta_N(x))$ for each input $x$ by tree-model.
SVGD does not have an input variable $x$, so it does not have the prediction of the Wasserstein gradient, unlike WGBoost. In this sense, WGBoost is a conditional version $P(\theta \mid x)$ of a Wasserstein gradient flow of $P(\theta)$, where the Wasserstein gradient of the particles $(\theta_1(x), \dots, \theta_N(x))$ is predicted for each input $x$ (because the form of $P(\theta \mid x)$ can be known only at $x=x_i$).
We hope this way of explanation clarifies the difference better. We are keen to hear whether the reviewer's opinion remains unchanged.
[1] Stein variational gradient descent on infinite-dimensional space and applications to statistical inverse problems | Summary: The paper introduces a novel gradient boosting framework called Wasserstein Gradient Boosting (WGBoost). Unlike traditional gradient boosting methods that fit base learners to the gradient of the loss function, WGBoost fits them to the Wasserstein gradient of a loss functional defined over probability distributions. This approach is particularly useful for probabilistic prediction, where the goal is to approximate the uncertainty in the model prediction. The authors provide a general formulation of WGBoost, its algorithmic implementation, and empirical evaluations on various benchmarks.
The paper's major contributions include the introduction of the Wasserstein gradient flow framework into gradient boosting; the development of an approximate algorithm for posterior regression using the KL divergence; and the demonstration of WGBoost's performance on regression, classification, and out-of-distribution (OOD) detection tasks. The authors also propose a second-order WGBoost algorithm built on the approximate Wasserstein gradient and Hessian of the KL divergence.
Strengths: S1: The application of Wasserstein gradient flows to gradient boosting is a novel and promising direction, providing a new perspective on ensemble learning algorithms.
S2: Solid theoretical concepts, with detailed derivations and explanations of the Wasserstein gradient and Hessian approximations.
Weaknesses: Authors did not specify details about the hyperparameter selection for Conditional Density Estimation, Classification and OOD Detection tasks.
Misc:
There is a typo in the line no 284, root is written as room.
Technical Quality: 4
Clarity: 4
Questions for Authors: I would be curious to know about the scalability and the computational complexity of the proposed algorithm.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your strong endorsement of our work. We also believe that WGBoost is a promising direction that connects Wasserstein dynamics and gradient boosting ensemble for the first time.
> Authors did not specify details about the hyperparameter selection for Conditional Density Estimation, Classification and OOD Detection tasks.
Thank you for raising this point. We will add a clarification about the hyperparameter selection for the experiments other than Section 4.2.
> Misc: There is a typo in the line no 284, root is written as room.
Thank you for pointing this out. We have corrected the typo.
> I would be curious to know about the scalability and the computational complexity of the proposed algorithm.
Thank you for this point. In Appendix E, we have a simulation study to compare the loss decay and computational time of WGBoost under four different choices of the approximate Wasserstein gradient. We will expand the discussion about the computational cost of the algorithm in the main text (e.g. adding some big O cost order).
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my comments. I do not have further questions and I will keep my score at 8. | Rebuttal 1:
Rebuttal: We would like to express our gratitude to all the reviewers for their efforts in assessing our work. Our full rebuttal to each reviewer has been provided in each individual rebuttal area. This global rebuttal area is used for the following contents:
- (A) Brief Summary of Each Rebuttal
- (B) Concise Overview of Our Work
## A. Brief Summary of Each Rebuttal
- **Reviewer jx13** & **Reviewer DY4u**: We appreciate your strong endorsement of our work. We believe that our work represents a promising advance that combines the gradient boosting framework with Wasserstein gradient systems for the first time.
- **Reviewer UM7J**: Thank you for your constructive feedback on our work. Regarding our experiments, we will add (i) larger-scale datasets, (ii) other advanced approximate Bayesian methods, and (iii) other kernel-based/quantile-based nonparametric methods. Regarding the clarity of our algorithm (Algorithm 1), please find our full rebuttal and Section B below, recapping our paper structure. In short,
- Section 2 explains a general framework of WGBoost in Algorithm 1, where we need to specify *'how to estimate the Wasserstein gradient'* as a user-specified component;
- Section 3 then explains that we plug the kernel-smoothing estimate as the Wasserstein gradient estimate into Algorithm 1 for our application.
- **Reviewer q9Jy**: Thank you for your assessment of our work. We are afraid that WGBoost seems to have been misinterpreted as a sampling algorithm (such as SVGD) in the raised concerns. Briefly, WGBoost is an algorithm that solves a new form of regression problem, in which each input $x$ is an arbitrary vector and each associated output is a probability distribution $\pi_x$ rather than some vector $y$. Please find our full rebuttal and Section B below, recapping the high-level problem setting of WGBoost.
## B. Concise Overview of Our Work
Our paper structure is mainly twofold. We proposed a general framework of WGBoost in Section 2. We then described how WGBoost can be used for probabilistic classification/regression in Section 3.
**B.1. Problem to Solve**
Firstly, the problem that the general framework of WGBoost can solve is a new form of regression problem below:
- We have a dataset $( x_i, \pi_{x_i} )\_{i=0}^{n}$ where each input $x_i$ is an arbitrary vector and each output $\pi_{x_i}$ is a probability distribution given at $x_i$. Our goal is to have a learning algorithm that predicts the unseen distribution $\pi_x$ for any new input $x$.
This is a challenging problem because the target output $\pi_{x_i}$ at each input $x_i$ is an infinite-dimensional object (a probability distribution) rather than some finite-dimensional vector $y_i$. WGBoost constructs a map from any input $x$ to a set of particles $\hat{\pi}_x$ that approximates $\pi_x$:
- $x \to \text{WGBoost} \to ( \theta^1(x) , \dots, \theta^N(x) ) =: \hat{\pi}_x \approx \pi_x$.
Our novelty is in bringing the concept of Wasserstein dynamics into gradient boosting, which has never been explored.
**B.2. Gradient Boosting with Wasserstein Gradient**
Gradient boosting is a widely used ensemble method that trains multiple weak learners and combines them iteratively. In standard gradient boosting, each weak learner is trained to approximate the gradient $\nabla_{z_i} L(z_i | y_i)$ of a loss function $L(z_i | y_i)$ between the ensemble output $z_i$ and a target-output vector $y_i$ at each input $x_i$. In WGBoost, we instead use a loss functional $\mathcal{F}(\hat{\pi}\_{x_i} | \pi\_{x_i})$ between the WGBoost output and the target distribution at each input $x_i$. Each weak learner is then trained to approximate the "Wasserstein gradient" of the loss functional at each input $x_i$. The key innovation is that the Wasserstein gradient evaluated here is a finite-dimensional vector. Hence, each weak learner in WGBoost can be trained like a usual regression algorithm, even though the original target $\pi\_{x_i}$ is an infinite-dimensional object. See also the illustrative Figures 1-2 in the attached PDF.
In gradient boosting, users have to specify which loss function $L(z_i | y_i)$ to use. In WGBoost, users have to specify
- (a) which loss functional $\mathcal{F}(\hat{\pi}\_{x_i} | \pi\_{x_i})$ to use at each input $x_i$;
- (b) how to estimate/approximate the Wasserstein gradient of the chosen functional $\mathcal{F}$.
For Wasserstein gradient flows in general, the Wasserstein gradient is typically not analytically available (except in some nice cases) and needs to be approximated (c.f. the third paragraph of Section 2.1). Hence WGBoost requires the specification of item (b) too.
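For readers less familiar with the boosting side, the finite-dimensional construct that WGBoost generalizes can be sketched as follows: standard gradient boosting with the squared loss, where each weak learner (a brute-force regression stump here, an illustrative choice) is fit to the negative loss gradient, i.e. the residual, at each training input. This is plain gradient boosting, not the WGBoost algorithm itself.

```python
def fit_stump(xs, targets):
    """Fit a depth-1 regression tree (stump) to targets by brute-force split search."""
    best = None
    for split in xs:
        left = [t for x, t in zip(xs, targets) if x <= split]
        right = [t for x, t in zip(xs, targets) if x > split]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((t - lm) ** 2 for t in left) + sum((t - rm) ** 2 for t in right)
        if best is None or err < best[0]:
            best = (err, split, lm, rm)
    _, s, lm, rm = best
    return lambda x: lm if x <= s else rm

def boost(xs, ys, rounds=50, lr=0.3):
    """Standard gradient boosting with squared loss L(z | y) = (z - y)^2 / 2."""
    f0 = sum(ys) / len(ys)
    learners = []

    def predict(x):
        return f0 + sum(lr * h(x) for h in learners)

    for _ in range(rounds):
        # each weak learner approximates the negative gradient -dL/dz = y - z
        grads = [y - predict(x) for x, y in zip(xs, ys)]
        learners.append(fit_stump(xs, grads))
    return predict

xs = [i / 10 for i in range(20)]
ys = [x * x for x in xs]
model = boost(xs, ys)
mse = sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
```

WGBoost replaces the per-point gradient `y - predict(x)` with a (finite-dimensional) estimate of the Wasserstein gradient of the chosen loss functional, while the weak-learner fitting step stays the same.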
**B.3. Application to Classification/Regression**
Section 3 describes the application of WGBoost to probabilistic classification/regression, and how we specified items (a)-(b) for it. For illustration, consider classification where we have a dataset $( x_i, y_i )\_{i=1}^{N}$ of input vectors $x$ and output labels $y$. Assume a categorical distribution $\mathcal{C}(y \mid q)$ over the output label $y$ with class probability vector $q$. With WGBoost, we can obtain a particle-based distributional estimate $\hat{\pi}\_{x}(q)$ of $q$ for any input $x$ (i.e. predictive uncertainty in $q$); see the Figure 3 diagram in the attached PDF. To train WGBoost, we prepare the target distribution $\pi_{x_i}$ at each input $x_i$ as follows:
- Prepare a prior distribution $\nu_{x_i}(q)$ over $q$ at each input $x_i$ (c.f. Section 3.2);
- Get a posterior distribution $\pi_{x_i}(q) \propto \mathcal{C}(y_i \mid q) \times \nu_{x_i}(q)$ over $q$ from the likelihood $\mathcal{C}(y_i \mid q)$ and the prior $\nu_{x_i}(q)$ at each data point $(x_i, y_i)$.
Note that this posterior $\pi_{x_i}(q)$ is constructed pointwise at each data point $(x_i, y_i)$. This is different from the posterior of, e.g., a Bayesian neural network, which is defined over the network parameter $w$ using all data, $P(w \mid (x_i, y_i)\_{i=1}^{N})$. For regression, we can use $\mathcal{N}(y \mid m, \sigma)$ instead of $\mathcal{C}(y \mid q)$.
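As a concrete (binary) instance of this pointwise posterior construction, the following sketch evaluates an unnormalised posterior density over a Bernoulli success probability $q$ at a single data point; the Beta prior is an illustrative stand-in for the prior $\nu_{x_i}(q)$, not necessarily the one used in the paper.

```python
def pointwise_posterior_density(q, y, a=1.0, b=1.0):
    """Unnormalised posterior over a Bernoulli success probability q at one
    data point (x_i, y_i): likelihood Bernoulli(y | q) times a Beta(a, b) prior.

    This is a binary stand-in for the categorical C(y | q) x prior nu(q)
    construction; the Beta prior is an illustrative, hypothetical choice.
    """
    if not 0.0 < q < 1.0:
        return 0.0
    likelihood = q if y == 1 else 1.0 - q
    prior = q ** (a - 1.0) * (1.0 - q) ** (b - 1.0)
    return likelihood * prior
```

With the uniform prior `a = b = 1`, the posterior at a point with `y = 1` is proportional to `q`, so it favours large success probabilities; the WGBoost particles $(\theta^1(x_i), \dots, \theta^N(x_i))$ would be driven toward high-density regions of this per-point target.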
Pdf: /pdf/82bbea7547536274c7dcdb7463a88d81f23ec9cf.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Online Weighted Paging with Unknown Weights | Accept (poster) | Summary: This paper considers a generalization of the classical online paging problem. In classical online paging, one needs to maintain a cache of $k$ slots as requests for pages arrive online. If a requested page is in the cache, no cost is charged; otherwise, a cost $w_p$ is charged for fetching page $p$. The goal is to maintain a cache of $k$ slots such that the total fetching cost is minimized. This paper extends this classical setting to the case where the weights of the pages are not known to the algorithm in advance. Instead, the algorithm samples a value from an unknown distribution, and this value serves as an estimate of the actual weight. The goal is still to minimize the total fetching cost.
The main contribution of this work is an online algorithm that achieves $\mathrm{ALG} \leq O(\log k) \cdot \mathrm{OPT} + O(\sqrt{nT})$, where $k$ is the size of the cache, $n$ is the number of pages, and $T$ is the number of requests. The main technique is a fractional solution plus rounding. To bridge these two phases, the authors design an interface, which aims to learn an estimate of the weight of each page.
Strengths: 1. The studied problem is well-motivated and should be of interest to the ML community. I expect that it will have a positive impact in practice.
2. I like the interface idea to bridge the fractional solution and rounding. The main purpose of the interface is to learn the weights. Although none of these ideas is novel on its own, it is interesting to see that a simple combination works well.
3. The paper is well-structured. I appreciate that the authors give a very clear statement for the algorithmic framework in Section 3. This is very helpful for readers.
Weaknesses: There is no lower bound in the paper. The regret bound might be far from tight, but this is expected in the first paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: Could you comment on how tight the regret bound $O(\sqrt{nT})$ is? This seems to be a large number.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: This is a theoretical paper, there is no potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the thoughtful comments and encouraging review.
Regarding lower bounds, see the remark in the general rebuttal. In essence, we’ve presented lower bounds for the problem in “Our Results”, which we intend to further formalize in the camera-ready version.
Regarding the regret bound of $O(\sqrt{nT})$, consider the following implications of the lower bounds (a) and (b) that appear in this paragraph (starting at Line 61 in the paper):
1. Lower bound (b) implies that allowing a competitive ratio that is $o(\log k)$, the regret of any algorithm is $\Omega(T/k)$. In particular, for constant k, this implies $\Omega(T)$ regret. Without loss of generality $n\le T$, as pages that are never requested are never loaded into the cache; thus, the regret lower bound is $\Omega(\sqrt{nT})$ in this case.
2. The term $\Omega(\sqrt{T})$ is necessary for any choice of (sublinear) competitive ratio.
We thank the reviewer again for the supportive review and will happily address any further questions.
---
Rebuttal Comment 1.1:
Comment: I'd like to thank the authors for addressing my concerns. The rebuttal addresses my concerns and I will keep my score. | Summary: This paper is the first to model and study the online weighted paging problem with unknown weights. In this model, the weights $w_p$ of pages are initially unknown, and each eviction cost is drawn from an unknown distribution with expectation equal to $w_p$. This study extends previous research on online weighted paging to unknown weights by integrating concepts from online learning.
The authors present an algorithm that achieves the performance bound as follows:
$$
\mathbb{E}[\mathrm{ALG}] \leq O(\log k) \cdot \mathrm{OPT} + \tilde{O}(\sqrt{nT}).
$$
Note that $O(\log k)$ is the competitive ratio in the known-weight setting. The proposed solution comprises two main components: a fractional algorithm and a rounding framework. The fractional algorithm is designed to approximate the optimal cost, while the rounding framework ensures consistency with the fractional solution through a rebalancing routine. The methodology for the fractional algorithm and for the rounding and rebalancing framework is drawn from [Bansal et al. 2010] and [Bansal et al. 2012]. Additionally, the weight estimation uses a classical Upper Confidence Bound (UCB) method from the multi-armed bandit problem in online learning. The combination of the two techniques is non-trivial.
Strengths: This paper presents their motivation for studying the new model clearly, using the example of multi-level cache structure as a compelling rationale. The proposed model is clean, providing a robust foundation that is likely to inspire future research.
Weaknesses: The presentation in Section 5 is not clear and may only be reader-friendly to those familiar with [Bansal et al. 2012]. A high-level idea and some introductions (e.g., of the anti-cache) may be needed before presenting the pseudocode.
Technical Quality: 3
Clarity: 2
Questions for Authors: In section 5, the author mentions that they need a more robust rebalancing procedure. What is the detailed difference between it and the previous approach? Can you give some more explanations?
It seems that the algorithm runs really slow because of the continuous rebalancing step. Can you remark on the running time of the algorithm?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors do not address the limitations of the proposed algorithm; in particular, its efficiency is not considered. Given that the algorithm maintains an exponential number of subsets $S \subseteq P$ and may adjust some $\mu(S)$ each time an $\epsilon$ fraction of $\mu(S)$ changes, the running time may be quite large.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful comments and supportive review.
We take into account the comment regarding the presentation in Section 5. In the final version of the paper, we will edit this section to be more clear and emphasize high-level ideas.
Regarding your question about our rebalancing procedure, this procedure is different from that of Bansal, Buchbinder and Naor [2012] in the following ways:
1. Their procedure had to restore the balanced property only upon changes made to the distribution to maintain consistency (e.g., a certain probability mass gaining/losing a page). Our rebalancing procedure must also handle the case in which pages change class due to changes in their UCB (as page classes are not constant in our algorithm).
2. A technical observation: note that the rebalancing procedure fixes the imbalance at every level, in descending order. (The imbalance is the striped area in Figure 2.) However, fixing the imbalance at level $i$ can increase the imbalance at levels $\le i-1$. In BBN, after fixing an imbalance of $x$ at level $i$, the imbalance at levels $\le i-1$ can be seen to be at most $x$. However, in our case, the imbalance at levels $\le i-1$ can be at most $3x$; i.e., the imbalance grows exponentially. This tougher case of cascading imbalance occurs when fixing imbalance due to a page-changing UCB class.
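For intuition about the UCB classes discussed above, here is a small illustrative sketch; the base-6 classes $[6^i, 6^{i+1})$ follow the rebuttal's description, while the confidence-radius formula is the classical multi-armed-bandit form and may differ from the paper's exact constants.

```python
import math

def ucb(mean_est, num_samples, t, scale=2.0):
    """Classical UCB index: empirical mean plus a confidence radius.

    The exact radius used in the paper may differ; this is the standard
    sqrt(scale * ln t / n) form from multi-armed bandits.
    """
    return mean_est + math.sqrt(scale * math.log(t) / num_samples)

def weight_class(ucb_value, base=6.0):
    """Class index i such that base**i <= ucb_value < base**(i + 1).

    Negative indices cover values in (0, 1), matching the remark that
    classes of values between 0 and 1 correspond to non-positive i.
    """
    return math.floor(math.log(ucb_value, base))
```

As more cost samples for a page arrive, its UCB shrinks toward the empirical mean and the page may cross a power-of-6 boundary, changing class; the rebalancing procedure must then fix the resulting imbalance level by level.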
Regarding the mentioned limitation, the rounding scheme does maintain a distribution supported by many cache states (while, of course, holding in actuality a single cache state). To the best of our knowledge, this is true of any known rounding scheme for weighted caching, in particular for that of Bansal, Buchbinder, and Naor [2012]. The focus of this paper was introducing unknown weights to weighted paging, rather than improving the computational tractability of the rounding scheme for known weights; however, we agree with your observation and believe this is a promising direction for future study.
We thank you again for the thoughtful review. We would be happy to discuss any further notes/questions you have. If we’ve properly addressed your concerns, we would be happy if you considered updating your assessment accordingly. | Summary: The paper addresses the problem of online weighted paging with eviction costs drawn from unknown page-dependent distributions. As evictions occur, the authors demonstrate how to learn an effective eviction strategy online using previous cost samples, framing this as a multi-armed bandit problem. They first approach a fractional relaxation of the problem, then apply a rounding technique to derive a randomized algorithm for the original problem. The resulting algorithm incurs an expected cost of $O(\log k) \cdot \text{OPT} + O(\sqrt{nT})$, where $k$ is the cache size, $n$ is the number of pages, and $T$ is the number of time steps.
Strengths: * The paper is well-written and well-organized.
* The setting is both well-motivated and theoretically interesting, as it combines techniques from competitive analysis and regret analysis.
* The main result is strong and might inspire future work in the field of online algorithm design.
Weaknesses: I do not see any major weaknesses in this paper, but there are a few minor ones:
* Although the setting is different, the paper is closely related to the framework of learning-augmented algorithms. I suggest that the authors mention this connection in the related work section.
* A very similar setting has been studied in "On Preemption and Learning in Stochastic Scheduling" (ICML 2023) for the non-clairvoyant scheduling problem, which also couples the analyses of the competitive ratio and regret. Maybe this should be cited as a related work.
* While the main contribution is theoretical, it would be nice to conduct a few experiments to demonstrate how the proposed algorithms compare to other benchmark algorithms where the weights are known
Technical Quality: 4
Clarity: 3
Questions for Authors: Since the algorithm only chooses which of the $k$ pages in the cache to evict, would it be possible to improve the regret term in Theorem 1.1 and have instead a term of the form $O(\sqrt{f(k,n) T})$, with $k \leq f(k,n) < n$? If not, can you prove that the bound $O(\sqrt{nT})$ is the best possible for any values of $n$ and $k$?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: The assumptions of the theorems are clearly stated.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the encouraging review, as well as the thoughtful comments. Please see below our response regarding the weaknesses and questions mentioned.
_“… the paper is closely related to the framework of learning-augmented algorithms. I suggest that the authors mention this connection in the related work section.”_ \
Thanks for this remark, we’ll explore the connection between this problem and learning-augmented algorithms in the related work section.
_“...it would be nice to conduct a few experiments…”_ \
We’ll consider adding an experimental section towards demonstrating the applicability of the algorithm. We note that in bandit papers that focus on proving regret bounds, experimental sections are usually optional. (Unlike, e.g., learning-augmented online algorithms, where empirical evaluations are customary for ML conferences.)
_“Since the algorithm only chooses which of the 𝑘 pages in the cache to evict, would it be possible to improve the regret term in Theorem 1.1 and have instead a term of the form $𝑂(\sqrt{𝑓(𝑘,𝑛)𝑇})$, with $𝑘≤𝑓(𝑘,𝑛)<𝑛$? If not, can you prove that the bound $𝑂(\sqrt{𝑛𝑇})$ is the best possible for any values of 𝑛 and 𝑘?”_ \
As mentioned in the lower bounds described in "our results", in the regime that allows a competitive ratio of $\Theta(\log k)$, there is a lower bound on regret of only $\Omega(\sqrt{T})$. It could be the case that the $\sqrt{n}$ term in our $O(\sqrt{nT})$ regret bound can be improved upon; as this is the first work on this problem, we did not focus on obtaining tight bounds. However, we remark that $\Theta(\sqrt{nT})$ is the optimal regret bound for standard MAB, and we would thus be surprised if a significant gap were obtained for OWP-UW.
We thank you again for your positive assessment and hope the above has satisfied your concerns. We would be happy to address any further questions you may have.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response, and I strongly encourage them to add the necessary discussion on the relation with learning-augmented algorithms, and in particular the works where the unknown variables are learned during the algorithm execution.
I would have liked to see experiments mostly because I am curious to observe if the asymptotic behavior, when T is very large, aligns with that of prior algorithms that have knowledge of the weights, in terms of the multiplicative constant in $O(\log k)$. However, I agree that this is not a major weakness of the paper, but a suggestion instead, as the theoretical contribution is sufficiently interesting. | Summary: The paper studies a the online paging problem where the weights to retrieve a items are independent random variables with potentially different distributions. The paper introduces an algorithm whose expected performance differ from the optimal one by a multiplicative factor (logarithmic in the size of the cache) and an additive term (sublinear in the number of the requests).
Strengths: If correct, the paper makes a very interesting advance on an important theoretical topic. The authors position well the paper in the existing literature.
Weaknesses: I found this paper the most difficult to evaluate. I feel NeurIPS may not be the best-suited venue for such work, both because of its theoretical focus (it seems more like a STOC paper) and because the 9-page limit forced the authors to compress their reasoning considerably. Probably for this reason, as well as some disorganization in the presentation, I was not able to figure out how the algorithm really works or to follow some of the more technical arguments. I provide some specific comments below.
Doubts
- about Algorithm 1: what is really behind "continually increase" at line 4? Should we assume that $y_p$ and $m_p$ are updated according to differential equations? This seems to be supported by later statements, such as at line 276: "upon any (infinitesimally small) change to a fractional variable"
- I was not able to understand the reasoning about why the problem may not exhibit sublinear regret without a competitive ratio (point b at page 2)
- in Section 1.2, the paper argues that the solution cannot be built in the standard way by combining a solution to the fractional problem with a rounding scheme. I was not able to follow the reasoning in lines 86-94, all the more so because the proposed solution does combine a solution to the fractional problem with a rounding scheme
- in footnote 2, "note that $k \le \sqrt{nT}$": I did not get whether this follows from the previous reasoning or is an implicit assumption that $T$ is large enough.
- Algorithm 2: what does line 9 mean, "add $p'$ to the anti-cache in an $\epsilon$-measure of states without $p'$"? I find this kind of expression too vague.
- lines 254-255: there is probably an error; how can $UCB_p$ be between $6^i$ and $6^{i+1}$ if, as observed on the same lines, $UCB_p$ is between 0 and 1?
Motivation
- I find the motivation in terms of a cache inserted into a hierarchy of caches quite weak. The content stored at higher-level caches would be determined by the caching decisions at lower levels. It would then not lead to the random weights considered in this paper.
Confused presentation
- the fact that costs may be considered to be paid upon eviction rather than upon fetching is mentioned in footnote 2 at page 4, but it is already implicitly used at page 2 when explaining why the problem cannot admit logarithmic competitive ratio without regret
- the paper starts talking about moving fractions of servers at line 166, before introducing the parallel with the k-server problem
- the (huge) caption of figure 1 refers to the "consistency property" and the "subset property". The first one is never really formally defined. The second one is only introduced two page later.
- in the caption of figure 2, there is a reference to "class i", but this is only introduced at page 8. Also note that there are two families of classes indexed by $i$: $P_i$ and $P_{\ge i}$
Minor
- the problem is sometimes referred to as OWP-UW, others as UW-OWP
- line 95: "the and rounding scheme"
- footnote 1: "known in advanced"
- in the figures, the labels of the different subfigures (e.g. a, b, c, d in figure 1) are missing
Update after rebuttal
- the authors have addressed some of my technical questions and I have increased the score.
Technical Quality: 2
Clarity: 2
Questions for Authors: See questions above.
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: The computational complexity of the algorithm is not discussed. This is important, in particular if, as I think, the algorithm relies on solving some differential equations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their thorough assessment and comments. Here is our response to the points raised.
**Suitability for NeurIPS**: we note that theory papers considering online caching/paging have appeared in ML conferences when combined with an ML-related theme. See for example [1,2,3,4,5], which are theory papers that augment caching/paging with ML predictions, and have appeared in ICML. In our case, multi-armed bandits are clearly appropriate for NeurIPS (in fact, they constitute a primary area). Thus, we believe that NeurIPS is the appropriate venue for this paper.
**Computational tractability**: please see the general rebuttal. In short, the continuous presentation of the algorithm (also chosen for previous work) can easily be replaced with a computationally efficient discrete process.
**Regret lower bound**: the described construction implies that for every randomized algorithm $ALG$ there exists an input such that $OPT = O(T/(k \log k))$ and $E[ALG]=\Omega(T/k)$. Thus, $E[ALG]-OPT$ (i.e., the regret) is at least $\Omega(T/k)$; that is, the regret grows linearly in $T$. We intend to formalize this further in the camera-ready version; see general rebuttal.
**Fractional+rounding framework**: The point in Lines 86-94 was that in most algorithms involving fractional+rounding schemes, the integral problem immediately induces a fractional problem, such that the fractional solver is competitive w.r.t. this fractional problem, independently of any “downstream” rounding scheme. This is the case for Bansal, Buchbinder and Naor’s fractional solver for online paging with known weights. But, for unknown weights, no fractional solver can be competitive without learning about page weights – and this is done through sampling pages integrally, which cannot be done by a fractional solver. Thus, the interaction between the fractional solver and the rounding scheme is crucial even for the fractional solver itself to have any competitiveness guarantee. Indeed, one of the main contributions of this paper is devising an interface between the fractional solver and the rounding scheme that enables sampling page weights when needed.
Thank you for this input regarding writing clarity, we’ll improve this paragraph in the camera-ready version.
**Additional notes**:
1. _" note that $k \le \sqrt{nT}$"_: as noted in Line 146 in the preliminaries section, it holds that $k<n$ (otherwise the problem is trivial as all pages can be simultaneously held in the cache). Also, as $n$ is the number of requested pages and thus $n \le T$, it also holds that $k<T$. The observation follows.
2. _"add $p'$ to the anti-cache in an $\epsilon$-measure of states without $p'$"_: In keeping with previous work on online weighted paging, our randomized algorithm is described as maintaining a distribution over cache states at any point in time; of course, in practice, only a single state is held by the algorithm and is updated as the distribution changes. For example, if the algorithm holds anti-cache state $S$ which has measure $x$ in the distribution, and the distribution is then updated such that $y<x$ measure of state $S$ receives page $p'$, then page $p'$ will be added to the anti-cache state with probability $y/x$ to maintain consistency.
While this is consistent with previous work, we agree that this should be made more explicit; we will do so in the camera-ready version.
3. Lines 254-255: there is no mistake here. The classes of values between $0$ and $1$ correspond to non-positive values of $i$.
As for the other remarks regarding writing, thanks for bringing those to our attention; we will modify the paper accordingly for the final version.
We hope the above has satisfied your concerns and questions, and that you positively consider increasing your assessment.
We, of course, will happily address any further questions you may have.
[1] "Paging with Succinct Predictions", Antoniadis, Boyar, Eliáš, Favrholdt, Hoeksma, Larsen, Polak and Simon, ICML 2023
[2] "Parsimonious Learning-Augmented Caching", Im, Kumar, Petety and Purohit, ICML 2022
[3] "Robust Learning-Augmented Caching: An Experimental Study", Chłędowski, Polak, Szabucki and Żołna, ICML 2021
[4] "Online metric algorithms with untrusted predictions", Antoniadis, Coester, Elias, Polak and Simon, ICML 2020
[5] "Competitive Caching with Machine Learned Advice", Lykouris and Vassilvitskii, ICML 2018
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their explanations. I was convinced by the technical answers. I still maintain that the motivation is quite weak (and I went through the discussion with the reviewer Yutu), the main text is packing too much information, and the presentation is quite disorganized. I am (slightly) increasing the original score.
---
Reply to Comment 1.1.1:
Comment: We appreciate your feedback and would like to thank you for your consideration of our comments.
Regarding motivation, as stated in our response to Reviewer Yutu, there are multiple motivations for studying OWP-UW, as in essence the deterministic-weights assumption in previous papers is often unjustified. One specific example is stated in our comment to Reviewer Yutu (handling stochastic weights, e.g., when retrieving data from the internet). In the final version of the paper, we'll be sure to expand our motivation segment; e.g., through including this additional motivating case.
Regarding writing, we thank you again for your feedback. We'll ensure that our final version has a clear presentation that addresses your concerns. | Rebuttal 1:
Rebuttal: We thank the reviewers for their thorough assessments and their thoughtful remarks. Here are some clarifications regarding themes that appear in more than one review.
**Running time:**
Reviewers Yutu and qj5j made remarks regarding the running time of the algorithm, and specifically the continuous updates in the fractional solver. The fractional solver is presented as continuously decreasing and increasing anti-server amounts, in keeping with previous work. But, this continuous process can be easily discretized into simple multiplicative updates, without affecting the correctness of the algorithm. This is a common trait of algorithms based on multiplicative updates; for an introduction to such discretization, see e.g. Chapter 4.2 of [1]. Similarly, the rounding scheme which maintains consistency and balance is presented as processing infinitesimal changes (in keeping with previous work), but can actually be applied only after the fractional solver has concluded; i.e., once per request.
**Lower bounds:**
Several reviewers had issues with the lower bounds complementing our main result. In particular, we showed that:
1. In the absence of a regret term, the competitive ratio of any algorithm is unfavorable (i.e., it has a polynomial dependence on $T$).
2. In the absence of a competitive ratio term, the regret of any algorithm is unfavorable (i.e., linear in $T$).
We concluded that both a competitive ratio term and a regret term are necessary. As these lower bounds are rather simple, we chose to present them informally in “our results”; however, these lower bounds are concrete, and can easily be formalized as theorems. Given the reviewers’ feedback, we’ll formalize those lower bounds in the camera-ready version.
[1] “The Design of Competitive Online Algorithms via a Primal-Dual Approach” by Niv Buchbinder and Seffi Naor. | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The authors consider caching with stochastic weights. In particular, the cost incurred by the algorithm upon eviction of a page p is drawn independently from a fixed distribution D_p, which is different for every page and is not known to the algorithm in advance. The authors propose an algorithm whose guarantees combine both a competitive ratio and regret, and claim that neither regret nor a competitive ratio is achievable on its own.
Their algorithm is a modification of the classical algorithm by Bansal et al. for weighted paging, incorporating upper and lower confidence bounds on the average weight of each page. The more challenging part seems to be the randomized rounding, where they have to deal with changes in the upper confidence bounds; such an element was not present in the original paper of Bansal et al.
Strengths: * Problem seems interesting
* Result is non-trivial from a technical point of view
Weaknesses: * I find the framing of their result in the context of hierarchical caching and CPU caches very unfortunate, for several reasons:
(1) Hierarchical caching has already been studied (see the work of Bansal, Buchbinder, Naor, SICOMP'12). Authors do not seem to be aware of this.
(2) It is completely unclear why there should be an underlying distribution of the cost of loading each page in a hierarchical (or multi-level) cache: it is enough that the algorithms operating on different levels of the cache are misaligned in some way to make the page weights adversarial instead of stochastic.
(3) I cannot imagine implementing anything resembling their approach in CPU caches.
* The provided justification of the optimality of the presented results is very informal: it is just a few lines. In particular, the authors argue about the necessity of the regret term using lower bounds for bandits. However, their input is stochastic, which allows much better regret bounds when the mean weights of the pages differ significantly. On the other hand, if the weights of the pages are similar, it is easy to achieve a good competitive ratio.
* Statements of their results are not completely clear. E.g., what is Q in Theorem 1.1? The theorem says it is an "input": is it composed of the request sequence and the distributions of page weights, or does it also contain the realization of the page weights? What is OPT(Q) then? I do not see these things explained even in the preliminaries.
Technical Quality: 3
Clarity: 2
Questions for Authors: * Please comment on the last two weaknesses mentioned above.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Authors do not discuss possible difficulties in implementing their algorithm
in CPU caches although they use such use case as a main motivation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful feedback, please see below our comments for the concerns raised.
**Regarding (1):** _“Hierarchical caching is already studied”_.
Assuming you refer to [1]: this paper studies generalized weighted paging, in which each page has both a weight and a size demand imposed on the cache. Their only reference to storage hierarchies refers to weights (not sizes); thus, we interpret the reviewer’s comment as claiming that our motivating example is captured by standard weighted paging with known weights.
To illustrate why this is not the case, note that in standard weighted paging, different pages have different weights, but each specific page has the same weight over time. In our motivating example, this is not the case: the same page can alternate between the main memory (fetching cost 1) and the L2 cache (fetching cost epsilon << 1). This motivates learning a page’s expected fetching cost over time, which is the crux of our model.
**Regarding (2):** _“...why should there be an underlying distribution of the cost of loading each page in a hierarchical cache”_.
Our goal in this work is to remove the commonly used known-eviction-costs assumption in online weighted paging. Our motivating example refers to the aforementioned cache hierarchy, in which the L2 cache holds pages with probability proportional to their demand by the individual cores; we believe this is a reasonable assumption to make. We agree that the requested computation could in theory “game” the L2 cache; but, we don’t think encapsulating this (arguably somewhat pathological) case would make our motivating example more compelling.
**Regarding (3):**
(a) _“I cannot imagine implementing…”_.
This paper, like previous works in this research field, is theoretical. Our algorithm is not claimed to be practically implementable, in keeping with previous works.
The only inefficient component in our algorithm is the rounding scheme, as it performs infinitesimal iterations, similar to the original rounding scheme proposed by Bansal, Buchbinder and Naor (FOCS’ 07).
_“Justification of the optimality … is very informal”_. Our upper bound combines both the element of learning the page weights, which yields a regret term, and the element of planning a close-to-optimal eviction strategy, which yields a competitive ratio term. We claimed that each term separately is optimal, and that the combination of both is necessary. As you mentioned, our bounds provide a trade-off between the regret caused by the unknown costs and a competitive ratio w.r.t. the optimal algorithm that knows the true expected eviction costs.
(b) _“What is $Q$ in Theorem 1.1?”_.
The overwhelming majority of work in randomized online algorithms deals with oblivious adversaries; this is also the case in this work. In particular, the input $Q$ consists of a sequence of $T$ requests for pages, where every page has an associated weight distribution; the oblivious adversary commits to this input. When the algorithm processes input $Q$, upon evicting a page, the algorithm obtains an independent sample from the weight distribution of that page.
We would be happy to elaborate on any issue during the discussion phase. If we’ve addressed your remarks to your satisfaction, we would be happy if you considered revising your merit score accordingly.
[1] "Randomized competitive algorithms for generalized caching", Bansal, Buchbinder & Naor, 2012.
---
Rebuttal Comment 1.1:
Comment: * Reference [1] shows how to model the hierarchical cache, where the cost of a page load depends on which layer of the cache contains the page, as a generalized caching problem.
* What algorithm do you expect to be maintaining the L2 cache? Which algorithm satisfies the property that a given page is in the cache with some probability dependent only on the page itself? I do not find expecting such a property reasonable at all. Does your algorithm satisfy this property?
* Yes, there are many theoretical papers on caching, but it is definitely not common to motivate a complicated and resource-demanding algorithm by CPU caches.
To sum up, I am still not happy about the framing of your result.
Now about the tightness of your result. You provide a bound which combines a competitive ratio with a regret term depending on T. You claim that sometimes you may need the regret term and sometimes the competitive ratio. But your bound contains both at the same time, which is pretty weak in the context of competitive analysis as well as regret analysis. Do you have a lower bound showing that you need both terms at the same time? I have explained my doubts about the existence of such a bound in my review.
Thank you for explaining the statement of your theorem. Your problem is partially adversarial and partially stochastic, which is not a particularly common setting. I believe that this deserves a proper explanation in the paper.
---
Reply to Comment 1.1.1:
Comment: Thank you for your comment,
Regarding motivation:
1. Suppose that the L2 cache fetches any page requested by a core, and evicts uniformly at random. Then, if a page is requested twice as often by the cores than another page, the probability of that page appearing in the cache at a given point in time is roughly twice as large.
2. Perhaps more importantly than the specifics of the CPU cache example, we note that the motivation for our work is no different from that of any paper in the significant line of work on weighted caching. In fact, the assumption made in these previous papers that the page weights are known is often unjustified. For example, [1], which you mentioned, uses hierarchical caching for web pages as motivation:
*"This version of the problem is called weighted caching and it models scenarios in which the cost of fetching a page is not the same due to different locations of the pages (e.g., main memory, disk, Internet)"*. The cost of fetching a page from the internet cannot be assumed to be constant, and can be modeled much more realistically as drawn from some unknown distribution. This is the assumption, made in all prior weighted paging papers ([1] included), that our paper addresses.
Regarding the lower bounds: \
You have perhaps misunderstood the claim reflected in our lower bounds. We did *not* merely claim that either a competitive ratio or a regret term are needed. We claimed that (a) In the absence of a competitive ratio, a terrible regret term is needed (much worse than our regret bound), and (b) in the absence of a regret term, a terrible competitive ratio is needed (much worse than our competitive ratio). The conclusion to draw is that **both** a competitive ratio term and a regret term are needed. We will ensure that this point is emphasized well in the paper.
*"Your problem is partially adversarial and partially stochastic which is not a particularly common setting"*: \
In fact, the majority of works in the field of multiarmed bandits use this setting. For example, in arguably the most well-known variant of MAB, each arm has a value distribution chosen adversarially, while the samples from this distribution are obtained stochastically. We nevertheless value the writing feedback and will ensure that our paper makes this clear. | null | null | null | null | null | null |
How do Large Language Models Handle Multilingualism? | Accept (poster) | Summary: This paper delves deeper into how LLMs handle multilingualism. The authors hypothesize a three-stage multilingual workflow called MWork (understanding, task solving, and generating) and identify which language(s) become essential in each stage. To verify the proposed workflow, they experimentally identify language-specific parameters and selectively deactivate them within different structures; this allows assessing the functionality of the corresponding structures and verifying the hypothesis. To do so, the authors develop a novel approach called Parallel Language-specific Neuron Detection (PLND). Without requiring labeled data, PLND can identify language-specific neurons. Using PLND, the authors successfully identify language-specific neurons, which account for only 0.13% of all neurons. Their extensive results report that, by deactivating those neurons, multilingual task performance drops significantly.
This paper tackles an important research question about LLMs: how do large language models handle multilingualism? To address this, they deliberately design the new PLND approach and successfully identify language-specific neurons in the models. Along with their MWork hypothesis, they carefully conducted experiments on multiple languages with different scripts and verified the hypothesized three-stage multilingual workflow.
This paper develops a useful model analysis tool, PLND. The hypothesis is well explored across multiple languages. Technically well sound.
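As a rough illustration of the detection idea summarized above (not the actual PLND implementation — the simple thresholding rule, function name, and data layout are our assumptions), a neuron could be flagged as language-specific when its importance score on one language's unlabeled corpus exceeds a threshold:

```python
import numpy as np

def detect_language_specific(importance, threshold):
    """Toy detector for language-specific neurons.

    importance: dict lang -> np.ndarray of per-neuron importance scores,
        e.g., computed from activations on that language's corpus.
    Returns: dict lang -> set of neuron indices deemed language-specific.
    """
    return {
        lang: set(np.flatnonzero(scores > threshold))
        for lang, scores in importance.items()
    }
```

In the paper's setting, the resulting sets are tiny (on the order of 0.1% of neurons per language), which is what makes selective deactivation experiments feasible.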
Strengths: - well-organized paper. clear presentation.
- Technically sound by extensive experiments with different natural language understanding tasks with diverse language sets
Weaknesses: - no major weakness though
Technical Quality: 3
Clarity: 4
Questions for Authors: - Regarding those language-specific neuron, are there any patterns or trends related to language family grouping or similarity? Such analysis or findings would be interesting to NLP researchers.
- Have you ever tried this approach against bigger models with > 7B parameters?
- Since language-specific neurons are identified via the proposed framework, can we do model compression by preserving those specific parameters without losing multilingual capability? If you have examined this perspective, can you please share the results?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer NLiQ,
Thank you for your insightful reviews and comments. We appreciate the time and effort you have put into providing valuable feedback. We would like to address your concerns as follows:
> Question #1: Patterns of language-specific neurons.
We acknowledge your concerns regarding the properties of language-specific neurons, which have been thoroughly investigated in Appendix C (lines 512 to 523). Given its significance, we are considering integrating this section into the main text in the final version. Thanks for your recommendation.
> Question #2: Performance on larger models.
Thank you for recommending adding analysis on larger models. Regrettably, we are unable to implement our methods on larger models at this time due to resource constraints. In line with Reviewer TbKY's suggestion, we will adjust the claim "all LLMs" accordingly in the final version.
> Question #3: Model compression.
We appreciate your invaluable insights on the model compression potential of our proposed method. We have also explored this direction following this paper.
We first take the approach opposite to language-specific neurons. Specifically, we extract neurons that are not important for any corpus, denoted as Language-Irrelevant Neurons. We conduct testing in Thai on Mistral-7b-base, employing XLSum as our chosen task. Opting for XLSum is driven by its simplicity in contrast to complex NLP tasks like MGSM, yet it effectively showcases the model's multilingual ability compared to measuring perplexity. Our analysis reveals that these language-irrelevant neurons represent a mere fraction of the overall parameters (approximately 2%), with model performance showing negligible impact upon their deactivation. In contrast, random deactivation of 0.16B parameters (equivalent to 2% of all parameters) markedly impairs the model's functionality.
| # Deactivated Parameters | Language-Irrelevant Neurons | Random |
| ------------------------ | --------------------------- | ------ |
| 0.11B | 24.3 | 23.7 |
| 0.16B | 24.6 | <1 |
Nevertheless, this modest compression ratio proves insufficient, and we think this is because the filtering criteria for language-irrelevant neurons are too strict. Consequently, we take the opposite route: we retain language-specific neurons and deactivate non-language-specific ones, resulting in improved compression compared to our previous setup. We can achieve nearly unaffected performance while deactivating up to 23% of all parameters. The detailed performance breakdown is presented below.
| # Deactivated Parameter | 0.26B | 0.52B | 0.79B | 1.1B | 1.3B | 1.6B |
| ----------------------- | ----- | ----- | ----- | ---- | ---- | ---- |
| Performance | 23.7 | 24 | 24.9 | 24.5 | 23.3 | 23.1 |
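For concreteness, "deactivating" a set of neurons can be pictured as zeroing the corresponding rows of a feed-forward weight matrix; this is an illustrative sketch with hypothetical names, not the authors' code:

```python
import numpy as np

def deactivate_neurons(weight, keep_indices):
    """Zero out all rows (neurons) of `weight` except those in keep_indices.

    weight: (n_neurons, d) weight matrix of one feed-forward layer.
    keep_indices: iterable of neuron indices to leave active.
    """
    mask = np.zeros(weight.shape[0], dtype=bool)
    mask[list(keep_indices)] = True
    pruned = weight.copy()
    pruned[~mask, :] = 0.0  # deactivated neurons contribute nothing
    return pruned
```

The compression experiments above amount to choosing `keep_indices` as the union of detected language-specific neurons (plus everything outside the pruned structures) and measuring task performance on the pruned model.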
---
Rebuttal 2:
Title: Initial findings for your suggested future directions
Comment: Dear Reviewer NLiQ,
I'm writing to express our gratitude for the time and effort you've dedicated to reviewing our paper. Your insightful questions have truly illuminated new directions for exploration. Please kindly refer to our first response for some preliminary results.
As the discussion period nears its end, we would be delighted to receive any further comments or discussions you might have. Your input would be invaluable in enhancing our work and inspiring further research ideas.
Thank you once again for your thoughtful review and support.
Warm regards,
Authors
---
Rebuttal Comment 2.1:
Comment: Thank you for the clarification and the additional experimental results. This paper presents extensive experiments with interesting findings. I have read all the reviews and the corresponding authors' responses, and I don't see any major concerns. Therefore, I will keep my score as it is (7: accept). | Summary: The paper presents two key contributions. One is Parallel Language-specific Neuron Detection, a method of identifying elements of multilingual LLMs that are responsible for handling particular languages; the method only requires unlabeled text data in each language in order to detect these neurons, which makes it cheap and efficient. The second contribution is an insight into the workflow of multilingual LLMs (referred to in the paper as MWork), claiming that LLMs process input in three steps: understanding, task solving and output generation; task solving further splits into retrieval and thinking. Also, the thinking step is shown to be happening in English, while understanding, knowledge retrieval and generation are handled multilingually / specifically to the language of the input. Authors use PLND to verify that the models chosen for the experiments actually follow this workflow and conduct thorough experiments on two models about a dozen languages other than English, including high-resource and low-resource examples.
Strengths: - Both key contributions (PLND and MWork) are of fundamental relevance to the field of LLM interpretability and analysis
- Presented experiments are rigorous and thorough
- The paper is written clearly and although the subject is complex and multi-faceted, the authors do a good job of introducing the necessary concepts and definitions and explaining the rationale behind their work as well as the conducted experiments and analysis of their results.
Weaknesses: W1. Most importantly, the claims are very broad ("LLMs are this / LLMs do that"); however, this is verified on two very similar models: the included Mistral and Vicuna models both have 7 billion parameters, which puts them on the modest end of modern LLMs. How can we be sure that models with more parameters (13 / 30 / 70 / ...) behave in a similar manner without testing? What about the influence of the context length on the performance? This is even mentioned in the limitations section of the paper. I am not suggesting that the authors run an additional set of expensive experiments; I am suggesting that the paper text should be adjusted to reflect the actual findings of the presented experiments and avoid bold claims about "all LLMs".
W2. Another weakly supported claim is that 400 documents was enough for language-specific neuron tuning, simply because tuning on 800 documents did not yield better results. This is a single instance, and without more thorough comparisons the claim does not hold and should be adjusted in the text. This is especially important since improvements of 2.3% (low-resource) and 3.6% (high-resource) are small, and bigger amounts of language-specific data could be expected to yield bigger improvements. Again, rather than running tons of additional experiments for a single paper, I instead suggest that the paper text be adjusted (e.g., calling it a "pilot study" or "preliminary indications/results"; PLND and MWork themselves are already worthy contributions).
W3. Not a strong weakness, but minor typos:
- row 101: "With a set of n corpus" --> "With a set of n corpora", same on row 103 ('corpus' in plural is 'corpora')
- row 319: "one brunch of work" --> "one branch of work"
Technical Quality: 3
Clarity: 4
Questions for Authors: Q1. There seems to be a leap from the experimental results to conclusions about how LLMs behave. Are the presented conclusions (about understanding->thinking->generating) the only possible explanation of the obtained results, or could there be other explanations?
Q2. In your work, language-specificity is defined via that language only (importance is higher than a threshold). Would you expect a stronger performance of PLND if language-specificity were defined contrastively, that is, high impact on one language and low impact on other languages? In other words, could a neuron be important to several languages, and what would be the repercussions of such neurons on your work?
Q3. In this work you focus on the "English / non-English" setting. Is this driven by the fact that English is the most abundant language in the training data? What would you expect the behavior of the models to be if another language were the most frequent: would the thinking be done in that language? And what about a more balanced setting where more than one language is dominant in the data (for example 40% English, 40% Chinese, and other less frequent languages)?
Q4. Concerning language-specific tuning: hypothetically, is the multilingual performance of LLMs bound by English-centeredness? That is, can one achieve better performance of an LLM in a non-English language if that language's data is included in pre-training from the start, rather than the model being English-centered and then tuned to a non-English language?
Q5. Modern language models are often trained with programming code added to natural language texts. What would you expect your analysis to show in case the input is programming code or comments?
Q6. Could full fine-tuning to a non-English language achieve better results than just tuning that language's language-specific neurons?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: All good.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer TbKY,
Thank you for your insightful reviews and comments. We appreciate the time and effort you have put into providing valuable feedback. We would like to address your concerns as follows:
> Concern #1: Overclaim on model size and multilingual enhancement experiments.
We appreciate your concern regarding the claims about "LLMs xxxx" and "enhancing by 400 documents". We acknowledge that the claim about "all LLMs" is strong and will revise it accordingly in the final version. Regarding the enhancement experiment, we conduct experiments on tuning language-specific neurons with more documents from Wikipedia; here are the detailed results.
| | **Original** | **0.4k** | **1k** | **8k** | **64k** | **128k** |
| ---- | ------------ | -------- | ------ | ------ | ------- | -------- |
| Vi | 32.7 | **34.9** | 34.3 | 28.3 | 26.4 | 22.1 |
| Th | 25.6 | **28.5** | 27.4 | 23.7 | 20.8 | 15.9 |
| Ar | 21.7 | **23.4** | 22.8 | 19.2 | 16.2 | 8.3 |
| Sw | 15.1 | **16.9** | 15.2 | 12.4 | 7.9 | 5.5 |
We find that adding more documents significantly reduces performance. This phenomenon is likely attributable to overfitting the model to the Wikipedia data, causing conflicts with patterns in the newly introduced data. To further validate this assumption and explore the effectiveness of language-specific tuning, we conduct additional experiments by assembling the training set from a wider array of resources such as textbooks, websites, and journalistic materials. Due to space limitations, please kindly refer to the rebuttal addressing concern #2 provided to Reviewer 2b8j for further details.
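Conceptually, language-specific neuron tuning amounts to applying gradient updates only to the rows corresponding to the selected neurons while freezing everything else. A minimal NumPy sketch (function and parameter names are our assumptions, not the paper's code):

```python
import numpy as np

def masked_sgd_step(weight, grad, specific_rows, lr=0.1):
    """One SGD step restricted to language-specific neurons.

    weight, grad: (n_neurons, d) arrays for one layer.
    specific_rows: indices of language-specific neurons; all other
        parameters stay frozen.
    """
    updated = weight.copy()
    for r in specific_rows:
        updated[r] -= lr * grad[r]  # update only the selected rows
    return updated
```

Because only ~0.1% of parameters receive updates, such tuning is cheap and leaves the rest of the model (e.g., English performance) untouched by construction.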
> Question #1: Other assumption of framework based on observation in Figure 1
We would like to share more details of our preliminary exploration, which involves analyzing the content of embeddings but was removed from the paper for the sake of coherence. Figure (1) in the attached PDF illustrates the decoded embeddings at various layers of Vicuna-13b processing the Chinese version of "What is one plus one equal to?". Through this examination, we observe a progression: the model initially comprehends the input, transitioning tokens such as "equal" and "?" into English in the early layers. Subsequently, in the intermediate layers, nearly all tokens are in English, indicating task solving in English. Finally, in the later layers, the English answer is translated back into Chinese. In summary, our proposed framework draws inspiration not just from token distributions but also from a thorough examination of token meanings.
> Question #2: Could a neuron be important to several languages?
The answer is yes, and we have investigated the degree of overlap among language-specific neurons in Appendix C (line 512 to line 523).
> Question #3 & #4: What is the performance of non-English-centered LLMs.
We appreciate your concerns about other types of LLMs. Our analysis employs Qwen2-7b, a model that is not centered around English, owing to the comparable sizes of its Chinese and English training data. Figure (2) in the attached PDF depicts the distribution of tokens across layers. We observe that Qwen2 tends to quickly transition multilingual inputs to Chinese or English, addressing tasks predominantly in English, albeit with some reliance on Chinese support. Towards the end, the English or Chinese answers are converted back into the input languages.
> Question #5: Whether can implement on code?
We appreciate your invaluable insights on the generalizability of our proposed method; we have also explored this direction following this paper. Our initial experiments indicate its potential applicability to code enhancement. Specifically, we utilize an instruction-code dataset [3] to identify code-specific neurons and subsequently fine-tune these specialized neurons. Experiments are conducted on Llama3-8b-Instruct; here are the detailed results. Remarkably, we observe that training on a corpus of only 6k instances for less than 10 minutes can enhance the model's coding capabilities without compromising, and in some cases even enhancing, other aspects of the model. This outcome may be attributed to a clearer division of responsibilities among the parameters, culminating in superior overall model performance.
| | Human-Eval | MGSM |
| -------- | ---------- | -------- |
| Original | 32.6 | 56.4 |
| 2k | 35.6 | 57.2 |
| 6k | **37.8** | **61.6** |
> Question #6: Could full fine-tuning performs better than language-specific tuning?
Full fine-tuning demands a substantial amount of training data and may diminish performance in languages like English and Chinese, as evidenced by other continued-training works focusing on multilingual models [1]. Some works mix an English corpus into the multilingual training data to preserve English performance, whether during continued pre-training or SFT [2][4]. However, these drawbacks can be avoided by our language-specific neuron tuning method.
[1] Sailor: Open Language Models for South-East Asia, Arxiv 2024
[2] SeaLLMs - Large Language Models for Southeast Asia, ACL 2024
[3] Python_code_instructions_18k_alpaca
[4] Multilingual Instruction Tuning With Just a Pinch of Multilinguality, ACL 2024
---
Rebuttal 2:
Title: Follow-up on Our Rebuttal Submission
Comment: Dear Reviewer TbKY,
I hope this message finds you well. We are grateful for your valuable feedback on our submission and are pleased to see your positive score. We have addressed the points you raised in detail in our responses.
As the discussion period is coming to a close soon, we kindly ask if you could review our responses at your earliest convenience. We are eager to know if our explanations have alleviated your concerns. If there are still areas needing improvement, your insights would be greatly appreciated and instrumental in enhancing our work.
Thank you once again for your thoughtful review and support.
Warm regards,
Authors | Summary: This paper proposes MWork, a workflow to study how large language models handle multilingual inputs. The main idea is to detect English and non-English neurons by probing their value differences and performance gaps. Based on the results, they argue that there are three steps in the workflow: understanding, task solving, and generating. They also use experiments to support their hypothesis.
Strengths: - They propose an interesting way to understand the mechanism of large language models.
Weaknesses: - The authors classify neurons based on English and non-English, which is not very convincing to me. I believe that, beyond these two categories, there should be some neurons that are helpful for **all** languages. These neurons can be used for general understanding and logical reasoning. However, this paper ignores this aspect. See more explanations below.
- For neuron detection, when computing the impact, I am curious why you do not remove the overlapping neurons. In this way, you can detect the real language-specific ones. Specifically, in Appendix C, although the authors argue that the intersection with English from other languages is relatively limited based on rows, if we look at the columns, it seems that all the language-specific neurons are covered by English ones, suggesting that they are actually not language-specific. It is very likely that these neurons are essential for general understanding and are language-agnostic.
- It is not clear how English and non-English tokens are detected in Figure 1. Do you determine this by applying a decoder to decode from the hidden representations?
- The authors argue that large language models will use English tokens for task solving with multilingual inputs based on the interpretation of Figure 1. However, Figure 1 also shows that models will use non-English tokens for task solving with English. I don’t think this is reasonable. From my perspective, the way to classify tokens or neurons is not accurate, and the ignorance of language-agnostic neurons makes this figure unreasonable.
- From the experimental results, deactivating language-specific neurons sometimes causes a performance drop for both English and non-English, suggesting the existence of general neurons.
Technical Quality: 3
Clarity: 3
Questions for Authors: See above.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer Nru9,
We appreciate the time and effort you have put into providing valuable feedback. However, we respectfully believe there is a serious misunderstanding regarding our work. We would appreciate the opportunity to clarify a few points and address your concerns as follows.
> Misunderstanding: Exist language-agnostic neurons.
The proposed "language-specific neuron" and the framework do not contradict the presence of language-agnostic neurons. In reality, language-specific neurons depend on these language-agnostic neurons to complete latent comprehensive understanding and logical reasoning. Language-agnostic neurons reside in neurons not identified as language-specific. Analogously to humans, when an English native speaker encounters content in other languages, the solution lies in their overall understanding and problem-solving skills, surpassing any linguistic barrier.
It is crucial to note that language-specific neurons constitute merely 0.1% of all neurons for each non-English language (nearly 0.3% for English). Without the presence of language-agnostic neurons, compressing the model by a factor of 1000 and retaining only language-specific neurons will maintain performance. However, it is obviously impossible. These definitions and observations related to language-agnostic neurons also align with those from concurrent works [1, 2].
Although they account for the majority of neurons, language-agnostic neurons are not the focus of this paper. We mainly investigate how LLMs leverage the English language's capabilities for handling multilingualism. Our findings indicate that LLMs predominantly rely on language-agnostic neurons for comprehension and reasoning, reserving only a select few neurons to explicitly express that understanding and logic in diverse linguistic contexts. This is exactly the motivation of our language-specific neuron enhancement method.
Nevertheless, we acknowledge your concern and will include the discussion regarding language-agnostic neurons in the final version.
> Question #1: Why not remove the overlap among language-specific neurons?
As explained earlier, what we discovered are actually language-specific neurons, constituting only a very small fraction of all neurons. This specific region is solely dedicated to language processing, distinct from general comprehension or reasoning abilities. This discovery is also consistent with concurrent research [3]. Therefore, it does not make sense to remove overlapped neurons as they are already specialized for language.
Furthermore, neurons that overlap across all languages account for only 0.02% of all parameters. It is implausible that LLMs depend on this minute fraction of neurons for intricate comprehension and reasoning. Instead, these shared neurons between two or more languages merely denote multifunctionality, such as in an individual proficient in both French and Spanish concurrently, rather than being language-agnostic.
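As an aside on reproducibility, the overlap statistic discussed here can be computed with simple set operations over the identified neurons. The (layer, index) pairs below are hypothetical stand-ins for illustration, not the paper's actual detections:

```python
# Sketch: fraction of neurons shared across all languages.
# Each neuron is identified by a hypothetical (layer, index) pair;
# real sets would come from a language-specific neuron detection method.

def overlap_fraction(sets):
    """Fraction of the union that is shared by every language."""
    union = set().union(*sets.values())
    shared = set.intersection(*sets.values())
    return len(shared) / len(union) if union else 0.0

neurons = {
    "en": {(0, 1), (0, 2), (3, 7), (5, 9)},
    "fr": {(0, 2), (3, 7), (4, 1)},
    "es": {(0, 2), (4, 1), (6, 3)},
}

# Only (0, 2) is shared by all three toy languages.
print(overlap_fraction(neurons))
```

With real detections, the same computation over all studied languages would yield the 0.02% figure quoted above.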
> Question #2: How tokens are decoded in Figure 1?
We employ the decoder of the last layer to decode hidden representations, a method also employed in concurrent work [4] that shares similar findings with ours and likewise interprets the phenomenon as "the model thinks in English."
> Question #3: Why Figure 1 contains non-English tokens?
It should certainly contain non-English tokens; at the very least, non-English tokens are necessary to fill in entities even though the model thinks in English. Additionally, we claim that the feed-forward structure of the task-solving layer is used to extract multilingual knowledge to obtain factual content. These extracted multilingual tokens require neurons for processing and consequently manifest within the hidden representation.
> Question #4: Why does deactivating language-specific neurons influence English performance?
As analyzed in Sections 3.3 to 3.6 of the paper, the performance drop in English results from disabling the task-solving layer. If neurons are deactivated appropriately, English performance remains unaffected, as highlighted in Tables 2 to 6.
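For concreteness, a deactivation experiment of this kind is typically implemented by zeroing the activations of the identified neurons; the sketch below uses an illustrative layer shape and neuron indices, not the paper's configuration:

```python
import numpy as np

def deactivate(hidden, neuron_idx):
    """Zero the activations of selected FFN neurons.

    hidden: (seq_len, d_ff) activation matrix of one FFN layer
    neuron_idx: indices previously identified as language-specific
    """
    out = hidden.copy()          # leave the original activations intact
    out[:, neuron_idx] = 0.0
    return out

h = np.ones((4, 8))              # toy activations
h_off = deactivate(h, [1, 5])
print(h_off[:, 1].sum(), h_off[:, 5].sum())  # both 0.0
```

In practice this would be registered as a forward hook on the relevant FFN layers, so the rest of the network runs unchanged.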
[1] Language-Specific Neurons: The Key to Multilingual Capabilities in Large Language Models, ACL 2024
[2] Multilingual Knowledge Editing with Language-Agnostic Factual Neurons, Arxiv 2024
[3] Unveiling Linguistic Regions in Large Language Models, ACL 2024
[4] Do Llamas Work in English? On the Latent Language of Multilingual Transformers, ACL 2024
---
Rebuttal 2:
Title: Rebuttal Review Required for Accurate Assessment
Comment: Dear Reviewer Nru9,
I hope this message finds you well. The discussion period is ending soon, I am writing to emphasize the importance of your review for our submission. Your score is significantly lower than the other three reviewers, and we believe this discrepancy may indicate a misunderstanding or oversight.
We have addressed all the concerns in our detailed rebuttal and would appreciate your prompt attention to it. A thorough reassessment is crucial to ensure a fair evaluation.
Your expertise is highly valued, and we trust that a reconsidered review will reflect the true merit of our work.
Thank you for your immediate attention to this matter.
Best regards,
Authors
---
Rebuttal Comment 2.1:
Comment: Thank you for your response. I would like to clarify my concerns.
**Regarding language-agnostic neurons:** When I mention "language-agnostic neurons," I'm referring to the "overlapping neurons" found across language-specific neurons. As you show in Appendix C, there is a certain degree of overlap among neurons across different languages, indicating that they are not entirely language-specific. I believe it would be more appropriate to exclude these neurons from the language-specific category, as they may play a crucial role in reasoning and task-solving. If these neurons are not removed, the performance drop observed when removing English-specific or non-English-specific neurons could primarily be due to the loss of these important task-solving neurons rather than actual language differences. This is why I am not fully convinced by the experimental results.
---
Reply to Comment 2.1.1:
Comment: Dear Reviewer Nru9,
Thanks for further clarifying your concern. We acknowledge and value your observation regarding the potential language-agnostic nature of overlapped language-specific neurons and their significance in managing multilingualism.
As detailed in our rebuttal, it is important to note that language-agnostic neurons primarily constitute those that are not identified as language-specific, encompassing approximately 99% of all neurons. As evidenced in Figure 5 and Figure 6 in Appendix C, while certain languages exhibit overlaps, most language pairs share only a small fraction of neurons (less than 0.5). Furthermore, upon calculating the neurons overlapped across all languages, they represent a mere 0.02% of the total neuronal population. These shared neurons between two or more languages merely denote multifunctionality, such as in an individual proficient in both French and Spanish concurrently, rather than being language-agnostic.
We are grateful for your feedback and are currently conducting a related experiment to provide empirical evidence. We will provide you with our findings once we get the results during the ongoing discussion phase.
Best Regards,
Author | Summary: This paper examines how LLMs handle multilingualism. The author proposes a hypothetic workflow (MWork), which suggests that LLMs understand the multilingual query, think in English, and than generate results in the input language. A neuron detection method is proposed to detect language specific neurons. By deactivation some of the neurons, the authors validates the above workflow, and show that the ability for a specific language could be improved by finetuning only language specific neurons.
Strengths: The paper presents an interesting hypothesized workflow and validate it with neuron analysis.
The analysis with neuron deactivation is sound enough.
Weaknesses: The analysis of this paper is based on the understanding of layers. However, it might be possible that different tasks or even different instances may have different splitting of layers for different stages in the workflow. How would this affect the analysis?
It is still strange that 400 documents could improve the language ability of a low-resource language, especially when 800 documents are not more helpful.
Technical Quality: 3
Clarity: 3
Questions for Authors: See the weakness part.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer 2b8J,
Thank you for your insightful reviews and comments. We appreciate the time and effort you have put into providing valuable feedback. We would like to address your concerns as follows:
> Concern #1: different instances may have different splitting of layers
We appreciate your concern regarding varying splitting settings for different tasks and different instances. Yes, the boundaries between layers are not very clear and vary across models, languages and tasks. In our paper, for each model we consistently apply the same splitting approach across various languages and tasks to demonstrate the generalizability of our proposed framework. It is reasonable to speculate that more precise and finely-tuned layer splitting could improve the performance of deactivation experiments (Table 2 to Table 5) and the enhancement experiments in Section 4.
> Concern #2: why 400 documents can improve language ability
We acknowledge your concern regarding why only 400 documents can yield satisfactory performance. Considering that language-specific neurons account for just 0.1% of all parameters, this size of training corpus seems reasonable. Other studies that boost specific low-resource languages [1] [2] require nearly millions of documents, which is comparable once scaled by the proportion of language-specific neurons.
We conduct experiments on tuning language-specific neurons by more documents from Wikipedia and here are detailed results.
| | **Original** | **0.4k** | **1k** | **8k** | **64k** | **128k** |
| ---- | ------------ | -------- | ------ | ------ | ------- | -------- |
| Vi | 32.7 | **34.9** | 34.3 | 28.3 | 26.4 | 22.1 |
| Th | 25.6 | **28.5** | 27.4 | 23.7 | 20.8 | 15.9 |
| Ar | 21.7 | **23.4** | 22.8 | 19.2 | 16.2 | 8.3 |
| Sw | 15.1 | **16.9** | 15.2 | 12.4 | 7.9 | 5.5 |
We find that adding more documents significantly reduces performance. This phenomenon is likely attributed to overfitting the model to Wikipedia data, causing conflicts with patterns in the newly introduced data.
To further validate our assumption and explore the effectiveness of language-specific tuning, we conduct additional experiments by assembling the training set from a wider array of resources such as textbook, website and journalistic materials. Employing the same settings as in Table 6, we present detailed results below. While 0.4k documents enhance multilingual proficiency, 1k and 8k documents disrupt this ability as new patterns conflict with those learned during pre-training. However, expanding the training corpus to 64k and 128k documents reconstructs multilingual proficiency and boosts overall performance.
Our findings reveal that enhancing the model's multilingual capability through neuron-specific continued training undergoes a process of disruption and reconstruction. However, adding more data from Wikipedia fails to reconstruct multilingual ability due to overfitting. Notably, the training corpus of 128k documents remains lower, by a factor of 10 to 100, than the quantities required in previous studies [1] and [2].
| | **Original** | **0.4k** | **1k** | **8k** | **64k** | **128k** |
| ---- | ------------ | -------- | ------ | ------ | ------- | -------- |
| Vi | 32.7 | 34.3 | 31.8 | 31.5 | 33.7 | **35.2** |
| Th | 25.6 | 27.8 | 25.4 | 25.8 | 26.5 | **30.7** |
| Ar | 21.7 | 22.6 | 21.3 | 20.1 | 22.4 | **24.9** |
| Sw | 15.1 | 16.4 | 15.2 | 16.0 | 16.6 | **17.3** |
[1] Sailor: Open Language Models for South-East Asia, Arxiv 2024
[2] SeaLLMs - Large Language Models for Southeast Asia, ACL 2024
---
Rebuttal 2:
Title: Regarding why 400 documents yield good performance
Comment: Dear Reviewer 2b8J,
I hope this message finds you well. We are grateful for the time and effort you have put into reviewing our paper. Your concern regarding why only 400 documents can yield satisfactory performance indeed motivated us to do an additional experiment, kindly refer to the later part of our first response.
As the discussion period is coming to a close soon, we are eager to know if our explanations have alleviated your concerns. If there are still areas needing improvement, your insights would be greatly appreciated and instrumental in enhancing our work.
Thank you once again for your thoughtful review and support.
Warm regards, Authors | Rebuttal 1:
Rebuttal: Thank you for your insightful reviews and comments. We include Figures added in rebuttal in the attached PDF.
Pdf: /pdf/683b0336466b3865700a197444fa9f179677b661.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
A Versatile Diffusion Transformer with Mixture of Noise Levels for Audiovisual Generation | Accept (poster) | Summary: This paper presents a new diffusion model that is capable of generating data following various conditional distributions in multimodal data. Rather than applying a uniform noise level to all data elements, the proposed model permits the use of differing noise levels across modalities and time dimensions. This allows the model to concurrently learn a variety of conditional distributions, including cross-modal conditioning, temporal interpolation, and temporal continuation. The experimental results with audio-video datasets validate the effectiveness of the proposed model.
Strengths: - The idea of MoNL is simple and reasonable. It definitely enables the model to effectively learn various conditional distributions simultaneously.
- The proposed method performs well in the experiments. MoNL particularly contributes to boost the performance of the model in AV-inpaint and AV-continue tasks.
- The manuscript is well-written and easy to follow.
Weaknesses: - The major concern is on the experiments: the majority of the empirical evaluations are conducted with the internal dataset. As there are already several publicly-available datasets (such as Landscape, AIST++, and VGGSound) that are commonly used in the literature, I strongly encourage the authors to do at least the ablation studies with such datasets to make them reproducible.
- The novelty in methodology is somewhat limited. The idea of using various noise levels in multimodal generative modeling has been presented in the prior work [3]. A particular challenge in this paper is to extend this idea to modeling temporal dynamics, but similar strategies are commonly adopted in several video generation studies:
- “Diffusion Models for Video Prediction and Infilling,” TMLR 2022.
- “MCVD: Masked Conditional Video Diffusion for Prediction, Generation, and Interpolation,” NeurIPS 2022.
Technical Quality: 2
Clarity: 4
Questions for Authors: - In the ablations at Table 1, why is Vanilla not used? Without the Vanilla strategy, the model may not encounter cases where all elements are almost entirely noise or close to clean ones during training, which could potentially degrade the performance of the model to some extent. Therefore, a more reasonable approach for ablation might be to gradually add other strategies to Vanilla.
---
<post-rebuttal>
I have updated my rating from 4 to 5, as my concern on the dataset is somewhat resolved through the discussion.
Confidence: 4
Soundness: 2
Presentation: 4
Contribution: 2
Limitations: Limitations are properly discussed in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **[W1]**: *"The major concern is on the experiments: the majority of the empirical evaluations are conducted with the internal dataset. As there are already several publicly-available datasets (such as Landscape, AIST++, and VGGSound) that are commonly used in the literature, I strongly encourage the authors to do at least the ablation studies with such datasets to make them reproducible. "*
While we agree that evaluating on a diverse set of datasets is crucial, we believe that our approach effectively addresses the core challenge of audiovisual alignment.
Indeed, existing audiovisual datasets often lack the diversity of cues necessary for a comprehensive assessment of audiovisual synchrony.
Our focus has been on developing a robust audiovisual generative model with strong alignment, capable of handling diverse and challenging real-world scenarios such as talking avatars. The Monologues dataset (16M samples), with its extensive variety of audiovisual cues such as human appearance (e.g., a range of perceived age, perceived gender expression, head pose, lighting), verbal cues (e.g., intonation, prosody, emotion), and nonverbal cues (e.g., head nods, body gestures, and expressions), provides a strong foundation for evaluating our model's performance under these conditions. We believe that the insights gained from our experiments on this dataset are valuable contributions to the field.
Furthermore, while we included the AIST++ (1020 samples) and Landscape (8233 samples) datasets for evaluation, as they are used by our baseline MM-Diffusion [R1], we found that they are very small, and ablation results on them may not generalize given the risk of overfitting and memorization.
We are committed to advancing research in audiovisual generation and plan to open-source our Monologues dataset as well as explore the use of publicly available datasets with rich audiovisual cues.
[R1] Ruan et al. "MM-Diffusion: Learning multi-modal diffusion models for joint audio and video generation." CVPR 2023
$ $
> **[W2]**: *"The novelty in methoology is somewhat limited. The idea of using various noise levels in multimodal generative modeling has been presented in the prior work [3]. A particular challenge in this paper is to extend this idea also for modeling temporal dynamics, but similar strategies are commonly adopted in several video generation studies"*
While the concept of noise levels in generative modeling has been explored, MoNL offers significant advancements. Unlike UniDiffuser [R2], which struggles with multimodal sequence data, MoNL introduces a generalized formulation for handling diverse modalities with temporal dynamics. Our systematic evaluation demonstrates MoNL's superior performance and versatility across various tasks, including audio-video continuation and interpolation.
Unlike previous video-focused methods such as “Diffusion Models for Video Prediction and Infilling” and "MCVD", MoNL excels in handling cross-modal and noisy conditions. Our approach avoids complex masked input mechanisms and hyperparameter tuning, resulting in a simpler and more efficient model. These key distinctions highlight MoNL's novelty and contribution to the field.
We will incorporate a more detailed comparison to related work, including quantitative performance metrics, in the revised paper.
[R2] Bao et al. "One transformer fits all distributions in multi-modal diffusion at scale." ICML 2024
$ $
> **[Q1]**: *"In the ablations at Table 1, why is Vanilla not used? Without the Vanilla strategy, the model may not encounter cases where all elements are almost entirely noise or close to clean ones during training, which could potentially degrade the performance of the model to some extent. Therefore, a more reasonable approach for ablation might be to gradually add other strategies to Vanilla."*
While we appreciate the reviewer's suggestion, our ablation design was carefully considered. Our goal was to systematically demonstrate the contributions of different components to our model's overall performance.
- **Vanilla as a Baseline**: Our "joint generation" model, $\texttt{Vanilla}$, serves as a baseline for comparison. This model represents unconditional training for joint generation without specialized handling of multimodal or cross-modal tasks.
- **Task-Specific Strategies**: We introduced $\texttt{Pt}$, $\texttt{Pm}$, and $\texttt{Ptm}$ to address specific challenges in multimodal and cross-modal generation. Our ablation compares these strategies against the baseline to quantify their impact.
- **Combined Approach**: The $\texttt{Pt/Pm/Ptm}$ model demonstrates the effectiveness of combining task-specific strategies.
- **MoNL with $\texttt{Vanilla}$**: Finally, we compare MoNL (which incorporates $\texttt{Vanilla}$) to $\texttt{Pt/Pm/Ptm}$ to highlight the additional benefits of the $\texttt{Vanilla}$ components for joint generation.
Furthermore, we found that even though the model may not encounter cases where all elements are almost entirely noise or close to clean during training, each of the $\texttt{Pt}$, $\texttt{Pm}$, and $\texttt{Ptm}$ strategies works well overall and excels at its target task.
---
Rebuttal Comment 1.1:
Title: Thanks for the response
Comment: Thanks for the response. I have read the response as well as the other reviews.
I have considered the author's rebuttal, but it does not provide new or compelling information that would change my evaluation. Therefore, I will maintain my current score.
---
Rebuttal 2:
Title: Theoretical Background on Mixture of Noise Levels
Comment: ## Theoretical Background on Mixture of Noise Levels
$ $
### 1. Theoretical Background on Multimodal Learning
In *"A Theory of Multimodal Learning"* [1], multimodal learning is shown to offer a superior **generalization bound** compared to unimodal learning, with an improvement factor of **$O(\sqrt{n})$**, where **$n$** denotes the sample size. This benefit relies on **connection** and **heterogeneity** between modalities:
- **Connection**: The bound depends on learned connections between (**$\mathcal{X}$**) and (**$\mathcal{Y}$**).
- **Heterogeneity**: Describes how modalities, **$\mathcal{X}$** and **$\mathcal{Y}$**, diverge and complement.
If connection and heterogeneity are missing, ill-conditioned scenarios can arise. For instance, if **$x \equiv y$**, perfect connection suggests no need for learning about **$\mathcal{Y}$**. On the other hand, if **$x$** is random noise, there is heterogeneity but no meaningful connection between **$\mathcal{X}$** and **$\mathcal{Y}$**, making non-trivial learning on **$\mathcal{X}$** alone impractical.
The theory also highlights that learning effective connections between modalities via **generative models** can enhance multimodal learning. This forms the basis for our **Mixture of Noise Levels (MoNL)** approach, which is particularly suited for multimodal learning with **audio** and **video** data.
[1] Zhou Lu. *"A Theory of Multimodal Learning."* NeurIPS 2023.
$ $
### 2. Advantages of Mixture of Noise Level Training (MoNL)
Our **Mixture of Noise Level (MoNL)** training method offers significant benefits for multimodal learning, especially with **audio** and **video** data:
- **Heterogeneity and Connection**: Audio and video are naturally heterogeneous. For example, a video of a person speaking includes **audio** of spoken words and **video** of lip movements and facial expressions. MoNL uses **variable noise levels** to enhance learning by capturing the **generic transition matrix** across the **temporal axis**.
$$p _{\mathbf{\theta}}([\mathbf{z} _{t ^{(1,1)}-1} ^{(1,1)}, \ldots, \mathbf{z} _{t ^{(M, N)}-1} ^{(M, N)}] \mid[\mathbf{z} _{t ^{(1,1)}} ^{(1,1)}, \ldots, \mathbf{z} _{t ^{(M, N)}} ^{(M, N)}]) \qquad \text{(Eq. (4))}$$
where $M$, $N$ are the number of modalities and time-segments, respectively.
- **Enhanced Connectivity**: MoNL improves **connectivity** between **audio** and **video** modalities. Our experiments show that MoNL often surpasses **task-specific learning** approaches by fostering better connections between modalities, adapting its focus more effectively.
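The variable-noise-level idea behind Eq. (4) can be sketched as sampling an independent diffusion timestep for each (modality, time-segment) block. The number of steps, the linear beta schedule, and the block shapes below are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000                          # diffusion steps (assumed)
M, N = 2, 4                       # modalities (audio/video) x time-segments
betas = np.linspace(1e-4, 2e-2, T)
alpha_bar = np.cumprod(1.0 - betas)

def monl_noise(blocks):
    """Apply an independent noise level to every (modality, segment) block.

    blocks: array of shape (M, N, D) of clean latents.
    Returns noisy blocks and the per-block timesteps the model
    would be conditioned on.
    """
    t = rng.integers(0, T, size=(M, N))   # one timestep per block
    a = alpha_bar[t][..., None]           # broadcast over feature dim D
    eps = rng.standard_normal(blocks.shape)
    noisy = np.sqrt(a) * blocks + np.sqrt(1.0 - a) * eps
    return noisy, t

x = np.zeros((M, N, 16))
noisy, t = monl_noise(x)
print(noisy.shape, t.shape)
```

Fixing a block's timestep near 0 turns it into an almost-clean condition (e.g., clean audio row for audio-to-video), while a single shared t across all blocks recovers Vanilla joint training.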
$ $
### 3. Enhanced Connectivity - Comparison with Existing Methods
- **MoNL vs. Joint Learning in MMD** [2]: Unlike joint learning methods that focus on the joint distribution $p_{\mathbf{\theta}}(\mathbf{z} _{t-1} \mid\mathbf{z} _{t})$, MoNL trains across **multiple conditioning**, enabling better connections by varying its focus. This is evidenced by MoNL outperforming the Vanilla (see Table 1) and MMD models (see Tables 2 and 3).
- **MoNL vs. Per-Modality Training**: MoNL goes beyond per-modality training in UniDiffuser [3], which uses variable noise between modalities i.e., learning $p_{\mathbf{\theta}}([\mathbf{z} _{t ^{(1)}-1} ^{(1)}, \ldots, \mathbf{z} _{t ^{(M)}-1} ^{(M)}] \mid[\mathbf{z} _{t ^{(1)}} ^{(1)}, \ldots, \mathbf{z} _{t ^{(M)}} ^{(M)}])$. MoNL introduces variable noise across different **time segments**, learning connections across **temporal dynamics** as well. This advantage is demonstrated in Table 1.
- **MoNL vs. Masked Training** [4]: Diffusion models often obscure high-frequency details with low noise and low-frequency structures with high noise [5]. MoNL employs variable noise levels to explore diverse **frequency components**, enhancing the model's ability to correlate high and low-frequency elements. This is in contrast to masked self-supervised learning, which limits frequency-specific connections by masking entire elements.
[2] Ruan et al. *"MM-Diffusion: Learning Multi-Modal Diffusion Models for Joint Audio and Video Generation."* CVPR 2023.
[3] Bao et al. *"One Transformer Fits All Distributions in Multi-Modal Diffusion at Scale."* ICML 2023.
[4] Voleti et al. *"MCVD: Masked Conditional Video Diffusion for Prediction, Generation, and Interpolation."* NeurIPS 2022.
[5] Sander Dieleman. *"Noise Schedules Considered Harmful."* [Link](https://sander.ai/2024/06/14/noise-schedules).
$ $
### Conclusion
In summary, the effectiveness of **MoNL** for **multimodal diffusion models**, particularly with **audio** and **video** data, stems from its strategic use of **connection** and **heterogeneity**. By applying **variable noise levels**, MoNL enhances **connectivity** between modalities and better adapts to diverse **temporal** and **frequency components**, leading to superior performance compared to existing multimodal learning methods.
---
Rebuttal 3:
Title: Clarification on Resource Constraints and Additional Theoretical Contributions
Comment: Dear Reviewer 4xRp,
Thank you for your feedback and for considering our initial response.
We acknowledge the importance of validating our model on publicly available datasets. Unfortunately, training on these datasets requires substantial computational resources (approximately $35k per model), which presents a significant challenge at the moment. However, we plan to proceed with these experiments once approval is granted, and we believe our work on the Monologues dataset, which contains 16M samples with a diverse range of audiovisual cues, provides valuable insights and demonstrates the robustness of our approach.
In addition, we have newly provided the **"Theoretical Background on Mixture of Noise Levels,"** which we hope you will consider as part of our ongoing effort to strengthen our work and contribute to the field.
We hope you understand our constraints and continue to see the value in the contributions we have made with the resources available. We remain committed to advancing research in **audiovisual generation** and plan to release the Monologues dataset to further support reproducibility in the future and encourage future work.
Thank you for your consideration!
---
Rebuttal Comment 3.1:
Title: Thanks for further clarification
Comment: I appreciate that the authors plan to release the Monologues dataset. I would like to hear more on how the dataset was properly constructed to avoid any issues related to license or private information.
One quick comment on the additional theoretical background. This theoretical perspective itself sounds really interesting. While I have no doubt on heterogeneity of audio and video, it seems that it is not trivial to show how MoNL contributes to boost connectivity in theory.
---
Rebuttal 4:
Title: Clarifications and Additional Insights on the Monologues Dataset and MoNL
Comment: Dear Reviewer 4xRp,
Thank you for your valuable feedback. We appreciate your interest in the Monologues dataset and your insights into the theoretical aspects of our work.
**Dataset Construction**
To address your concerns about dataset construction, we want to emphasize that the Monologues dataset was meticulously curated to adhere to ethical and legal standards. We have implemented robust measures to protect user privacy and avoid copyright infringement. The dataset consists exclusively of public links (and timestamps) to videos that are:
* **Older than 90 days**
* Free from **copyright claims**
* Excluding **harmful or inappropriate material** (e.g., nudity, violence)
* Verified to contain **motion** (excluding static images)
* In the **English language** with a **single visible speaker**
* Framed with **torso up**
The dataset link list is automatically updated on a daily basis to ensure ongoing compliance with these criteria.
The Monologues dataset is a valuable resource for the research community, providing a rich audiovisual benchmark for developing and evaluating multimodal models.
**Theoretical Background**
While formally proving the connectivity benefits of MoNL presents a significant challenge, our empirical results provide compelling evidence of its effectiveness. Table 1 and Figures 2 and 7 demonstrate MoNL's superior performance across diverse audiovisual tasks, generating more temporally consistent audiovisual output than joint learning and per-modality methods, suggesting enhanced intra- and inter-modality connectivity.
We recognize the need for a deeper theoretical understanding and plan to explore this area in future research. As highlighted in *"A Theory of Multimodal Learning."* (Zhou et al. NeurIPS 2023), theoretical foundations in multimodal learning remain relatively underdeveloped. Our work serves as a **crucial experimental foundation** for future theoretical investigations in this field. We believe our work represents a **substantial step forward in audiovisual and multimodal research**.
**We believe these clarifications provide valuable insights into our work and would be grateful for your consideration of an improved score.**
Sincerely,
The Authors
---
Rebuttal 5:
Title: Additional analysis of the objectives and effectiveness of our approach
Comment: To enhance understanding of how MoNL contributes to building a unified model for diverse audiovisual tasks (including cross-modal inference and multimodal interpolation) and improves connectivity between modalities, we provide a detailed analysis of the objectives and effectiveness of our approach, considering a simplified case with two modalities and two time-segments.
$ $
### Objective Functions
To determine the optimal objective function for learning a model $\theta$ that estimates:
- **Cross-modal inferences:** $P(X|Y)$ and $P(Y|X)$
- **Multimodal interpolations:** $P([\mathbf{x}_2, \mathbf{y}_2] \mid [\mathbf{x}_1, \mathbf{y}_1])$, $P([\mathbf{x}_1, \mathbf{y}_1] \mid [\mathbf{x}_2, \mathbf{y}_2])$, $P([\mathbf{x}_1, \mathbf{y}_2] \mid [\mathbf{x}_2, \mathbf{y}_1])$, and $P([\mathbf{x}_2, \mathbf{y}_1] \mid [\mathbf{x}_1, \mathbf{y}_2])$
we evaluate the following objectives for heterogeneous multimodal data $X$ and $Y$, where $X = [\mathbf{x}_1, \mathbf{x}_2]$ and $Y = [\mathbf{y}_1, \mathbf{y}_2]$.
1. **Task-Specific Method**
$$
\min \mathbb{E}[P_\theta(X \mid Y)]
$$
This approach focuses solely on one direction of inference, potentially missing significant interactions between modalities.
2. **Joint Learning Method**
$$
\min \mathbb{E}[P_\theta(X, Y)]
$$
This method captures both modalities simultaneously but may fail to model temporal and multimodal interactions, which are crucial for conditional tasks.
3. **Per-Modality Method**
$$
\min \mathbb{E}\left[P_\theta(X, Y) + P_\theta(X \mid Y) + \alpha P_\theta(Y \mid X)\right]
$$
This method improves upon the task-specific approach by incorporating reverse inference between modalities. However, it may not effectively capture temporal interactions, which are important for interpolation tasks.
4. **Mixture of Noise Levels (MoNL)**
$$
\min \mathbb{E}\left[P_\theta(X, Y) + \beta_1 P_\theta(X \mid Y) + \beta_2 P_\theta(Y \mid X) + \beta_3 P([\mathbf{x}_2, \mathbf{y}_2] \mid [\mathbf{x}_1, \mathbf{y}_1]) + \beta_4 P([\mathbf{x}_1, \mathbf{y}_1] \mid [\mathbf{x}_2, \mathbf{y}_2]) + \beta_5 P([\mathbf{x}_1, \mathbf{y}_2] \mid [\mathbf{x}_2, \mathbf{y}_1]) + \beta_6 P([\mathbf{x}_2, \mathbf{y}_1] \mid [\mathbf{x}_1, \mathbf{y}_2])\right]
$$
MoNL integrates joint distributions, cross-modal inferences, and multimodal interpolations by learning a transition matrix that captures complex interactions between modalities.
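Each conditional term above corresponds to a per-block timestep pattern: conditioned blocks receive $t = 0$ (clean), target blocks receive a sampled noise level. The encoding below is an illustrative sketch of this correspondence, not an author-specified implementation:

```python
import numpy as np

# Rows are modalities [x, y]; columns are time-segments [1, 2].
# 1 = block to be generated (gets a sampled noise level),
# 0 = block given as a clean condition (t = 0).
TASKS = {
    "joint":        np.array([[1, 1], [1, 1]]),  # P(X, Y)
    "x_given_y":    np.array([[1, 1], [0, 0]]),  # P(X | Y)
    "y_given_x":    np.array([[0, 0], [1, 1]]),  # P(Y | X)
    "continuation": np.array([[0, 1], [0, 1]]),  # P([x2, y2] | [x1, y1])
    "reversal":     np.array([[1, 0], [1, 0]]),  # P([x1, y1] | [x2, y2])
    "cross_a":      np.array([[1, 0], [0, 1]]),  # P([x1, y2] | [x2, y1])
    "cross_b":      np.array([[0, 1], [1, 0]]),  # P([x2, y1] | [x1, y2])
}

rng = np.random.default_rng(0)

def sample_timesteps(task, T=1000):
    """Per-block timesteps: 0 for conditioned blocks, random for targets."""
    return TASKS[task] * rng.integers(1, T, size=(2, 2))

print(sample_timesteps("continuation"))  # first column is zeros
```

Sampling uniformly over all (modality, segment) timesteps, as MoNL does, covers these patterns (and everything in between) without enumerating tasks explicitly.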
$ $
### Heterogeneous and Connected Multimodality
For heterogeneous multimodal data (e.g., audio and video), where $X$ and $Y$ are interconnected but distinct, MoNL excels by capturing intricate interactions through:
- **Direct Relationships:** Modeled by $P_\theta(X, Y)$
- **Cross-Modal Inferences:** Modeled by $P_\theta(X \mid Y)$ and $P_\theta(Y \mid X)$
- **Multimodal Interpolations:** Modeled by $P_\theta([\mathbf{x}_2, \mathbf{y}_2] \mid [\mathbf{x}_1, \mathbf{y}_1])$, $P_\theta([\mathbf{x}_1, \mathbf{y}_1] \mid [\mathbf{x}_2, \mathbf{y}_2])$, and similar terms.
MoNL effectively models the connectivity and complex interactions between modalities, significantly enhancing performance in various audiovisual tasks.
$ $
We hope this analysis is helpful and will be incorporated into the final version.
---
Rebuttal Comment 5.1:
Title: Thanks for the response
Comment: Thanks for the response. I will update my rating after the discussion with the other reviewers. | Summary: The paper tackles the audio-visual cross-modality generation problem and proposes a training approach to learn arbitrary conditional distributions in the audiovisual space. At the methodological level, the authors propose to apply variable diffusion timesteps across the temporal dimension. The experiments are conducted on Monologues, AIST++, and Landscape datasets.
Strengths: The high-level motivation to learn arbitrary conditional distributions in the audiovisual space is interesting. The experiments show promising results and the authors conduct user studies as supplementary evaluations.
Weaknesses: 1. The writing and presentation of the paper can be improved. Figure 1 seems to have some issues with the first-row caption for the AIST dataset, which makes it difficult to read. The usage of math symbols is inconsistent, e.g., the $x$ should be $\bf{x}$ in Line 72. The term “multivariate data” in Line 63 is also not rigorous and confusing; in other words, do the authors imply that the static image data is univariate? The use of $\equiv$ and $=$ is also mixed in the paper.
2. One of the key claims and motivations, “Training separate models for each variation is expensive and impractical”, seems controversial and susceptible to me, while I understand training separate DMs for audio and visual data could be expensive, I don’t think that learning a single mixture is a better option than learning two separate data distributions, because learning a mixture increase the complexity of target data distribution, and thus should not be beneficial for performance especially if the domain gap between separate modality is large. As this is one of the key claims that motivates the methodological design, I was expecting more rigorous theoretical support in the paper, could the authors elaborate on that?
3. For the experiments, while the authors claim that “the sequential representations can be either latent spaces or raw data”, the actual implementation only conducts experiments on low dimensional features of pre-trained models, i.e., MAGVIT-v2 for video representations and SoundStream for audio representations. While this is not a serious flaw, it raises questions about the generalizability of the proposed method, and the fairness in comparison with other methods, as the pre-trained models may (and very likely) directly influence the final performance. And the baseline method MM-Diffusion seems not to operate on the same feature space according to Appendix C?
Technical Quality: 3
Clarity: 2
Questions for Authors: In addition to the comments in the Weaknesses section.
I may have missed several details: how is the scheduler defined in the proposed method Eq. (5)? What is the time cost for training the proposed model? And how does this design yield a difference compared to training on single modality or separate DMs?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The limitations are discussed in the Appendix A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **[W1]**: *"The writing and presentation of the paper can be improved. Figure 1 seems to have some issues with the first-row caption for the AIST dataset, which makes it difficult to read. The usage of math symbols is inconsistent, e.g., the $𝑥$ should be $\boldsymbol{x}$ in Line 72. The term “multivariate data” in Line 63 is also not rigorous and confusing; in other words, do the authors imply that the static image data is univariate? The use of ≡ and = is also mixed in the paper."*
We sincerely appreciate the reviewer’s constructive feedback on the clarity and presentation of our paper. We agree that improving these aspects is crucial for effective communication of our research.
We will meticulously address the reviewer’s comments by enhancing the readability of Figure 1, ensuring consistency in mathematical notation (e.g., using $\boldsymbol{x}$ for vectors), and refining ambiguous terminology such as "multivariate data" to accurately reflect the sequential nature of our data. We will also carefully distinguish between the use of ≡ and = throughout the paper.
$ $
> **[W2&Q3]**: *"One of the key claims and motivations, “Training separate models for each variation is expensive and impractical”, seems controversial and susceptible to me, while I understand training separate DMs for audio and visual data could be expensive, I don’t think that learning a single mixture is a better option than learning two separate data distributions, because learning a mixture increase the complexity of target data distribution, and thus should not be beneficial for performance especially if the domain gap between separate modality is large. As this is one of the key claims that motivates the methodological design, I was expecting more rigorous theoretical support in the paper, could the authors elaborate on that?"*
We appreciate the reviewer's insightful comments. While learning a single mixture model can increase complexity compared to separate models, we argue that the benefits of a unified model outweigh the potential drawbacks, especially in the context of large-scale multimodal tasks.
The reviewer points out the potential challenges of learning a mixture distribution with large domain gaps. While we acknowledge this, we believe that the shared underlying structure of audio and video data, such as temporal synchronization and physical-world correlations, can help mitigate these issues. Our model leverages these shared representations to improve performance across various tasks. This is supported by our experimental results in Table 1, as well as by recent work on audio-visual representation learning (AVMAE [R1]), which demonstrates the benefits of multimodal learning on audiovisual tasks over unimodal learning for audio and video respectively. Similarly, UniDiffuser [R2] shows the success of multimodal generation in the text-image domain.
As you mentioned, training separate models for each task variation can be prohibitively expensive, especially when considering the vast number of potential combinations in the audio-video domain. Our approach offers a more efficient and scalable solution by learning a single model capable of handling multiple tasks simultaneously.
We believe that our model offers a promising approach to tackling the challenges of large-scale multimodal tasks.
[R1] Georgescu et al. "Audiovisual masked autoencoders." CVPR 2023
[R2] Bao et al. "One transformer fits all distributions in multi-modal diffusion at scale." ICML 2023
$ $
> **[W3]**: *"For the experiments, while the authors claim that “the sequential representations can be either latent spaces or raw data”, the actual implementation only conducts experiments on low dimensional features of pre-trained models, i.e., MAGVIT-v2 for video representations and SoundStream for audio representations. While this is not a serious flaw, it raises questions about the generalizability of the proposed method, and the fairness in comparison with other methods, as the pre-trained models may (and very likely) directly influence the final performance. And the baseline method MM-Diffusion seems not to operate on the same feature space according to Appendix C? "*
We appreciate the reviewer's insightful comments. Regarding generalizability, we acknowledge the limitation of our current experiments to latent spaces. While we believe our method is conceptually applicable to raw data, practical constraints such as computational resources prevented us from conducting extensive experiments in that setting. We will clarify this point in the revised version. Note that our training data was not seen during the training of the audio and video autoencoders.
MMD uses its own autoencoders. To address the fairness of the comparison, the ideal experiment would compare MMD trained with our proposed MoNL against MMD trained with its original joint-learning objective. However, we encountered challenges in effectively integrating our diffusion timestep vector with the sparse multimodal attention module (RS-MMA), a core component of the MMD architecture.
To provide a more equitable comparison, we opted to train a transformer-based variant of MMD (Vanilla in Table 1) as a counterpart to our transformer-based MoNL. Our results demonstrate the superiority of MoNL over this transformer-based MMD, highlighting the effectiveness of our proposed approach. Additionally, we directly compare AVDiT, MoNL, and MMD to further underscore the overall strength of our model.
$ $
> **[Q1]**: *"how is the scheduler defined in the proposed method Eq. (5)? "*
The noise scheduler used in Equation (5) is identical to the one detailed in Section 2 (Lines 68-74).
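For readers without the paper at hand, one common choice for such a scheduler is the cosine schedule. The snippet below is purely illustrative and is not taken from the paper, whose actual scheduler is the one defined in its Section 2; the default values of `T` and `s` are conventional assumptions.

```python
import math

# Illustrative only: a standard cosine noise schedule (one common choice);
# the paper's actual scheduler is the one defined in its Section 2.
def alpha_bar(t, T=1000, s=0.008):
    """Cumulative signal level at timestep t: ~1 when clean, ~0 at pure noise."""
    f = lambda u: math.cos((u / T + s) / (1 + s) * math.pi / 2) ** 2
    return f(t) / f(0)
```

The schedule decreases monotonically from 1 at $t = 0$ toward 0 at $t = T$, so larger timesteps correspond to noisier inputs.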
$ $
> **[Q2]**: *"What is the time cost for training the proposed model?"*
A comprehensive analysis of the training time costs is provided in the Supplementary Material, Section B (Lines 536-541). On average, the models were trained for around 350K steps with a batch size of 256, which took about five days.
---
Rebuttal Comment 1.1:
Title: Post-rebuttal
Comment: I appreciate the author's efforts in preparing the rebuttal.
After reading the rebuttal, I think some of my concerns related to the experiments are clarified.
On the other hand, this is a rather empirical paper, and I cannot always find the underlying theoretical intuition of this entire line of work. However, perhaps brute-force scaling up can make up for the lack of theoretical justification for the challenge of learning disjoint distributions without much understanding of the data distribution, and it seems to work fine.
Anyway, I raise my score to 5 for post-rebuttal.
---
Rebuttal 2:
Title: Theoretical Background on Mixture of Noise Levels
Comment: ## Theoretical Background on Mixture of Noise Levels
$ $
### 1. Theoretical Background on Multimodal Learning
In *"A Theory of Multimodal Learning"* [1], multimodal learning is shown to offer a superior **generalization bound** compared to unimodal learning, with an improvement factor of **$O(\sqrt{n})$**, where **$n$** denotes the sample size. This benefit relies on **connection** and **heterogeneity** between modalities:
- **Connection**: The bound depends on learned connections between (**$\mathcal{X}$**) and (**$\mathcal{Y}$**).
- **Heterogeneity**: Describes how modalities, **$\mathcal{X}$** and **$\mathcal{Y}$**, diverge and complement.
If connection and heterogeneity are missing, ill-conditioned scenarios can arise. For instance, if **$x \equiv y$**, perfect connection suggests no need for learning about **$\mathcal{Y}$**. On the other hand, if **$x$** is random noise, there is heterogeneity but no meaningful connection between **$\mathcal{X}$** and **$\mathcal{Y}$**, making non-trivial learning on **$\mathcal{X}$** alone impractical.
The theory also highlights that learning effective connections between modalities via **generative models** can enhance multimodal learning. This forms the basis for our **Mixture of Noise Levels (MoNL)** approach, which is particularly suited for multimodal learning with **audio** and **video** data.
[1] Zhou Lu. *"A Theory of Multimodal Learning."* NeurIPS 2023.
$ $
### 2. Advantages of Mixture of Noise Level Training (MoNL)
Our **Mixture of Noise Level (MoNL)** training method offers significant benefits for multimodal learning, especially with **audio** and **video** data:
- **Heterogeneity and Connection**: Audio and video are naturally heterogeneous. For example, a video of a person speaking includes **audio** of spoken words and **video** of lip movements and facial expressions. MoNL uses **variable noise levels** to enhance learning by capturing the **generic transition matrix** across the **temporal axis**.
$$p _{\mathbf{\theta}}([\mathbf{z} _{t ^{(1,1)}-1} ^{(1,1)}, \ldots, \mathbf{z} _{t ^{(M, N)}-1} ^{(M, N)}] \mid[\mathbf{z} _{t ^{(1,1)}} ^{(1,1)}, \ldots, \mathbf{z} _{t ^{(M, N)}} ^{(M, N)}]) \qquad \text{(Eq. (4))}$$
where $M$, $N$ are the number of modalities and time-segments, respectively.
- **Enhanced Connectivity**: MoNL improves **connectivity** between **audio** and **video** modalities. Our experiments show that MoNL often surpasses **task-specific learning** approaches by fostering better connections between modalities, adapting its focus more effectively.
$ $
### 3. Enhanced Connectivity - Comparison with Existing Methods
- **MoNL vs. Joint Learning in MMD** [2]: Unlike joint learning methods that focus on the joint distribution $p_{\mathbf{\theta}}(\mathbf{z} _{t-1} \mid\mathbf{z} _{t})$, MoNL trains across **multiple conditioning**, enabling better connections by varying its focus. This is evidenced by MoNL outperforming the Vanilla (see Table 1) and MMD models (see Tables 2 and 3).
- **MoNL vs. Per-Modality Training**: MoNL goes beyond per-modality training in UniDiffuser [3], which uses variable noise between modalities i.e., learning $p_{\mathbf{\theta}}([\mathbf{z} _{t ^{(1)}-1} ^{(1)}, \ldots, \mathbf{z} _{t ^{(M)}-1} ^{(M)}] \mid[\mathbf{z} _{t ^{(1)}} ^{(1)}, \ldots, \mathbf{z} _{t ^{(M)}} ^{(M)}])$. MoNL introduces variable noise across different **time segments**, learning connections across **temporal dynamics** as well. This advantage is demonstrated in Table 1.
- **MoNL vs. Masked Training** [4]: Diffusion models often obscure high-frequency details with low noise and low-frequency structures with high noise [5]. MoNL employs variable noise levels to explore diverse **frequency components**, enhancing the model's ability to correlate high and low-frequency elements. This is in contrast to masked self-supervised learning, which limits frequency-specific connections by masking entire elements.
[2] Ruan et al. *"MM-Diffusion: Learning Multi-Modal Diffusion Models for Joint Audio and Video Generation."* CVPR 2023.
[3] Bao et al. *"One Transformer Fits All Distributions in Multi-Modal Diffusion at Scale."* ICML 2023.
[4] Voleti et al. *"MCVD: Masked Conditional Video Diffusion for Prediction, Generation, and Interpolation."* NeurIPS 2022.
[5] Sander Dieleman. *"Noise Schedules Considered Harmful."* [Link](https://sander.ai/2024/06/14/noise-schedules).
$ $
### Conclusion
In summary, the effectiveness of **MoNL** for **multimodal diffusion models**, particularly with **audio** and **video** data, stems from its strategic use of **connection** and **heterogeneity**. By applying **variable noise levels**, MoNL enhances **connectivity** between modalities and better adapts to diverse **temporal** and **frequency components**, leading to superior performance compared to existing multimodal learning methods.
---
Rebuttal Comment 2.1:
Title: Follow-Up on Theoretical Clarifications and Request for Final Review
Comment: Dear Reviewer SA7V,
Thank you for your feedback and for raising your score after reviewing our rebuttal. We appreciate your acknowledgment of our efforts to clarify the experimental aspects of our work.
In response to your concern about the theoretical intuition, we want to highlight that we have newly provided the **"Theoretical Background on Mixture of Noise Levels,"** which we hope you will consider as part of our ongoing effort to strengthen our work and contribute to the field.
As we have only two days left for further discussion, we kindly request that you review our additional responses. We sincerely appreciate the time and effort you have dedicated to reviewing our paper and your constructive and insightful comments.
Thank you once again.
Best regards,
The Authors | Summary: This paper introduces the Audiovisual Diffusion Transformer (AVDiT) with Mixture of Noise Levels (MoNL) for audiovisual sequence generation. The key innovation is the use of variable noise levels during the diffusion process, applied across different time segments and modalities. This approach enables the model to effectively learn arbitrary conditional distributions in a task-agnostic manner, making it versatile for various generation tasks such as cross-modal generation, multimodal interpolation, and audiovisual continuation. Experiments on multiple datasets demonstrate that AVDiT with MoNL outperforms existing baselines, generating temporally and perceptually consistent audiovisual sequences.
Strengths: 1. The proposed variable noise levels across different time segments and modalities is an effective approach to enhance the flexibility of diffusion models.
2. The model's ability to handle various audiovisual generation tasks within a single framework is impressive.
3. The paper includes extensive experiments on multiple datasets, with both qualitative and quantitative evaluations. The additional demo page provides an intuitive comparison.
Weaknesses: 1. While the variable noise levels concept is interesting, the overall novelty of the approach may be seen as incremental. Similar techniques in diffusion models and transformers have been explored in papers like MM-Diffusion.
2. Some technical details are not thoroughly explained, such as the criteria for selecting and varying noise levels across time segments and modalities.
3. The evaluation is primarily conducted on a few datasets (Monologues, AIST++, Landscape). The model's generalizability to real-world scenarios remains uncertain.
4. The authors compare the transformer-based AVDiT model to the UNet-based MM-Diffusion model, which might not be a fair comparison due to the different architectures. The authors should consider training a UNet-based model using their proposed approach to provide a more direct comparison and validate the effectiveness of their method.
Technical Quality: 3
Clarity: 2
Questions for Authors: See weakness.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors mention some limitations in the appendix, such as the need for further improvements in visual and speech quality and the potential for ethical concerns. However, for me, the lack of extensive evaluation of more diverse and real-world datasets is a noteworthy limitation. Some additional limitations can be found in the Weaknesses section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **[W1]**: *"While the variable noise levels concept is interesting, the overall novelty of the approach may be seen as incremental. "*
While we appreciate the reviewer's acknowledgment of the variable noise levels concept, we believe that our work offers significant advancements beyond prior art.
- **Temporal Dynamics**: Unlike MM-Diffusion [R1], which struggles with generating temporally consistent sequences, our method effectively models temporal dependencies through AVDiT with MoNL. This enables superior performance on tasks requiring temporal coherence, such as multimodal interpolation and audiovisual continuation.
- **Generalization and Flexibility**: Our approach is designed to handle a wide range of multimodal, sequential tasks, surpassing the limitations of previous methods like UniDiffuser [R2] and Versatile Diffusion [R3], which primarily focus on unimodal or static data. By introducing a generalized framework, we enable the modeling of complex audiovisual interactions and the generation of expressive, controllable multimedia content.
- **Systematic Evaluation and Performance**: Our extensive evaluation demonstrates that AVDiT with MoNL consistently outperforms state-of-the-art baselines in generating high-quality, temporally coherent audiovisual sequences.
- **Pioneering Diffusion Transformer for Audio-Video**: To the best of our knowledge, our work is the first to successfully apply a diffusion transformer to the audio-video multimodal domain. This novel architecture, combined with MoNL, has led to significant advancements in audiovisual sequence generation.
[R1] Ruan et al. "MM-Diffusion: Learning multi-modal diffusion models for joint audio and video generation." CVPR 2023
[R2] Bao et al. "One transformer fits all distributions in multi-modal diffusion at scale." ICML 2023
[R3] Xu et al. "Versatile diffusion: Text, images and variations all in one diffusion model." CVPR 2023
$ $
> **[W2]**: *"Some technical details are not thoroughly explained, such as the criteria for selecting and varying noise levels across time segments and modalities. "*
We appreciate the reviewer's keen interest in the technical details of our work. Section 3.2 and Algorithm 1 provide a detailed explanation of how noise levels are selected and varied across time segments and modalities. In MoNL, a training paradigm in which the timestep configuration is drawn uniformly at random from a mixture, we first choose a strategy for selecting variable noise levels from $\mathcal{U}( \set{ \texttt {Vanilla}, \texttt {Pt}, \texttt {Pm}, \texttt {Ptm} } )$, and then set the noise levels by constructing the timestep vector according to the chosen strategy (Lines 123-135).
We understand the importance of these technical details for reproducibility and have included comprehensive supplementary materials covering the implementation of the autoencoders, AVDiT, diffusion training, and inference, as well as the experimental setup and evaluation details. To enhance readability, we are committed to carefully selecting key details for inclusion in the main paper, ensuring that the core methodology is clear and accessible to a broad audience.
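The strategy-based noise-level selection described above can be sketched as follows; the matrix shapes (M modalities by N time segments), the step count, and the per-strategy sharing pattern are assumptions for illustration, not the authors' code.

```python
import random

# Hedged sketch of the noise-level selection: sample a strategy uniformly
# from {Vanilla, Pt, Pm, Ptm}, then build the timestep matrix accordingly.
M, N, T = 2, 4, 1000  # modalities, time segments, diffusion steps (assumed)

def sample_monl_timesteps():
    strategy = random.choice(["Vanilla", "Pt", "Pm", "Ptm"])
    if strategy == "Vanilla":   # one shared timestep for every block
        t = random.randrange(T)
        return [[t] * N for _ in range(M)]
    if strategy == "Pm":        # per-modality timesteps, shared across segments
        ts = [random.randrange(T) for _ in range(M)]
        return [[ts[m]] * N for m in range(M)]
    if strategy == "Pt":        # per-segment timesteps, shared across modalities
        ts = [random.randrange(T) for _ in range(N)]
        return [list(ts) for _ in range(M)]
    # "Ptm": an independent timestep for each (modality, segment) block
    return [[random.randrange(T) for _ in range(N)] for _ in range(M)]
```

Each draw yields an M-by-N timestep matrix, so the same network sees shared, per-modality, per-segment, and fully independent noise configurations over the course of training.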
$ $
> **[W3]**: *"The evaluation is primarily conducted on a few datasets (Monologues, AIST++, Landscape). The model's generalizability to real-world scenarios remains uncertain."*
While we agree that evaluating on a diverse set of datasets is crucial, we believe that our approach effectively addresses the core challenge of audiovisual alignment.
Indeed, existing audiovisual datasets often lack the diversity of cues necessary for a comprehensive assessment of audiovisual synchrony.
The Monologues dataset (16M), with its rich variety of audiovisual cues, provides a strong foundation for assessing this alignment: human appearances (e.g., a range of perceived age, perceived gender expression, head pose, lighting), verbal cues (e.g., intonation, prosody, emotion), and nonverbal cues (e.g., head nods, body gestures, and expressions).
To further demonstrate the generalizability of our model, we conducted comprehensive experiments on multiple datasets, including AIST++ and Landscape, on which our baselines were evaluated. These datasets represent diverse audiovisual domains, ensuring a wide range of audio (music, speech, natural sounds) and video content (monologues, dance, natural scenes).
This diverse evaluation demonstrates the versatility of our model in handling various audiovisual combinations.
$ $
> **[W4]**: *"The authors compare the transformer-based AVDiT model to the UNet-based MM-Diffusion model, which might not be a fair comparison due to the different architectures. The authors should consider training a UNet-based model using their proposed approach to provide a more direct comparison and validate the effectiveness of their method. "*
We appreciate the reviewer's insightful comment regarding the architectural differences between our transformer-based MoNL and the UNet-based MMD. While a comparison of MMD with our proposed MoNL and MMD with original joint learning would be ideal, we encountered challenges in effectively integrating our diffusion timestep vector with the sparse multimodal attention module (RS-MMA) – a core component of the MMD architecture.
To provide a more equitable comparison, we opted to train a transformer-based joint-learning model that can be regarded as a variant of MMD (Vanilla in Table 1) as a counterpart to our transformer-based MoNL. Our results demonstrate the superiority of MoNL over this transformer-based MMD variant, highlighting the effectiveness of our proposed approach. Additionally, we directly compare AVDiT, MoNL, and MMD to further underscore the overall strength of our model.
We will clarify this point and provide additional details in the revised manuscript.
---
Rebuttal 2:
Title: Theoretical Background on Mixture of Noise Levels
Comment: ## Theoretical Background on Mixture of Noise Levels
$ $
### 1. Theoretical Background on Multimodal Learning
In *"A Theory of Multimodal Learning"* [1], multimodal learning is shown to offer a superior **generalization bound** compared to unimodal learning, with an improvement factor of **$O(\sqrt{n})$**, where **$n$** denotes the sample size. This benefit relies on **connection** and **heterogeneity** between modalities:
- **Connection**: The bound depends on learned connections between (**$\mathcal{X}$**) and (**$\mathcal{Y}$**).
- **Heterogeneity**: Describes how modalities, **$\mathcal{X}$** and **$\mathcal{Y}$**, diverge and complement.
If connection and heterogeneity are missing, ill-conditioned scenarios can arise. For instance, if **$x \equiv y$**, perfect connection suggests no need for learning about **$\mathcal{Y}$**. On the other hand, if **$x$** is random noise, there is heterogeneity but no meaningful connection between **$\mathcal{X}$** and **$\mathcal{Y}$**, making non-trivial learning on **$\mathcal{X}$** alone impractical.
The theory also highlights that learning effective connections between modalities via **generative models** can enhance multimodal learning. This forms the basis for our **Mixture of Noise Levels (MoNL)** approach, which is particularly suited for multimodal learning with **audio** and **video** data.
[1] Zhou Lu. *"A Theory of Multimodal Learning."* NeurIPS 2023.
$ $
### 2. Advantages of Mixture of Noise Level Training (MoNL)
Our **Mixture of Noise Level (MoNL)** training method offers significant benefits for multimodal learning, especially with **audio** and **video** data:
- **Heterogeneity and Connection**: Audio and video are naturally heterogeneous. For example, a video of a person speaking includes **audio** of spoken words and **video** of lip movements and facial expressions. MoNL uses **variable noise levels** to enhance learning by capturing the **generic transition matrix** across the **temporal axis**.
$$p _{\mathbf{\theta}}([\mathbf{z} _{t ^{(1,1)}-1} ^{(1,1)}, \ldots, \mathbf{z} _{t ^{(M, N)}-1} ^{(M, N)}] \mid[\mathbf{z} _{t ^{(1,1)}} ^{(1,1)}, \ldots, \mathbf{z} _{t ^{(M, N)}} ^{(M, N)}]) \qquad \text{(Eq. (4))}$$
where $M$, $N$ are the number of modalities and time-segments, respectively.
- **Enhanced Connectivity**: MoNL improves **connectivity** between **audio** and **video** modalities. Our experiments show that MoNL often surpasses **task-specific learning** approaches by fostering better connections between modalities, adapting its focus more effectively.
$ $
### 3. Enhanced Connectivity - Comparison with Existing Methods
- **MoNL vs. Joint Learning in MMD** [2]: Unlike joint learning methods that focus on the joint distribution $p_{\mathbf{\theta}}(\mathbf{z} _{t-1} \mid\mathbf{z} _{t})$, MoNL trains across **multiple conditioning**, enabling better connections by varying its focus. This is evidenced by MoNL outperforming the Vanilla (see Table 1) and MMD models (see Tables 2 and 3).
- **MoNL vs. Per-Modality Training**: MoNL goes beyond per-modality training in UniDiffuser [3], which uses variable noise between modalities i.e., learning $p_{\mathbf{\theta}}([\mathbf{z} _{t ^{(1)}-1} ^{(1)}, \ldots, \mathbf{z} _{t ^{(M)}-1} ^{(M)}] \mid[\mathbf{z} _{t ^{(1)}} ^{(1)}, \ldots, \mathbf{z} _{t ^{(M)}} ^{(M)}])$. MoNL introduces variable noise across different **time segments**, learning connections across **temporal dynamics** as well. This advantage is demonstrated in Table 1.
- **MoNL vs. Masked Training** [4]: Diffusion models often obscure high-frequency details with low noise and low-frequency structures with high noise [5]. MoNL employs variable noise levels to explore diverse **frequency components**, enhancing the model's ability to correlate high and low-frequency elements. This is in contrast to masked self-supervised learning, which limits frequency-specific connections by masking entire elements.
[2] Ruan et al. *"MM-Diffusion: Learning Multi-Modal Diffusion Models for Joint Audio and Video Generation."* CVPR 2023.
[3] Bao et al. *"One Transformer Fits All Distributions in Multi-Modal Diffusion at Scale."* ICML 2023.
[4] Voleti et al. *"MCVD: Masked Conditional Video Diffusion for Prediction, Generation, and Interpolation."* NeurIPS 2022.
[5] Sander Dieleman. *"Noise Schedules Considered Harmful."* [Link](https://sander.ai/2024/06/14/noise-schedules).
$ $
### Conclusion
In summary, the effectiveness of **MoNL** for **multimodal diffusion models**, particularly with **audio** and **video** data, stems from its strategic use of **connection** and **heterogeneity**. By applying **variable noise levels**, MoNL enhances **connectivity** between modalities and better adapts to diverse **temporal** and **frequency components**, leading to superior performance compared to existing multimodal learning methods.
---
Rebuttal 3:
Title: Request for Final Review: Response and Theoretical Clarifications
Comment: Dear Reviewer tkoE,
We kindly request that you review our responses, as we have only **two days** left for further **discussion**. We have addressed your comments thoughtfully and provided a thorough **theoretical background** of our method. We sincerely appreciate the time and effort you have dedicated to reviewing our paper and your constructive and insightful feedback.
Thank you once again.
Best regards,
The Authors
---
Rebuttal Comment 3.1:
Comment: Thank you for addressing all my comments. Overall, it is a good paper on audiovisual generation, although I find the proposed method to be somewhat incremental. I suggest the authors make their trained models and code available for reproducibility. I'm willing to increase my rating to BA.
---
Rebuttal 4:
Title: Thank you for your time and effort
Comment: Dear Reviewer tkoE,
We sincerely appreciate your time and effort in reviewing our paper. We are glad that you found our work to be a good contribution to the field of audiovisual generation.
We believe that our AVDiT with MoNL effectively models temporal dependencies, handles diverse multimodal tasks, and demonstrates superior performance compared to existing approaches, offering significant advancements beyond prior art. We believe these contributions can be valuable to the community!
We are committed to fostering reproducibility in our research and are happy to make our trained models and code publicly available upon acceptance of the paper.
Thank you again for your valuable insights.
Sincerely,
The Authors | Summary: The paper presents a new method for audiovisual generation where the input output condiitions may comprise of 2 modalities, namely video and audio sequence. The authors propose a new training approach to effectively learn conditional distribution in multimodal space. The main novely in the paper is a mixture of noise level formulation for processing the inputs. The method produces temporally consistent samples and outperforms existing arts and the vanilla configuration
Strengths: 1. The utilization of mixture of noise levels is novel and the methods seems to improve robustness in denoising, hence leading to a performance boost
2. The evaluations seem to be fair and clearly demonstrate the workings of the method.
3. The paper is well written and easy to follow.
Weaknesses: 1. Although the method seems to work well empirically, the paper lacks theoretical backing for the proposed method. It would be good to see some proofs that the proposed method leads to a better approximation of the variational lower bound and joint distribution.
2. Are there any sampling modifications required to accommodate the proposed training strategy?
3. There is a plethora of compositional works derived from the energy-based formulation of diffusion models. Could the authors analyze how the proposed method performs in comparison to them?
[1] https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/
4. I'm rating the paper borderline for now, I will improve my rating if the authors can address my concerns
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Could the authors provide a proof of why the proposed method would work better
2. Are there any sampling modifications required to accommodate the proposed training strategy?
3. Could the authors give an analysis of how the proposed algorithm will work when compared to [1]
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Please see weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **[W1&Q1]**: *"Although the method seems to work well empirically, the paper lacks theoretical backing for the proposed method. It would be good to see some proofs that the proposed method leads to a better approximation of the variational lower bound and joint distribution."*
We appreciate the reviewer's request for theoretical justification. Our approach leverages variable noise levels to learn a general transition matrix (Eq. 4, Section 3.1) between modalities and time segments. This can be used for the inference of arbitrary conditional distributions with various input-output combinations and enables handling diverse conditional tasks (Section 3.3).
To elucidate the core idea, let's consider a simplified scenario where the multivariate data consists of only two elements, $x$ and $y$. In this case, each transition matrix can be interpreted as learning a conditional distribution of the form $p(x|y=c)$, where $c$ represents a specific condition. By training across multiple variable noise levels, our objective becomes:
$$L = \mathbb{E}_{x,y}[-\log p(x,y)] + \alpha\, \mathbb{E}_{x,y}[-\log p(x|y)]$$
where $\alpha$ balances the importance of joint and conditional learning depending on the variable noise schedule strategies. This formulation demonstrates that our method simultaneously learns the joint distribution and the conditional distributions, offering potential advantages over methods that solely focus on joint learning $\mathbb{E}_{x,y}[-\log p(x,y)]$, such as MM-Diffusion.
While this simplified explanation provides intuition, we acknowledge that a more rigorous theoretical analysis is ongoing. We plan to provide a comprehensive proof and generalized formulation in future work.
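To make the training objective above concrete, here is a minimal sketch of mixture-of-noise-levels training: each (modality, time-segment) latent receives an independently sampled diffusion timestep, rather than one shared timestep for the whole sequence. The toy linear alpha-bar schedule, the placeholder `denoiser`, and all shapes are illustrative assumptions, not the paper's released code.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000            # diffusion steps (assumed)
M, N, D = 2, 4, 8   # modalities (audio/video), time segments, latent dim

def add_noise(z0, t):
    """DDPM-style forward process q(z_t | z_0) with a toy linear schedule."""
    alpha_bar = 1.0 - t / T                       # shape (B,)
    noise = rng.standard_normal(z0.shape)
    a = alpha_bar[:, None]
    return np.sqrt(a) * z0 + np.sqrt(1.0 - a) * noise, noise

def denoiser(z_t, t):
    """Placeholder network (hypothetical): identity map on the latent."""
    return z_t

def monl_training_loss(z0):
    """z0: latents of shape (M, N, D); returns a scalar noise-prediction MSE."""
    # Core of MoNL: an independent timestep t^{(m,n)} for every
    # (modality, time-segment) entry, instead of one shared t.
    t = rng.integers(0, T, size=M * N).astype(float)
    z_t, noise = add_noise(z0.reshape(M * N, D), t)
    pred = denoiser(z_t, t)
    return float(np.mean((pred - noise) ** 2))

loss = monl_training_loss(rng.standard_normal((M, N, D)))
```

Setting all entries of `t` to the same value recovers plain joint training, which is why the scheme subsumes the joint objective as a special case.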
$ $
> **[W2&Q2]**: *"Are there any sampling modifications required to accommodate the proposed training strategy? "*
We detailed sampling modifications in Section 3.3 and Figure 4. Inference involves selective noise injection based on the task. Clean inputs are used for conditioned parts ($t^{(m, n)} = 0$), while noisy inputs are injected for desired generation ($t^{(m, n)} = t$). This allows a single model to handle various conditional tasks with diverse input-output combinations.
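The selective noise injection described above can be sketched as follows; the (modality, segment) mask layout, helper names, and shapes are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
M, N, D = 2, 4, 8   # modality 0 = audio, modality 1 = video (assumed layout)

def init_inference_state(z0, generate_mask):
    """generate_mask: (M, N) boolean, True where content must be generated.
    Conditioned entries keep their clean latents (t = 0); entries to be
    generated start from pure noise (t = T)."""
    t = np.where(generate_mask, T, 0)                    # per-entry timestep
    noise = rng.standard_normal(z0.shape)
    z = np.where(generate_mask[..., None], noise, z0)    # selective injection
    return z, t

# Example task, audio-to-video: condition on all audio segments,
# generate every video segment.
z0 = rng.standard_normal((M, N, D))
mask = np.zeros((M, N), dtype=bool)
mask[1, :] = True
z, t = init_inference_state(z0, mask)
```

Changing only the mask switches the same trained model between tasks (continuation, interpolation, cross-modal generation) without retraining.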
$ $
> **[W3&Q3]**: *"There is a plethora of compositional works derived from the energy based formulation of diffusion models. Could the authors analyze how the proposed method performs in comparison to it. [1] "*
Both methods involve energy-based diffusion models. However, our work differs significantly:
- **Target Tasks/Modality**: We focus on Audio-to-Video, Video-to-Audio, and Audiovisual continuation and interpolation tasks with variable durations. Composable Diffusion targets only Text-to-Image generation.
- **Objective of Method**: Composable Diffusion focuses on composing concepts at inference without further training ($p(\boldsymbol{x}|\boldsymbol{y}^{(1)}, ..., \boldsymbol{y}^{(C)})$ where $c_i \in [1,C]$ represents each concept). We aim for a single model handling diverse conditional distributions ($p(\boldsymbol{x}_0^{(n \in \mathcal{N}_x^c)}, \boldsymbol{y}_0^{(n \in \mathcal{N}_y^c)} \mid \boldsymbol{x}_0^{(n \in \mathcal{N}_x)}, \boldsymbol{y}_0^{(n \in \mathcal{N}_y)})$ where
$\mathcal{N}_x$ and $\mathcal{N}_y$ are input index sets).
- **Training/Inference**: Composable Diffusion uses weighted summation of conditional scores during inference. We learn a general transition matrix during training and perform selective noise injection based on the task for inference.
These fundamental differences highlight that our methods can be complementary. We envision combining them for complex tasks like predicting background audio from two videos.
---
Rebuttal 2:
Title: Theoretical Background on Mixture of Noise Levels
Comment: ## Theoretical Background on Mixture of Noise Levels
$ $
### 1. Theoretical Background on Multimodal Learning
In *"A Theory of Multimodal Learning"* [1], multimodal learning is shown to offer a superior **generalization bound** compared to unimodal learning, with an improvement factor of **$O(\sqrt{n})$**, where **$n$** denotes the sample size. This benefit relies on **connection** and **heterogeneity** between modalities:
- **Connection**: The bound depends on learned connections between (**$\mathcal{X}$**) and (**$\mathcal{Y}$**).
- **Heterogeneity**: Describes how modalities, **$\mathcal{X}$** and **$\mathcal{Y}$**, diverge and complement.
If connection and heterogeneity are missing, ill-conditioned scenarios can arise. For instance, if **$x \equiv y$**, perfect connection suggests no need for learning about **$\mathcal{Y}$**. On the other hand, if **$x$** is random noise, there is heterogeneity but no meaningful connection between **$\mathcal{X}$** and **$\mathcal{Y}$**, making non-trivial learning on **$\mathcal{X}$** alone impractical.
The theory also highlights that learning effective connections between modalities via **generative models** can enhance multimodal learning. This forms the basis for our **Mixture of Noise Levels (MoNL)** approach, which is particularly suited for multimodal learning with **audio** and **video** data.
[1] Zhou Lu. *"A Theory of Multimodal Learning."* NeurIPS 2023.
$ $
### 2. Advantages of Mixture of Noise Level Training (MoNL)
Our **Mixture of Noise Level (MoNL)** training method offers significant benefits for multimodal learning, especially with **audio** and **video** data:
- **Heterogeneity and Connection**: Audio and video are naturally heterogeneous. For example, a video of a person speaking includes **audio** of spoken words and **video** of lip movements and facial expressions. MoNL uses **variable noise levels** to enhance learning by capturing the **generic transition matrix** across the **temporal axis**.
$$p _{\mathbf{\theta}}([\mathbf{z} _{t ^{(1,1)}-1} ^{(1,1)}, \ldots, \mathbf{z} _{t ^{(M, N)}-1} ^{(M, N)}] \mid[\mathbf{z} _{t ^{(1,1)}} ^{(1,1)}, \ldots, \mathbf{z} _{t ^{(M, N)}} ^{(M, N)}]) \qquad \text{(Eq. (4))}$$
where $M$, $N$ are the number of modalities and time-segments, respectively.
- **Enhanced Connectivity**: MoNL improves **connectivity** between **audio** and **video** modalities. Our experiments show that MoNL often surpasses **task-specific learning** approaches by fostering better connections between modalities, adapting its focus more effectively.
$ $
### 3. Enhanced Connectivity - Comparison with Existing Methods
- **MoNL vs. Joint Learning in MMD** [2]: Unlike joint learning methods that focus on the joint distribution $p_{\mathbf{\theta}}(\mathbf{z} _{t-1} \mid\mathbf{z} _{t})$, MoNL trains across **multiple conditioning**, enabling better connections by varying its focus. This is evidenced by MoNL outperforming the Vanilla (see Table 1) and MMD models (see Tables 2 and 3).
- **MoNL vs. Per-Modality Training**: MoNL goes beyond per-modality training in UniDiffuser [3], which uses variable noise between modalities i.e., learning $p_{\mathbf{\theta}}([\mathbf{z} _{t ^{(1)}-1} ^{(1)}, \ldots, \mathbf{z} _{t ^{(M)}-1} ^{(M)}] \mid[\mathbf{z} _{t ^{(1)}} ^{(1)}, \ldots, \mathbf{z} _{t ^{(M)}} ^{(M)}])$. MoNL introduces variable noise across different **time segments**, learning connections across **temporal dynamics** as well. This advantage is demonstrated in Table 1.
- **MoNL vs. Masked Training** [4]: Diffusion models often obscure high-frequency details with low noise and low-frequency structures with high noise [5]. MoNL employs variable noise levels to explore diverse **frequency components**, enhancing the model's ability to correlate high and low-frequency elements. This is in contrast to masked self-supervised learning, which limits frequency-specific connections by masking entire elements.
[2] Ruan et al. *"MM-Diffusion: Learning Multi-Modal Diffusion Models for Joint Audio and Video Generation."* CVPR 2023.
[3] Bao et al. *"One Transformer Fits All Distributions in Multi-Modal Diffusion at Scale."* ICML 2023.
[4] Voleti et al. *"MCVD: Masked Conditional Video Diffusion for Prediction, Generation, and Interpolation."* NeurIPS 2022.
[5] Sander Dieleman. *"Noise Schedules Considered Harmful."* [Link](https://sander.ai/2024/06/14/noise-schedules).
$ $
### Conclusion
In summary, the effectiveness of **MoNL** for **multimodal diffusion models**, particularly with **audio** and **video** data, stems from its strategic use of **connection** and **heterogeneity**. By applying **variable noise levels**, MoNL enhances **connectivity** between modalities and better adapts to diverse **temporal** and **frequency components**, leading to superior performance compared to existing multimodal learning methods.
---
Rebuttal 3:
Title: Request for Final Review: Response and Theoretical Clarifications
Comment: Dear Reviewer bNvz,
We kindly request that you review our responses, as we have only **two days** left for further **discussion**. We have addressed your comments thoughtfully and provided a thorough **theoretical background** of our method. We sincerely appreciate the time and effort you have dedicated to reviewing our paper and your constructive and insightful feedback.
Thank you once again.
Best regards,
The Authors
---
Rebuttal 4:
Comment: Dear Reviewer bNvz,
We would like to gently remind you that we have **only one day remaining** for further discussion on our manuscript. We have carefully considered your valuable feedback and provided **theoretical backing** for our methodology.
We greatly appreciate your time and insights thus far.
Thank you again for your contributions.
Sincerely,
The Authors
---
Rebuttal Comment 4.1:
Comment: Dear Authors,
I thank you for the detailed explanations and clarifications. After careful consideration and going through the other reviews I’m increasing my rating to weak Accept. The reason of not giving a higher score is due to lack of theoretical novelty.
---
Rebuttal 5:
Comment: Dear Reviewer bNvz,
We sincerely appreciate your time and thoughtful feedback. We are grateful for your recognition of the detailed explanations and clarifications provided, and for upgrading our paper to a "weak accept."
We recognize the need for stronger theoretical foundations. While we acknowledge the current limitations in the theoretical landscape of multimodal learning, as highlighted in *"A Theory of Multimodal Learning"* (Lu, NeurIPS 2023), we believe our research makes a substantial empirical contribution to this field. Our findings serve as a robust foundation for future theoretical explorations and advancements.
We are committed to addressing the theoretical aspects of our work in greater depth in our ongoing research. Thank you once again for your valuable insights.
Sincerely,
The Authors | Rebuttal 1:
Rebuttal: We sincerely thank the reviewers for their constructive and insightful feedback.
Their recognition of the paper's strengths has been invaluable. We are particularly grateful for their positive comments on:
- **The effectiveness of our approach**: The reviewers highlighted the innovative use of mixed noise levels, which significantly enhances the model's robustness and overall performance.
- **The comprehensive evaluation**: Our fair and thorough experimental analysis, including qualitative and quantitative assessments, has been acknowledged as crucial in demonstrating the method's efficacy.
- **The clarity and readability of the paper**: The reviewers commended the paper's well-structured presentation and ease of understanding.
- **The flexibility and versatility of our model**: The ability to handle diverse audiovisual generation tasks within a unified framework has been recognized as a key contribution.
- **The strong empirical results**: The reviewers noted the impressive performance of our method across various benchmarks and tasks, including the notable improvements in AV-inpaint and AV-continue.
We have carefully considered all reviewer comments and have made substantial revisions to the paper accordingly. Detailed responses to each point can be found in the rebuttal. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Customized Subgraph Selection and Encoding for Drug-drug Interaction Prediction | Accept (poster) | Summary: The paper proposes a novel subgraph-based approach for predicting drug-drug interactions (DDIs) that harnesses neural architecture search (NAS) to customize subgraph selection and encoding process. The authors first introduce refined search spaces to realize fine-grained subgraph selection and expressive encoding function searching. Then, based on the well-defined bi-level search problem, the subgraph space relaxation mechanism and the representation approximation strategy are proposed, enabling differentiable searching efficiently. Extensive experiments show that CSSE-DDI extensively outperforms the state-of-the-art approaches.
Strengths: 1.The problem and the method are well-motivated and formulated, with extensive experimental support. The results are solid.
2.The idea of customizing subgraph selection and encoding is a conceptually strong and justified innovation, which can distinguish this paper from other approaches, as shown in Table 1.
3.The manuscript is well-structured and clearly written, facilitating comprehension.
4.CSSE-DDI demonstrates superior performance on two recognized benchmarks, indicating its practical effectiveness. The case study shows interpretability in the context of drug interactions, which is a very important aspect in such an application paper.
Weaknesses: 1. For the supernet training phase, in addition to the algorithm process, the authors are encouraged to provide an illustration to help the reader understand the corresponding steps more clearly.
2. A typo error in line 220: “Subgraph Repersentation“
Technical Quality: 4
Clarity: 4
Questions for Authors: Given the method itself is general, is there any reason why the authors would like to specifically focus on DDI prediction?
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and efforts in reviewing our paper. Please find our responses below to your concerns.
> W1. For the supernet training phase, in addition to the algorithm process, the authors are encouraged to provide an illustration to help the reader understand the corresponding steps more clearly.
**R1**. Thanks to your suggestion, we will add a graphical illustration of the search process in the revised version to show the supernet training process more clearly.
> W2. A typo error in line 220: “Subgraph Repersentation“
**R2**. Thank you for your careful review! We have thoroughly checked the entire text and corrected all spelling and formatting issues.
> Q1. Given the method itself is general, is there any reason why the authors would like to specifically focus on DDI prediction?
**R3**. DDI is a critical task in pharmacology and healthcare. Compared with generalized multi-relational graph link prediction, DDI prediction has more explicit application scenarios and practical needs. Therefore, we design CSSE-DDI based on the DDI prediction task and datasets. However, our method can also be used for generalized multi-relational graph link prediction, i.e., knowledge graph relation prediction, after modifying the search space, and we will include some discussion of knowledge graph relation prediction in the revised version. | Summary: The article presents a novel method for predicting drug-drug interactions (DDIs) by using a customized subgraph selection and encoding process. The authors propose a framework called Customized Subgraph Selection and Encoding for Drug-Drug Interaction prediction (CSSE-DDI), which leverages neural architecture search (NAS) to tailor the subgraph selection and encoding components for different datasets. Extensive experiments demonstrate the superior performance of the CSSE-DDI framework compared to hand-designed methods.
Strengths: 1.The use of NAS to customize subgraph selection and encoding for DDI prediction is a novel approach that addresses the limitations of fixed, hand-designed methods.
2. The creation of extensive subgraph selection and encoding spaces allows for more accurate and context-specific predictions.
3. The framework's ability to adapt to different datasets and customize subgraph components based on data-specific characteristics is a significant strength.
Weaknesses: 1.The author applies NAS to the DDI task, but it does not exhibit the task-specific customization expected for DDI. For instance, the primary focus of the DDI task should be the interactions between drugs. However, the author raises the following issues: "The relaxation strategy is a prerequisite for differentiable NAS methods. This is because the subgraphs in the selection space comprise different nodes and edges, making it challenging to design a relaxation function that unifies subgraphs of varying sizes. Additionally, to search within the subgraph selection space, we must first obtain all subgraphs within this space, but the task of sampling such a large number of subgraphs is computationally infeasible." These issues are also present in subgraph extraction tasks on a single graph.
2.To address the problem of subgraph sampling, the author employs an implicit encoding sampling method. However, its effectiveness in terms of time efficiency and accuracy lacks sufficient evidence.
3.In the DDI task, the subgraph sampling space grows exponentially. The author's method for addressing the excessively large sampling space does not offer any advantages over the methods used to address the large sampling space in a single graph.
4.The author does not provide a comparison of the time overhead between their NAS method and the baseline methods. Given that NAS is likely to have significantly high time overhead, this comparison is crucial for evaluating model performance.
5.The author lacks comparisons with the latest baselines to highlight the performance advantages of their model. For instance, only one of the baselines chosen by the author is from 2023.
Technical Quality: 3
Clarity: 3
Questions for Authors: If the author solves the above weaknesses, I am willing to change my score.
In addition, the author's method solves the subgraph sampling problem. Whether the model can perform visual analysis on the sampled subgraphs is critical for the interpretability of the model.
If possible, I hope the author can provide more baseline comparisons and include a greater variety of experimental settings (such as inductive settings, which are common in many DDI tasks), as well as experiments on more datasets.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and efforts in reviewing our paper. Please find our responses below to your concerns.
> W1. About task-specific customization expected for DDI.
**R1**. Please refer to the general response **GR1**.
> W2. Time efficiency and accuracy about implicit encoding sampling method.
**R2**. The time overhead of using explicit subgraphs is huge. When we use explicit subgraphs, we need to sample all candidate subgraphs of a query. For $\eta=3$ on Drugbank, the sampling time is 15 hours. In the process of searching, the memory overhead will be multiple times that of the subgraph-based method.
However, we can demonstrate the effectiveness of our method in the following way: we collect the subgraph sizes of different queries obtained from the search process, explicitly sample subgraphs of the corresponding sizes, and use the searched message functions to encode the subgraphs and predict the interactions between drug pairs; we call this variant CSSE-DDI (explicit). Table 4 shows the experimental results. It can be seen that the prediction performance of explicit sampling and implicit sampling is similar. However, in terms of single-run time, the explicit sampling method needs to perform separate message passing on each subgraph, whereas the implicit sampling method is more efficient, as it can quickly obtain subgraph representations using the subgraph representation approximation strategy.
|Table 4|Drugbank|||
|-------|-------|-------|-------|
| |F1|ACC|Running time(minutes)|
|CSSE-DDI(explicit)|91.63±0.29|94.23±0.54|201|
|**CSSE-DDI**|92.08±0.22|95.56±0.15|32|
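For readers unfamiliar with differentiable selection, the general idea behind relaxing a discrete subgraph-scope choice (e.g., candidate hop sizes) into a continuous mixture can be sketched DARTS-style: a softmax over learnable architecture weights blends the candidate representations so the choice becomes differentiable. This is a generic illustration under our own naming assumptions, not the paper's exact relaxation or approximation strategy.

```python
import numpy as np

rng = np.random.default_rng(0)
K, D = 3, 16   # candidate subgraph hop sizes (e.g. 1-3 hops), embedding dim

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def relaxed_subgraph_repr(hop_reprs, alpha):
    """hop_reprs: (K, D) representations of the K candidate subgraph scopes
    (random placeholders here; in practice produced by shared message
    passing). alpha: (K,) learnable architecture weights; the softmax turns
    the hard pick-one-subgraph decision into a differentiable convex mixture."""
    w = softmax(alpha)
    return w @ hop_reprs

alpha = np.zeros(K)   # updated by gradient descent during supernet training
h = relaxed_subgraph_repr(rng.standard_normal((K, D)), alpha)
```

After search, the argmax over the learned weights yields the discrete subgraph choice for each query, avoiding the need to materialize every candidate subgraph during the search itself.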
> W3. About the advantage of the proposed subgraph sampling space.
**R3**. Please refer to the general response **GR1**.
> W4. About time overhead between their NAS method and the baseline methods.
**R4**. Please refer to the general response **GR2**.
> W5. Lack comparisons with the latest baselines.
**R5**. Thanks for the suggestion. We add the results of comparisons with multiple latest baselines, as shown in Table 6, including LaGAT[1] published in 2022, ACDGNN[2] published in 2023, and TransFOL[3] published in 2024. All codes are publicly available.
|Table 6|Drugbank||TWOSIDES||
|-------|-------|-------|-------|-------|
| |F1|ACC|ROC-AUC|PR-AUC|
|LaGAT (2022)|81.63±0.56|86.21±0.18|89.78±0.21|86.33±0.15|
|ACDGNN (2023)|86.24±0.93|90.53±0.38|93.69±0.47|92.12±0.21|
|TransFOL (2024)|89.97±1.64|91.92±0.89|94.16±0.62|93.52±0.53|
|**CSSE-DDI**|92.08±0.22|95.56±0.15|95.47±0.02|94.21±0.05|
As can be seen, CSSE-DDI is still competitive against the latest baselines. LaGAT [1] still extracts the same range of subgraphs for different queries; in addition, R-GAT remains a suboptimal solution for handling diverse DDI interaction data. For ACDGNN, its overall framework is still based on a complete bio-heterogeneous network for reasoning, and its expressive power is limited compared with the subgraph-based approach. For TransFOL, which utilizes cross-transformers and graph convolutional networks to deal with interactions in DDI datasets, it mainly focuses on complex logical query tasks in DDI data and does not have an advantage in single drug-drug interaction prediction tasks.
> Q1. About visual analysis.
**R6**. We visualize some of the sampled subgraphs in Section 4.5.1 and conduct case studies to demonstrate the effectiveness and interpretability of our method. In the revised version, we will add more visual analysis.
> Q2. Provide more baseline comparisons, inductive setings, as well as experiments on more datasets.
**R7**. For the experimental results of the latest baselines, please refer to **R5**.
In terms of inductive settings, we use the inductive setting and datasets of the EmerGNN [4] method to predict drug-drug interactions between emerging drugs and existing drugs. The experimental results are shown in Table 7. EmerGNN, which is specifically designed for new drug prediction, achieves optimal performance. However, CSSE-DDI still achieves impressive performance, which is mainly due to the robust learning ability of NAS technology on unknown data.
|Table 7|Drugbank||TWOSIDES||
|-------|-------|-------|-------|-------|
| |F1|ACC|ROC-AUC|PR-AUC|
|CompGCN|30.98±3.26|52.76±0.46|84.83±1.02|83.68±1.86|
|SumGNN|26.57±1.59|44.30±1.04|80.02±2.17|78.42±1.62|
|KnowDDI|31.14±1.24|53.44±1.73|84.23±2.63|82.58±1.94|
|EmerGNN|58.13±1.36|69.53±1.97|87.42±0.39|86.20±0.71|
|**CSSE-DDI**|37.24±1.13|58.57±0.85|88.33±0.52|86.47±0.27|
For more datasets, most of the research work on DDI prediction only uses the DrugBank and TWOSIDES datasets, but we have been continually looking to see if there are other available datasets to validate the effectiveness of our approach.
[1] LaGAT: link-aware graph attention network for drug–drug interaction prediction, Bioinformatics' 22.
[2] Attention-based cross domain graph neural network for prediction of drug–drug interactions, Briefings in Bioinformatics' 23.
[3] TransFOL: A Logical Query Model for Complex Relational Reasoning in Drug-Drug Interaction, IEEE Journal of Biomedical and Health Informatics' 24.
[4] Emerging Drug Interaction Prediction Enabled by Flow-based Graph Neural Network with Biomedical Network. Nature Computational Science' 23.
---
Rebuttal Comment 1.1:
Comment: Your reply has solved most of my questions, but I still have some doubts about the innovation of the method: the customization for the DDI prediction task is limited to building a search space that adapts to the DDI dataset. I think the innovation is limited, as the method and theory are not innovated in combination with the DDI task itself. If there is such innovation, I hope the authors can further clarify it. Otherwise, I think this paper simply transfers a method applied on a single molecule to the DDI task.
---
Reply to Comment 1.1.1:
Comment: Thank you for your patience and for the constructive feedback that has helped us improve our work. We would like to respectfully argue that CSSE-DDI introduces crucial task-specific customization for DDI prediction. Our comprehensive and thorough analysis demonstrates our insight into the core issues of the drug-drug interaction (DDI) prediction task, which prompted us to utilize NAS technology for precise prediction of drug-drug interactions. For NAS-based methods, innovation typically manifests in the design of a search space informed by domain insights, coupled with search strategies capable of efficiently exploring the proposed space. Therefore, we wish to emphasize that our manuscript presents clear motivation and novel, effective designs specifically tailored for the drug-drug interaction (DDI) prediction problem.
Strengths: it introduces comprehensive subgraph selection and encoding spaces to cover the diverse contexts of drug interactions for DDI prediction. Faced with overwhelming sampling overhead, this work designs an effective relaxation mechanism to efficiently explore optimal subgraph configurations using an approximation strategy, enabling a robust search algorithm to explore the search space efficiently.
Weaknesses: Examples of symmetric semantic patterns (headache, pain in throat) are not very convincing.
Technical Quality: 2
Clarity: 2
Questions for Authors: CSSE-DDI-FF and KnowDDI both perform fine-grained subgraph selection. What is the reason for the performance difference?
Can CSSE-DDI be used to predict DDIs for new drugs?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and efforts in reviewing our paper. Please find our responses below to your concerns.
> W1. Examples of symmetric semantic patterns (headache, pain in throat) are not very convincing. (semantic property)
**R1**. In the field of multi-relational graphs, the modeling of different semantic types of interaction relationships has been a research hotspot. Many works [1-4] have tried to propose diverse hand-designed interaction functions to model different properties of semantic attributes. In the DDI dataset, the semantic nature of drug interactions is diverse, and our examples of asymmetric and symmetric types are meant to illustrate the semantic diversity across datasets. Such diversity is what motivates us to use NAS to search for adaptive models.
> Q1.1 CSSE-DDI-FF and KnowDDI both are fine-grained Subgraph Selection. What is the reason for the performance difference?
**R2.1**. It should be noted here that the query subgraph of KnowDDI, which is extracted by combining external biological knowledge graphs (HetioNet), contains richer external information, such as genes, diseases, and proteins. We label KnowDDI as a fine-grained subgraph selection method, which means that it utilizes a graph structure learning module to realize fine-grained information supplementation and pruning of the extracted subgraphs. Although our variant (CSSE-DDI-FF) also carries out fine-grained subgraph selection, it only employs a basic GNN backbone (CompGCN) to encode subgraphs without external knowledge, which makes it difficult to effectively capture the semantic information in the subgraphs and thus leads to some performance differences.
Moreover, CSSE-DDI still outperforms KnowDDI without relying on external knowledge, through fine-grained subgraph selection and a data-specific encoding function, which directly demonstrates the effectiveness of our method in capturing complex interactions in DDI datasets.
> Q1.2 Whether CSSE-DDI can be used to predict DDI for new drugs?
**R2.2**. As the reviewer pointed out, there is a strong need to predict DDIs involving new drugs in real-world scenarios. Therefore, to further validate the effectiveness of our method, we use the inductive setting and datasets of the EmerGNN [5] method to predict drug-drug interactions between emerging drugs and existing drugs. The experimental results are shown in Table 3. It can be seen that there is a significant performance drop from the transductive setting to the inductive setting, which shows that DDI prediction for new drugs is more difficult. EmerGNN, which is specifically designed for new drug prediction, achieves optimal performance. However, CSSE-DDI still achieves impressive performance, which is mainly due to the robust learning ability of NAS technology on unknown data.
|Table 3|Drugbank||TWOSIDES||
|-------|-------|-------|-------|-------|
| |F1|ACC|ROC-AUC|PR-AUC|
|CompGCN|30.98±3.26|52.76±0.46|84.83±1.02|83.68±1.86|
|Decagon|11.39±0.79|32.56±0.92|57.49±1.75|59.38±1.09|
|SumGNN|26.57±1.59|44.30±1.04|80.02±2.17|78.42±1.62|
|KnowDDI|31.14±1.24|53.44±1.73|84.23±2.63|82.58±1.94|
|EmerGNN|58.13±1.36|69.53±1.97|87.42±0.39|86.20±0.71|
|**CSSE-DDI**|37.24±1.13|58.57±0.85|88.33±0.52|86.47±0.27|
**References**
[1] Translating Embeddings for Modeling Multi-relational Data, NeurIPS' 13.
[2] Embedding Entities and Relations for Learning and Inference in Knowledge Bases, ICLR' 15.
[3] Holographic Embeddings of Knowledge Graphs, AAAI' 16.
[4] RotatE: Knowledge Graph Embedding by Relational Rotation in Complex Space, ICLR' 19.
[5] Emerging Drug Interaction Prediction Enabled by Flow-based Graph Neural Network with Biomedical Network. Nature Computational Science' 23. | Summary: This paper addresses the challenge of predicting drug-drug interactions (DDIs), crucial for medical practice and drug development, using subgraph-based methods. It highlights the importance of customizing subgraph selection and encoding but notes the high cost of manual adjustments. Inspired by neural architecture search (NAS), the authors propose a method to search for data-specific components in the subgraph-based pipeline. They introduce extensive subgraph selection and encoding spaces and design a relaxation mechanism to efficiently explore optimal configurations. Extensive experiments demonstrate the method's effectiveness and adaptability.
Strengths: 1. Extensive experiments demonstrate the method's effectiveness. Compared to existing hand-designed methods, the CSSE-DDI framework shows superior performance, enhancing the proposed method's validity.
2. The writing quality is good.
3. Using NAS for precise prediction of drug-drug interactions is novel.
Weaknesses: 1. The motivation for using NAS to search components is not clear.
2. Despite the designed efficiency mechanisms, the inherent overhead of neural architecture search remains significant, especially for large-scale DDI prediction tasks.
3. While the paper compares the proposed method with existing ones, a more detailed comparative analysis, including discussions on computational costs and efficiency metrics, would provide a clearer picture of the trade-offs involved.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Why is the performance of GAT in DrugBank so poor? In [1], it seems to work well.
2. Why does the NAS method proposed by the authors appear to perform much better than other NAS methods in the DrugBank dataset? What is the core reason for this?
[1] Hong Y, Luo P, Jin S, et al. LaGAT: link-aware graph attention network for drug–drug interaction prediction[J]. Bioinformatics, 2022, 38(24): 5406-5412.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and efforts in reviewing our paper. Please find our responses below to your concerns.
> W1. The motivation for using NAS to search components is not clear.
**R1**. The use of NAS technology to customize components stems from our analysis and understanding of the DDI prediction problem and datasets in the following two ways:
1. Different DDI datasets have different semantic natures of interactions. (asymmetric patterns in DrugBank and symmetric in TWOSIDES)
2. Different drug-pair queries should require different ranges of subgraph information, rather than a fixed range. (See our empirical analysis in the supplementary PDF (Figure A1).)
Based on the above analysis, we believe that in the face of diverse data properties, the use of customized search techniques, i.e., NAS, can help improve the performance of DDI prediction task and the interpretability of the results, as evidenced by our extensive experiments.
> W2. Despite the designed efficiency mechanisms, the inherent overhead of neural architecture search remains significant, especially for large-scale DDI prediction tasks.
**R2**. We would like to kindly argue that our search algorithm, with its implicit subgraph sampling strategy, is designed precisely to cope with large-scale graph data. Table 1 shows the running overhead of our method and other subgraph methods on the DrugBank dataset. For the subgraph methods, the number before the "+" sign is the explicit subgraph sampling overhead; for our method, it is the search overhead. The number after the "+" sign is the model's single running time.
As can be seen, CSSE-DDI is comparable or even superior to the subgraph approaches in terms of time overhead, since our implicit subgraph sampling strategy saves a lot of time in the subgraph sampling and representation phases. If the size of the dataset increases further, the number of subgraphs to sample grows accordingly, and the advantage of our implicit sampling strategy becomes even more significant.
|Table 1|Running time(minutes)|
|-------|-------|
|SumGNN|342+361|
|KnowDDI|379+393|
|**CSSE-DDI**|444+32|
> W3. While the paper compares the proposed method with existing ones, a more detailed comparative analysis, including discussions on computational costs and efficiency metrics, would provide a clearer picture of the trade-offs involved.
**R3**. For the discussion of the time overhead, please refer to **R2**.
Overall, compared with subgraph methods, CSSE-DDI has lower time overhead, which is the advantage brought by our proposed implicit subgraph sampling strategy. Compared with GNN-based methods, CSSE-DDI designs, based on an analysis of the DDI prediction problem and datasets, a search space that can adapt to different data; by efficiently searching for the components, it substantially improves prediction performance under a tolerable time overhead. In addition, CSSE-DDI can design robust and customized model structures to cope with unknown datasets, which is exactly the advantage of utilizing NAS technology for the DDI prediction problem.
> Q1.1. Why is the performance of GAT in DrugBank so poor?
**R4.1**. A large body of literature [1-3] points out that although GAT [4] considers attention over different edges, it fails to account for more general and complex multi-relational graph data containing diverse semantic information, on which its performance is also poor. Therefore, directly applying GAT to DDI graphs results in suboptimal solutions.
> Q1.2. In [3], it seems to work well.
**R4.2**. Compared with GAT, LaGAT extends the graph encoding module of SumGNN and utilizes a multi-relational GAT to take into account the diverse relation types and the attention weights of different edges in the DDI data. We re-ran its code on the DrugBank and TWOSIDES datasets, and the comparison results are shown in Table 2. As shown below, our approach still achieves the best performance by customizing the subgraph selection and encoding process. We were unable to reproduce the performance reported by LaGAT: the official repository does not release the corresponding dataset, although it claims to use the same DrugBank dataset as SumGNN.
|Table 2|DrugBank||TWOSIDES||
|-------|-------|-------|-------|-------|
| |F1|ACC|ROC-AUC|PR-AUC|
|LaGAT|81.63±0.56|86.21±0.18|89.78±0.21|86.33±0.15|
|**CSSE-DDI**|92.08±0.22|95.56±0.15|95.47±0.02|94.21±0.05|
> Q2. Why does the NAS method proposed by the authors appear to perform much better than other NAS methods in the DrugBank dataset? What is the core reason for this?
**R5**. Compared with other NAS methods, our method has advantages in both search space and search algorithm.
- **Search space**: we design subgraph selection spaces and subgraph encoding spaces adapted to the DDI dataset (Sec 3.2), whereas other methods are only limited to generalized multi-relational graphs and do not have task-specific customization for the DDI prediction task.
- **Search algorithm**: we design a message-aware partition supernet training strategy for the operation coupling problem (Sec. 3.3.4), which improves the consistency and accuracy of the supernet, making the search algorithm more stable and robust, whereas the other methods only use the traditional one-shot search algorithm, which leads to sub-optimal performance.
**References**
[1] SumGNN: Multi-typed Drug Interaction Prediction via Efficient Knowledge Graph Summarization, Bioinformatics' 21.
[2] r-GAT: Relational Graph Attention Network for Multi-Relational Graphs. Arxiv' 21.
[3] LaGAT: link-aware graph attention network for drug–drug interaction prediction, Bioinformatics' 22.
[4] Graph Attention Networks, ICLR' 18.
---
Rebuttal Comment 1.1:
Comment: Thank you for the authors' response. I have carefully read the rebuttal, and the analysis of the motivation and the efficiency experiments have largely resolved my doubts. However, I still have some minor questions regarding some experimental results. GAT performed quite well in the literature [3], so why did it perform only moderately in the authors' experimental results? I will increase my rating to 5.
---
Reply to Comment 1.1.1:
Comment: Thank you for your patience and suggestions in helping us improve our work. For all baseline results (including GAT [4]), we reran the official code and fine-tuned the hyperparameters to ensure consistency with the conclusions of existing literature [1]. Regarding the GAT results in [3], based on our experience, the results appear to be on the higher side. We will reach out to the authors of [3] to obtain more details about the results. | Rebuttal 1:
Rebuttal: Dear reviewers,
Thank you for your time and comments in reviewing our paper. To summarize, all reviewers agree that **the use of NAS to customize subgraph selection and encoding for DDI prediction is a novel approach** (Qw7Z, zvSg, KE8i, 2Vrd). **The manuscript is well-structured and clearly written, facilitating comprehension** (Qw7Z, KE8i). **The problem and the method are well-motivated and formulated, with extensive experimental support** (zvSg, KE8i). The proposed method, CSSE-DDI, **addresses the DDI prediction problem effectively** (Qw7Z, zvSg, KE8i). The case study **shows interpretability in the context of drug interactions, which is a very important aspect in such an application paper** (KE8i).
We believe all of the reviewers’ concerns can be addressed. In the following, we respond to the main concerns and suggestions raised in the review:
### GR1. Task-specific customization for DDI (zvSg, Qw7Z)
Our customization for the DDI prediction task is the search space adapted to the DDI datasets:
- **Search Space**: We design subgraph selection spaces **(the dense property of DDI dataset)** and subgraph encoding spaces **(the diverse semantic nature of DDI dataset, i.e., asymmetric patterns in DrugBank and symmetric in TWOSIDES)** adapted to the DDI dataset (Sec 3.2).
### GR2. The inherent overhead of neural architecture search (Qw7Z, zvSg)
We would like to kindly argue that **our search algorithm, with its implicit subgraph sampling strategy, is efficient**. Table G1 shows the running overhead of our method and other subgraph methods on the DrugBank dataset. For the subgraph methods, the number before the "+" sign is the explicit subgraph sampling overhead; for our method, it is the search overhead. The number after the "+" sign is the model's single running time.
|Table G1|Running time(minutes)|
|-------|-------|
|SumGNN|342+361|
|KnowDDI|379+393|
|**CSSE-DDI**|444+32|
In general, **CSSE-DDI and the subgraph approach are comparable or even superior in terms of time overhead**, which stems from the fact that our implicit subgraph sampling strategy saves a lot of time in the subgraph sampling and representation phases.
### GR3. Inductive setting (Qw7Z,2Vrd)
There is a strong need to predict DDIs related to new drugs in real-world scenarios. Therefore, to further validate the effectiveness of our method, we adopt the inductive setting and datasets from EmerGNN [1] to predict drug-drug interactions between emerging drugs and existing drugs. The experimental results are shown in Table G2. There is a significant performance drop from the transductive setting to the inductive setting, which shows that DDI prediction for new drugs is more difficult. EmerGNN, which is specifically designed for new-drug prediction, achieves the best performance. However, **CSSE-DDI still achieves impressive performance**, which is mainly due to the robust learning ability of NAS technology on unknown data.
|Table G2|DrugBank||TWOSIDES||
|-------|-------|-------|-------|-------|
| |F1|ACC|ROC-AUC|PR-AUC|
|CompGCN|30.98±3.26|52.76±0.46|84.83±1.02|83.68±1.86|
|Decagon|11.39±0.79|32.56±0.92|57.49±1.75|59.38±1.09|
|SumGNN|26.57±1.59|44.30±1.04|80.02±2.17|78.42±1.62|
|KnowDDI|31.14±1.24|53.44±1.73|84.23±2.63|82.58±1.94|
|EmerGNN|58.13±1.36|69.53±1.97|87.42±0.39|86.20±0.71|
|**CSSE-DDI**|37.24±1.13|58.57±0.85|88.33±0.52|86.47±0.27|
Please let us know if there are any outstanding concerns, and we are happy to discuss them. We would appreciate it if you could take our responses into consideration when making the final evaluation of our work.
Sincerely,
Authors
**References**
[1] Emerging Drug Interaction Prediction Enabled by Flow-based Graph Neural Network with Biomedical Network. Nature Computational Science' 23.
Pdf: /pdf/2c2c9b44616ea35b3b7c8e7ecbf44eee4a6a38ac.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Delta-CoMe: Training-Free Delta-Compression with Mixed-Precision for Large Language Models | Accept (poster) | Summary: This paper proposes an application of the mixed precision quantization technique to the singular vectors of the delta weights, which is encountered when serving multiple aligned LLMs.
Strengths: Please see the “Questions” section.
Weaknesses: Please see the “Questions” section.
Technical Quality: 2
Clarity: 3
Questions for Authors: My review is as follows:
1) As a strength, this paper is well-written and makes it easy for the reader to follow the ideas.
2) The proposed method is a somewhat straightforward extension of mixed precision quantization to the singular vectors of the delta weights. The mixed precision quantization technique is well-known and commonly used by practitioners. I view the contribution of this paper as an extension of that. Because of that, I am not sure if the contributions of the paper could be considered novel. Having said that, it is good to see that the method performs well in multiple scenarios.
3) I understand that studying the compression of delta weights is important. In practice, the base model’s weights need to be quantized/compressed too for real-world situations. I don’t think the compression of base model weights and delta weights are completely orthogonal. I wonder what the authors think on the interdependency of the compression of base model weights and delta weights. In my opinion, it would improve the paper to include some results/discussion on this.
4) I’m wondering why the authors haven’t considered a more thorough search algorithm in Section 5.1 (or if they did, it would be great to include a discussion on it).
Minor:
5) Typo in Table 1: “Aligend”
6) Typo in Table 6: “HuamnEval”
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your questions.
# For question 2.
Representative works on mixed precision include Yao et al., Agile-Quant, and SpQR, which apply mixed precision to model weights and activations, and Bablani et al., who apply mixed precision across different layers; in contrast, we adopt different precisions across different singular values in the feature space. Recently, Apple Intelligence has also applied mixed precision across different layers. As reviewer Wicf said, "While neither low-rank nor mixed precision quantization techniques are particularly novel, the combination of the two for delta-compression is novel and clever." As far as we know, we are the first to propose mixed-precision compression in the feature space and in multi-modal settings.
# For question 3
We compress the backbone into 4-bit and test performance, as shown in the following tables. Our method still retains performance even when the backbone is compressed in a low-bit format.
| | | GSM8K | delta |
|-------------------|-------------------|--------|--------|
| 4-bit backbone | WizardMath 4-bit | 49.36% | |
| | Llama2 4-bit + 1bit delta | 47.01% | -2.3% |
| 16-bit backbone | WizardMath 16-bit | 55.2% | |
| | Llama2 16-bit + 1bit delta | 53.6% | -1.6% |
| | | MBPP | delta |
|-------------------|-------------------|--------|--------|
| 4-bit backbone | Magicoder 4-bit | 66.2% | |
| | Codellama-python 4-bit + 1bit delta | 65.4% | -0.8% |
| 16-bit backbone | Magicoder 16-bit | 66.7% | |
| | Codellama-python 16-bit+ 1bit delta | 67.2% | +0.3% |
| | | TruthfulQA | delta |
|-------------------|-------------------|--------|--------|
| 4-bit backbone | WizardMath 4-bit | 49.36% | |
| | Llama2 4-bit + 1bit delta | 47.01% | -2.3% |
| 16-bit backbone | WizardMath 16-bit | 55.2% | |
| | Llama2 16-bit + 1bit delta | 53.6% | -1.6% |
| | | TextVQA | delta |
|-------------------|-------------------|--------|--------|
| 4-bit backbone | Llava-v1.5 4-bit | 57.68% | |
| | Vicuna 4-bit + 1bit delta | 57.58% | -0.1% |
| 16-bit backbone | Llava-v1.5 16-bit | 58.2% | |
| | Vicuna 16-bit + 1bit delta| 58.5% | +0.3% |
# For question 4
Due to limited pages, we did not include the detailed procedure for deciding the number of singular values at each bit-width in the current version. When setting $r_{begin}$ and $r_{end}$, we decided based on minimizing the error between the activations of the compressed model and those of the original model. When searching for "Double Precision", using two 8-bit singular values results in the smallest error, while for "Triple Precision", 32 3-bit singular values results in the smallest error.
#### Ablation on 8-bit in "Double Precision", changing the number of 8-bit from 1-8
| Num. of 8-bit | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---------------|-----|------|------|------|------|------|------|------|
| Error (× 10⁻²) | 0.85 | **0.81** | 0.84 | 0.86 | 0.88 | 0.85 | 0.95 | 0.94 |
#### Ablation on 3-bit in "Triple Precision", changing the number of 3-bit from 8-64
| Num. of 3-bit | 8 | 16 | 24 | 32 | 40 | 48 | 56 | 64 |
|---------------|-----|------|------|------|------|------|------|------|
| Error (× 10⁻²) | 0.77 | 0.77 | 0.76 | **0.74** | 0.75 | 0.76 | 0.78 | 0.77 |
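The selection criterion above can be sketched with a toy NumPy example (all names and sizes are illustrative assumptions, with simple 8-bit uniform quantization standing in for our actual pipeline): sweep the number of high-precision singular vectors and keep the setting with the lowest activation error on calibration inputs.

```python
import numpy as np

def activation_error(w, w_hat, x):
    # MSE between original and compressed activations on calibration inputs.
    return float(np.mean((x @ w - x @ w_hat) ** 2))

def compress_top_k_8bit(w, k):
    # Keep the top-k singular directions, uniformly quantized to 8-bit,
    # and drop the rest: a minimal stand-in for one precision configuration.
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    top = (u[:, :k] * s[:k]) @ vt[:k, :]
    scale = max(np.max(np.abs(top)) / 127, 1e-12)
    return np.round(top / scale) * scale

rng = np.random.default_rng(0)
delta = rng.normal(size=(32, 4)) @ rng.normal(size=(4, 32))  # low-rank delta
calib = rng.normal(size=(16, 32))  # calibration activations

errors = {k: activation_error(delta, compress_top_k_8bit(delta, k), calib)
          for k in range(1, 9)}
best_k = min(errors, key=errors.get)
```

Because the synthetic delta has rank 4, the error drops sharply once the top four singular directions are retained, mirroring how the ablation tables above pick the error-minimizing count.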
Based on the reviewers' insights, we regard the mixed-precision issue as a multi-objective optimization problem, considering single precision to be a special case of mixed precision. We developed a genetic algorithm to address this problem, using the bit count of single precision as the initial solution. We use the objective function $f = \min \mathrm{PPL}(x_1, x_2, x_3, x_4, x_5)$, where $x_1, x_2, x_3, x_4, x_5$ denote the numbers of 16-bit, 8-bit, 4-bit, 3-bit, and 2-bit singular values, and $\mathrm{PPL}(\cdot)$ computes perplexity on 128 samples randomly chosen from the C4 dataset. For each aligned model, we can automatically determine the mixing strategy through the genetic algorithm. The results demonstrate that the genetic algorithm yields better results than greedy search, making our method easy to apply to many different aligned models.
| Models | WizardMath || Magicoder-S-CL || Llama-2-7b-chat || Llava-v1.5 || Ave. |
|---|---|---|---|---|---|---|---|---|---|
| Tasks | GSM8K | Math | HumanEval | MBPP | SafetyBench | TruthfulQA | GQA | TextVQA | |
| Loss-based greedy search | 53.6 | 10.24 | 67.1 | 67.9 | 59.8 | 46.9 | 61.7 | 58.5 | 53.2 |
| Genetic search | 53.6 | 10.24 | **69.5** | **68.9** | **59.9** | **47.3** | 61.7 | 58.5 | **53.7** |
We also conducted experiments on the 13B model, where the genetic algorithm yielded better performance.
| Models | WizardMath-13B ||
|---|---|---|
| Tasks | GSM8K | Math |
| Loss-based greedy search | 58.8 | 12.8 |
| Genetic search | **59.4** | **12.9** |
The genetic algorithm demonstrates better performance, with the average performance on the 7B models even slightly surpassing the aligned models. Moreover, even with the same settings and without the genetic algorithm, our method's performance is close to that, indicating its generalization ability.
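To make the search loop concrete, here is a minimal elitist genetic-search sketch over the bit-count allocation. This is not our actual code: the real objective is perplexity of the compressed LLM on C4 samples, replaced here by a toy surrogate that trades reconstruction error against storage cost, and all names are ours.

```python
import random

# Toy surrogate for the PPL objective: coarser bits on important (large)
# singular values cost more "error"; a storage term penalizes high bits.
IMPORTANCE = [1.0 / (i + 1) for i in range(128)]
WIDTHS = (16, 8, 4, 3, 2)

def objective(alloc):
    # alloc = (x1..x5): number of singular values at 16/8/4/3/2 bits.
    bits = []
    for n, b in zip(alloc, WIDTHS):
        bits.extend([b] * n)
    bits = (bits + [0] * 128)[:128]  # uncovered singular values are dropped
    err = sum(w * 2.0 ** (-b) for w, b in zip(IMPORTANCE, bits))
    cost = sum(n * b for n, b in zip(alloc, WIDTHS)) / (128 * 16)
    return err + 0.5 * cost

def mutate(alloc):
    new = list(alloc)
    i = random.randrange(len(new))
    new[i] = max(0, new[i] + random.choice((-4, -1, 1, 4)))
    return tuple(new)

def genetic_search(seed_alloc, pop=20, gens=40):
    random.seed(0)  # deterministic for the sketch
    population = [seed_alloc] + [mutate(seed_alloc) for _ in range(pop - 1)]
    for _ in range(gens):
        population.sort(key=objective)
        parents = population[: pop // 2]  # elitism keeps the best-so-far
        population = parents + [mutate(random.choice(parents)) for _ in parents]
    return min(population, key=objective)

single_precision = (0, 0, 0, 0, 64)  # a single-precision (all 2-bit) seed
best = genetic_search(single_precision)
```

Because the single-precision seed is in the initial population and the best half survives every generation, the returned allocation can never be worse than the seed, matching the intuition that single precision is a special case of mixed precision.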
---
Rebuttal 2:
Title: Invitation to Participate in the Discussion Period
Comment: Thank you very much for your insightful suggestions. We have provided detailed responses to your question. If you could participate in the discussion period, we would be very grateful.
---
Rebuttal Comment 2.1:
Comment: Thanks for the detailed answers to my questions. The new results are helpful. I'm however still on the fence about whether the novelty of this approach is substantial enough for a neurips paper. | Summary: In the context of SVD compression of delta weight, the paper employs higher bitwidth for singular vectors corresponding to larger singular values. The available bitwidth 8, 3, 2 are empirically chosen. Once the bitwidths are assigned to the singular vectors, the vectors are group-wise quantized by GPTQ.
Strengths: The paper is generally well-written and the experiments are very thorough. The combination of delta-compression and mixed-precision quantization appears to be novel. The method is straightforward and well-motivated.
Weaknesses: Something is wrong with the latex encoding of this pdf. Sections are not detected properly by pdf readers, and there are odd reference errors such as the ones on line 26 and 30.
There are three steps of optimization being done: the number of largest singular vectors chosen, the bitwidth assigned to the singular vectors, and the quantization algorithm performed on the singular vectors. In the paper, the three steps are done sequentially, leading to arbitrary, empirically-driven and possibly suboptimal decisions.
Technical Quality: 3
Clarity: 3
Questions for Authors: Why is there an advantage to SVD the delta weights as opposed to SVD the aligned model weights directly?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your comment.
# For weakness
Due to limited pages, we did not include the detailed procedure for deciding the number of singular values at each bit-width in the current version. When setting $r_{begin}$ and $r_{end}$, we decided based on minimizing the error between the activations of the compressed model and those of the original model. When searching for "Double Precision", using two 8-bit singular values results in the smallest error, while for "Triple Precision", 32 3-bit singular values results in the smallest error.
#### Ablation on 8-bit in "Double Precision", changing the number of 8-bit from 1-8
| Num. of 8-bit | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---------------|-----|------|------|------|------|------|------|------|
| Error (× 10⁻²) | 0.85 | **0.81** | 0.84 | 0.86 | 0.88 | 0.85 | 0.95 | 0.94 |
#### Ablation on 3-bit in "Triple Precision", changing the number of 3-bit from 8-64
| Num. of 3-bit | 8 | 16 | 24 | 32 | 40 | 48 | 56 | 64 |
|---------------|-----|------|------|------|------|------|------|------|
| Error (× 10⁻²) | 0.77 | 0.77 | 0.76 | **0.74** | 0.75 | 0.76 | 0.78 | 0.77 |
Based on the reviewers' insights, we regard the mixed-precision issue as a multi-objective optimization problem, considering single precision to be a special case of mixed precision. We developed a genetic algorithm to address this problem, using the bit count of single precision as the initial solution. We use the objective function $f = \min \mathrm{PPL}(x_1, x_2, x_3, x_4, x_5)$, where $x_1, x_2, x_3, x_4, x_5$ denote the numbers of 16-bit, 8-bit, 4-bit, 3-bit, and 2-bit singular values, and $\mathrm{PPL}(\cdot)$ computes perplexity on 128 samples randomly chosen from the C4 dataset. For each aligned model, we can automatically determine the mixing strategy through the genetic algorithm. The results demonstrate that the genetic algorithm yields better results than greedy search, making our method easy to apply to many different aligned models.
| Models | WizardMath || Magicoder-S-CL || Llama-2-7b-chat || Llava-v1.5 || Ave. |
|---|---|---|---|---|---|---|---|---|---|
| Tasks | GSM8K | Math | HumanEval | MBPP | SafetyBench | TruthfulQA | GQA | TextVQA | |
| Loss-based greedy search | 53.6 | 10.24 | 67.1 | 67.9 | 59.8 | 46.9 | 61.7 | 58.5 | 53.2 |
| Genetic search | 53.6 | 10.24 | **69.5** | **68.9** | **59.9** | **47.3** | 61.7 | 58.5 | **53.7** |
We also conducted experiments on the 13B model, where the genetic algorithm yielded better performance.
| Models | WizardMath-13B ||
|---|---|---|
| Tasks | GSM8K | Math |
| Loss-based greedy search | 58.8 | 12.8 |
| Genetic search | **59.4** | **12.9** |
The genetic algorithm demonstrates better performance, with the average performance on the 7B models even slightly surpassing the aligned models. Moreover, even with the same settings and without the genetic algorithm, our method's performance is close to that, indicating its generalization ability.
# For question
1. Compressing aligned models directly using SVD easily yields poor performance in low-bit settings. As the following table shows, using SVD to compress the model into 1-bit or 2-bit results in performance dropping to 0 on math and code tasks.
| Tasks | GSM8K | Math | HumanEval | Mbpp |
|---------------|-------|------|-----------|------|
| Compress 1-bit| 0 | 0 | 0 | 0 |
| Compress 2-bit| 0 | 0 | 0 | 0 |
2. Recently, OneBit (Xu et al.) found that compressing aligned models requires more sophisticated algorithms, and post-training is also needed. Even in this carefully designed setting, there is still a significant performance drop. For example, in Table 2 of OneBit, the average performance on 6 benchmarks decreased from 64.06 to 51.33 and from 66.39 to 55.17 on Llama-2-7B and Llama-2-13B, respectively.
3. In our experiments, the delta weights exhibit a long-tailed distribution, and SVD-based compression can retain the model in a near-lossless manner while compressing it to the equivalent of 1-bit. Therefore, compressing the delta weights achieves better performance.
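The contrast can be illustrated with a toy NumPy sketch of the mixed-precision SVD idea: compress only the delta between base and aligned weights, assigning higher precision to the singular vectors of larger singular values. Plain uniform quantization stands in here for the GPTQ-based quantization we actually use, and the band sizes and function names are illustrative assumptions, not our implementation.

```python
import numpy as np

def quantize_uniform(x, bits):
    # Plain symmetric uniform quantization (GPTQ stand-in).
    if bits >= 16:
        return x.copy()
    levels = 2 ** (bits - 1) - 1
    peak = np.max(np.abs(x))
    scale = peak / levels if peak > 0 else 1.0
    return np.round(x / scale) * scale

def compress_delta(w_base, w_aligned, bands):
    # `bands` lists (num_singular_vectors, bits) pairs, with higher
    # precision assigned to the singular vectors of larger singular values.
    delta = w_aligned - w_base
    u, s, vt = np.linalg.svd(delta, full_matrices=False)
    approx = np.zeros_like(delta)
    start = 0
    for count, bits in bands:
        end = min(start + count, len(s))
        uq = quantize_uniform(u[:, start:end] * s[start:end], bits)
        vq = quantize_uniform(vt[start:end, :], bits)
        approx += uq @ vq
        start = end
    return w_base + approx

rng = np.random.default_rng(0)
w_base = rng.normal(size=(64, 64))
# A low-rank delta, mimicking the long-tailed fine-tuning update.
w_aligned = w_base + 0.1 * rng.normal(size=(64, 8)) @ rng.normal(size=(8, 64))

w_hat = compress_delta(w_base, w_aligned, bands=[(2, 8), (16, 3), (32, 2)])
err_mixed = np.linalg.norm(w_hat - w_aligned) / np.linalg.norm(w_aligned - w_base)
```

Dropping the delta entirely corresponds to a relative error of 1.0, so any `err_mixed` below that means the mixed-precision bands retain useful delta information at a low average bit cost.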
---
Rebuttal 2:
Title: Invitation to Participate in the Discussion Period
Comment: Thank you very much for your insightful suggestions. We have provided detailed responses to your question. If you could participate in the discussion period, we would be very grateful.
---
Rebuttal Comment 2.1:
Comment: Dear Authors, My concerns have been addressed accordingly. I have raised my score to 6 and confidence to 4. | Summary: This paper proposes an improved way to apply delta-compression for aligned language models (compact representations of the difference in weights between the pretrained and finetuned language models), which is just as effective w.r.t compression as the most extreme existing binary quanitzation strategy (BitDelta), but resolving many existing quality issues w.r.t. math and code tasks over previous methods. The key insight is that it is possible to combine the best of both worlds between low-rank methods (selecting the top singular values) and quantization methods (e.g. BitDelta), and represent more singular values by using decreasing precision for decreasing singular values: a mixed precision technique overall. The authors call this technique Delta-CoMe. When comparing low-rank, BitDelta, and Delta-CoMe at a fixed compression rate (equal to the binarization / BitDelta rate, which is quite aggressive), Delta-CoMe consistently outperforms all baselines and is close to matching performance with the full precision aligned model. Experiments take place over a diverse set of tasks (math, code, QA, safety, multimodal) as well as a diverse set of models at 7B and 13B scales and aligned versions (Llama 2, WizardMath, MagicCoders, Llama-2-Chat, LLAVA v1.5, OpenChat, Llama-3-8b-Instruct). The authors also provide quality comparisons between delta-compression methods and delta-tuning (i.e. LoRA), demonstrating that Delta-CoMe outperforms LoRA w.r.t quality.
Strengths: 1. While neither low-rank nor mixed precision quantization techniques are particularly novel, the combination of the two for delta-compression is novel and clever in the way that the framework smoothly models the trade-off between representing everything with low (binary) precision, and representing the most important features with full precision. Although this is not explored deeply in this paper (the authors recognize that this is a proof of concept of the effectiveness rather than an optimized solution), this tradeoff could be tuned to suit a variety of cases. This idea could also potentially be applied in other areas, such as PEFT or general mixed-precision work, as previous works used quite different heuristics to select which weights to have mixed precision on.
2. The results presented are significant. The approach significantly outperforms the baselines in all aspects (math, code, chat, multimodal), with the biggest improvements in math and code. The approach is the only one to come close to matching the non-compressed aligned model overall over all capabilites. This will be of interest to anyone interested in delta-compression literature or even PEFT techniques. It is the first work to apply mixed-precision to delta weights.
3. The experiments was done on a wide variety of models (math specialized tunings, code specialized tunings, chat tunings, multimodal tunings) and settings (7/8B, 13B, over base models like Llama2, Llama3 and Mistral) enough to demonstrate confidence in the ability of the technique. The authors also provide nice additional analysis of quantization error, and comparison in quality against vanilla LoRA as additional evidence for their approach.
4. The paper is overall well structured and well written, which makes the problem easy to understand and the author's intuition and investigation relatively easy to follow. There are some improvements that could be made (below), but it is nice overall.
Weaknesses: 1. In Table 4, it does seem like on the larger 13B models WizardMath models, the performance recovery in GSM8k and MATH is not as strong as on 7 or 8B scale models. It does bring into question whether there are limitations to this approach not discussed in this work that may limit general adoptability. Whether that means not working as well on larger scales, or for the certain type of post-training happening in WizardMath.
1. Although I understand the author's motivation of using greedy decoding to encourage reproducibility and decrease noise in the evaluations, many models especially on chat tasks use sampling based-decoding, and having results here would improve the overall robustness of the experiments to verify that the techniques still work well.
1. Presentation a couple points in the paper could be improved :
a. While the authors do note that the greedy search strategy to allocate mixed-precision to the singular values presented in Section 5.1 represents a proof of concept to show that even such an unoptimized approach can achieve good results, it would be valuable to understand how some of the design choices here were informed. For example, in the 3-precision setting, the range of singular vectors for the 2nd precision is set to be between 2 and 34, which feels arbitrary. What is the intuition behind this or were there ablations ran here? Being that Section 5.1 is the core description of the mixed-precision strategy, more detail and clarity here would be appreciated.
b. (Minor) The introduction could be cleaned up a bit, with lines 21-36 being oddly detailed. The claim on line 44-46 to motivate the work is also not super well founded -- there are models on LMSys showing general ability with ~20B parameters, not too much bigger than those considered in this work.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. What is the intuition behind choosing the singular vector bands for the different precisions the way that you do in Section 5.1?
1. Do you think the ideas here could apply to mixed-precision approaches more generally?
1. Why is the approach named Delta-CoMe? It appears to stand for "delta compression method", which seems very generic and has nothing to do with the details of the approach.
1. Are there any practical limitations e.g. hardware with mixed-precision delta-compression not covered in this paper?
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The authors do clearly point out that a major limitation of the work is the lack of optimization for how the mixed precision is assigned. This makes the approach more of a proof of concept that even a naive solution works, rather than an optimized method. This is perfectly fine.
It would be valuable to understand, though, whether there are actually limitations in deploying this type of approach on actual hardware with multi-tenant hosting.
It would also be interesting to understand the boundaries of when this approach works or does not work, e.g., at what compression ratio it would break, whether there is a model scale that is too small or too large, etc.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your questions and for pointing out the weaknesses and limitations, which help us greatly improve our paper.
# For question 1.
Due to limited pages, we did not include the detailed process for deciding the number of singular vectors at each bit-width in the current version. When setting $r_{begin}$ and $r_{end}$, we chose the values that minimize the error between the activations of the compressed model and those of the original model.
#### Ablation on 8-bit in "Double Precision", changing the number of 8-bit from 1-8
| Num. of 8-bit | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|------|-----|----|----|----|---|----|----|----|
| Error (× 10⁻²) | 0.85 | **0.81** | 0.84 | 0.86 | 0.88 | 0.85 | 0.95 | 0.94 |
#### Ablation on 3-bit in "Triple Precision", changing the number of 3-bit from 8-64
| Num. of 3-bit | 8 | 16 | 24 | 32 | 40 | 48 | 56 | 64 |
|--------|-----|------|------|------|------|------|------|------|
| Error (× 10⁻²) | 0.77 | 0.77 | 0.76 | **0.74** | 0.75 | 0.76 | 0.78 | 0.77 |
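As a hedged illustration of this selection procedure, the sketch below searches over the number of 8-bit singular vectors by minimizing a relative activation error; a toy uniform quantizer stands in for the actual GPTQ-style quantization, and random data replaces real model weights:

```python
import numpy as np

def fake_quant(M, bits):
    # Toy uniform quantizer standing in for GPTQ-style quantization
    # (an assumption for illustration, not the paper's actual quantizer).
    if bits >= 16:
        return M
    scale = np.abs(M).max() / (2 ** (bits - 1) - 1)
    return np.round(M / scale) * scale

def activation_error(delta_w, X, n_8bit, rank=64):
    # Reconstruct the delta with the top n_8bit singular vectors in 8-bit
    # and the remainder in 3-bit, then measure the relative activation error.
    U, S, Vt = np.linalg.svd(delta_w, full_matrices=False)
    U, S, Vt = U[:, :rank], S[:rank], Vt[:rank]
    rec = np.zeros_like(delta_w)
    for lo, hi, bits in [(0, n_8bit, 8), (n_8bit, rank, 3)]:
        rec += fake_quant(U[:, lo:hi], bits) @ np.diag(S[lo:hi]) @ fake_quant(Vt[lo:hi], bits)
    return np.linalg.norm(delta_w @ X - rec @ X) / np.linalg.norm(delta_w @ X)

rng = np.random.default_rng(0)
dW = rng.normal(size=(128, 128)) * 0.01   # stand-in for a weight delta
X = rng.normal(size=(128, 32))            # stand-in for calibration activations
best = min(range(1, 9), key=lambda n: activation_error(dW, X, n))
```

On real deltas, the same loop would reproduce the 1-to-8 sweep in the table above.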
# For question 2.
Based on the reviewers' insights, we regard the mixed-precision issue as a multi-objective optimization problem, with single precision as a special case of mixed precision. We developed a genetic algorithm to address this problem, using the bit counts of single precision as the initial solution. We use the objective function $f = \min \mathrm{PPL}(x_1, x_2, x_3, x_4, x_5)$, where $x_1, \ldots, x_5$ denote the numbers of 16-bit, 8-bit, 4-bit, 3-bit, and 2-bit singular vectors, and $\mathrm{PPL}(\cdot)$ is the perplexity computed on 128 samples randomly chosen from the C4 dataset. For each aligned model, we can automatically determine the mixing strategy through the genetic algorithm. The results demonstrate that the genetic algorithm yields better results than greedy search, making our method easy to apply to many different aligned models.
Results on 7B-scale models (GSM8K/Math: WizardMath; HumanEval/Mbpp: magicoder-S-CL; SafetyBench/TruthfulQA: Llama-2-7b-chat; GQA/TextVQA: Llava-v1.5):

| Method | GSM8K | Math | HumanEval | Mbpp | SafetyBench | TruthfulQA | GQA | TextVQA | Avg. |
|---|---|---|---|---|---|---|---|---|---|
| Loss-based greedy search | 53.6 | 10.24 | 67.1 | 67.9 | 59.8 | 46.9 | 61.7 | 58.5 | 53.2 |
| Genetic search | 53.6 | 10.24 | **69.5** | **68.9** | **59.9** | **47.3** | 61.7 | 58.5 | **53.7** |
We also conducted experiments on the 13B model, where the genetic algorithm yielded better performance.
| Method (WizardMath-13B) | GSM8K | Math |
|---|---|---|
| Loss-based greedy search | 58.8 | 12.8 |
| Genetic search | **59.4** | **12.9** |
The genetic algorithm demonstrated better performance, with the average performance on the 7B models even slightly surpassing the "Aligned models" baseline. However, even under the same settings without the genetic algorithm, our method's performance is close to that level, indicating its generalization ability.
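A minimal sketch of the genetic search described above, with elitist selection, uniform crossover, and integer mutation; the surrogate objective (including its "ideal" allocation) stands in for the actual perplexity on C4, and the population size and mutation scheme are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def surrogate_ppl(counts):
    # Toy stand-in for PPL on 128 C4 samples: penalizes deviating from an
    # assumed ideal allocation (a fiction for illustration only).
    ideal = np.array([0, 2, 8, 32, 22])
    return float(np.sum((np.asarray(counts) - ideal) ** 2))

def genetic_search(init, generations=50, pop_size=20):
    # counts = numbers of 16-, 8-, 4-, 3-, and 2-bit singular vectors
    pop = [np.clip(init + rng.integers(-4, 5, size=5), 0, 64) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=surrogate_ppl)
        parents = pop[: pop_size // 2]          # elitism: keep the best half
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.choice(len(parents), size=2, replace=False)
            mask = rng.integers(0, 2, size=5).astype(bool)
            child = np.where(mask, parents[a], parents[b])      # uniform crossover
            child = np.clip(child + rng.integers(-2, 3, size=5), 0, 64)  # mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=surrogate_ppl)

best = genetic_search(np.array([0, 2, 8, 32, 22]))  # single-precision counts as init
```

Replacing `surrogate_ppl` with a real perplexity evaluation would give the per-model search we describe.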
Mixed precision has recently seen wider adoption: Apple Intelligence (Gunter, Tom, et al.) allocates different bit-widths to different layers, achieving 3.5 bits on average, and Jinhao Li et al. mix 2-bit, 4-bit, and 16-bit to attain an average of 2 bits. As mixed precision receives growing attention and is widely applied to model compression, we believe our method could be further applied to model compression as well, which is the direction of our next exploration.
# For question 3.
Thanks for your great question. We will choose a more appropriate name.
# For question 4.
Further, Han Guo et al. and Jianyu et al. have implemented more advanced kernels; we can draw on their methods to achieve higher acceleration ratios. The benchmarks below compare our kernel against PyTorch.
#### Ablation on batch size: with sequence length set to 128, our method achieves a 3x speedup over PyTorch
| bsz | Linear (ms) | Our (ms) |
|-----|-------------|----------|
| 2 | 0.5995 | 0.1488 |
| 4 | 0.6024 | 0.1995 |
| 6 | 0.6136 | 0.2134 |
| 8 | 0.6656 | 0.2561 |
#### Ablation on hidden_size, with batch size set to 8
| hidden_size | Linear (ms) | Our (ms) |
|-------------|--------------|----------|
| 1024 | 0.5369 | 0.1206 |
| 2048 | 0.5586 | 0.1654 |
| 3072 | 0.6011 | 0.2046 |
| 4096 | 0.6656 | 0.2561 |
| 5120 | 0.6897 | 0.2788 |
# For limitation
Thanks for your great insight. We conducted a more granular exploration, compressing the model by up to 32×. We use WizardMath-7B on the GSM8K task; results are shown in the following table. When the compression ratio is within 20×, our method still performs well. At a compression ratio of 32×, there is a noticeable decline in performance, but it still outperforms the low-rank and low-bit methods, which only achieve a 16× compression ratio.
| | w/o Comp. | 1/16 | 1/18 | 1/20 | 1/22 | 1/26 | 1/32 |
|----------------|---------|---------|---------|---------|---------|---------|-----------|
| WizardMath-7B | 55.2 | 53.6 | 52.2 | 51.9 | 51.2 | 50.1 | 48.8 |
# For weakness
We used a sampling-based decoding algorithm with decoding temperature set to 0.2. We ran each experiment 5 times; the mean values and confidence intervals are shown in the table.
| | GSM8K | truthfulQA | MBPP | TextVQA |
|----------|------------------|-----------------|-----------------|-----------------|
| Low-rank | 42.6 (41.9, 43.3)| 42.2 (42.0, 42.4)| 65.4 (65.1, 65.9)| 52.9 (52.5, 53.6)|
| Bitdelta | 45.2 (44.0, 46.4)| 40.8 (40.4, 41.3)| 65.7 (65.4, 65.9)| 56.5 (56.2, 57.1)|
| Ours | 53.8 (53.0, 54.6)| 46.9 (46.3, 47.6)| 68.0 (67.6, 68.5)| 58.3 (57.9, 58.9)|
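For reference, such intervals can be computed with a standard normal-approximation confidence interval over the repeated runs; the run scores in this sketch are made-up placeholders, not our actual numbers:

```python
import numpy as np

def mean_and_ci(scores, z=1.96):
    # 95% normal-approximation confidence interval over repeated runs.
    scores = np.asarray(scores, dtype=float)
    m = scores.mean()
    half = z * scores.std(ddof=1) / np.sqrt(len(scores))
    return m, (m - half, m + half)

runs = [53.4, 54.1, 53.9, 53.6, 54.0]  # hypothetical scores from 5 runs
m, (lo, hi) = mean_and_ci(runs)        # m = 53.8 with its interval
```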
---
Rebuttal Comment 1.1:
Title: Performance on hardwares
Comment: To accelerate the inference of our proposed method, we implemented a computation kernel using Triton, which integrates the de-quantization process with matrix multiplication. This kernel supports the multiplication of matrices with different bit-widths.
By using our customized Triton kernel, we can significantly improve the forward speed of Delta-CoMe:
Ablation on batch size: with sequence length set to 128, our method achieves a 3x speedup over PyTorch.
| bsz | Delta-CoMe w/ PyTorch (ms) | Our Customized Triton Kernel (ms) |
|-----|-------------|----------|
| 2 | 0.5995 | 0.1488 |
| 4 | 0.6024 | 0.1995 |
| 6 | 0.6136 | 0.2134 |
| 8 | 0.6656 | 0.2561 |
Ablation on hidden_size, with batch size set to 8.
| hidden_size | Delta-CoMe w/ PyTorch (ms) | Our Customized Triton Kernel (ms) |
|-------------|--------------|----------|
| 1024 | 0.5369 | 0.1206 |
| 2048 | 0.5586 | 0.1654 |
| 3072 | 0.6011 | 0.2046 |
| 4096 | 0.6656 | 0.2561 |
| 5120 | 0.6897 | 0.2788 |
Further, Han Guo et al. and Jianyu et al. have implemented more advanced kernels. We can draw on their methods to achieve higher acceleration ratios.
To further demonstrate the hardware advantages of our method, in addition to inference speed, we have also provided memory usage in the table below.
| Num. deployed models | Original Storage | Our Storage |
|-----------------|------------------|-------------|
| 2 | 26.67G | 15.39G |
| 4 | 52.24G | 17.02G |
| 8 | OOM | 20.28G |
| 16 | OOM | 26.79G |
| 32 | OOM | 39.84G |
| 64 | OOM | 66.68G |
Our method can save GPU memory significantly. | Summary: This paper introduces Delta-CoMe, a delta compression method for LLM. It proposes to combine low-rank compression and low-bit compression together to achieve better performance. Specifically, it applies mixed-precision quantization for different singular vectors based on their singular values. The experimental results demonstrate good performance of the proposed method.
Strengths: 1. The proposed method is easy and straight to implement for other LLMs, based on existing open-source tools.
2. The motivation and the proposed idea is clear and supported by the experiment results.
Weaknesses: 1. The bit-width allocation in mixed precision seems to be a multi-objective optimization problem, and the authors adopt a greedy search in this paper. In Table 2, different settings vary a lot in performance. This suggests that the bit-widths need to be decided carefully, making the method not easy to generalize to other tasks/models; performance may not be guaranteed before doing a search.
2. The value of $r_{begin}$ and $r_{end}$ seem to be set intentionally and empirically without explanation or study. This would also impact the generalization of the method.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. In equation 5, why the quantization of $V$ is $Quant_k(V, X)$ without $U$ and $\Sigma$ as in the quantization of $U$?
2. In Section 5.1, how are $r_{begin}$ and $r_{end}$ set to those values?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your questions
# For question 1
In delta-compression, we consider compressing $U$ and $V^{T}$. For $U$ and $V^{T}$, their inputs are crucial for adjusting weights during the quantization process, as illustrated in GPTQ. In a forward pass $Y = W X + (UΣV^{T}) X$, the input of $V^{T}$ is $X$, while the input of $U$ is $ΣV^{T}X$. Therefore, the quantization of $V$ is $Quant(V, X)$, while the quantization of $U$ is $Quant(U, ΣV^{T}X)$.
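The input routing described above can be checked numerically. In this small sketch (random matrices standing in for real model weights and activations), we form the forward pass $Y = WX + U\Sigma V^{T}X$ and identify the input each factor sees:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n = 64, 8, 16
W = rng.normal(size=(d, d))            # base weight
X = rng.normal(size=(d, n))            # layer input

# Low-rank delta U Σ V^T (random orthonormal factors for illustration).
U, _ = np.linalg.qr(rng.normal(size=(d, r)))
V, _ = np.linalg.qr(rng.normal(size=(d, r)))
Sigma = np.diag(rng.uniform(0.1, 1.0, size=r))

# Forward pass: Y = W X + U Σ (V^T X).
inp_V = X                   # input seen by V^T  -> Quant(V, X)
inp_U = Sigma @ V.T @ X     # input seen by U    -> Quant(U, Σ V^T X)
Y = W @ X + U @ inp_U
assert np.allclose(Y, W @ X + (U @ Sigma @ V.T) @ X)
```

This is only a shape/flow check; the actual GPTQ-style weight adjustment uses these inputs as its calibration signal.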
# For question 2
Due to limited pages, we did not include the detailed process for deciding the number of singular vectors at each bit-width in the current version. When setting $r_{begin}$ and $r_{end}$, we chose the values that minimize the error between the activations of the compressed model and those of the original model. When searching for "Double Precision", using two 8-bit singular vectors results in the smallest error, while for "Triple Precision", 32 3-bit singular vectors result in the smallest error.
#### Ablation on 8-bit in "Double Precision", changing the number of 8-bit from 1-8
| Num. of 8-bit | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---------------|-----|------|------|------|------|------|------|------|
| Error (× 10⁻²) | 0.85 | **0.81** | 0.84 | 0.86 | 0.88 | 0.85 | 0.95 | 0.94 |
#### Ablation on 3-bit in "Triple Precision", changing the number of 3-bit from 8-64
| Num. of 3-bit | 8 | 16 | 24 | 32 | 40 | 48 | 56 | 64 |
|---------------|-----|------|------|------|------|------|------|------|
| Error (× 10⁻²) | 0.77 | 0.77 | 0.76 | **0.74** | 0.75 | 0.76 | 0.78 | 0.77 |
Based on the reviewers' insights, we regard the mixed-precision issue as a multi-objective optimization problem, with single precision as a special case of mixed precision. We developed a genetic algorithm to address this problem, using the bit counts of single precision as the initial solution. We use the objective function $f = \min \mathrm{PPL}(x_1, x_2, x_3, x_4, x_5)$, where $x_1, \ldots, x_5$ denote the numbers of 16-bit, 8-bit, 4-bit, 3-bit, and 2-bit singular vectors, and $\mathrm{PPL}(\cdot)$ is the perplexity computed on 128 samples randomly chosen from the C4 dataset. For each aligned model, we can automatically determine the mixing strategy through the genetic algorithm. The results demonstrate that the genetic algorithm yields better results than greedy search, making our method easy to apply to many different aligned models.
Results on 7B-scale models (GSM8K/Math: WizardMath; HumanEval/Mbpp: magicoder-S-CL; SafetyBench/TruthfulQA: Llama-2-7b-chat; GQA/TextVQA: Llava-v1.5):

| Method | GSM8K | Math | HumanEval | Mbpp | SafetyBench | TruthfulQA | GQA | TextVQA | Avg. |
|---|---|---|---|---|---|---|---|---|---|
| Loss-based greedy search | 53.6 | 10.24 | 67.1 | 67.9 | 59.8 | 46.9 | 61.7 | 58.5 | 53.2 |
| Genetic search | 53.6 | 10.24 | **69.5** | **68.9** | **59.9** | **47.3** | 61.7 | 58.5 | **53.7** |
We also conducted experiments on the 13B model, where the genetic algorithm yielded better performance.
| Method (WizardMath-13B) | GSM8K | Math |
|---|---|---|
| Loss-based greedy search | 58.8 | 12.8 |
| Genetic search | **59.4** | **12.9** |
The genetic algorithm demonstrated better performance, with the average performance on the 7B models even slightly surpassing the "Aligned models" baseline. However, even under the same settings without the genetic algorithm, our method's performance is close to that level, indicating its generalization ability.
---
Rebuttal 2:
Title: Invitation to Join the Discussion Period
Comment: Thank you very much for your insightful suggestions. We have provided detailed responses to your question. If you have any other questions, we sincerely invite you to participate in the discussion period if possible. Thanks very much! | Rebuttal 1:
Rebuttal: Thank all the reviewers for the constructive suggestions. We will take into account the advice to improve the manuscript! | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Neural Krylov Iteration for Accelerating Linear System Solving | Accept (spotlight) | Summary: The authors use a neural operator approach to generate subspaces that are used for the acceleration of Krylov subspace methods for several partial differential equations setups.
Strengths: The authors are interested in a very relevant problem of computational science and engineering of solving linear systems of equations. The attempt of providing a convergence analysis for the proposed method.
Weaknesses: The details of the method remain hidden to me and it seems difficult to grasp the details of the method and many details and even the major workings of the method are unclear.
Technical Quality: 2
Clarity: 1
Questions for Authors: Why the smallest eigenvalues in line 163 on page 5?
I assume that Y_k in Algorithm 1 is the predicted subspace from the neural network, but then explain the derivation used for equation (11)? How is this motivated?
How is the method of the authors combined with preconditioning? The details are not given but the choice of the preconditioner is crucial and how would this depend on the learned subspace the method provides?
Confidence: 3
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: The method is limited to linear problems but this is a very broad and general class. If the method was better explained and the details clear and convincing there would be no limitations for these kinds of problems.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their patience in reading through our paper. We are pleased to re-introduce our work and respond to your comments as follows. We sincerely hope that our rebuttal properly addresses your concerns. If so, we would deeply appreciate it if you could raise your score and confidence. If not, please let us know your further concerns, and we will continue actively responding to your comments and improving our work.
---
## Major Workings of Our Method
- We propose a novel method, namely NeurKItt, for accelerating linear systems solving. NeurKItt consists of two modules, one is the subspace prediction module, while the other is the acceleration module.
- The subspace prediction module employs the neural operator for predicting the invariant subspace $\hat{\mathcal{K}}$ of given linear systems $Ax=b$, as a matrix $X$ that contains eigenvectors corresponding to the smallest eigenvalues from $A$.
- The acceleration module uses the predicted subspace for accelerating the linear system solving. It is the Krylov subspace algorithm with predicted subspace involved for iteration.
---
## Subspace Prediction Module
- We use FNO to predict the subspace of the matrix $A$ from the given linear system $Ax=b$. We give the introduction to FNO in Appendix C.
- The reason why we select the neural operator for subspace prediction is that the mapping from the matrix $A$ of linear system $Ax=b$ to its corresponding invariant subspace $\mathcal{\hat{K}}$ can be considered an operator, i.e., a mapping between two Hilbert spaces, thus solving such problems involves finding the corresponding operator.
- We employ the projection loss function for training the FNO and apply thin QR decomposition to the output of the FNO. Projection loss is designed for learning the subspace. The idea behind it is that the predicted subspace should contain the basis vectors in the target subspace. We employ the thin QR decomposition for orthogonalizing the FNO output. We provide the details about thin QR decomposition in Appendix E.
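As an illustration of this training target, here is a minimal numpy sketch of one plausible form of projection loss combined with thin QR; the exact loss used in the paper may differ, so treat this as an assumption for illustration:

```python
import numpy as np

def projection_loss(Q_pred, S_true):
    # Residual of projecting the target basis onto the predicted subspace:
    # zero when span(S_true) is contained in span(Q_pred).
    proj = Q_pred @ (Q_pred.T @ S_true)
    return np.sum((S_true - proj) ** 2) / S_true.shape[1]

rng = np.random.default_rng(0)
d, k = 32, 4
raw_output = rng.normal(size=(d, k))      # stand-in for the FNO output
Q, _ = np.linalg.qr(raw_output)           # thin QR orthogonalizes the output
target, _ = np.linalg.qr(rng.normal(size=(d, k)))  # target subspace basis
loss = projection_loss(Q, target)
```

In training one would use a differentiable QR (e.g. in a deep learning framework) so gradients flow back to the network.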
> **Why the smallest eigenvalues in line 163 on page 5?**
- The FNO learns the subspace $\mathcal{S}$ associated with the $n$ smallest eigenvalues, i.e., $\mathcal{S}=\mathrm{span}\{ s_1, s_2, \ldots, s_n \}$, where $s_i$ is the eigenvector corresponding to the $i$-th smallest eigenvalue of the given matrix $A$, $i = 1, 2, \ldots, n$. We think such a subspace $\mathcal{S}$ is easy to learn because Theorem 5.2 proves that if changes in a matrix $A$ occur in the subspace corresponding to larger eigenvalues, the subspace associated with the smallest eigenvalues is minimally affected, as long as the changes are smaller than the gap between the smallest and larger eigenvalues. Theorem 5.2 can be found in Section 5.
---
## Acceleration Module
- The acceleration module is the Krylov iteration algorithm with predicted subspace involved, inspired by Krylov recycling algorithms. The detailed pseudo-code about the acceleration module can be found in Appendix A.2. The key of the acceleration module is the Arnoldi relation $(I−C_kC^H_k )AV_{m−k} = V_{m−k+1}\underline{H}_{m−k}$, which takes the subspace into iteration.
- Here we give an intuitive idea about the acceleration module. Krylov subspace iteration solves the linear system problem by iteratively approximating the invariant subspace of matrix $A$ from the linear system $Ax=b$. When the subspace information about $A$ is given, we don't need to start the approximation from scratch, thus the acceleration module can improve the convergence speed.
> **I assume that Y_k in Algorithm 1 is the predicted subspace from the neural network, but then explain the derivation used for equation (11)? How is this motivated?**
- $Y_k$ is the matrix of predicted subspace $\mathcal{\hat{K}}$ from the neural operator. In particular, we obtain $C_k$ and $U_k$ from the thin QR decomposition on $AY_k$, such that $AU_k=C_k$ and $C_k^{H}C_k=I_k$.
- Equation $C_k^{H}C_k=I_k$ allows us to find the basis of $\mathcal{\hat{K}}$, which enables us to remove the subspace $\mathcal{\hat{K}}$ in Equation (11) $(I−C_kC^H_k)AV_{m−k} = V_{m−k+1}\underline{H}_{m−k}$ by $(I-C_kC_k^H)$. This approach makes the Krylov subspace iteration more efficient because we only need to search the rest of the invariant subspace for matrix $A$.
- Equation $AU_k=C_k$ ensures that the matrix $\underline{H}_{m-k}$ is an upper Hessenberg matrix, which speeds up the solving of the linear system in the later iteration procedure.
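As a sanity check, the two identities above can be verified numerically; the sketch below uses random data standing in for a real linear system and a predicted subspace basis:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 5
A = rng.normal(size=(n, n)) + n * np.eye(n)   # well-conditioned test matrix
Y = rng.normal(size=(n, k))                   # predicted subspace basis (stand-in)

C, R = np.linalg.qr(A @ Y)        # thin QR: A Y = C R
U = Y @ np.linalg.inv(R)          # so that A U = A Y R^{-1} = C

assert np.allclose(C.T @ C, np.eye(k))   # C_k^H C_k = I_k
assert np.allclose(A @ U, C)             # A U_k = C_k

# Deflated operator appearing in the Arnoldi relation:
P = np.eye(n) - C @ C.T
v = rng.normal(size=n)
w = P @ (A @ v)
assert np.allclose(C.T @ w, 0, atol=1e-10)   # new directions orthogonal to C_k
```

The last assertion illustrates why the iteration only has to explore the complement of the predicted subspace.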
> **How is the method of the authors combined with preconditioning? The details are not given but the choice of the preconditioner is crucial and how would this depend on the learned subspace the method provides?**
- Supposing the original Arnoldi relation $AV=VH_0$ and the preconditioner matrix $P$, the preconditioned Arnoldi relation is $PAV=VH_1$. So, for NeurKItt, after the combination, the Arnoldi relation is $P(I-C_kC_k^H)AV=VH_2$. Here, $H_i$ is the upper Hessenberg matrices.
- In our experiments, the learned subspace for any preconditioning method is the subspace composed of eigenvectors corresponding to the smallest eigenvalues of the given matrix $A$.
---
We also provide a theoretical analysis of the acceleration module in Section 5.1, which gives the convergence analysis and explains our choice to predict the subspace rather than the solution. In Section 5.2, we further justify our choice of using the subspace composed of eigenvectors corresponding to the smallest eigenvalues for prediction.
If you are confused about any of the content in our paper, like what kind of obstacle hinders the understanding of our work, please let us know your further concerns, and we will continue to actively respond to your comments.
---
Rebuttal Comment 1.1:
Comment: Thank you for the comments. Regarding your last point on the use of the preconditioner: the main idea of applying a preconditioner $P$ is to cluster the eigenvalues of the preconditioned matrix $PA$. One typically hopes that the $n$ previously smallest eigenvalues would now be clustered in a tighter region; ideally, a perfect preconditioner guarantees only very few distinct eigenvalues. What does this mean for your method and the approximated subspace?
---
Reply to Comment 1.1.1:
Comment: Thank you for your question. We address it as follows:
- We agree with the reviewer about how preconditioners work. However, our method is based on the predicted subspace to improve the convergence speed, which avoids approximating the subspace from scratch for Krylov subspace iteration. Thus, preconditioners that disrupt the invariant subspace, not eigenvalues, will have a negative influence on our method, like ICC and ILU.
- Despite this potential influence, extensive experiments show that these disruptions do not significantly affect our improvement in practice. In particular, our method accelerates the solving across all preconditioners, especially achieving up to 2.64$\times$ time speedup under ICC and 2.6$\times$ time speedup under ILU.
- Besides, optimizations for the approximated subspace could be our future work, which takes the negative influence of different preconditioners on subspace into consideration.
---
Reply to Comment 1.1.2:
Title: Thanks for Your Positive Feedback
Comment: We appreciate the positive feedback and the time you have taken to evaluate our work. To improve clarity, we will include these discussions to help readers grasp the key aspects of our study.
Strengths: 1. **Originality:**
The manuscript introduces a novel approach by integrating neural network techniques with Krylov subspace methods to predict invariant subspaces of linear systems, significantly enhancing the efficiency of these traditional methods.
2. **Quality:**
The manuscript effectively targets the critical issues of computational inefficiency and instability in the context of high-dimensional and poorly conditioned linear systems. The choice of problem is highly relevant to both academic research and practical applications in scientific computing, making the work valuable to a wide audience.
3. **Clarity:**
The manuscript is well-structured, presenting complex concepts in an understandable manner. It effectively communicates the challenges of existing methods and how NeurKItt addresses these issues, providing clear explanations and logical progression from problem statement to solution. However, improvements in typographical accuracy and some clarifications in methodological descriptions could further enhance clarity.
4. **Significance:**
The significance of this work lies in its potential impact on the efficiency of solving large-scale linear systems, which are crucial in various scientific and engineering applications. By reducing computational time and resource consumption, NeurKItt could significantly benefit fields reliant on large-scale computations, making this contribution highly relevant to both academic research and industry applications.
Weaknesses: 1. The sentence "Research in using neural networks for accelerating linear system solving" on line 75 appears abruptly terminated and lacks a verb or continuation that would complete the thought.
2. On line 81, the term "precondition" is likely a typographical error. The correct term, given the context, could be "preconditioning".
3. There appears to be a typographical error in Equation (8), where the variable $z$ should presumably be $y$.
4. Equation (13) on page 7 starting with an unnecessary 's' before the equal sign seems to be a typo.
5. In the manuscript, Section 4.1 discusses the use of Fourier Neural Operators (FNO) for parametric PDE problems and introduces the subsequent subspace prediction concept. It employs a subspace $S$ defined as the span of vectors $\{s_1, s_2, \cdots, s_n\}$. The manuscript mentions that $s_i$ is the eigenvector corresponding to the $i$-th smallest eigenvalue, but it does not specify whether these eigenvalues are of the matrix or another related matrix. This omission could lead to confusion about the origin and relevance of these eigenvectors, particularly in how they are integrated into the model and influence the outcome of the subspace prediction.
6. In Section 4.1, the manuscript proposes using invariant subspaces associated with the smallest eigenvalues to train the subspace prediction module. However, an oversight in the theoretical analysis (Section 5.2) arises from applying this concept to potentially non-Hermitian matrices. The section (section 5.2) is based on Hermitian positive definite matrices, whose eigenvalues are all real numbers that can be orderly compared. Non-Hermitian matrices, on the other hand, may exhibit complex eigenvalues, complicating or nullifying the notion of "smallest" eigenvalues due to their lack of a natural order. This discrepancy raises concerns about the applicability of the method to non-Hermitian matrices, which are prevalent in many practical applications.
Technical Quality: 2
Clarity: 3
Questions for Authors: In reviewing the theoretical analysis provided between lines 237 and 248, I observed a notable similarity in both the phrasing and the mathematical formulations with those presented in reference [23]. Could the authors clarify the extent of originality in these sections? It is crucial for academic integrity to distinguish clearly between direct quotes and original analysis.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The authors discussed limitations in Section 7 Limitation and Conclusions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful and valuable comments. We respond to your comments as follows and sincerely hope that our rebuttal could properly address your concerns. If so, we would deeply appreciate it if you could raise your score and your confidence. If not, please let us know your further concerns, and we will continue actively responding to your comments and improving our work.
## Weaknesses
> **Typos mentioned in Weakness 1-4**
- Thank you for pointing out these typos. We will fix them in the future version.
> **The manuscript mentions that $s_i$ is the eigenvector corresponding to the $i$-th smallest eigenvalue, but it does not specify whether these eigenvalues are of the matrix or another related matrix. This omission could lead to confusion about the origin and relevance of these eigenvectors, particularly in how they are integrated into the model and influence the outcome of the subspace prediction.**
- Given a linear system $Ax=b$, these eigenvalues are derived from the same matrix $A$. Thank you for pointing out this potentially misleading statement. We will fix it in the future version. In particular, we will replace the sentence in lines 164-165 "where $s_i$ is the eigenvector corresponding to the $i$-th smallest eigenvalue, $i = 1, 2, \dots, n$." with "where $s_i$ is the eigenvector corresponding to the $i$-th smallest eigenvalue of **the given matrix $A$**, $i = 1, 2, \ldots, n$."
> **In Section 4.1, the manuscript proposes using invariant subspaces associated with the smallest eigenvalues to train the subspace prediction module. However, an oversight in the theoretical analysis (Section 5.2) arises from applying this concept to potentially non-Hermitian matrices. The section (section 5.2) is based on Hermitian positive definite matrices, whose eigenvalues are all real numbers that can be orderly compared. Non-Hermitian matrices, on the other hand, may exhibit complex eigenvalues, complicating or nullifying the notion of "smallest" eigenvalues due to their lack of a natural order. This discrepancy raises concerns about the applicability of the method to non-Hermitian matrices, which are prevalent in many practical applications.**
- Thank you for pointing out this problem. Analyzing perturbations for non-Hermitian matrices involves pseudo-spectral methods, which makes it difficult to investigate. To simplify the analysis, Section 5.2 explores how to select an appropriate subspace in the Hermitian positive definite case. In practice, for Hermitian positive definite matrices, the smallest eigenvalues are compared following the natural order. But **for non-Hermitian matrices, the eigenvalues will be sorted by comparing their modulus**, such that: $$|\lambda_1|\le|\lambda_2|\le|\lambda_3|\le|\lambda_4|\le\dots\le|\lambda_n|$$
where $\lambda_i\in\mathbb{C}$, $i=1,2,\dots,n$, is the eigenvalue of a given non-Hermitian matrix $A$. We will add these details to Section 5.2 to avoid the discrepancy.
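Concretely, the modulus-based ordering for a non-Hermitian matrix can be computed as in this small sketch (random matrix for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))           # non-Hermitian: eigenvalues may be complex
eigvals, eigvecs = np.linalg.eig(A)

order = np.argsort(np.abs(eigvals))   # sort by modulus: |λ_1| ≤ ... ≤ |λ_n|
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
S = eigvecs[:, :3]                    # basis for the "smallest" invariant subspace
```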
## Questions
> **In reviewing the theoretical analysis provided between lines 237 and 248, I observed a notable similarity in both the phrasing and the mathematical formulations with those presented in reference [23]. Could the authors clarify the extent of originality in these sections? It is crucial for academic integrity to distinguish clearly between direct quotes and original analysis.**
- We are sorry for the misunderstanding caused by our improper citation. We fully adhere to academic ethics, and this issue was due to a writing oversight. Sentences in lines 237-248 provide the definitions and assumptions serving as the preliminaries for Theorem 5.2, which is mentioned in Theorem 5.2. **To keep Theorem 5.2, which is cited properly, coherent with the original text, we reused the definitions and assumptions from reference [23], and this might potentially lead to disputes.**
- Due to NeurIPS rebuttal restrictions, we are not allowed to submit a revised paper during rebuttal. **We will make the quotes clear by adding the sentence " The following definitions and assumptions for Theorem 5.2 are from the reference [23]." at line 238 to indicate that content in lines 237-248 comes from the reference [23].**
---
Rebuttal 2:
Comment: Dear Reviewer iSRc,
We are writing as the authors of the paper titled "Neural Krylov Iteration for Accelerating Linear System Solving" (ID: 18161). Thanks again for your valuable comments and constructive suggestions, which are of great help to improve the quality of our work. We have followed your suggestions to significantly enhance the quality of our work. As the deadline for the author-reviewer discussion period is approaching (due on Aug 13), we are looking forward to your further comments and/or questions.
We sincerely hope that our rebuttal has properly addressed your concerns. If so, we would deeply appreciate it if you could raise your scores. If not, please let us know your further concerns, and we will continue actively responding to your comments and improving our work.
Best regards,
Authors
---
Rebuttal Comment 2.1:
Comment: Thanks for the response of the authors.
All of my previous concerns have been addressed in the rebuttal. I think this is good work, and I would like to raise my score from 5 to 6 for this submission.
---
Reply to Comment 2.1.1:
Title: Thanks for Your Positive Feedback
Comment: We appreciate your positive feedback and the efforts you made in reviewing our rebuttal. Thanks for your constructive comments and valuable suggestions. We will incorporate them into our paper in the future version. | Summary: This paper proposes a data-driven approach to accelerate solving linear systems. Linear systems are widespread in scientific computing applications like solving partial differential equations (PDEs), nonlinear systems, etc., so this method has potential for major impact. The proposed method builds upon the idea of neural operators that learn mappings between function spaces.
While most prior methods have attempted to accelerate solving linear systems by predicting a better initial guess, this work instead predicts the matrix invariant subspace. It uses this subspace to accelerate the Krylov Iterations. The method is validated on linear systems originating from PDEs and achieves speedups of around $5.5\times$ in computation time.
Strengths: 1. The presented method is novel, and to the best of my knowledge there doesn't exist a method that uses a similar approach
2. Experiments are well designed and use strong baseline methods (GMRES from PETSc). Speedup in computation time and iteration count over such a strong baseline presents a strong case for the proposed method.
Weaknesses: 1. The Neural Operators need to be trained on each individual problem. So while there is an improved convergence speed, there is an increased overall time when the training time is taken into account. However, this is a pretty common shortcoming across neural operator approaches and hence I haven't used this shortcoming as part of my overall scoring.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. Are there any cases where GMRES (with no-preconditioners) fails to converge while NeurKItt can solve it because it is data-driven?
2. Is it necessary to learn the NO for each system? Or is it possible to learn a single network and reuse it?
3. (Maybe I missed this) Is it possible to include the training times for the neural networks?
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful and valuable comments. We respond to your comments as follows and sincerely hope that our rebuttal could properly address your concerns. If so, we would deeply appreciate it if you could raise your score and your confidence. If not, please let us know your further concerns, and we will continue actively responding to your comments and improving our work.
## Weaknesses
> **The Neural Operators need to be trained on each individual problem. So while there is an improved convergence speed, there is an increased overall time when the training time is taken into account. However, this is a pretty common shortcoming across neural operator approaches and hence I haven't used this shortcoming as part of my overall scoring.**
- We agree with the reviewer that training the neural network involves additional costs, and this is a shortcoming of any neural operator approach. But considering the benefit of accelerating millions of linear system solves, such training costs become negligible relative to the time saved. This is one of the reasons why we claim the feasibility of NeurKItt in practice.
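As a hedged back-of-the-envelope sketch of this amortization argument (the per-solve time below is a hypothetical placeholder; the 150-minute training budget and the roughly 5.5x speedup are taken from the surrounding discussion):

```python
# Hypothetical amortization sketch: how many solves recoup the training cost?
train_cost = 150 * 60        # training budget in seconds (per Appendix H.2, from the rebuttal)
t_solve = 1.0                # hypothetical baseline time per solve, in seconds
speedup = 5.5                # approximate speedup reported for NeurKItt
saving_per_solve = t_solve * (1 - 1 / speedup)
break_even = train_cost / saving_per_solve  # solves needed to recoup training
```

Under these placeholder numbers, the training cost is recouped after on the order of ten thousand solves, which is small against "millions" of solves.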
## Questions
> **Are there any cases where GMRES (with no-preconditioners) fails to converge while NeurKItt can solve it because it is data-driven?**
- There can indeed be cases where GMRES fails to converge while NeurKItt succeeds.
- Specifically, NeurKItt consists of the subspace prediction module and the acceleration module. The subspace prediction module is data-driven, while the acceleration module uses the predicted subspace to improve the Krylov iteration convergence speed. Such cases arise from the acceleration module, which replaces the original Arnoldi relation with $(I-C_kC_k^H)AV_{m-k}=V_{m-k+1}\underline{H}_{m-k}$, taking the invariant subspace into the iteration. This can also be viewed as preconditioned linear system solving, replacing the original problem $Ax=b$ with $PAx=Pb$, where $P=(I-C_kC_k^H)$. This change directly improves the convergence speed by lowering the condition number of the given matrix $A$, which leads to cases where GMRES fails but NeurKItt succeeds. The data-driven Subspace Prediction Module makes this improvement possible, but it is not the direct cause.
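The projector idea can be sketched in a few lines (illustrative only: here the invariant subspace is computed exactly from eigenvectors as a stand-in for the subspace NeurKItt would predict). Given an orthonormal basis $C_k$ of an invariant subspace of $A$, the operator $P=I-C_kC_k^H$ is a Hermitian projector that annihilates that subspace:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 20, 4
A = rng.standard_normal((n, n))          # generic non-Hermitian matrix

# Take an exact invariant subspace spanned by k eigenvectors (a stand-in
# for the predicted subspace), then orthonormalize its basis with QR.
_, V = np.linalg.eig(A)
C_k, _ = np.linalg.qr(V[:, :k])          # orthonormal basis, shape (n, k)

P = np.eye(n, dtype=C_k.dtype) - C_k @ C_k.conj().T  # P = I - C_k C_k^H
```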
> **Is it necessary to learn the NO for each system? Or is it possible to learn a single network and reuse it?**
- We'd like to answer this for two different contexts.
- **First, for systems derived from different PDEs**, we did not conduct relevant experiments. But we believe it is possible to pre-train a single network and reuse it in this context. Recent works such as \[1\], \[2\], and \[3\] provide experimental evidence that a neural operator can serve as a pre-trained model, reusable for different PDEs with only fine-tuning.
- **Second, for systems derived from the same PDE but at different scales**, such as different sizes of the linear systems, we have conducted an experiment to show the potential of NeurKItt in Appendix I. In particular, we train the neural operator on linear systems with a fixed matrix size, predict subspaces for larger systems, and then use the predicted subspaces to accelerate solving those larger linear systems. The experimental results show that NeurKItt successfully accelerates the solving, which indicates the potential to learn a single network and reuse it for linear systems derived from the same PDE at different scales.
> **(Maybe I missed this) Is it possible to include the training times for the neural networks?**
- Yes, we have reported the training time for each dataset in Appendix H.2. In our experiments, all the training runs converge within 150 minutes, and continuing training brings no additional performance improvement once convergence is reached.
[1] Yang, Liu, et al. "In-context operator learning with data prompts for differential equation problems." *Proceedings of the National Academy of Sciences* 120.39 (2023): e2310142120.
[2] Hao, Zhongkai, et al. "DPOT: Auto-regressive denoising operator transformer for large-scale pde pre-training." *arXiv preprint arXiv:2403.03542* (2024).
[3] Zhou, Anthony, et al. "Strategies for Pretraining Neural Operators." *arXiv preprint arXiv:2406.08473* (2024).
---
Rebuttal 2:
Comment: Dear Reviewer XS9H,
We are writing as the authors of the paper titled "Neural Krylov Iteration for Accelerating Linear System Solving" (ID: 18161). Thanks again for your valuable comments and constructive suggestions, which are of great help to improve the quality of our work. We have followed your suggestions to significantly enhance the quality of our work. As the deadline for the author-reviewer discussion period is approaching (due on Aug 13), we are looking forward to your further comments and/or questions.
We sincerely hope that our rebuttal has properly addressed your concerns. If so, we would deeply appreciate it if you could raise your scores. If not, please let us know your further concerns, and we will continue actively responding to your comments and improving our work.
Best regards,
Authors
---
Rebuttal Comment 2.1:
Comment: I would like to thank the authors for their detailed responses to my questions. Thank you for pointing out the relevant sections in the appendix which clearly answer my remaining questions.
Considering this and the response to Reviewer CSA7 that improves upon the explanation of the method, I have increased the score to 8.
---
Reply to Comment 2.1.1:
Title: Thanks for Your Positive Feedback
Comment: We sincerely appreciate your positive feedback and the time you’ve dedicated to reviewing our rebuttal. Thank you for appreciating the merits of our work. Your constructive comments are invaluable to us. | Summary: The paper develops a new method, NeurKItt, for accelerating the solution of non-symmetric linear systems. NeurKItt constructs an approximate invariant subspace of non-symmetric matrix A using the Fourier Neural Operator. This invariant subspace is then used as the initial subspace within GMRES. The paper provides theoretical support for the proposed method and empirical results showing the benefits of NeurKItt.
Strengths: 1.) The paper develops an interesting new method for solving non-symmetric linear systems based on operator learning. In particular, I have not seen the Fourier Neural Operator used in this way before. I believe the idea has the potential to be quite useful and could have valuable implications for other problems.
2.) The authors provide theoretical analysis supporting the method.
3.) The method performs well in practice, which is what is most important. NeurKItt seems to yield significant reductions in the number of GMRES iterations, and also yields good wall-clock time speed-ups (which is particularly impressive as these matrices are very sparse). So I think the method has the potential to be very useful in practice.
Overall, the paper proposes an exciting new idea that appears to work quite well and has the potential to be quite useful.
Moreover, the paper addresses an important hard problem: improving the solutions of non-symmetric linear systems. This setting is quite challenging, as the non-symmetry makes things difficult. Often, methods for accelerating GMRES are highly problem-dependent, so a method that can lead to generic improvements is significant. I'm happy to recommend its acceptance to NeurIPS provided an issue given below in the weaknesses section is properly addressed.
Weaknesses: I'd say the main weakness of the paper is its presentation. In particular, a significant issue is that the paper lacks a precise description of how FNO is implemented to learn the invariant subspace. The appendix only briefly describes FNO but does not explain how the paper applies it. The current discussion in lines 148-151 is vague and unclear. In particular, the output of FNO is the output of an operator on a function, not a subspace. So, how are you getting the predictive subspace from the output of the FNO? I have ideas of how this is done, but rather than guessing, I would like the authors to explain the precise procedure adequately. This should then be added to the paper, ideally in the main body, but at least the supplement with clear pointers in the main paper of where to find the details.
I'm willing to consider raising my score if the authors address this point well. If they don't, I will lower my score, as I cannot recommend acceptance of a paper for which a significant part of the methodology is unclear.
Aside from this, the authors should do a careful spell-check, as there are many typos throughout, which is somewhat distracting.
Technical Quality: 4
Clarity: 3
Questions for Authors: See weaknesses.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: The authors have adequately addressed the current limitations of the method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful and valuable comments. We respond to your comments as follows and sincerely hope that our rebuttal could properly address your concerns. If so, we would deeply appreciate it if you could raise your score. If not, please let us know your further concerns, and we will continue actively responding to your comments and improving our work.
## Weaknesses
> **How are you getting the predictive subspace from output of the FNO? I have ideas of how this is done, but rather than guessing, I would like the authors to explain the precise procedure adequately. This should then be added to the paper, ideally in the main body, but at least the supplement with clear pointers in the main paper of where to find the details.**
- **Here, we use the 2D Darcy flow problem to demonstrate how the Fourier Neural Operator (FNO) predicts the subspace given the input function.**
- Let the input function be $a\in\mathbb{R}^{d_a\times d_a}$ from the 2D Darcy flow problem, where $d_a\in\mathbb{N}$ is the resolution of the input function, which yields a linear system $Ax=b$ for numerical solving, where matrix $A\in\mathbb{R}^{d_A\times d_A}$. We aim to predict a subspace $\mathcal{\hat{K}}$, represented as a matrix $X\in\mathbb{C}^{d_A\times n}$, for matrix $A$ in the linear system. First, the FNO's lifting layer $\mathcal{R}$ transforms $a$ to a higher-dimensional representation $v_0\in\mathbb{C}^{d_a\times d_a\times c}$, where the third dimension is the channel dimension. After $T$ Fourier layers, we obtain $v_T\in\mathbb{C}^{d_a\times d_a\times c}$. Because Fourier layers keep the shape unchanged, we apply a transformation $Q$ to map $v_T$ to the desired space, $\mathcal{\hat{K}}=Q(v_T)$, with $Q:\mathbb{C}^{d_a\times d_a\times c}\rightarrow\mathbb{C}^{d_A\times n}$.
- In practice, transformation $Q$ is a stack of transformation layers. It first flattens the first and second dimensions of $v_T$, obtaining $q_0\in\mathbb{C}^{d_a^2\times c}$. Then a fully-connected neural network (FNN) applies the mapping $\mathbb{C}^{d_a^2\times c}\rightarrow\mathbb{C}^{d_a^2\times n}$ to $q_0$, obtaining $q_1\in\mathbb{C}^{d_a^2\times n}$. Another FNN then applies the mapping $\mathbb{C}^{d_a^2\times n}\rightarrow\mathbb{C}^{d_A\times n}$ to $q_1$, obtaining the output $X\in\mathbb{C}^{d_A\times n}$. Finally, we apply QR decomposition to $X$ for orthogonalization, obtaining $\mathcal{\hat{K}}=\mathrm{span}\{X\}$.
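The final orthogonalization step can be sketched as follows (the matrix $X$ here is a random complex stand-in for the actual FNO output):

```python
import numpy as np

rng = np.random.default_rng(0)
d_A, n = 64, 8
# Random stand-in for the FNO output X in C^{d_A x n}.
X = rng.standard_normal((d_A, n)) + 1j * rng.standard_normal((d_A, n))
Q, _ = np.linalg.qr(X)  # columns of Q form an orthonormal basis of span{X}
```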
- **To address the problems identified in the reviewer's comments, we will make several changes to our paper.** First, we will replace lines 148-151 with more precise content to clarify the subspace prediction task. The revisions include the problem setup and a detailed description of how to predict the subspace with FNO. Second, we will add the 2D Darcy flow example mentioned above to Appendix C to give a concrete picture of how to predict the subspace with the FNO. The sentences quoted below are the revised version of lines 148-151:
- "Generally, for a linear system $Ax=b$ derived from the parametric PDE problem, to predict its invariant subspace $\mathcal{\hat{K}}$, the input to FNO is the input function $a\in\mathbb{R}^{d_a}$ from the given PDE, where $d_a\in\mathbb{N}$. We provide a detailed discussion in Appendix B about how to build a linear system problem from a PDE problem and what the input function is. Our task is to learn the mapping between two Hilbert spaces $\mathcal{G}:\mathbb{R}^{d_a}\rightarrow \mathbb{C}^{d_A\times n}$ using FNO.
- For FNO, the lifting transformation $\mathcal{R}$ first lifts the input $a$ to a higher-dimensional representation $v_0\in\mathbb{C}^{d_a\times c}$, where $c$ is the number of channels. Then we feed $v_0$ to the Fourier layers. After $T$ Fourier layers, we have $v_T\in\mathbb{C}^{d_a\times c}$ from the last Fourier layer, which keeps the same shape as $v_0$. The FNO's output $X=Q(v_T)$ is the projection of $v_T$ by the transformation $Q:\mathbb{C}^{d_a\times c}\rightarrow\mathbb{C}^{d_A\times n}$. NeurKItt then uses QR decomposition to orthogonalize the matrix $X$, obtaining the predicted subspace $\mathcal{\hat{K}}$. We provide more details about how to predict the subspace given the 2D Darcy flow problem in Appendix C."
- Due to NeurIPS rebuttal restrictions, we cannot submit a revised version during the rebuttal. We will incorporate these modifications into our paper in the future version. If these modifications do not adequately address the issues raised, please let us know your further concerns, and we will continue to actively respond to your comments.
> **Aside from this, the authors should do a careful spell-check, as there are many typos throughout, which is somewhat distracting.**
- Thank you for bringing this to our attention. We will conduct a thorough spell-check and correct any typos in the future version.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: I appreciate the authors' detailed reply to my concerns. This adequately addresses the issues I had with the presentation. I trust the authors to include this in the final revision, as they have promised here. Given this I will raise my score from 7 to 8, and presentation from 1 to 3.
---
Reply to Comment 1.1.1:
Title: Thanks for your positive feedback
Comment: Thank you for your positive feedback and for taking the time to read and respond to our rebuttal. We appreciate your constructive comments and concrete suggestions. We will incorporate them into our paper in the future version. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
FasterDiT: Towards Faster Diffusion Transformers Training without Architecture Modification | Accept (poster) | Summary: The paper presents FasterDiT, a method intended to speed up the training of Diffusion Transformers (DiT) without making changes to the architecture. By utilizing insights from the Probability Density Function (PDF) of the Signal-to-Noise Ratio (SNR), FasterDiT improves training strategies and the effectiveness of supervision. The paper also includes a thorough set of experiments and a new supervision approach as part of FasterDiT.
Strengths: The paper introduces an innovative approach to training strategies by considering the Probability Density Function (PDF) of the Signal-to-Noise Ratio (SNR), aiming to improve training efficiency. It includes a thorough empirical analysis with significant experimental results, providing robust evidence to support the findings. FasterDiT achieves significant acceleration in training Diffusion Transformers with a notable increase in training speed, making it a relevant and practical solution for enhancing the efficiency of training large-scale generative models.
Weaknesses: The paper would benefit from additional experiments to demonstrate the generalizability of the proposed approach. A more comprehensive comparison with existing methods for accelerating training of Diffusion Transformers would provide a broader context for evaluating the effectiveness of FasterDiT. Additionally, the scalability of FasterDiT to larger datasets or more complex tasks is not sufficiently discussed, potentially limiting its applicability. The absence of theoretical results and proofs in the paper limits the depth of understanding of the proposed method.
Technical Quality: 3
Clarity: 2
Questions for Authors: No questions.
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: A more extensive discussion regarding the drawbacks of the proposed approach is needed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: $\color{red}{Question1:}$ The paper would benefit from additional experiments to demonstrate the generalizability of the proposed approach. Additionally, the scalability of FasterDiT to larger datasets or more complex tasks is not sufficiently discussed, potentially limiting its applicability.
$\color{blue}{Response1:}$ Thanks for your suggestion. As requested, we have conducted three types of additional experiments to further explore the generalization of FasterDiT. Please refer to **Response to All Reviewers** for details.
$\color{red}{Question2:}$ A more comprehensive comparison with existing methods for accelerating training of Diffusion Transformers would provide a broader context for evaluating the effectiveness of FasterDiT.
$\color{blue}{Response2:}$ Previous well-known acceleration methods, such as MaskDiT [1], MDT [2], and MDTv2 [3], have achieved significant performance improvements by **modifying architectures** and redesigning the training pipelines of DiT. In contrast, our work approaches the problem from a different perspective. Our aim is to improve DiT training with minimal modifications and **without any changes to the architecture**. This approach is complementary to previous methods. In the future, we plan to explore how combining FasterDiT with more structured improvements can achieve even more efficient generation performance.
$\color{red}{Question3:}$ The absence of theoretical results and proofs in the paper limits the depth of understanding of the proposed method.
$\color{blue}{Response3:}$ Thank you for your suggestions. Our work is to analyze the relationship between SNR PDFs and generation performance through a large number of experiments. We try to derive empirical results and insights to accelerate diffusion training. We will further explore this issue from a theoretical perspective in the future.
$\color{red}{Limitations:}$ A more extensive discussion regarding the drawbacks of the proposed approach is needed.
$\color{blue}{Response:}$ The main limitation of this paper lies in the lack of exploration of large-scale experiments, such as 2K high-resolution images, text-to-image generation, and video generation. Among these, we particularly focus on the text-to-image generation aspect. Specifically, in the class-conditional generation described in this paper, the DiT block only needs to process image tokens. However, in some text-to-image architectures, such as SD3 [4], self-attention needs to handle sequences combining text and visual tokens. The features from different sources (such as T5 and VAE) may lead to potential instability. We plan to further investigate this issue in the future. Besides, we will update our section on 'Limitations' in the next version of our paper. Thanks for your suggestion.
*Refs:*
[1] Zheng H, Nie W, Vahdat A, et al. Fast training of diffusion models with masked transformers[J]. arXiv preprint arXiv:2306.09305, 2023.
[2] Gao S, Zhou P, Cheng M M, et al. Masked diffusion transformer is a strong image synthesizer[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023: 23164-23173.
[3] Gao S, Zhou P, Cheng M M, et al. MDTv2: Masked Diffusion Transformer is a Strong Image Synthesizer[J]. arXiv preprint arXiv:2303.14389, 2023.
[4] Esser P, Kulal S, Blattmann A, et al. Scaling rectified flow transformers for high-resolution image synthesis[C]//Forty-first International Conference on Machine Learning. 2024. | Summary: This work aims at solving the slow training convergence of Diffusion Transformer, from the perspective of Signal-to-Noise Ratio (SNR).
Different from other works, the authors formulate the probability density function (PDF) of the SNR during training, and then leverage this SNR PDF to analyze the association between training performance and robustness across some common DiT pipelines.
The finding of a trade-off between method robustness and performance leads the authors to propose FasterDiT, which significantly accelerates DiT's training convergence.
Strengths: 1. The motivation is very clear. DiT suffers from slow convergence and this work successfully solved this issue without any changes in model architecture.
2. The formulation of SNR PDF is novel. With this analysis tool, authors can check the robustness and performance among various DiT training pipeline.
3. The empirical experiments are sufficient, and the final results compared with SOTA are convincing to me.
Weaknesses: 1. Current Table 1 compares the CFG (classifier-free guidance) results.
Please compare the class-conditional results of FasterDiT with DiT and SiT under the same setting as Table 1 (or even more iterations of FasterDiT).
Usually we need both CFG and class-conditional results to check the training convergence. Currently, Figure 1 (right) with few iterations is not convincing.
2. I am still unclear how you change the std of data without changing dataset. Please specify this point.
Technical Quality: 3
Clarity: 3
Questions for Authors: Listed in weakness.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have discussed the limitation that the best performance of FasterDiT cannot be reached without sufficient GPUs.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: $\color{red}{Question1:}$ Current Table 1 compares the CFG (classifier-free guidance) results. Please compare the Class-conditional results of Faster-DiT with DiT and SiT, under the same setting in Table 1 (or even more iterations of Faster DiT). Usually we need both CFG and Class-conditional results to check the training convergence. Currently Figure 1 right with small iterations is not convincing.
$\color{blue}{Response1:}$ Thank you for your suggestion.
1. As requested, we have added the following content: (1) We continue training FasterDiT up to 2000k iterations. (2) We additionally report the generation results without CFG (classifier-free guidance). The results indicate that FasterDiT still demonstrates a significant advantage. Please refer to the **Response to All Reviewers** for details.
2. All experiments in Figure 1 are conducted on ImageNet and trained for 100k iterations. This setting stems from our observation of the training process: the trend of the training FID at 100k iterations is similar to the trend over a longer training period. For example, the table below shows one set of our experiments on ImageNet. It shows that the trend of FID-50k at 100k iterations is similar to the trend of FID-50k at 200k iterations.
| Type | Std | FID-50k (100k) | FID-50k (200k) |
|:------:|:------:|:------:|:------:|
|DDPM cosine schedule| 0.5 | 43.22 | 22.08 |
| | 0.6 | 61.15 | 25.70 |
| | 0.7 | 64.18 | 25.96 |
| | 0.8 | 72.39 | 28.94 |
| | 0.9 | 78.45 | 30.45 |
| | 1.0 | 98.91 | 42.41 |
| | 1.1 | 81.26 | 38.10 |
| | 1.2 | 111.83 | 42.44 |
$\color{red}{Question2:}$ I am still unclear how you change the std of data without changing dataset. Please specify this point.
$\color{blue}{Response2:}$ The $std$ here refers to the standard deviation of the input. We can scale the standard deviation simply by multiplying the input by a scale factor, without modifying the dataset itself.
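A minimal sketch of this rescaling (the latents below are random stand-ins for VAE outputs):

```python
import numpy as np

rng = np.random.default_rng(0)
latents = rng.standard_normal(10_000)        # stand-in for VAE latents
target_std = 0.5
# Rescale by a scalar factor only -- the dataset itself is untouched.
x = latents * (target_std / latents.std())
```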
---
Rebuttal Comment 1.1:
Comment: Thanks for your experiments and answers! These answers have solved my problems.
I will keep my ratings. | Summary: This paper proposes FasterDiT, a diffusion model training method that considers the data distribution in the definition of the signal-to-noise ratio. It formulates the SNR in a new framework, estimates the PDF of the SNR, and then employs it to improve the training efficiency of DiT. Experimental results show that FasterDiT achieves comparable FID with 1/7 of the training time.
Strengths: 1. The idea of re-formulating the SNR in the diffusion model is a very interesting and novel strategy for me. I believe this paper has a good technical contribution.
2. The experimental results of DiT show both good acceleration and generation performance.
Weaknesses: 1. This paper has bad writing quality. There are some typos, and the explanation of the SNR is not clear to me. I'm confused about many questions: (1) What does the std mean in Line 109? Does it mean the std of the value of pixels in the images? (2) Why can we assume std^2 approximate K(I)/std^2 as a constant C(I)? (3) Authors mention "robustness" many times in the paper, for "training robustness" and "data robustness". What does it mean in detail? (4) In Figure 6, the caption inside the figure is "multi-step balance", while the caption after the figure is "multiple-step balance". Please use the same description. What does multi-step balance mean? Does that mean the new training strategy with SNR? The "single-step supervision" is also not a good choice here. It should be something like "directionality of velocity" or "single-step supervision (ours)", since there has already been single-step supervision in the traditional training method.
2. Does the proposed method generalize well to diffusion models besides DiT, such as latent diffusion models? Please discuss this. If so, experimental results should be provided, since experiments on only DiT & ImageNet are not very convincing.
Technical Quality: 3
Clarity: 1
Questions for Authors: 1. Does $x_*$ in the paper indicate the $x_0$ (real images)? If so, I advise replacing it with $x_0$ to align with previous works.
2. Section 4: Improving DiT Training. The "." should be removed.
3. The space between lines 140 and 141 is excessively reduced.
In summary, I think this paper may have a good technical contribution. But it's really difficult for me to understand the details of this paper.
Confidence: 3
Soundness: 3
Presentation: 1
Contribution: 3
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: $\color{red}{Question1:}$ What does the $std$ means in Line 109. Does it mean the $std$ of the value of pixels in the images?
$\color{blue}{Response1:}$ The $std$ here refers to the standard deviation of the input. For traditional diffusion, it refers to the standard deviation of pixel values. For latent diffusion, it refers to the standard deviation of the latents after VAE encoding. In this paper, our experiments are primarily based on latent diffusion.
$\color{red}{Question2:}$ Why can we assume $std^2$ approximate $K(I)/std^2$ as a constant $C(I)$?
$\color{blue}{Response2:}$ The approximation maintains our definition as an extension of the previous one with minimal changes. Specifically, in previous definitions, the standard deviation ($std$) is the most crucial factor in quantifying the Signal-to-Noise Ratio ($SNR$). For ease of analysis, previous works [1, 4] treat both the input and noise as normal distributions, resulting in:
$SNR_{pre} = \frac{(\alpha_t \times std_{input})^2}{(\sigma_t \times std_{noise})^2}=\frac{\alpha_t^2}{\sigma_t^2}, std_{input}=std_{noise}=1$
In FasterDiT, our definition extends this in the following two aspects: First, we take the standard deviation of the input back into consideration instead of assuming it to be one. Second, some work [5] has shown that $SNR$ relates to more properties of the input besides $std_{input}$, such as resolution. Here, we denote these additional properties as $K(I)/std_{input}^2$. Thus, we obtain:
$SNR = \frac{(\alpha_t \times std_{input})^2}{\sigma_t^2}\times\frac{K(I)}{std_{input}^2}$
In our work, we pay more attention to the former aspect and relax the definition. We assume that $std_{input}^2$ is a decoupled parameter in $K(I)$ for ease of analysis, similar to the previous definition. This helps us simplify the modeling of the SNR PDF with minimal impact on the qualitative results. We plan to conduct a more in-depth exploration of this in our future work.
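A small numerical sketch of the two definitions (the cosine schedule here is illustrative rather than the paper's exact schedule, and the residual factor $C(I)$ is set to 1):

```python
import numpy as np

t = np.linspace(1e-3, 1 - 1e-3, 100)
alpha_t = np.cos(0.5 * np.pi * t)            # illustrative cosine schedule
sigma_t = np.sin(0.5 * np.pi * t)
snr_pre = alpha_t**2 / sigma_t**2            # previous definition (std = 1)

std_input = 0.5                              # std of the (scaled) input
snr_new = (alpha_t * std_input) ** 2 / sigma_t**2  # std-aware definition, C(I) = 1
```

With $C(I)=1$, the std-aware definition is simply the previous one scaled by $std_{input}^2$, which shows how rescaling the input shifts the whole SNR curve.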
$\color{red}{Question3:}$ Authors mention "robustness" for many times in the paper for "training robustness", "data robustness". What does it mean in detail?
$\color{blue}{Response3:}$ Here, "robustness" refers to the stability of training performance despite variations in the standard deviation of the input data. We will standardize this terminology in future versions.
$\color{red}{Question4:}$ In Figure 6, the caption inside the figure is "multi-step balance", while the caption after the figure is multiple-step balance. Please use the same description. What does multi-step balance mean? Does that mean the new training strategy with SNR? The "single-step supervision" is also not a good choice here. It should be something like "directionality of velocity" or "single-step supervision (ours)" since there has already been single-step supervision in the traditional training method.
$\color{blue}{Response4:}$ The "multi-step balance" refers to modifying the SNR PDF by adjusting the proportion of different timesteps during training. We will standardize the terminology and improve the naming of the direction loss to enhance clarity. Thank you for your suggestions.
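One simple way to realize such a multi-step balance — shown purely as an illustrative sketch, not necessarily the exact FasterDiT scheme — is to draw training timesteps from a non-uniform density such as a logit-normal, which reallocates supervision across noise levels and thereby reshapes the effective SNR PDF:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_timesteps(n, loc=0.0, scale=1.0):
    # Logit-normal sampling of t in (0, 1): concentrates training on
    # mid-range timesteps instead of sampling them uniformly.
    return 1.0 / (1.0 + np.exp(-rng.normal(loc, scale, n)))

t = sample_timesteps(100_000)
assert t.min() > 0.0 and t.max() < 1.0
```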
$\color{red}{Question5:}$ Does the proposed method generalize well to diffusion models besides DiT, such as latent diffusion models? Please discuss this. If so, experimental results should be provided, since experiments only on DiT & ImageNet are not very convincing.
$\color{blue}{Response5:}$ Thanks for the suggestion. Yes, it does. The method proposed in FasterDiT is decoupled from the model architecture. It could be interesting to explore its performance alongside DiT. As requested, we employed the U-Net used in LDM [2] and another well-known architecture, U-ViT [3], to investigate the influence of our method. The results demonstrate that FasterDiT continues to enhance the convergence rate of these models. Please refer to **Response to All Reviewers** for more details.
$\color{red}{Question6:}$ (1) Does $x_*$ in the paper indicate $x_0$ (real images)? If so, I advise replacing it with $x_0$ to align with previous works. (2) Section 4: Improving DiT Training. The "." should be removed. (3) The space between lines 140 and 141 is excessively reduced. In summary, I think this paper may have a good technical contribution, but it is really difficult for me to understand the details of this paper.
$\color{blue}{Response6:}$ Thank you very much for your careful review of our submission and your detailed suggestions. In this paper, we explore DiT training with both diffusion and flow matching. We didn't choose $x_0$ to denote the input because of the different timestep definitions. Different from DDPM, in Rectified Flow, $x (t=0)$ refers to the pure noise. We will consider clearer expressions in future versions of our paper. Besides, we will address all the other issues you raised and thoroughly review the entire manuscript to prevent similar issues from occurring in the future.
*Refs:*
[1] Salimans T, Ho J. Progressive Distillation for Fast Sampling of Diffusion Models[C]//International Conference on Learning Representations.
[2] Rombach R, Blattmann A, Lorenz D, et al. High-resolution image synthesis with latent diffusion models[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022: 10684-10695.
[3] Bao F, Nie S, Xue K, et al. All are worth words: A vit backbone for diffusion models[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2023: 22669-22679.
[4] Hang T, Gu S, Li C, et al. Efficient diffusion training via min-snr weighting strategy[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023: 7441-7451.
[5] Hoogeboom E, Heek J, Salimans T. simple diffusion: End-to-end diffusion for high resolution images[C]//International Conference on Machine Learning. PMLR, 2023: 13213-13232.
---
Rebuttal 2:
Title: Response to author rebuttal
Comment: After reading the response from the authors, I decide to increase my rate from 5 to 6. I believe this paper has real novelty in the training of diffusion models and it still can be greatly improved by better writing. Please keep polishing it. | Summary: The paper focuses on accelerating the training process of Diffusion Transformers (DiT) without modifying their architecture. The authors identify two primary issues: inconsistent performance of certain training strategies across different datasets, and limited effectiveness of supervision at specific timesteps.
Key contributions include:
1. Extended Definition of Signal-to-Noise Ratio (SNR)
2. Extensive experiments and empirical findings for SNR PDF
3. A new supervision method
Strengths: - FasterDiT achieves competitive results (2.30 FID on ImageNet 256 resolution at 1000k iterations) while being seven times faster in training compared to traditional DiT (2.27 FID).
- The paper presents a large number of experiments to empirically validate the proposed method, along with various discussions and insights on the SNR PDF.
- By generalizing the definition of SNR and analyzing various training strategies through SNR PDFs, the paper shows promising outcomes of faster training.
Weaknesses: - The experiments were conducted on 256-resolution ImageNet only. It would be interesting to validate the proposed method on larger resolutions (such as 512, 1024, etc.). Acceleration is more critical to those scenarios.
- The figures could be improved. The rendered text is not easy to read.
Technical Quality: 2
Clarity: 2
Questions for Authors: see weakness
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: $\color{red}{Question1:}$ The experiments were conducted on 256-resolution ImageNet only. It would be interesting to validate the proposed method on larger resolutions (such as 512, 1024, etc.). Acceleration is more critical to those scenarios.
$\color{blue}{Response1:}$ Thanks for your suggestion. As requested, we have conducted experiments with higher resolution (ImageNet 512x512). It shows that FasterDiT could still achieve faster convergence speed compared to the original DiT. Please refer to our **Response to All Reviewers** for more details. Due to the limited time available for rebuttal, we will explore higher resolution and larger training scales in the future.
$\color{red}{Question2:}$ The figures could be improved. The rendered text is not easy to read.
$\color{blue}{Response2:}$ Thank you very much for your kind suggestions. We will improve our figures in the future version of our manuscript.
---
Rebuttal Comment 1.1:
Title: comment
Comment: Thank you for the experiments. I hope the authors will add the high-res results to the final paper. | Rebuttal 1:
Rebuttal: ## Response to All Reviewers
We appreciate all of your feedback. Your valuable suggestions have significantly improved our manuscript.
All reviewers have suggested that we conduct additional experiments to demonstrate the effectiveness and generalization ability of our method. As requested, we have added the following **three types of additional experiments**:
1. Training FasterDiT with longer iterations and performance without CFG (R_dNCD).
2. Training FasterDiT with higher resolution (R_fAgp, R_qPr3).
3. Applying the FasterDiT strategy to different diffusion models (R_ttWn, R_FhtE).
Due to the limited time available for the rebuttal, we have conducted experiments to the maximum extent that our resources allow. The larger-scale experiments mentioned by the reviewers, such as 1024 resolution, text-to-image generation, and video generation, cannot be completed within this short period. We plan to explore these in future work. Thank you for all the suggestions.
### 1. Longer Training of FasterDiT:
We further improve the performance of FasterDiT with longer training iterations and report more results of FID-50k on ImageNet 256 resolution.
It shows that even without any structural modifications, FasterDiT could achieve comparable results to the original DiT at 1000k iterations. Furthermore, when the training period is extended to 2000k iterations, FasterDiT's performance further improves, achieving an FID-50k score of 2.03, demonstrating the effectiveness of our approach.
**Performance without Classifier-Free Guidance (CFG)**
| Method | Models | Training Samples | FID-50k |
|:--------|:--------:|:--------:|:--------:|
| DiT | DiT-XL/2 | 7000k x 256 | 9.60 |
| SiT | DiT-XL/2 | 7000k x 256 | 8.60 |
| FasterDiT | DiT-XL/2 | 1000k x 256 | 8.72 |
| | | 1500k x 256 | 8.22 |
| | | **2000k x 256** | **7.91** |
**Performance with Classifier-Free Guidance (CFG)**
| Method | Models | Training Samples | FID-50k |
|:--------|:--------:|:--------:|:--------:|
| DiT (*cfg=1.5*) | DiT-XL/2 | 7000k x 256 | 2.27 |
| SiT (*cfg=1.5*) | DiT-XL/2 | 7000k x 256 | 2.06 |
| FasterDiT (*cfg=1.5*) | DiT-XL/2 | 1000k x 256 | 2.30 |
| | | 1500k x 256 | 2.12 |
| | | **2000k x 256** | **2.03** |
### 2. Higher Resolution:
As requested, we explore FasterDiT on higher resolution generation experiments to demonstrate its generalization ability. Specifically, we apply our method to DiT-B/2 and DiT-L/2 for ImageNet 512x512 generation training. The results are shown in the table below.
FasterDiT achieves faster convergence across all experiments. When trained with 200k iterations, we improve the FID-10k performance of DiT-B/2 by 18.78 and DiT-L/2 by 17.93. This demonstrates the effectiveness of our method in high-resolution tasks.
| Method | Models | Training Samples | Resolution | FID-10k |
|:--------|:--------:|:--------:|:--------:|:--------|
| DiT | DiT-B/2 | 100k x 128 | 512x512 | 93.36 |
| | | 200k x 128 | 512x512 | 77.11 |
| FasterDiT | DiT-B/2 | 100k x 128 | 512x512 | 77.85 (-15.51) |
| | | 200k x 128 | 512x512 | 58.33 **(-18.78)** |
| DiT | DiT-L/2 | 100k x 64 | 512x512 | 87.24 |
| | | 200k x 64 | 512x512 | 67.29 |
| FasterDiT | DiT-L/2 | 100k x 64 | 512x512 | 71.58 (-15.66) |
| | | 200k x 64 | 512x512 | 49.36 **(-17.93)** |
### 3. Different Architectures:
As requested, we explore our training method with other diffusion models besides DiT, such as Latent Diffusion Models (UNet architecture) [1] and U-ViT [2]. The results are shown in the table below.
It shows that with our training methods, the performance of both U-ViT and UNet improves, demonstrating that our method can potentially generalize to more diffusion architectures.
| Model | Training Samples | FID-10k |
|:--------|:--------:|:--------:|
| U-ViT-L | 200k x 128 | 50.22 |
| U-ViT-L + Ours | 200k x 128 | **37.12** |
| UNet | 200k x 128 | 66.73 |
| UNet + Ours | 200k x 128 | **60.07** |
*Refs:*
[1] Rombach R, Blattmann A, Lorenz D, et al. High-resolution image synthesis with latent diffusion models[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022: 10684-10695.
[2] Bao F, Nie S, Xue K, et al. All are worth words: A vit backbone for diffusion models[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2023: 22669-22679.
Pdf: /pdf/f200681cd98745e244a2a61f49d54b060786c7d4.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper presents observations on training strategies for Diffusion Transformers (DiT) using Signal-to-Noise Ratio (SNR) Probability Density Function (PDF) analysis. Through extensive experiments, the authors derive insights into training performance and robustness. Based on these findings, they propose a method to accelerate the training process of DiT without modifying the model architecture.
Strengths: 1. The problem addressed is practical, focusing on the significant computational demands of training Diffusion Transformer models.
2. The authors propose a simple yet effective technique to improve the training efficiency of Diffusion Transformer models without modifying the architecture.
3. The paper is well-structured with clear motivation, methodology, and results.
Weaknesses: 1. The paper lacks experiments with higher resolutions such as 512x512 or 1024x1024, which could provide insights into the method's scalability for more complex image generation tasks.
2. The hyperparameter 'std' requires tuning to find the appropriate setting, which may introduce additional complexity in implementing the method across different datasets or tasks.
Technical Quality: 3
Clarity: 3
Questions for Authors: Why does adding cosine similarity loss for velocity direction supervision accelerate the convergence rate of the model?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The scalability of this method is not thoroughly evaluated, including its effectiveness for higher resolutions, other datasets, or different types of generative tasks (e.g., text-to-image, video generation).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: $\color{red}{Question1:}$ The paper lacks experiments with higher resolutions such as 512x512 or 1024x1024, which could provide insights into the method's scalability for more complex image generation tasks.
$\color{blue}{Response1:}$ Thanks for the suggestion. As requested, we have conducted experiments on a higher resolution (ImageNet 512x512) using two different model scales (DiT-B and DiT-L). The results show that FasterDiT still achieves faster convergence at higher resolutions. For more details, please refer to our **Response to All Reviewers**.
$\color{red}{Question2:}$ The hyperparameter 'std' requires tuning to find the appropriate setting, which may introduce additional complexity in implementing the method across different datasets or tasks.
$\color{blue}{Response2:}$ Thank you for the question. We will explain it from two perspectives.
Firstly, we believe that an appropriate standard deviation (std) of the diffusion input can influence training performance in certain cases. This observation is similar to the concept of a 'magic number' or 'scale factor' in previous well-known studies [1, 2]. Using the CIFAR10 toy setting as an example, when we apply the most common image normalization, the standard deviation (std) of the data is approximately 0.4, and the training FID result is 51.40. However, when we adjust the std to 0.8, the result of the same training method improves significantly to 36.31. This demonstrates the importance of std adjustment.
| Model | Std |Training Samples | FID-10k |
|:--------|:--------:|:--------:|:--------|
| DiT-S/2 | 0.4 | 50k x 128 | 51.40 |
| DiT-S/2 | 0.8 | 50k x 128 | 36.31 (-15.09) |
Secondly, the complexity brought by tuning the 'std' will decrease as research on diffusion deepens. For instance, [2] suggested that as resolution increases, training tends to favor a smaller std. In our work, the SNR PDF provides a new analytical tool and offers insights into this issue. We hope it can facilitate a deeper understanding of the diffusion process.
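The std adjustment described above amounts to a one-line rescaling of the diffusion input. A minimal sketch (the helper name and the target value 0.8 are illustrative, taken from the CIFAR10 example):

```python
import numpy as np

def rescale_to_std(x, target_std=0.8):
    # Rescale data so its global standard deviation matches target_std;
    # the returned scale factor must be undone after sampling.
    scale = target_std / x.std()
    return x * scale, scale

x = np.random.default_rng(0).normal(0.0, 0.4, 10_000)  # std ~ 0.4, as with common normalization
x_scaled, scale = rescale_to_std(x)
assert abs(x_scaled.std() - 0.8) < 1e-9
```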
$\color{red}{Question3:}$ Why does adding cosine similarity loss for velocity direction supervision accelerate the convergence rate of the model?
$\color{blue}{Response3:}$ The direction loss of velocity serves as an auxiliary form of supervision that enables a more fully supervised training process. To provide further insights, we have conducted an ablation study. We train two DiT-B models: one with FasterDiT and the other with FasterDiT without the direction loss. We then visualize the training loss at different timestep intervals, which can be seen in our **submitted PDF**. The results show that the model with the direction loss demonstrates a significantly better training effect, particularly during the diffusion period of relatively low noise ($0.4 < t < 0.9$). The end-to-end performance comparison is shown in the table below.
| Method | Models | Training Samples | Resolution | FID-10k (100k) | FID-10k (200k) |
|:--------|:--------:|:--------:|:--------:|:--------:|:--------:|
| FasterDiT (w/o direction loss) | DiT-B/2 | 100k x 128 | 512x512 | 83.29 | 63.11 |
| FasterDiT | DiT-B/2 | 100k x 128 | 512x512 | **77.85** | **58.33** |
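For concreteness, a cosine-similarity direction loss of the kind discussed above can be sketched as follows (NumPy for illustration; the actual FasterDiT loss may differ in details):

```python
import numpy as np

def direction_loss(v_pred, v_target, eps=1e-8):
    # Penalize misalignment between predicted and target velocity directions:
    # 1 - cos(v_pred, v_target), averaged over the batch.
    vp = v_pred.reshape(len(v_pred), -1)
    vt = v_target.reshape(len(v_target), -1)
    cos = (vp * vt).sum(axis=1) / (
        np.linalg.norm(vp, axis=1) * np.linalg.norm(vt, axis=1) + eps)
    return (1.0 - cos).mean()

v = np.random.default_rng(0).normal(size=(4, 3, 8, 8))
assert direction_loss(v, v) < 1e-6               # aligned velocities -> ~0
assert abs(direction_loss(v, -v) - 2.0) < 1e-6   # opposite directions -> ~2
```

Note the loss is scale-invariant: only the direction of the velocity is supervised, complementing the magnitude supervision from the standard regression loss.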
$\color{red}{Limitations}:$ The scalability of this method is not thoroughly evaluated, including its effectiveness for higher resolutions, other datasets, or different types of generative tasks (e.g., text-to-image, video generation).
$\color{blue}{Response:}$ Thanks for your suggestion. We have provided experiments with higher resolutions and with different model architectures for FasterDiT. We will explore the more complex tasks you mentioned, such as text-to-image generation and video generation, in our future work.
*Refs:*
[1] Rombach R, Blattmann A, Lorenz D, et al. High-resolution image synthesis with latent diffusion models[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022: 10684-10695.
[2] Chen T. On the importance of noise scheduling for diffusion models[J]. arXiv preprint arXiv:2301.10972, 2023.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. These additional experiments address most of my concerns. Based on this, I am willing to upgrade my score. | null | null | null | null | null | null |
Unrolled denoising networks provably learn to perform optimal Bayesian inference | Accept (poster) | Summary: This work seeks to understand why neural network-based approaches for inverse problems can outperform methods that incorporate hand-crafted priors. In the Bayesian setting, it is known that when the true prior is available, the optimal estimator (in a mean square sense) is the conditional mean. However, in practice, the true prior is never known, and learning-based methods, such as those based on unrolling, must implicitly estimate such a prior from samples of the unknown prior. While such approaches have shown to perform well empirically, it is unclear from a mathematical perspective if they are learning the true prior. This work addresses this by analyzing what estimators do unrolled AMP-based networks converge to. It is shown that when the underlying prior comes from a product distribution with subGaussian coordinates, then the network trained with SGD will converge to the optimal denoiser, if the prior were known. This convergence occurs in the high-dimensional asymptotic regime. The authors support their theoretical results with empirics showing that the learned denoisers converge to the optimal denoisers under the setting of their theorem, along with extensions to other problems, such as the nonlinear rank-1 matrix estimation problem.
Strengths: - The work is tackling an interesting and timely question, which is why do neural-network based methods outperform algorithms based on hand-crafted priors and can they reach the performance of “optimal” methods.
- Toy empirical results help in supporting the theory that the training scheme and network architectures converge to the optimal Bayes denoisers.
Weaknesses: - Some aspects regarding the readability and presentation could be improved, especially as they relate to the main theorems. I have noted my questions below.
- The assumptions on the prior distribution being a smooth product distribution feels somewhat limiting. In particular, the authors note that their overparameterization result is dimension-free, but I wonder whether this is a result of the fact that with a smooth product distribution, the dynamics are characterized by the 1-d state evolution. Do the authors have a sense of whether the result would still be dimension-free if the product distribution assumption was relaxed?
Technical Quality: 3
Clarity: 3
Questions for Authors: - I am confused about how the notion of complexity is applied in the Theorem 2. In particular, what is the function $\phi$ in the statement of Theorem 2? Is it the underlying “optimal” denoiser $f_{\ell}^*$ or is it the one-hidden layer ReLU network? It is also not clear in the statements of Lemma 3 and Theorem 3 in the appendix what the function classes $\mathcal{F}$ are. Are these functions of the form $F^*$ with bounded coefficients $(a_i, w_{i,1},w_{i,2})$? Moreover, to apply Lemma 3, the $\phi_i$’s need to be infinite-order smooth. How does this arise in the assumptions of the Theorem?
- Also, why is the error on the right-hand side of the inequality in terms of $\hat{v}_L$ and $\hat{x}_L$ instead of $v_L$ and $x_L$, the outputs of the optimal Bayes AMP? It’s not clear why it should be in terms of the outputs of the learned AMP, since the claim is that the learned AMP’s ell2 error is close to the optimal Bayes AMP ell2 error.
- Have the authors conducted experiments in the case when the distribution is not a product prior? Do similar results extend to this case?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors discuss the limitations of their theory in the Conclusion section, noting that the assumptions on the prior distribution and linearity/Gaussianity of the inverse problem are in particular, limited settings. There are no broader societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful questions and remarks.
- **On the smoothness and product distribution assumption**: Please see the global response for the discussion about the assumptions on the prior.
- **On the notion of the complexity of denoisers**: We think that this confusion is because of a typo/underspecification in the statements of Theorems 2 and 3. The notion of complexity is applied to the underlying optimal denoiser $f_\ell^*$ (i.e., the function $\phi$). In Theorems 2 and 3, $M_0$ and $N_0$ are polynomial in the sum of the complexities of $f_{\ell}^*$ for $\ell$ from $0$ to $L-1$, i.e., polynomial in $\sum_{\ell \in [0, L-1]} C(f_{\ell}^*, R(\log \frac{1}{\epsilon})^{1.5})$, and in $\frac{1}{\epsilon}$. We apologize for the confusion caused by the typo and will update it in the revised draft of the paper.
- **On applying function classes and Lemma 3**:
- When we refer to the function class $\mathcal{F}$, we are indeed referring to target functions $F^*(x)$ with bounded coefficients. Moreover, to obtain the learning result for the $\ell^{th}$ layer denoiser, we apply Lemma 3 representing $f_{\ell}^*(x)$ as a target function $F^*(x)$.
- Note that this is equivalent to setting $p=1$ and $a_1=1$, along with $\phi_1 = f_\ell^*$ (where $a_1^*=1$ and $w_{1, 1}^*=\langle 1, 0\rangle $, $w_{1, 2}^*=\langle 0, 1\rangle$; note we are treating "$x$" as the two dimensional input $\langle x, 1 \rangle$ into the target function).
- Recall that $f_{\ell}^*$ is the optimal denoiser at noise level $\tau_\ell^*$ (i.e., $f_{\ell}^*(y) = E[ \mathbf{x} | \mathbf{x} + \tau_\ell^* \mathbf{z} = y ]$); therefore, it is an infinite order smooth function. To get an intuition for why this is the case, note that by Tweedie’s formula, the optimal denoiser is, up to a shift by $x$ and scaling, given by the derivative of $\log p(x; \tau_\ell^*)$, i.e. of the log-density of the prior convolved with some Gaussian. Even if the prior were a mixture of Dirac deltas, the prior convolved with a Gaussian is a mixture of Gaussians, and the log-density of this has derivatives at all orders. In particular, $p(x; \tau_\ell^*)$ is always positive and continuous (due to the convolution with a Gaussian, no matter the prior), so all higher order derivatives of $\frac{\nabla p(x; \tau_\ell^*)}{p(x; \tau_\ell^*)}$ exist by applying the quotient rule.
- We thank the reviewer for bringing up these details and will expand on them in the revisions.
- **On the right-hand-side error of the inequality**: This is a typo. The right-hand side of theorem 2 should be $v^L$ and $x^L$ instead of $\hat{v}^L$ and $\hat{x}^L$ as mentioned in the formal version of the theorem in the appendix (theorem 3). This follows from the closeness of the state-evolution parameters ($\tau^L$ and $\hat{\tau}^L$) of iterates $x^L$ and $\hat{x}^L$. We thank the reviewer for pointing this out, and we will update it in the revised draft.
- **On experiments with non-product signal priors**: We have subsequently extended our experiments to non-product priors. We do see similar advantages to unrolling over Bayes-AMP show up in this setting; please see the global response and the supplemental second figure in the attached pdf for a discussion of the non-product prior experiments.
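The Tweedie-formula argument above (optimal denoiser $= y + \tau^2 \nabla \log p(y; \tau)$, smooth even for Dirac-mixture priors) can be checked numerically; the three-atom prior below is an arbitrary illustration:

```python
import numpy as np

ATOMS = np.array([-1.0, 0.0, 1.0])      # illustrative Dirac-mixture prior
WEIGHTS = np.array([0.25, 0.5, 0.25])
TAU = 0.5

def smoothed_density(y):
    # Density of the discrete prior convolved with N(0, TAU^2).
    return sum(w * np.exp(-(y - a) ** 2 / (2 * TAU ** 2))
               for a, w in zip(ATOMS, WEIGHTS)) / (TAU * np.sqrt(2 * np.pi))

def posterior_mean(y):
    # Exact conditional mean E[x | x + TAU * z = y].
    lik = np.array([w * np.exp(-(y - a) ** 2 / (2 * TAU ** 2))
                    for a, w in zip(ATOMS, WEIGHTS)])
    return (ATOMS[:, None] * lik).sum(axis=0) / lik.sum(axis=0)

# Tweedie: E[x | y] = y + TAU^2 * d/dy log p(y; TAU), score by central differences.
y, h = np.linspace(-2.0, 2.0, 201), 1e-5
score = (np.log(smoothed_density(y + h)) - np.log(smoothed_density(y - h))) / (2 * h)
assert np.allclose(posterior_mean(y), y + TAU ** 2 * score, atol=1e-6)
```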
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: Thank you to the authors for their detailed response and answering my questions. I appreciate the additional experiments that were conducted on examples with non-product distributions. I was hoping to further ask about the sample complexity:
- **Notion of complexity:** Thank you for clarifying that the function $\phi$ indeed should be the optimal denoiser $f_{\ell}^*$. I was hoping to make sure I understand this point further. If the notion of complexity then depends on $f_{\ell}^*$, how should I think about how the quantity $C(f_{\ell}^*, R(\log1/\epsilon)^{1.5})$ scales depending on the distribution $p$ chosen? In particular, you mention in another bullet point that $f_{\ell}^*$ is an infinite-order smooth function under the Theorem's assumptions. In this case, what would the complexity quantity scale like? There is an example in the paper stating that polynomials of degree $\ell$ have complexity $\mathrm{poly}(\alpha^{\ell},\log(1/\epsilon)^{\ell})$ and that for cases where the score is sufficiently smooth, one could apply Jackson's theorem to approximate them. What will be the tradeoff in terms of approximation in sample complexity (e.g., the approximation error + complexity when choosing an $n$-degree polynomial)? Are there examples of distributions $p$ for which we can explicitly calculate or easily bound $C(f_{\ell}^*, R(\log1/\epsilon)^{1.5})$?
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer again for clarifying the subtle details on how the complexity quantity scales with the denoiser.
In Definition 1 in the submission, the complexity $\mathcal{C}(\phi, \alpha)$ is given by the equation
$\mathcal{C} ( \phi, \alpha ) = \sum_{i=0}^\infty \Big( \frac{ \alpha C \sqrt{ \log (1 / \epsilon) } }{ \sqrt{i} } \Big)^i | c_i |$.
Since we know that the denoisers $\phi = f_\ell^*$ are infinite-order smooth, we can leverage Jackson’s Theorem to get a polynomial approximation of $f_\ell^*$ up to degree $k$, and this degree $k$ is what controls how $\mathcal{C}(\phi, \alpha)$ scales.
When $\alpha = R(\log 1/\epsilon)^{1.5}$, plugging into Definition 1, the leading term takes the form $\frac{\sqrt{k} {R^{k}}}{k^{\frac{k}{2}}} \cdot (\log{1/\epsilon})^{2k} \| c \|$, where $\| c \|$ denotes the $L_2$ norm of the coefficient vector $[c_0, c_1, \ldots, c_k]$.
As you pointed out, the key tradeoff is the approximation error in Jackson’s theorem, which modulates the degree of the polynomial expansion, which dictates how this leading term scales.
Jackson’s theorem states that the error of approximation $f_\ell^*$ with a degree $k$ polynomial is at most a constant times $|\nabla f_\ell^*|/k$. Since we further assume the score function is $B$-Lipschitz, this is at most $B/k$. Thus, if we want an approximation error $\delta$ in the polynomial approximation, we require $k \geq \frac{B}{\delta}$.
Putting this all together, if you admit error $\delta$ in the polynomial approximation of $f_\ell^*$ and error $\epsilon$ in SGD, then the complexity scales polynomially with leading term
$\left(\frac{\delta}{B}\right)^{\frac{B}{2\delta}} \cdot (R(\log{1/\epsilon})^{2})^{B/\delta} || c ||$.
**Example**: For the $Z_2$ prior, we know that the denoiser function is given by $\tanh(y / \tau_\ell^2)$. In this case, the denoiser is a Lipschitz function with constant $\frac{1}{\tau_\ell^2} \leq \frac{1}{\sigma^2}$. Additionally, it is sub-Gaussian with an $O(1)$ sub-Gaussianity constant. Moreover, $\| c \| \leq O((R \log 1/\epsilon)^{10B/\delta})$ (see Lemma 23 in [1]; the polynomial in Lemma 23 is bounded by a constant of at most 2 for $\delta < 1$ because it is a $\delta$-approximation of $\tanh$, a bounded function). Using this bound on $\| c \|$ and $B=1/\sigma^2, R=O(1)$, the complexity of a $\delta$-approximation of the denoiser is
$\left( \delta \sigma^2 \right)^{\frac{1}{2\sigma^2 \delta}} \cdot (C \log{1/\epsilon})^{ \frac{12}{\sigma^2\delta} }$
for some large constant $C$.
We will include this explanation in the appendix of our revision for clarity. Thank you again for engaging closely with our work!
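The $Z_2$ example can be verified directly: for a uniform prior on $\{-1, +1\}$, the posterior mean computed from the two Gaussian likelihoods collapses to $\tanh(y/\tau^2)$. A minimal check:

```python
import numpy as np

def posterior_mean_z2(y, tau):
    # E[x | x + tau * z = y] for x uniform on {-1, +1}:
    # weighted average of the two atoms under their Gaussian likelihoods.
    p_plus = np.exp(-(y - 1.0) ** 2 / (2.0 * tau ** 2))
    p_minus = np.exp(-(y + 1.0) ** 2 / (2.0 * tau ** 2))
    return (p_plus - p_minus) / (p_plus + p_minus)

y, tau = np.linspace(-3.0, 3.0, 101), 0.7
assert np.allclose(posterior_mean_z2(y, tau), np.tanh(y / tau ** 2))
```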
[1] Goel et al. Reliably Learning the ReLU in Polynomial Time. | Summary: This work investigates unrolling approximate message passing (AMP) for solving compressive sensing under Gaussian design and separable prior distribution. The unrolling scheme consists of parametrizing the AMP denoiser at each iteration by the layer of a neural network, which is sequentially trained using observations from the signal distribution by minimizing the empirical mean-squared error.
The main result is to show that with a polynomially wide two-layer neural network trained under one-pass SGD with with polynomially many observations, unrolled AMP asymptotically achieves the same MSE as AMP implemented with the optimal denoiser.
Strengths: The paper is well-written and easy to follow. It is also self-contained - the small introduction to AMP in Section 2.1 is a nice addition since most NeurIPS readers will not be familiar with AMP. The results are interesting, since they suggest that the optimal AMP denoiser can be learned "on the fly" with unrolling.
Weaknesses: The topic is relatively niche within the NeurIPS community. Also, AMP is designed for Gaussian measurements, and it is well-known that as an algorithm it is not very robust to other designs; therefore the results have limited practical scope. Unrolling also requires samples from the signal distribution.
Technical Quality: 3
Clarity: 4
Questions for Authors: - L80-81:
> "*and thus achieve mean-squared error which is conjectured to be optimal among all polynomial-time algorithms*"
As the authors probably know, there are a few counterexamples to this conjecture, including for problems which are closely related to compressive sensing (e.g. noiseless phase retrieval, see []). I suggest the authors be more precise, by either referring to the known optimality results within first-order methods [CMW20, MW22b] or by adding a footnote saying the conjecture is believed to hold in a broad sense, since all known exceptions rely on rather fine-tuned algorithms (like LLL in the noiseless phase retrieval case).
- The observation in L303-304 that the denoiser learned by unrolling can outperform the optimal denoiser is surprising. I understand that optimality of AMP is an asymptotic statement - so this difference should get smaller with the dimension - but is this something you consistently observed throughout the experiments? Is this improvement always within a $O(d^{-1/2})$ interval? If yes, how can it be distinguished from finite-size fluctuations?
- Can the authors comment on what "*sufficiently small*" learning rate and "*sufficiently large*" number of steps exactly mean?
- The tricks for training discussed in Appendix B are intriguing. Do the authors have an intuitive understanding of why fine-tuning is necessary for successfully learning a good denoiser for compressive sensing with a Gauss-Bernoulli prior, but not for rank-one matrix factorization or compressive sensing with a $\mathbb{Z}_{2}$ prior?
- On a similar note, what is the intuition for why sequential learning avoids bad local minima? Similarly, why does initializing at the solution of the previous layer avoid local minima? What do these local minima look like, and how do they compare with the optimal denoisers? This should be easy to visualise since the denoiser is a one-dimensional function (as in Figure 2).
- It would help readability to add in all figure captions the details on the plots, e.g. the dimensions $m, n, d, L$.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Limitations are discussed in Section 5.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed comments.
- **On a relatively niche topic within the NeurIPS community**: While understanding the Bayes-AMP algorithm in itself is a relatively niche area for the NeurIPS community, our theoretical results show that the learnability of Bayes-AMP amounts to characterizing training dynamics for the score-matching objective. Given the prevalence of score matching within the literature on diffusion models, which is of very wide interest in the NeurIPS community, we believe that this line of work involving unrolling and AMP offers valuable insight into the dynamics and approximation capabilities of diffusion models. Some of this has been explored in work by [Mei24] and [MW23] (see our related work section in the Appendix).
[Mei24] Song Mei. U-nets as belief propagation: Efficient classification, denoising, and diffusion in generative hierarchical models.
[MW23] Song Mei and Yuchen Wu. Deep networks as denoising algorithms: Sample-efficient learning of diffusion models in high-dimensional graphical models.
- **AMP on non-Gaussian measurements**:
- We agree with the reviewer that AMP has these limitations, but empirically we find that *unrolling offers a powerful way to interpolate between the theoretically optimal guarantees of AMP in “nice” Gaussian settings and the practically useful guarantees of neural network-based methods for inference in real-world settings.*
- For example, we find in our “learnable B” experiments that by learning auxiliary matrix parameters, unrolling can surpass the performance of Bayes AMP (and baselines such as ISTA) on non-Gaussian designs, despite working with an architecture inspired by AMP. In particular, we examine a random orthogonal design (Fig. 4) and a random Gram matrix design (figure attached in the PDF in the global response) and show that our method can surpass Bayes-AMP. While our theory only explains why unrolling can compete with AMP, it opens up the potential for future work exploring why learning auxiliary matrix parameters can ameliorate the non-robustness of AMP to other designs.
- **On AMP being conjectured to be optimal among all poly-time algorithms**: We thank the reviewer for bringing this up. In our revision, we will refer to the precise GFOM optimality results for compressed sensing and low-rank matrix estimation to qualify the optimality of AMP.
- **On the denoiser learned by unrolling can outperform the optimal denoiser**:
- With regards to learning the denoiser, as in Figure 1, the unrolled denoisers outperform Bayes AMP by a relatively small margin. The NMSE in Figure 1 is averaged over $2^{15}$ samples with fixed, randomly chosen design, and we observed this improvement consistently for other randomly sampled designs as well, so we do not believe the small improvement of MLP denoisers over vanilla AMP in Figure 1 is from random fluctuations. The broader point we want to emphasize is that Bayes AMP is only known to be “optimal” up to $o_d(1)$ margin in NMSE, and the fact that the MLP denoisers we learn improve over this in the finite-dimensional setting is consistent with this state of knowledge.
- We also observed consistently that as the signal dimension decreased, we could outperform Bayes AMP by learning the “B” matrix (or measurement transpose) as detailed in Figure 3. Here, we believe the learned B experiments take advantage of finite-dimensional sub-optimalities of AMP as a GFOM, rather than just finite-dimensional sub-optimalities of the denoisers used by Bayes AMP.
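For readers less familiar with the algorithm being unrolled, a minimal AMP iteration for noiseless compressed sensing can be sketched in a few lines. The snippet below uses a Gauss-Bernoulli signal, a Gaussian design, and a generic soft-thresholding denoiser with a simple residual-based threshold rule; it is illustrative only (the work under review replaces the hand-picked denoiser with learned MLP denoisers, and all dimensions and the threshold choice here are our own assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 500, 250                                        # signal dim, measurements
x = rng.binomial(1, 0.1, n) * rng.standard_normal(n)   # Gauss-Bernoulli signal
A = rng.standard_normal((m, n)) / np.sqrt(m)           # Gaussian design, unit-norm columns
y = A @ x                                              # noiseless measurements

def soft_threshold(u, lam):
    """Generic scalar denoiser; Bayes AMP / unrolling would learn this map instead."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

xt, zt = np.zeros(n), y.copy()
for _ in range(30):
    u = xt + A.T @ zt                          # behaves like "signal + Gaussian noise"
    lam = np.linalg.norm(zt) / np.sqrt(m)      # threshold from residual noise level
    xt_new = soft_threshold(u, lam)
    onsager = (zt / m) * np.count_nonzero(xt_new)  # Onsager correction term
    zt = y - A @ xt_new + onsager
    xt = xt_new

nmse = np.sum((xt - x) ** 2) / np.sum(x ** 2)
```

The `onsager` term is what distinguishes AMP from plain iterative thresholding: it keeps the effective input `u` distributed like the signal plus Gaussian noise across iterations, which is exactly the property (state evolution) that makes layer-wise learning of the denoisers tractable.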
- **On the intuitive understanding of the fine-tuning**:
- We observed that finetuning was not especially impactful for the compressed sensing experiments but was helpful for rank-one PCA. Note that PCA admits different strengths of initialization of the AMP signal estimate, while compressed sensing has a “canonical” initialization ($x_0=0$). PCA thus induces a “burn-in” phase where the AMP estimates reflect almost no signal at early iterations (see Figure 6 of the submission).
- We suspect that finetuning assists in maintaining the consistency of the learned denoisers across this burn-in phase where the correlation between the estimate and MMSE estimate is low, communicating the loss objectives of the later denoisers to earlier iterations.
- We also observe that finetuning improves validation loss most during this “burn-in” phase, but notably not as much during later iterations where the estimate is close to the MMSE estimate.
- **On the intuition of why sequential learning avoids local minima**:
- Sequential learning takes advantage of the following result: suppose we have learned the Bayes-AMP denoisers in all previous layers. Then the denoiser that minimizes the MSE of the next layer’s estimate is the corresponding Bayes-AMP denoiser as well (see Theorem 3 in [MW22]). This simplifies the learning optimization immensely, as we only need the current layer’s denoiser to globally optimize the MSE loss function.
- Another way to view this is that we know that our learned network should provide the minimum MSE estimate after each layer, but training end-to-end throws away this “intermediate layer” information and only uses the final layer’s optimality.
[MW22] Statistically Optimal First Order Algorithms: A Proof via Orthogonalization. Andrea Montanari, Yuchen Wu.
- **Intuition behind initializing at the solution of the previous layer**: The intuition is as follows: as the Bayes-AMP estimates converge, so do the denoisers, so we expect the denoiser from the previous layer to be close to the optimal denoiser, at least closer than a random initialization.
- **On quantification of learning rate and number of steps**: By a sufficiently small learning rate, we mean that the learning rate is $\Theta(1/(\epsilon m))$, and by a sufficiently large number of steps, we mean polynomial in the complexity of the denoiser and $1/\epsilon$ (see the statement of Theorem 3).
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their rebuttal, which has addressed my questions. I maintain my score and recommendation towards acceptance. | Summary: This paper studies the theoretical capabilities of unrolled denoising networks in the context of compressed sensing and rank-one PCA problems, with a focus on experimental validation. The authors present the first proof of learning guarantees for neural networks based on unrolling Approximate Message Passing (AMP). They demonstrate that, when trained with gradient-based methods, these networks can achieve performance on par with optimal prior-aware algorithms. Specifically, for compressed sensing under the assumption that the data is sampled from a product prior, the parameters of the network converge to the same denoisers used in Bayes AMP. The technical aspect of this proof employs a combination of state evolution and neural tangent kernel (NTK) analysis. Numerically, the authors validate their claims and show that the unrolled architecture can handle more general priors.
Strengths: The paper presents significant theoretical contributions to an important subfield of machine learning. It is well-motivated, clearly written, and aligns with the existing literature. This is a significant theoretical advancement that bridges the gap between optimal prior-aware algorithms and data-driven training methods.
The authors introduce a novel proof technique that combines state evolution and Neural Tangent Kernel (NTK) analysis. Unlike prior work, this proposed analysis is robust and independent of dimensionality.
The author provides sufficient numerical evidence to validate their theoretical contributions.
Weaknesses: The paper's theoretical results are confined to settings of compressed sensing with data derived from a product prior, which is an unrealistic assumption compared to data from actual science and engineering applications.
Furthermore, the chosen architecture could be considered impractical in practice due to its fixed width and depth.
However, these weaknesses also present opportunities for future research.
Technical Quality: 4
Clarity: 4
Questions for Authors: * Have the authors numerically simulated the effects of different network architectures on the robustness of their estimates? This inquiry focuses on understanding how architectural variations impact the results, rather than solely assessing performance.
* Could the authors comment on the practical likelihood of Assumption 1?
* Could the authors please provide context for the variable B in Informal Theorem 2? Its meaning is clear from Formal Theorem 3, but not from the informal statement.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: Limitations are clearly discussed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their effort in reviewing our work and address their comments below.
- **On the product distribution assumption**: Please see the global response for the discussion about the assumptions on the prior. We have also subsequently expanded experiments to the non-product distribution regime, also included in the global response and the second supplementary figure in the attached pdf.
- **On the flexibility of the architecture of the MLP denoiser**: The MLP parameterizing the denoiser can in practice be set to any architecture (convolutional net, U-net, etc.), and doesn’t have to be constrained to the three-layer, 70-neuron wide MLP we used.
- **On the understanding how architectural variations impact the results**: We examined different depths and widths of our MLP parameterization and found that the behavior of unrolling was relatively robust to variations in the choice of the denoiser architecture. The adjustments we made were ultimately meant to control for the amount of error in learning the denoiser, and the architecture we eventually reported results for worked sufficiently well for our experiments while also being quite reasonable to train.
- **On the practical likelihood of Assumption 1**: Assumption 1 comprises two parts: 1) sub-Gaussianity and 2) Lipschitz continuity of the $\tau$-Gaussian smoothed score function (the score function of $x + \tau z$). We believe both assumptions are mild, hold for a large class of distributions, and are used in prior literature (e.g., sub-Gaussianity holds for any distribution with bounded support, and Lipschitz continuity of the score function holds for mixtures of spherical Gaussians with bounded means). Please also see the general response for additional discussion of these assumptions. We will include a discussion of the generality of the assumption in the revised draft.
- **The definition of variable $B$**: The variable $B$ denotes the Lipschitz constant of the score function of $p(x; \tau)$, where $p(x; \tau)$ is the probability density of the random variable $x + \tau z$, and $x$ and $z$ correspond to the data distribution and the standard Gaussian distribution, respectively (please see Assumption 1 for the complete definition of $B$).
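In symbols, this definition can be written as follows (a direct transcription of the description above; $B$ is the constant from Assumption 1, $p_X$ denotes the data density):

$$
p(\cdot\,;\tau) = p_X * \mathcal{N}(0,\tau^2 I), \qquad
\big\|\nabla \log p(x;\tau) - \nabla \log p(x';\tau)\big\| \le B\,\|x - x'\| \quad \text{for all } x, x'.
$$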
---
Rebuttal 2:
Comment: Thank you for your detailed responses. I will update my score accordingly, in light of the thoughtful responses the authors have provided.
Rebuttal: We thank all the reviewers for their careful reviews and constructive feedback. We address some general comments raised by multiple reviewers here and address the rest of the comments in the individual responses.
- **On the product prior assumption**: We restricted our results to product priors for two main reasons:
- The product prior setting is quite standard and widely studied within the theory literature on AMP (e.g., see [BM11, DAM15]).
- The state evolution for AMP, which establishes that the input to the denoisers behaves like samples from the data distribution smoothed with Gaussian noise, still holds for non-product distributions (see [BMN17]). So to show that unrolled AMP provably learns to perform Bayesian inference, one needs to show that unrolled AMP learns denoisers at different noise scales. However, this is a major open problem in the theory of diffusion models (understanding when “score estimation” can be performed in polynomial time) and therefore remains well outside the scope of this work. However, with product distributions, corresponding to scalar/one-dimensional score estimation, we are able to show such a result. That being said, it would be interesting to study the specific class of non-product priors given by **generative priors** coming from pushforwards along random neural networks, as studied by Aubin et al. [ALB20].
[BM11] Mohsen Bayati and Andrea Montanari. The dynamics of message passing on dense graphs, with applications to compressed sensing.
[DAM15] Yash Deshpande, Emmanuel Abbe, Andrea Montanari. Asymptotic Mutual Information for the Two-Groups Stochastic Block Model.
[BMN17] Raphaël Berthier, Andrea Montanari, and Phan-Minh Nguyen. State Evolution for Approximate Message Passing with Non-Separable Functions.
[ALB20] Benjamin Aubin, Bruno Loureiro, Antoine Baker. Exact asymptotics for phase retrieval and compressed sensing with random generative priors.
- **On the smoothness assumption**:
- We want to reiterate that our theoretical result only requires smoothness of the data distribution after the Gaussian convolution operation. More specifically, we require the Lipschitz constant of the score function of random variable $x + \tau z$, where $x$ is drawn from the prior and $z$ follows the standard Gaussian distribution. Note that this is a mild assumption and has been used in prior literature [CCL+23, CLL23].
[CCL+23] Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions. Sitan Chen, Sinho Chewi, Jerry Li, Yuanzhi Li, Adil Salim, Anru R. Zhang. 2023.
[CLL23] Improved Analysis of Score-based Generative Modeling: User-Friendly Bounds under Minimal Smoothness Assumptions. Hongrui Chen, Holden Lee, Jianfeng Lu 2023.
- **Extension to experiments with a non-product prior**:
- We have subsequently extended our empirical results to non-product priors. Here, the learned denoiser is now a $d$-to-$d$ dimensional function as opposed to a scalar function, and the Onsager term is the averaged divergence of the denoiser. Our MLP architecture parametrizes this $d$-to-$d$ input-output structure as well. Our non-product setting is a mixture of Gaussians prior which in general can accommodate practical distributions through kernel density estimation. We outperform the Bayes-AMP baseline in low-dimensional settings; see Figure 2 in the attached pdf.
Pdf: /pdf/237a24c129ff62950891b3d2cfb6389d05f76228.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Light Unbalanced Optimal Transport | Accept (poster) | Summary: This paper proposes U-LightOT, a lightweight solver for the Unbalanced Entropic Optimal Transport (UEOT) problem. This method uses a Gaussian Mixture approximation for the potential $v_{\theta}(y)$ and measure $u_{w}(x)$. This paper proves that under this approximation, the KL divergence to the ground truth UEOT plan has a tractable form. U-LightOT is evaluated on Gaussian Mixture and Unpaired Image-to-Image Translation tasks.
Strengths: 1. This paper provides a theoretical analysis of the generalization bounds and the universal approximation property for the Gaussian mixture parametrization.
2. The proposed method is a lightweight solver for the UEOT problem, which requires several minutes of CPU training for the experiment in Sec 5.
3. This paper is easy to follow.
Weaknesses: 1. The optimization objective and Gaussian Mixture approximation in Sec 4 are similar to [1].
2. While this paper provides the universal approximation property for the Gaussian mixture approximation, I have concerns about whether this Gaussian mixture parametrization can achieve decent results for more complex distributions, such as in generative modeling within the data space on CIFAR-10.
[1] Korotin, Alexander, Nikita Gushchin, and Evgeny Burnaev. "Light schr\" odinger bridge." ICLR 2024.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. In the Unpaired Image-to-Image Translation task, Table 2 only presents the accuracy of keeping the attributes of the source images. However, since the goal of this task is semantic translation, the accuracy of the target semantic is also required. For example, in the Young-to-Adult task, the accuracy of whether the generated image is indeed an adult image. Could you provide this target semantic accuracy results?
2. In the Appendix, Tables 5 and 7 show the Frechet distance (FD) between the learned and target measures. I believe this FD metric evaluates whether the semantic translation is successful, at the marginal level. Generally, increasing $\tau$ decreases (improves) the FD metrics in Tables 5 and 7. Could you clarify how the optimal $\tau$ is selected? I am curious because when $\tau$ is overly large, U-LightOT achieves worse accuracy compared to other models in Table 2.
3. Could you present the FD results (Tables 5 and 7) for the other models?
4. For the optimal $\tau=500$ in the Man-to-Woman task, Table 6 shows an accuracy of 83.85 at best, while Table 2 shows an accuracy of 92.85. Could you clarify which result is correct?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The authors addressed the limitations and broader impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thorough feedback. Please find the answers to your questions below.
**(1) The optimization objective and Gaussian Mixture approximation in Sec 4 are similar to [1].**
Our solver can be considered a generalization of the one from [LightSB, 1] in the sense that it subsumes LightSB for a specific choice of $f$-divergences. However, this generalization is not straightforward or direct, since our objective is built on completely different principles:
1. Our solver is derived from minimizing $D_{\text{KL}}$ divergence (defined as a discrepancy between positive measures) between ground truth plan $\gamma^*$ and its approximation $\gamma_{\theta,\omega}$. This definition of divergence notably differs from the ordinary definition of $D_{\text{KL}}$ for probability measures used in [1].
2. We parametrize the entire plan using Gaussian mixtures, while in [1] this is done only for conditional plans. This is an important difference, since in the unbalanced case the marginals of the optimal plan do not coincide with the source and target measures. Our parametrization allows sampling from the left marginal of the UEOT plan and identifying potential outliers in the source measure.
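For intuition about how the unbalancedness parameter trades off marginal fidelity, the classical scaling iterations for *discrete* unbalanced entropic OT with KL marginal penalties can be sketched in a few lines. This is not the paper's light continuous solver; the sizes, costs, and parameter values below are our own illustrative assumptions:

```python
import numpy as np

def uot_plan(C, a, b, eps, tau, iters=1000):
    """Discrete unbalanced entropic OT plan; tau is the KL marginal-penalty strength."""
    K = np.exp(-C / eps)
    rho = tau / (tau + eps)          # exponent -> 1 recovers balanced Sinkhorn
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(iters):
        u = (a / (K @ v)) ** rho
        v = (b / (K.T @ u)) ** rho
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
x, y = rng.standard_normal(40), rng.standard_normal(50)
C = (x[:, None] - y[None, :]) ** 2                  # squared-distance cost
a, b = np.full(40, 1 / 40), np.full(50, 1 / 50)     # uniform source/target masses

def marginal_violation(P):
    return np.abs(P.sum(1) - a).sum() + np.abs(P.sum(0) - b).sum()

loose = marginal_violation(uot_plan(C, a, b, eps=0.5, tau=0.1))    # weak penalty
tight = marginal_violation(uot_plan(C, a, b, eps=0.5, tau=100.0))  # strong penalty
```

As $\tau \to \infty$ the exponent $\tau/(\tau+\varepsilon) \to 1$ and the updates reduce to balanced Sinkhorn, while small $\tau$ lets the plan's marginals deviate from $a$ and $b$; that deviation is the mechanism behind the class-imbalance and outlier handling discussed in this rebuttal.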
**(2) While this paper provides the universal approximation property for the Gaussian mixture approximation, I have concerns about whether this Gaussian mixture parametrization can achieve decent results for more complex distributions, such as in generative modeling within the data space on CIFAR-10.**
In general, methods based on Gaussian parametrization are usually not appropriate for tasks with complex data, e.g., images. We mention this limitation in our paper (lines 256-257,702-707). Still, our aim was to develop a lightweight unbalanced solver which can serve as a simple and easy-to-use baseline in moderate-dimensional tasks. As expected, we get this ease in exchange for the rich parametrization required for large-dimensional tasks and vice versa.
**(3) In the Unpaired Image-to-Image Translation task, Table 2 only presents the accuracy of keeping the attributes of the source images. However, since the goal of this task is semantic translation, the accuracy of the target semantic is also required. For example, in the Young-to-Adult task, the accuracy of whether the generated image is indeed an adult image. Could you provide this target semantic accuracy results?**
We are working on this and aim to add the results soon.
**(4) In the Appendix, Tables 5 and 7 show the Frechet distance (FD) between the learned and target measures. I believe this FD metric evaluates whether the semantic translation is successful, at the marginal level. Generally, increasing $\tau$ decreases (improves) the FD metrics in Tables 5 and 7. Could you clarify how the optimal $\tau$ is selected? I am curious because when $\tau$ is overly large, U-LightOT achieves worse accuracy compared to other models in Table 2.**
In Appendix C (Tables 5, 7), we perform an ablation study of our method in order to show that it offers a flexible way to select a domain translation configuration (unbalancedness parameter $\tau$) that either allows for very good level of preserving the properties of the input objects or generation of a distribution which is a very good approximation of the target distribution. In that section, we highlighted the parameter which is optimal in the sense that it provides the best tradeoff between the closeness of the learned translations and target ones (*Pareto-optimal*), and the ability of the learned latents to keep the features of input latents. However, the final selection of the optimal configuration remains at the discretion of the user.
**(5) Could you present the FD results (Tables 5 and 7) for the other models?**
We will add the results soon.
**(6) For the optimal tau=500 in the Man-to-Woman task, Table 6 shows an accuracy of 83.85 at best, while Table shows an accuracy of 92.85. Could you clarify which result is correct?**
The Tables provide results for different numbers of training steps and parameters $\tau$. Table 2 shows the accuracy for $\tau=100$ and 5K steps of the algorithm, which is specified in Appendix B.3. Table 6 presents the results for only 3K steps and different parameters $\tau$. (Note that the value for $\tau=100$ in Table 6 (88.59 $\pm$ 0.40) is close to 92.85, up to the difference in the number of training steps.)
**References.**
[1] Korotin, Alexander, Nikita Gushchin, and Evgeny Burnaev. "Light schr" odinger bridge." ICLR 2024.
---
Rebuttal Comment 1.1:
Title: Further clarifications
Comment: **In the Unpaired Image-to-Image Translation task, Table 2 only presents the accuracy of keeping the attributes of the source images. However, since the goal of this task is semantic translation, the accuracy of the target semantic is also required. For example, in the Young-to-Adult task, the accuracy of whether the generated image is indeed an adult image. Could you provide this target semantic accuracy results? [...] Could you present the FD results (Tables 5 and 7) for the other models?**
As per your request, we provide the Accuracy (of mapping to the target) and FD (between learned and target latents) results for our solver and its unbalanced competitors (in Young$\rightarrow$Adult translation) in the Table below. For completeness, we also include the results for balanced LightSB [1] solver.
| | Choi et al. [3] | Yang et al. [4] | UOT-FM [2] | LightSB [1] | ULight-OT (ours, $\tau=100$) |
|-------------------------------------------|-------|-------|-------|---------|-------|
| Accuracy (mapping to the target) | 85.36 | 80.32 | 83.27 | 88.14 | 81.14 |
| FD (between generated and target latents) | 13.24 | 11.50 | 10.27 | 24.66 | 27.72 |
The results show that the *balanced* LightSB solver outperforms the other methods on target accuracy. Note that the FD metric is based on the first and second moments of the distributions; therefore, there is a chance that it provides imprecise results (as possibly happens in the case of LightSB). Our method provides target accuracy on par with the UOT-FM model (which is the second-best model according to the accuracy of keeping the class, see Table 2). Other unbalanced solvers (Yang et al., Choi et al.) provide better accuracies of mapping to the target and better FD results, but are slightly worse at keeping the attributes (classes) of the source latents. Besides, our solver is simpler and faster than its competitors, especially those from (Yang et al., Choi et al.), which are based on adversarial learning; see the speed-up comparison in our answer to reviewer bWBu (https://openreview.net/forum?id=co8KZws1YK&noteId=akup4rEAWd).
[1] Korotin et al. "Light schrodinger bridge", ICLR 2024.
[2] L. Eyring et al. Unbalancedness in neural monge maps improves unpaired domain translation. ICLR, 2024
[3] J. Choi et al. Generative modeling through the semi-dual formulation of unbalanced optimal transport. NeurIPS, 2023.
[4] K. D. Yang et al. Scalable unbalanced optimal transport using generative adversarial networks. ICLR, 2018. | Summary: The proposal focuses on developing a fast solver for the unbalanced entropy-regularized optimal (EOT) transport between continuous Radon measures. The authors utilize the dual formulation of unbalanced EOT and use the relationship between the optimal potentials (i.e., the dual variables) and the primal transport plan. They then consider a parameterization of the transport plan and plan to minimize the KL divergence between this parameterized transport plan and the optimal one. Given that the optimal transport plan is unknown, the authors first use the relationship between the primal and dual solutions to reparameterize the dual variables and then derive a tight upper bound for the KL between the optimal plan and the parameterized one, which they propose to minimize. To deal with the normalization terms in their upper bound, the authors use a similar framework to that of Gushchin et al. [29] and assume the reparameterized dual variables are unnormalized Gaussian mixtures; this assumption enables analytic solutions to the otherwise difficult-to-calculate terms in the upper bound. Lastly, the authors provide a generalization error bound for their proposed framework. The paper provides two small-scale numerical examples to demonstrate their solver's efficiency: 1) two-dimensional Gaussian mixtures and 2) unpaired-image-to-image translation in the embedding space of an autoencoder, specifically ALAE, on an unbalanced subset of FFHQ dataset for Adult, Young, Man, Woman face.
Strengths: + The paper is very well written and straightforward to follow.
+ The clever parameterizations used in this paper (while they appear in some prior work), provide a unique approach for solving the UEOT problem between continuous measures.
+ The provided generalization error bounds (while straightforward to derive), are important and certainly add value to the paper.
+ The method is easy to implement and fast to train. Quick convergence on the CPU is a notable achievement unlocked by this work.
Weaknesses: - One major weakness is that the paper does not discuss how $K$ and $L$, i.e., the number of Gaussians in the mixtures, affect the results. The generalization error bound mentions that $K$ and $L$ will appear as constants in the error bound, but the practical implications of the choice of $K$ and $L$ are missing from the paper.
- Experiments are relatively modest: 2 experiments in low dimensions. It would be beneficial to have an experiment on robustness to outliers, as this is included in the main claim.
- The paper claims to have a fast solver but lacks a detailed speed comparison for either experiment. It would be great to have a wall-clock comparison of competing methods.
- The Gaussian mixture assumption limits the method's applicability to only low-dimensional problems, and it is not clear whether this limitation can be overcome.
Technical Quality: 4
Clarity: 4
Questions for Authors: * How does the performance change as a function of $K$ (assuming $L=K$)?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: Limitations are provided in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thorough feedback. Please find the answers to your questions below.
**(1) The paper does not discuss how K and L, i.e., the number of Gaussians in the mixtures, affect the results. The generalization error bound mentions that K and L will appear as constants in the error bound, but the practical implications of the choice of K and L are missing from the paper. [...] How does the performance change as a function of $K$ (assuming $K=L$)?**
To address your question, we perform additional experiments (both on Gaussians and in the latent space of ALAE autoencoder) with our solver varying the number of Gaussian modes ($K$, $L$) in potentials.
The setup of this experiment follows the setup introduced in our Section 5.1. We test our solver with varying numbers of modes in the potentials, $K\in\{1,3,5\}$ and $L\in\{1,2,3,4,5\}$. The results are visualized in Fig. 1 of the **attached PDF file**. It can be seen that for an insufficient number of modes in the potentials, the solver exhibits issues with convergence and does not correctly solve the task.
**(2) It would be beneficial to have an experiment on robustness to outliers, as this is included in the main claim.**
Thank you for this valuable suggestion. We conduct an experiment on Gaussian mixtures with added outliers and visualize the results in Fig. 2 of the **attached PDF file**. The setup of the experiment, in general, follows the *Gaussian mixtures* experiment setup described in Section 5.2 of our paper. The difference is that outliers (small Gaussians) are added to the input and output measures. The results show that our U-LightOT solver successfully eliminates the outliers and manages to simultaneously handle the class imbalance issue. At the same time, the balanced LightSB [4] solver fails to deal with either of these problems.
**(3) [...] speed comparison for either experiment. It would be great to have a wall-clock comparison of competing methods.**
Thank you for the suggested idea. We compared the running time of our algorithm and its unbalanced competitors on the image translation task (Adult$\rightarrow$Young); the setup is described in Section 5.2 of our paper. The results for all of the methods (wall-clock times for 10k updates) are presented in the Table below. We omit the results for the other variants of translation since they are quite similar to the results in the Table.
As you can see, our proposed solver outperforms its competitors (unbalanced methods) in terms of convergence time.
| | ULight-OT | UOT-FM [1] | Yang et al. [2] | Choi et al. [3] |
|---|---|---|---|---|
| Time | **02:38** | *03:21* | 16:30 | 18:11 |
**(4) The Gaussian mixture assumption limits the method's applicability to only low-dimensional problems, and it is not clear whether this limitation can be overcome.**
In general, methods based on Gaussian parametrization are usually not appropriate for tasks with complex data, e.g., images. We mention this limitation in our paper (lines 256-257,702-707). Still, our aim was to develop a lightweight unbalanced solver which can serve as a simple and easy-to-use baseline in moderate-dimensional tasks. As expected, we get this ease in exchange for the rich parametrization required for large-dimensional tasks and vice versa.
**Concluding remarks**. Please respond to our post to let us know if the clarifications above suitably address your concerns about our work. We are happy to address any remaining points during the discussion phase; if the responses above are sufficient, we kindly ask that you consider raising your score.
**References.**
[1] L. Eyring et al. Unbalancedness in neural monge maps improves unpaired domain translation. ICLR, 2024
[2] J. Choi et al. Generative modeling through the semi-dual formulation of unbalanced optimal transport. arXiv preprint arXiv:2305.14777, 2023.
[3] K. D. Yang et al. Scalable unbalanced optimal transport using generative adversarial networks. ICLR, 2018.
[4] Korotin et al. "Light schrodinger bridge", ICLR 2024.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: I appreciate the authors' extensive responses and clarifications.
The experiment on varying $K$ and $L$ is particularly insightful, as it demonstrates that even in a simple toy problem, the method's performance is highly sensitive to the appropriate selection of these hyperparameters. I believe the paper would benefit from reporting the method's results on large-scale experiments across a range of $K$ and $L$ values, perhaps in the supplementary material.
I also appreciate the wall-clock performance data provided by the authors. A more rigorous analysis of the wall-clock time, considering different sample sizes and varying $K$ and $L$ values on toy datasets, could further enhance the paper’s practical value to the community.
Overall, I find this paper well-written, easy to follow, novel, and of potential interest to the community. The strengths of the paper outweigh the weaknesses, and I am increasing my score to Weak Accept. | Summary: This work focuses on the largely computationally intractable efforts in unbalanced OT dual form where neural networks are used as a proxy (used as potentials) in order to approximate Wasserstein distances. In this work, the authors set out to significantly reduce this optimization procedure by decomposing the join optimal solution into conditionals which allows for both easier inference and a reduction in the number of parameters required. Experimental results are then carried out to show the success of this method beyond the improved efficiency.
Strengths: [+] The gain in efficiency seems to be a strong and effective way to reduce the overall number of parameters required to approximate OT distances
[+] The theoretical results are sound and well motivated.
[+] A generalization bound is also presented, attesting to the soundness one achieves with this light variation.
Weaknesses: [-] Appears to be specific only to the case of having KL divergence penalties for the mass constraints.
[-] Paper can appear a bit difficult and dense to read.
Technical Quality: 3
Clarity: 2
Questions for Authors: (1) The reduction you get appears to have (perhaps even superficially) some relationship to the way WAE decomposes the coupling into conditionals. More coincidentally, WAE also uses conditional Gaussians to parametrize the encoder distribution, although for different purposes. Do you have any comments if there is any deeper link here?
(2) Do you have any intuition if one were to use other penalties beyond KL to enforce the mass constraint?
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thorough feedback. Please find the answers to your questions below.
**(1) Appears to be specific only to the case of having $D_{\text{KL}}$ divergence penalties for the mass constraints. [...] Do you have any intuition if one were to use other penalties beyond $D_{\text{KL}}$ to enforce the mass constraint?**
Our solver admits divergences other than $D_{\text{KL}}$. From the theoretical point of view, we describe the set of admissible divergences in our Appendix C (lines 549-672). Besides, we provide a numerical example illustrating the performance of our solver with the $\mathcal{D}_{\chi^2}$ divergence, see Fig. 3 and the description in lines 559-677.
**(2) Paper can appear a bit difficult and dense to read.**
We are sorry that you found our work difficult to read. We will try to improve this aspect if you could indicate in more detail which points were difficult to understand.
**(3) The reduction you get appears to have (perhaps even superficially) some relationship to the way WAE decomposes the coupling into conditionals. More coincidentally, WAE also uses conditional Gaussians to parametrize the encoder distribution, although for different purposes. Do you have any comments if there is any deeper link here?**
Thanks for asking. We think that there is no direct link. Indeed, in WAE, the encoder for each input $x$ outputs some Gaussian, while in our case, all the conditional Gaussians (more precisely, Gaussian mixtures) are tied together. This means that given one conditional distribution $\gamma_{\theta}(y|x=x_0)$, one can immediately express all the other $\gamma_{\theta}(y|x=x_{\text{other}})$. In fact, the densities of all these conditional distributions are parameterized by a single scalar-valued function $v$; see eq. (8) in our paper. This is achieved because of the properties of the entropic optimal transport solutions which we exploited to construct our algorithm.
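For intuition, this tying can be sketched in assumed notation (the standard entropic OT factorization with quadratic cost; the paper's eq. (8) may differ in its details):

```latex
\gamma_\theta(y \mid x)
  = \frac{\exp\!\left(-\tfrac{\|x-y\|^2}{2\varepsilon}\right) v_\theta(y)}
         {\int \exp\!\left(-\tfrac{\|x-y'\|^2}{2\varepsilon}\right) v_\theta(y')\,\mathrm{d}y'}
```

so a single choice of $v_\theta$ fixes every conditional at once, and when $v_\theta$ is a Gaussian mixture the normalizing integral is available in closed form.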
**Concluding remarks**. Please respond to our post to let us know if the clarifications above suitably address your concerns about our work. We are happy to address any remaining points during the discussion phase; if the responses above are sufficient, we kindly ask that you consider raising your score.
---
Rebuttal Comment 1.1:
Comment: Thank you for responding to my questions, I don't have any concerns after reading the response. | Summary: The paper presents a novel approach to solving the continuous Unbalanced Entropic Optimal Transport (UEOT) problem. The authors introduce a lightweight, theoretically-justified solver that addresses the challenges of sensitivity to outliers and class imbalance in traditional Entropic Optimal Transport (EOT). The proposed method features a non-minimax optimization objective and employs Gaussian mixture parametrization for UEOT plans, resulting in a fast, simple, and effective solver. The authors provide theoretical guarantees for their solver's performance and apply it to simulated and image data.
Strengths: - The paper is well-written and easy to follow
- The paper describes well related literature and clearly motivates the approach / why there is a need for this solver
- The paper introduces a novel way to solve UEOT problems using Gaussian mixtures, even if the approach was previously used for balanced EOT problems as mentioned by the authors.
- The authors thoroughly study generalization bounds.
- The authors consider a wide range of competing methods.
Weaknesses: - While the authors provide generalisation bounds, it would be helpful to assess the performance of the method on the UEOT plan between Gaussian distributions, see Janati et al., 2020
- As mentioned by the authors, the Gaussian mixture approach is likely to work only in low dimensions. It would be interesting to see when it fails, e.g. using the benchmark above.
Technical Quality: 3
Clarity: 3
Questions for Authors: - The authors state that for OT-FM and UOT-FM in the FFHQ dataset, they use a 2-layer feed-forward network with 512 hidden neurons and ReLU activation. Where does this parameterization come from? It seems to be relatively small for a flow matching architecture on images, and does not seem to be the architecture used in the original papers.
- Why are FID scores not reported for the image translation tasks?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have considered the limitations and potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thorough feedback. Please find the answers to your questions below.
**(1) Performance of the method on the UEOT plan between Gaussian distributions, see Janati et al., 2020 [5]**
Thank you for your suggestion. Unfortunately, a comparison of our method's solutions with the analytical solutions proposed in [5] is not relevant, since that paper considers a different setup of the UEOT problem. Namely, it derives solutions for the UEOT problem (between Gaussian measures) with $D_{\text{KL}}$ as the entropy regularization instead of the differential entropy used in our paper. We noted the difference between the problem we are considering and the one considered in [5] in our paper, see lines 91-92 and the corresponding footnote.
**(2) The Gaussian mixture approach is likely to work only in low dimensions. It would be interesting to see when it fails, e.g. using the benchmark above.**
As we explained in the previous answer, the benchmark provided in [5] is not relevant for us, as it considers a different UEOT problem.
**(3) The authors state that for OT-FM and UOT-FM in the FFHQ dataset, they use a 2-layer feed-forward network with 512 hidden neurons and ReLU activation. Where does this parameterization come from? It seems to be relatively small for a flow matching architecture on images, and does not seem to be the architecture used in the original papers.**
It is important to understand here that we run our experiments in the latent space of the ALAE autoencoder, and not on the images directly. Accordingly, we adapted the architectures of the neural networks used in OT-FM and UOT-FM to work with latent codes. In this case, architectures such as fully connected neural networks are relevant.
**(4) Why are FID scores not reported for the image translation tasks?**
We conduct the image translation experiment in the latent space of the ALAE autoencoder. For this reason, we did not report FID metrics assessing the quality of the generated images but rather assess the quality of the generated latents, focusing on the Frechet distance (FD) defined via the difference in means and covariances of the generated and target latents.
However, to fully address the raised question, we report the FID scores between the generated images (produced by the ALAE decoder from the generated latent codes) and the target image distribution in the Table below. The results show that FID is nearly the same for all of the models under consideration. This supports our intuition that FID is indeed not a representative metric for assessing the performance of models that translate latent codes.
| 10k updates | ULight-OT (ours) | Light-SB [1] | UOT-FM [2] | Yang et al. [3] | Choi et al. [4] |
|-------------|------------------|-------------------|----------------------|----------------------|-------------------|
| FID | $0.331 \pm 0.03$ | $0.331 \pm 0.03$ | $0.331 \pm 0.04 $ | $0.344 \pm 0.04 $ | $0.339 \pm 0.03 $ |
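For context, the Frechet distance between two Gaussian fits (of the kind quoted above) has a closed form: $\|\mu_1-\mu_2\|^2 + \mathrm{Tr}\big(\Sigma_1+\Sigma_2-2(\Sigma_1^{1/2}\Sigma_2\Sigma_1^{1/2})^{1/2}\big)$. A minimal NumPy sketch (function names are ours for illustration, not the paper's evaluation code):

```python
import numpy as np

def sqrtm_psd(A):
    # Symmetric PSD matrix square root via eigendecomposition.
    w, V = np.linalg.eigh(A)
    w = np.clip(w, 0.0, None)  # guard against tiny negative eigenvalues
    return (V * np.sqrt(w)) @ V.T

def frechet_distance(mu1, cov1, mu2, cov2):
    """Squared Frechet distance between N(mu1, cov1) and N(mu2, cov2):
    ||mu1 - mu2||^2 + Tr(cov1 + cov2 - 2 (cov1^{1/2} cov2 cov1^{1/2})^{1/2})."""
    s1 = sqrtm_psd(cov1)
    cross = sqrtm_psd(s1 @ cov2 @ s1)
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(cov1 + cov2 - 2.0 * cross))
```

In the latent-space setting, `mu` and `cov` would be the empirical mean and covariance of generated versus target latent codes.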
**Concluding remarks**. Please respond to our post to let us know if the clarifications above suitably address your concerns about our work. We are happy to address any remaining points during the discussion phase; if the responses above are sufficient, we kindly ask that you consider raising your score.
**References**
[1] A. Korotin et al. Light Schrödinger bridge. ICLR, 2024.
[2] L. Eyring et al. Unbalancedness in neural Monge maps improves unpaired domain translation. ICLR, 2024.
[3] K. D. Yang et al. Scalable unbalanced optimal transport using generative adversarial networks. ICLR, 2018.
[4] J. Choi et al. Generative modeling through the semi-dual formulation of unbalanced optimal transport. arXiv preprint arXiv:2305.14777, 2023.
[5] H. Janati et al. Entropic optimal transport between unbalanced Gaussian measures has a closed form. NeurIPS, 2020.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their clarifications, and apologise for having missed this difference explained in lines 91-92. Thus, I increase my score to 6. | Rebuttal 1:
Rebuttal: Dear reviewers,
thank you for your thorough and detailed reviews! We are highly inspired by the fact that you agree on the importance of our theoretical results (Reviewers bWBu, vYvs) and the clarity of our paper (Reviewers WAcu, bWBu, t9a3, nig3), and that you note the efficiency of our solver (Reviewers bWBu, vYvs). We hope that our U-LightOT algorithm will be easy to use in practical applications.
We will incorporate the changes suggested by the reviewers in the final version of our paper. We list the changes below:
(a) **Main text** $-$ addition of the Table with wall-clock comparison of our U-LightOT solver and its competitors (Reviewer ) plus minor requested clarifications,
(b) **Additional experiment in Appendix C** section $-$ ablation study of our solver with different numbers of Gaussian modes in the potentials (**Reviewers nig3, bWBu**),
(c) **Additional experiment in Appendix E** section $-$ *Gaussian Mixtures with outliers* experiment showing the robustness of our solver towards potential outliers (**Reviewer bWBu**).
Please find Figures for experiments requested by the reviewers nig3, bWBu in the **attached PDF file**.
Please find the answers to your questions below.
Pdf: /pdf/66bd7cf37d8a4d8dbbc966c43bf56891c7a76477.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper presents a lightweight solver for Unbalanced Entropic Optimal Transport (UEOT) that does not rely on neural network parametrization. Instead, the authors parameterize the potential functions of UEOT using Gaussian Mixture Models (GMM). This parametrization enables the derivation of a tractable joint coupling. By incorporating the parameterized potential into the dual objective, the authors achieve a simple and tractable loss function. Additionally, the paper provides a universal approximation result for GMM parametrization. Experiments are conducted on toy data (GMM) and image-to-image (I2I) translation.
Strengths: - The paper proposes a simple and fast UEOT algorithm.
- The paper justifies the GMM parametrization by presenting generalization bounds.
- The paper demonstrates applicability to large-scale tasks such as I2I translation when combined with an autoencoder (AE).
- The paper is well-written, clear, and easy to follow.
Weaknesses: - The method of parameterizing the potential function using GMMs was already proposed in LightSB [1]. The only change here is the switch to a UOT objective, making the methodological contribution minimal. Aside from the universal approximation result, the theoretical contributions are also limited.
- The experiments are not comprehensive. First, the experiments are conducted only on face-related data. More diverse datasets should be included. Second, the fairness of the comparisons is questionable. In the I2I experiments, the authors use the ALAE autoencoder, while some of the comparison methods are implemented directly in the image space. All of the comparisons should be implemented in the latent space for fairness. Third, since U-LightOT is implemented in a latent space that captures attributes well, the attribute accuracy is expected to be high. Beyond accuracy, more general metrics such as c-FID or FID should be used for comparison. Fourth, there is a lack of ablation studies on the number of Gaussian modes $N$ and $M$. This is very important and expected to be a sensitive hyperparameter; thus, I believe the authors should provide ablation studies on this parameter. Overall, the practical utility of the approach is questionable.
[1] Light Schrodinger Bridge, ICLR, 2024.
Technical Quality: 3
Clarity: 3
Questions for Authors: - How does the performance change when parameterizing very high-dimensional and multi-modal GMM data with fewer or more $N,M$?
- In the I2I experiments, how does the number of modes in the GMM affect performance?
- In toy data experiments, does U-LightOT have lower transport plan costs and smaller Wasserstein distances between the target and generated distributions compared to the other methods?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Discussed in Weakness section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thorough feedback. Please find the answers to your questions below.
**(1) The method of parameterizing the potential function using GMMs was already proposed in LightSB [1]. The only change here is the switch to a UOT objective, making the methodological contribution minimal.**
Our solver can be considered a generalization of the one from [LightSB, 1] in the sense that it subsumes LightSB for a specific choice of $f$-divergences. However, this generalization is not straightforward or direct, since our objective is built on completely different principles:
1. Our solver is derived from minimizing $D_{\text{KL}}$ divergence (defined as a discrepancy between *positive measures*) between ground truth plan $\gamma^*$ and its approximation $\gamma_{\theta,\omega}$. This definition of divergence notably differs from the ordinary definition of $D_{\text{KL}}$ for **probability measures** used in [1].
2. We parametrize the entire plan using Gaussian mixtures, while in [1] it is done only for the conditional plans. This is an important difference, since in the unbalanced case the marginals of the optimal plan do not coincide with the source and target measures. Our parametrization allows sampling from the left marginal of the UEOT plan and identifying potential outliers in the source measure.
**(2) Aside from the universal approximation result, the theoretical contributions are also limited.**
We partially agree with the reviewer that the proof of our Universal Approximation Theorem (UAT) is the most difficult and tricky among the results obtained in our paper. However, the theoretical contributions of our paper are not limited to this theorem. Our other results include:
(1) Theorem 4.1 $-$ the derivation of the tractable optimization objective in terms of the $D_{\text{KL}}$ divergence between positive measures; (2) Proposition 4.2 $-$ the derivation of the bound on the estimation error of our solver; (3) Theorem A.4 $-$ the derivation of the dual form of the UEOT problem with the potentials belonging to the space $C_{2,b}(x)$ of continuous functions bounded by a quadratic polynomial (from both sides) and additionally bounded by a constant from above. The proof of each of these results is non-trivial and requires highly specialized knowledge across diverse fields of mathematics and statistics.
**(3) In the I2I experiments, the authors use the ALAE autoencoder, while some of comparison methods are implemented directly in the image space. All of the comparisons should be implemented in the latent space for fairness.**
All of the methods included in the comparison on the image-to-image translation task were *implemented in the latent space* of the ALAE autoencoder, which is mentioned in the paper, see Section 5.2, line 276. We agree that this could have been written more clearly, and we will additionally emphasize this aspect in the final version of our paper.
**(4) Since U-LightOT are implemented on a latent space that captures attributes well, the attribute accuracy is expected to be high. Other than accuracy, more general metrics such as c-FID or FID should be used for comparison.**
We conduct the image translation experiment in the latent space of the ALAE autoencoder. For this reason, we did not report FID metrics assessing the quality of the generated images but rather assess the quality of the generated latents, focusing on the Frechet distance (FD) defined via the difference in means and covariances of the generated and target latents.
However, to fully address the raised question, we report the FID scores between the generated images (produced by the ALAE decoder from the generated latent codes) and the target image distribution in the Table below (Adult → Young translation). The results show that FID is nearly the same for all of the models under consideration. This supports our intuition that FID is indeed not a representative metric for assessing the performance of models that translate latent codes.
| 10k updates | ULight-OT (ours) | Light-SB [1] | UOT-FM [2] | Yang et al. [3] | Choi et al. [4] |
|-------------|------------------|-------------------|----------------------|----------------------|-------------------|
| FID | $0.331 \pm 0.03$ | $0.331 \pm 0.03$ | $0.331 \pm 0.04 $ | $0.344 \pm 0.04 $ | $0.339 \pm 0.03 $ |
**(5a) Lack of ablation studies on the number of Gaussian modes $N$ and $M$.**
To address the reviewer's concern, we perform additional experiments with our solver, varying the number of Gaussian modes ($N$, $M$) in the potentials.
*Gaussian mixtures.* The setup of this experiment follows the setup introduced in our Section 5.1. We test our solver with varying numbers of modes in the potentials, $N\in\{1,3,5\}$ and $M\in\{1,2,3,4,5\}$. The results are visualized in Fig. 1 of the **attached PDF file**. It can be seen that with an insufficient number of modes in the potentials, the solver exhibits convergence issues and does not correctly solve the task.
**(5b) How does the performance change when parameterizing very high-dimensional and multi-modal GMM data with fewer or more N and M?**
Unfortunately, to assess the performance of our solver in such an experiment with multi-modal GMM data, we would need some kind of ground-truth solutions. However, for multi-modal GMM data the solutions *are not available*, making it hard to perform such an experiment. Following your comment, we qualitatively demonstrate the performance of our solver with a varying number of modes $N,M$ in the potentials for 2-dimensional Gaussian mixtures, and quantitatively assess its performance on the image-to-image translation task, see the answer above.
**Concluding remarks**. Please respond to our post to let us know if the clarifications above suitably address your concerns about our work. We are happy to address any remaining points during the discussion phase; if the responses above are sufficient, we kindly ask that you consider raising your score.
---
Rebuttal Comment 1.1:
Comment: **References.**
[1] A. Korotin et al. Light Schrödinger bridge. ICLR, 2024.
[2] L. Eyring et al. Unbalancedness in neural Monge maps improves unpaired domain translation. ICLR, 2024.
[3] K. D. Yang et al. Scalable unbalanced optimal transport using generative adversarial networks. ICLR, 2018.
[4] J. Choi et al. Generative modeling through the semi-dual formulation of unbalanced optimal transport. NeurIPS, 2023.
---
Reply to Comment 1.1.1:
Title: Further clarifications (1)
Comment: **In toy data experiments, does U-LightOT have lower transport plan costs and smaller Wasserstein distances between the target and generated distributions compared to the other methods?**
To answer this question, we compared our solver with different unbalancedness parameters $\tau\in\{1,10,50,100\}$ against LightSB in the experiment with a mixture of Gaussians. The results are presented in the table below. Note that our solver is designed to solve an unbalanced EOT problem with relaxed boundary conditions. This entails two properties. Firstly, our solver better preserves the properties of the input objects: indeed, it allows for domain translation that preserves object classes even in the case of class imbalance. Secondly, due to the relaxed boundary condition on the target distribution, the distribution generated by our solver is naturally less similar to the target distribution than for balanced methods.
The above intuitive reasoning is confirmed by the metrics we obtained. Indeed, as the $\tau$ parameter increases and our method becomes more and more similar to balanced approaches, the normalized OT cost ($\mathbb{E}\_{x\sim p} \mathbb{E}\_{y\sim \gamma(y|x)} \frac{(x-y)^2}{2}$) between the source and generated distributions increases, and the Wasserstein distance between the mapped $p$ and the target distribution $q$ decreases. This property of our solver was noted in our paper, see Appendix C. The LightSB [1] baseline, which is a purely balanced approach, shows the best quality in terms of Wasserstein distance and the worst in terms of OT cost.
| | LightSB | U-LightOT (ours, $\tau=100$) | U-LightOT (ours,$\tau=50$) | U-LightOT (ours,$\tau=10$) | U-LightOT (ours,$\tau=1$) |
|-------------------------|---------|----------------------|---------------------|--------------------|------|
| OT cost | 3.952 | 3.931 | 3.874 | 2.913 | 2.023 |
| $\mathbb{W}_2$-distance | 0.088| 0.091 | 0.138 | 1.107 | 2.044 |
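As an aside, the normalized OT cost quoted above is straightforward to estimate from sampled pairs; a minimal sketch under our own naming assumptions (not the paper's code):

```python
import numpy as np

def quadratic_transport_cost(x, y):
    """Monte-Carlo estimate of E_{x~p} E_{y~gamma(y|x)} ||x - y||^2 / 2,
    given paired samples (x_i, y_i) with y_i drawn from the conditional plan."""
    x, y = np.asarray(x), np.asarray(y)
    return float(0.5 * np.mean(np.sum((x - y) ** 2, axis=-1)))
```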
It is important to note that our method offers a flexible way to select a domain translation configuration that allows for better preserving the properties of the original objects or generating a distribution closer to the target one. The final optimal configuration selection remains at the discretion of the user. At the same time, balanced approaches do not allow making a choice in favor of preserving the properties of the original objects. | null | null | null | null | null | null |
Structured Matrix Basis for Multivariate Time Series Forecasting with Interpretable Dynamics | Accept (poster) | Summary: This paper proposes a novel approach for effectively capturing spatial and temporal correlations in multivariate time series forecasting and enhancing the interpretability of prediction models. The aim is to address the shortcomings of existing methods, which often fail to adequately reflect dynamic spatial correlations and exhibit high variability. To tackle this issue, the paper introduces a method that bypasses the traditional two-stage learning process and directly generates dynamic structures using a structured matrix basis. Furthermore, the basis matrices are parameterized through singular value decomposition (SVD), and all basis matrices share the same orthogonal matrices to improve the efficiency of model training.
Strengths: 1. The method of bypassing the two-stage learning process and directly generating dynamic spatial structures effectively overcomes the limitations of existing models.
2. By providing interpretability of the model, it offers users greater insights and increases the reliability of the results. This is a crucial aspect that is often required in many time series forecasting research papers.
3. This paper effectively addresses various limitations that arise in traditional time series forecasting problems.
4. This paper is well-organized theoretically.
Weaknesses: 1. The proposed model may be complex to implement due to the use of structured matrix bases and singular value decomposition, potentially leading to a steep initial learning curve in practical applications. Does the appendix indicate similar results for datasets other than Electricity?
2. While the model has been validated on various datasets, it remains to be seen how well it performs on very specialized domain data or data with unclear characteristics (i.e., weak temporal dependencies or spatial correlations). How does the model address these scenarios?
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The proposed model may be complex to implement due to the use of structured matrix bases and singular value decomposition, potentially leading to a steep initial learning curve in practical applications. Does the appendix indicate similar results for datasets other than Electricity?
2. While the model has been validated on various datasets, it remains to be seen how well it performs on very specialized domain data or data with unclear characteristics (i.e., weak temporal dependencies or spatial correlations). How does the model address these scenarios?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Including more analysis and comparison with graph-based spatio-temporal models would enhance the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >Q1 The proposed model may be complex to implement due to the use of structured matrix bases and singular value decomposition, potentially leading to a steep initial learning curve in practical applications. Does the appendix indicate similar results for datasets other than Electricity?
We would like to clarify that our method actually is very easy to implement. Note that the **singular value decomposition only serves to offer a parameterization form; we do not need to implement it in practice**. Instead, we only need to implement the parameterization of two orthogonal matrices $\mathbf{U}$ and $\mathbf{V}$ via the Cayley map. As we mentioned in the paper, the Cayley map can be implemented in a numerically stable way **in PyTorch with one line of code via the `torch.linalg.solve` function**. The learning curves of our proposed method are also very stable across all six datasets, and we will add more of them to the revised manuscript.
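A minimal PyTorch sketch of this idea (names and shapes are our illustrative assumptions, not the paper's actual code): the Cayley map sends a skew-symmetric matrix to an orthogonal one, and `torch.linalg.solve` avoids forming an explicit inverse.

```python
import torch

def cayley_orthogonal(W: torch.Tensor) -> torch.Tensor:
    """Build an orthogonal matrix from an unconstrained square parameter W.
    A = W - W^T is skew-symmetric, so Q = (I + A)^{-1} (I - A) is orthogonal."""
    A = W - W.T  # skew-symmetric part of the free parameter
    I = torch.eye(W.shape[0], dtype=W.dtype, device=W.device)
    # numerically stable: solve (I + A) Q = (I - A) instead of inverting
    return torch.linalg.solve(I + A, I - A)
```

Since `Q` is differentiable in `W`, the unconstrained parameter can be trained with ordinary gradient descent while `Q` stays exactly orthogonal.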
>Q2 While the model has been validated on various datasets, it remains to be seen how well it performs on very specialized domain data or data with unclear characteristics (i.e., weak temporal dependencies or spatial correlations). How does the model address these scenarios?
Thanks for your comments. To our knowledge, the temporal dependency assumption is necessary for all time series forecasting methods. If a dataset has only **weak temporal dependencies**, it will impede the performance of all existing time series forecasting methods. For datasets with **weak spatial correlation**, it means that different channels (series) are independent and our proposed method will consequently learn an identity matrix for the spatial structure. This will not cause any negative impact on the forecasting performance. We will add the corresponding experiments to verify this by synthesizing the datasets with independent channels in the revised manuscript.
>Q3 Including more analysis and comparison with graph-based spatio-temporal models would enhance the paper.
Thanks for your suggestion. In Table 1 of our manuscript, MTGNN, MegaCRN, iTransformer, Crossformer, Card, ESG, FourierGNN, and TPGNN are all GNN-based methods. First, we note that the graph-based spatio-temporal methods often produce better results than the methods without considering the spatial structures. Second, the dynamic graph-based methods such as iTransformer, Card, and ESG often give rise to better performance than the static graph-based methods such as MTGNN and MegaCRN. However, the existing graph-based spatio-temporal models all rely on a two-stage spatial structure learning process that is prone to yielding high variance and impeding the final performance, which is verified by the experiments. We will add more comparisons and analyses in the next version of our paper.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response, after reading your response and other reviews, I am updating my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your timely response and the suggestions for improving this paper. We will further revise the paper according to your suggestions and the results in the rebuttal, including adding more learning curves, the corresponding experiments to verify the performance on datasets with independent channels, and more analyses and comparisons with graph-based spatio-temporal models. | Summary: This paper presents a multivariate forecasting model underlined by a dynamic spatial structure generation function, enabled by SVD-based parameterization and theoretically-bounded output space. Experiments on six commonly benchmark datasets in comparison to several existing baselines demonstrated the overall improved forecasting performance of the presented method as measured by MAE/RMSE/MAPE. Additional ablation studies further demonstrated the benefits of dynamic coefficient generation and the structured parameterization.
Strengths: The concept of dynamically generating the spatial structure function is novel and has practical value for adapting to the varying spatial structure underlying time series data.
The SVD-based parameterization and low-rank approximation provides theoretical-based solutions to the challenge of identifying time-varying spatial structure functions.
The experiments considered a large number of common benchmarks and representative existing forecasting models. The performance was overall favorable, and the ablation study thorough.
The interpretability results in section 4.4 and Fig 2 are interesting, especially in uncovering the dynamic patterns underlying a dataset.
Weaknesses: The obtained performance gain (Table 1) was overall marginal (up to 2-3 decimal points). The practical implication of such margin of improvements is not clear and need to be clarified. The effect of hyperparameters and random initialization on such margin of improvements also needs to be examined — adding statistics to the results over different random seeds of experiments is important.
In switching dynamic systems, it is also common to model the transition matrix over time as a linear combination of several global matrices, with time-varying mixing coefficients (without considering the proposed SVD decomposition and parameterization). It’d strengthen the paper to add discussion about the relation with this line of works, as well as experimental comparison.
Based on the results in Fig 3, it appears that increasing M from 1 to 2 in general introduced small difference in performance except in the Electricity dataset. This again raises some question on the significance of the dynamic spatial structure introduced in this paper. Stronger clarification on performance improvement and adding error bars to the results in Fig 3 will be important for verifying the contribution of the paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: Clarification on the observed margin of performance improvements and statistics/error bars to the results will be appreciated.
Better clarification and empirical results on relation with switching dynamics systems will be appreciated.
What is the computational cost difference in the ablation with or without the common $\mathbf{U}$-$\mathbf{V}$ basis (i.e., vs. w/ $\mathbf{U}_m$, $\mathbf{V}_m$)?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors did not include sufficient discussion about the limitation or potential negative societal impact of the presented work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >Q1 The obtained performance gain (Table 1) was overall marginal (up to 2-3 decimal points). Adding statistics to the results over different random seeds of experiments is important.
> Clarification on the observed margin of performance improvements and statistics/error bars to the results will be appreciated.
Thank you for your comments and suggestions. We would like to point out that performance gains of 2-3 decimal points are often considered **notable improvements in the time series forecasting literature**; e.g., the prior works iTransformer [1] and Card [2] also achieve similar gains. To demonstrate this more clearly, we provide the **improvements (percentage)** over the best baseline methods in the following table.
| Dataset | Electricity | Weather | PEMS| ETTh2 | Traffic | Solar |
|:------:|:------:|:------:|:------:|:------:|:------:|:------:|
| **Improvement**| 2.06% | 3.97% | 3.17% | 1.06% | 1.66% | 2.15% |
In summary, our proposed method achieves an **average performance improvement of 2.35%** over the best baselines. This enhancement is significant in the field of time series forecasting and verifies the effectiveness of our proposed model.
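As a quick sanity check, the reported average can be reproduced directly from the per-dataset figures in the table above:

```python
# Per-dataset improvements (%) over the best baseline, taken from the table above.
improvements = {"Electricity": 2.06, "Weather": 3.97, "PEMS": 3.17,
                "ETTh2": 1.06, "Traffic": 1.66, "Solar": 2.15}
avg = sum(improvements.values()) / len(improvements)
assert abs(avg - 2.35) < 0.01  # matches the reported 2.35% average improvement
```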
Regarding the effect of random initialization, as mentioned in line 263 of our manuscript, our reported results are actually averaged over three runs with different random seeds. We will further add the statistics (standard deviation) into the tables as well as error bars in our revised manuscript. In addition, **the error bars are also provided in Figure 1 in the supplied PDF file**.
[1] iTransformer: Inverted Transformers are Effective for Time Series Forecasting, ICLR 2024.
[2] Card: Channel Aligned Robust Blend Transformer for Time Series Forecasting, ICLR 2024.
>Q2 In switching dynamic systems, it is also common to model the transition matrix over time as a linear combination of several global matrices, with time-varying mixing coefficients. Better clarification and empirical results on the relation with switching dynamic systems will be appreciated.
Thank you for your suggestion. In switching dynamic systems, **the transition matrix is used to model the temporal dependencies**. Recent representative works include Hippo [1], LSSL [2], S4 [3], and Mamba [4]. In contrast, **the structured matrix basis is used to capture the spatial structures** in our proposed method. We have conducted additional experiments to **compare with S4 in Table 2 in the supplied PDF file** and will also add the experimental results as well as the corresponding discussion in our revised manuscript.
[1] Hippo: Recurrent Memory with Optimal Polynomial Projections, NeurIPS 2020.
[2] Combining Recurrent, Convolutional, and Continuous-time Models with Linear State Space Layers, NeurIPS 2021.
[3] Efficiently Modeling Long Sequences with Structured State Spaces, ICLR 2022.
[4] Mamba: Linear-time Sequence Modeling with Selective State Spaces, arXiv 2023.
>Q3 Based on the results in Fig 3, it appears that increasing M from 1 to 2 in general introduced small difference in performance except in the Electricity dataset.
> Stronger clarification on performance improvement and adding error bars to the results in Fig 3 will be important for verifying the contribution of the paper.
Thank you for your suggestion. To further demonstrate the effectiveness of our proposed dynamic spatial structure, we present the **improvements (percentage)** in the table below as $M$ increases from 1 to 2. The results show that our method achieves an **average performance improvement of 5.66%** across six different datasets. This significant improvement underscores the effectiveness of our dynamic spatial structure design. We also include the **error bars in Figure 2 in the supplied PDF file** to show these more clearly.
| Dataset | Electricity | Weather | PEMS| ETTh2 | Traffic | Solar |
|:------:|:------:|:------:|:------:|:------:|:------:|:------:|
| **Improvement**| 10.77% | 5.45% | 3.10%| 2.20% | 2.80% | 9.63%|
>Q4 What is the computational cost difference in the ablation with or without common U-V basis (i.e., vs. w/ Um,Vm)?
Let $M$ be the dimension of the matrix basis, $N$ the number of nodes, $K$ the rank, and $D$ the feature dimension. As shown in Equation 10 of our manuscript, if each matrix basis has specific coordinates $\mathbf{U}_m$ and $\mathbf{V}_m$, then $\mathbf{V}_m^T \mathbf{x}$ and $\mathbf{U}_m \mathbf{y}^\prime$ will be calculated $M$ times. Consequently, the computational costs are $\mathcal{O}(MNKD)$ and $\mathcal{O}(NKD)$ for the models without and with a common $\mathbf{U}$-$\mathbf{V}$ basis, respectively. The computational cost difference will be included in our revised manuscript.
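To make the cost difference concrete, here is a small NumPy sketch of the two parameterizations. The shapes and the per-basis core matrices `S_m` are illustrative assumptions for this sketch, not the exact formulation of Equation 10:

```python
import numpy as np

# Hypothetical shapes: M matrix bases, N nodes, rank K, feature dimension D.
M, N, K, D = 4, 50, 8, 16
x = np.random.randn(N, D)
alpha = np.random.dirichlet(np.ones(M))  # convex mixing coefficients
S_m = np.random.randn(M, K, K)           # per-basis cores (illustrative)

# (a) Per-basis coordinates U_m, V_m: V_m^T x and the outer product with U_m
#     are computed M times, so the cost scales as O(M N K D).
U_m = np.random.randn(M, N, K)
V_m = np.random.randn(M, N, K)
out_a = sum(alpha[m] * (U_m[m] @ (S_m[m] @ (V_m[m].T @ x))) for m in range(M))

# (b) Common U-V basis: V^T x and the product with U are computed once, and the
#     mixing happens in the small K x K cores, so the dominant cost is O(N K D).
U = np.random.randn(N, K)
V = np.random.randn(N, K)
S = sum(alpha[m] * S_m[m] for m in range(M))  # mix the cheap cores first
out_b = U @ (S @ (V.T @ x))

assert out_a.shape == out_b.shape == (N, D)
```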
>Q5 The authors did not include sufficient discussion about the limitation or potential negative societal impact of the presented work.
Thanks for your suggestion. We will add more discussion on the limitations, including the model performance on **long-term forecasting tasks and hyperparameter selection**.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal effort -- I appreciate the new baseline and the error bars added to the pdf. It'd be highly recommended for the authors to add the error bar (or their numerical values) to the main text of the paper if accepted, as that's the best evidence for the significance of the margins of improvements (instead of stating that the range is common in literature). I will raise my rating.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank the reviewer for raising the rating! We will include error bars in our revised manuscript. | Summary: This paper presents a method to learn dynamic spatial structures in spatio-temporal forecasting tasks. Specifically, it proposes to parametrize the dynamic structures with a convex combination of fixed matrix bases, and the bases are further confined to be in the same coordinate system. Beyond that, it also imposes low-rank assumption on the coordinates to further reduce complexity. Empirical evidences show the proposed method achieve impressive accuracy in a number of spatio-temporal forecasting benchmarks, and the found spatial structures are highly interpretable.
Strengths: - The paper is well-written with clear introduction of motivation behind major proposals and solid theoretical ground.
- Extensive experiments are provided to showcase the advantages of the proposed method empirically, addressing both efficacy and efficiency concerns.
Weaknesses: The forecasting horizon in the experiments is quite different from the setting in some baselines, such as PatchTST and FEDformer. While long-term forecasting is not a major claim of the paper, I wonder if the method scales well with prediction length.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Why is TCN selected for temporal encoding?
2. Is there any issue that the learned bases degenerate to be trivial or converge to be close?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are discussed, and I agree that the selection of $M$, i.e. the number of bases, is highly empirical and ad-hoc.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >Q1 The forecasting horizon in the experiments is quite different from the setting in some baselines, such as PatchTST and FEDformer. While long-term forecasting is not a major claim of the paper, I wonder if the method scales well with prediction length.
Thanks for your comments. Indeed, long-term forecasting is not the main focus of our proposed method, and the present version of Sumba is not aimed at addressing long-term predictions. But **we do conduct experiments with a forecasting horizon of 96 and as Table 1 in the supplied PDF file shows, Sumba still gives rise to favorable performance** in comparison to PatchTST, FEDformer, and iTransformer which are designed specifically for long-term forecasting. We will leave it for our future work to further enhance its long-term forecasting capability and add this discussion to the limitation section.
>Q2 Why is TCN selected for temporal encoding?
Since the main focus of the paper is to effectively capture the spatial structures under the GCN framework, TCN is chosen for temporal encoding **due to its simplicity and ease of integration with the GCN operation**. In addition, the **TCN architecture** (with appropriate modifications) [1] **has also proved very effective in time series analysis** recently. We are also exploring the possibility of integrating our proposed spatial structure modeling with other temporal encoders such as Transformers, Structured State Space models, etc.
[1] ModernTCN: A Modern Pure Convolution Structure for General Time Series Analysis, ICLR 2024.
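For readers unfamiliar with TCNs, a minimal sketch of the dilated causal convolution at their core (a generic one-channel illustration, not the exact encoder used in the paper):

```python
import numpy as np

def causal_conv1d(x, w, dilation=1):
    """Dilated causal 1-D convolution: the output at time t only sees x[<= t]."""
    T, k = len(x), len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])  # left-pad so no future leaks in
    return np.array([sum(w[j] * xp[pad + t - j * dilation] for j in range(k))
                     for t in range(T)])

x = np.arange(6, dtype=float)                            # a toy series
y = causal_conv1d(x, np.array([1.0, -1.0]), dilation=1)  # causal first difference
assert np.allclose(y, [0., 1., 1., 1., 1., 1.])
```

Stacking such layers with growing dilation gives an exponentially large causal receptive field, which is what makes the architecture easy to pair with per-timestep GCN operations.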
> Q3 Is there any issue that the learned bases degenerate to be trivial or converge to be close?
**We do not empirically run into this issue** in our experiments (conducted on six public datasets from various domains). We hypothesize that **the degeneration issue may arise when the spatial structures are static** and can be represented by a single graph structure. In such a case, the model would only attend to one of the matrices in the basis (that is, all weights are concentrated in one entry while the others are zero in the coefficient $\alpha$). This will not have a negative impact on the model's performance.
> Q4 Limitations are discussed, and I agree that the selection of $M$, i.e. the number of bases, is highly empirical and ad-hoc.
Yes, it will be our future work to explore the adaptive selection of $M$ as well as enhancing the model’s long-term forecasting capability. | null | null | Rebuttal 1:
Rebuttal: We sincerely thank all reviewers for their precious comments and valuable suggestions. Our major points of view are summarized as follows.
- We conducted **additional experiments with a forecasting horizon of 96**.
- We provide explanations on **the choice of temporal encoding function and the potential degeneration issue**.
- We clarify the **performance improvements** and include the error bars to show them more clearly.
- We include **S4 as a new baseline method** and the corresponding discussion is also presented.
- We provide the **computational cost** of the models with/without a common $\mathbf{U}$-$\mathbf{V}$ basis.
- We further clarify the **implementation details** and add a more **detailed analysis and comparison with graph-based spatiotemporal models**.
Pdf: /pdf/6eb58f154468a48bb672641a6f9d826c9d154805.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Community Detection Guarantees using Embeddings Learned by Node2Vec | Accept (poster) | Summary: The paper presents theoretical results for the quality of embeddings learned by random walk based methods like DeepWalk and its generalization node2vec. More precisely, the main result shows that embeddings learned node2vec, and DeepWalk as a special case of node2vec, preserve the community structure of graphs that exhibit a community structure. This is formalized for graphs generated by a stochastic block model (SBM), a common tool for the analysis community detection algorithms. The paper is mostly theoretical with experimental results based on synthetic graphs generated by SBM.
Strengths: - The paper provides theoretical insights into a widely used embedding algorithm.
- The theoretical results are non-trivial, and the proof techniques are advanced.
- Overall, the paper is well-written.
Weaknesses: - The considered problem is too specific. Of course, DeepWalk and node2vec are widely used, but community detection is just one of the applications of node embeddings. In the appendix, the authors discuss node classification and link prediction, but my understanding is that this is again based on community structures and not really convincing. There are other factors that influence node labels, such as structural roles, node degree, etc.
- The paper is about node2vec but the effect of the p and q parameters is not really analyzed. The results would have been much more interesting if they analyzed the interplay of p and q in node2vec and p and q in SBM. In particular, given a graph G generated by SBM(p, q) what values p, q in node2vec(p,q) would be necessary to preserve the community structure of G.
- The informal Theorem 1 is actually confusing. It does not say anything about the structure of the graph or the number of communities c(u). If the graph consists of several disconnected components or the number of communities is n, then the result is trivial.
Technical Quality: 4
Clarity: 3
Questions for Authors: Comment on the above issues, in particular why the influence of p and q in node2vec(p,q) is not thoroughly analyzed. I see the short discussion after Theorem 2 but cannot gain much from it. Also, Corollary 5 does not make sense as scenario i) is DeepWalk with p=q=1. Should the corollary read $\tilde{p} > \tilde{q}$ ?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The paper does not present a new approach so the limitation section does not really apply here. The assumptions for the theoretical results are correctly shown, if this can be considered a limitation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed reading of our work. Below are our responses to some of the weaknesses and questions about the paper.
## Weaknesses
> The considered problem is too specific...
It is true that there are other frequent applications of embeddings, which our paper does not focus on (with the exception of the remarks in the appendix). This is a criticism which applies more broadly across the theoretical literature (e.g., the literature on the theory of spectral clustering of networks). The reason why our discussion in the appendix ends up being based on community structures is that we assume the graph arises from a model where community labels are the main nodal latent information, and our results show that the embeddings learned do reflect this structure. Generally, if learned embeddings capture some statistic of nodal attributes, it is also necessary that these attributes be predictive for the task in order to say something meaningful about the task performance.
Because our approach depends on applying a generative model to a graph, we need suitable graph models which reflect these additional factors - such as node degree - to be able to analyze them. We highlight that part of our results apply to DCSBM models, which take node degree into account. For these models, we show that DeepWalk (node2vec with p = q = 1) learns a representation which is free of the degree heterogeneity parameters, even though they are an important predictive factor for whether two nodes are connected. As a result, one cannot always assume learned embeddings capture all the relevant information to complete these tasks. We do ultimately agree, though, that more in-depth investigations along these lines would be interesting and valuable work.
> the effect of $p$ and $q$ is not really analysed
This is true, and is because the formulas for the sampling probabilities of the second-order random walk are substantially more complicated to specify than when $p = q = 1$, in which case the random walk reduces to a (stationary) first-order random walk. There are some things which could potentially be investigated numerically - for example, given a particular 2-block SBM model, one could numerically calculate the gap between the two vectors which the embeddings cluster around as a function of $(p, q)$. In theory, the larger this gap, the quicker the convergence of the community detection (as given by Theorem 4). However, our numerical experiments seem to suggest that, at least within the regime we study, the effects of $(p, q)$ on the downstream performance are minimal as soon as $n$ is moderately sized (see e.g. Figure S6), and so the value added by such a simulation seems minimal.
> The informal Theorem 1 is confusing...
There is some additional context above the informal theorem statement which avoids these trivialities. We can revise the theorem statement to make it more clear by explicitly including the description of the SBM (and highlighting that it is connected w.h.p) within the theorem statement - thank you for highlighting this.
## Questions
> why the influence of p and q in node2vec(p,q) is not thoroughly analyzed...
The discussion after Theorem 2 relates specifically to DeepWalk, and considers what happens for a particular SBM model (the planted partition model). If this comment concerns how this discussion relates to node2vec, one possible source of confusion is that the node2vec parameters $p$ and $q$ are also frequently used as the parameters of the planted partition model, which is discussed after Theorem 2. While we use $\tilde{p}$ and $\tilde{q}$ for the planted partition model parameters to prevent notation overloading, we can see that this could either do with changing (or be introduced more strongly) to prevent any possibility of confusion. We will improve the clarity of this when we revise the paper.
> should read $\tilde{p} > \tilde{q}$?
Correct, thanks for catching this!
---
Rebuttal Comment 1.1:
Comment: Thank you for your responses. Your observation that the effect of p and q is minimal, is actually quite interesting. I would highlight this as a separate result. This is exactly the kind of results that would make the paper more interesting to a broader audience. I still have my doubts that the paper would be interesting to the broader graph ML community but overall this is a solid work and I decided to increase my score. | Summary: Theoretical results showing the consistency of node2vec embedding for community detection in SBMs and DCSBMs using k-means clustering. Versions of this problem have been approached in other work, for example, node2vec rephrased as a matrix factorisation problem [39], and rank $d$ approximations for DeepWalk [51]. This paper addresses the issues in previous works relating to approximations made within previous work, such as an Frobenius norm approximation to the cross-entropy loss, at the cost of weaker consistency results, $\ell^2$-norm as opposed to two-to-infinity norm.
Strengths: The strength and novelty of this work lies in the technical results in the Appendix. The paper would be better served with this material given more attention, perhaps in a journal rather than conference paper. However, I was unable to give the material in the Appendix a full read through, so I am working under the assumption that these results are technically sound. More clarity would be needed to show how the results differ from previous work (see below).
There are some important insights about the use of node2vec for node clustering in different types of SBM and DCSBM. For example, setting $U = V$ in constrained DeepWalk where $p \le q$ is unable to cope with disassortative (DC)SBMs. This feels like a significant issue and I would be interested to learn more about why random walk based embedding algorithms fail on bipartite-like networks. It is also interesting that node2vec on DCSBMs produces clusters that only depend on the SBM community and that the degree parameters $\theta$ are unimportant. Being able to perform k-means clustering directly on the node2vec embedding is an advantage over, say, spectral methods where hyperplane or spherical projection is required to remove the effect of $\theta$ from the embeddings.
Weaknesses: Without delving too deeply into the mathematics in the Appendix, it is not obvious to me how this paper extends the results in [10] and [11] which only get briefly mentioned on Lines 121-123. These two papers analyse the asymptotic distribution of the node2vec embedding for latent position graph models, a wide range of models which include the SBM and DCSBM. It is not clear how the theoretical results in this paper extend those results, except for looking at the consistency for k-means clustering, which is non-specific to the node2vec embedding stage. I don't want to say that the contribution in this work is 'poor', but it is hard to know what is new here in the current state.
Understanding random walk embedding algorithms is important, but they have been overtaken by graph neural networks designed for this exact problem, rather than ideas shoehorned in from natural language processing.
The experiment shown in Figure 3 is somewhat misleading. I read this plot as showing that k-means clustering is unsuitable for spectral embeddings of DCSBM, like the political blog network, rather than spectral embedding failing.
Technical Quality: 3
Clarity: 2
Questions for Authors: The main issue is how do the results here compare to the work of [10] and [11]? Section 1.2 on Related Works describes how this work extends [39] and [51], but the fact that these two strongly relevant papers are only given a brief sentence feels a significant oversight.
What conditions are needed/prevent two-to-infinity norm consistency compared to the $\ell_2$-norm in this setting?
What about other clustering algorithms other that k-means? The results show asymptotically k-means will give the correct clustering, other papers [41] show that there are issues for fixed $n$ which are avoided by other clustering algorithms such as fitting Gaussian mixture models. Practically, any matrix $M$ transformation $U \rightarrow UM$ and $V \rightarrow VM^{-\top}$ is a valid embedding which will affect the k-means clusters.
Similar results for core-periphery SBMs (On a 'two-truths' phenomenon in spectral graph clustering, Priebe et al), another important type of SBM for practical embeddings?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Error bars needed in Figures 1 and 2 showing the multiple experiments described in Lines 296-297. Needs comparison to other embedding algorithms, for example, spectral embedding. Misleading experiment given in Figure 3 (see above).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your detailed reading of our paper and the constructive feedback.
## Weaknesses
> ... how this paper extends the results in [10] and [11] ...
The comparison in the paper was limited due to space constraints, and we will expand the discussion. In summary, [10] and [11] study node2vec with $p=q=1$ in the constrained setting (where $U = V$), and focus on giving guarantees for the gram matrix in the more abstract setting of graphons. In [10] the norm guarantees extend only to the L1 norm between the gram matrix of the embeddings and the minimizer, which is not sufficient to give guarantees on the individual embeddings. In [11] the norm guarantees are upgraded to the L2 norm, albeit with less optimal rates of convergence than in our paper. In our paper we also give guarantees for node2vec in full generality (no restriction on p/q) and give the calculation details for SBMs and DCSBMs to explicitly show what the asymptotic distribution can be in certain regimes.
> ... overtaken by graph neural networks ...
While GNNs are more performant than node2vec and reflect the current state of the art, there is value in understanding why methods transferred from NLP work (when a priori there is no apparent reason why they should). We answer this by showing that random walks on the graph recover the latent information required for community detection (in models with community structure). We argue there is also partial relevance of these results even for GNNs - for example, in the unsupervised setting the GraphSAGE authors [1] recommend using the GraphSAGE encoder within the node2vec loss. In the absence of nodal covariates or a supervised loss, the only way in which GNNs can learn relevant information is if graph sub-samples carry relevant latent information for the downstream tasks at hand, which is the flavor of our type of results.
[1] Hamilton, William L. and Ying, Rex and Leskovec, Jure. Inductive Representation Learning on Large Graphs. Neural Information Processing Systems, 2017.
> The experiment shown in Figure 3 is somewhat misleading...
We included this experiment to highlight that many community detection methods struggle with this degree heterogeneity. As discussed in the rebuttal to NUS8, we see the same performance for different variations of the graph Laplacian. We considered using a Gaussian mixture model rather than k-means with the spectral embeddings and saw the same poor performance.
## Questions
> The main issue is how do the results here compare to the work of [10] and [11]?
See the above discussion. We will expand this in the revised paper.
> ... two-to-infinity norm consistency compared to the $\ell_2$-norm in this setting?
To provide additional context for why this is relevant - obtaining two-to-infinity norm consistency would allow our theoretical guarantees for community detection to be upgraded to say that exact recovery holds with asymptotic probability 1. With our current approach, this is out of reach, as we apply results similar in flavor to the Davis-Kahan theorem in order to go from bounds between gram matrices to bounds on the individual factors. These bounds are tight in general, and so in order to obtain tighter bounds we need some explicit structure on the minimizers of the loss function. For example, in [2] and [3] the authors use theorems which exploit the fact that spectral clustering takes the eigenvectors of matrices with an explicit, rich probabilistic structure. In our scenario, the mix of the cross-entropy loss with a low-rank matrix factorization problem prevents us from being able to write down an explicit minimizer, which means being able to apply these types of results is (unfortunately) unlikely.
- [2] Strong Consistency, Graph Laplacians, and the Stochastic Block Model. Shaofeng Deng, Shuyang Ling, Thomas Strohmer. Journal of Machine Learning Research, 22(117):1−44, 2021.
- [3] Strong Consistency of Spectral Clustering for Stochastic Block Models. Liangjun Su; Wuyi Wang; Yichong Zhang. IEEE Transactions on Information Theory, Volume: 66, Issue: 1, January 2020.
> What about other clustering algorithms other that k-means?...
This is an interesting question. Reviewing the literature on GMMs, we find that they frequently assume the data are derived from a GMM in order to give guarantees, and so these results do not seem easily applicable in our scenario. One way of interpreting our results is that for, e.g., 99% of nodes, with asymptotic probability 1 as $n \to \infty$, nodes in separate communities are contained within separated spheres of radius $\delta/4$, and the minimum distance between spheres is $\delta$. This implies that, e.g., a DBSCAN algorithm (ignoring the distinction between core and border points) with an Eps parameter of $\delta/4$ would correctly recover the communities of >99% of nodes. One could probably use similar logic to argue that a sufficiently well-initialized EM algorithm (with a robust maximization step) would converge, but this is orthogonal to the work of the paper. We will add a discussion for DBSCAN as it follows very quickly from the definition of that algorithm, and if we are able to find any references for GMMs which are easily applicable in our scenario, we will include this also.
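To illustrate the geometric argument, here is a small NumPy sketch. For the demonstration we use a deliberately tighter within-community radius of $\delta/8$, so that any two same-community points are provably within $\delta/4$ of each other; connected components of the $\delta/4$-threshold graph (i.e., DBSCAN with min_samples = 1) then recover the two communities exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
delta = 1.0
centers = np.array([[0.0, 0.0], [delta, 0.0]])  # community centers, delta apart
true = np.repeat([0, 1], 40)

# Each embedding lies within delta/8 of its community center, so same-community
# points are at most delta/4 apart and cross-community points at least 3*delta/4.
dirs = rng.normal(size=(80, 2))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
pts = centers[true] + (delta / 8) * rng.uniform(0, 1, (80, 1)) * dirs

# DBSCAN with min_samples = 1 reduces to connected components of the
# eps-threshold graph; here eps = delta / 4.
eps = delta / 4
adj = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1) <= eps
labels = -np.ones(80, dtype=int)
n_clusters = 0
for i in range(80):
    if labels[i] == -1:
        stack = [i]
        while stack:                      # flood-fill one component
            j = stack.pop()
            if labels[j] == -1:
                labels[j] = n_clusters
                stack.extend(np.flatnonzero(adj[j] & (labels == -1)))
        n_clusters += 1

assert n_clusters == 2                                        # both communities found...
assert (labels == true).all() or (labels == 1 - true).all()   # ...and recovered exactly
```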
> ... core-periphery SBMs ...
This is doable with our results. We do not have space to include it here (we can provide it as an additional comment if desired), but we can show that for the core-periphery 2-group SBM as given in the paper, the learned embeddings will be separated with a gap on the order of the largest edge probability, allowing community detection. The finite sample performance would be an interesting future direction.
## Limitations
> Error Bars
These are included but are too small. We will increase the size.
> Comparison to other embedding algorithms
In the appendix we examined the rates of convergence for our proposed method. We also included an additional real data example, getting similar results.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response.
I am slightly more optimistic about two-to-infinity norm consistency, but it was interesting to hear your views. For spectral embedding techniques, results can be shown using Davis-Kahan to get useful bounds. Since node2vec can be equated to a matrix factorisation in certain settings [39], I am hopeful something could be possible. Perhaps, even a CLT style result showing the asymptotic distribution of the node2vec embedding. This motivated my question about clustering algorithms, but I accept that k-means will be good enough if we want most nodes to be correctly labelled as with asymptotic probability 1 as $n \to \infty$.
Again, I feel the strength and novelty of this work lies in the Appendix. The paper is an important technical improvement over previous work, although admittedly I am not the intended audience of this work. Thank you for proposing to include further discussion on [10] and [11] in the revised paper. I have improved my score as a result of this rebuttal.
Strengths: The paper is very well-written and pleasant to read.
The results are extremely useful in understanding the performance of node2vec in downstream tasks.
Weaknesses: The presented results are interesting, but not much intuition is provided.
Minor comments:
* The sentence from lines 21 to 24 is unnecessarily long and difficult.
* on line 110: shouldn't $d\ge O(\kappa)$ be $d=\Omega(\kappa)$? Or do you mean something else?
* Brackets mismatch in the $\log\sigma$ at the end of (10)
* The $\mathcal{E}$ (presumably the edge-set) of (7) is not introduced
* The notation $A_{2,\infty}$ in Theorem 2 collides with the adjacency matrix notation.
* Figures are too small
Technical Quality: 4
Clarity: 3
Questions for Authors: Is there any hope of extending the results to partial recovery in the sparse regime? That is, $\rho_n=\Theta(n^{-1})$ so that the average degree remains bounded as $n\rightarrow\infty$.
Can the results be extended to the case where the number of communities grows with $n$? If so, how fast can it grow with $n$?
The NMI$\approx0$ that is shown in Figure 3 is surprising. Can we improve this by taking a different Laplacian?
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: The authors have adequately addressed the limitations by clearly stating the assumptions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your careful reading of our paper and pointing out some further details which can improve it. Below we address each of the mentioned weaknesses and questions.
## Weaknesses
> The presented results are interesting, but not much intuition is provided.
We aren't certain whether this is asking for intuition about the proof technique or the bounds produced (or something else). Here’s some intuition for the proof technique - we are happy to provide further summaries where it would be helpful and can add to the revised version of the paper.
Keeping to the SBM case, the proof works as follows: we show that the probability that an edge $(u, v)$ is positively or negatively sampled within node2vec concentrates around a function only of the underlying communities of vertices $u$ and $v$, which we label $c(u)$ and $c(v)$ respectively. With this, we are able to argue that the node2vec loss concentrates uniformly (in a neighborhood of its minima) around a function whose minimizer $M^* \in \mathbb{R}^{n \times n}$ satisfies $M^*_{u, v} = \tilde{M}_{c(u), c(v)}$ for some matrix $\tilde{M} \in \mathbb{R}^{\kappa \times \kappa}$.
This allows us to show that any set of embeddings $\omega_u$ which minimize the node2vec loss will converge (up to rotation) to vectors $\eta_{c(u)}$ which depend only on the community label, which consequently allows us to give consistency guarantees for clustering algorithms such as k-means.
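To make the last step concrete, here is a toy sketch (ours, not code from the paper) of why embeddings that concentrate at community-dependent vectors yield consistent clustering: we place each node's embedding near one of two hypothetical centers, standing in for the limit vectors $\eta_{c(u)}$, and check that a nearest-center assignment (a one-step stand-in for k-means with known centers) recovers the community labels.

```python
import random

random.seed(0)

# Hypothetical community centers, stand-ins for the limit vectors eta_{c(u)}.
eta = {0: (1.0, 0.0), 1: (-1.0, 0.0)}

# Each node's embedding concentrates tightly around its community's center.
labels = [random.randrange(2) for _ in range(200)]
emb = [(eta[c][0] + random.gauss(0, 0.05),
        eta[c][1] + random.gauss(0, 0.05)) for c in labels]

def nearest_center(point, centers):
    # One-step stand-in for k-means: assign each point to its closest center.
    return min(centers, key=lambda k: (point[0] - centers[k][0]) ** 2
                                      + (point[1] - centers[k][1]) ** 2)

recovered = [nearest_center(p, eta) for p in emb]
accuracy = sum(r == c for r, c in zip(recovered, labels)) / len(labels)
print(accuracy)  # close to 1.0 when the embeddings concentrate tightly
```

The real argument, of course, has to show that the minimizers of the node2vec loss actually land near such centers; the sketch only illustrates why that concentration suffices for consistent clustering.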
> Minor Comments
Thanks for pointing these out, in particular the typo about $d=\Omega(\kappa)$, the poor use of $A_{2,\infty}$, and the unnecessarily long sentences. We have updated these and fixed the bracket mismatch. $\mathcal{E}$ is defined at the start of Section 2, but we will highlight this again when we first use it. We will also make all figures larger and clearer when the paper is revised.
## Questions
> Extend to partial recovery in the sparse regime
Doing so would be very interesting but challenging, as studying random walks on the graph in the exactly sparse regime is substantially more difficult than in denser regimes. To give some intuition for why, suppose we simply have an Erdos-Renyi graph (i.e., a single community) with edge probability $p = \lambda / n$ for $\lambda > 1$ (so we are in the regime where the graph has a single giant component), and that we restrict the graph to this giant component. A tractable contiguous model for such graphs - based around taking a random multigraph, expanding the edges into paths of random length, and then attaching Galton-Watson trees to each vertex - is given by Ding, Lubetzky and Peres [1]. This model has been used successfully to describe properties of random walks on such graphs (see for example [2], which discusses mixing times of the simple random walk on an Erdos-Renyi graph). This is one possible avenue which could be explored, although it would require generalizing the structure theorem of [1] to stochastic block models.
[1] Jian Ding, Eyal Lubetzky, Yuval Peres. Anatomy of the giant component: The strictly supercritical regime. European Journal of Combinatorics, Volume 35, 2014, pp. 155-168.
[2] Nathanaël Berestycki, Eyal Lubetzky, Yuval Peres and Allan Sly. Random Walks on the Random Graph. The Annals of Probability, Vol. 46, No. 1 (January 2018), pp. 456-490.
> Number of communities growing in $n$
Yes, this is the case. In Theorem 3 (and consequently Theorem 4, which gives the actual guarantees for community detection), the embedding dimension $d$ is set equal to the rank of the matrix $M^*$. It turns out that this typically equals the number of communities itself - this can be seen to be the case for the SBM($n$, $\kappa$, $\tilde{p}$, $\tilde{q}$, $\rho_n$) model described in the example below Theorem 2. As a result, from the rates we can take $\kappa = o(n \rho_n)$ and maintain consistency. (While this is clear under scenarios (i) and (iii) from the stated bound, it also holds in scenario (ii), as the $o_p(1)$ bound arises from a term which dominates the $(\max(\log n, d) / n \rho_n)^{1/2}$ term still present in this scenario.)
> Choice of Laplacian for Figure 3
For the results shown here we used the normalized Laplacian. Some quick experiments indicate that choosing the standard Laplacian or
different clustering methods (see rebuttal to znFf) would lead to the same results.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for answering my questions. The comment about extending the consistency result to a growing number of communities sounds interesting. I encourage the authors to elaborate on this in the revised version of the paper. I will keep my score unchanged. | null | null | Rebuttal 1:
Rebuttal: We would again like to thank each of the reviewers for their time and effort in reviewing our paper, and their thoughtful comments and feedback - particularly around typos and some parts of the paper which could be made clearer - which will help us to improve our paper. We have responded to the comments re: weaknesses and questions for each of the reviewers as a response to the individual reviews, and are happy to respond to any additional inquiries. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
DeiSAM: Segment Anything with Deictic Prompting | Accept (poster) | Summary: This paper introduces a new reasoning-based segmentation approach by adding deictic logic as prompts to SAM. Motivated by deictic logical analysis, the authors present DeiSAM, a combination of large pre-trained neural networks with differentiable logic reasoners for deictic promptable segmentation. DeiSAM first uses LLMs to generate first-order logic rules and then performs differentiable forward reasoning on generated scene graphs. In addition, the authors present the Deictic Visual Genome dataset. Experimental results show DeiSAM works better than pure data-driven baselines.
Strengths: 1. The overall writing is good and easy to follow. The motivation is easy to understand.
2. The performance on the proposed DeiVG benchmark looks good.
3. The idea of using first-order deictic prompting for segmentation is interesting.
Weaknesses: Although the idea of combining first-order deictic prompts with SAM is interesting, there are still a lot of limitations in this work.
1. The motivation is not new. Performing reasoning segmentation via LLMs is not new; several previous works explore the same setting [1]-[5]. The difference is that the authors adopt more visual cues (scene graphs) into the LLM before sending the language features to SAM.
[1], LISA: Reasoning Segmentation via Large Language Model, CVPR-2024.
[2], An Improved Baseline for Reasoning Segmentation with Large Language Model, arxiv-2023.
[3], GRES: Generalized Referring Expression Segmentation, CVPR-2023.
[4], Towards Robust Referring Image Segmentation, TIP-2023.
[5], See, Say, and Segment: Teaching LMMs to Overcome False Premises, CVPR-2024.
2. The technical novelties are limited. The proposed method just combines previous differentiable logic reasoners with a LISA-like segmenter (LLM + SAM) architecture. I find no extra insights or advantages for end-to-end reasoning learning. Moreover, the method needs extra scene graphs as inputs, which introduces more complexity to the pipeline.
3. Related work discussion: lots of works on referring segmentation [3]-[5] are missing.
4. Performance and benchmarks: only RefCOCO+ and the proposed benchmark results are reported. Several important benchmarks are missing (please refer to [1]). Moreover, the performance is not strong. The results are misleading since the authors omit many previous works needed for a fair comparison.
5. Missing detailed ablation studies for each component. It is hard to know the real benefits of the combined logical reasoner and the effects of the input scene graph.
Technical Quality: 2
Clarity: 3
Questions for Authors: See the weakness part.
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Yes, they have checklist.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the fruitful feedback and for acknowledging that the paper is well-written and that combining first-order logic with prompting is interesting.
We would like to address concerns next.
> Motivation is not new. Performing reasoning segmentation via LLM is not new. There are several previous works exploring the same setting [1]-[5].
We disagree with this. Our claim is to use something other than LLMs for reasoning, as they often hallucinate on complex prompts. Instead, we propose using logic for reasoning. We integrate LLMs and SAM with differentiable reasoners for neuro-symbolic inference on various deictic prompts. To the best of our knowledge, this approach has not been attempted in previous studies.
Most importantly, we demonstrated that LISA [1] significantly degrades its performance on abstract prompts in Tables 3 and 4, indicating a divergence from our problem setting. Our focus does not encompass referring expressions or commonsense reasoning, which differ from high-level abstract reasoning.
> The difference is that the authors adopt more visual cues (scene graphs) into the LLM before sending the language features to SAM.
Not quite. There is a factual error: we do not feed scene graphs to LLMs, nor do we utilize LLMs directly for reasoning. In DeiSAM, LLMs generate logic programs to reason with scene graphs on the abstract level.
> The technical novelties are limited. The proposed method just combines previous differentiable logic reasoners with LISA-like segmenter (LLM + SAM) architecture.
As Reviewer YJtK pointed out, our major findings are as follows:
- Existing strong baselines using transformers for reasoning perform poorly on the segmentation tasks with abstract complex prompts.
- This indicates that the current transformer-based models are limited in high-level abstract visual reasoning.
- DeiSAM is the first framework to successfully extend segmentation models using differentiable reasoning with scene graphs to gain reasoning ability over abstract representations.
> I find no extra insights or advantages for end-to-end reasoning learning.
We address the trade-off between logic-based reasoning and model adaptability: while logic provides faithful reasoning capabilities, it forces the model to be deterministic and unlearnable. For instance, employing off-the-shelf logic libraries such as Prolog ensures reasoning but impedes gradient-based learning due to their non-differentiable inference. The key insight of our approach is that it enables the segmentation model to function as both a faithful reasoner and an adaptive learner, as demonstrated in Table 5.
> Moreover, the method needs extra scene graphs as inputs, which introduce more complexities to the pipeline.
Not quite. There is indeed a trade-off between the complexity of the input and the reasoning models employed. Scene-graph representations make the reasoning module highly parameter-efficient. For instance, in DeiSAM, the forward reasoner with $N$ rules requires only $N$ parameters, with one weight per rule.
In contrast, purely neural models typically require a large number of parameters in the reasoning model (e.g., experiments in [5] employ LLaVA-v1.5-7B as the LLM backbone) to perform reasoning on pixels.
> 4, Performance and benchmarks. There are only refcoco+ and proposed benchmark results. Several important benchmarks are missing (Please refer to [1]).
In our manuscript, on lines 90-95, we explicitly discuss the reasons for not conducting experiments on the benchmark [1]. The suggested benchmark [1] primarily focuses on low-level commonsense reasoning, whereas our objective is to address high-level abstract reasoning segmentation. This rationale also extends to other referring-expression segmentation benchmarks.
> Moreover, the performance is not strong. This result is misleading since the authors miss lots of previous works for fair comparison.
We respectfully disagree with the suggestion that additional baselines are necessary to validate our experiments. We consistently used the LISA [1] model, a state-of-the-art segmentation method, as the baseline for our experiments. None of the works suggested by the reviewer specifically address high-level abstract reasoning. Therefore, we believe that incorporating these additional baselines is not essential to validate our experimental results.
> Missing detailed ablation studies for each component. It is hard to know the real benefits on the combined logical reasoner and effects of input scene graph.
We refer to the general remark. To answer this, we conducted additional experiments on abstract visual scenes with more complex prompts and demonstrated the gain by having the logical reasoner with scene graphs.
> Missing lots of works on referring segmentation [3]-[5].
We agree with the reviewer that papers [3]-[5] are relevant to our study. Here, we discuss these works and clarify our paper's contribution.
GRES [3] proposes two distinct attention mechanisms to address multi-target segmentation. R-RIS [4] proposes RefSegformer, a transformer-based model that includes a language encoder, a vision encoder, and an encoder-fusion meta-architecture for handling incorrectly described textual prompts. See, Say, and Segment [5] proposes SESAME, an LMM designed to "see" whether objects are present, "say" to interact with users, and "segment" target objects.
All these methods rely on transformers (or attentions) as their core reasoning pipeline, and they would inherit the reasoning limitations inherent to purely neural models. In contrast, DeiSAM explicitly encodes logical reasoning processes to guarantee accurate and faithful interpretation of abstract and complex prompts. We will include these discussions in the final version.
Thank you once again for your valuable feedback. We hope our response adequately addresses your concerns, and we would be pleased to answer any further questions you may have.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer,
We hope we have answered all your questions and resolved the outstanding concerns. As the discussion phase is coming to an end, we would appreciate if the reviewer engages in a discussion so that we can answer any further concerns, if any.
Regards,
Authors
---
Rebuttal 2:
Title: Reviewer 9P1c: please respond to the authors' rebuttal
Comment: Dear reviewer,
thanks for your participation in the NeurIPS peer review process. We are waiting for your response to the rebuttal. You gave a reject rating (3), with 5 weaknesses.
Is the response from authors satisfactory?
- If yes, are you planning to increase your score?
- If no, could you help the authors understand how they can improve their paper in future versions?
Thanks,
AC | Summary: This paper proposes DeiSAM to enhance the ability of current state-of-the-art referring segmentation frameworks with deictic prompting. DeiSAM is an innovative approach that combines large pre-trained neural networks with differentiable logic reasoners to perform image segmentation based on complex, deictic textual prompts. The method leverages Large Language Models (LLMs) to generate first-order logic rules from textual descriptions and uses differentiable reasoning on scene graphs to identify and segment objects in images accordingly. The paper also introduces the Deictic Visual Genome (DeiVG) dataset for evaluating deictic segmentation and demonstrates DeiSAM's superiority over neural baselines through empirical results.
Strengths: In general, this paper identifies an important problem in the intersection between referring image segmentation and scene graph generation, and proposes a simple yet effective solution to resolve it. The contribution is comprehensive, from my perspective, including the following aspects:
1. **Novelty and Innovation**: The paper proposes a novel framework that integrates Large Language Models (LLM) with visual logic reasoning for deictic image segmentation, addressing a gap in the field where current methods struggle with complex prompts. The symbolic rules designed in the pipeline look relevant to the visual programming concept, which is novel and technically elegant.
2. **Empirical Validation and Efficacy**: The authors introduce a new dataset, DeiVG, to test the capabilities of DeiSAM, and provide extensive experimental results that validate the effectiveness of their approach, showcasing clear improvements over existing baselines. Concretely, for DeiVG1, DeiVG2 and DeiVG3 respectively, the proposed method outperforms the previous state-of-the-art LISA by 50.24%, 29.37% and 12.04% in terms of Mean Average Precision.
3. **Simplicity and End-to-end Differentiable**: DeiSAM is both simple and effective, which makes it easy to follow. DeiSAM's modular architecture and the use of differentiable reasoning allow for end-to-end training, which is a significant advantage for adapting to complex downstream tasks and improving segmentation quality.
Weaknesses: 1.
The clarity of the paper could be further improved. Despite careful reading and familiarity with the field, several aspects remain confusing:
- **Training Process**
- **Unclear Tuning**: What components are tuned during the training stage?
- **Revision Recommendation**: Annotate which modules are tuned during training.
- **Scene Graph Generator**
- **Pre-training Limitations**: The Scene Graph Generator appears pre-trained on Visual Genome, which annotates only a limited set of objects (e.g., persons, boats, cars).
- **Generalizability Concerns**:
- Will the proposed method work on images containing objects not annotated in Visual Genome?
- If not, the title "Segment Anything with Deictic Prompting" may be an overstatement.
- **Semantic Unifier and Forward Reasoning**
- **Ambiguous Illustration**: The explanation of these components lacks clarity.
- What are the inputs and outputs of the Semantic Unifier?
- What constitutes the "background knowledge" $\mathcal{B}$ in forward reasoning?
- What is learned in the forward reasoning process?
2.
The proposed method should also be evaluated on other referring image segmentation benchmarks, such as RefCOCO, RefCOCOg, and RefCOCO+. This evaluation would help determine whether the proposed method is effective in general referring image segmentation settings, or if the integration of scene graph information potentially hinders performance in standard referring segmentation tasks.
Technical Quality: 4
Clarity: 4
Questions for Authors: All are listed in weaknesses. Please see above.
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors have adequately addressed the limitations in the submitted manuscript.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the fruitful feedback and for acknowledging that the proposed framework is novel and effective and the empirical results are valid.
We would like to address the concerns raised by the reviewer.
> The proposed method should also be evaluated on other referring image segmentation benchmarks, such as RefCOCO, RefCOCOg, and RefCOCO+
We already reported this in the paper. Table 3 shows the result on the RefCOCO+ benchmark. We observed that DeiSAM can handle the referring expression task, which is the common benchmark suggested by the reviewer.
Moreover, in Table 4, we demonstrated that neural baselines (LISA and GroundedSAM) significantly degraded their performance if we modified the textual input to the abstract form, e.g., “kid wearing navy shirt” is modified to “an object that is wearing navy shirt.” This indicates the lack of the ability to use abstract high-level reasoning in neural baselines.
> What components are tuned during the training stage? Annotate the tuned modules.
The rule weights in the differentiable forward reasoner. On line 321, we explicitly stated: “We minimize the binary cross entropy loss with respect to rule weights $w_1$ and $w_2$.”
To enhance clarity, we added annotations to the figure as the reviewer proposed. Thank you for the suggestion.
> Pre-training Limitations: The Scene Graph Generator appears pre-trained on Visual Genome, which annotates only a limited set of objects (e.g., persons, boats, cars).
We agree with the reviewer that good scene graph representations are the key to the proposed DeiSAM framework. Our finding is that if good scene graphs (generators) are available, the segmentation quality on abstract prompts can be much improved by combining logical reasoning.
As scene graphs can be noisy and low-quality, DeiSAM performs embedding-based vocabulary matching and learning scene graphs in combination. It is a vital open question to handle scene understanding for unseen data in a zero-shot manner.
> Will the proposed method work on images containing objects not annotated in Visual Genome?
Yes, it works. We conducted additional experiments using the CLEVR environment [1], which contains abstract 3D visual scenes largely different from Visual Genome. We show that DeiSAM can segment abstract 3D objects given more complex textual prompts.
Moreover, our result on the RefCOCO+ dataset also validates that it is not limited to only visual genome images.
[1] CLEVR: A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning. CVPR 2017
> If not, the title "Segment Anything with Deictic Prompting" may be an overstatement.
As the premise is false (we clarified above), we believe this is not an overstatement.
> What are the inputs and outputs of the Semantic Unifier?
The input is a scene graph and a set of rules generated by LLMs from textual prompts. If a term in the rules is absent in the scene graph, the semantic unifier investigates the most likely matching between these two representations using term embedding.
We have added these technical details to the final version of our manuscript. Thank you for pointing this out.
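As a rough illustration of the matching step (with made-up embedding values and vocabulary, not DeiSAM's learned term embeddings), the unifier can be sketched as a cosine-similarity lookup:

```python
import math

# Made-up 2-d term embeddings; DeiSAM uses learned, higher-dimensional ones.
emb = {"person": (0.9, 0.1), "man": (0.85, 0.2), "boat": (0.1, 0.9)}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u))
                  * math.sqrt(sum(b * b for b in v)))

def unify(term, scene_vocab):
    """Map a rule term absent from the scene graph to the closest scene-graph term."""
    return max(scene_vocab, key=lambda t: cosine(emb[term], emb[t]))

# "man" appears in the LLM-generated rule but not in the scene graph,
# so it is matched to the most similar scene-graph term.
print(unify("man", ["person", "boat"]))  # -> person
```

This embedding-based matching is what lets noisy or differently-worded scene graphs still be usable by the logic rules.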
> What constitutes the "background knowledge" in forward reasoning?
It is a set of rules and facts described in first-order logic. It is common for logical reasoners to accept background knowledge from users. For DeiSAM, we provided no background knowledge throughout the experiments.
> What is learned in the forward reasoning process?
In our experiments, rule weights are learned for composing a mixture of scene graphs to maximize segmentation performance.
To clarify, our forward reasoner implements forward-chaining reasoning in first-order logic, a method used to deduce all facts given known rules and known premises. This reasoning process is inherently deterministic. The differentiable version of the forward reasoner introduces weighted rules, which can be optimized through backpropagation. Our approach seamlessly integrates forward reasoning with neural modules, thereby enhancing the overall learning performance.
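As an illustration only (not DeiSAM's actual implementation), a differentiable forward-chaining step with weighted rules can be sketched as follows; the fact confidences, the two toy rules, and the soft-or combination are hypothetical stand-ins:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical scene-graph facts with confidences (not DeiSAM's actual format).
facts = {("on", "cup", "table"): 0.9, ("left_of", "cup", "plate"): 0.8}

# Two weighted rules deriving target(cup); the raw weights w are the only
# learnable parameters, one per rule, as described in the rebuttal.
w = [2.0, -1.0]

def forward(facts, w):
    # rule 1: target(X) :- on(X, table).
    r1 = sigmoid(w[0]) * facts[("on", "cup", "table")]
    # rule 2: target(X) :- left_of(X, plate).
    r2 = sigmoid(w[1]) * facts[("left_of", "cup", "plate")]
    # soft-or (probabilistic sum) combines the weighted rule outputs, so the
    # derived score is smooth and differentiable with respect to w.
    return 1.0 - (1.0 - r1) * (1.0 - r2)

score = forward(facts, w)
print(round(score, 3))  # 0.837
```

Because the derived score is a smooth function of the rule weights, gradients of a segmentation loss can flow back into `w`, which is what makes the reasoner trainable end-to-end, unlike a hard Prolog-style inference.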
Thank you once again for your valuable suggestions. We hope our response addresses your concerns, and we are happy to answer any further questions you may have.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer,
We hope we have answered all your questions and resolved the outstanding concerns. As the discussion phase is coming to an end, we would appreciate if the reviewer engages in a discussion so that we can answer any further concerns, if any.
Regards,
Authors
---
Rebuttal 2:
Title: Reviewer TxaQ: please respond to the authors' rebuttal
Comment: Dear reviewer,
thanks for your participation in the NeurIPS peer review process. You indicated that you are leaning towards accepting the paper. Is the response from authors satisfactory? Does it address weaknesses that you mentioned in the review?
- If yes, are you planning to increase your score?
- If no, could you help the authors understand how they can improve their paper in future versions?
Thanks,
AC | Summary: The paper proposes to study Deictic references/prompts, i.e., phrases/references that describe the role/purpose/context rather than naming the object directly. Firstly, the paper constructs a new dataset with deictic prompts based on Visual Genome; Next, it is shown that existing methods do poorly on this task. Next, a neuro-symbolic method that works on top of a generated scene graph is proposed which is shown to work a lot better on this newly constructed dataset as well as perform comparably on related tasks such as referring expression recognition.
[EDIT: Post rebuttal comments are useful but do not change overall score]
Strengths: **[S1] Interesting Findings:** While the setup is somewhat artificial/contrived (see W1), the results on existing methods are surprisingly poor for the proposed task. This is an interesting finding and might point to a bigger problem in the way "referring segmentation" is done currently. While there are existing datasets with referring expressions, I think the paper studies a more challenging setup which is harder to game so it should provide a clearer picture of true understanding capability of the models.
**[S2] Good clarity:** The presentation is clear and detailed. The experimental designs are meaningful and the results are clearly presented.
Weaknesses: **[W1] Somewhat contrived setup:** I fully agree with the big premise that deictic expressions are important and common in everyday usage. However, many examples from the constructed dataset are unnatural and forced. The paper also takes deictic phrases to an extreme by removing all nouns from the deictic expressions, using very generic prompts such as "stuff" or "object". It is more common for references to consist of categorical nouns rather than object names (e.g., "that toy on top of the chair" instead of "teddy bear"). So, while it is a useful diagnostic tool, I feel that the dataset is not broadly applicable. Similarly, since it is such an unnatural setup, it might also be unrealistically hard for general-purpose baseline methods.
**[W2] Reliance on Scene graphs:** The proposed work relies on access to scene graph prediction model outputs. This is not something that is available to the baselines and, in my opinion, does a lot of the heavy lifting. I believe this makes it hard to do a fair comparison with the baselines. One could even argue that if the scene graph is comprehensive and accurate, the **main** task in decoding deictic expressions is already done (though I admit that it is still non-trivial). Also see Q1.
**[W3] Minor issues with the dataset:** The dataset is divided based on how many "hops" of reasoning are necessary to decode the expression. E.g., "A tall object wearing a hat walking in sidewalk" is a 3-hop expression (tall, wearing a hat, walking on the sidewalk). However, unless there are counterexamples to each hop (e.g., other tall things, other stuff wearing the hat, ...), it degenerates into an expression with fewer hops. This means a shortcut such as "wearing a hat" alone might be enough to decode the object. I would like to see an analysis of this.
Technical Quality: 3
Clarity: 4
Questions for Authors: Q1. If scene graphs are assumed to be available, should they not be available in some form to every algorithm. E.g., if the object name is easily decodable from deictic expression by looking it up in the scene graph, can we not change the expression to include the name of the object as well? E.g., "A red furry object" --> Select all objects that are red and furry from scene graph --> Grab the names of those objects -> use the name instead of deictic expression when prompting SEEM or LISA.
Q2. What is the effect of the quality of scene graphs on the final results of the proposed method? Is it resistant to mistakes in the SGG prediction?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed and fruitful feedback and for acknowledging that our findings are insightful, the experimental setups are meaningful, and the paper is well-written.
We address the concerns next.
> W1: many examples from the constructed dataset are unnatural by removing all nouns from the deictic expressions with "stuff" or "object".
> It is more common for references to consist of categorial nouns instead of object names (E.g., that toy on top of the chair) instead of teddy bear
We agree with this. Using categorical names (e.g., toy, food, device) would be very promising for covering more real-world applications of referring expressions. This could be achieved by using an LLM during dataset generation to convert each specific name to a categorical name.
However, we do not conduct experiments on this, as our primary focus is abstract reasoning, in which object and categorical names are absent from the prompt, i.e., the models need to deduce the object name from a given relational specification.
To mitigate this, we reported in Table 10 in the Appendix the performance obtained by making the DeiVG datasets more referring-expression style, e.g., providing "Person on the boat" instead of "An object on the boat". The results indicate that neural baselines gain a lot by having object names. We expect that if we provided categorical names in the prompt, the neural baselines would gain some performance, but not as much as with object names.
> W2: Reliance on Scene graphs: The proposed work relies on access to scene graph prediction model outputs.
We answer this later for Q1.
> W3: Shortcuts in the DeiVG dataset. E.g., "wearing a hat" alone might be enough to decode the object given the prompt: tall, wearing a hat, walking on the sidewalk.
We agree with the reviewer that this is often the case in our dataset. We have analyzed how often it happens in the generated DeiVG dataset.
For example, with a prompt “An object wearing a hat walking on the sidewalk", we consider the following variations of ways to match objects:
- Relation: We consider only relations to detect matching between objects in the scene, i.e., "wearing" or "walking on," disregarding the object names.
- Object: We consider only objects to detect matching between objects in the scene, i.e., "hat" or "sidewalk," disregarding the relations.
- Either: We consider either a partial match from Relation or Object.
- Complete: We consider complete counterexamples, i.e., "wearing hat" or "walking on the sidewalk," which is exactly what the reviewer articulated.
The table below shows the proportion of prompts involved with such shortcuts.
| Proportion (%) | Relation | Object | Either | Complete |
|----------------|----------|--------|--------|----------|
| DeiVG$_3$ | 23.01 | 46.00 | 17.35 | 60.56 |
| DeiVG$_2$ | 23.39 | 48.22 | 18.13 | 58.65 |
The results indicate that such shortcuts often occur in the dataset: in DeiVG$_3$, about 60% of the prompts involve such a shortcut, i.e., no other object in the scene shares the same 3 relations. We have included this result in the appendix of our manuscript. Thank you for the suggestion to improve the paper.
Moreover, to explore more cases with similar objects without shortcuts, we conducted an additional experiment on an abstract 3D environment with more complex prompts. For more details, refer to the general remark.
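The shortcut check described above can be sketched as follows; the toy scene-graph triples and the `matches` helper are hypothetical illustrations, not the actual analysis code:

```python
# Hypothetical scene graph as (subject, relation, object) triples.
triples = [
    ("person1", "wearing", "hat"),
    ("person1", "walking_on", "sidewalk"),
    ("person2", "walking_on", "sidewalk"),
]

def matches(constraints, triples):
    """Return the subjects satisfying every (relation, object) constraint."""
    subjects = {s for s, _, _ in triples}
    for rel, obj in constraints:
        subjects &= {s for s, r, o in triples if r == rel and o == obj}
    return subjects

full_prompt = [("wearing", "hat"), ("walking_on", "sidewalk")]
# A single constraint is a shortcut iff it already pins down the same
# unique target as the full multi-hop prompt.
shortcut = matches([("wearing", "hat")], triples) == matches(full_prompt, triples)
print(shortcut)  # True: "wearing hat" alone identifies person1
```

Running such a check over every prompt-scene pair, with the relation-only, object-only, and complete-counterexample variants, yields proportions like those in the table above.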
> Q1: Can we use scene graphs for other baselines? E.g., "A red furry object" --> Select all objects that are red and furry from scene graph --> Grab the names of those objects -> use the name instead of deictic expression when prompting SEEM or LISA.
This approach would not be fair. While it is an interesting suggestion to use scene graphs for the neural baselines, the suggested approach involves providing an external program for preprocessing the scene graphs so that neural models can process them. This would result in a composite model consisting of a scene graph preprocessor and a separate large segmentation model. Such an approach severely limits further extensions aimed at enhancing scene understanding and reasoning segmentation, because it is not straightforward to propagate segmentation errors back to improve scene graph generation.
In contrast, DeiSAM treats scene graphs as an internal primal representation. It performs segmentation through differentiable reasoning, allowing it to backpropagate errors to the scene graphs. This capability is crucial for enabling the model to learn to segment more accurately from data and to generate explanations using gradients, indicating which parts of the scene were essential for the segmentation.
An alternative comparison could be to encode the scene graphs as text and provide them to the neural baselines.
> Q2: What is the effect of the quality of scene graphs on the final results of the proposed method. Is it resitant to mistakes in the SGG prediction?
It significantly impacts the segmentation quality, but DeiSAM demonstrates resilience to noise in SGGs. If the quality of scene graphs is extremely low, the resulting segmentation quality will also be low, as the reasoning process relies on the scene graph representation. However, DeiSAM manages noise in scene graphs through embedding-based vocabulary matching and by learning a mixture of scene graphs.
Thank you for your valuable suggestions. We hope our response satisfactorily addresses your concerns, and we are happy to answer any additional questions you may have.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer,
We hope we have answered all your questions and resolved the outstanding concerns. As the discussion phase is coming to an end, we would appreciate it if the reviewer engaged in a discussion so that we can answer any further concerns.
Regards,
Authors
---
Rebuttal Comment 1.2:
Title: Post-rebuttal comments
Comment: I have read the responses to my comments as well as all other reviews and their responses carefully. Like me, others also had concerns about the use of scene graphs and the limited vocabulary caused by using a COCO-based scene graph. There hasn't been a very satisfactory response about this fact.
The second point that I raised was about the "unfair" use of extra information in the form of a scene graph. Unfortunately, that also hasn't been very satisfactory.
> This approach would not be fair. While it is an interesting suggestion to use scene graphs for neural baselines, the proposed method involves providing an external program for preprocessing the scene graphs so that neural models can process them. This would result in a composite model consisting of a scene graph preprocessor and a separate large segmentation model.
I disagree. A scene graph representation in the form of scene graph triplets can be natively processed by LLMs, and even better with a few in-context examples. Even if a "program" were needed, it could be a very simple one based on semantic parsers (which can be inaccurate but would still provide a sort of equalization of available information).
> Such an approach severely limits further extensions aimed at enhancing scene understanding and reasoning segmentation because it is not straightforward to propagate segmentation errors back to improve scene graph generation.
Yes, it would, but that is fine and would only serve to differentiate the submitted work from the above approach, which is a strength. We would still have liked to see what the results are compared to a hypothetical naive model that gets the equivalent amount of information.
However, both those points were originally included in my first round review so the scores remain unchanged.
---
Reply to Comment 1.2.1:
Title: Thank you for your response
Comment: Thank you for your response. We agree with the reviewer that the experimental setup suggested in the response for the LLM baselines would work as another naive baseline, on top of the LLM baselines presented in the paper. We will run the experiments with the suggested setup and include the results in the final version of our manuscript. Thank you again for your insightful suggestions to improve the paper.
---
Rebuttal 2:
Title: Reviewer YJtK: please respond to the authors' rebuttal
Comment: Dear reviewer,
thanks for your participation in the NeurIPS peer review process. You indicated that you are leaning towards accepting the paper. Is the response from authors satisfactory? Does it address weaknesses that you mentioned in the review?
- If yes, are you planning to increase your score?
- If no, could you help the authors understand how they can improve their paper in future versions?
Thanks,
AC | Summary: The manuscript proposes DeiSAM, a framework that integrates large pre-trained neural networks with differentiable logic reasoners to address deictic promptable segmentation.
DeiSAM leverages large language models (LLMs) to generate first-order logic rules and performs forward reasoning on generated scene graphs for object segmentation based on complex textual prompts.
The Deictic Visual Genome (DeiVG) dataset is introduced for evaluation.
Strengths: 1. The paper is well-structured and easy to read.
2. The integration of LLMs with differentiable logic reasoners for deictic prompt-based segmentation is a novel approach, addressing limitations in neural network-based segmentation models.
Weaknesses: 1. Over-simplified Assumptions: The curated datasets DeiVG1, DeiVG2, and DeiVG3 contain prompts using only 1-3 relations and are restricted to avoid multiple objects with ambiguity. However, in real-world applications, object and relation categories can be varied and rich, with inevitable visual-language alignment gaps. For example, in an image with a black cat and a white cat, how would the method handle a prompt to segment the white cat using relative descriptions? Moreover, the dataset size is small (10k), making large models prone to overfitting.
2. Dependence on Scene Graph Quality: The method heavily relies on the quality of scene graphs. Inaccurate scene graphs can lead to incorrect segmentation results, which is a critical limitation for practical applications. For instance, the large performance gap indicated in Table 5 highlights this dependency.
3. Unclear Philosophy: The use of a semantic unifier (an LLM) to associate objects in the prompt and scene graph seems trivial. The rationale for using a differentiable forward reasoner instead of continuing to use an LLM is not well-justified. The necessity and advantage of having a "differentiable" reasoner are unclear. How does this reasoner fundamentally differ from previous neural-based methods like Grounding DINO or LISA?
4. Insufficient Experiments: The experiments are limited to relatively easy cases for prompt-based segmentation on a small-scale image dataset. The paper lacks evaluations on more challenging scenarios, such as multiple objects of the same category or prompts with more complicated relations. Comparisons with previous methods under these conditions would provide a more comprehensive evaluation of DeiSAM’s effectiveness.
5. Unclear Contributions: Modules of DeiSAM are borrowed from other existing works, and there is no novel architecture designed for the prompt-based segmentation. It appears more like a technique report for an engineering project instead of being a research paper suitable for a top-tier conference like NeurIPS.
Technical Quality: 3
Clarity: 3
Questions for Authors: Detailed questions are referred to as Weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have discussed the limitations of the work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful comment and for acknowledging that the paper addresses limitations in neural segmentation models and is well-structured and easy to read.
We address the concerns next.
> Over-simplified Assumptions: How would the method handle a prompt to segment the white cat using relative descriptions?
Our method can handle such cases. Let us describe how to do that.
Given prompt: “Segment a white cat”, an LLM can generate rules in our format:
```
cond1(X):-type(X,Y),type(Y,white_cat).
target(X):-cond1(X).
```
Then, the semantic unifier will consider the embedding of the term `white_cat` and match it to an entity in the scene graph representing a white cat. The embedding-based semantic unification can address the visual-language alignment. If the scene graph misses entities of a white cat and a black cat in the first place, it would be hard for DeiSAM to perform reasoning correctly. A crucial unresolved issue is how to generate scene graphs in a zero-shot manner, akin to large visual-language models.
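For intuition, the embedding-based matching described above might look like the following toy sketch (the 3-d vectors and the `unify` helper are illustrative assumptions, not DeiSAM's actual implementation):

```python
import math

def cosine(u, v):
    # Cosine similarity between two dense vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def unify(term_vec, scene_entities):
    """Match a rule term to the scene-graph entity with the closest embedding."""
    return max(scene_entities, key=lambda name: cosine(term_vec, scene_entities[name]))

# Toy 3-d embeddings (hypothetical values, for illustration only).
scene = {
    "white_cat": [0.9, 0.1, 0.2],
    "black_cat": [0.1, 0.9, 0.2],
    "sofa":      [0.0, 0.1, 0.9],
}
prompt_term = [0.85, 0.15, 0.25]  # embedding of the term `white_cat` from the rule

print(unify(prompt_term, scene))  # white_cat
```

Because matching is done in embedding space rather than by exact string lookup, small naming mismatches between prompt vocabulary and scene-graph vocabulary can still be bridged.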
> Moreover, the dataset size is small (10k), making large models prone to overfitting.
The DeiVG dataset is proposed to evaluate segmentation models, and thus we believe that the dataset with 10k examples is not extremely small for this usage.
> Inaccurate scene graphs can lead to incorrect segmentation results, which is a critical limitation for practical applications.
We argue that segmentation models must understand the scenes to be faithful reasoners. Without this assumption, models can learn to *shortcut*, i.e., learn to pretend to answer via an incorrect scene understanding and reasoning [1]. This does not essentially solve the complex reasoning prompts that we aim to address in the paper.
[1] Not All Neuro-Symbolic Concepts Are Created Equal: Analysis and Mitigation of Reasoning Shortcuts, NeurIPS 2023
> the large performance gap indicated in Table 5 highlights this dependency.
We agree with this. The quality of scene graphs significantly impacts the segmentation quality, but DeiSAM demonstrates resilience to noise in scene graphs. If the quality of scene graphs is extremely low, the resulting segmentation quality will also be low, as the reasoning process relies on the scene graph representation. However, DeiSAM manages noise in scene graphs through embedding-based vocabulary matching and by learning a mixture of scene graphs, as demonstrated in Table 5.
> Unclear Philosophy: The rationale for using a differentiable forward reasoner instead of continuing to use an LLM is not well-justified.
The primary advantage is that segmentation models transform into reliable reasoners. LLMs often hallucinate on complex abstract prompts and scenes. To demonstrate this, we conducted additional experiments on 3D visual scenes with more complex prompts. For more details, refer to the general remark.
> The necessity and advantage of having a "differentiable" reasoner are unclear. How does this reasoner fundamentally differ from previous neural-based methods like Grounding DINO or LISA?
The advantage of the resulting model lies in its ability to learn through gradients, thereby enhancing its performance. In contrast, using an off-the-shelf discrete logic system (e.g., Prolog) for a segmentation model would render the entire system static by blocking the gradient flow. This static nature would severely limit its applicability, as it would necessitate expert intervention to rewrite rules whenever they are imperfect. To reconcile explicit logical reasoning with gradient-based learning, we employ a differentiable logic reasoner. As illustrated in Table 5, DeiSAM obviates the need for expert efforts to improve segmentation performance.
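To make the contrast with a discrete logic system concrete, here is a minimal, hypothetical sketch (not DeiSAM's actual reasoner) of how fuzzy-logic operators keep the forward pass smooth, so that errors can in principle flow back through rule evaluation by gradients:

```python
def soft_and(a, b):
    # Product t-norm: a differentiable analogue of logical AND.
    return a * b

def soft_or(a, b):
    # Probabilistic sum: a differentiable analogue of logical OR.
    return a + b - a * b

# Soft truth values for ground atoms, e.g. produced by a scene-graph model.
on_table = 0.9   # on(cup, table)
is_cup   = 0.6   # type(cup, cup)

# Rule: target(X) :- type(X, cup), on(X, table). Evaluated softly:
target = soft_and(is_cup, on_table)  # approximately 0.54

# Because the forward pass is a smooth function, segmentation errors can be
# backpropagated: d(target)/d(is_cup) = on_table under the product t-norm,
# whereas a discrete Prolog-style reasoner would block this gradient flow.
grad_is_cup = on_table
```

A crisp reasoner outputs only True/False, so there is nothing to differentiate; the soft valuation is what lets the overall system learn (e.g., reweight a mixture of scene graphs) from data.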
> The paper lacks evaluations on more challenging scenarios, such as multiple objects of the same category or prompts with more complicated relations.
Refer to the general remark.
> There is no novel architecture designed for the prompt-based segmentation. It appears more like a technique report for an engineering project instead of being a research paper suitable for a top-tier conference like NeurIPS.
We disagree with the premise. DeiSAM addresses segmentation tasks involving abstract, complex prompts by integrating logic reasoners. Experimental results substantiate its effectiveness, demonstrating that it outperforms strong neural baselines.
We believe that utilizing existing modules and a well-defined computational architecture should not be viewed as a disadvantage. We contend that the criteria for acceptance at a conference should be based on the overall contribution of the paper rather than solely on the introduction of a "novel architecture."
Thank you for your insightful suggestions. We hope our response effectively addresses your concerns, and we are happy to answer any further questions you may have.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer,
We hope we have answered all your questions and resolved the outstanding concerns. As the discussion phase is coming to an end, we would appreciate if the reviewer engages in a discussion so that we can answer any further concerns, if any.
Regards,
Authors
---
Rebuttal 2:
Title: Reviewer WXVv: please respond to the authors' rebuttal
Comment: Dear reviewer,
thanks for your participation in the NeurIPS peer review process. We are waiting for your response to the rebuttal. You gave a borderline rating (4). Is the response from authors satisfactory? Does it address weaknesses that you mentioned in the review?
- If yes, are you planning to increase your score?
- If no, could you help the authors understand how they can improve their paper in future versions?
Thanks,
AC | Rebuttal 1:
Rebuttal: We thank the reviewers for their thoughtful feedback and insights on the paper. Here, we would like to address concerns shared by several reviewers.
Reviewer WXVv and Reviewer 9P1c:
> What is the benefit of integrating (differentiable) logical reasoners into segmentation models?
The primary advantage is that segmentation models transform into reliable reasoners. It is well-documented that large language models (LLMs) often underperform on complex reasoning tasks [1,2]. Consequently, depending on LLMs within a segmentation model introduces an inherent bottleneck for abstract reasoning capabilities. This paper aims to propose a segmentation model capable of addressing abstract reasoning. Crucially, this approach is distinct from solving referential tasks, where abstract representations and complex reasoning do not play a central role.
[1] Large language models can be easily distracted by irrelevant context. ICML 2023
[2] Large language models cannot self-correct reasoning yet. ICLR 2024
This argument responds to a comment from Reviewer WXVv:
> The paper lacks evaluations on more challenging scenarios, such as multiple objects of the same category or prompts with more complicated relations.
Reviewer 9P1c:
> It is hard to know the real benefits on the combined logical reasoner and scene graphs.
To answer this, we conducted additional experiments using the CLEVR environment, wherein multiple abstract objects (e.g., a large cube and a small sphere) are presented in 3D scenes. Our objective is to demonstrate that DeiSAM can handle such abstract objects with complex prompts while neural baselines frequently struggle to reason with them.
**Task.** The task is to segment objects given prompts whose answers are derived by reasoning over abstract list operations. We consider 2 operations: delete and sort. The input is a pair of an image and a prompt, e.g. “Segment the second left-most object after deleting a gray object?”. Examples are shown in Fig. 1 in the attached PDF. To solve this task, models need to understand the visual scenes and perform high-level abstract reasoning to segment.
**Dataset.** **(Image)** We generated visual scenes using the CLEVR environment [1]. Each visual scene contains at most 3 objects with different attributes: (i) colors of cyan, gray, red, and yellow, (ii) shapes of sphere, cube, and cylinder, (iii) materials of metal and matte. We excluded color duplications within a single image. **(Prompts)** We generated prompts using a template: “The [Position] object after [Operation]?”, where [Position] can be the first, second, or third left-most. [Operation] can be either of: (I) delete an object, or (II) sort the objects in the order cyan < gray < red < yellow (alphabetical order). We generated 10k examples for each operation.
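The templated prompt generation could be enumerated roughly as follows (the helper names and exact prompt strings are our illustrative assumptions; only the template structure comes from the description above):

```python
import itertools

positions = ["first", "second", "third"]
deletable_colors = ["cyan", "gray", "red", "yellow"]

def delete_prompts():
    # Operation (I): delete an object of a given color, then index from the left.
    for pos, color in itertools.product(positions, deletable_colors):
        yield f"Segment the {pos} left-most object after deleting a {color} object."

def sort_prompts():
    # Operation (II): sort objects by color (cyan < gray < red < yellow), then index.
    for pos in positions:
        yield f"Segment the {pos} left-most object after sorting the objects by color."

prompts = list(delete_prompts()) + list(sort_prompts())
print(len(prompts))  # 15 distinct prompt strings before pairing with images
```

Each prompt string would then be paired with many rendered CLEVR scenes to reach the reported 10k examples per operation.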
**Models.** We employed DeiSAM with pre-trained Slot Attention [2] to perceive objects. DeiSAM initially learns abstract list operations from the visual Inductive Logic Programming (ILP) dataset [3], which includes positive and negative 3D visual scenes for each list operation, thereby deriving rules to perform these operations. These rules are then applied to segment objects, augmented by those generated from textual prompts. We used GroundedSAM and LISA for neural baselines. Hyperparameters are described in Section E in the Appendix.
**Results.**
The table below shows mAP for each baseline. The result indicates that segmentation models relying on neural reasoning pipelines fail to deduce segmentations for abstract reasoning prompts, while DeiSAM effectively identifies and segments the object specified by the prompt.
| mAP ( $\uparrow$) | Delete | Sort |
|---------------|---------|--------|
| DeiSAM | 99.29 | 99.57 |
| GroundedSAM | 7.6 | 15.39 |
| LISA | 12.88 | 11.15 |
Moreover, Fig. 1 in the attached PDF shows qualitative results. DeiSAM successfully segments objects with high-level reasoning, but neural baselines often failed to identify the correct target object.
We will add these results to the final version of the paper.
Most importantly, our primary goal is not to outperform neural baselines across all types of reasoning tasks. Instead, our aim is to demonstrate that neural baselines are insufficient for solving abstract reasoning prompts and to enhance their capabilities by integrating differentiable logic reasoners.
[1] CLEVR: A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning. CVPR 2017
[2] Object-Centric Learning with Slot Attention. NeurIPS 2020
[3] Learning Differentiable Logic Programs for Abstract Visual Reasoning. Mach. Learn., 2024
Pdf: /pdf/1d2432355020c46d0c47ed43f960722ea1250baf.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Axioms for AI Alignment from Human Feedback | Accept (spotlight) | Summary: Authors argue that for RLHF, the preferences are pairwise and we need to train a model that respects the preferences in aggregate, which is in the scope of social choice theory. Then they evaluate different aggregation methods on whether they respect well-established axioms. They showed that the popular BT (Bradley-Terry) model does not respect some axioms and came up with novel rules for reward learning.
Strengths: - principled approach to model preferences from population, rooted in social choice theory
- relevant topic for this conference
- initiated the field to approach RLHF reward modeling with axiomatic guidance from social choice theory
Weaknesses: - only theoretical contributions, we do not know how the theory translates to in practice, even in toy settings
Technical Quality: 3
Clarity: 3
Questions for Authors: - could you cite the places where you obtained the axioms of social choice theory?
- since you did not include where these axioms are from, are you missing any investigations of other axioms? If so, why did you choose the ones you investigated and why did you leave the others out?
Confidence: 1
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: written in the discussion section
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Could you cite the places where you obtained the axioms of social choice theory?
We consider canonical axioms from the social choice literature. See, for instance, "The Handbook of Computational Social Choice," Chapter 2. We are happy to add further references in a revision.
> Since you did not include where these axioms are from, are you missing any investigations of other axioms? If so, why did you choose the ones you investigated and why did you leave the others out?
Indeed, there are a great number of axioms in the social choice literature one could consider. Our choice of PMC and PO was due to these serving as very natural starting points for this kind of investigation. In particular, Pareto optimality is viewed as one of the most fundamental properties expected of social choice rules, and is satisfied by practically all reasonable social choice rules, whereas PMC relates to Condorcet consistency, one of the most extensively studied axioms. We also considered several others (see some discussion of these in Appendix B). In any case, we believe that an extensive study of other social choice axioms is an important subject for follow-up work.
---
Rebuttal Comment 1.1:
Comment: The authors have adequately addressed all of my concerns. I will keep my original positive score. | Summary: Recent months have seen a flood of concurrent papers studying the relationship between RLHF, preference aggregation, and social choice theory. This paper joins these lines of work and studies how to aggregate diverse human preferences (in the context of RLHF) that are modeled as a random utility model (e..g., BTL). The authors adopt a social choice perspective and show that the BTL model (and similar ones) fail to satisfy basic axioms known from social choice theory. The paper then also proposes a leximax Copeland (subject to PO) rule that satisfies desirable properties such as pareto optimality and majority consistency.
Strengths: - The paper is very well-written.
- The studied problem is relevant and timely.
- The authors do a good job explaining concepts from social choice theory so that the paper is also easy to read for researchers who don't have a background in computational social choice.
- While unsurprising, the main theoretical result that linear aggregation (using a linearly parameterized reward function) is insufficient to achieve basic axioms of "fair" and reasonable preference aggregation is useful and interesting.
Weaknesses: - The assumption that for each voter the complete ranking is available is obviously unrealistic, which limits the applicability of the proposed leximax Copeland rule.
- I believe that this paper's practical relevance is quite limited. It is unclear how leximax Copeland could be implemented in practice. The sampling strategy over voters and candidates (prompt/responses) is far from obvious and not discussed in the paper. Any additional comments about how such a sampling strategy would look like would be appreciated.
- In contrast to the various related works on RLHF + social choice, which provide algorithms/solutions to augment the traditional RLHF framework, this work does not provide a tractable approach to address the problem of aggregating preferences in a reasonable way.
Technical Quality: 3
Clarity: 3
Questions for Authors: - What about non-linear reward function classes?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors address the limitations of the work adequately in Section 5.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > The sampling strategy over voters and candidates (prompt/responses) is far from obvious and not discussed in the paper. Any additional comments about how such a sampling strategy would look like would be appreciated.
We believe you are referring to this passage in the discussion: "However, the complete rankings are not necessary for computing this rule, rather, all we need to know are PO dominance relationships and majority directions. We can therefore apply the rule whenever we can determine this information at least approximately through, e.g., sampling."
We had the following model in mind: Each participant is shown a certain number of randomly selected pairs of candidates. We should be able to estimate all pairwise margins within a small error as long as each pair is compared at least $\Omega(\log m)$ times in expectation.
The only information needed to compute LCPO is for each pair of candidates in the dataset (i) which a majority of voters prefer and (ii) whether all voters have this preference. Determining these exactly is impossible with only approximate pairwise margins. However, running LCPO on the estimated margins would get an "approximate" LCPO winner, in the sense that the output ranking would be the winner on a profile differing on only a small fraction of voters.
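The sampling scheme sketched above could be prototyped roughly as follows (a toy illustration in which full rankings stand in for elicited comparisons; every name is a hypothetical assumption, not the paper's code):

```python
import random

def estimate_majority_directions(voters, candidates, samples_per_pair=50, seed=0):
    """For each candidate pair, estimate which side a majority prefers by
    querying randomly drawn voters on that pair.

    `voters` maps a voter id to a full ranking (candidate -> rank); in
    practice only the sampled pairwise comparisons would ever be elicited.
    """
    rng = random.Random(seed)
    directions = {}
    for i, a in enumerate(candidates):
        for b in candidates[i + 1:]:
            wins_a = 0
            for _ in range(samples_per_pair):
                v = rng.choice(list(voters))
                if voters[v][a] < voters[v][b]:  # lower rank = preferred
                    wins_a += 1
            directions[(a, b)] = a if wins_a * 2 > samples_per_pair else b
    return directions

# Toy profile: 3 voters ranking candidates x, y, z.
profile = {
    "v1": {"x": 0, "y": 1, "z": 2},
    "v2": {"x": 0, "y": 2, "z": 1},
    "v3": {"x": 1, "y": 0, "z": 2},
}
print(estimate_majority_directions(profile, ["x", "y", "z"]))
```

With each pair compared a logarithmic number of times in expectation, the estimated margins concentrate around the true ones, which is all an approximate LCPO winner needs.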
> What about non-linear reward function classes?
This is a great question. We first note that linear reward models are more general than they may first appear, as the "feature vectors" can be output embeddings of a pretrained LLM (and we just learn a linear layer of a reward model from pairwise comparisons). Consequently, even a linear model can in principle encode highly non-linear characteristics of the raw input (e.g., text).
Nonetheless, expanding beyond linear reward functions is an important direction for future work. A significant challenge is that once we consider nonconvex model classes (e.g., even small multi-layer perceptrons), they seem difficult to analyze theoretically, as we run into issues of local optima and the computational intractability of computing globally optimal solutions.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. I'm raising my score to a 6. I think, at least in terms of numerical score, my original assessment was slightly too harsh.
I guess my main concern is still that there are so many concurrent and earlier papers addressing the same problem, which somewhat dillutes the contributions of this work. Nevertheless, this is solid, well-presented work, which I'm in favor of accepting. | Summary: The paper presents a axiomatic social choice framework for the problem of doing RLHF on group preferences. It shows that classical RLHF (and indeed a wider class of similar methods) violates the Pareto Optimality and Pairwise Majority Consistency axioms, and shows (via explicit construction) that there are mechanisms that satisfy both axioms along with two other axioms. (To simplify the model, the space of preferences is assumed to be implementable via linear classification.)
Strengths: **High importance and novelty**
- LLMs, and also AI systems in general, are expected to have large societal impact now and in the future. In light of this expectation, how to make sure model policies fairly represent the welfare of all stakeholders is important. This question, to my knowledge, has not received sufficient analysis except in this work and some of its concurrent works.
**Flawless construction of the theoretical framework**
- The theoretical setup seems flawless, covering exactly all the fundamental components of the problem, and in a very elegant manner.
Weaknesses: I have some suggestions for improvement/future work, but I don't feel like they count as "weaknesses" *per se*, since those are very high standards which I don't feel like a normal conference paper would be held up to. I am instead putting those suggestions in the limitations section.
Also, please note that I did not check the proofs.
Technical Quality: 4
Clarity: 3
Questions for Authors: - Do you think the PO and PMC axioms (possibly along with majority consistency, winner monotonicity etc.) are in any sense the "gold standards"? Could there be other similarly reasonable axioms that are contradictory with PO/PMC? I'm asking this because I have the intuition (which could be wrong) that an Arrow-like impossibility result would also haunt the linear social choice setting; elaboration in the limitations section.
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: I highly appreciate the detailed discussion of limitations and future directions in the Discussion section of the paper, and I agree with most of the points there. I have the following two additional remarks, which are meant not as critiques but as suggestions for exploration.
1. **(Non)existence of a gold standard in linear social choice**
- Let's look at Theorem 3.1 first. If the RLHF method decides to overturn a perfect consensus on one pair with a tiny margin in order to much more significantly reduce the loss on a divisive pair (which is possible due to strict convexity), this actually seems desirable, despite violating PO.
- One could counter that whether the margin is small or large does not matter when what we care about the outcome is only the ordering. However, the exact margins (<> the exact reward values) do matter a lot in practice, where they indirectly (in RLHF) or directly (in DPO) determines what probabilities to assign to each response (as opposed to merely a ranking relation between the responses).
- In general, I suspect there is some arrow-like impossibility result in this space, where all the desirable properties just cannot be met at the same time. It's unclear which among the conflicting properties (somewhere among them are PO and PMC, and somewhere else is "prioritize preference violations with larger margins", with many others) are the most important.
- This intuition seems to be confirmed by the LCPO construction, which introduces rather arbitrary requirements (namely hard-coding the PO rule into the LCPO algorithm) in order to satisfy PO.
2. **Human evaluations and human subject experiments**
- I suspect that human evaluation (e.g. letting human subjects judge the fairness of preference aggregation outcomes) could be an equally, if not more, important criteria for preference aggregation mechanisms than formal axioms are, given that (1) it's unclear which formal principles are more important than others and (2) these principles could be in conflict with each other (as in Arrow's theorem).
- This could be used to evaluate both aggregation mechanisms (does the outcomes align with human judgment?) and axioms (is human judgment in line with this axiom in general? If not, why, and who is right?).
- Here, the line between computer science and cognitive science seems to be dissolving.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your discussion of limitations. You raise excellent points on the challenges here (and some relate to discussions we have had amongst ourselves!). With regard to your direct questions:
> Do you think the PO and PMC axioms (possibly along with majority consistency, winner monotonicity etc.) are in any sense the "gold standards"?
We view both PO and PMC as relatively minimal requirements, more so than majority consistency and winner monotonicity. For example, PO (at least in traditional social choice) is a basic property that is satisfied by essentially all natural rules. Nevertheless, we agree that many other axioms can be considered, and it is indeed not self-evident that enforcing PO outweighs the tradeoff of not respecting large margins. We do not view our axioms and results as the final word. Rather, we hope the paper offers a starting point for having such conversations.
> Could there be other similarly reasonable axioms that are contradictory with PO/PMC?
This seems quite plausible. We were quite surprised by the fact that PO and C1 are incompatible given how easy they are to achieve in traditional social choice. It does suggest that there may be other strong impossibilities yet to be discovered.
---
Rebuttal 2:
Comment: I appreciate the authors' response. The authors' replies to my questions are reasonable, and I strongly encourage the authors to include these discussions in the paper. I will leave my original score unchanged. | Summary: The paper proposes an axiomatic approach to study preference aggregation for AI alignment. Inspired by works in social choice theory, the authors investigate a paradigm that they call linear social choice where preferences are representable by a linear model. In this context, they notably prove that if the linear rank aggregation rule minimizes some natural loss function (like the one induced by the Bradley-Terry model), then it cannot satisfy Pareto optimality. To circumvent this issue, the authors propose a variation of Copeland rule that outputs a ranking representable by a linear model.
Strengths: The paper investigates an important research question with a theoretically-founded approach.
The obtained results seem to be novel as far as I know. The failure of PO by minimizing a loss points to a strong limitation of the current approaches using human feedback.
The paper is well and clearly written. I notably appreciate the footnotes, comments, and connections with other works that the authors make.
Weaknesses: I feel that the linear social choice paradigm may be too restrictive. I believe that the authors focus on linear models, because this is indeed a natural machine learning model. However, I don't think it is necessary to assume that the rankings of voters (i.e., humans in AI alignment) have to be linearly representable. In my opinion, the paper would be stronger if the results were presented without this latter assumption.
There are some gaps between the setting in machine learning (e.g., RLHF) and social choice theory. As the authors mention, in the latter, one usually observes pairwise comparisons instead of rankings. In addition, I think in the latter, one may not be interested to recover the full ranking, but only part of it, e.g., in RLHF, as the agent is trained, the reward approximator only needs to be good in the region of state-action pairs visited by a good policy. Could the authors comment on how this would impact their results?
The proposed social choice-based rule seems to me to be a bit artificial and created only to enforce Pareto optimality. It is not clear whether such a rule could easily be implemented in RLHF, for instance, and obtained via learning. For instance, in RLHF, this rule would require considering all trajectories, which is impractical for any complex problem.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1) Could the authors comment on how only trying to recover part of the full ranking would impact their results?
2) Could the authors comment on how their proposed rule could be implemented in practice?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: The current discussion is adequate.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Could the authors comment on how only trying to recover part of the full ranking would impact their results?
We believe our work addresses this concern. The prompt/response datasets are derived from sampling an LLM, ensuring the corresponding feature vectors are already in a reasonable region of feature space simply by virtue of being in the dataset. Our axioms are designed to guarantee good performance specifically on these vectors.
We could have instead chosen even stronger notions requiring the conditions to hold across the entire feature space. For instance, PO could be defined such that for *any* two feature vectors $\mathbf{x}, \mathbf{x}' \in \mathbb{R}^d$, if all voters prefer $\mathbf{x}$ to $\mathbf{x}'$, then $\theta^*$ should as well. This is (i) impossible to achieve with trivial counterexamples, and (ii) unnecessarily stringent for precisely the reasons you mentioned.
This idea extends to RLHF beyond LLMs, as long as the dataset defines a reasonable region for the reward approximator to perform well in.
> The proposed social choice-based rule seems to me to be a bit artificial and created only to enforce Pareto optimality. It is not clear whether such a rule could easily be implemented in RLHF, for instance, and obtained via learning. For instance, in RLHF, this rule would require considering all trajectories, which is impractical for any complex problem. [...] Could the authors comment on how their proposed rule could be implemented in practice?
We believe that the problem you're alluding to may not be an issue. In our desired use case, all "trajectories" are sequences of tokens which can in principle be embedded into a fixed-dimension feature space. In existing implementations, we are given a dataset of pairwise comparisons over token sequences which are interpreted as feature vectors. In common variants of RLHF, a reward function is learned to maximize the Bradley-Terry (BT) likelihood over a dataset of such comparisons. LCPO is just an alternative to this step, where instead of maximizing likelihood we solve a small number of linear programs. Just as in the case of BT, LCPO can be used on a dataset which contains only a small subset of possible pairwise comparisons. Once the reward function is obtained using LCPO, it can be used for reinforcement learning in precisely the same way as the standard RLHF pipeline.
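To make the step that LCPO replaces concrete, the following is a minimal sketch (not the paper's implementation; the function name, learning rate, and epoch count are illustrative) of fitting a linear reward vector by maximizing the Bradley-Terry likelihood over pairwise comparisons of feature vectors:

```python
import numpy as np

def fit_bt_linear_reward(pairs, d, lr=0.5, epochs=500):
    """Maximize the Bradley-Terry log-likelihood for a linear reward
    r(x) = theta @ x, given pairs (x_win, x_lose) of feature vectors."""
    theta = np.zeros(d)
    for _ in range(epochs):
        grad = np.zeros(d)
        for x_w, x_l in pairs:
            diff = x_w - x_l
            # P(x_lose beats x_win) under BT = sigmoid(-theta @ diff);
            # this is also the gradient weight of the log-likelihood term.
            p_wrong = 1.0 / (1.0 + np.exp(theta @ diff))
            grad += p_wrong * diff
        theta += lr * grad / len(pairs)  # gradient ascent on log-likelihood
    return theta
```

Under the rebuttal's description, LCPO would swap this likelihood-maximization step for solving a small number of linear programs, while the resulting reward vector is consumed by the downstream RL stage in exactly the same way.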
> I feel that the linear social choice paradigm may be too restrictive. I believe that the authors focus on linear models, because this is indeed a natural machine learning model. However, I don't think it is necessary to assume that the rankings of voters (i.e., humans in AI alignment) have to be linearly representable. In my opinion, the paper would be stronger if the results were presented without this latter assumption.
While the linearity assumption is admittedly a limitation, we note that linear reward models are more general than they may first appear, as the "feature vectors" can be output embeddings of a pretrained LLM (and we just learn a linear layer of a reward model from pairwise comparisons). Consequently, even a linear model can in principle encode highly non-linear characteristics of the raw input (e.g., text).
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
UniIF: Unified Molecule Inverse Folding | Accept (poster) | Summary: This paper proposes representing molecular systems as blocks of atoms and designs a neural network architecture that processes such representations. The method is applied for benchmark inverse folding tasks for protein/RNA. A materials-related task is also explored.
Strengths: - The proposed method seems to perform well on various protein/RNA inverse folding benchmarks that focus on the recovery rate metric. (this reviewer is not experienced in protein/RNA design tasks, so it is hard for this reviewer to put the results in context.)
- The idea of representing clusters of atoms as blocks is an interesting approach and is potentially useful for representing molecular systems across different scales. while such ideas have already been explored in several previous works as the authors noted, there are new elements such as vector basis for blocks.
Weaknesses: - The protein inverse folding ablation seems to show the various introduced techniques do not influence model performance much. The insights of the models seem obscured from this perspective, especially considering block-based representations and architectures have already been explored in previous works (GET/EPT). The RNA design task does not include ablation, and this reviewer is not convinced by the materials-related task (see below). It is rather unclear what a reader can take away from this work in terms of what techniques actually contribute to improved performance.
- If "unified" is the key word of the method, perhaps the authors should include experiments where the model is trained on various data modalities and see if that improves performances on individual tasks.
- The materials sections are rather strange in this paper. Inverse folding for proteins/RNAs can be understood as inferring the sequence from a given structure. The materials task is not introduced in great detail. What is being predicted? How is it "inverse folding?"
- To follow up on that, the baselines that the proposed model is compared against are not what's considered state-of-the-art in materials representation. For materials, there are popular benchmarks such as Matbench (https://matbench.materialsproject.org/), JARVIS (https://pages.nist.gov/jarvis_leaderboard/), or Open catalyst (https://opencatalystproject.org/index.html), where results of SOTA models are readily available. This reviewer is not convinced by the materials-related results presented as significant details regarding the dataset/goals are missing, and the baselines are not really used for materials.
Technical Quality: 3
Clarity: 2
Questions for Authors: - In the experiments, what exactly does with/without ESM mean? How is ESM used? Why can't the proposed method also become "with ESM"? Why is it valuable to consider "without ESM"? Are models "with ESM" more expensive or what?
- Can the author clarify the materials task and why it is inverse folding? Can the authors compare to materials-focused SOTA methods in the space?
Confidence: 1
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors did not write about the limitations of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1** The protein inverse folding ablation seems to show the various introduced techniques do not influence model performance much. The insights of the models seem obscured from this perspective, especially considering block-based representations and architectures have already been explored in previous works (GET/EPT). The RNA design task does not include ablation, and this reviewer is not convinced by the materials-related task (see below). It is rather unclear what a reader can take away from this work in terms of what techniques actually contribute to improved performance.
>> **Reply** As protein design is more mature than RNA design, we perform the ablation on it. However, protein design performance has reached a bottleneck stage where any improvement is difficult, so the effects are clearer on the RNA dataset. Here we provide the ablation results on the RNA dataset:
| model | short | medium | long | all |
|------------|-------|--------|-------|-------|
| w/o VFrame | 44.44 | 49.25 | 37.40 | 45.45 |
| w/o EAttn | 47.89 | 49.00 | 36.71 | 48.28 |
| w/o GDP | 46.88 | 48.00 | 37.19 | 47.06 |
| UniIF | 48.21 | 49.66 | 37.29 | 48.94 |
In addition, we have evaluated the performance of GET on protein design, where it performs very poorly.
**W2** If "unified" is the key word of the method, perhaps the authors should include experiments where the model is trained on various data modalities and see if that improves performances on individual tasks.
>>**Reply** Such a protocol may improve the performance of protein-RNA or protein-molecule interaction prediction. However, for the design task, we observe that training on a single dataset leads to better results, as we do not consider the complex interactions.
**W3** The materials sections are rather strange in this paper. Inverse folding for proteins/RNAs can be understood as inferring the sequence from a given structure. The materials task is not introduced in great detail. What is being predicted? How is it "inverse folding?"
**Q2** Can the author clarify the materials task and why it is inverse folding?
>>**Reply** This is a new task defined in CHILI [1], in which the material structure is given and the model predicts the composition of atom types that fits this structure.
**W4** To follow up on that, the baselines that the proposed model is compared against are not what's considered state-of-the-art in materials representation. For materials, there are popular benchmarks such as Matbench (https://matbench.materialsproject.org/), JARVIS (https://pages.nist.gov/jarvis_leaderboard/), or Open catalyst (https://opencatalystproject.org/index.html), where results of SOTA models are readily available. This reviewer is not convinced by the materials-related results presented as significant details regarding the dataset/goals are missing, and the baselines are not really used for materials.
**Q2** Can the authors compare to materials-focused SOTA methods in the space?
>> **Reply** Thanks for your suggestion. On the binary classification task of "matbench_mp_is_metal", we provide the updated results as follows:
>>| model | mean rocauc | mean f1 |
>>|--------|-------------|---------|
>>| UniIF | 0.9617 | 0.9534 |
>>| CGCNN | 0.9520 | 0.9462 |
>>| ALIGNN | 0.9128 | 0.9015 |
>>| coGN | 0.9124 | 0.9012 |
>>| coNGN | 0.9089 | 0.8972 |
>>| SchNet | 0.8907 | 0.8765 |
where UniIF outperforms previous baseline methods.
**Q1** In the experiments, what exactly does with/without ESM mean? How is ESM used? Why can't the proposed method also become "with ESM"? Why is it valuable to consider "without ESM"? Are models "with ESM" more expensive or what?
>>**Reply** If ESM is allowed, the model can fine-tune the pretrained ESM model to refine the designed sequence; otherwise, the model is not allowed to access the knowledge of ESM. The proposed method can also incorporate ESM. Without ESM, the algorithm is more efficient, and there is no risk of label leakage, avoiding potential concerns.
---
Rebuttal Comment 1.1:
Title: Further clarifications
Comment: >> **Reply to W1** In the task of protein design, GET achieves 38.3% recovery. This is because GET is designed for learning protein-molecule interactions, not for protein design.
Here is the citation of CHILI:
>> [1] Friis-Jensen, Ulrik, et al. "CHILI: Chemically-Informed Large-scale Inorganic Nanomaterials Dataset for Advancing Graph Machine Learning." arXiv preprint arXiv:2402.13221 (2024). | Summary: This paper propose a unified framework for inverse folding. Their model is applicable to small molecules, proteins and RNA.
More specifically, they propose a unified frame representation for amino acid, atom and nucleotide. They further propose a GNN-based model to learn the structure information.
Results show that their model can outperform other baselines in protein design, RNA design and material design.
---
I changed my score to 5 after the discussion with other reviewers and AC.
Strengths: (1) The paper is written clearly.
(2) The idea of unifying inverse folding for different types of molecules is interesting and reasonable.
(3) Experimental results show their model works well in different design tasks.
Weaknesses: Major points:
1. Some important experimental details are missing. For instance, while the paper proposes a unified inverse folding framework, it's unclear whether the model is trained on a combined dataset from different types of molecules. Specifically, the training sets for RNA design and material design are quite limited. If the model is trained with protein data, it would be beneficial to demonstrate whether incorporating data from other types improves performance.
2. Additionally, if I understand correctly, the paper does not explain the decoder part of the inverse folding model.
Minor points:
1. It would be helpful if the paper included a comparison of perplexity, as related works such as PiFold have used this metric.
2. The performance on protein design looks comparable with other models to me.
Technical Quality: 3
Clarity: 4
Questions for Authors: (1) As illustrated in the previous part, do you train the model on three types of molecules?
(2) What decoder model is used?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: The limitations are not discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **W1** Some important experimental details are missing. For instance, while the paper proposes a unified inverse folding framework, it's unclear whether the model is trained on a combined dataset from different types of molecules. Specifically, the training sets for RNA design and material design are quite limited. If the model is trained with protein data, it would be beneficial to demonstrate whether incorporating data from other types improves performance.
> **Q1** As illustrated in the previous part, do you train the model on three types of molecules?
>> **Reply** We train the model on different types of molecules. We have tried training UniIF on a mixed protein and RNA dataset, which achieves slightly lower performance than training on a single dataset. We have used all the RNA data recorded in the PDB to train the model, which is currently the largest real dataset available. We may use data augmentation techniques in the future.
> **W2** Additionally, if I understand correctly, the paper does not explain the decoder part of the inverse folding model.
> **Q2** What decoder model is used?
>> **Reply** A linear layer serves as the decoder to predict 20 amino acids, 4 nucleotides, or 128 atom types.
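A minimal sketch of such a decoder (illustrative numpy, not the authors' code): a single linear map from block embeddings to logits over the task vocabulary, followed by an argmax.

```python
import numpy as np

def decode_types(h, W, b):
    """Linear decoder: block embeddings h (n, d) -> predicted type ids.
    The vocabulary size (number of columns of W) would be 20 for amino
    acids, 4 for nucleotides, or 128 for atom types."""
    logits = h @ W + b            # (n, vocab)
    return logits.argmax(axis=1)  # one predicted type per block
```

The same decoder head works for all three tasks; only the output dimension changes with the vocabulary.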
> **W3** It would be helpful if the paper included a comparison of perplexity, as related works such as PiFold have used this metric.
>> **Reply** We provide the perplexity as follows:
| model | short | medium | long | all |
|--------|-------|--------|------|------|
| PiFold | 5.81 | 4.57 | 3.77 | 4.50 |
| UniIF | 5.53 | 4.41 | 3.62 | 4.32 |
> **W4** The performance on protein design looks comparable with other models to me.
>> **Reply** As far as we know, the protein design performance has reached a bottleneck stage where any improvement is difficult.
---
Rebuttal Comment 1.1:
Comment: Thanks for your reply. I have no further comments.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thanks for your review service and good questions. Have a good day! | Summary: Previous approaches on molecule inverse folding (IF), which is crucial for drug and material discovery, focus separately on either macro-molecules or small molecules, leading to the lack of a unified approach for different molecule types. To this end, the paper proposes UniIF to unify the IF from a data and model perspective. Specifically, by introducing a unified block for processing different molecules and a geometric block to capture their 3D interactions, UniIF shows improvements in various tasks across different molecules.
Strengths: - The proposed UniIF is novel and effective for unifying IF across different molecules, including protein, RNA, and small molecules.
- The UniIF can excel in protein, RNA, and material design, which is impressive. Extensive experiments have been conducted to provide justification for the effectiveness of the proposed method.
- The paper is generally well-written, with clear illustrations and tables.
Weaknesses: - The ablation experiments in Table 1 (-GDP), which remove the geometric dot product features, show minor drawbacks in proteins with longer sequences. This makes the geometric interaction extractor mainly capture the interaction of the virtual inter-atoms. The idea of capturing long-range dependency or interaction using virtual blocks has been studied in [1,2,3,4]; the authors could consider providing a discussion on how their proposed techniques differ from these referenced works.
- In addition, the UniIF adopted both node and edge feature extractors as feature augmentation, which is model-agnostic. However, the comparison of all three tasks does not show the potential benefit of introducing such a featureizer.
- The paper could provide further insight into the benefit of employing a unified framework for protein, RNA, and material design. For example, could the UniIF model different molecules in a unified latent space, allowing the researchers to investigate their underlying interactions?
[1] Neural Message Passing for Quantum Chemistry. In ICML, 2017
[2] An analysis of virtual nodes in graph neural networks for link prediction. In LoG, 2022
[3] On the Connection Between MPNN and Graph Transformer. In ICML, 2023.
[4] Neural Atoms: Propagating Long-range Interaction in Molecular Graphs through Efficient Communication Channel. In ICLR, 2024.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Could you provide some discussions or experiments on the effectiveness of the proposed featureizer?
2. Is it possible to train UniIF with mixed molecule datasets?
3. For Equation 7, the SVD decomposition could be time-consuming when scaling the input size. Could the author explain the motivation for such a design and why a simple approach, like "mean," does not work?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The proposed UniIF offers a unified framework for learning different molecules, such as protein and RNA. However, it still models different molecules individually, hindering the proposed framework's potential impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **W1** The ablation experiments in Table 1 (-GDP), which remove the geometric dot product features, show miner drawbacks in proteins with longer sequences. This makes the geometric interaction extractor mainly capture the interaction of the virtual inter-atoms. The idea for capturing long-range dependency or interaction using virtual blocks has been studied in [1,2,3,4]; the authors could consider providing a discussion on how their proposed techniques differ from these referenced works.
>> **R1** Thanks for your recommendation. Recent works show that virtual nodes can help the model learn long-range dependencies. Our work differs in the virtual frame construction and interaction: we construct a local frame for each virtual node, and we use geometric operators to model the communication between virtual nodes. Other GNN-related works do not consider 3D geometric information.
> **W2** In addition, the UniIF adopted both node and edge feature extractors as feature augmentation, which is model-agnostic. However, the comparison of all three tasks does not show the potential benefit of introducing such a featureizer.
> **Q1** Could you provide some discussions or experiments on the effectiveness of the proposed featureizer?
>> **Reply** Previously, PiFold used a model-specific featurizer for protein design and achieved good results. However, such a featurizer is complex and computationally expensive. Our contribution is to simplify the featurizer and make it model-agnostic, while ensuring competitive performance. If one uses PiFold's featurizer, the recovery of protein design can further improve by about 0.3%. However, such an improvement is model-specific and cannot easily extend to RNA data. With our simplified featurizer, RNA recovery improves substantially.
> **W3** The paper could provide further insight into the benefit of employing a unified framework for protein, RNA, and material design. For example, could the UniIF model different molecules in a unified latent space, allowing the researchers to investigate their underlying interactions?
> **Q2** Is it possible to train UniIF with mixed molecule datasets?
>> **Reply** Yes. We have tried training UniIF on a mixed protein and RNA dataset, which achieves slightly lower performance than training on a single dataset. We plan to extend the experiments to protein-RNA-molecule complexes to study the underlying interactions, as you suggested.
> **Q3** For Equation 7, the SVD decomposition could be time-consuming when scaling the input size. Could the author explain the motivation for such a design and why a simple approach, like "mean," does not work?
>> **Reply** A simple approach, like "mean", cannot produce orthogonal frame vectors. We use SVD to obtain a rotation matrix that serves as an orthogonal vector basis.
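A minimal sketch of the SVD-based construction described here (illustrative, assuming the frame is built from the centered coordinates of one block; the actual UniIF implementation may differ, and a sign correction may be needed to make the basis a proper rotation with determinant +1):

```python
import numpy as np

def virtual_frame(coords):
    """Orthonormal frame for a block from its 3D coordinates.
    coords: (n, 3) atom positions; returns a (3, 3) matrix whose rows
    are orthonormal basis vectors (principal directions, up to sign)."""
    centered = coords - coords.mean(axis=0)
    # SVD of the centered coordinates; the rows of vt form an orthonormal
    # basis. A plain mean of the coordinates yields a single vector and
    # cannot provide three mutually orthogonal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt
```

Orthonormality (`vt @ vt.T` equal to the identity) is exactly the property a simple mean cannot supply, which motivates the SVD here.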
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. I retain my score considering the contribution of the submission.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thanks for your review service and appreciation! | Summary: This paper addresses the challenge of inverse folding, i.e. the design of novel molecules or macromolecules with specific desired 3D structure, with the goal of improving real-world drug and material design. The authors point out that this challenge has been addressed independently for different contexts, such as inverse folding for small molecules vs inverse folding for large macromolecules. They argue that this has resulted in redundant efforts. To address this issue, they propose a unified model for the inverse folding of all molecules. They demonstrate the efficacy of their novel unified approach across three benchmark molecular design tasks involving protein, RNA and material design.
Strengths: The authors make an interesting argument that inverse folding problems in different scientific domains should be addressed by a single approach. They present an elegant approach that unifies approaches across domains. Their results demonstrates improved performance as measured by sequence recovery at benchmark tasks involving protein, RNA and material design. The authors carry out ablation studies that provide some insight into the source of this improved performance.
Weaknesses: While the unified approach is elegant, it is not clear (and the authors do not assert) that the unification is responsible for any performance improvements. However, it is easy for the reader to be left with the impression that the improved results stem from unification. The authors should make it clear that either (i) unification is satisfying but is not connected to performance improvements, or (ii) unification leads to performance improvements and explain clearly the data supporting this.
The recovery metric is confusing, since it is not clear that higher recovery actually corresponds to the ability to design novel molecules. Indeed, the recovery metric penalizes the model for novelty, which seems to contradict the stated goal ('enabling scientists to synthesize novel molecules with the desired structure'). This undermines the claim that the approach presented in the paper 'can benefit multiple domains, such as machine learning, drug design, and material design'. Specifically, rediscovering existing macromolecules does not benefit these domains.
Moreover, even if the metric were aligned with the stated goals, it is not clear to this reviewer that existing inverse folding methods are not good enough to solve any real world application of inverse folding such as the stated goals described in the introduction to this paper of improvements in drug and material design. This leads me to wonder what problem this paper is trying to solve.
While the paper focuses on maintaining the structure of the target molecules (or material), no attempt is made to ensure that the function is maintained. This does not bode well for applications such as drug design, that require that the designed molecule functions correctly.
The paper does not include any wetlab experiments, instead relying on performance at benchmark tasks to demonstrate improved performance. The authors argue that this is the only limitation of this paper, and that wetlab experiments are by definition out of scope for an AI paper. This raises the question of whether improvements on the benchmark tasks and associated metrics used in this paper corresponds to progress on the stated goals of the paper.
Technical Quality: 2
Clarity: 3
Questions for Authors: In the absence of wetlab experiments, please could the authors address this limitation by providing data that demonstrates that improved performance at the specific benchmark tasks in inverse folding cited in this paper results in improved performance at novel molecule design. This data can be extracted from previously published work, with appropriate citation.
There have been a series of recent works presenting improvements at inverse folding. These works, including this paper, make lofty claims about the impact that such improvements will have on drug and material design. Please could the authors point to any realized impact on drug and material design resulting from improvements at inverse folding as measured by performance on these benchmark tasks.
If such realized impact is difficult to find, please could the authors identify and address this limitation.
Please could the authors propose strategies to mitigate the risk that further improvements in performance on benchmark tasks, such as the improvements reported in this paper, will not result in realized impact in drug or material design.
Please could the authors propose strategies to mitigate the risk that the metric used in this work penalizes the model for the design of novel molecules, thereby undermining the stated goals.
Please could the authors identify a set of real world applications for inverse folding, describing the problem, how inverse folding will contribute to solving this problem, and why existing inverse folding methods are not good enough to solve each specific real world problem identified (NB inferior performance on benchmark tasks does not demonstrate that existing methods are not good enough).
I note that the number of sequences for CATH 4.3 differs from the referenced papers - please could the authors explain this discrepancy.
For the time split protein tasks, it would be helpful to establish how similar each test sequence is to those used in training, both in terms of proximity in sequence space and in terms of 3D structure. Please then stratify performance as a function of each distance for each held out test set. It would be helpful to also compute these distances for the CATH topology-based split, to provide a watermark for comparison.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: In addition to the questions above, please could the authors describe limitations of their work that may prevent improvements at inverse design as measured by performance on benchmark tasks using a recovery metric from having impact on real-world challenges such as drug and material design.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **W1:** It is not clear that the unification is responsible for any performance improvements.
>> **Reply** The proposed unification method is responsible for the performance improvements through the block-level representation and modules such as the geometric featurizer, the interaction operator, and the virtual frame.
> The recovery metric is confusing, since it is not clear that higher recovery actually corresponds to the ability to design novel molecules.
>> **Reply** One might claim that higher recovery reduces diversity without considering similar designability. However, when methods with comparable designability—measured by self-consistent TM (scTM) scores—are considered, the conclusion changes: higher recovery actually increases diversity at the same scTM score level. This is an important finding of ProteinInvBench [1].
>> [1] Gao, Zhangyang, et al. "Proteininvbench: Benchmarking protein inverse folding on diverse tasks, models, and metrics." NeurIPS (2024).
> **W2:** Moreover, even if the metric were aligned with the stated goals, it is not clear to this reviewer that existing inverse folding methods are not good enough to solve any real world application.
> **W3:** The paper lacks wetlab experiments and relies on benchmark performance to demonstrate improvement. Do these benchmark improvements and metrics align with the stated goals?
> **Q1:** In the absence of wetlab experiments, the authors should provide data from published work showing that better benchmark performance in inverse folding leads to improved novel molecule design, with proper citations.
> **Q2** Can the authors provide examples of improvements in drug and material design resulting from better performance on inverse folding benchmark tasks? If such examples are hard to find, please address this limitation.
> **Q3:** Please could the authors propose strategies to mitigate the risk that further improvements in performance on benchmark tasks, such as the improvements reported in this paper, will not result in realized impact in drug or material design.
>> **R3:** The stated goals can be decomposed into two step: computation and wetlab experiments.
>> Most AI researchers [3,5,6,7] focus on improving the sequence recovery, which has inspired further biological research. For instance, the well-known ProteinMPNN[4] is largely based on GraphTrans[5]. AI researchers often lack the equipment and funding for wetlab experiments. If you can provide or recommend research opportunities, we would greatly appreciate it.
>> Improving recovery is crucial for wetlab experiments. ProteinMPNN [4] increased recovery from 39% to over 45%, achieving strong wetlab results. LigandMPNN [2] further improved recovery to over 60%, demonstrating the ability to generate small-molecule- and DNA-binding proteins with high affinity and specificity.
>> As suggested, AI researchers should collaborate with bio-labs to validate their algorithms. Additionally, evaluating proposed methods on more metrics, such as scTM and diversity, would help mitigate the risks mentioned.
>> [2] Dauparas, Justas, et al. "Atomic context-conditioned protein sequence design using LigandMPNN." Biorxiv (2023).
>> [3] Wu, Fang, and Stan Z. Li. "A hierarchical training paradigm for antibody structure-sequence co-design." NeurIPS (2024).
>> [4] Dauparas, Justas, et al. "Robust deep learning–based protein sequence design using ProteinMPNN." Science (2022).
>> [5] Ingraham, John, et al. "Generative models for graph-based protein design." NeurIPS (2019).
>> [6] Zheng, Zaixiang, et al. "Structure-informed language models are protein designers."ICML, 2023.
>> [7] Hsu, Chloe, et al. "Learning inverse folding from millions of predicted structures." ICML, 2022.
> **W4:** While the paper focuses on maintaining the structure of the target molecules (or material), no attempt is made to ensure that the function is maintained.
>> **R4:** We appreciate the suggestion; a relevant paper has been published in Nature[6]. We also aim to extend the algorithm to protein functions, such as solubility, but have not found a well-curated dataset for these functions. If you can recommend a suitable dataset, we would be very grateful.
>> [6] Goverde, Casper A., et al. "Computational design of soluble and functional membrane protein analogues." Nature (2024).
> **Q4:** Could the authors propose strategies to mitigate the risk that the metric used in this work penalizes the model for the design of novel molecules?
>> **R4:** Adjusting the sampling temperature balances recovery and diversity: higher temperatures (temp) increase diversity while reducing recovery. At the same recovery level on the CASP15 dataset, UniIF shows higher diversity compared to the baselines, aligning with findings from ProteinInvBench.
| temp | ProteinMPNN rec | ProteinMPNN diversity | PiFold rec | PiFold diversity | UniIF rec | UniIF diversity |
|:----:|:---------------:|:---------------------:|:----------:|:----------------:|:---------:|:---------------:|
| 0.0 | 0.449 | 0.0 | 0.473 | 0.0 | 0.514 | 0.0 |
| 0.05 | 0.436 | 0.22 | 0.463 | 0.24 | 0.505 | 0.14 |
| 0.1 | 0.404 | 0.1 | 0.425 | 0.41 | 0.475 | 0.30 |
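For illustration, the temperature-scaled sampling behind this trade-off can be sketched in a few lines of numpy; the logits, shapes, and temperature values below are toy stand-ins, not actual UniIF outputs:

```python
import numpy as np

def sample_sequence(logits, temp, rng):
    """Sample one residue per position from temperature-scaled logits.

    temp=0 reduces to greedy argmax decoding (max recovery, zero diversity);
    larger temp flattens the distribution, trading recovery for diversity.
    """
    if temp == 0:
        return logits.argmax(axis=-1)
    scaled = logits / temp
    # numerically stable softmax over the 20 amino-acid classes
    scaled = scaled - scaled.max(axis=-1, keepdims=True)
    probs = np.exp(scaled)
    probs /= probs.sum(axis=-1, keepdims=True)
    return np.array([rng.choice(len(p), p=p) for p in probs])

rng = np.random.default_rng(0)
logits = rng.normal(size=(50, 20))          # 50 positions, 20 amino acids
greedy = sample_sequence(logits, 0.0, rng)  # deterministic, highest recovery
hot = sample_sequence(logits, 1.0, rng)     # stochastic, more diverse
```

At `temp=0` this reduces to greedy decoding, which maximizes recovery at the cost of zero diversity, matching the first row of the table.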
Kindly refer to the official comments for further responses.
---
Rebuttal Comment 1.1:
Title: Looking forward to your reply
Comment: Dear reviewer r3pY,
We express our sincere gratitude for your constructive feedback in the initial review. It is our hope that our responses adequately address your concerns. Your expert insights are invaluable to us in our pursuit of elevating the quality of our work. We are fully aware of the demands on your time and deeply appreciate your dedication and expertise throughout this review.
We eagerly anticipate your additional comments and are committed to promptly addressing any further concerns.
Once again, we extend our heartfelt thanks for your time and effort during the author-reviewer discussion period.
Best,
Authors
---
Rebuttal Comment 1.2:
Comment: Thank you for the considered response to my review. I fully agree with the authors that the lack of well-curated datasets that would allow AI researchers to address key issues in this domain is a significant problem.
I also fully agree with the authors that AI researchers should collaborate with bio-labs to validate their algorithms and increase their impact on the field. Indeed, carrying out wet-lab experiments is not simply a matter of getting funding and equipment; significant technical expertise and experience are required, so I would not suggest that AI researchers try setting up experiments from scratch.
Navigating collaborations with bio-labs is crucial to achieving significant impact, and I urge these authors to consider reaching out to potential collaborators from other fields, particularly those already equipped with funding, equipment etc.
More generally, while I like the idea of a Unified framework across these different tasks, it still remains unclear to me that the unification itself results in improved performance - i.e. that individual tasks benefit from training on the datasets for the other tasks.
---
Rebuttal 2:
Title: Author Rebuttal
Comment: > **Q5:** Could the authors identify real-world applications for inverse folding, detailing the problem, how inverse folding could help, and why existing methods fall short for each specific application?
>> **R5:** If computational benchmarks are not recognized and only wet-lab experiments are considered, it becomes difficult for us to explain why existing methods fall short in applications, given our limitations in funding, time, equipment, and experience. Here are some real-world applications of inverse folding:
- Binding site design[2]: Designing binding sites for small molecules starting from previously characterized designs generated using Rosetta.
- Membrane protein design[6]: Creating soluble and functional analogues of integral membrane proteins using AF2 + ProteinMPNN.
Inverse folding methods are used to sample new amino acid sequences for a given fold, where improving recovery is an important goal of this research. Adjusting the temperature trades off recovery against diversity. In comparison, our proposed UniIF balances recovery and diversity better than the baselines.
> **Q6:** I note that the number of sequences for CATH 4.3 differs from the referenced papers - please could the authors explain this discrepancy.
>> **R6:** Previously, the number of proteins reported for CATH 4.3 followed the textual description in ESM-IF[7]. However, it was recently discovered that the released dataset contains a different number of proteins than described. We have therefore corrected the description, but we actually use the same dataset as the baseline.
> **Q7** For the time split protein tasks, it would be helpful to establish how similar each test sequence is to those used in training, both in terms of proximity in sequence space and in terms of 3D structure. Please then stratify performance as a function of each distance for each held out test set. It would be helpful to also compute these distances for the CATH topology-based split, to provide a watermark for comparison.
>> **R7:** There is no way to find the CATH code for novel protein sequences. Following your suggestion, we provide stratified performance in terms of sequence and structure similarity, computed with MMseqs2 and Foldseek on the CASP15 dataset. We found that sequence similarity shows a clearer positive correlation with recovery than structural similarity does.
| seq cut | rec | struct cut | rec |
|:-------:|:----:|:----------:|:----:|
| <0.3 | 0.48 | <0.1 | 0.51 |
| <0.5 | 0.51 | <0.3 | 0.49 |
| <0.8 | 0.52 | <0.8 | 0.52 |
---
Rebuttal 3:
Title: Thanks for your reply!
Comment: Dear reviewer r3pY,
Thank you once again for your reply! We agree with you that AI researchers should consider reaching out to potential collaborators from other fields, particularly those already equipped with funding, equipment etc. We are also looking for such opportunities.
We're glad to hear that you appreciate the unified framework and that we have the chance to clarify your question regarding whether individual tasks benefit from training on datasets for other tasks.
> The answer is no, based on our experiments. But our method already achieves SOTA performance on each single task.
For example, we attempted to train UniIF on a mixed dataset of proteins and RNA, which resulted in slightly lower performance compared to training on a single dataset. However, we would like to emphasize the benefits of the unified framework:
- UniIF can be trained on different molecules within a unified latent space, allowing researchers to explore underlying interactions—similar to what AF3 accomplished. We plan to extend this in the future.
- UniIF currently achieves state-of-the-art performance on individual tasks, thanks to the careful design of the model, including the interaction operator, GNN layer, and virtual frame. To evaluate the effectiveness of UniIF, we also provide the ablation results for RNA design. The performance gain obtained by the virtual frame (VFrame) and geometric dot product (GDP) is more significant on the RNA dataset. This is because protein design performance has reached a bottleneck stage where any improvement is difficult; the effect is clearer on the RNA dataset. Here are the RNA ablation results:
>| model | short | medium | long | all |
>|------------|-------|--------|-------|-------|
>| w/o VFrame | 44.44 | 49.25 | 37.40 | 45.45 |
>| w/o EAttn | 47.89 | 49.00 | 36.71 | 48.28 |
>| w/o GDP | 46.88 | 48.00 | 37.19 | 47.06 |
>| UniIF | 48.21 | 49.66 | 37.29 | 48.94 |
- UniIF can serve as a strong baseline that can be further enhanced by incorporating domain-specific knowledge, such as adding secondary structure information for RNA design. That is, individual tasks can be enhanced by adding domain-specific knowledge to UniIF, although UniIF has already achieved SOTA performance.
We sincerely hope this could clarify your concerns, and we would be happy to answer any additional questions. If we have adequately addressed your concerns, we kindly hope that you consider raising the score to support acceptance. Your comment and suggestion are important and valuable for us. Thank you.
Best,
Authors.
---
Rebuttal 4:
Title: Thanks for your comments!
Comment: Dear Reviewer r3pY,
Thanks for your review. We have tried our best to address your questions and we respectfully thank you for supporting the acceptance of our work. Also, please let us know if you have any further questions. Look forward to further discussions!
Sincerely,
The Authors | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Return of Unconditional Generation: A Self-supervised Representation Generation Method | Accept (oral) | Summary: The paper proposes a framework, coined Representation-Conditioned Generation (RCG), that aims to bring the advantages of Conditional Generation techniques to Unconditional Generation settings.
To do so, they use the output of a representation generator network instead of class labels.
This representation generator is trained in a prior stage to approximate the distribution of image features extracted by a pre-trained self-supervised network (MoCo-v3).
They test this technique with a set of very different generative models, including a latent diffusion model, a diffusion transformer, and a masked generative transformer, showing improvement across the board on the ImageNet Unconditional Generation benchmark.
Strengths: a. The proposed method is sound, the technical details are delivered clearly and thoroughly.
b. Experiments and ablations are extensive, and results look robust, being confirmed for very different classes of generative models.
c. The paper provides new empirical evidence and numbers on very relevant points:
- Diffusion models can generate self-supervised representations convincingly.
- Those generated features can serve as good conditioning signals to help train image generation models.
Weaknesses: a. Technical novelty is limited: [48] also generates images by first generating pretrained features to condition the image generator, and works such as [5,A] manage to generate high-quality images using representations. The remaining differentiator to existing work in terms of technical contribution is then in the choice of the pre-trained feature extractor and image generator.
b. In that light, while thorough ablation results are presented, I find the interpretation of the results lacking. If those choices are important, and it seems to be the case considering the range of FID scores in the ablations, it would be very useful to provide intuitions about how to make those choices and why.
--
[A] SODA: Bottleneck Diffusion Models for Representation Learning. Hudson et al. CVPR 2024
Technical Quality: 4
Clarity: 3
Questions for Authors: - In Table 8, it is difficult to have an interpretation of the FD as we don't have a reference point. Maybe showing FD between training and validation set could be an option, provided the number of samples for the estimates is handled carefully. Also, FD is evaluated on the training set. It would be good to have validation scores as well.
- In a similar vein, I'd like to see the performance (FID) of RCG on features sampled from training images (similar to Figure 6). This would give another indication of how good the representation generation is. This is important since the representation generation is a major, and arguably the most important, contribution of the paper.
- The authors mention in Appendix B that the FID in the corresponding section are computed on the validation set. Is that also the case for the other values in the main paper?
- For the point raised in weakness b., a precise question I'd like to ask is why, as seen in Table 7a, MoCo v3 features are so much better for this task than DINO or iBOT features that are supposed to be better in other downstream tasks.
- Also linked to weakness b., there are strong connections between generative models and self-supervised learning. In fact, works such as [61, A] or [41] on which the current paper builds a lot of their results are already generative models that explicitly double as representation learners, or even have representation learning as their main objective. Those ties should be discussed in the paper.
Moreover, if a generative model can be a representation learner, and if using a representation can help achieve a better generative model, how to explain the good results in the submission becomes even less clear. I'd be happy to hear the authors' thoughts on that point.
In any case, I think the submission is already sound and of interest. It shows very good and solid results on a relevant topic, and opens up interesting research directions. But also, it doesn't provide much in terms of technical novelty or in terms of analysis. The paper touches upon, but barely scratches the surface of very fundamental questions related to self-supervised methods and the training of generative models. I would definitely consider raising my rating to 7 or 8 if it would have a slightly more complete evaluation of the feature generation part and if it provided a deeper analysis of the results.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 2
Limitations: The authors make an attempt to address limitations by showing failed generations. They could discuss the scope in terms of data type: are the results likely to transfer to other data types? What hurdles should one expect in doing so?
They also try to adress societal impacts by discussing generative model biases and hypothesize that unsupervised models should significantly mitigate the influence of human bias. I'd appreciate if they would either expand on that point or alternatively refrain from making such an assumption without a stronger backing. ImageNet photos certainly have been produced by humans and are not bias-free.
Since they are willing to discuss the limitations and impacts of generative models in general, they could also mention potential misuses such as deep fakes and misinformation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for appreciating our method, extensive and robust experiments, and the potential to open up interesting research directions. Below we address the weaknesses (***W***) and questions (***Q***) raised by the reviewer. We hope the reviewer could consider raising the score given the additional evaluation in feature generation and the interpretation of the ablation results provided in the rebuttal.
***Technical novelty (W1)***
As emphasized in the general response, one major technical contribution of this work is the representation generator. Unlike previous works such as [5, A] ([48] is not related to image generation), which require ground-truth images to provide the representation during the generative process, our approach does not rely on such an impractical requirement and demonstrates the possibility of unconditionally generating pre-trained self-supervised representations.
***Additional evaluation of feature generation (Q1, Q2)***
As per the reviewer’s request, below we provide the reference points for FD and RCG’s performance using representations sampled from training images.
***Q1: Reference points for FD***
Since the MoCo v3 encoder is trained on the ImageNet training set, the representation distribution in the training set can be slightly different from that in the validation set. To establish a better reference point, we compute the FD between 50K randomly sampled representations from the training set and the representations from the entire training set, which should serve as the lower bound of the FD for our representation generator. The result is an FD of 0.38, demonstrating that our representation generator (with an FD of 0.48) can accurately model the representation distribution.
We also evaluate the representation generator against the validation set, resulting in an FD of 2.73. As a reference point, the FD between 50K randomly sampled representations from the training set and the validation set is 2.47, which is also close to the FD of our representation generator. We will include both results in the revision.
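For clarity, the FD reported here is the Fréchet distance between Gaussian fits of two representation sets, FD = ||mu_1 - mu_2||^2 + Tr(Sigma_1 + Sigma_2 - 2 (Sigma_1 Sigma_2)^{1/2}). Below is a hedged sketch of this computation; the toy Gaussian features merely stand in for actual MoCo v3 representations:

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(x, y):
    """Frechet distance between Gaussian fits of two feature sets of shape (N, D)."""
    mu_x, mu_y = x.mean(axis=0), y.mean(axis=0)
    cov_x = np.cov(x, rowvar=False)
    cov_y = np.cov(y, rowvar=False)
    covmean = sqrtm(cov_x @ cov_y)
    if np.iscomplexobj(covmean):   # numerical noise can yield tiny imaginary parts
        covmean = covmean.real
    diff = mu_x - mu_y
    return diff @ diff + np.trace(cov_x + cov_y - 2.0 * covmean)

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=(5000, 8))
b = rng.normal(0.0, 1.0, size=(5000, 8))   # same distribution -> FD near 0
c = rng.normal(0.5, 1.0, size=(5000, 8))   # shifted mean -> clearly larger FD
```

Matching distributions give an FD near zero, which is why the 0.38 train-vs-train value above serves as the practical lower bound for the representation generator.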
***Q2: Performance using representations sampled from training images***
In Table 9(a), we evaluate our image generator under different conditions, including oracle representations from ImageNet training images. The oracle conditioning yields 4.37 FID and 149.0 IS, while conditioning on our generated representations achieves 5.07 FID and 142.5 IS. This further demonstrates the effectiveness of our representation generator in producing realistic and high-quality representations.
***FID evaluation scheme (Q3)***
For FID in the main paper, we follow ADM’s evaluation suite which computes FID w.r.t. the ImageNet training set. This evaluation suite is widely adopted in prior works, so we need to follow it to make a fair comparison with them. Evaluating on the training set versus the validation set does not significantly change FID, and the ablation results trend remains consistent. To ensure consistency with the main paper, we will revise the FID results in the ablation section accordingly.
***Interpretation of the ablation results (W2)***
Most of the results in Tables 7-9 are standard hyper-parameter sweeps. Through these experiments, we aim to provide insights into which hyper-parameters are important and require tuning for future applications of our system.
Two results are particularly interesting: Table 9(a), which we have interpreted in our response to ***Q2***, and Table 7(a), which ablates different pre-trained encoders. We will explain the findings from Table 7(a) below.
***MoCo v3 features vs. DINO/iBOT features (W2, Q4)***
In Table 7(a), using representations from MoCo v3 achieves better FID than using representations from DINO/iBOT. This is likely because only MoCo v3 uses an InfoNCE loss. Literature has shown that optimizing InfoNCE loss can maximize uniformity and preserve maximal information in the representation [1]. The more information in the representation, the more guidance it can provide for the image generator, leading to better and more diverse generation. To demonstrate this, we compute the uniformity loss on representations following section 4.1.2 from [1]. Lower uniformity loss indicates higher uniformity and more information in the representation. The uniformity loss of representations from MoCo v3, DINO, and iBOT is -3.94, -3.60, and -3.55, respectively, which aligns well with their generation performance. We will include this result and discussion in the revision.
[1] Understanding Contrastive Representation Learning through Alignment and Uniformity on the Hypersphere
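As a hedged illustration of the uniformity metric from [1] used above, the loss is log E[exp(-t * ||z_i - z_j||^2)] over pairs of L2-normalized representations (t = 2 following [1]); the sketch below uses random features in place of the actual MoCo v3 / DINO / iBOT embeddings:

```python
import numpy as np

def uniformity_loss(z, t=2.0):
    """log E[exp(-t * ||z_i - z_j||^2)] over distinct pairs of unit vectors."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize features
    sq_dists = ((z[:, None, :] - z[None, :, :]) ** 2).sum(-1)
    iu = np.triu_indices(len(z), k=1)                  # distinct pairs only
    return np.log(np.exp(-t * sq_dists[iu]).mean())

rng = np.random.default_rng(0)
spread = rng.normal(size=(256, 32))                    # roughly uniform on the sphere
collapsed = rng.normal(size=(1, 32)) + 0.05 * rng.normal(size=(256, 32))
```

More spread-out, information-preserving representations yield a lower (more negative) uniformity loss, which is the ordering reported above for MoCo v3 vs. DINO vs. iBOT.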
***Generative models and self-supervised learning (W2, Q5)***
We thank the reviewer for initiating this discussion. The community has a long-standing belief in the synergy between self-supervised learning and generative models: good representations should enhance the generative process, and generative models can learn robust representations. As the reviewer mentioned, several papers have provided evidence supporting the latter. On the other hand, we focus on the former, showing that explicitly providing generated representations can significantly improve unconditional generative models, which also offers new and compelling evidence to support the synergy.
***Limitations and negative societal impacts***
Applying RCG to other data types is beyond this paper's scope and could be an interesting future direction. It requires a pre-trained encoder, typically available off-the-shelf for common data types. We will include this discussion in the revision.
We acknowledge that image datasets can contain various types of human bias, including biases in data collection and labeling. RCG's unsupervised nature could mitigate labeling bias as it does not rely on human-provided labels. However, we agree that this topic is beyond the paper’s scope and will refrain from making this claim in the revision. We will also include discussions on potential misuses.
---
Rebuttal 2:
Comment: I thank the authors for the detailed responses, I have no major concerns left.
After going through the paper, all the reviews, and the rebuttals, I believe it is a strong submission that contains significant results and prompts important discussions. As such, I would recommend acceptance and I updated my rating accordingly. | Summary: This paper proposes generative models conditioned on representation obtained from a pre-trained self-supervised encoder to achieve high-quality diverse generation.
Strengths: 1. The writing is clear.
2. The experimental results demonstrate significant improvement over unconditional generation.
Weaknesses: 1. The idea of generation conditioned on representations is not new. As the authors mention, conditioning on instance representations has been well studied in the community. Though the authors argue that those methods require ground-truth images to provide representations during generation, the issue can be addressed by adopting the idea from VAEs to align those instance representations with the prior noise, or by incorporating the representation generator proposed in this paper. Therefore, I would say the main contribution of this work is to improve those representation-generation works by introducing a generative model for the representation, which, however, is limited.
2. The deep clustering community has also adopted self-supervised learning to obtain more clustering-friendly representations and has achieved significant improvements on complex datasets like ImageNet. Therefore, generation conditioned on clustering structures also has the potential to bridge the gap between unconditional and conditional generation. I would say the studied problem has been well addressed in the community. Comparisons with conditioning on clustering should be included.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. When extending to class-conditional generation, is it required to fine-tune the model?
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 1
Limitations: The limitations should include the discussions on the necessity of a self-supervised encoder and how to obtain such an encoder for datasets, especially for other modalities such as text, speech.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for appreciating our clear writing and strong experiment results. Below we address the weaknesses (***W***) and questions (***Q***) raised by the reviewer.
***Importance of the studied problem***
We respectfully disagree with the reviewer that “the studied problem has been well addressed in the community.” Unconditional image generation has lagged behind its conditional counterpart for a long time in the literature, especially on complex data distributions such as ImageNet. As shown in Table 2, none of the prior works achieve an unconditional generation FID less than 5 on ImageNet 256x256, and the previous state-of-the-art (RDM-IN) requires the ImageNet training set during generation to achieve a 5.91 FID. In contrast, as shown in Table 3, state-of-the-art class-conditional methods can easily achieve an FID around or below 2, demonstrating a significant gap between unconditional and class-conditional generation.
Other reviewers have also acknowledged the difficulty and significance of the studied problem. Reviewer xRbx noted, “unconditional image generation has remained stagnant compared to conditional generation.” Reviewer zDRz stated, “the proposed method is designed to solve difficult problems very simply and intuitively.” Reviewer pEoh mentioned, “the paper shows very good and solid results on a relevant topic, touches upon the surface of very fundamental questions related to self-supervised methods and the training of generative models.”
Our paper proposes a novel method based on generating self-supervised representations to address this long-standing open problem in the community. The proposed RCG framework significantly improves the quality of unconditional generation, regardless of the specific form of the image generator. It achieves an unprecedented unconditional generation FID of **2.15**, bridging the long-standing performance gap between unconditional and class-conditional generation methods for the first time. We hope the reviewer could reconsider the importance of the studied problem and the rating of our paper in light of this clarification.
***Conditioning on instance representations (W1)***
Prior methods for unconditional generation that use instance representations require ***existing images*** to provide representations during generation, which is impractical for many generative applications. Moreover, none of the prior works use a generative model to accurately model the pre-trained self-supervised representation distribution. Our RCG framework is the first unconditional generation framework that generates pre-trained self-supervised representations ***from scratch*** and uses them as conditions for the image generator. This novel approach significantly boosts unconditional generation performance and rivals class-conditional generation methods, all without the need for any images during the generation process.
***Conditioning on clustering (W2)***
We agree that using pseudo-labels from clustering methods as class labels can be an option for unconditional generation. In fact, we have included this ablation study in Table 9(a), where we experimented with our image generator under different conditions, including clustering labels obtained from MoCo v3 representations. Conditioning on clustering labels achieves 6.60 FID and 121.9 IS, while conditioning on ground-truth class labels achieves 5.83 FID and 147.3 IS. Conditioning on our generated representations achieves 5.07 FID and 142.5 IS. These results demonstrate that clustering-based conditions perform worse than ground-truth class labels, whereas our generated representations outperform ground-truth class labels. This is because the generated representations provide richer semantic information to guide the generative process. Furthermore, common clustering methods require the dataset to exhibit clear and distinct groupings that clustering algorithms can easily identify, and they limit the diversity of conditions, as they cannot produce different conditions within the same cluster. We will include this discussion in the revision.
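As a sketch of the clustering-based condition ablated in Table 9(a), pseudo-labels can be obtained by running k-means on pre-trained features. The minimal numpy version below uses toy blob data; the actual ablation clusters MoCo v3 representations:

```python
import numpy as np

def kmeans_pseudo_labels(feats, k, iters=50, seed=0):
    """Crude k-means: returns one discrete pseudo-label per feature vector."""
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), size=k, replace=False)]
    for _ in range(iters):
        # assign each feature to its nearest center
        d = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned features
        for j in range(k):
            if (labels == j).any():
                centers[j] = feats[labels == j].mean(axis=0)
    return labels

rng = np.random.default_rng(1)
# two well-separated blobs standing in for feature clusters
feats = np.vstack([rng.normal(-3, 0.3, (100, 16)),
                   rng.normal(3, 0.3, (100, 16))])
labels = kmeans_pseudo_labels(feats, k=2)
```

Such pseudo-labels are discrete and identical for every sample in a cluster, which is exactly why they cannot provide different conditions within the same cluster, unlike continuous generated representations.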
***Extending to class-conditional generation (Q1)***
RCG seamlessly enables class-conditional image generation and achieves competitive performance, as shown in Table 3 (RCG, conditional (MAGE-L)). This is accomplished by training a class-conditional representation generator without the need to retrain or fine-tune the image generator. As shown in Table 11, training the RDM is very lightweight compared to training the image generator.
***Limitations***
We thank the reviewer for the suggestion. Applying RCG to other data types is beyond this paper's scope and could be an interesting future direction. It requires a pre-trained encoder to extract representations from the data, which are typically available off-the-shelf for common data types such as images, videos, texts, and speech [1, 2, 3, 4, 5, 6]. We will include this discussion in the revision.
[1] An Empirical Study of Training Self-Supervised Vision Transformers
[2] Spatiotemporal Contrastive Video Representation Learning
[3] A Large-Scale Study on Unsupervised Spatiotemporal Representation Learning
[4] SimCSE: Simple Contrastive Learning of Sentence Embeddings
[5] W2v-bert: Combining contrastive learning and masked language modeling for self-supervised speech pre-training
[6] Speech simclr: Combining contrastive and reconstruction objective for self-supervised speech representation learning
---
Rebuttal Comment 1.1:
Title: Thanks for the responses.
Comment: I thank the authors point out their ablation study about conditioning on cluster labels. So, I raised my score.
However, I still feel that the technical contribution of this paper is marginal, as I demonstrated in Weakness 1. Moreover, as shown in their results, the performance of conditioning on cluster labels is relatively good, though a little worse than the proposed method. That proves that generation conditioned on clustering structures has great potential to bridge the gap between unconditional and conditional generation. Image generation without labels does not lag so far behind its conditional counterpart with labels.
Strengths: **[S1]** The paper is well-motivated, addressing the importance of unconditional image generation for utilizing abundant data, which has remained stagnant compared to conditional generation.
**[S2]** The idea of utilizing self-supervised learning for image generation makes sense and is a novel idea.
**[S3]** The paper shows significant performance improvements.
Weaknesses: I found no weaknesses.
Technical Quality: 4
Clarity: 3
Questions for Authors: **[Q]** Is there any intuition as to why MoCo-v3 is the best representation for RCG?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: They addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for appreciating our motivation, novel idea, and significant performance improvements. Below we address the question raised by the reviewer.
***Why MoCo-v3 is the best representation for RCG***
We compare self-supervised representations from different methods in Table 7(a), and the representations from MoCo v3 achieve the best FID. This is likely because MoCo v3 uses an InfoNCE loss, which attracts positive samples and repels negative samples. Literature has shown that optimizing such an InfoNCE loss can maximize uniformity and preserve maximal information in the representation [1], thus providing substantial guidance for the image generator and leading to better and more diverse generation results.
Nonetheless, Table 7(a) also demonstrates that RCG achieves substantial improvements over the unconditional baseline using representations from various image encoders, including self-supervised encoders such as iBOT and DINO, as well as the supervised encoder DeiT. This shows that RCG can effectively utilize different self-supervised encoders and consistently improve the performance of unconditional generation.
[1] Understanding Contrastive Representation Learning through Alignment and Uniformity on the Hypersphere
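For reference, the InfoNCE objective discussed above can be sketched as a cross-entropy over a query-key similarity matrix with positives on the diagonal. This is a hedged numpy illustration using random vectors, not the actual MoCo v3 implementation (the temperature value is illustrative):

```python
import numpy as np

def info_nce(q, k, tau=0.2):
    """InfoNCE: cross-entropy where the i-th key is the positive for query i."""
    q = q / np.linalg.norm(q, axis=1, keepdims=True)
    k = k / np.linalg.norm(k, axis=1, keepdims=True)
    logits = (q @ k.T) / tau                      # (N, N) similarity matrix
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))           # positives on the diagonal

rng = np.random.default_rng(0)
q = rng.normal(size=(128, 64))
aligned = info_nce(q, q + 0.01 * rng.normal(size=(128, 64)))  # keys close to queries
random_keys = info_nce(q, rng.normal(size=(128, 64)))         # unrelated keys
```

Minimizing this loss attracts each positive pair and repels in-batch negatives, which is the mechanism argued above to maximize uniformity and preserve information in the representation.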
---
Rebuttal Comment 1.1:
Comment: Thank you for your response! After reading the other reviews and the authors' comments, I stick to my score and recommend acceptance. | Summary: This paper proposes a very simple unsupervised image generation framework that does not rely on human-labeled annotations without compromising generation quality. This framework has two stages: i) representation generator learning and ii) image generator learning. The representation generator is trained in the form of diffusion model training to take a noisy latent image as input and output the corresponding representation encoded by a self-supervised image representation model. The image generator is trained to generate the image corresponding to the given representation (encoded by the self-supervised representation model). This paper demonstrates that this simple framework is effective in achieving generation quality comparable to the counterpart supervised learning methods, regardless of architecture type.
Strengths: 1. Simple but effective framework for unsupervised image generation. The proposed method is designed to solve difficult problems very simply and intuitively, so it will be able to provide inspiration not only in this field but also in a variety of other fields.
2. Great presentation. It was very helpful in understanding this paper since it explained the information covered in each section in a very informative but concise manner.
3. A variety of experiments can support the authors' claim and the effectiveness of the proposed method.
Weaknesses: I do not have a major concern for this work. I only have two questions in the method.
1. When training the image generator, why was the representation of the SSL model used instead of the output of the representation generator trained in the previous step? Although the representation generator is trained to distill the generation capability of the SSL model, isn't it more appropriate to train under the same conditions as in the inference in which the output (representation) of the representation generator is used?
2. Is there any way for the two generators to be trained together in an end-to-end manner?
Technical Quality: 4
Clarity: 4
Questions for Authors: Please respond to the two questions I raised in the weaknesses section.
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: This paper properly deals with the limitation and potential societal impact in the supplementary material.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 9
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for appreciating our simple but effective framework, the strong experiment results, and the great presentation of our paper. Below, we address the two questions (***Q***) raised by the reviewer.
***Training the image generator using ground-truth representations vs. generated representations (Q1)***
The reviewer asks why we use the representations from the SSL model instead of the output of the representation generator during the training of the image generator. This is because the image generator needs both the representation as a condition and the corresponding ground-truth image as supervision during training. For a representation output by the unconditional representation generator, it is challenging to determine the corresponding image. Therefore, if we were to use generated representations as conditions for our image generator, we wouldn’t have the corresponding ground-truth images needed for training. Additionally, our design allows us to fully decouple the training of the representation generator and the image generator, making our framework more flexible to train.
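The decoupling described in this answer can be sketched schematically. Everything below is a toy stand-in (a random linear map as the frozen "SSL encoder", a fitted Gaussian as the "representation generator", and a least-squares map as the "image generator"), not the paper's actual models; the point is only that stage 2 trains on ground-truth representation–image pairs, while inference conditions on generated representations:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 32))            # toy "images"
enc = rng.normal(size=(32, 8)) / np.sqrt(32)
R = X @ enc                               # frozen "SSL" representations

# Stage 1 (stand-in): fit an unconditional sampler over representations.
mu, cov = R.mean(axis=0), np.cov(R, rowvar=False)

# Stage 2: train the "image generator" on (ground-truth rep, image) pairs;
# generated reps have no paired ground-truth image, hence the decoupling.
W, *_ = np.linalg.lstsq(R, X, rcond=None)

# Inference: sample a representation, then decode it into an "image".
r = rng.multivariate_normal(mu, cov)
x_gen = r @ W
print(x_gen.shape)  # (32,)
```

The two stages touch disjoint parameters, so either can be retrained independently, mirroring the flexibility argument above.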
***End-to-end training (Q2)***
This is an excellent suggestion. Training the representation generator and the image generator together could potentially further enhance the performance of the entire system. However, similar to ***Q1***, one possible issue is that the denoised representation output from the representation generator might not match the ground-truth representation and the corresponding ground-truth image, especially at high noise levels. In this scenario, using the denoised representation as the condition for the image generator while using the ground-truth image for supervision might cause inconsistencies during training. Nonetheless, we believe this would be an interesting future direction to explore.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors addressing my questions. I will increase my rating.
---
Rebuttal Comment 1.2:
Title: Latent variables with non-collapsing posteriors
Comment: Just think of the representation space of the MoCo encoder as a latent space and the diffusion model trained over this space as a prior over the latents. You can induce a reasonable posterior distribution over a finite sample of these latents using the softmax formed by scaled cos sim between the MoCo representation for an input image X and a finite sample of representations generated by the diffusion-based prior over MoCo representations. You can sample a generated representation based on that softmax and use it to condition generation of X. You could also play around with, eg, varying the temperature of the softmax in order to limit the information capacity of the latent variable (higher softmax entropy means less info about X). We can think of the softmax in contrastive learning, when evaluated only over negative samples, as estimating a conditional distribution over a non-parametric dictionary of representations that assigns higher probability to representations which are most similar to the representation of the "positive sample". This conditional should work reasonably well for sampling "semantic" information about X to use in conditioning. This also avoids a train/test mismatch in the representations used for conditioning the decoder, since they're always samples from the prior.
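A minimal sketch of the softmax-over-scaled-cosine-similarity sampling described above; the shapes, temperature, and names are illustrative placeholders, not tied to MoCo or any particular prior:

```python
import numpy as np

def sample_latent(rep_x, prior_reps, temperature=0.1, rng=None):
    """Sample one of `prior_reps` as the latent for `rep_x`, via a
    softmax over scaled cosine similarities (higher temperature =
    higher entropy = less information about the input)."""
    if rng is None:
        rng = np.random.default_rng()
    a = rep_x / np.linalg.norm(rep_x)
    P = prior_reps / np.linalg.norm(prior_reps, axis=1, keepdims=True)
    logits = (P @ a) / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return prior_reps[rng.choice(len(prior_reps), p=probs)], probs

rng = np.random.default_rng(0)
reps = rng.normal(size=(64, 16))
x_rep = reps[3] + 0.01 * rng.normal(size=16)   # input near prior sample 3
latent, probs = sample_latent(x_rep, reps, rng=rng)
print(probs.argmax())  # 3: the most similar prior sample dominates
```

Raising the temperature flattens `probs`, which is the information-capacity knob described in the comment.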
Approaches like the one described above decouple decisions about what information to cram in a latent variable and how to decode that information to produce a generated image. In the olden days, with deep VAEs and such, there were often issues with "posterior collapse" when adding a more powerful decoder p(x|z). Basically, z would be ignored since the model for p(x|z) was powerful enough to just act like p(x) without much impact on the training likelihoods. In the "representation conditioned" setting described above, we can decide what information should be in the latent variable (eg, whatever MoCo happens to capture), and how much information to condition on when generating X. In the setup described above, the amount of info about X is limited to the log of the number of prior samples in the contrastive softmax minus the entropy of the contrastive softmax. This lets us define a nice latent space and non-collapsing posteriors over that latent space which represent strictly controlled amounts of "semantic" information which can guide data generation.
I also think this setup would work fine without the diffusion model over MoCo representations. You could probably get away with just defining the prior as uniform over the appropriate hypersphere. This would make training and sampling a bit quicker. It's also straightforward to extend this approach to hierarchical latents, latents with variable bandwidth, etc. | Rebuttal 1:
Rebuttal: We thank all reviewers for providing lots of insightful and constructive feedback. We will definitely improve our manuscript accordingly. We are glad to see the commonly recognized strengths highlighted by the reviewers:
1. The presentation of the paper is clear and concise (zDRz, fB14, pEoh).
2. The problem studied in the paper is difficult (zDRz), of importance (xRbx), and relevant (pEoh).
3. The introduced framework is novel (xRbx), sound (pEoh), and intuitive (zDRz). It “will be able to provide inspiration not only in unconditional image generation but also in a variety of other fields” (zDRz), “opening up interesting research directions” (pEoh).
4. The empirical results are extensive and robust (zDRz, pEoh). The performance improvement on unconditional generation is significant (xRbx, fB14).
We would like to reemphasize a major technical contribution of this work: we demonstrate the possibility of ***unconditionally generating a representation*** pre-trained by state-of-the-art self-supervised learning methods. These generated representations can be used as conditions to improve the unconditional generation performance of various image generators. Such a representation generator is key to enabling unconditional generation without relying on ground-truth images during the generative process.
Furthermore, we note that the contribution of our work extends beyond the technical aspects. The ground-breaking empirical finding that unconditional generation can rival the performance of conditional generation by generating and conditioning on representations is a significant contribution. We believe this approach and the promising results have the potential to liberate image generation from the constraints of human annotations and rekindle the community’s interest in the fundamental problem of unconditional generation.
As there are no outstanding common questions, we will address each reviewer’s specific questions in separate responses. We are also happy to continue the discussion if the reviewers have any further questions or concerns. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Oja's Algorithm for Streaming Sparse PCA | Accept (poster) | Summary: This paper studies the problem of finding the top eigenvector from samples.
Given $n$ samples drawn from a distribution whose mean is $0$ and covariance matrix is $\Sigma\in\mathbb{R}^{d\times d}$, we would like to find a vector $\hat{v}$ based on these $n$ samples such that $\hat{v}$ and the top eigenvector of $\Sigma$, $v$, have a small $\sin^2$ error which is defined as $1-\langle\hat{v},v\rangle^2$.
Furthermore, we assume that the sparsity of $v$ is $s$ and hence we expect the output $\hat{v}$ to also have sparsity $s$.
The goal in this paper is to design a single pass algorithm using $O(nd)$ time and $O(d)$ space such that the error is minimized.
Oja's algorithm is a well-known algorithm for finding the top eigenvector and works as follows.
In the $t$-th step, we update our current solution towards the $t$-th sample.
The authors applied Oja's algorithm to achieve the goal and showed that the error is at most $O(\frac{\sigma_*^2 s\log d}{n})$ where $\sigma_*^2$ depends on the top two eigenvalues.
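The pipeline summarized above can be sketched as follows: a single streaming Oja pass, then a hard threshold of the Oja vector to its $k$ largest-magnitude entries. This is a schematic with illustrative data, step size, and constants, not the paper's prescribed tuning:

```python
import numpy as np

def oja_sparse(samples, eta, k, seed=0):
    """One streaming pass of Oja's algorithm, then hard-threshold the
    Oja vector to its k largest-magnitude entries (O(nd) time, O(d) space)."""
    rng = np.random.default_rng(seed)
    d = samples.shape[1]
    w = rng.normal(size=d)
    w /= np.linalg.norm(w)
    for x in samples:                      # update towards each sample
        w += eta * x * (x @ w)
        w /= np.linalg.norm(w)
    support = np.argsort(np.abs(w))[-k:]   # estimated support
    v = np.zeros(d)
    v[support] = w[support]
    return v / np.linalg.norm(v)

# Toy spiked stream with an s-sparse top eigenvector.
rng = np.random.default_rng(1)
d, n, s = 50, 2000, 5
v1 = np.zeros(d)
v1[:s] = 1 / np.sqrt(s)
X = rng.normal(size=(n, 1)) * 3 * v1 + 0.5 * rng.normal(size=(n, d))
v_hat = oja_sparse(X, eta=0.01, k=s)
print(1 - (v_hat @ v1) ** 2)              # small sin^2 error
```

The thresholding step discards the off-support noise accumulated by the dense Oja vector, which is why the post-threshold $\sin^2$ error can be much smaller than the dense one.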
Strengths: - The problem seems to be a natural question and well-motivated.
- The general presentation is good.
The readers of all levels of expertise should be able to follow the main idea in this paper.
Weaknesses: - In terms of techniques, the main idea is to apply the known Oja' algorithm.
I am not sure if there are any fundamental new ideas introduced in this paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: Note:
- Line 31: $r_{\textsf{eff}}$ is not defined yet in the main text.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: .
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your kind words regarding the motivation of our problem and the simplicity of the presentation. We address your primary concerns below:
**[Re: Novelty of fundamental ideas]:** While our algorithm builds on top of Oja’s algorithm, hard thresholding of Oja’s vector for Sparse PCA has not been proposed or analyzed before. We introduce several novel ideas in this analysis and respectfully disagree with the claim of the lack of fundamental new ideas in our work. In what follows, we describe the motivation for the problem considered in our work, how it compares with relevant works under the computational and statistical regime of interest, and bring out the new ideas involved in our analysis.
### Importance of the Problem Setting
*Motivation*: Sparse PCA has been a long-standing problem, with many algorithms being proposed in the literature, motivated both by theoretical ideas and practical applications. It has seen a lot of interest from the Computer Science and Statistics communities. Table 1 in our paper provides a detailed summary of various important contributions over the last two decades, along with their statistical performance as a function of various computational and model parameters.
*Problem Statement*: In this paper, we consider a variant of the problem that has been relatively unexplored in the literature - *Can Sparse PCA be performed at the statistically optimal rate in linear time and space, without placing strong structural assumptions on the population covariance matrix?*. The only papers which operate under this tight statistical and computational budget, to the best of our knowledge, are Johnstone and Lu [JL09], Yang and Xu [YX15], and Wang and Lu [WL16]. However, all of them require a spiked population covariance model for their analysis. Figure 1(a) compares the performance of these algorithms on a population covariance model which slightly deviates from the spiked model and shows how critical this assumption is for their performance.
*Our Contribution*: This question on the efficiency of PCA, without sparsity, has been asked before (see e.g. Jain et. al [2016]) and Oja's algorithm has emerged as one of the clear winners, achieving the statistically optimal rate under tight computational and space constraints. However, its extension under the sparsity assumption is challenging and has not been analyzed before. In fact, this algorithm, despite its simplicity, has not been proposed in its current form in the literature before. We therefore believe that this algorithm and its analysis provide a valuable step in the direction of **computationally and space-efficient Sparse PCA** algorithms which work without strong assumptions on the data distribution.
### Novel Ideas in Our Analysis
1. We would like to point out that the analysis of thresholding, as pointed out by all other reviewers, is very different and novel, diverging significantly from analyzing regular Oja’s algorithm. In particular, it introduces a tight analysis of a system of linear recurrences, which is novel, as pointed out by Reviewer zu7o.
2. Our analysis further offers theoretical insights into the behavior of the entries of the Oja vector, which, to the best of our knowledge, has not been attempted before. In particular, Lemma 3.13 and 3.14 provide the first results of their kind and show that although the entries of the Oja vector do not concentrate, it is indeed possible to show a tail-bound on their deviation.
3. The closest algorithm to our work that we are aware of analyzes soft-thresholding of the Oja vector under a stronger assumption on the covariance model and an initialization close to the population eigenvector (see Wang and Lu, 2016). Furthermore, they provide a PDE-based asymptotic analysis of the problem, whereas our work provides a sharp and non-asymptotic rate of convergence.
**[Re: Definition of $r_\mathsf{eff}$]:** We define $r_\mathsf{eff}$ in Lines 4-5.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response. I will take this into consideration during the AC-reviewer discussion.
Definition of $r_{\text{eff}}$: It may be helpful to formally define it after the abstract.
---
Rebuttal 2:
Comment: Dear Reviewer qSPb,
Thank you for your response. As per your suggestion, we will define $r_\mathsf{eff}$ in an equation after the abstract to make it easy to spot. We would also like to take this opportunity to reiterate that the proof techniques in our paper are completely different from those in the previous papers on streaming PCA and matrix products, which were analyzed under bounded $r_\mathsf{eff}$. We hope that our response has conveyed the novelty of our work. If you have any further questions that we can answer, please let us know. | Summary: The work proposes a one-pass Oja's algorithm that achieves the minimax error bound for high-dimensional sparse PCA under standard technical conditions.
Strengths: The paper is extremely well written. The proposed one-pass Oja's algorithm is novel with detailed convergence analysis and convincing numerical experiments to support author's claims. I appreciate the thorough literature review highlighting how the current paper improves on over the previous works (in particular Table 1). The relaxation on effective rank assumption is particularly notable in the results. The mathematical ideas in the proofs are easy to follow and contain several new techniques.
Weaknesses: In general, I like the paper as it is quite clear and mathematically sound. In some minor places, the exposition can be slightly improved by defining terminologies before using them (for instance, please define the $\sin^2$ error before referring to it).
Technical Quality: 4
Clarity: 4
Questions for Authors: Q: Can something be said along the lines of these results for other top eigenvectors of $\Sigma$ (not just $v_1$)?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your kind words regarding our presentation, the novelty of our algorithm, the motivation of our problem, and the contribution to literature. We address your primary concerns below:
**[Re: Top-k principal components]** Recent results provide a black-box way to obtain k-PCA given an algorithm to extract the top eigenvector (see [a]) which could be employed treating our algorithm as a 1-PCA oracle. This deflation-styled approach has also been proposed in [b] in the context of Sparse PCA.
We also believe that an analysis such as [c] can be extended to the sparse setting to obtain top-k principal components simultaneously via QR decomposition and thresholding. This could be an interesting direction for future work.
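A minimal sketch of the deflation step underlying this reduction: after one component is recovered, project it out so that the next eigenvector becomes the leading one. Here the exact eigenvector stands in for the output of a 1-PCA oracle, purely for illustration:

```python
import numpy as np

def deflate(cov, v):
    """Project out a recovered unit component v so that the next
    eigenvector of cov becomes the leading one (Hotelling-style)."""
    P = np.eye(len(v)) - np.outer(v, v)
    return P @ cov @ P

# Toy check: deflating the top eigenvector exposes the second one.
A = np.diag([5.0, 3.0, 1.0])
w, V = np.linalg.eigh(A)
top = V[:, np.argmax(w)]
A2 = deflate(A, top)
w2 = np.linalg.eigvalsh(A2)
print(np.max(w2))  # the former second eigenvalue (3) now leads
```

Iterating this with a sparse 1-PCA routine in place of the exact eigenvector is the deflation-style approach referenced above.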
References:
[a] Jambulapati, A., Kumar, S., Li, J., Pandey, S., Pensia, A. & Tian, K.. (2024). Black-Box k-to-1-PCA Reductions: Theory and Applications. Proceedings of Thirty-Seventh Conference on Learning Theory, in Proceedings of Machine Learning Research 247:2564-2607.
[b] Mackey, Lester. "Deflation methods for sparse PCA." Advances in neural information processing systems 21 (2008).
[c] Allen-Zhu, Zeyuan, and Yuanzhi Li. "First efficient convergence for streaming k-pca: a global, gap-free, and near-optimal rate." In 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), pp. 487-492. IEEE, 2017.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their insightful explanations. | Summary: The paper studies the problem of streaming sparse PCA under iid data. That is, we have $x_1,...,x_n \sim \mathcal D$ iid vectors in $\mathbb R^d$ which are revealed to us in an online fashion. We want to estimate the top eigenvector of $\Sigma = \mathbb E[xx^\intercal]$. We assume that this top eigenvector $v_1$ is $s$-sparse, for $s = O(\frac{n}{\log(n)})$. The error between the real and estimated top eigenvector is computed using the $\sin^2$ error metric $(1-\langle v_1, \tilde v_1\rangle^2)$.
This paper shows that Oja's algorithm can be used to both find the support of $v_1$ and the values in $v_1$ (assuming we are given some value $k \geq s$ and correctly choose a step size parameter).
This algorithm runs in $O(nd)$ time and $O(d)$ space.
The key technical assumptions made are:
- The data is iid subgaussian
- The sparsity of $v_1$ is $s = O(\frac{n}{\log (n)})$
- The effective dimension (ratio of trace to spectral norm) of the covariance matrix is at most $O(\frac{n}{\log(n) \sigma^2})$ where $\sigma^2 = \frac{\lambda_1}{\lambda_1 - \lambda_2} \cdot \frac{\lambda_2}{\lambda_1 - \lambda_2}$ measures the (square) of the singular value gap between the top two eigvals of $\Sigma$.
- The smallest nonzero entry of the (unit-vector) top eigenvector $v_1$ is $\tilde\Omega(\frac{d^{1/8}}{n^{1/4}})$.
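The spectral quantities in the assumptions above can be computed directly from a covariance matrix; a small NumPy sketch using the definitions as stated (effective dimension $\mathrm{tr}(\Sigma)/\lambda_1$ and $\sigma^2 = \frac{\lambda_1}{\lambda_1-\lambda_2}\cdot\frac{\lambda_2}{\lambda_1-\lambda_2}$):

```python
import numpy as np

def spectral_quantities(cov):
    """Effective rank tr(cov)/lambda_1 and the gap parameter
    sigma^2 = (lambda_1 lambda_2) / (lambda_1 - lambda_2)^2."""
    lam = np.sort(np.linalg.eigvalsh(cov))[::-1]
    l1, l2 = lam[0], lam[1]
    r_eff = np.trace(cov) / l1
    sigma_sq = (l1 / (l1 - l2)) * (l2 / (l1 - l2))
    return r_eff, sigma_sq

Sigma = np.diag([4.0, 2.0, 1.0, 1.0])
r_eff, sigma_sq = spectral_quantities(Sigma)
print(r_eff, sigma_sq)  # tr/λ1 = 8/4 and (4/2)·(2/2)
```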
In contrast to prior works, this work achieves better $\sin^2$ error that prior $O(nd)$ time algorithms and $O(d)$ space algorithms. It also makes no assumption about the quality of the starting vector (i.e. it's not a local convergence result, it's a global convergence result).
The results are essentially all theoretical, with one experiments used to show that Oja works well here, and another used to elucidate which theoretical bounds are loose.
Strengths: The paper is a nice contribution to the literature, and it's well written. Sparse PCA is an important problem, and handling it in low space and time is important as well. The error metrics and assumptions are all reasonable. The paper is written clearly. It's just sorta all around solid.
The result is original in that it has a clear goal: achieve the $\sin^2$ error of large-space or large-time algorithms (those that use $\omega(nd)$ time or $\omega(d)$ space), but only using $O(nd)$ time and $O(d)$ space. It's especially nice that a pretty naive application of Oja's algorithm can achieve this.
There's also some nice novelty in the proof technique, which bounds the 2nd and 4th moments of the entries of the output of Oja's algorithm. In particular, the authors tighten a bound from the prior work by designing and solving a system of linear recurrences. A cool math setup that I do not often see.
I'm not an expert in the sparse PCA world, nor the streaming PCA world, and certainly not streaming sparse PCA. That said, assuming the authors are not omitting any relevant prior works, this result on low error in very low space and time seems pretty cool. It seems like a particularly nice step in the sparse PCA literature.
Weaknesses: There's a few notation inconsistency issues, some minor gripes. Nothing I'm really worried about. I'll push it all to the "questions" section below.
I accept this paper for publication.
Technical Quality: 4
Clarity: 4
Questions for Authors: None of these are game-breaking, and many of these are minor typos. Feel free to ignore whatever feels unfair to you. But, do at least make the notation for the initial vector consistent and make the figures easy to read.
1. Annoyingly, the authors __very often__ swap the symbols $y_0$, $u_0$, $w_0$, and $z_0$. Please fix this.
1. Figure 1 is too hard to read. The letters are too small. The axes have no labels. The error is negative somehow (if it's plotting $\pm1$ standard deviation, which is making the error bars negative, consider using 25th and 75th quantiles instead?).
1. It's not clear if Lemma 3.1 and Theorem 3.2 allow us to use over/underestimates of $\eta$, or if the theorems are very tied to that exact value of $\eta$. Seems worth discussing.
1. It's not clear why Theorem 3.2 requires $k=s$ instead of $k\geq s$. Discussing this would be nice. Returning a vector whose support is $\log(1/\delta)$ times larger than $s$ seems fine to me, if that's the issue here.
1. Prop 3.4 is written a bit unclearly, namely around the "with sparsity parameter, $n=$" part, since $n$ is not a sparsity parameter.
1. Theorems 3.5 and 3.7 uses failure probability $d^{-10}$, which is fine I guess? But why not just use a $\delta$?
1. Remark 3.6 seems like it might make more sense to have back around assumption 2
1. Section 3.4 strikes me as a very standard analysis style. Not sure why you point to [KLL+23] specifically. You can say it's standard, and point to [KLL+23] as an example of this, maybe? If I'm missing something and it's not standard, lemme know.
1. Figure 2 is also too hard to read, especially in print. The letters are too small. The series look too much like each-other. The y-axis should be more specific in the error. It's not really clear what the dotted / population lines are showing -- is it $\log(E[e_i^\intercal B_n u_0])$, or is it $E[\log(e_i^\intercal B_n u_0)]$? Is it something else? What exactly does "error" mean here?
1. Line 221 can use $E[r_i | u_0]$ imo
1. Idk why equation (4) uses absolute values and $\pm$ on the non-top-eigenvalue terms. Isn't $E[B_n]$ PSD, and thus the second term guaranteed to be nonnegative?
1. Line 221: maybe mention that $E[B_n] = (I+\eta \Sigma)^n$?
1. Line 232 this line about [SSM11] taking $\lambda_1 = d^\alpha \rightarrow \infty$ is kinda confusing because it's not a scale-invariant claim. Is it like a condition number that's getting large?
1. Line 238 should mention that Section 4 explains the technique a bit more in detail
1. Line 247 should really point to Section 4 as well, explaining the technique in a bit more detail
1. [Line 256] If you have space, it'd be nice to understand why Theorem 3.5 needs a more general argument than $U = e_i$.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your kind and detailed feedback and comments about the clarity and presentation of our work, the novelty of our analysis, and the significance of our contribution to the Sparse PCA literature. We will correct all the typographical issues pointed out and will not address them here individually. We answer your primary concerns and suggestions below:
**[Re: $y_0, u_0, w_0$, and $z_0$]:** Thank you for pointing it out. We will take care to fix them in the final manuscript.
**[Re: Over/under estimates of $\eta$]:** For step sizes, we follow the convention in Balsubramani et al. (2015), De Sa et al. (2015), Jain et al. (2016), Li et al. (2017), Allen-Zhu et al. (2016), and Huang et al. (2021), where Oja’s algorithm is analyzed without sparsity and the optimal learning rate requires knowledge of the gap, $\lambda_{1} - \lambda_{2}$, and other model parameters to attain the statistically optimal rate. Our sin-squared error is roughly of the form $O\left(\eta\lambda_1+\exp(-n\eta(\lambda_1-\lambda_2))\right)$. The optimal $\eta$ ensures that the first term dominates, yielding our optimal error rates. However, a small $\eta$, resulting from plugging in an upper bound on the eigengap $\lambda_1-\lambda_2$, may make the second term dominate, leading to a suboptimal sin-squared error. We will clarify this further.
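The tradeoff in the bound above can be checked numerically; the sketch below drops all constants and plugs in arbitrary illustrative values for $n$, $\lambda_1$, and the gap:

```python
import numpy as np

# Constant-free error surrogate: eta*lambda_1 + exp(-n*eta*gap).
n, lam1, gap = 10_000, 2.0, 1.0
etas = np.logspace(-5, -1, 200)
err = etas * lam1 + np.exp(-n * etas * gap)
eta_star = etas[np.argmin(err)]     # roughly log(n*gap/lam1)/(n*gap)
eta_small = eta_star / 100          # as if the gap were overestimated 100x
print(eta_star * lam1 + np.exp(-n * eta_star * gap),
      eta_small * lam1 + np.exp(-n * eta_small * gap))
# The mistuned step size is worse by orders of magnitude: the exp term dominates.
```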
**[Re: Figure 1(a) and 2(a), (b)]:** We have provided revised figures in the global author rebuttal document along with detailed explanations of each axis and the legend followed. Figure 1(a) plots the sine-squared error with iterations of the algorithm. The error bars currently represent standard deviations across 100 runs, leading to a negative error. We have now fixed that to plot the 25th and 75th percentile bars in the revised figure and added labels to the axes.
Figures 2 (a), and (b) have also been revised with a larger font size and clear axis labels. The y-axis in Figure 2(a) has been corrected to read the “value” of the referenced quantity instead of “error” and dotted lines have been removed to avoid confusion. The line width has been increased to enhance clarity. The lines labelled “sample” plot $\log(|e_{i}^{\top}B_{n}u_{0}|)$, whereas the “population” curves plot $\log(|\mathbb{E}[e_{i}^{\top}B_{n}u_{0}]|)$.
**[Re: $k \geq s$ in Theorem 3.2]:** Our probability boosting argument for Algorithm 3 requires a distance metric such that when the true support $S \subseteq \hat{S}$, $d(S,\hat{S})\leq \epsilon$ for some small $\epsilon>0$. And not only that, we also crucially need that $d(S,\hat{S})\leq \epsilon$ implies that $\hat{S}$ contains $S$. This is easily done for $k=s$ since the metric is just the indicator function, which returns 0 if two sets are equal and 1 otherwise. With $k > s$, we were as yet unable to create such a metric that would be amenable to boosting to high probability.
**[Re: Prop 3.4]:** Thank you for pointing that out. It should read “with sparsity parameter s, such that n = …”
**[Re: Failure probability and $d^{-10}$]:** This is primarily to show that the failure probability can be $\frac{1}{\mathsf{poly}(d)}$ without affecting the sample complexity or the error by more than a constant multiplicative factor.
**[Re: Section 3.4 and [KLL+23]]:** Although Section 3.4 is reminiscent of a standard median-of-means type analysis, we were unaware of other existing algorithms that extended this framework to vectors, apart from techniques such as the geometric median (also suggested by Jain et al. (2016) for probability boosting), which may or may not return a sparse vector. One advantage of [KLL+23] is that they choose a vector from the given set.
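As an illustrative stand-in for this selection-style boosting (choose, from the candidates produced by independent runs, the one close to the most others, up to eigenvector sign), with a placeholder radius and metric:

```python
import numpy as np

def boost_pick(candidates, radius):
    """Return the candidate unit vector close (up to sign) to the
    largest number of other candidates -- a selection-style booster."""
    dists = np.array([[min(np.linalg.norm(u - v), np.linalg.norm(u + v))
                       for v in candidates] for u in candidates])
    counts = (dists <= radius).sum(axis=1)
    return candidates[np.argmax(counts)]

rng = np.random.default_rng(0)
good = np.array([1.0, 0.0, 0.0])
cands = [good + 0.05 * rng.normal(size=3) for _ in range(7)]
cands += [rng.normal(size=3) for _ in range(3)]          # failed runs
cands = np.array([c / np.linalg.norm(c) for c in cands])
picked = boost_pick(cands, radius=0.3)
print(abs(picked @ good))  # close to 1: a clustered "good" run wins
```

Because the chosen vector is one of the candidates, sparsity of the individual runs is preserved, matching the advantage of [KLL+23] noted above.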
**[Re: Eq(4) and absolute value terms]:** As described on Line 221, the $\pm$ notation is used here to describe the bound on the deviation. That’s why the second term has an absolute value, which is an upper bound on the deviation.
**[Re: Line 232 involving [SSM11]]:** Thank you for pointing this out. Yes, you are indeed correct, this is indeed a condition number since the second eigenvalue is 1 here. We will clarify this in the revised manuscript.
**[Re: General argument for $U=e_i$]:** We use other matrices in place of U, such as U=I_S, in the proof to handle truncation assuming knowledge of the true support. Lemma A.5.1 provides a sketch of which terms come into play and the corresponding values of U which are important. We included that in the appendix in the interest of space and can provide a brief sketch in the extended manuscript after revision. | Summary: The paper studies Principal Component Analysis with O(d) space and O(nd) time, where n is the number of datapoints and d is their dimensionality.
The authors provide the first single-pass algorithm that, under a general $\Sigma$ matrix whose top principal eigenvector is $s$-sparse, manages to find a close enough vector in the sense of the $\sin^2$ error. Their main theorem is Th. 1.1, which holds under a structural assumption that the effective rank (the ratio of the trace to the principal eigenvalue of the population covariance matrix $\Sigma$) is not too large (the bound involves the spectral gap and the top eigenvalue).
Their algorithm relies on Oja's algorithm, and the authors show that w.h.p. the Oja vector, when initialized by a random unit vector, will actually converge to an output whose top $k$ entries in terms of magnitude will include the true support of the $s$-sparse $v_1$. Then, the authors can use the recovered support and achieve minimax-optimal sparse PCA, thus improving upon several prior works.
Strengths: +very interesting and well-motivated problem
+clean framework and clean algorithm
+novel analysis and simple algorithm that improves upon several prior works
Weaknesses: -no serious weaknesses
Technical Quality: 3
Clarity: 3
Questions for Authors: Some typos:
-matrix multiplication constant is not 2.732 as written.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: -
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your kind words regarding the problem statement considered, the simplicity and performance of our algorithm, and the novelty of our analysis. We will correct the matrix multiplication constant to ~2.372 based on recent developments (see [a]).
References:
[a] Williams, Virginia Vassilevska, Yinzhan Xu, Zixuan Xu, and Renfei Zhou. "New bounds for matrix multiplication: from alpha to omega." In Proceedings of the 2024 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 3792-3835. Society for Industrial and Applied Mathematics, 2024.
---
Rebuttal Comment 1.1:
Title: post-rebuttal
Comment: The reviewer has read the author's response and keeps the score unchanged. | Rebuttal 1:
Rebuttal: We want to first thank all the reviewers for their valuable suggestions and insightful feedback. We believe we have addressed nearly all of their main technical questions. In what follows, we will address some important points each reviewer has raised. We will correct all the typographical issues pointed out and will not address them here.
### **Re: Over/under estimates of $\eta$ (Reviewer zu7o)**
For step sizes, we follow the convention in Balsubramani et al. (2015), De Sa et al. (2015), Jain et al. (2016), Li et al. (2017), Allen-Zhu et al. (2016), and Huang et al. (2021), where Oja’s algorithm is analyzed without sparsity and the optimal learning rate requires knowledge of the gap, $\lambda_{1} - \lambda_{2}$, and other model parameters to attain the statistically optimal rate. Our sin-squared error is roughly of the form $O\left(\eta\lambda_1+\exp(-n\eta(\lambda_1-\lambda_2))\right)$. The optimal $\eta$ ensures that the first term dominates, yielding our optimal error rates. However, a small $\eta$, resulting from plugging in an upper bound on the eigengap $\lambda_1-\lambda_2$, may make the second term dominate, leading to a suboptimal sin-squared error. We will clarify this further.
### **Re: $k \geq s$ in Theorem 3.2 (Reviewer zu7o)**
Our probability boosting argument for Algorithm 3 requires a distance metric such that when the true support $S \subseteq \hat{S}$, $d(S,\hat{S})\leq \epsilon$ for some small $\epsilon>0$. And not only that, we also crucially need that $d(S,\hat{S})\leq \epsilon$ implies that $\hat{S}$ contains $S$. This is easily done for $k=s$ since the metric is just the indicator function which returns 0 if two sets are equal and 1 otherwise. With $k > s$, we were as yet unable to create such a metric that would be amenable to boosting to high probability.
### **Re: Top-k principal components (Reviewer jbhG)**
Recent results provide a black-box way to obtain k-PCA given an algorithm to extract the top eigenvector (see [a]) which could be employed treating our algorithm as a 1-PCA oracle. This has also been proposed in [b]. We also believe that an analysis such as [c] can be extended to the sparse setting to obtain top-k principal components simultaneously via QR decomposition and thresholding. This could be an interesting direction for future work.
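To illustrate the deflation route mentioned in [b], here is a small, self-contained Python sketch that obtains the top-k components by repeatedly calling a 1-PCA oracle (power iteration stands in for the oracle) and deflating. This is a toy sketch on an explicit matrix, not the streaming setting of our paper.

```python
import math
import random

def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def normalize(v):
    nrm = math.sqrt(sum(x * x for x in v))
    return [x / nrm for x in v]

def top_eigpair(A, iters=500, seed=0):
    # Power iteration stands in for an arbitrary 1-PCA oracle.
    rng = random.Random(seed)
    v = normalize([rng.gauss(0, 1) for _ in A])
    for _ in range(iters):
        v = normalize(mat_vec(A, v))
    lam = sum(x * y for x, y in zip(v, mat_vec(A, v)))  # Rayleigh quotient
    return lam, v

def k_pca_by_deflation(A, k):
    # Hotelling deflation: subtract lam * v v^T, then re-run the oracle.
    A = [row[:] for row in A]
    comps = []
    for _ in range(k):
        lam, v = top_eigpair(A)
        comps.append((lam, v))
        for i in range(len(A)):
            for j in range(len(A)):
                A[i][j] -= lam * v[i] * v[j]
    return comps

# Toy symmetric matrix with eigenvalues 5, 3, 1 on the standard basis.
A = [[5.0, 0.0, 0.0], [0.0, 3.0, 0.0], [0.0, 0.0, 1.0]]
comps = k_pca_by_deflation(A, 2)
print([round(lam, 4) for lam, _ in comps])  # -> [5.0, 3.0]
```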
### **Re: Novelty of fundamental ideas (Reviewer qSPb)**
While our algorithm builds on top of Oja’s algorithm, hard-thresholding of Oja’s vector for Sparse PCA has not been proposed or analyzed before. The closest algorithm analyzes soft-thresholding under a stronger assumption on the covariance model and an initialization close to the population eigenvector (see Wang and Lu, 2016). We would also like to point out that the analysis of thresholding, as pointed out by all other reviewers, is significantly different from analyzing regular Oja’s algorithm. In particular, it introduces a tight analysis of a system of linear recurrences, which is novel, as pointed out by Reviewer zu7o. Our analysis offers theoretical insights into the behavior of the entries of the Oja vector, which, to the best of our knowledge, has not been attempted before. Please refer to the reviewer rebuttal for a more detailed response.
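To make the idea concrete, the following is a toy Python sketch (a minimal variant of our own devising, not the paper's exact algorithm or tuned constants): run plain Oja updates on a stream drawn from a spiked covariance $I + \lambda vv^\top$ with sparse $v$, then hard-threshold the final Oja vector to its $s$ largest-magnitude entries.

```python
import math
import random

rng = random.Random(0)
d, s, lam = 20, 3, 9.0
support = {0, 1, 2}
v = [1 / math.sqrt(s) if i in support else 0.0 for i in range(d)]

def sample():
    # One draw from N(0, I + lam * v v^T): isotropic noise plus a rank-one spike.
    g = rng.gauss(0, 1)
    return [rng.gauss(0, 1) + math.sqrt(lam) * g * vi for vi in v]

def normalize(w):
    nrm = math.sqrt(sum(x * x for x in w))
    return [x / nrm for x in w]

# Plain Oja iterations over the stream.
eta, n = 0.003, 6000
w = normalize([rng.gauss(0, 1) for _ in range(d)])
for _ in range(n):
    x = sample()
    xw = sum(xi * wi for xi, wi in zip(x, w))
    w = normalize([wi + eta * xw * xi for wi, xi in zip(w, x)])

# One hard-thresholding step: keep the s largest-magnitude entries.
keep = set(sorted(range(d), key=lambda i: -abs(w[i]))[:s])
w_sparse = normalize([wi if i in keep else 0.0 for i, wi in enumerate(w)])
```

In this synthetic run, the thresholded vector recovers the planted support and aligns closely with $v$; the illustrative step size and sample size were chosen generously rather than optimally.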
References:
[a] Jambulapati, A., Kumar, S., Li, J., Pandey, S., Pensia, A. & Tian, K. (2024). Black-Box k-to-1-PCA Reductions: Theory and Applications. Proceedings of the Thirty-Seventh Conference on Learning Theory, PMLR 247:2564-2607. Available from https://proceedings.mlr.press/v247/jambulapati24a.html.
[b] Mackey, Lester. "Deflation methods for sparse PCA." Advances in neural information processing systems 21 (2008).
[c] Allen-Zhu, Zeyuan, and Yuanzhi Li. "First efficient convergence for streaming k-pca: a global, gap-free, and near-optimal rate." In 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), pp. 487-492. IEEE, 2017.
Pdf: /pdf/45ecc741487c84292bb0dbdb36469f5955d32981.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
InversionView: A General-Purpose Method for Reading Information from Neural Activations | Accept (poster) | Summary: The paper introduces the method InversionView, which finds inputs that give rise to similar activations (the preimage). As the preimage grows exponentially with sequence length the authors train a conditional decoder to generate inputs from a target activation.
The authors show three use cases: a character counting task, Indirect Object Identification, and 3-digit addition.
Strengths: The approach of generating text examples that lead to similar activation patterns is intuitive and could be useful for interpretability since results are very human readable.
The example use cases (especially the toy task of character counting) seem promising.
Weaknesses: - the abstract could be a bit more specific
- in general, I think the reader is left wondering for a bit too long what the authors actually do and how the method works.
- line 28: "which in turn helps us put together the algorithm implemented by the model." promises a bit too much
- motivation and description of the method is too vague
- very few details on the method/decoder that generates the examples that are later inspected
- results of experiments are a bit hard to parse. It would be nice if the reader were guided through them a bit better at different levels of abstraction
- the approach (generating human-interpretable examples that achieve a specific activation pattern) seems closely related to feature visualization / adversarial-example and counterfactual-example generation / in general, methods that generate in the input space; however, those works are not discussed at all
Technical Quality: 2
Clarity: 2
Questions for Authors: - a graphic showing how to get from source transformer activations to pre image samples would be helpful
- Figure 2 is hard to parse: I would suggest highlighting your conclusion/interpretation (aka one activation encodes target character but not count, one activation encodes count but not target character etc) in the figure.
- in general, for the experiments, I would suggest to clearly state the task and your findings before jumping into technical details
- 3.3 "We apply InversionView to the components of the circuit" seems like you apply inversionView to specific activations found by [39] but then you state "InversionView unveils the information contained in the activation sites, with
results agreeing with those of Wang et al. [39], while avoiding the need for tailored methods."
So it seems like you needed the "tailored methods" to know where to apply InversionView in the first place, or am I misunderstanding something?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: - I think when moving from toy tasks to SOTA models, InversionView might become very prone to the human tendency for pattern matching
- in addition polysemanticity might be very relevant for InversionView results but is not discussed at all
- related work is not discussed in sufficient detail
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer fZCU,
Thanks for your feedback on our paper!
## Reply Regarding Questions
### Questions 1-3: writing and demonstrating suggestions
Thanks for your suggestions. Most importantly, we have created two figures describing the decoder and the workflow, included in the global rebuttal PDF. We will also address your other two suggestions.
### It seems like you needed the "tailored methods" to know where to apply InversionView in the first place, or am I misunderstanding something?
Yes, there is a misunderstanding. There are two key parts in Wang et al. [39]: They (1) use path patching to identify important components, and then (2) study the function of these components using tailored methods. By "avoiding the need for tailored methods", we refer to (2). In order to know where to apply InversionView, we only need (1), i.e., a general-purpose method for finding circuits.
For example, [39] use path patching to identify certain heads ("S-Inhibition heads") affecting Name Mover head queries, and then use tailored patching experiments to show that they output both token and position signals. These experiments were designed specifically to disentangle these two effects, ablating or inverting token or position information. On the other hand, InversionView directly reads out these two kinds of information (Figure 17 shows an example for an S-Inhibition head containing position information), obviating the need for guessing possible information and tailoring patching experiments.
We will add this specific example to the paper.
## Reply Regarding Weaknesses
### Weaknesses 1-6: Writing and Presentation
Thanks for your suggestions, which we appreciate and which we will implement.
Key points:
- We will provide more specific information about our studies in the abstract.
- The two new figures described above will make it easy for the reader to quickly grasp how the method works.
- We will change the sentence in line 28 to "which in turn helps us identify how the information flows through the model. This is crucial to obtain the algorithm implemented by the model. "
- We respectfully note that we have formally described the method/decoder in Appendix C (decoder architecture), D.1, E.1, F.1 (decoder training details for individual tasks), and A.2 (sampling details). We will expand these sections with more discussion.
- We will revise the description of results to provide more high-level intuition. We would be grateful for any more detailed guidance.
### The approach seems related to feature visualization/ adversarial example and counterfactual example generation
Thanks for pointing out other methods that also generate in input space. We explain the difference between InversionView and these methods below. We will add this comparison to the paper.
Feature visualization generates inputs that *maximally activate* a certain neural network unit, while InversionView finds inputs that result in the *same* vector. Whereas feature visualization interprets an individual neural network unit (e.g., neurons) to understand its general role across inputs, InversionView interprets specific values of inner representations of neural networks. When the input changes, the value and thus the interpretation may change.
Adversarial example or counterfactual example generation methods generate inputs that are similar to the original input but result in a different outcome. While similar in input space, the adversarial/counterfactual input is likely to be quite different in internal representation space, leading to a different output. In contrast, we are interested in how different inputs in input space are represented very similarly in internal representation space.
## Reply Regarding Limitations
### On SOTA models, InversionView might become prone to the human tendency for pattern matching
We would like to argue that pattern matching is fundamental for interpretation. Pattern matching is essential when interpreting neurons, studying the function of an attention head, or, as done in our paper, inspecting preimages. Neural activations may contain arbitrarily complex information, hard to capture by a fixed set of templates. Therefore, analysis by an intelligent entity (artificial or biological), capable of identifying novel patterns, is necessary. As we discuss in our paper, LLMs are useful but not good enough to replace humans at this task of pattern matching yet.
Moreover, InversionView is a method for generating hypotheses, not final interpretations. Generated hypotheses must be verified with intervention experiments, as we do in the paper. An incorrect hypothesis due to an error in pattern matching will be identified at this stage, ruling out dependence on subjectivity in pattern matching.
### Polysemanticity might be very relevant for InversionView results but is not discussed
Thanks for raising this point. Polysemanticity is widely observed when one wants to find a unified interpretation across all inputs for a model component such as a neuron -- in contrast, InversionView interprets a specific activation (vector) on a specific input. Nonetheless, concepts similar to Polysemanticity may be defined in our setting. Specifically, we observe cases where the same (activation site, position) pair encodes different kinds of information on different inputs -- a certain kind of "polysemanticity". For example, in the factual recall task (see global rebuttal), some heads' outputs encode subject information on some inputs, and relation information on other inputs. InversionView is very helpful for studying this kind of "polysemanticity", because it decodes per-example information instead of average information. We will discuss in the paper.
### Related work is not discussed in sufficient detail
We will add work related to feature visualization / adversarial example. We request the reviewer to suggest any other related work they find missing from the paper. We would be happy to include it too.
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my concerns. After reading your rebuttal and the other reviews I have updated my score to a weak reject since it is still hard for me to assess if the limitations regarding clarity of presentation will be sufficiently addressed in the final version. However, other reviewers seemed to quite like the paper, so I will not stand in the way of accepting the paper for publication.
Regarding related work: I would appreciate discussing not only differences, but also similarities to related work (for example GAN inversion techniques). Many ideas there seem quite related even when the final goal is not always interpretability. This can help readers from other domains to more quickly understand your work.
---
Reply to Comment 1.1.1:
Comment: Thank you for raising your score, and thanks for your suggestion regarding related work. We agree with your suggestion, many readers from other fields may wonder how InversionView compares to these works, both in terms of similarities and differences. We will be happy to discuss both differences and similarities to related work. | Summary: The method proposed in the paper seeks to decipher the information encoded within neural network activations. The core idea is to examine subsets of inputs that produce similar activations, utilizing a trained decoder model conditioned on these activations. The authors perform their analysis on three different tasks: character counting, indirect object identification, and 3-digit addition.
Strengths: * The proposed method helps us find which parts of a network handle specific tasks. For instance, in a counting task, we can see which activations detect the target character and which ones do the counting. This is interesting because it helps us understand how neural networks work inside.
* Showing the underlying algorithm the model uses to solve tasks, like adding 3-digit numbers, gives useful insights.
Weaknesses: * Applying the method to extremely large models and more complex tasks is challenging.
* Selecting an appropriate distance metric and epsilon is challenging and requires tuning, making it difficult to adapt the method to different tasks. This also highlights the authors' impressive effort in making the method work.
Technical Quality: 3
Clarity: 3
Questions for Authors: * Do you think it is possible to use other distance metrics besides L2-based metrics? Additionally, why do you believe the L2 distance is effective in this context? Is there something unique about the geometric space of these tasks that makes L2 distance particularly suitable? What types of tasks, in general, can be effectively solved using L2-based metrics?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer P3CM,
Thanks for your feedback on our paper!
## Reply Regarding Questions
### Is it possible to use other distance metrics besides L2-based metrics? Why do you believe the L2 distance is effective in this context? Is there something unique about the geometric space?
Yes, we think it's possible to use other distance metrics. Since our work explores a novel direction, we would like to start with simple and common metrics, in order to show that the idea works without complicated metrics.
Therefore, we use the L2 distance because of its simplicity and straightforward intuition, so that readers can easily grasp the idea and see how much can be learned from the geometric space; we do not claim that the L2 distance is the optimal choice.
In addition, in the paper we show L2-based metrics are effective empirically, as the information obtained using the simple L2-based metrics is well supported by causal and quantitative experiments.
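As a minimal sketch of how an $\epsilon$-preimage can be assembled with an L2-based metric (the normalization by the query norm below is an illustrative choice; the exact metric used in the paper may differ):

```python
import math

def l2_distance(z, z_q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(z, z_q)))

def eps_preimage(z_q, candidates, eps):
    # Keep every candidate input whose activation falls inside an L2 ball
    # of radius eps * ||z_q|| around the query activation.
    radius = eps * math.sqrt(sum(a * a for a in z_q))
    return [x for x, z in candidates if l2_distance(z, z_q) <= radius]

# Hypothetical (input, activation) pairs, e.g. decoder-generated samples.
z_q = [1.0, 0.0, 2.0]
candidates = [("abba", [1.1, 0.0, 2.0]), ("abc", [3.0, 1.0, 0.0])]
print(eps_preimage(z_q, candidates, 0.25))  # -> ['abba']
```

Loosening the threshold (e.g. `eps=2.0`) admits both candidates, which is exactly the coarse-graining effect of a larger $\epsilon$ discussed elsewhere in the rebuttal.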
## Reply Regarding Weaknesses
### Applying the method to extremely large models and more complex tasks is challenging
We agree that interpretation of extremely large models tends to be more challenging in general. On large models and complex tasks, a potential challenge is the complexity of the information itself, but this can in principle be overcome by scaling the decoder so it can learn more complex inverse mappings, and basing the decoder on pretrained language models can help. We show the feasibility of this approach in the factual recall task on GPT2-XL (see global rebuttal), where we read information from a model 10x larger than in the IOI case study.
### Selecting an appropriate distance metric and epsilon is challenging and requires tuning
As we mentioned above, we show that the method works with the most common and simple distance metric. There was in fact no need at all for effortful tuning of these aspects -- though using more sophisticated metrics could be a topic for follow-up work. Regarding the threshold $\epsilon$, we simply choose a small threshold for which the preimage shows meaningful patterns. Note that, unlike neural network training, where people train models repeatedly to tune hyperparameters, in the preimage returned by InversionView, one can immediately see possible interpretations resulting from different thresholds. We have more detailed discussion regarding the threshold, and our method's robustness to it, in Appendix A.4.
In summary, the fact that a simple distance metric and threshold choice work highlights the robustness of our method.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I still believe this work takes a novel approach, so I will maintain my score.
---
Reply to Comment 1.1.1:
Comment: Thank you. We are grateful for your support. | Summary: This work is mainly based on the representational geometry of activations in the activation space and chooses samples whose distances are within a defined $\epsilon$-preimage distance. A single two-layer transformer decoder model is trained using activations of each layer from the investigated model. After decoder training, the Euclidean distance is used as a metric to select generated samples based on each activation representation of each layer, and those samples are compared with the query input sentence. This approach is evaluated and analysed on three tasks, i.e., character counting with a 2-layer, 1-head transformer, IOI with GPT2-small, and 3-digit addition with a 2-layer, 4-attention-head transformer. The comprehensive experimental results and analysis confirm the geometry hypothesis, and provide many insights for future work.
Strengths: - Activation representation of each layer from small neural networks to GPT2-small can be visualized and explained based on the geometry hypothesis with a trained decoder using those activations as input
- This hypothesis is evaluated using different tasks, i.e., character counting, IOI and 3-digit addition, and lots of meaningful analysis and insights are discussed
- Experiments are solid and the appendix includes many details about each case study.
Weaknesses: - I suggest the authors use another figure to demonstrate the whole training and evaluation pipeline, i.e., what input is used for training the probed model, what query input is used for decoder training, and what input and output come from the trained decoder; this would make the whole framework much easier for readers to understand.
- This method is mainly based on manual investigation to select and analyse generated samples, which might need large labour resources to extend to large-scale LLMs.
Technical Quality: 4
Clarity: 3
Questions for Authors: - In line 115, what does $\mathbf{z}$ represent here? and what is the difference between $\mathbf{z}$ and $\mathbf{z}^q$?
- In line 108, Is this input the same as the query input in figure 1? Another figure introducing the whole pipeline might help a lot to understand.
- Does this neighbourhood geometry hypothesis always hold across all LLM activations?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: I am looking forward to how to automate this approach and apply it to the larger LLM interpretability analysis.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer vTBG,
Thanks for your feedback on our paper!
## Reply Regarding Questions ##
### In line 115, what does $\mathbf{z}$ represent? ###
Sorry for not writing it clearly. The $\mathbf{z}$ is the same as $f(\mathbf{x})$ in the previous part of the paper. So it simply represents an arbitrary activation that is compared with the query activation $\mathbf{z}^q$. We will make it clear in the next version.
### In line 108, is this input the same as the query input in figure 1? ###
Yes, "input" here refers to the same as the query input in Figure 1. It's a sequence of tokens. Note that we use the modifier "query" in the context of comparing it to its neighbourhood during interpration. Thus, in line 108, we do not use "query" because the input is not mentioned in the context of a neighbourhood.
### Does this neighbourhood geometry hypothesis always hold across all LLM activations? ###
Because transformers define continuous functions, the hypothesis should hold in principle. Of course, that doesn't guarantee it will always hold in a practically meaningful sense. As experimentally demonstrated in our paper, the hypothesis holds for the conducted case studies.
## Reply Regarding Weaknesses ##
### Another figure to demonstrate the whole training and evaluation pipeline ###
Thanks for your suggestion. We have made a new figure, which we have included in the global rebuttal PDF, and which we will add it to the paper in the next version.
### Need large labour resources to extend to large-scale LLMs ###
1. Neural activations can contain many kinds of information, so any template-based interpretation is likely not expressive enough. InversionView allows decoding various kinds of information. We believe manual or LLM-based interpretation is likely necessary given the variability of the information that can be encoded.
2. InversionView is naturally suited to using LLMs to automate interpretation, because it produces samples in input space that LLMs can easily read, rather than abstract data structures.
3. As our experiment (*Appendix J Automated Interpretability*) shows, LLMs can detect the main information in the preimage, despite occasional hallucination. We use manual investigation throughout in order to ensure correctness, since this is a scientific paper. We firmly believe that automated interpretation can be further improved in the future with better prompt engineering and better LLMs.
4. Note that, as InversionView is a method for reading information from neural activations, it facilitates reverse-engineering (which by nature requires a lot of work) but does not require reverse-engineering to work. Our paper aims to show that InversionView is one of the tools in the ecosystem of interpretability research, rather than an all-inclusive solution. In the paper, we reverse-engineer models and provide causal verification in order to show the correctness of the information given by InversionView. The number of samples one needs to inspect to interpret a single activation vector does not necessarily increase with the model size, as we show in the case of the factual recall task. For larger models, one may use it to study a specific part of the model.
5. Exhaustive interpretation or reverse-engineering requires a lot of work -- this is a feature of mechanistic interpretability research in general. InversionView is a faster way to generate accurate hypotheses. When combined with existing methods, InversionView makes the overall workflow more efficient, thereby decreasing guesswork and promoting faster iteration cycles in mechanistic interpretability research.
---
Rebuttal Comment 1.1:
Title: Reply by Reviewer vTBG
Comment: Thanks for those helpful responses. I think this work is solid and I maintain my score.
---
Reply to Comment 1.1.1:
Comment: Thank you. We are grateful for your support. | Summary: In this paper, the authors propose InversionView, a method to inspect the information encoded in neural activations. The proposed method is based on checking the activations difference given different inputs. The authors showcase the effectiveness of this tool on mainly three tasks: character counting, indirect object identification, and 3-digit addition.
Overall, I think the paper is interesting and recommend a weak acceptance. I will consider raising my ratings if the weaknesses below are addressed.
Strengths: 1. The topic of model interpretation is important.
2. The empirical results confirm the effectiveness of the proposed method.
3. The paper is well written and easy to follow. In addition, the authors provide abundant experiments to showcase the effectiveness.
Weaknesses: 1. Although being described through text, the precise algorithm for the proposed method is unclear to me.
2. It is unknown how the proposed method scales with larger models. The largest model size in the paper is limited to GPT-2, which is not considered large nowadays.
Technical Quality: 3
Clarity: 3
Questions for Authors: See the weaknesses.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer Mv2t,
Thanks for your feedback on our paper!
## Reply Regarding Weaknesses
### The precise algorithm for the proposed method is unclear
In the global rebuttal, we provide a new figure showing the training and sampling pipelines. In its caption, we also provide detailed explanation.
If you are unclear about the decoder: we also provide a new figure showing the decoder architecture in the global rebuttal. The decoder model is basically a decoder-only transformer combined with some additional MLP layers. In order to condition the decoder on the query activation, the query activation is first passed through a stack of MLP layers, which decode information depending on the activation site, and is then made available to each attention layer of the transformer part of the decoder. There are also details in Appendix C of the paper (we should have linked Section 2.2 to it).
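For intuition, here is a minimal pure-Python sketch of one way such conditioning can be realized -- the MLP-processed query activation is exposed to an attention layer as one extra key/value position. The weights, dimensions, and the prepending mechanism here are illustrative assumptions, not the exact architecture of Appendix C.

```python
import math

def mlp(z, W1, W2):
    # Site-specific MLP stack mapping the query activation into model space.
    h = [max(0.0, sum(w * x for w, x in zip(row, z))) for row in W1]
    return [sum(w * x for w, x in zip(row, h)) for row in W2]

def attend(queries, keys, values):
    # Scaled dot-product attention (single head, no masking, for brevity).
    out = []
    for q in queries:
        scores = [sum(a * b for a, b in zip(q, k)) / math.sqrt(len(q)) for k in keys]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        tot = sum(exps)
        weights = [e / tot for e in exps]
        dim = len(values[0])
        out.append([sum(w * val[j] for w, val in zip(weights, values)) for j in range(dim)])
    return out

def conditioned_layer(token_states, z_q, W1, W2):
    # The processed query activation becomes an extra key/value position,
    # so every token position can attend to it.
    cond = mlp(z_q, W1, W2)
    return attend(token_states, [cond] + token_states, [cond] + token_states)

# Tiny example: 2-d states, identity MLP weights (illustrative only).
tokens = [[0.0, 1.0], [1.0, 1.0]]
z_q = [1.0, 0.0]
W1 = [[1.0, 0.0], [0.0, 1.0]]
W2 = [[1.0, 0.0], [0.0, 1.0]]
out = conditioned_layer(tokens, z_q, W1, W2)
```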
We will add the new figures to the next version of the paper.
Taken together, these changes will make the precise algorithm much more accessible to the reader.
### How the proposed method scales with larger models
Thank you for raising this point. Please refer to the global rebuttal, where we describe our new experiment on a larger model, with 10x more parameters than in the IOI case study. We find that InversionView continues to produce interpretable results, and allows us to read out interesting information content.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I maintain my stance to accept this paper.
---
Rebuttal 2:
Comment: Thank you. We are grateful for your support. | Rebuttal 1:
Rebuttal: We thank all reviewers for their reviews. We are encouraged that they found our method to be effective (Reviewer Mv2t) and to provide useful and intuitive insights (Reviewer vTBG, P3CM, fZCU), our experiments solid (Reviewer vTBG), and the paper well-written and easy to follow (Reviewer Mv2t).
We have addressed the weaknesses and specific questions from each reviewer in the individual responses. Below, we address two important points that are relevant to multiple reviews.
## Regarding writing and demonstration suggestions ##
We thank the reviewers for pointing out various potential improvements in writing and presentation.
We apologize for the compressed writing; as we have a lot of content but limited space, we moved many details to the appendix. We will revise the text for better readability, and add figures to make the presentation more straightforward. If the paper is accepted, the additional page will let us further improve the paper in this respect.
Most importantly, besides improving the writing, we provide a figure for the overall pipeline, and a figure for the decoder architecture; both are shown in the attached PDF (Figures 1 and 2). By illustrating the InversionView workflow, this concretely addresses key concerns of Reviewers Mv2t, vTBG, fZCU.
## A new case study on a larger model ##
Several reviewers (Mv2t, vTBG, P3CM) asked about applicability to larger models. Since submission, we have done experiments on a larger model, GPT-2 XL, which has 1.5B parameters --- a 10-fold increase over the maximum model size in the submission. InversionView continues to produce interpretable results, and allows us to read out high-level information content. We've also updated the web application, so you can check results via the link provided in the paper (https://inversion-view.streamlit.app). We will incorporate the content about this experiment in the paper.
Below, we provide more details on the factual recall task, decoder training, and sampling. We will include these in the final version of the paper. We observed many interesting examples. Due to space limitations, we put only one figure for it in the rebuttal PDF, but we strongly recommend playing around with our web app.
### Factual Recall: Background ###
The factual recall task is defined as predicting the attribute given a prompt containing the subject and relation.
The model is given a prompt such as *"LGA 775 is created by"*, containing the subject *"LGA 775"* and the relation *"is created by"*. The model predicts the next token *"Intel"*, an attribute of the subject. This task requires retrieving relevant knowledge, and we may expect that neural activations contain high-level concepts.
Previous work [1] suggests that, in attention layers of the upper part of the model, attributes of the subject are moved to the residual stream of the last token.
### Factual Recall: Implementation Details ###
In this case study, our intention is not to provide a full interpretation of the computations performed to solve this task, which we deem out of scope for this paper. Rather, we focus on a relatively small set of important attention heads in upper layers, and check if InversionView produces interpretable results. We select the 25 most important attention heads in the upper part of the model, as prior work found that attribute retrieval tends to happen there.[1] We estimate the importance of attention heads by the attribution method from Ferrando et al.[2]
The decoder model is fine-tuned from GPT-2 Medium (the components for processing query activation are randomly initialized), because we expect a more complex inverse mapping from activation to inputs to be learned. Concretely, to interpret activations encoding a certain attribute, the decoder may need to memorize knowledge about different subjects sharing the same attribute.
To train the decoder model, we collect text from 3 datasets: factual statements from COUNTERFACT[3] and BEAR[4], and general text from MiniPile[5].
We trained the decoder on outputs of the selected attention heads. We filtered out query activations resulting from heavily attending to BOS (weight > 0.6), which occurs in many attention heads in higher layers but likely results in attention outputs with little information.
We use the same distance metric as in the IOI study, and set $\epsilon=0.25$. As mentioned in the paper, a larger threshold produces more coarse-grained information; we found this to provide more stable results on this large model, potentially reflecting its higher-dimensional geometry.
### Factual Recall: Observation ###
We provide a sample figure in the attached PDF.
We observed many interesting examples, and strongly recommend playing around with our web app. When doing so, you may be interested in different kinds of information. When not attending exclusively to BOS, heads $h^{29,9}$, $h^{30,8}$, $h^{32,12}$, $h^{33,0}$, $h^{37,7}$ almost always move information about the subject, and $h^{24,24}$, $h^{25,7}$, $h^{27,16}$, $h^{28,21}$, $h^{29,20}$, $h^{33,9}$ almost always move information about the relation, while other heads usually exhibit a mix of behaviors (see the complete list of heads in our web app).
[1]: Geva, Mor, et al. "Dissecting recall of factual associations in auto-regressive language models." arXiv preprint arXiv:2304.14767 (2023).
[2]: Ferrando, Javier, and Elena Voita. "Information flow routes: Automatically interpreting language models at scale." arXiv preprint arXiv:2403.00824 (2024).
[3]: Meng, Kevin, et al. "Locating and editing factual associations in GPT." Advances in Neural Information Processing Systems 35 (2022): 17359-17372.
[4]: Wiland, Jacek, Max Ploner, and Alan Akbik. "BEAR: A Unified Framework for Evaluating Relational Knowledge in Causal and Masked Language Models." arXiv preprint arXiv:2404.04113 (2024).
[5]: Kaddour, Jean. "The minipile challenge for data-efficient language models." arXiv preprint arXiv:2304.08442 (2023).
Pdf: /pdf/a4d30175bdd0651dda3ce285f1a78c3b592e33a9.pdf | NeurIPS_2024_submissions_huggingface | 2024 | null | null | null | null | null | null | null | null |
CycleNet: Enhancing Time Series Forecasting through Modeling Periodic Patterns | Accept (spotlight) | Summary: This paper proposes a novel and effective technique to enhance long-term time series forecasting. The proposed technique, Residual Cycle Forecasting (RCF), directly models periodic cycles with learnable parameters, decomposing the learning of time series into periodic cycles and residual components. The residual can be learned with a simple architecture like Linear or MLP, and this technique can be integrated into existing forecasting models. Results show significant improvement achieved by using this technique.
Strengths: * The proposed method is simple and effective. It not only improves accuracy but is also more parameter efficient and can be easily integrated into different backbones.
* The paper is well-written with good clarity. Both the problem and analysis are presented clearly.
* The experiments are conducted with high quality, providing extensive and comprehensive analysis of the proposed technique. The results are consistent with prior related work. Limitation of the proposed method is also clearly stated.
Weaknesses: The notations are not strictly consistent. The notations for instance normalization (in Section 3.2 and Algorithm 2) are independent from the whole framework and not consistent with the previous problem definition.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. Since Figure 2 depicts the entire CycleNet architecture, instance normalization should also be included.
2. The description of alignment and repetition of $Q$ (Lines 152-157) can be further improved. The link to Algorithm 1 should be added, and a schematic figure (even in Appendix) would make it clearer to understand.
3. How extreme points affect RCF in the Traffic dataset should be further explained (Line 255). Extreme points can also affect other methods. It is unclear why this would specifically deteriorate RCF.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: See Weaknesses and Questions
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Thank you for your valuable comment!**
> **W1:** The notations are not strictly consistent. The notations for instance normalization (in Section 3.2 and Algorithm 2) are independent from the whole framework and not consistent with the previous problem definition.
Thanks for pointing this out. **We will correct it to make the notations in the paper more consistent.** In the original version of our paper, we used simpler notations (e.g., $x_{in}$, $y_{out}$) in Section 3.2 and Algorithm 2 to help readers better understand the model's input-output workflow. However, this indeed led to inconsistency in the notations. *Therefore, we will properly revise these in the revised paper.*
> **Q1:** Since Figure 2 depicts the entire CycleNet architecture, instance normalization should also be included.
Thanks, and **we will include instance normalization in Figure 2** to present the complete workflow of CycleNet.
> **Q2:** The description of alignment and repetition of Q (Lines 152-157) can be further improved. The link to Algorithm 1 should be added, and a schematic figure (even in Appendix) would make it clearer to understand.
Thanks for your suggestion. In our submission, due to space limitations in the main text, we adopted a concise writing style. **We will further elaborate on the alignment and repetition of $Q$ in the revised paper**, including more textual explanations in the main text, adding the link to Algorithm 1, and most importantly, supplementing with an intuitive schematic figure to help readers better understand this process (as shown in ***Figure 1 of the attached pdf***).
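For readers of this thread, the alignment and repetition of $Q$ can also be sketched in a few lines (a toy illustration consistent with the textual description above, not the exact implementation in Algorithm 1; `aligned_cycle` is a hypothetical helper):

```python
import numpy as np

def aligned_cycle(Q, c, length):
    """Q: learnable recurrent cycle of length W. For a window whose first
    time step falls at cycle index c, roll Q so index c comes first, then
    tile/truncate it to the window length."""
    W = len(Q)
    reps = int(np.ceil((c + length) / W)) + 1
    return np.tile(Q, reps)[c:c + length]

W = 4
Q = np.arange(W)  # stand-in for the learned cycle [0, 1, 2, 3]
lookback = aligned_cycle(Q, c=2, length=6)           # starts at cycle index 2
horizon = aligned_cycle(Q, c=(2 + 6) % W, length=3)  # continues after lookback
print(lookback)  # [2 3 0 1 2 3]
print(horizon)   # [0 1 2]
```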
> **Q3:** How extreme points affect RCF in the Traffic dataset should be further explained (Line 255). Extreme points can also affect other methods. It is unclear why this would specifically deteriorate RCF.
The reason behind this is indeed complex. Firstly, the basic fact is that **there are indeed some outliers with extremely high values in the Traffic dataset.** For instance, the 12,630th value of the 857th channel in the Traffic dataset, after global standard normalization, still exceeds 25, while the normal range of values for surrounding points is [-2, 2]. Secondly, **the fundamental working principle of RCF is to learn the historical average cycles in the dataset.**
In such cases, the average cycles learned in RCF can be affected by these significant outliers, *such as the mean of a certain point in the cycle being exaggerated.* Consequently, in each prediction process, the original sequence subtracts a locally exaggerated average cycle, resulting in an inaccurate residual component, thereby affecting the local point predictions within each cycle. The more inaccurate these local point predictions are, the larger the discrepancy between MSE and MAE, as MSE significantly amplifies the impact of a few large errors. **This explains why in Table 4, combining iTransformer with RCF decreases MAE but increases MSE, indicating overall prediction accuracy improvement but anomalies in local point predictions.**
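To make the MSE/MAE point concrete, here is a toy numerical illustration (the numbers are made up, not taken from the Traffic dataset): a single exaggerated local error inflates MSE far more than MAE, so a mismatch between the two metrics signals anomalies at a few local points rather than overall drift.

```python
import numpy as np

errors_uniform = np.full(100, 1.0)   # 100 moderate errors
errors_spiky = np.full(100, 0.9)     # slightly smaller errors everywhere...
errors_spiky[0] = 10.0               # ...but one large local error

for e in (errors_uniform, errors_spiky):
    print(f"MAE={np.mean(np.abs(e)):.3f}  MSE={np.mean(e**2):.3f}")
# uniform: MAE=1.000  MSE=1.000
# spiky:   MAE=0.991  MSE=1.802  (lower MAE, yet much higher MSE)
```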
Therefore, models like iTransformer and GNNs, which accurately model inter-channel relationships, are more suitable for scenarios with extreme points and temporal lag characteristics. For example, when a sudden traffic surge occurs at a certain junction, these models, having correctly modeled the spatiotemporal relationships, can accurately predict possible traffic surges at other junctions. In contrast, the current CycleNet only considers single-channel relationship modeling, and is thus somewhat limited in this scenario.
*However, **the core of CycleNet still lies in exploring a more effective periodic modeling approach and proposing a model that balances performance and efficiency.*** We believe that further addressing the issue of RCF being affected by extreme points and investigating how to incorporate channel relationship modeling within CycleNet in future work would be highly promising. *We will include these discussions in the revised paper.*
**Thank you again for your careful review, and we hope our responses address your concerns.**
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response. I have carefully reviewed your rebuttal and the attached PDF. All my questions have been clearly addressed. My rating has been updated.
---
Reply to Comment 1.1.1:
Comment: Thank you very much! We will incorporate the revisions mentioned in the rebuttal into the final paper. Once again, thank you for your careful review and for increasing our score! | Summary: The paper introduces CycleNet, a novel time series forecasting method that enhances long-term prediction accuracy by explicitly modeling the inherent periodic patterns present in time series data. The core contribution of the paper is the Residual Cycle Forecasting (RCF) technique, which leverages learnable recurrent cycles to represent these periodic patterns and predicts the residuals, significantly improving upon the performance of existing models with reduced computational complexity. CycleNet demonstrates state-of-the-art results across various domains, such as electricity, weather, and energy forecasting, while offering over 90% reduction in parameter quantity, highlighting its efficiency and effectiveness in capturing long-term dependencies for accurate forecasting.
Strengths: 1. The author tries to model the period information explicitly in the time series prediction task, and the motivation is intuitive and reasonable.
2. The method designed by the author is reasonable and closely related to motivation. Combined with the experimental results, the author gives a simple but effective method.
3. The author exposes the code and gives a detailed description, which increases the reproducibility of the model.
4. The limitations of this method are clearly discussed and the possible problems are pointed out.
Weaknesses: 1. In the introduction, the author's statement establishes a close relationship between long-term prediction and periodic information. In the absence of some experimental support, this is not rigorous. Periodic information may be useful for long-term forecasting in certain situations, but it is not appropriate for all tasks, nor is it the only important information that these tasks require. The author slightly obfuscates these to highlight the motivation of this article.
2. The authors select data sets with different periodicity to illustrate the validity of the model. However, the authors only demonstrate the validity, and experiments can be added to show how the proposed method performs differently on periodically different data sets and discuss the underlying rules. At the same time, it is also worth showing how CycleNet performs on data sets where there is no obvious periodicity.
3. Combined with the results in Table 2 and Table 4, the results predicted with Linear alone are even better than some well-designed methods; is this due to the particularity of the dataset or some other reason?
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the weakness
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Please refer to the weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Thank you for your kind and careful review!**
> **W1:** In the introduction, the author's statement establishes a close relationship between long-term prediction and periodic information. Etc.
**In fact, periodic information is indeed one of the most important factors for achieving long-term forecasting.** Recent works have demonstrated that by effectively utilizing periodic information, it is possible to achieve near state-of-the-art prediction accuracy with fewer than 1,000 parameters [1]. This strongly underscores the importance of periodic information in long-term forecasting tasks. Additionally, the popular work DLinear has shown that a single-layer linear model can outperform many well-designed models in long-term forecasting tasks [2]. This is because a simple linear layer can robustly extract periodic information from the historical data of a single channel (evident from the clear periodic patterns in the weight distribution of these linear models) [3].
*Without periodicity, long-term forecasting becomes very challenging.* For instance, DLinear shows that on financial datasets, even the most advanced deep learning models cannot outperform simply copying the most recent data point [2].
Thus, existing evidence highlights that the presence of periodicity in data is crucial for accurate long-term predictions. **Our paper builds on this premise, exploring a simple yet effective method of leveraging periodicity through RCF.** We acknowledge that periodic information is not the sole factor for accurate time series predictions. Other factors like short-term trends, multivariate dependencies, and seasonality are also critical, especially for short-term forecasting tasks. However, the core of this paper focuses on improving long-term forecasting performance by better utilizing periodicity in data. Therefore, the motivation and contribution here are not in conflict with scenarios where other factors are more influential.
> **W2:** The authors select data sets with different periodicity to illustrate the validity of the model. However, the authors only demonstrate the validity, and experiments can be added to show how the proposed method performs differently on periodically different data sets and discuss the underlying rules. At the same time, it is also worth showing how CycleNet performs on data sets where there is no obvious periodicity.
*Sorry if we misunderstood your first point here.* We have already shown the performance of our model on datasets with different cycle lengths in Table 5 and Figure 3 of our submission, **indicating that when the model's hyperparameter $W$ is set correctly, the RCF technique significantly improves prediction accuracy, and the trainable recurrent cycle $Q$ can accurately learn the corresponding cycle patterns.**
We assume that your question is about how the proposed method performs differently with different cycle lengths, and we have supplemented *Figure 2 (m-p) in the attached PDF* to visualize the learned recurrent cycle $Q$ when setting different cycle lengths $W$. We found that when $W$ is correctly set to 168 (the weekly cycle length of the Electricity dataset), $Q$ learns the complete cycle pattern, including weekly and daily cycles. When $W$ is set to 24 (the daily cycle length), $Q$ captures only the daily cycle. With $W$ set to 96 (four times the daily cycle), $Q$ learns four repeated daily cycles. However, with $W$ set to 23 (with no matching semantic cycle length), $Q$ learns nothing, resulting in a flat line. For datasets without obvious periodicity (e.g., the Exchange-Rate dataset), the behavior of $Q$ is similar, learning a flat line. *We will further include these results in the revised paper.*
> **W3:** Combined with the results in Table 2 and Table 4, the results predicted with Linear alone are even better than some well-designed methods, whether this is due to the particularity of the data set or some other reason.
The results in Table 2 and Table 4 cannot be directly compared because Table 2 reports results averaged across all prediction horizons $H \in \{96, 192, 336, 720\}$, whereas Table 4 presents results for specific horizons individually.
However, if we average the results from Table 4, *the Linear model does indeed outperform some well-designed models*, such as Autoformer. This phenomenon can be traced back to the DLinear paper [2], **which demonstrated that a purely linear layer can outperform many well-designed Transformer models at the time**. DLinear's success is attributed to its channel-independent approach for multivariate forecasting, where each channel is modeled with a shared linear layer. This approach allows the model to focus on extracting historical information from individual channels, leading to more robust periodic information for long-term forecasting (as evidenced by the striped patterns in the learned weights of its linear layers). Previous models like Autoformer mixed information across multiple channels, making it difficult to extract robust periodic information. Since DLinear, many models have adopted the channel-independent approach for long-term forecasting, including PatchTST, FITS, SparseTSF, and our CycleNet.
**Therefore, it is reasonable that a simple linear model can outperform some well-designed models in this context.**
[1] Lin, Shengsheng, et al. "SparseTSF: Modeling Long-term Time Series Forecasting with *1k* Parameters." In International Conference on Machine Learning, 2024.
[2] Zeng, Ailing, et al. "Are transformers effective for time series forecasting?." Proceedings of the AAAI conference on artificial intelligence. Vol. 37. No. 9. 2023.
[3] Toner, William, and Luke Darlow. "An Analysis of Linear Time Series Forecasting Models." In International Conference on Machine Learning, 2024.
**Thank you again for your kind review, and we hope our response can address your concerns.**
---
Rebuttal Comment 1.1:
Comment: **Dear Reviewer zzRb,**
Sorry to bother you. **We are eager to know whether our response has addressed your concerns as the discussion phase is nearing its end.** *If not, or if you have any additional questions, we would be more than happy to further address them.* Thank you for your time. | Summary: This paper presents a novel technique for improving the accuracy of multivariate long-term time series forecasting. The technique, called Residual Cycle Forecasting (RCF), involves learning the cyclical patterns of time series through recurrent cycles, which can be used as a pre-processing step for any forecasting model. The authors also propose CycleNet, a linear-based model that uses RCF to enhance its predictions. CycleNet first uses RevIN to account for distribution shift, then subtracts the learned RCF from the input data. The backbone of the model predicts the future residual, adds the learned RCF, and reverses RevIN from the outputs to obtain the final prediction.
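The pipeline described above can be sketched schematically for a single channel (a toy reading with made-up weights and a plain linear map standing in for the Linear/MLP backbone; `cycle` and `forward` are illustrative helpers, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
L, H, W = 8, 4, 4                   # lookback, horizon, cycle length
Q = rng.normal(size=W)              # learnable recurrent cycle (toy values)
Wb = rng.normal(size=(H, L)) * 0.1  # toy backbone weights

def cycle(c, length):
    # Align and repeat Q for a window starting at cycle index c.
    return np.tile(Q, (c + length) // W + 2)[c:c + length]

def forward(x, c):
    mu, sigma = x.mean(), x.std() + 1e-8     # RevIN-style statistics
    x_res = (x - mu) / sigma - cycle(c, L)   # normalized residual input
    y_res = Wb @ x_res                       # backbone predicts the residual
    y = y_res + cycle((c + L) % W, H)        # add back the future cycle
    return y * sigma + mu                    # inverse normalization

y_hat = forward(rng.normal(size=L), c=1)
print(y_hat.shape)  # (4,)
```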
The proposed method is evaluated on eight multivariate time series datasets and compared against several baselines. The results show competitive performance and resource consumption.
Strengths: * Clear and concise writing style
* Novel approach to time series decomposition
* Comparison of performance and resource consumption with baseline methods
* Ablation study and parameter impact analysis (e.g., $W$)
* Easy-to-understand model design and components
* Informative figures to illustrate model and results
* Thorough discussion of results, including strengths and limitations
* Code and data provided for reproducibility and transparency.
Weaknesses: * Lack of clarity to specify which results are from the authors (reproduced or produced) and which ones are collected from previous papers (if so, which ones)
* Incomplete comparison with existing time series decomposition baselines, such as LD, TFDNet, or SparseTSF (even though this one was in the related works)
* Model was not compared to RLinear (also relying on RevIN)
Technical Quality: 3
Clarity: 3
Questions for Authors: ## Baselines
The paper fails to properly compare the proposed CycleNet model with appropriate baselines.
Since CycleNet is based on RevIN, it would be more complete to include other RevIN-based baselines such as RLinear [1] for linear-based models and for instance RevInformer (from the RevIN paper) for transformer-based solutions. Although PatchTST uses RevIN, including these additional baselines would avoid any doubt and provide a more comprehensive comparison.
Additionally, the paper misses important baselines such as LD [2], which also uses RevIN and argues for using learnable decomposition rather than moving average. According to the published results, CycleNet appears to be behind LD, but it would be interesting to compare LD against backbone + RCF.
A potential reference to [3] is also missing.
[1] https://arxiv.org/pdf/2305.10721
[2] https://arxiv.org/abs/2402.12694
[3] https://arxiv.org/pdf/2308.13386
Finally, despite being described as a main baseline, why was SparseTSF not included in this paper? Especially as SparseTSF seems to produce better results than the proposal for ETT, but not for Electricity and Traffic. What would be the reason for such differences?
Including such comparisons is required to correctly position CycleNet with all the other baselines and further discuss why CycleNet offers better performance with some datasets and not others. Such discussion is important to further investigate the benefit of time series decomposition for forecasting tasks.
## Visualization
The visualizations of the learned cycles are informative, but it is unclear whether these cycles change depending on the prediction horizon or lookback. If they do, it would be interesting to see the differences and whether they can explain the variation in performance for different prediction horizons.
Since the cycles are learned along with the backbone model, there could be differences in the learned cycles for different experiments. It would be helpful to plot the cycles for different prediction lengths, such as 96 and 720, and for different lookback lengths, such as 48, 96, and 336.
In addition, the cycle may change depending on the backbone. It would be interesting to show the difference if any when RCF is used with iTransformer, DLinear, etc. This would allow readers to better understand whether there are any differences in the learned cycles and whether they are due to the parameters, backbone model or other factors.
Figure 3(f) is surprising, and it is unclear whether it represents a household or an industrial building with specific working hours. In my opinion, it looks like there are only 3 main weekly patterns (the large pattern on the left seems to have ~50 time steps = 2 days) and weekends (one pattern seems to have ~24 time steps = 1 day). Therefore, it could be a factory or industrial building with specific working hours (6 days a week).
It would be interesting to discuss how RCF impacts previous backbones, such as DLinear. Specifically, it would be helpful to know how much DLinear's performance changes with the addition of RCF and plot it similarly to Figure 4.
Finally, it is important to provide Figure 4 and Table 4 for other datasets to give readers a global view of the impact of RCF/CycleNet depending on the dataset and its nature. This would help avoid generalizations that could be incorrect based only on results depicted in the current version, especially for datasets such as Traffic and Solar Energy where the impact might be drastically different.
## Discussion
“Traffic dataset exhibits spatiotemporal characteristics and temporal lag characteristics, where the traffic flow at a certain detection point significantly affects the future values of neighboring detection points.” However, the authors do not explain why this is not an issue for the Solar-Energy dataset, where weather conditions at one location may slowly impact neighboring locations in the future. Can authors comment on that?
Regarding the electricity consumption dataset, the authors state that "a user’s electricity consumption thirty days ahead does not directly correlate with their consumption patterns in the past few days." However, it is possible that long-term habits may still influence consumption patterns, for instance with a monthly routine.
The authors suggest that "the cycle length W depends on the a priori characteristics of the dataset and should be set to the maximum stable cycle within the dataset." However, they do not address how to account for yearly cycles, such as those found in weather datasets that repeat each year with the seasons or datasets influenced by human behavior, such as electricity consumption, which may increase in winter and summer due to heating and cooling needs. It would be helpful to clarify whether RCF is only suitable for mid-range stable cycles or if there is a way to account for longer cycles. This might need to be included in the limitations section.
## Additional points
The paper lacks a computation cost study to determine the overhead imposed by RCF on the existing backbone in terms of memory and computation time. It is important to evaluate whether the gain in accuracy justifies the increase in complexity and associated cost.
## Proof-read
* “a single-layer Linear or a dual-layer MLP” maybe find another way to express it to avoid repeating too much (cf., abstract, introduction, etc.)
* “surpassing complexly designed deep models”-> redundant, ”complex design” or “deep models”
* “leveraging this discovery to enhance” should change this as the fact that time series have cycle is not new leveraging this knowledge is the innovation
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Authors have discussed some limitations of their proposal, especially when each channel has a different cycle length.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Thank you for your detailed and thoughtful review!**
> **W1:** Lack of clarity of the source of results.
We will clarify this in the main text of the revised paper; previously, it was clarified only in Appendix A.4.
> **W2 & W3 & Q1: Baseline:** Add more appropriate baselines to correctly position the RCF technique.
Thank you for your reminder.
(i) **Leddam (LD) and SparseTSF** are both papers accepted at ICML 2024. We noted the latter in time because it emphasizes the importance of periodicity, similar to our work. LD upgrades the Moving Average kernel (MOV) technique in STD to a weights-learnable module, while SparseTSF uses sparse techniques to decompose sequences. We will compare these two techniques with our proposed RCF technique.
(ii) **TFDNet** extracts features in the Time-Frequency domain after using STD techniques to decompose sequences. Since it fundamentally uses regular MOV-based STD techniques, it may not be necessary to compare it directly here.
(iii) **RLinear**: We will add a comprehensive comparison with it in the main text. In fact, the Linear and MLP in Table 4 are combined with RevIN, so they represent RLinear and RMLP, and CycleNet significantly outperforms them. Additionally, we thoroughly evaluated the impact of RevIN on CycleNet in Appendix Table 8.
**In summary, we compared CycleNet/Linear (RCF+Linear), LDLinear (LD+Linear), DLinear (MOV+Linear), SparseTSF (Sparse+Linear), and Linear** in *Table 1 of the attached pdf*. To ensure fairness, we did not use RevIN since DLinear originally did not use it.
It can be observed that:
(i) **CycleNet significantly outperforms other methods**, proving the superiority of RCF as a new STD technique.
(ii) As a sparse prediction method, SparseTSF may rely more on longer lookback length and RevIN, thus performing poorly here.
(iii) Most surprisingly, **the performance of LDLinear and DLinear is almost identical** in our setup. This result differs from the results reported in the LD paper (Table 4). Other researchers have also noted this phenomenon on the LD project's GitHub (Issue #1), but have not yet received a response from the authors. *To some extent, LD is essentially a weight-trainable MOV, so it is hard to achieve high performance gains compared to the original MOV.* We are not sure if there is any mistake here, so we will further confirm with the authors after the double-blind review phase.
> **Q2: Visualization:** Add more plots under different configurations.
**We have added these visualizations** in *Figure 2 of the attached pdf*. The basic configuration is Electricity-Lookback=96-Horizon=96-Backbone=Linear-W=168. It can be observed that:
**(i) Horizon:** *The learned patterns remain almost unchanged as the horizon changes.* This indicates that the horizon length does not affect the learned pattern results.
**(ii) Lookback:** The overall pattern remains unchanged as lookback changes. However, upon closer observation, *it can be seen that the learned pattern becomes smoother with increased lookback.* This is because a longer lookback provides the backbone with richer periodic information, thereby reducing the importance of the learned pattern component.
**(iii) Backbone:** *The patterns change somewhat with different backbones.* When DLinear is the backbone, the learned patterns are smoother, as DLinear's decomposition technique itself extracts certain periodic features. When iTransformer is the backbone, the learned patterns differ more, as it additionally models multichannel relationships, so the learned periodic patterns may consider multichannel feature interactions. PatchTST's performance is more similar to Linear, as it is also a regular single-channel modeling method, only with stronger nonlinear learning capabilities compared to the Linear model.
**(iv) Cycle length $W$:** When $W$ is 168 (the weekly cycle length for the Electricity dataset), the recurrent cycle $Q$ learns the complete periodic pattern, including weekly and daily cycles. When $W$ is set to 24 (the daily cycle length), the recurrent cycle $Q$ only learns the daily cycle pattern. When $W$ is set to 96 (four times the daily cycle length), the recurrent cycle $Q$ learns four repeated daily cycle patterns. However, when $W$ is set to 23 (without matching semantic meaning), the recurrent cycle $Q$ learns nothing, i.e., a straight line.
> **Q3: Discussion and other points.**
*Very sorry that the response here cannot cover every point you have mentioned **due to the text length limitation***. We will carefully incorporate your suggestions into the revision (e.g., providing more results of other datasets on Table 4 and Figure 4), and **we can further discuss any uncovered points during the discussion phase.**
> **Q4: Additional points:** The computation cost of RCF on other Backbones.
**In fact, the cost of RCF is fixed and independent of the Backbone.** The primary additional fixed overhead is the parameter quantity of $D×W$, and the CPU time required for alignments and repetitions of the recurrent cycles (~10 seconds per epoch in our experimental environment). **Overall, this additional overhead is very small compared to the other deep backbones.** We will supplement this quantitative analysis in Table 3 (i.e., the additional overhead introduced by RCF). By combining these overheads and the ablation results of other backbones in Table 4, it can be inferred whether the gain in accuracy justifies the increase in complexity and associated cost.
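As a rough illustration of this fixed overhead (our own arithmetic using sizes mentioned elsewhere in this discussion: 321 channels for Electricity and a weekly cycle of 168 hourly steps):

```python
# RCF adds D * W learnable parameters, independent of the backbone.
D, W = 321, 168          # Electricity: 321 channels, weekly cycle of 168 steps
rcf_params = D * W
print(rcf_params)  # 53928 extra parameters, tiny next to a deep backbone
```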
> **Q5: Proof-read.**
Thank you, and we will revise these points in the revised paper.
**Again, thank you for your thorough review, and we hope our responses address your concerns.**
---
Rebuttal Comment 1.1:
Title: Additional Rebuttal (Optional Reading) # Part 1
Comment: **Dear Reviewer 24Zo,**
Thanks again for your careful review. **It’s the discussion phase now, so we would like to continue to address the uncovered points from the rebuttal phase** (*due to the text length limits in the rebuttal*). *If you find these additional comments in any way inappropriate (e.g., beyond the rebuttal limits), please ignore them.*
> Finally, despite being described as a main baseline, why was SparseTSF not included in this paper? Especially as SparseTSF seems to produce better results than the proposal for ETT, but not for Electricity and Traffic. What would be the reason for such differences?
As mentioned in the rebuttal, we noticed SparseTSF, which was recently accepted at ICML 2024, because it emphasizes the importance of periodicity in long-term forecasting tasks, similar to our work. Thus, we included it in the related work but did not perform a complete comparison due to the submission deadline constraints. We will supplement a complete comparison in the revised paper.
Additionally, SparseTSF performs better on the ETT dataset rather than the Electricity and Traffic datasets for two reasons. First, **the ETT dataset is smaller and noisier**. In such cases, the sparse techniques proposed in SparseTSF help the model focus more directly on corresponding elements in historical cycles, reducing interference from less relevant elements and improving overall performance. Second, the ETT dataset's dimensionality (number of channels) is significantly lower than that of the Electricity and Traffic datasets, with the former having only 7 channels compared to the latter's 321 and 862 channels. Note that SparseTSF fundamentally employs a channel-independent modeling approach (parameter-shared) and linear-based methods, which limit its performance on high-dimensional datasets. This is because such scenarios require non-linear capabilities to remember different patterns of multiple channels, or a separate linear layer for each channel (non-parameter-shared) to model each channel's patterns separately [1]. *In contrast, CycleNet’s RCF technique is a fully channel-independent modeling scheme (non-parameter-shared)*, meaning it models each channel's periodic pattern separately. In this case, even with Linear as the backbone, it performs better on high-dimensional datasets because it enhances the model's ability to capture different patterns of multiple channels. Therefore, as shown in our comparison results in *Table 1 of the attached PDF*, **our RCF technique overall still outperforms the Sparse technique** (even on the ETT dataset). *We will include full experiments on more datasets in the revised paper.*
[1] Li, Zhe, et al. "Revisiting long-term time series forecasting: An investigation on linear mapping." *arXiv preprint arXiv:2305.10721* (2023).
> Figure 3(f) is surprising, and it is unclear whether it represents a household or an industrial building with specific working hours. In my opinion, it looks like there are only 3 main weekly patterns (the large pattern on the left seems to have ~50 time steps = 2 days) and weekends (one pattern seems to have ~24 time steps = 1 day). Therefore, it could be a factory or industrial building with specific working hours (6 days a week).
Yes, you are correct. This might represent the typical working hours of a user, for example, working on Monday, Wednesday, and Friday, with the other days off. Indeed, this interesting phenomenon further reveals *the potential value of our proposed RCF technique*, as it may be **a superior way to help data engineers analyze patterns in time series data**.
> “Traffic dataset exhibits spatiotemporal characteristics and temporal lag characteristics, where the traffic flow at a certain detection point significantly affects the future values of neighboring detection points.” However, the authors do not explain why this is not an issue for the Solar-Energy dataset, where weather conditions at one location may slowly impact neighboring locations in the future. Can authors comment on that?
The Traffic and Solar-Energy datasets are indeed quite different.
---
Reply to Comment 1.1.1:
Title: Additional Rebuttal (Optional Reading) # Part 2
Comment: **First, the spatial characteristics of the solar scenario are weaker than those of the traffic scenario.** In the traffic scenario, traffic flow dynamically changes at each location, with complex internal spatial dependencies. In the solar power generation scenario, as in the Solar-Energy dataset, which records the solar power production of 137 PV plants in Alabama State, the spatial dependencies are much weaker because the weather conditions within a region are usually similar. Even if there are spatial relationships, such as differences in power generation due to longitude differences, they are minor and easily learned. The following table shows the average cosine similarity between different channels in the training set of each dataset (closer to 1 indicates more similar channels):
| Dataset | Traffic | Electricity | Solar-Energy | ETTh1 |
| --------------------- | ------- | ----------- | ------------ | ----- |
| **Cosine Similarity** | 0.578 | 0.471 | **0.913** | 0.261 |
*It can be seen that the power generation curves of different channels in the Solar-Energy dataset are very similar, which indirectly indicates weaker spatial characteristics.*
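For concreteness, this diagnostic is simple to compute; the sketch below (illustrative Python on synthetic data, not the script used for the table above) shows one way to measure the average pairwise cosine similarity between channels:

```python
import numpy as np

def avg_channel_cosine_similarity(x):
    """Average pairwise cosine similarity between channels of x (shape T x C)."""
    norms = np.linalg.norm(x, axis=0, keepdims=True)
    unit = x / np.clip(norms, 1e-12, None)          # unit-norm channel vectors
    sim = unit.T @ unit                             # (C, C) cosine matrix
    c = x.shape[1]
    return sim[~np.eye(c, dtype=bool)].mean()       # mean over off-diagonal pairs

# Sanity checks on toy channels: identical channels score near 1,
# orthogonal channels score near 0.
t = np.linspace(0, 4 * np.pi, 200)
identical = np.stack([np.sin(t), np.sin(t)], axis=1)
orthogonal = np.stack([np.sin(t), np.cos(t)], axis=1)
print(avg_channel_cosine_similarity(identical))   # close to 1.0
print(avg_channel_cosine_similarity(orthogonal))  # close to 0.0
```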
**Second, the solar scenario has fewer extreme points compared to the traffic scenario, so the impact of temporal lag characteristics is smaller.** In the traffic scenario, unexpected situations may cause a sudden increase in flow (i.e., extreme points), and a traffic surge at one intersection can affect the flow at other intersections over time. In this case, adequately modeling inter-channel relationships (i.e., temporal lag) can accurately predict traffic flows at other intersections after an extreme point occurs. In contrast, extreme points are rare in the solar scenario because the power generation of photovoltaic systems has a maximum power threshold. The following table shows the average number of extreme points per channel using Z-Score > 6:
| Dataset | Traffic | Electricity | Solar-Energy | ETTh1 |
| ------------------ | ------- | ----------- | ------------ | ----- |
| **Extreme Points** | 63.9 | 0.5 | 0 | 0 |
*It can be seen that the number of extreme points in the Traffic dataset is significantly higher than in other datasets.*
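The extreme-point count reported above can be sketched as follows (illustrative Python on synthetic data, using the absolute z-score against the Z-Score > 6 threshold from the table; not the exact script used for our statistics):

```python
import numpy as np

def extreme_points_per_channel(x, z_thresh=6.0):
    """Average per-channel count of points with |z-score| above z_thresh.

    x: array of shape (T, C) -- T time steps, C channels.
    """
    mu = x.mean(axis=0, keepdims=True)
    sigma = x.std(axis=0, keepdims=True)
    z = (x - mu) / np.clip(sigma, 1e-12, None)
    return (np.abs(z) > z_thresh).sum(axis=0).mean()

# Gaussian noise essentially never crosses 6 sigma; injected spikes do.
rng = np.random.default_rng(0)
smooth = rng.normal(size=(10_000, 3))
spiky = smooth.copy()
spiky[::1000, 0] = 50.0            # ten large spikes in one channel
print(extreme_points_per_channel(smooth))  # essentially zero
print(extreme_points_per_channel(spiky))   # clearly positive
```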
> Regarding the electricity consumption dataset, the authors state that "a user’s electricity consumption thirty days ahead does not directly correlate with their consumption patterns in the past few days." However, it is possible that long-term habits may still influence consumption patterns, for instance with a monthly routine.
Thank you for pointing this out. What we intended to convey is more along the lines of "a user’s electricity consumption thirty days ahead ***not only*** correlates with their consumption patterns in the past few days, but also with longer-term patterns." That is, to achieve accurate long-term predictions, the long-term patterns play a crucial role, not just the short-term fluctuations. **We will revise the original statement.**
> The authors suggest that "the cycle length W depends on the a priori characteristics of the dataset and should be set to the maximum stable cycle within the dataset." However, they do not address how to account for yearly cycles, such as those found in weather datasets that repeat each year with the seasons or datasets influenced by human behavior, such as electricity consumption, which may increase in winter and summer due to heating and cooling needs. It would be helpful to clarify whether RCF is only suitable for mid-range stable cycles or if there is a way to account for longer cycles. This might need to be included in the limitations section.
Thank you for your suggestion. **We will include this in the limitations section.** Considering longer dependencies (such as yearly cycles) is indeed a more challenging task. Although theoretically, CycleNet’s $W$ can be set to a yearly cycle length to model annual cycles, the biggest difficulty lies in collecting sufficiently long historical data to train a complete yearly cycle (possibly requiring decades of data). For example, the Electricity training set covers only 2 years, and the Weather dataset covers less than a year of data. In this case, other existing models might also struggle to effectively model such long cycles. Therefore, CycleNet is more suitable for mid-range stable cycle modeling, and we believe future research needs to develop more advanced techniques to address this issue specifically.
**Thank you again for your nice review.**
---
Rebuttal Comment 1.2:
Comment: Thank you for your extensive effort during this rebuttal phase and for providing such detailed responses to my comments and those of other reviewers.
Overall, the authors have addressed all my points, and I still do not see any issues preventing the acceptance of this paper.
Even though it is a difficult task, please ensure that all the outputs provided during the rebuttal and discussion phase are added to the main paper (or at least a quick summary of them, with a reference to the appendix). These details are crucial for future readers to fully understand your work, its advantages compared to existing solutions, its limitations, and areas that require further investigation.
In light of this, I have increased my score from 7 to 8.
---
Rebuttal 2:
Comment: > Other researchers have also noted this phenomenon on the LD project's GitHub (Issue #1), but have not yet received a response from the authors.
I was not aware of the existing issue with LD. It is good to include these results in your final revision and mention the existing reproducibility issue with LD. Such information will be helpful for future readers. | Summary: This paper proposes a learnable Seasonal-Trend Decomposition method (CycleNet) to improve the prediction performance of current long-term multivariate time series forecasting models. Specifically, it first models the periodic patterns of sequences through globally shared recurrent cycles and then predicts the residual components of the modeled cycles. Extensive experiments are conducted to evaluate the proposed method.
Strengths: 1. The proposed method is a model-agnostic solution applicable to different kinds of models.
2. Although it is simple, it is able to achieve good performance improvement in many cases.
3. Extensive experiments are conducted to evaluate the proposal.
Weaknesses: 1. It seems that the proposal (CycleNet) does not work well for complex datasets, e.g., Traffic. It is better to show more results on the same kind of datasets like the PEMS datasets used in the iTransformer paper.
2. Section 3 is not well written and lacks a lot of details, especially about the Learnable Recurrent Cycles. The authors may consider reorganizing Section 3 and Appendix A.1 to make Section 3 clearer.
Technical Quality: 3
Clarity: 2
Questions for Authors: Please refer to the weaknesses.
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Thank you for your valuable comment!**
> **W1:** It seems that the proposal (CycleNet) does not work well for complex datasets, e.g., Traffic. It is better to show more results on the same kind of datasets like the PEMS datasets used in the iTransformer paper.
The current CycleNet did not achieve SOTA on the Traffic dataset because **it is a simple single-channel modeling method that uses the proposed RCF technique to achieve a balance of performance and efficiency.** Its backbone for prediction is only a single Linear layer or a dual-layer MLP. For scenarios like Traffic, which exhibit spatiotemporal characteristics and temporal lag characteristics, more complex multi-channel relationship modeling is indeed required.
*As we know, traffic scenarios may experience sudden surges in flow (i.e., extreme points), and the surge at one intersection can affect the flow at other intersections over time.* The table below shows the average number of extreme points per channel, counted with Z-Score > 6:
| Dataset | Traffic | Electricity | Solar | ETTh1 |
| ---------- | ------- | ----------- | ----- | ----- |
| Avg. Count | 63.9 | 0.5 | 0 | 0 |
*We can see that the Traffic dataset has significantly more extreme points than other datasets.* In this case, models that fully model inter-channel relationships, like iTransformer, can accurately predict the flow at other intersections after an extreme point appears. In contrast, univariate models or over-parameterized multivariate models are less capable in such situations. This is why, in Table 2, iTransformer significantly outperforms other models on the Traffic dataset.
**However, it is also notable that, aside from iTransformer, CycleNet still significantly outperforms other models**. Especially, this result is achieved with CycleNet's backbone being simple linear and MLP layers. Therefore, returning to CycleNet's core contribution, which is exploring a simpler yet effective use of periodicity and establishing a method that balances performance and efficiency, it is undoubtedly successful.
Additionally, based on your suggestion, **we have added results of CycleNet on the PEMS datasets.** Here, it is essential to clarify that although PEMS and Traffic are both transportation datasets, PEMS has significantly fewer extreme points than Traffic:
| Dataset | Traffic | PEMS03 | PEMS04 | PEMS07 | PEMS08 |
| ---------- | ------- | ------ | ------ | ------ | ------ |
| Avg. Count | 63.9 | 0.9 | 0.1 | 3.5 | 4.8 |
Thus, in this case, CycleNet is expected to perform better on PEMS than on Traffic. Below are the MSE comparison results of CycleNet and other models in the scenario where the lookback length is 96 and the prediction horizon is 12:
| Dataset | CycleNet/MLP | CycleNet/Linear | iTransformer | PatchTST | Crossformer | TimesNet | DLinear | RLinear |
| ------- | ------------ | --------------- | ------------ | -------- | ----------- | -------- | ------- | ------- |
| PEMS03 | **0.066** | 0.080 | 0.071 | 0.099 | 0.090 | 0.085 | 0.122 | 0.126 |
| PEMS04 | **0.078** | 0.089 | **0.078** | 0.105 | 0.098 | 0.087 | 0.148 | 0.138 |
| PEMS07 | **0.062** | 0.075 | 0.067 | 0.095 | 0.094 | 0.082 | 0.115 | 0.118 |
| PEMS08 | 0.082 | 0.091 | **0.079** | 0.168 | 0.165 | 0.112 | 0.154 | 0.133 |
**In this case, CycleNet/MLP performs on par with iTransformer, which models multi-channel relationships. Even the CycleNet/Linear, with a single linear layer as the backbone, outperforms other deep nonlinear models.** Therefore, these results further validate the effectiveness of the proposed RCF technique and demonstrate that the CycleNet model achieves a balance of performance and efficiency. *We will include these analyses and full experimental results in the revised paper.*
> **W2:** The Section 3 is not well written and lacks a lot of details, especially about the Learnable Recurrent Cycles. The authors may consider reorganizing Section 3 and Appendix A.1 to make Section 3 clearer.
Thank you for pointing out this issue. We will carefully optimize Section 3 in the revised paper to make it clearer for readers. Additionally, **we have supplemented it with a schematic figure** in *Figure 1 of the attached pdf*, which more intuitively describes the workflow of the recurrent cycle $Q$. *We will add this schematic figure to Section 3 of the revised paper.*
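In the meantime, the workflow of the recurrent cycle $Q$ can also be illustrated in code. The following is a toy numpy sketch under simplifying assumptions (in CycleNet, $Q$ is learned jointly with the backbone by gradient descent; here it is a simple per-phase average, and the "backbone" trivially predicts zero residual), intended only to show the align-subtract-predict-restore workflow:

```python
import numpy as np

W = 24                                    # assumed cycle length
t = np.arange(10 * W)
series = np.sin(2 * np.pi * t / W) + 0.01 * np.cos(t)  # periodic signal + small residual

# Stand-in for the learnable recurrent cycle Q: per-phase average of the history.
Q = series.reshape(-1, W).mean(axis=0)

lookback, horizon, start = 2 * W, W, 3 * W
x = series[start:start + lookback]

# Align Q with the window's phase, then strip the cycle from the input.
phase = start % W
cycle_in = Q[(phase + np.arange(lookback)) % W]
residual_in = x - cycle_in                # what the Linear/MLP backbone would see

# Trivial "backbone" (predicts zero residual), then restore the aligned cycle.
phase_out = (start + lookback) % W
forecast = Q[(phase_out + np.arange(horizon)) % W]

truth = series[start + lookback:start + lookback + horizon]
print(float(np.abs(forecast - truth).max()) < 0.05)  # True: the cycle explains most of the signal
```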
**Thank you again for your valuable review, and we hope our response can address your concerns.**
---
Rebuttal 2:
Comment: Thank you for the responses. Could you give the full results on the PEMS dataset where the lookback length is 96 and the prediction horizon is {12, 24, 48, 96} (only the results of CycleNet, iTransformer and SCINet are OK).
*Sorry, the prediction horizon should be {12, 24, 48, 96} instead of {12, 24, 36, 48} according to Table 9 in iTransformer's paper.
---
Rebuttal Comment 2.1:
Comment: **Dear Reviewer SdEc,**
Thank you for your questions.
We have provided the full results based on your requested settings. I apologize for the time it took to conduct these experiments. We will include the full results, including comparisons with other models, in the revised paper.
As mentioned earlier, CycleNet is a simple, single-channel modeling method that uses only a Linear layer or a shallow MLP as the backbone. **Its purpose is to validate the proposed RCF technique, demonstrating that even with a simple backbone, combining RCF can achieve state-of-the-art prediction performance** (*except in the traffic scenario*) with very low computational overhead, balancing performance and efficiency.
In scenarios like traffic, where spatiotemporal relationships need to be considered, independent channel modeling methods (including PatchTST, etc.) may struggle to fully capture the dynamics, necessitating additional multichannel relationship modeling techniques, such as those employed by iTransformer. Therefore, it is reasonable that the simple CycleNet has certain limitations in these spatiotemporal scenarios. *We had pointed out these limitations when analyzing the experimental results in the original submission and further elaborated on them in the limitations and future work sections.*
In fact, iTransformer and SCINet are powerful models that achieve the best performance on the PEMS dataset. **The fact that CycleNet can nearly match their performance with just a simple backbone and independent channel modeling is noteworthy.** CycleNet’s backbone is merely a two-layer MLP, without any further design or deep stacking. (Due to length constraints, the results are attached in the next comments.)
Moreover, *when the RCF technique is removed from CycleNet, its performance drops significantly*, demonstrating that RCF is a major contributor to narrowing the gap between the shallow MLP and these state-of-the-art models. *When the proposed RCF technique from CycleNet is integrated into iTransformer*, which already achieves state-of-the-art performance, *iTransformer’s predictive accuracy is further enhanced*.
**Overall, for a basic two-layer MLP backbone, RCF brings about a 28% improvement in MSE and a 16% improvement in MAE.** **For iTransformer, which currently leads in the field, RCF brings an additional 4.9% improvement in MSE and 2.7% in MAE.** *This further validates the effectiveness of our RCF technique, which is our core contribution: a simple and novel method for better extracting periodicity in time series data.* In addition to improving predictive accuracy, it can also serve as a novel decomposition method for helping us to further analyze the patterns present in time series data (as shown in Figure 3 in the paper).
**I hope this addresses your concerns. Thank you again for your time and for reviewing our paper.**
---
Reply to Comment 2.1.1:
Comment: Due to length constraints, the results are attached here:
| | | CycleNet/MLP | | iTransformer | | SCINet | | CycleNet W/o. RCF | | iTransformer W/. RCF | |
| :----: | :--: | :-------: | :-------: | :----------: | :-------: | :-------: | :-------: | :---------------: | :---: | :------------------: | :-------: |
| | | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE |
| PEMS03 | 12 | *0.066* | 0.172 | 0.071 | 0.174 | 0.066 | *0.172* | 0.077 | 0.186 | **0.064** | **0.170** |
| | 24 | 0.089 | 0.201 | 0.093 | 0.201 | *0.085* | *0.198* | 0.116 | 0.228 | **0.084** | **0.194** |
| | 48 | 0.136 | 0.247 | *0.125* | *0.236* | 0.127 | 0.238 | 0.181 | 0.289 | **0.116** | **0.228** |
| | 96 | 0.182 | 0.282 | *0.164* | *0.275* | 0.178 | 0.287 | 0.234 | 0.336 | **0.163** | **0.268** |
| PEMS04 | 12 | 0.078 | 0.186 | 0.078 | 0.183 | **0.073** | **0.177** | 0.092 | 0.201 | *0.075* | *0.182* |
| | 24 | 0.099 | 0.212 | 0.095 | 0.205 | **0.084** | **0.193** | 0.133 | 0.248 | *0.089* | *0.201* |
| | 48 | 0.133 | 0.248 | 0.120 | 0.233 | **0.099** | **0.211** | 0.203 | 0.314 | *0.110* | *0.225* |
| | 96 | 0.167 | 0.281 | 0.150 | 0.262 | **0.114** | **0.227** | 0.257 | 0.357 | *0.142* | *0.256* |
| PEMS07 | 12 | **0.062** | **0.162** | 0.067 | 0.165 | 0.068 | 0.171 | 0.073 | 0.177 | *0.063* | **0.162** |
| | 24 | *0.086* | 0.192 | 0.088 | *0.190* | 0.119 | 0.225 | 0.116 | 0.226 | **0.078** | **0.181** |
| | 48 | 0.128 | 0.234 | *0.110* | *0.215* | 0.149 | 0.237 | 0.201 | 0.301 | **0.100** | **0.202** |
| | 96 | 0.176 | 0.268 | *0.139* | 0.245 | 0.141 | *0.234* | 0.287 | 0.364 | **0.126** | **0.230** |
| PEMS08 | 12 | 0.082 | 0.185 | *0.079* | *0.182* | 0.087 | 0.184 | 0.094 | 0.199 | **0.076** | **0.179** |
| | 24 | 0.117 | 0.226 | *0.115* | *0.219* | 0.122 | 0.221 | 0.151 | 0.255 | **0.108** | **0.213** |
| | 48 | **0.169** | 0.268 | *0.186* | **0.235** | 0.189 | 0.270 | 0.231 | 0.312 | 0.188 | *0.238* |
| | 96 | 0.233 | 0.306 | **0.221** | *0.267* | 0.236 | 0.300 | 0.332 | 0.380 | *0.226* | **0.265** | | Rebuttal 1:
Rebuttal: **Dear AC and Reviewers,**
**Thank you very much for your time and effort in reviewing our submission.** The valuable comments provided are highly beneficial for improving the quality of our paper.
In this paper, **we propose the RCF technique**, which *utilizes learnable recurrent cycles to explicitly model the inherent periodic patterns within time series data*, followed by predicting the residual components of the modeled cycles. The RCF technique significantly enhances the performance of basic (or existing) models. **CycleNet** (which *combines RCF with basic Linear or MLP models*) **achieves consistent state-of-the-art performance** across multiple domains and offers **significant efficiency advantages**.
Overall, the four reviewers highly recognize the contributions of our submission, including comments such as "**Novel approach"**, "**Simple and effective**", "**Thorough discussion of results**", and "**Well-written with good clarity**". At the same time, the four reviewers have provided specific suggestions to improve the quality of our paper:
- **Reviewer SdEc** suggested supplementing experimental results on the PEMS dataset, and **we have supplemented these results** and demonstrated that the proposed method **remains very effective in this scenario**. In addition, Reviewer SdEc also suggested enhancing the description of the core technique proposed, and we will meticulously improve this part of the description in the final paper, and supplement an intuitive illustration to further explain the proposed technique (as shown in *Figure 1 of the attached pdf*).
- **Reviewer 24Zo** suggested adding comparisons with recent related work, and we have included this comparison in *Table 1 of the attached pdf*, **demonstrating the superiority of our methods**. Reviewer 24Zo also suggested showing more visualization results under different configurations, and **we have supplemented these results** in *Figure 2 of the attached pdf*. Additionally, Reviewer 24Zo raised more specific discussions and many detailed suggestions, including more discussion and some proof-reading. We will carefully revise the paper according to these suggestions.
- **Reviewer zzRb** suggested supplementing how the proposed method performs differently with different cycle lengths. **We have included this part of the experiment and discussion**. Additionally, Reviewer zzRb raised some issues about the introduction and results, and we have thoroughly analyzed these in the response.
- **Reviewer EqMa** provided suggestions for optimizing the writing, including notation consistency, figure completeness, and supplementary schematic figures. We will revise the paper and **have included the required schematic figure** in *Figure 1 of the attached pdf*. Additionally, Reviewer EqMa raised the issue of how extreme points affect the RCF technique, and we have **conducted an in-depth analysis**.
**Finally, thank you again for your valuable review.** We hope our response can further address your concerns, and we will carefully revise the paper according to the review.
Sincerely,
The Authors of Submission 9084
Pdf: /pdf/52fd25839cac2b96c5ceecb0b312bbf4c56b2af2.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
SplitNeRF: Split Sum Approximation Neural Field for Joint Geometry, Illumination, and Material Estimation | Accept (poster) | Summary: This paper tackles the problem of inverse rendering, which reconstructs geometry, material, and environmental lighting from a set of posed images with fixed lighting. It proposes two contributions: 1) representing pre-integrated illumination as a single MLP, 2) approximating self-occlusion on pre-integrated lighting and use it to supervise an occlusion MLP to disentangle shadows and materials.
The proposed method achieves state-of-the-art results while being very fast to train (less than one hour).
Strengths: 1. The idea of representing pre-integrated illumination as MLP is interesting. The regularization is particularly novel and clever. I really like the way how pure specular MLP (roughness=0) is used to regularize the training of the illumination MLP (Eq. 5)
2. The proposed occlusion factor estimation method is also very interesting and novel. Instead of ambient occlusion, the paper uses simple trick to factor out occlusion as an independent scalar for light integral. This scalar can be computed with MC integration and is distilled into a neural network for efficient inference.
3. The quantitative result shows that the improvement over SOTA is significant, especially on relatively diffuse scene (NeRFactor dataset).
Weaknesses: 1. More datasets could be tested, e.g. the TensorIR dataset.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The paper mentions that the relighting results were obtained via Blender's PBR shader. Does the shader use global illumination? If so, it would be unfair for some of the baselines (TensorIR, NMF) as they can only render direct illumination. A fairer way would be to run another experiment where all methods do not use global illumination.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations and potential negative societal impact are discussed properly.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: The shader uses global (indirect) illumination. We decided to evaluate using Blender's PBR shader, as done in previous works, since our aim is to generate relightable meshes for use in existing rendering pipelines. Evaluating the rendering quality from one such pipeline is the most direct way of benchmarking that goal. This is especially important in the context of radiance fields, since the volumetric formulation, supervised mostly through rendering, allows artifacts in 3D to be hidden in renders. For example, material properties could be distributed along rays near the surface of objects, leading to good quality renders but bad quality 3D representations. Both TensoIR and NMF take into account indirect illumination, although they restrict its calculation to two ray bounces.
Strengths: The paper proposes a method to approximate the effect of self-occlusion to improve material estimation.
Weaknesses: - The proposed method, particularly in terms of material and lighting representation, is trivial.
- The correctness of the decomposed material is flawed in the qualitative results. The albedo clearly has specular highlights and shadows baked in, and the metalness and roughness do not match the material in the ground truth.
- The occlusion introduced in the paper to improve material estimation is not robust enough to validate its contribution.
Technical Quality: 2
Clarity: 2
Questions for Authors: In the discussion of occlusion loss, why is only the albedo prediction addressed? The paper claims that the occlusion is designed to improve the overall material estimation.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 1
Limitations: The proposed method decomposes lighting and material using surface reflection. However, real-world objects often have multiple material layers, such as the car showcased in the qualitative results. The clear coat reflects stronger specular light, while the lower paint layer includes both specular and diffuse reflections. The proposed model does not account for these complexities.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We only address the albedo prediction since albedo is the only material property that is shared across most commonly used BRDF models. The synthetic datasets we rely on come from 3D models which were hand-designed using a variety of complex BRDF models with properties which can't be directly translated into our model's "metalness" and "roughness". Due to this, there is no ground truth "metalness" or "roughness" to evaluate against. Qualitatively, it is easier to visualize the effects of the occlusion loss on albedo due to its colored nature, which makes it more intuitive to evaluate.
Strengths: 1. The method is efficient and achieves high-quality relighting results after only about an hour of training on a single NVIDIA A100 GPU, which is highly efficient compared to existing methods.
2. By using Monte Carlo sampling for regularization, the method accurately models pre-integrated lighting and self-occlusion, leading to high-fidelity reconstruction of scene geometry and material properties. The integration of split sum approximation into NeRF pipelines is novel and effective, allowing the method to disentangle environmental lighting from material properties.
3. The method demonstrates competitive performance on both synthetic and real datasets, showing its state-of-the-art performance in material decomposition and relighting.
Weaknesses: I do not have a major concern about this paper- the technical claims are sound and the paper is overall well presented. Several minor points:
1. Adding material editing visualizations in the paper could further strengthen the results and inspire downstream applications.
2. Normals could contain significant errors in, e.g., the lego and coffee scenes. Can you provide more insight and analysis into this?
Technical Quality: 4
Clarity: 3
Questions for Authors: See weaknesses 1, 2.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Limitations discussed in the conclusions section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Estimating geometry together with materials and illumination is a very complex and unconstrained problem. Because of this, the optimization can sometimes get stuck in local minima. We believe that is happening for those scenes. For example, it is possible to model reflections via small variations in geometry rather than higher frequency changes in the illumination. We hypothesize that is happening in the 'coffee' scene.
---
Rebuttal 2:
Comment: Thanks for the response. However I do not think the authors addressed my questions in the initial review, and agree with reviewer h9Hn that there could be flaws in occlusion handling. I'm leaning towards acceptance while acknowledging the paper could use some further improvements. | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Fractal Patterns May Illuminate the Success of Next-Token Prediction | Accept (poster) | Summary: This paper provides a detailed application of ideas from fractal geometry to natural language data, using language models to compute the relevant information-theoretic properties. They find, in particular, tell-tale evidence of self-similarity (common structure across scales) and long-range dependencies. A particularly interesting result, in my opinion, is that using some of the estimated fractal parameters can help predict LMs' downstream performance over and above what can be predicted from raw LM performance (bits-per-byte), suggesting that some of this performance can be explained by their discovery of this fractal structure. While the paper can be a bit dense and covers a huge range of both theoretical background and experimental results, which makes it a little hard to read, I do believe that it pays off to understand and makes a nice contribution that will be of interest to many people in the field.
Strengths: * Provides a thorough and detailed application of fractal geometry to natural language data.
* Use of the new fractal metric helps predict downstream task performance more than just BPB alone.
* Shows that the main findings are robust to the choice of LLM that is used for estimating information-theoretic properties of text.
Weaknesses: * Could cite more literature about e.g. dependency lengths (https://doi.org/10.1073/pnas.1502134112) and other known properties of natural language. For instance, "duality of patterning" (https://doi.org/10.1515/langcog-2012-0015) has been used to refer to the fact that morphemes->words and words->sentences exhibit similar structural properties, which is akin to self-similarity.
* Although the paper is mostly self-contained, it can be very dense for readers not very familiar with the mathematics of fractals.
Technical Quality: 3
Clarity: 3
Questions for Authors: * Lines 46-47: if the reader is not familiar with S, it's not clear what the value means and why it has the implications that it does. Can you explain more at this point or move these remarks to a bit later?
* Have you checked whether a power-law is genuinely the best fit to the data, versus other laws that often resemble them (e.g. linear in log-log)? See e.g. https://doi.org/10.1126/science.1216142
* Can you say more about the use of LLMs to compute and represent the information-theoretic quantities in this context? As a way of making sure that we are learning about language in particular and not just artifacts/properties of the models.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: Yes; I appreciated especially the mention of the results being English-only
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the careful and insightful feedback. We especially appreciate the positive assessment of the thoroughness, originality, and importance of our work. We hope that our rebuttal below addresses all of the reviewer's questions; we are happy to provide more details and look forward to the reviewer's response to our rebuttal.
**Additional References**
Thank you for the great suggestions. We will add those references along with a brief discussion in the related works section.
**Clarity of writing**
We will add more clarity in the revised version of the paper. If there are any specific suggestions for restructuring or clarifying some points, please let us know and we would gladly incorporate them into the paper.
We will add more explanation about S and what it means prior to the discussion in Lines 46-47. Informally, a smaller value of S indicates a slower decay in the auto-correlation function and more fractal structure.
**Power law fit**
This is an important point and we appreciate the suggested reference. In our experiments, we observe a near perfect linear fit on a log-log plot over more than two orders of magnitude (Figures 2 and 3), in agreement with what (Stumpf and Porter, 2012) argued for. (Stumpf and Porter, 2012) wrote: “As a rule of thumb, a candidate power law should exhibit an approximately linear relationship on a log-log plot over at least two orders of magnitude in both the x and y axes.” This is exactly what we observe. We will clarify this point in the revised version of the paper.
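To make the rule of thumb concrete, here is a minimal sketch of checking linearity on a log-log plot (illustrative only, on synthetic power-law data; this is not the estimation code used in the paper):

```python
import numpy as np

def loglog_linearity(x, y):
    """Fit a line to (log10 x, log10 y) and report slope, R^2, and span.

    A candidate power law y = beta * x**(-c) is linear in log-log
    coordinates; a high R^2 over >= 2 orders of magnitude in x
    supports the power-law hypothesis (Stumpf and Porter, 2012).
    """
    lx, ly = np.log10(x), np.log10(y)
    slope, intercept = np.polyfit(lx, ly, 1)
    pred = slope * lx + intercept
    ss_res = np.sum((ly - pred) ** 2)
    ss_tot = np.sum((ly - ly.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    decades = lx.max() - lx.min()   # orders of magnitude spanned in x
    return slope, r2, decades

# Synthetic power law y = 2 * x**(-0.7) with mild multiplicative noise,
# spanning three orders of magnitude in x.
rng = np.random.default_rng(0)
x = np.logspace(0, 3, 50)
y = 2.0 * x ** -0.7 * rng.lognormal(0.0, 0.05, x.size)
slope, r2, decades = loglog_linearity(x, y)
```

On data that genuinely follows a power law, the recovered slope matches the exponent and R^2 stays close to 1 across the full span.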
**Information-theoretic complexity**
Thank you for raising this point. Please see our general comment above on why we believe an information-theoretic complexity is meaningful for our analysis. We will clarify this more in the revised version of the paper.
**Follow-up**
We are grateful again for your detailed and constructive feedback. If we have satisfactorily answered your questions, we hope you would consider revising your scores. Otherwise, please let us know if there are any other questions or concerns, so we can respond to them during the discussion period.
---
Rebuttal Comment 1.1:
Title: Follow up
Comment: Dear reviewer,
Thank you again for the detailed and constructive comments.
As the discussion period is about to end, we would like to ensure we've addressed all of your questions and concerns. If you feel we have satisfactorily responded, please let us know. If you have any further questions, we are happy to address them before the discussion period closes.
Sincerely | Summary: This paper introduces a new perspective: that language is self-similar, and that predictability and self-similarity together imply long-range dependence, based on empirical analysis across different scales of LMs and information-theoretic views. This new perspective may enable us to understand the strong capabilities of current causal LLMs. The authors study three parameters, the self-similarity (Hölder) exponent, the Hurst parameter, and the fractal dimension, as well as the Joseph exponent. They also introduce a new metric that can more precisely approximate downstream performance than BPB.
Strengths: - This paper provides some new interesting perspectives to advance our understanding of how LLMs acquire such strong capabilities after large-scale pre-training using simple autoregressive training objective.
- They conduct large-scale analysis using LMs with different scale in three different model families.
- They also empirically found that a median Hurst exponent can be a more reliable indicator of downstream performance than perplexity-based BPB. While perplexity has shown to strongly correlated with downstream performance, prior studies also show their limitations on predicting downstream performance, and this work may encourage future work to use this new metric instead.
Weaknesses: - To my understanding, this work views language as a mere sequence of negative log probabilities, while prior studies in linguistics / NLP often consider much richer structures of language, and I am not fully convinced of the validity of the analysis.
- While the analysis includes three models, none of the models' checkpoints are publicly available (i.e., PaLM, PaLM 2, and their newly trained T5 decoder-only models), and follow-up work may not be able to reproduce the results. I'm curious why the authors didn't test other models with openly available checkpoints such as Llama 2, 3 / OLMo / Pythia.
- Overall, I found the paper a bit hard to follow (e.g., the Introduction discusses prior literature in depth before providing a high-level overview or motivation, and the experimental setup comes right after the preliminaries). Some restructuring of sections or rewriting may improve the readability of the paper.
Technical Quality: 3
Clarity: 2
Questions for Authors: - Did authors try other models with publicly available model checkpoints so that people can reproduce the results?
- Why did authors train T5-decoder model from scratch, instead of using T5 variant that went through next token prediction training (e.g., T5 LM adapt)?
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors provide limitation sections that include some of the limitations I was thinking about.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the careful and insightful feedback. We especially appreciate the positive assessment of the thoroughness, originality, and importance of our work. We hope that our rebuttal below addresses all of the reviewer's questions; we are happy to provide more details and look forward to the reviewer's response to our rebuttal.
**Information-theoretic complexity**
Thank you for raising this point. Please see our general comment above on why we believe an information-theoretic complexity is meaningful for our analysis. We will clarify this more in the revised version of the paper.
**Reproducibility**
This is an important point and we thank the reviewer for raising it. Our response is threefold.
First, we have run new experiments using the released gemma-2b checkpoint (https://huggingface.co/google/gemma-2b) to compare it with the models used in the paper. Due to the short rebuttal time, we were able to use the smallest gemma model and a subset of the Pile validation split only. But, as discussed in the general comment above, we observe a good agreement overall. Gemma 2B is much smaller than all of the models we have used in our paper. Yet, we generally observe similar conclusions. Please see our general response above for details. Based on these findings, we do believe our results will continue to hold using publicly available checkpoints. We plan to continue running the analysis on the full validation data split of the Pile and include those results in the supplementary materials of the paper.
Second, the reason we included T5 trained from scratch is to ensure that our findings are reproducible. We believe that by training a model from scratch instead of using released checkpoints, we are more confident in our findings. We have provided all of our training details for reproducibility. We hope this answers your question.
Third, to ensure reproducibility and to encourage the community to explore these fractal properties in language, we also release code for calculating all fractal parameters.
**Clarity of writing**
We will add more clarity in the revised version of the paper. If there are any specific suggestions for restructuring or clarifying some points, please let us know and we would gladly incorporate them into the paper.
**Follow-up**
We are grateful again for your detailed and constructive feedback. If we have satisfactorily answered your questions, we hope you would consider revising your scores. Otherwise, please let us know if there are any other questions or concerns, so we can respond to them during the discussion period.
---
Rebuttal Comment 1.1:
Title: Follow up
Comment: Dear reviewer,
Thank you again for the detailed and constructive comments.
As the discussion period is about to end, we would like to ensure we've addressed all of your questions and concerns. If you feel we have satisfactorily responded, please let us know. If you have any further questions, we are happy to address them before the discussion period closes.
Sincerely | Summary: The paper draws connections between fractal patterns and language by evaluating properties such as self-similarity and long-range-dependency. Using a range of LLMs, they estimate the Holder exponent, Hurst parameter and fractal dimension for language from different domains, including web text, code, and math problems. They show that these parameters may have connections with LLM learning ability, demonstrating that using the median Hurst parameter can improve prediction of downstream model performance over only using bits-per-byte.
Strengths: - The authors estimate fractal parameters across a range of datasets and LLMs, showing that they are fairly robust to choice of LLM and domain. For domains with significant deviation, such as DM-Mathematics, the authors are able to attribute this to the dataset's lack of long-range dependency.
- The authors show that the Hurst parameter captures useful information for predicting downstream performance that is not captured by BPB alone.
Weaknesses: - Even though fractal parameters were found to be consistent across LLMs, I think their stated contribution (this 'establish[es] that language is self-similar and long-range-dependent') is far too strong a claim. As the authors mention, their method ignores many important facets of language, such as semantic nuance, and is reliant on the current state of LLMs' ability to model language.
- I am unclear how future work can build upon these insights about language, especially that 'exploiting self-similarity more directly' could lead to further LLM optimization.
Technical Quality: 2
Clarity: 3
Questions for Authors: I don't follow the potential connection between self-similarity and the success of parameter sharing? (mentioned in Limitations section)
Typos:
- Line 131: such bits-per-byte (BPB)
- Line 132: BPB is a widely used as a
- Line 272: such as One example
Confidence: 2
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Yes, the authors discuss that their analysis is limited to English data and fails to capture the semantic meaning of language. However, as mentioned in Questions section, the possible connection between self-similarity and parameter-sharing is not obvious to me and could be further explained.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the careful and insightful feedback. We especially appreciate the positive assessment of the thoroughness, originality, and importance of our work. We hope that our rebuttal below addresses all of the reviewer's questions; we are happy to provide more details and look forward to the reviewer's response to our rebuttal.
**Information-theoretic complexity**
Thank you for raising this point. Please see our general comment above on why we believe an information-theoretic complexity is meaningful for our analysis. We will clarify this more in the revised version of the paper.
**Future work and parameter sharing**
We are currently exploring several directions for future work. For example, do texts generated by LLMs have a self-similar structure? What is the impact of RLHF on the self-similar structure of generated texts? Can we develop techniques during training to encourage models to mimic the self-similarity we see in language?
On the architecture side, self-similarity in language means that linguistic patterns repeat at different scales. This mirrors how parameter sharing works: the same set of parameters (attention mechanisms) is applied across different positions in a sequence, assuming similar operations are needed to understand the language at different levels. This is what we mean by saying that the success of parameter sharing techniques, such as ALBERT, may be partially explained by self-similarity. We will explain this more in the revised version of the paper. We have some preliminary experiments, for instance, that show that parameter-sharing is more effective for language than in vision (presumably because language is self-similar). But, it is unclear yet what the best way to exploit self-similarity in the architecture is. We believe these are exciting areas for future research.
**Typos**
Thank you for pointing them out. We will fix them in the revised version of the paper.
**Follow-up**
We are grateful again for your detailed and constructive feedback. If we have satisfactorily answered your questions, we hope you would consider revising your scores. Otherwise, please let us know if there are any other questions or concerns, so we can respond to them during the discussion period.
---
Rebuttal Comment 1.1:
Title: Follow up
Comment: Dear reviewer,
Thank you again for the detailed and constructive comments.
As the discussion period is about to end, we would like to ensure we've addressed all of your questions and concerns. If you feel we have satisfactorily responded, please let us know. Otherwise, please let us know your remaining concerns so we can address them before the discussion period closes.
Sincerely
---
Rebuttal Comment 1.2:
Comment: Thanks for your response and clarifications. I will maintain my current score.
---
Reply to Comment 1.2.1:
Title: Thank you
Comment: Thank you for engaging with us. We would appreciate it if you let us know of any remaining concerns so we can respond to them.
With regard to the "poor" contribution rating, we would like to respectfully emphasize the novelty and technical quality of our work.
- We believe our work is *quite original*: we are not aware of any prior work that has attempted to study the fractal nature of language using LLMs. We offer quantitative evidence for the self-similar and long-range-dependent nature of language.
- We demonstrate an *interesting* connection between the Hurst parameter and the downstream performance of LLMs. This is by no means obvious, nor could it have been expected in advance.
- We have conducted an *extensive* empirical analysis across various domains and architectures, showcasing the robustness of our findings. This includes downstream evaluations on popular benchmarks that cover many tasks (please see our response above), prompted using various strategies (e.g. direct and chain-of-thought with either 0, 3, 5, or 8 shots).
- Our work is *self-contained*: we provide the necessary mathematical background to explain how to calculate all fractal parameters, and we have strived to make it easy to read and follow.
- We include all details necessary to reproduce the results in the supplementary materials, in addition to releasing the code for calculating fractal parameters. We are also including results using gemma-2b as requested by reviewer cohu.
We agree that our work has some limitations, as one cannot answer all questions in a single paper (e.g. we focus on the English language alone). But, we respectfully disagree that the contribution of this work be considered "poor". We would appreciate it if you let us know of any remaining concerns so we can respond to them during the remainder of the discussion period.
Sincerely | Summary: In this paper, the authors try to reveal the existence of fractal structures in language modeling using recent language models based on next-token prediction. For that purpose, the authors rely on several aspects of fractal structures: self-similarity, long-range dependence, and information-theoretic complexity. The authors describe these aspects using metrics for typical phenomena under the assumption of fractal structure: the self-similarity exponent, the Hurst parameter, the fractal dimension, and the Joseph effect. The experimental results show that the characteristics of modeling ArXiv, GitHub, and Wikipedia texts in The Pile validation split, whose token sequences are longer than 4K, are consistent with their assumption. The analysis of downstream tasks (BBH, MMLU, and GSM8K) also shows that task-solving performance is predictable from language-modeling performance under their assumption, except when considering sequence lengths in training.
Strengths: - Providing an assumption that unifies the characteristics of interpreting language and language modeling under the widely used next-token prediction approach may help in solving various kinds of tasks related to natural language.
- The analysis is not restricted to language modeling; the authors also investigate the performance correlation between language modeling and downstream tasks based on their assumption.
Weaknesses: - The target language is limited to English. Thus, the validity of expanding the insights in the paper to other languages is uncertain.
- The assumed baseline in this analysis is a random sequence. The authors could use finite-state automata (FSAs) or context-free grammars (CFGs) to generate sequences that are random but closer to language. Moreover, to focus on the success of recent pre-trained language models, n-gram language models should have been prepared as baselines.
- How strongly the observed results fit the assumed distribution is not quantified mathematically, e.g., by model selection with information criteria such as Akaike's Information Criterion (AIC) or the Bayesian Information Criterion (BIC).
- There is a gap between the ability to mimic the characteristics of language and demonstrating the knowledge required to answer questions. Thus, discussing the correlation between language-modeling performance and downstream tasks is limited from this viewpoint. Hence, targeting generation tasks like summarization, story generation, and machine translation would be more suitable for deepening the analysis.
Technical Quality: 2
Clarity: 2
Questions for Authors: - In the analysis, the authors seem to focus only on the context length during training, even though inference over long token sequences is closer to the aspect of long-distance dependencies between tokens in language. What is the main reason for this decision?
[Comment] Regarding presentation style, showing the motivation or target phenomenon as the section or paragraph title instead of the method name, e.g., "Joseph effect" -> "Burstiness," may support the reader's understanding.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The limitation does not include the gap between language modeling and solving downstream tasks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the careful and insightful feedback. We hope that our rebuttal below addresses all of the reviewer's questions; we are happy to provide more details and look forward to the reviewer's response to our rebuttal.
**The baseline**
The baseline we use in Figure 1 is used for illustration purposes only, mainly to show what a self-similar process looks like. We do not use random sequences in our analysis. While we appreciate the suggestion of using n-gram models, our work focuses on the self-similar structure of natural language. Note in particular that the output of an n-gram model, by definition, does *not* have long-range dependence (LRD). So, conducting the same analysis on the output of n-gram models would not yield meaningful insights into the fractal structure of language, as n-gram models cannot generate long-range dependencies.
**Empirical fit**
Thank you for raising this point. We do demonstrate how well the power law fits the actual distribution in Figures 2 and 3. We observe a near-perfect linear fit in a log-log plot over at least two orders of magnitude. In addition, since the power law is of the form $\beta n^{-c}$, there are only two parameters to estimate ($\beta$ and $c$), so the AIC is also quite low, given the near-perfect fit and the small number of parameters.
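For illustration, here is a sketch of how such an information-criterion comparison could be run. The synthetic data and the helper `aic_of_fit` are assumptions for this example, not the paper's code:

```python
import numpy as np

def aic_of_fit(y, y_pred, k):
    """Gaussian AIC up to an additive constant: 2k + n*ln(RSS/n).

    Lower is better; k is the number of fitted parameters.
    """
    y, y_pred = np.asarray(y, float), np.asarray(y_pred, float)
    n = len(y)
    rss = np.sum((y - y_pred) ** 2)
    return 2 * k + n * np.log(rss / n)

# Synthetic power law beta * x**(-c): a line in log-log coordinates,
# so the power-law model has exactly 2 parameters.
rng = np.random.default_rng(1)
x = np.logspace(0, 2.5, 40)
y = 1.5 * x ** -0.6 * rng.lognormal(0.0, 0.03, x.size)
lx, ly = np.log(x), np.log(y)

line = np.polyfit(lx, ly, 1)        # power law: (slope=-c, log beta), k = 2
aic_power = aic_of_fit(ly, np.polyval(line, lx), k=2)

quad = np.polyfit(lx, ly, 2)        # richer 3-parameter alternative
aic_quad = aic_of_fit(ly, np.polyval(quad, lx), k=3)
```

With a genuinely power-law-distributed series, the extra curvature parameter of the quadratic buys almost no reduction in residuals, so the AIC penalty favors the two-parameter power law.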
**Evaluation**
We use three popular and quite diverse benchmarks (BBH, MMLU, GSM8K) and we also try various prompting strategies (e.g. direct and chain-of-thought with 0, 3, 5, 8 shots). These benchmarks are quite diverse and include tasks such as logical deduction, multi-step arithmetic, disambiguation, as well as general knowledge (e.g. history, computer science, law, sports, movies, and dates, etc). In line with your suggestions, they also include translation-related tasks, such as Translation Error Detection in BBH. So, they cover a broad spectrum of tasks. We hope this addresses your concern about the evaluation datasets, and we will clarify this in the revised version of the paper.
**Context length**
The main reason for including the discussion about the context length at training time in our analysis is because our findings suggest one intriguing possibility: that language models might benefit from being trained on long contexts even if short contexts are used at inference time. However, we did not find any evidence yet to support this. We chose to include this in the analysis section because we believe that mentioning negative results would still be very valuable to our community.
**Presentation**
Thank you for the great suggestion about the presentation style. We will definitely do that in the revised version of the paper.
**Follow-up**
We are grateful again for your detailed and constructive feedback. If we have satisfactorily answered your questions, we hope you would consider revising your scores. Otherwise, please let us know if there are any other questions or concerns, so we can respond to them during the discussion period.
---
Rebuttal Comment 1.1:
Title: Follow up
Comment: Dear reviewer,
Thank you again for the detailed and constructive comments.
As the discussion period is about to end, we would like to ensure we've addressed all of your questions and concerns. If you feel we have satisfactorily responded, please let us know. Otherwise, please let us know your remaining concerns so we can address them before the discussion period closes.
Sincerely | Rebuttal 1:
Rebuttal: We sincerely thank all reviewers for their insightful and constructive feedback. We appreciate the positive feedback on the thoroughness, originality, and importance of our work.
**Presentation**
We will incorporate the reviewers’ suggestions in the revised version of the paper. This includes providing missing references, clarifying the definition of S, and discussing the breadth of tasks covered in our evaluations, which include many tasks such as logical deduction, multi-step arithmetic, disambiguation, translation error detection, and general knowledge all prompted using various strategies (e.g. direct and chain-of-thought with either 0, 3, 5, or 8 shots). We believe our evaluations are quite thorough and diverse, and will clarify this more in the revised version of the paper.
**Public Checkpoints**
Please note that we have also applied the same methodology during the rebuttal period using the released gemma-2b checkpoint (https://huggingface.co/google/gemma-2b), in response to reviewer cohu's comments. Gemma 2B is much smaller than all of the models we have used in our paper, yet we generally observe similar conclusions. First, we observe a self-similar structure with near-perfect linear fits in log-log plots. Second, we also observe power laws using the rescaled-range analysis, with a Hurst exponent of about 0.7 in most domains except DM Mathematics (smallest Hurst exponent, about 0.58) and GitHub (largest Hurst exponent, about 0.82), in general agreement with the rest of the models. Please see the attached PDF file for the figures generated using Gemma 2B. These results are based on a subset of the Pile validation split due to the short time constraint for the rebuttal, so we plan to run the analysis on the full validation data and include those results in the supplementary materials of the paper.
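For readers unfamiliar with rescaled-range analysis, a simplified sketch of the estimator follows (illustrative only; the function and the white-noise input are our assumptions for this example, not the paper's pipeline):

```python
import numpy as np

def hurst_rs(x, min_n=8):
    """Naive rescaled-range (R/S) estimate of the Hurst exponent.

    For each window size n: split the series into blocks, compute the
    range R of cumulative mean-adjusted deviations and the block std S,
    then average R/S. The slope of log E[R/S] vs log n estimates H.
    H ~ 0.5 for white noise; H > 0.5 indicates long-range dependence.
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    ns, rs = [], []
    n = min_n
    while n <= N // 2:
        vals = []
        for start in range(0, N - n + 1, n):
            block = x[start:start + n]
            z = np.cumsum(block - block.mean())
            r = z.max() - z.min()
            s = block.std()
            if s > 0:
                vals.append(r / s)
        if vals:
            ns.append(n)
            rs.append(np.mean(vals))
        n *= 2
    slope, _ = np.polyfit(np.log(ns), np.log(rs), 1)
    return slope

rng = np.random.default_rng(0)
h_white = hurst_rs(rng.standard_normal(4096))  # roughly 0.5 (small-n bias pushes it up)
```

Note this naive estimator is biased upward at small window sizes; production analyses usually apply an Anis-Lloyd-style correction.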
**Information-theoretic complexity**
We discuss in Lines 84-91 why we believe this information-theoretic complexity is meaningful for our analysis. It corresponds to an intrinsic, irreducible description of language and the minimum compute overhead to comprehend/decode it. One piece of experimental evidence from psychology for why this characterization is natural for language comes from reading-time measurements, which turn out to correlate well with information-theoretic complexity. In addition, fractal behavior has a clear interpretation in this context: e.g., surprising paragraphs follow predictable paragraphs in a manner that is statistically similar to how surprising sentences follow predictable sentences. Please see the discussion and references in Lines 84-91. We will clarify this more in the revised version of the paper.
While our work focuses on this specific aspect of language modeling, we recognize the potential for future research to explore richer language structures, and we believe our work is a first step toward exploring the relationship between language modeling and fractals. In addition, our findings regarding the self-similarity of "surprise" in language and the connection between the Hurst parameter and downstream performance are both robust (as we show in the paper) and can be quite valuable in their own right.
**Follow-up**
If we have satisfactorily answered your questions, we hope you would consider revising your scores. Otherwise, please let us know if there are any other questions or concerns, so we can respond to them during the discussion period.
Pdf: /pdf/d3ef6714869d53d5a49c1e79d2ec7281c7717ee2.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
EGSST: Event-based Graph Spatiotemporal Sensitive Transformer for Object Detection | Accept (poster) | Summary: This paper introduces a novel event-based object detection network by processing event-based data as graph data. After incorporating an SSM that plays a selection role, graph convolutional neural networks, and various attention modules, the proposed pipeline achieves good results while retaining high efficiency. Ablation studies prove the effectiveness of the proposed modules. Although the overall results are outstanding, the writing is poor, making the paper hard to follow.
Strengths: 1. Processing event-based data as graph data to retain its spatiotemporal information is an interesting direction worth exploring.
2. The novel design of the SSM module suits the property of event-based data well.
3. The experiments illustrate the effectiveness and efficiency of the proposed method.
Weaknesses: 1. In section 3.3, the author claims that the SSM module mimics the human eye to output the degree of object dynamics in the event stream. This metric is measured by the number of events and the time span of each subgraph, since more events usually indicate faster object movement. However, this measurement does not consider huge objects in a static background, which also generate a large number of events at relatively low speed; this does not align well with the author's claim. Also, the definition of $f(\cdot)$ in the SSM is not clear; it is only given as an example by the author in line 176.
2. The $\pi$ function in section 3.3 is not introduced in detail either.
3. In line 302, the author mentions that the CNN module is removable. Where is this removable CNN module? Additionally, the TAC module, Linear ViT, and DETR all process frame-based data, and CNN is included in the Linear ViT as illustrated in Figure 2. Based on these observations, I don't think the framework is event-based.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. In line 294, the author indicates that the parameters involved in the forward computation vary dynamically due to the SSM module. Could the authors provide some statistical and qualitative results on the ratio of involved parameters to help readers better understand the role of SSM?
2. How is the gating signal produced from $F_{st}$ (line 215)? Could the author provide experimental results to prove the effectiveness of generating $Q$ from $F_{st}$ instead of $\mathcal{F}$?
3. In this paper, only quantitative results are shown, making me doubt whether I am reading a paper in the CV field, especially for the object detection task. Could the authors show more qualitative results? I think this is more valuable and significant than the data augmentation results which can be moved to the appendix.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: The authors are very transparent about their limitations on deploying the GNN into production.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive feedback. We have carefully revised the manuscript based on your suggestions, particularly regarding the clarity of function definitions and the roles of specific modules. We hope these revisions meet your expectations and ask for your reconsideration during the review process. We are committed to addressing any further questions or requirements during the discussion period.
**W1.** In response to the first issue you raised in the weaknesses section, we provide the following detailed replies and corrections:
1) To further illustrate the relationship between object and background dynamics, we have added an image of SSM in the rebuttal PDF, positioned near section 3.3 of the manuscript. Regarding the dynamics, large objects generate a large number of events, but their relatively slow movement speed results in a large time span. These events significantly influence the computation of the Event Global Motion (EGM), effectively reflecting the overall scene dynamics. Conversely, fast-moving objects, while generating a similar number of events, have a shorter time span, leading to a higher Event Local Motion (ELM).
2) We originally designed $f(\cdot)$ to quantify the dynamics in the event stream using the number of events and time intervals. However, our experiments indicate that a simple proportional function $f(x, y) = \frac{x}{y}$ adequately captures these dynamics without needing additional parameters from neural networks. To clarify and prevent misunderstandings, we will refine our explanation of this function in the revised manuscript, moving beyond examples to a clear definition.
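To make the proportional measure concrete, here is a toy sketch (the function name and event timestamps are hypothetical illustrations, not the authors' implementation) of $f(x, y) = \frac{x}{y}$ with $x$ the event count and $y$ the time span:

```python
def event_rate(timestamps):
    """f(x, y) = x / y: event count divided by time span (hypothetical helper)."""
    if len(timestamps) < 2:
        return 0.0
    span = max(timestamps) - min(timestamps)
    return len(timestamps) / span if span > 0 else 0.0

# Hypothetical illustration in milliseconds: a slow large object emits
# many events over a long span (moderate rate), while a fast object
# emits a similar count over a short span (high rate).
slow_object = [0.0, 10.0, 20.0, 30.0, 40.0]   # 5 events over 40 ms
fast_object = [0.0, 1.0, 2.0, 3.0, 4.0]        # 5 events over 4 ms
```

Under this measure, the fast object scores 10x higher than the slow one despite emitting the same number of events, matching the intended local-vs-global dynamics distinction.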
**W2.** Thank you for highlighting the unclear definition of function $\pi$ in our manuscript. To address this, we will enhance our description in the revised paper. Function $\pi$ performs aggregation operations—minimum, maximum, and mean—to enable lightweight processing with flexibility. In our setup, we primarily use the mean method to compute centroid coordinates of input data points. Our findings suggest that the choice of aggregation method does not significantly affect the overall detection performance. We will detail this analysis in the revised manuscript, providing ablation study results to clarify the effectiveness of function $\pi$.
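As an illustration of these aggregation choices, a minimal sketch with hypothetical 2D node coordinates (not the authors' code):

```python
import numpy as np

def pi_aggregate(points, mode="mean"):
    """Aggregate subgraph node coordinates with min, max, or mean.

    mode="mean" yields the centroid, the default described in the rebuttal.
    """
    pts = np.asarray(points, dtype=float)
    if mode == "mean":
        return pts.mean(axis=0)
    if mode == "min":
        return pts.min(axis=0)
    if mode == "max":
        return pts.max(axis=0)
    raise ValueError(f"unknown aggregation mode: {mode}")

pts = [(0.0, 0.0), (2.0, 0.0), (1.0, 3.0)]
centroid = pi_aggregate(pts)   # mean of the three points
```

All three modes are O(n) reductions over the subgraph, which is why the choice is cheap and, per the rebuttal's ablation, has little effect on detection performance.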
**W3.** Thank you for your comments.
1) To avoid any confusion about the CNN module, we will clearly mark this removable component in the updated Figure 2.
2) Regarding data processing being event-based or frame-based: Our method incorporates components from established object detection frameworks like YOLO and DETR but differs fundamentally in data processing and feature extraction. First, our feature extraction relies solely on graph structures to process event data, preserving rich spatiotemporal information, unlike traditional methods that compress events into frames. Second, when converting graph features to frame features, we implement event-level operations. By mapping each event's features to specific spatial locations, we ensure that every position within the frame contains detailed event-level information. This approach maintains the integrity of the event data throughout the frame-based output, significantly reducing the information loss typical of standard frame compression techniques.
**Q1.** In response to the first question, we recognize the need for more detailed explanations in our manuscript and offer the following clarifications:
1) Experimental Setup: We used a fixed batch size of 1 across 30,000 batches, activating the TAC module for about 34.4% of the total event data. These details will be included in section 4.4 to better illustrate the dynamics of the TAC module during operations.
2) Correction of Errors: We identified errors in the setup of our previous ablation studies: the shuffle option was incorrectly active, and the batch size was not consistently 1. This affected the numerical results in the "TAC Inactive" line; the corrected mAP is 42.1%, with a processing time of 4.3 ms. These corrections will be detailed in the revised manuscript.
**Q2.** Thank you for your attention to the details of our research methodology.
1) Flowchart Supplement: We have detailed the process flowchart of the TAC module in the revised Figure 2, which can be viewed in the PDF file.
2) Ablation Study Results: To verify the effectiveness of generating $Q$ from $F_{st}$, we designed an ablation study comparing the outcomes of generating $Q$ from $F_{st}$ and from $F$. The results show that generating $Q$ from $F_{st}$ with TAC always active increased mAP accuracy by 0.3%, while the time delay increased by only 0.01 ms with no additional parameters. Although there is a slight time delay, the improvement in accuracy is more significant than this minimal cost, so we believe that obtaining $Q$ from $F_{st}$ is an effective and worthwhile choice.
**Q3.** Thank you for your feedback. Based on your suggestion to include more qualitative results, we have implemented the following improvements:
1) SSM Visualization: We've added visualizations of the SSM module's output to the PDF, providing a clear view of object dynamics within scenes.
2) Qualitative Detection Results: We've incorporated several images showing detection outcomes in various real-world settings, highlighting our model's effectiveness in complex environments.
3) Data Augmentation Details: As suggested, we will move the data augmentation results to the appendix, focusing the main text more on our key contributions.
**Limitations:**
Deploying GNNs into production involves addressing practical challenges such as computational resources, inference performance, and data storage and management, requiring concerted efforts in both hardware and software.
---
Rebuttal Comment 1.1:
Comment: Dear authors,
Thanks for your response. Regarding W1, say there are two objects with the same speed, where one object is larger than the other one. In this condition, would the relative value of the larger object be larger than that of the other object? Then would the model neglect the relatively small object due to its smaller relative value? I think the provided qualitative results are not enough to resolve my concern, since the truck on the right side in Figure 2b has a larger N and smaller $\Delta t$, deserving a larger relative value. Instead, I want to see some comparisons between two objects, one of which has a larger shape and larger $\Delta t$. Since the rebuttal file can't be renewed, I hope the authors can explain this condition in words in more detail.
Also, if I didn't miss something, it seems that you didn't answer the first part of my question 2, i.e., how is the gating signal produced from $F_{st}$ (line 215)?
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your detailed examination of our method. Here are our responses to your questions:
1. Regarding the issue you mentioned about large and small objects moving at the same speed but having different relative values: We have largely mitigated this issue when constructing connected subgraphs by introducing inter-subgraph attention to capture relationships across subgraphs. Specifically, by setting a distance threshold $R$ and limiting the number of neighboring nodes, a single large object may consist of multiple connected subgraphs, and the larger the object, the more subgraphs it includes. Thus, the relative value of the same object is determined by the combined relative values of these subgraphs. Since multiple connected subgraphs can segment the integrity of the target, we have subsequently introduced an inter-subgraph Graph Attention Network (GAT) (in line 202), which strengthens the connections within subgraphs of the same object and weakens those between different objects. Additionally, the GAT does not overlook relatively smaller objects; instead, it adaptively assesses their importance within the overall context.
2. We have included a detailed flowchart of the Temporal Activation Controller (TAC) in Figure 1 of the rebuttal PDF. After obtaining $F_{st}$ through convolutional aggregation, it is processed by a combination of two well-designed convolutional neural networks, the Attention Network and the Gating Network, to determine the probability values (i.e., gating values) for each Feature Map. Finally, these values are fused within the Fusion module with the original Feature Maps. | Summary: The paper proposes a novel event-based graph spatiotemporal sensitive network (EGSST), which is the first work using Graph Transformer for object detection tasks on event cameras. This work primarily involves two key innovative modules: a spatiotemporal sensitivity module (SSM) and an adaptive temporal activation controller (TAC). Additionally, the integration of a lightweight, multi-scale Linear Vision Transformer (LViT) significantly enhances processing efficiency. The results demonstrate that the proposed EGSST achieves state-of-the-art (SOTA) performance, especially outperforming other graph neural networks for event-based object detection.
Strengths: i) Graph neural networks for event-based object detection is a promising solution to achieve low-latency in real-world applications.
ii) The proposed EGSST achieves SOTA performance, especially outperforming other GNNs for event-based object detection.
iii) The writing is straightforward, clear, and easy to understand.
Weaknesses: i) The authors claim this is the first Graph Transformer work on event-based object detection, yet it seemingly just links the feature maps generated by GNNs directly to DETR. Of course, I am certain that the authors have contributed to the handling of events in spatiotemporal GNNs. However, they may not have made any innovative contribution to the unique combination of GNNs and Transformers. Could the authors clarify the innovative aspects of combining the graph transformer with the other components?
ii) In Table 1, the authors should place the existing GNN-based methods for event-based object detection separately in the bottom few rows of the table. There are still several works that have not been surveyed and cited, such as DAGr in Nature 2024. In addition, comparing against the GNN methods shows that the proposed EGSST has good performance and inference time.
iii) The authors' use of DETR to regress the output of object detection results is not consistent with the YOLOX approach used by existing GNNs for event-based object detection. It is challenging to determine whether the DETR module has significantly improved performance. The authors should conduct an experiment with a YOLOX detection head to clarify this.
iv) In the ablation experiment, the authors should ablate important parameters in the proposed innovative modules (i.e., SSM and TAC). In addition, the expansion experiment on the accumulation of event windows in Table 5 takes up too much space, which may affect the length of other important experimental content.
v) The writing section needs further improvement. For example:
a. In related works, the author can write according to event-based object detection, graph neural networks for event data. This writing may better reflect the focus and novelty of the work.
b. The biggest advantage of GNNs in processing event data is low latency. The authors propose a method that basically reaches the millisecond level, which is very good. It is recommended to emphasize the low-latency advantage of the method more.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see the weaknesses and respond to each comment. Additionally, the authors should answer the following question: Could the authors plan to further expand this unimodal work to multimodal work by integrating GNN for event processing and Transformer for image processing in the future?
If so, please cite some multimodal object detection methods [1, 2, 3] using events and frames.
[1] Event-based vision enhanced: A joint detection framework in autonomous driving, ICME 2019.
[2] SODFormer: Streaming object detection with transformer using events and frames, TPAMI 2023.
[3] Low-latency automotive vision with event cameras, Nature 2024.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: A minor flaw is that the authors claim to be the first to apply Graph Transformer to event-based object detection, seemingly by merely linking feature maps generated by GNNs directly to DETR. It suggests that the authors revise a statement to be slightly more conservative.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1.** We greatly appreciate your evaluation and suggestions regarding our integration of Graph and Transformer technologies. In our integrated framework, there are two key aspects:
1) Interaction Mechanism between SSM and TAC: The Spatiotemporal Sensitivity Module (SSM) we designed processes event data based on the graph structure and directly influences the activation state of the Temporal Activation Controller (TAC). This design enables the TAC to precisely focus on the temporal dynamics of the event data, thereby enhancing processing efficiency and response speed.
2) Event-level Information Transformation: In converting graph features to the format required by Transformers, we employ event-level operations. By accumulating features of each event at their corresponding spatial locations, each location not only retains rich event information but also reflects underlying dynamic changes, which is critical for object detection in dynamic vision systems.
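The event-level graph-to-frame conversion described above can be sketched roughly as follows; this is a simplified illustration under our own naming, not the actual implementation:

```python
# Each event's feature vector is accumulated at its pixel location, so the
# resulting frame keeps per-event information instead of compressing the
# stream into a flat histogram.
def events_to_frame(events, height, width, channels):
    """events: list of (x, y, feature_vector) tuples."""
    frame = [[[0.0] * channels for _ in range(width)] for _ in range(height)]
    for x, y, feat in events:
        cell = frame[y][x]
        for c in range(channels):
            cell[c] += feat[c]          # accumulate, don't overwrite
    return frame

events = [(1, 0, [1.0, 2.0]), (1, 0, [0.5, 0.5]), (0, 1, [3.0, 0.0])]
frame = events_to_frame(events, height=2, width=2, channels=2)
assert frame[0][1] == [1.5, 2.5]   # two events mapped to the same pixel
assert frame[1][0] == [3.0, 0.0]
```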
Of course, we acknowledge the shortcomings pointed out by you in this section and in the later Limitations. Therefore, we have attempted to make our statements about the Graph Transformer more conservative, as follows:
- "Our model effectively combines the advantages of Graph and Transformer technologies. Based on event data, it is lightweight, fast, and accurate, providing a novel technological approach for performing object detection tasks within event data."
**W2.** Thank you very much for your careful reading and valuable suggestions. We will reorganize and optimize the method ordering in Table 1 according to your suggestions and will include some of the latest related literature, including S4D-ViT-B[1], S5-ViT-B[1], GET-T[2], and ERGO-12[3], to ensure our paper covers the most current research advancements.
Furthermore, based on your recommendation [4], we learned about the DAGr method, an innovative hybrid approach that combines event cameras with traditional frame cameras. DAGr leverages the high temporal resolution and sparsity benefits of event cameras, along with the rich contextual information from frame cameras, to achieve efficient, fast object detection while significantly reducing perception and computation latencies. Unfortunately, the DAGr method was published online just days after the NeurIPS 2024 submission deadline, which is why we were unable to reference this excellent work in time. However, we are impressed by this method and plan to use it as an important benchmark in our future multimodal research.
References:
[1] State Space Models for Event Cameras, CVPR 2024
[2] Get: Group event transformer for event-based vision, ICCV 2023
[3] From chaos comes order: Ordering event representations for object recognition and detection, ICCV 2023.
[4] Low-latency automotive vision with event cameras, Nature 2024.
**W3.** Thank you for your detailed attention to our choice of detection head technology. In response to your concerns about using the DETR or YOLO series detection heads, we have added additional experimental results using YOLOX as the detection head. These new results will be included in Table 1 to visually demonstrate the performance of YOLOX within our framework.
Due to the word count and PDF page limitations of the rebuttal, we are unable to detail all of the revisions and comparisons here. Therefore, only the additions in Table 1 are given in the author rebuttal.
**W4.** Thank you for your detailed guidance on our ablation study design and the layout of our paper. To evaluate the impact of SSM and TAC modules, we add ablation experiments such as $\pi$ in SSM and Fst in TAC. The experimental results indicate that while $\pi$ has a negligible overall effect, generating Q from Fst with TAC always active improves precision, increasing mAP accuracy by 0.3% with only a 0.01 ms increase in time delay and no additional parameter increase. More detailed ablation experiments will be added in the Appendix. Additionally, we have included visualizations of SSM in dynamic environments, further confirming the effectiveness of these combined modules. These results will help readers more intuitively understand the roles of the modules.
Regarding your concern about the space occupied by the extended experiments on event window accumulation, we have recognized this issue and plan to move this section to the appendix in the revised manuscript to save space in the main text and maintain the compactness of the paper.
**W5.** Thank you very much for your specific suggestions regarding our writing. We will carefully revise and improve our paper based on your feedback.
1) Related Work Section: We will reorganize the "Related Work" section to categorize and discuss it in more detail according to event-driven object detection and graph neural network processing of event data, in order to better highlight the focus and innovations of our work.
2) Emphasizing Low Latency Advantage: We will more clearly emphasize in the paper that our method achieves millisecond-level latency.
Due to word count limitations, we will reserve more detailed modifications and explanations for the revised manuscript. We hope these improvements will fully meet your expectations and further enhance the quality of the paper. Thank you for your valuable suggestions, and we look forward to you seeing these changes in the revised manuscript.
**Questions:**
Thank you for your question. Indeed, in future research, we plan to expand our current unimodal work to multimodal efforts. We believe that by integrating data from event cameras and traditional frame cameras, we can further enhance the system's perceptual capabilities and response speed, especially in dynamic and complex visual environments.
To support our future research direction and acknowledge the existing work in our field, we will cite the excellent studies you mentioned in the "Discussion and Conclusion" section of our paper.
---
Rebuttal Comment 1.1:
Comment: The author has addressed my concerns, and I will maintain the original score. I hope the author will incorporate the promised modifications into the camera-ready version. Additionally, I encourage the authors to explore several multimodal object detection methods and refer to the multimodal literature I recommended for future work.
---
Rebuttal 2:
Comment: Dear Reviewer
The author-discussion period ends at August 13, which is just 1 day away. Can you please discuss the rebuttal and the paper with the authors? Was there any concern that the authors' rebuttal did not address? Do you need further clarification from the authors?
Best Regards,
Your AC | Summary: This paper uses a graph structure to model event data and realize event classification. The Spatiotemporal Sensitivity Module (SSM) and the adaptive Temporal Activation Controller (TAC) mimic the response of the human eye in dynamic environments by selectively activating the temporal attention mechanism based on the relative dynamics of event data, thereby effectively conserving computational resources. In addition, the integration of a lightweight, multi-scale Linear Vision Transformer (LViT) markedly enhances processing efficiency.
Strengths: In this paper, SSM and TAC, as well as multi-scale Linear Vision Transformer (LViT) are used to improve the efficiency of calculation while maintaining good classification performance.
The data augmentation method of Dynamic Label Augmentation is also mentioned as a potential contribution, but is lacking in specifics.
Weaknesses: 1. Insufficient experiment. The Gen1 and 1Mpx datasets used in this paper are not the most widely used datasets for event classification tasks. Even with a focus on driving scenarios, the N-CARS [HATS: Histograms of averaged time surfaces for robust event-based object classification] dataset should be included for comparison with other existing methods.
2. The description of some details needs to be improved. For example, the specific process of Dynamic Label Augmentation (line 312) is not introduced, the key parameter $R$ used in the experiment (line 137, Equation 2) is not clearly specified, and the main idea of Efficient ViT (line 236) used in the paper should also be briefly explained.
Technical Quality: 2
Clarity: 2
Questions for Authors: The highlight of this paper is the improvement of efficiency. I want to know whether the time statistics of the algorithm include the Graph Construction process, and how to calculate the time of the algorithm? What percentage of the total process of the algorithm is occupied by the construction time? How do the relevant hyperparameters (such as the number of edges, neighborhood radius, etc.) affect the algorithm when used?
Confidence: 5
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: In addition to the limitations mentioned in Weaknesses, do the methods mentioned in this paper generalize to other visual tasks such as optical flow estimation? Verifying the generalization of the method proposed in this article will make the research more meaningful.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1.** Thank you for your attention to our choice of datasets. We acknowledge that N-CARS [7] is a significant dataset for event classification tasks. However, our research focuses on event-based object detection, which is distinct from event classification. Literature [1,2,3,4] supports our use of the Gen1 and 1Mpx datasets, which are well-established in object detection research, hence their selection for our studies. Nevertheless, we recognize the potential of the N-CARS dataset for future research in event classification. We plan to explore this dataset further and will discuss these plans in our paper, citing relevant literature to support our research direction and dataset choices.
**W2.** Thank you very much for highlighting areas for improvement in our paper. We have provided more detailed explanations regarding Dynamic Label Augmentation, the key parameter R, and the Efficient ViT as follows:
1. **Dynamic Label Augmentation:** We will detail the methodology in the appendix and summarize it here. This technique dynamically adjusts the size of label expansion windows based on the time span of accumulated events, enhancing label accuracy. Unlike traditional methods that use fixed ranges—potentially leading to errors in dynamic scenes—our method adapts the expansion windows to suit the speed of moving objects, minimizing incorrect labeling. For example, expansion windows contract for fast-moving objects to avoid mislabeling, and expand in slower-changing scenes to capture more potential labels. This adaptive approach is particularly effective in environments with variable dynamics, such as traffic monitoring or motion tracking, ensuring objects are labeled more precisely.
2. **Parameter R:** We value your interest in the role of parameter R, a key hyperparameter determining the distance threshold for edges between graph nodes. We set R to 30 across two datasets, a value optimized through initial experiments to effectively construct graph data and generate adequate connected subgraphs for our analysis. We further enhance graph construction efficiency using the radius_graph function from the Pytorch Geometric library, which rapidly builds graphs based on this radius, thereby improving our model’s processing speed and efficiency.
3. **Efficient ViT [5]:** We appreciate your interest in Efficient ViT. For comprehensive details, please see its original ICCV 2023 publication. In our appendix, we will outline the core design and technical merits of Efficient ViT for rapid visual tasks:
- **Softmax Replacement:** Traditional ViT models use softmax attention, which is effective but computationally intensive. Efficient ViT uses multi-scale linear attention to reduce computational complexity and hardware latency.
- $ O_i = \sum_{j=1}^N \frac{\text{Sim}(Q_i, K_j)}{\sum_{k=1}^N \text{Sim}(Q_i, K_k)} V_j $
- $ \text{Sim}(Q, K) = \text{ReLU}(Q) \text{ReLU}(K)^T $
- Consequently, $O_i=\frac{\text{ReLU}(Q_i) \left(\sum_{j=1}^N \text{ReLU}(K_j)^T V_j\right)}{\text{ReLU}(Q_i) \left(\sum_{j=1}^N \text{ReLU}(K_j)^T\right)},$ drastically reducing computational complexity and memory demands.
- **Optimizing Local Feature Extraction:** Linear attention reduces computational demands but is less effective at capturing local details. We address this by including depthwise convolution (DWConv) in each Feed Forward Network (FFN) layer.
- **Aggregating Multi-Scale Token Information:** The model aggregates neighboring Q, K, and V tokens into multi-scale tokens, enhancing linear attention's ability to process data across different channels efficiently and accurately.
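The key reordering can be verified numerically; the following NumPy sketch is our illustration of the ReLU linear-attention idea, not the EfficientViT implementation, and the small epsilon is added only to guard a possible all-zero $\text{ReLU}(Q_i)$ row:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

rng = np.random.default_rng(0)
N, d = 8, 4
Q, K, V = rng.normal(size=(3, N, d))
eps = 1e-9  # numerical safety for the normalization

# Quadratic form: an explicit N x N similarity matrix, as in the first equation.
sim = relu(Q) @ relu(K).T
O_quad = (sim @ V) / (sim.sum(axis=1, keepdims=True) + eps)

# Linear form: sum over keys first, so per-query cost no longer grows with N.
kv = relu(K).T @ V                 # (d, d), shared across all queries
k_sum = relu(K).sum(axis=0)        # (d,)
O_lin = (relu(Q) @ kv) / ((relu(Q) @ k_sum) + eps)[:, None]

assert np.allclose(O_quad, O_lin)  # same output, lower asymptotic cost
```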
**Questions:**
**Re.** Thank you for your detailed inquiry about our model. We are happy to provide clarifications:
1) Time Statistics: Our algorithm's timing involves the graph processing module, the Linear ViT module, and the detection head module. Graph processing includes constructing the graph and operations within the Graph Neural Network (GNN) and Spatiotemporal Sensitivity Module (SSM), with precise GPU timing provided by PyTorch's torch.cuda.Event.
2) Time Proportion: The graph processing module, utilizing the Pytorch Geometric library and the torch_scatter library for efficient sparse operations, is highly efficient, accounting for approximately 31.4% of the total algorithm time. This efficiency significantly boosts our model's performance in dynamic environments.
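The measurement pattern can be sketched as follows; this CPU-side illustration substitutes `time.perf_counter` for `torch.cuda.Event` and uses placeholder workloads rather than the real modules:

```python
import time

def timed(fn, *args):
    """Run fn and return (result, elapsed seconds)."""
    t0 = time.perf_counter()
    out = fn(*args)
    return out, time.perf_counter() - t0

def graph_stage(n):        # placeholder for graph construction + GNN + SSM
    return sum(i * i for i in range(n))

def transformer_stage(n):  # placeholder for Linear ViT + detection head
    return sum(i for i in range(n))

_, t_graph = timed(graph_stage, 50_000)
_, t_vit = timed(transformer_stage, 100_000)
share = t_graph / (t_graph + t_vit)  # fraction of runtime spent in graph processing
assert 0.0 < share < 1.0
```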
**Limitations:**
Thank you for highlighting the potential applications and emphasizing the need for broader validation of our method. Currently, our research focuses on event-based object detection, allowing us to refine detection techniques specific to event data.
However, we understand the importance of extending these techniques to a wider range of visual tasks. In future work, we plan to test the model's applicability to other areas, including event classification [1, 2, 4, 6] and optical flow estimation. We will continue our research to evaluate the generalization and adaptability of these methods and report our findings. We look forward to our continued contributions to both the research and practical applications in event camera vision.
**References:**
[1] Learning to detect objects with a 1 megapixel event camera, Advances in Neural Information Processing Systems, 2020.
[2] Recurrent vision transformers for object detection with event cameras, CVPR 2022.
[3] Asynchronous spatio-temporal memory network for continuous event-based object detection, IEEE Transactions on Image Processing, 2022.
[4] Better and faster: Adaptive event conversion for event-based object detection, AAAI 2023.
[5] Efficientvit: Lightweight multi-scale attention for high-resolution dense prediction, ICCV 2023.
[6] Aegnn: Asynchronous event-based graph neural networks, CVPR 2022.
[7] HATS: Histograms of averaged time surfaces for robust event-based object classification, CVPR 2018.
---
Rebuttal Comment 1.1:
Title: Unresolved and new questions
Comment: Your response partially addressed my question. However, since a significant portion of the article focuses on graph construction, I'm still more interested in that aspect.
Q1: The parameter $R$ is related to the normalization factor $\beta$ mentioned in line 133. I believe that both $\beta$ and $R$ significantly impact the subsequent results. How is $\beta$ determined, and what is the analysis of the relationship between $\beta$ and $R$ (particularly concerning different datasets or motion scenarios)?
You mentioned that $R=30$. Could you explain how this value was set? Additionally, could you provide a brief overview of the setting strategy and any comparisons made?
Q2: What is the specific purpose of Equations 6 and 7? I don't seem to find where they are used in the subsequent sections.
Q3: As raised by reviewer NBbr, in line 195, $π$ is the function used to perform the aggregation of the coordinate sets. This aggregation function needs at least a brief introduction, as you mentioned in your response to reviewer NBbr.
Following up on Q1 and Q2, my concern is to what extent the aggregation function reduces the subsequent data processing load. This will help me understand whether the efficiency improvements of your proposed method are primarily focused on the initial graph construction or on the subsequent graph network processing. To what extent can the efficiency be improved through optimization after constructing connected subgraphs and the aggregation of nearby events? Ablation studies on the impact of hyperparameter settings during the graph construction process on both efficiency and accuracy should be considered.
Q4: Does the provided code include the section for constructing connected subgraphs? I recommend adding a ReadMe file in the subsequent code versions to facilitate quicker access to relevant content.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for the series of questions you have raised. We will address each of these questions in turn.
**Q1.** Regarding the setting of $\beta$, we referred to the configurations described in references [1] and [2], aiming to normalize temporal locations via a factor and map them to a range similar to that of spatial coordinates.
As our method accumulates a fixed number of events and the model is event-based, there is no need to consider the impact of time spans on imaging like frame-based models. As long as the number of events accumulated by the model is consistent and the normalization range of temporal locations across datasets is identical, the selection of $R$ can naturally be consistent.
The determination of the parameter $R$ was made during the construction of the graph, taking into account the number of subgraphs and the number of nodes within each subgraph. Specifically, our setup retains about two-thirds of the total number of nodes, making the number of subgraphs approximately ten times the total number of objects. $R=30$ has been sufficient to enable the model to effectively focus on relevant objects, achieving low processing latencies and high detection accuracies.
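To make the role of $R$ concrete, here is a plain-Python sketch of radius-based edge construction with a neighbor limit, mirroring the behavior of `radius_graph` from the PyTorch Geometric library mentioned earlier; the coordinates, radius, and neighbor cap below are illustrative only:

```python
import math

def radius_edges(nodes, r, max_neighbors=16):
    """Connect node pairs whose (x, y, t_normalized) distance is below r."""
    edges = []
    for i, a in enumerate(nodes):
        count = 0
        for j, b in enumerate(nodes):
            if i != j and math.dist(a, b) < r and count < max_neighbors:
                edges.append((i, j))
                count += 1
    return edges

nodes = [(0, 0, 0), (10, 0, 0), (100, 100, 0)]
edges = radius_edges(nodes, r=30)
# Nodes 0 and 1 lie within R = 30 of each other; node 2 stays isolated,
# so it forms its own (filterable) subgraph.
assert (0, 1) in edges and (1, 0) in edges
assert all(2 not in e for e in edges)
```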
**Q2.** Equations 6 and 7 are formulas for discriminating event dynamics. The results of these equations are incorporated as part of the features into the model (in line 197), influencing the dynamic indicators generated subsequently.
**Q3.** Regarding the suggestions made by you and reviewer NBbr, we will provide a brief introduction to the aggregation function $\pi$ and incorporate it into the revised version.
Regarding your question about how the aggregation function alleviates the burden of subsequent data processing and where our method primarily improves efficiency, we offer the following explanations:
- The aggregation function you are concerned with can essentially be understood as a graph pooling operation. It pools a large number of nodes and features from each subgraph into a few representative features, which are comprehensive summaries of all node features within each subgraph. Pooling is a common technique in graph processing, and our method adheres to standard pooling practices. For more details on different types of graph pooling operations and the potential efficiency gains they offer, please refer to reference [3]. Because graph pooling is a well-established technique, we did not conduct further testing experiments on it.
- In terms of efficiency improvements, our model's design aims to fully utilize the spatiotemporal characteristics of event data, minimize redundant computations, enhance efficiency, and reduce latency. In the graph processing component, we filter out unnecessary event nodes (including significant noise) through the construction of connected subgraphs and related graph processing operations, thereby enhancing the model's focus on effective objects and significantly reducing the computational load of graph processing due to the decrease in the number of nodes. Operations that reduce spatiotemporal processing and enhance processing efficiency are primarily focused on the combined application of SSM, TAC, and Linear ViT, with detailed descriptions of ablation studies included in the manuscript. For explanations regarding the settings of hyperparameters, please refer to the response to Q1.
**Q4.** The provided code includes the part for constructing connected subgraphs. We will update the code to add detailed comments and a ReadMe file for readers' reference.
**References:**
[1] Aegnn: Asynchronous event-based graph neural networks, CVPR 2022.
[2] Graph-based object classification for neuromorphic vision sensing, ICCV 2019.
[3] Graph neural networks: A review of methods and applications, AI Open 2020.
If our response has addressed your concerns, we kindly ask you to reconsider your rating. If you have any further questions, we are more than happy to address them. | Summary: The paper introduces a novel event-based graph spatiotemporal sensitive transformer framework aimed at enhancing the efficiency of object detection in dynamic vision systems. This framework leverages the unique properties of event camera data by modelling event data through a graph structure and incorporates key components such as the spatiotemporal sensitivity module and temporal activation controller to improve spatial and temporal processing efficiency. Additionally, the integration of a lightweight, multi-scale linear vision transformer further enhances processing efficiency.
Strengths: 1. The introduction of the event-based graph spatiotemporal sensitive transformer framework, combining event data processing and graph neural network technology, brings new perspectives and methods to the field of object detection.
2. By leveraging the unique properties of event camera data and incorporating key components such as the spatiotemporal sensitivity module, efficiency in spatial and temporal processing is significantly enhanced, leading to more effective object detection in dynamic vision systems.
3. The integration of a lightweight, multi-scale linear vision transformer further boosts processing efficiency, enabling the framework to excel in handling large-scale continuous spatiotemporal data.
Weaknesses: 1. Could the manuscript clarify the advantages of your model over the AEC+YOLOv5 method mentioned in reference [44], and how the combination of GNN + LinearViT demonstrates its effectiveness?
2. It would be better to provide a concise visualization of the data to analyze whether the SSM and TAC modules genuinely simulate the natural prioritization of fast-moving objects within the field of view, while lowering the priority of slower objects, rather than merely presenting a seemingly reasonable narrative.
3. Please provide a concise analysis of the description concerning the comprehensive utilization of temporal information mentioned in the contributions. Moreover, are SSM and TAC critical steps that enhance the effective and precise focus on targets within the event data? Additionally, please clarify whether the Connected Subgraphs Construction is referenced for conducting ablation experiments to demonstrate its value.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. It would be better for the manuscript to provide more detailed experimental validation and comparative analysis data to support the effectiveness and superiority of the framework, such as the Nighttime Driving Detection dataset.
2. According to the content of reference [44] in the paper regarding AEC+DETR, there are factual errors. Please verify the specific metrics of this method on the two datasets and make the necessary corrections in the corresponding text.
3. The absence of visualized detection results on the dataset needs to be supplemented and enhanced.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: See Weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful feedback on our paper. We have revised the manuscript based on your suggestions and hope the changes meet your expectations. Please feel free to contact us with any questions or further information you might need during the review process.
**W1.** Regarding your suggestion to compare our method with the AEC+YOLOv5[2] approach, we find it highly valuable and will include this comparison in Table 1. Due to page constraints, we have included additional experimental data in the author rebuttal for your reference. To ensure a fair evaluation, we replaced the RT-DETR[4] detection head with one from the YOLO series; specifically, following another reviewer's advice, we use YOLOX[5], which is more commonly employed in event camera-based object detection. On the Gen1 dataset, although our EGSST-B-Y method exhibits slightly lower detection accuracy than the AEC+YOLOv5 method, it demonstrates a 5% improvement in inference speed and is notably more lightweight. The results on the 1Mpx dataset are consistent, with our method achieving more than double the inference speed of the AEC approach.
In the field of event data processing, we employ the Graph Transformer to effectively capture the spatiotemporal characteristics of event data using graph structures, while leveraging the Transformer's advantages in multi-scale information fusion and global feature extraction. Moreover, the linear nature of our model ensures high processing efficiency. As indicated by the comparisons in Table 1, although our model does not achieve the best performance on all evaluation metrics, its remarkably low inference time and parameter count demonstrate its effectiveness and significant contribution to the field of event data processing.
**W2.** Thank you very much for the reminder. We have added a visualization of the SSM results in the rebuttal PDF.
**W3.** Thank you for your insightful questions and here are the relevant responses:
1. **Utilization of Temporal Information in GNNs:** Our research utilizes Graph Neural Networks (GNNs) to effectively process spatiotemporal information, preserving the temporal attributes of event data throughout the transformation process. We employ an event-level transformation strategy for converting graph features to frame features, which significantly retains temporal information and boosts adaptability to dynamic environments.
2. **Roles of the SSM and TAC Modules:** Our ablation studies have confirmed the pivotal roles of the SSM and TAC modules in focusing accurately on targets within event data. Removing the TAC module notably decreases precision, highlighting its essential role in enhancing detection accuracy.
3. **Role of Connected Subgraphs:** The construction of connected subgraphs is crucial in filtering out noise and sharpening the model’s focus. Experiments show that processing 10,000 event points retains approximately 73% of events, which helps diminish noise interference. Eliminating smaller subgraphs allows the model to concentrate on those with critical labels, thereby boosting detection efficiency and accuracy. Visualizations in the SSM+TAC section demonstrate the distribution of these subgraphs effectively.
We appreciate your attention to these aspects of our work and hope these explanations address your queries satisfactorily.
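To illustrate the subgraph filtering step described in point 3, here is a minimal sketch (not our actual implementation; the toy adjacency matrix, the `min_size` threshold, and the function name are hypothetical choices for exposition) of dropping event nodes that belong to small connected subgraphs:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def filter_small_subgraphs(adj, min_size=3):
    """Return a boolean mask keeping only event nodes whose connected
    subgraph has at least min_size nodes (small subgraphs ~ noise)."""
    n_comp, labels = connected_components(csr_matrix(adj), directed=False)
    sizes = np.bincount(labels, minlength=n_comp)
    return sizes[labels] >= min_size

# Toy adjacency: a 4-node chain (0-1-2-3) plus one isolated noise node (4).
adj = np.zeros((5, 5))
adj[0, 1] = adj[1, 2] = adj[2, 3] = 1
keep = filter_small_subgraphs(adj, min_size=3)  # node 4 is filtered out
```

In the full pipeline, `adj` would be built from the spatiotemporal neighborhood of event points, and subsequent graph processing would operate only on the retained nodes.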
**Q1.** Thank you for your suggestions. Regarding the nighttime driving detection dataset, we find the MVSEC-NIGHTL21 dataset [1,2] an excellent choice. We regret not conducting experiments on it due to time constraints; however, our analysis suggests the following. Our method, like the AEC[2] and ITS[3] methods, is event-based and does not rely on RGB frame information; effective detection can be performed as long as there is sufficient event data. Furthermore, the results reported in [2] show that while AEC and ITS experienced a decrease in detection accuracy on the MVSEC-NIGHTL21 dataset, they still maintained a high level of performance, demonstrating that event-based detectors are almost independent of RGB information.
**Q2.** We are very sorry for this writing error and will make corrections in the table and the corresponding text in the paper.
**Q3.** Thank you very much for your kind reminder. We have added the visualization results of our experiments on the relevant dataset. The results can be viewed in the rebuttal PDF.
**References**:
[1] The Multivehicle Stereo Event Camera Dataset: An Event Camera Dataset for 3D Perception, 2018
[2] Better and Faster: Adaptive Event Conversion for Event-Based Object Detection, AAAI 2023
[3] Inceptive event time-surfaces for object classification using neuromorphic cameras, ICIAR 2019
[4] Detrs beat yolos on real-time object detection, CVPR 2024.
[5] Yolox: Exceeding yolo series in 2021, arXiv 2021.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal. It would be better for the author to provide their results on the Nighttime Driving Detection dataset in their final version. I tend to vote for a weak accept.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your suggestion. We have been actively conducting additional tests on relevant Nighttime Driving Detection datasets as per your request.
---
Rebuttal 2:
Comment: Dear Reviewer
The author-discussion period ends on August 13, which is just 1 day away. Could you please discuss the rebuttal and the paper with the authors? Was there any concern that the authors' rebuttal did not address? Do you need further clarification from the authors?
Best Regards,
Your AC | Rebuttal 1:
Rebuttal: We thank all reviewers for their very informative feedback. We have responded separately to each reviewer and attached a PDF file with figures and tables to enhance our rebuttal. Additionally, considering the page limitations of the PDF, we have briefly presented the new experimental data from Table 1 here for clarity. More detailed modifications will be reflected in the final manuscript. We appreciate all feedback from the reviewers and will incorporate it into the final version of our manuscript.
The additional experiments in Table 1 are listed below:
| Methods | Dataset | mAP (%) | Time (ms) | Params (M) |
| ---------------------- | ----------- | ----------- | --------- | ---------- |
| EGSST-B | Gen1 / 1Mpx | 44.6 / 45.4 | 4.6 / 5.1 | 3.5 |
| EGSST-E | Gen1 / 1Mpx | 49.6 / 50.2 | 6.0 / 6.3 | 12.3 |
| EGSST-B-Y (with YOLOX) | Gen1 / 1Mpx | 43.9 / 44.1 | 3.7 / 5.0 | 2.9 |
| EGSST-E-Y (with YOLOX) | Gen1 / 1Mpx | 47.8 / 48.4 | 4.2 / 5.3 | 10.4 |
Pdf: /pdf/f8718429d81f6a895a46b2b94b0f40c8562acffd.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Improving Generalization and Convergence by Enhancing Implicit Regularization | Accept (poster) | Summary: This paper proposes a new optimization scheme for deep learning. The idea is to periodically estimate the Hessian diagonal and use a larger learning rate on the smaller-curvature parameters. The paper argues that this algorithm enhances the implicit curvature regularization of the optimization algorithm while not degrading stability (which is what would happen if you used a larger learning rate on all parameters). The paper first motivates the scheme by analyzing a toy setting. The paper then shows empirically that adding this scheme to SGD and to SAM results in better generalizing and lower-curvature solutions, where curvature is quantified using Hessian trace. The paper then shows that adding this scheme to Adam accelerates the _optimization_ of Llama transformers, and leaves an explanation why to future work. Finally, the paper theoretically analyzes this scheme for SAM in the manifold-of-global-minima setting, and proves that this scheme accelerates the drift towards flatter regions of the manifold.
Strengths: I think the idea is novel, and the experiments are promising, so this paper could stimulate significant research in the future.
Weaknesses: One weakness is that the motivation comes from the stylized example in section 2, which may be unrealistic for general deep learning optimization. For example, the idea of "sharp coordinates" only makes sense in this stylized example.
Technical Quality: 3
Clarity: 3
Questions for Authors: Shouldn't this algorithm also speed up _optimization_? The algorithm uses a larger learning rate on some coordinates, which should also have an optimization effect. (This would not be visible in the analysis setting of manifold-of-global-minima because no optimization is occurring in that setting.)
How would you define IRE at the most abstract level? If we define an abstract version of preconditioned gradient descent as $\theta_{t+1} = \theta_t - P g_t$ (where vanilla GD with learning rate $\eta$ corresponds to $P = \eta I$), is the idea that it's better to use $P_2$ rather than $P_1$ provided that $P_2 \succeq P_1$ and both are locally stable i.e. $\lambda_{\max}(P H) \le 2$ for $P \in $ {$P_1, P_2$}?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the appreciation of our work and insightful comments. Below, we address the reviewer’s questions in detail.
- **W1.** One weakness is that the motivation comes from the stylized example in section 2, which may be unrealistic for general deep learning optimization. For example, the idea of "sharp coordinates" only makes sense in this stylized example.
**Response:** We thank the reviewer for this insightful question.
- **Clarification.** We clarify that the idea behind IRE is to distinguish between "sharp/flat directions" rather than "sharp/flat coordinates". The use of a diagonal Hessian in Section 2 is only for simplification purpose. In fact, the theoretical analysis in Section 5 considers a more general setting without assuming a diagonal Hessian (see Assumption 5.1).
- **Practical Considerations.** In the practical implementation described in Section 3, we indeed use the diagonal Hessian to approximate the full Hessian, aiming to reduce computational costs. The diagonal Hessian is commonly acknowledged as a reasonable approximation of the full Hessian in deep neural networks and is frequently employed in the design of deep learning optimizers, as noted in references [1][2]. For a more detailed discussion, please refer to lines 156-162 in our manuscript.
- **Q1.** Shouldn't this algorithm also speed up optimization? The algorithm uses a larger learning rate on some coordinates, which should also have an optimization effect. (This would not be visible in the analysis setting of manifold-of-global-minima because no optimization is occurring in that setting.)
**Response:** We thank the reviewer for this insightful comment and will include a discussion of this issue in our revised version. The motivation of IRE is to speed up sharpness reduction, which only requires increasing the learning rate along completely flat (zero-curvature) directions. In practice, the implementation may also increase the learning rate along directions with small but non-zero curvature, which can further speed up loss convergence. That said, explaining why this provides such a significant acceleration remains unclear.
- **Q2.** How would you define IRE at the most abstract level? If we define an abstract version of preconditioned gradient descent as $\theta_{t+1}=\theta_t-Pg_t$ (where vanilla GD with learning rate $\eta$ corresponds to $P=\eta I$), is the idea that it's better to use $P_2$ rather than $P_1$ provided that $P_2\succeq P_1$ and both are locally stable i.e. $\lambda_{\max}(PH)<2$ for $P\in\{P_1,P_2\}$?
**Response:** We are grateful to the reviewer for sharing this high-level perspective, which can help explain the acceleration of loss convergence observed with IRE. Given a base optimizer with update $\theta_{t+1}=\theta_t-P_1g_t$, we aim to improve it by selecting a better $P_2=QP_1$, while ensuring $\lambda_{\max}(P_2 H)<2$ to maintain training stability. Let $P_{flat}$ be the projection matrix into the flat (zero-curvature) directions. In our IRE approach, we choose $P_2=(I+\mu P_{flat}) P_1$, which meets the condition $P_2\succeq P_1$ mentioned by the reviewer. However, we remark that our original motivation of this specific $P_2$ is to accelerate effective dynamics on the manifold of global minima, thereby accelerating the sharpness reduction.
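To make the preconditioning view above concrete, the following toy sketch (illustrative only; the quadratic loss, step size, and $\mu$ value are our own choices for exposition, not the paper's experimental settings) shows that $P_2=(I+\mu P_{flat})P_1$ accelerates progress along the flat direction while the sharp direction remains stable, since $\lambda_{\max}(P_2 H)<2$ still holds:

```python
import numpy as np

# Toy quadratic loss L(theta) = 0.5 * theta^T H theta with one sharp
# and one near-flat direction.
H = np.diag([10.0, 0.01])
eta, mu = 0.19, 5.0                    # eta * lambda_max(H) = 1.9 < 2 (stable)

P_flat = np.diag([0.0, 1.0])           # projection onto the flat direction
P1 = eta * np.eye(2)                   # base preconditioner (vanilla GD)
P2 = (np.eye(2) + mu * P_flat) @ P1    # IRE-style preconditioner, P2 >= P1

def run(P, steps=200):
    theta = np.array([1.0, 1.0])
    for _ in range(steps):
        theta = theta - P @ (H @ theta)
    return theta

# P2 leaves the sharp coordinate's dynamics unchanged, but contracts the
# flat coordinate (1+mu) times faster per step; lambda_max(P2 H) = 1.9 < 2.
```

After 200 steps, both runs have essentially eliminated the sharp coordinate, while the flat coordinate is much smaller under `P2` than under `P1`.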
[1] George et al. Fast approximate natural gradient descent in a Kronecker-factored eigenbasis. (NeurIPS 2018)
[2] Liu et al. Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training. (ICLR 2024)
---
Rebuttal Comment 1.1:
Comment: Thanks again for your valuable time and effort in reviewing our work!
We are wondering if our responses address your questions or concerns.
We are happy to try to address any other comments in the time remaining. | Summary: This paper proposes an Implicit Regularization Enhancement (IRE) framework to accelerate the convergence of optimization algorithms towards flat minima in deep learning, thereby improving generalization and convergence. The key idea behind IRE is to decouple the dynamics along flat and sharp directions in the optimization landscape. It speeds up the dynamics along flat directions while keeping the dynamics in sharp directions unchanged. This allows IRE to substantially boost the implicit sharpness reduction of the base optimizer without hurting training stability.
The authors provide a practical way to efficiently incorporate IRE with generic base optimizers without introducing significant computational overhead. Extensive experiments across image classification and language modeling tasks demonstrate that IRE consistently improves the generalization performance of base optimizers like SGD, Adam, and SAM. Surprisingly, IRE also achieves a 2x speedup compared to a well-tuned AdamW optimizer in pre-training Llama language models of various sizes.
Furthermore, the paper provides theoretical guarantees showing that IRE can achieve a substantial acceleration over the base SAM algorithm in minimizing the trace of the Hessian matrix, which measures the flatness of the loss landscape.
In summary, the key contributions are:
1. The IRE framework that enhances implicit regularization by decoupling and accelerating optimization dynamics along flat directions
2. A practical and efficient implementation of IRE that can be incorporated into generic base optimizers
3. Extensive empirical results showing IRE improves generalization and speeds up convergence across vision and language tasks
4. Theoretical analysis proving IRE can substantially accelerate the sharpness reduction of the SAM optimizer
Strengths: ## Originality
The key idea behind the proposed Implicit Regularization Enhancement (IRE) framework - decoupling the optimization dynamics along flat and sharp directions to accelerate convergence to flat minima - appears to be quite novel and creative. To the best of my knowledge, this specific approach has not been explored in the optimization and generalization literature before. The authors motivate IRE through an intuitive illustrative example, showing how selectively increasing the learning rate along flat directions can speed up the implicit regularization. Adapting this core idea into a practical algorithm that can be efficiently incorporated into existing optimizers is also a notable contribution.
However, the paper could benefit from a more detailed discussion comparing and contrasting IRE with other approaches that aim to improve sharpness-aware minimization, such as those reducing the computational cost of SAM (e.g., Kwon et al., 2021; Liu et al., 2022). This would help further highlight the novelty and uniqueness of IRE.
## Quality
The paper presents a solid mix of conceptual explanations, practical algorithm design, extensive experiments, and theoretical analysis. The illustrative example in Section 2 provides a clear and intuitive understanding of the mechanism behind IRE. The practical implementation of IRE in Section 3, particularly the efficient approximation of the projection operator using diagonal Hessian estimates, demonstrates the authors' attention to computational efficiency.
The experimental results on image classification and language modeling tasks are extensive and convincing, showing consistent improvements in generalization performance across various models, datasets, and base optimizers. The surprising 2x speedup over AdamW in Llama pre-training is particularly impressive and warrants further investigation.
The theoretical analysis in Section 5, proving the substantial acceleration of IRE over SAM in minimizing the trace of Hessian, adds rigor to the empirical findings. The proofs leverage reasonable assumptions and provide non-asymptotic guarantees.
However, the paper could be strengthened by providing more insights and discussions on the potential limitations and failure cases of IRE. For example, are there scenarios where the diagonal Hessian approximation might be less effective? How sensitive is IRE to the choice of hyperparameters (e.g., $\lambda$ and $\gamma$)?
## Clarity
Overall, the paper is well-structured and clearly written. The main ideas, algorithms, and results are presented in a logical flow, making it easy for readers to follow. The use of figures (e.g., Figure 1) and illustrative examples enhances the clarity of the exposition. The mathematical notations and definitions are introduced appropriately and used consistently throughout the paper.
One area that could be improved is the description of the experimental setup and implementation details. While the paper provides references to the appendix for more details, including some key information (e.g., hyperparameter tuning ranges, model architectures) in the main text would make the experiments more self-contained and easier to interpret.
## Significance
The proposed IRE framework has the potential to make a significant impact in the field of deep learning optimization and generalization. By providing a principled and efficient way to accelerate convergence to flat minima, IRE can lead to models with better generalization performance and faster training times. The consistent improvements demonstrated across a range of vision and language tasks suggest that IRE could be widely applicable.
Moreover, the theoretical analysis of IRE's acceleration over SAM opens up new avenues for understanding and improving sharpness-aware optimization. The non-asymptotic guarantees provide a solid foundation for further theoretical investigations.
The 2x speedup achieved by IRE in Llama pre-training is particularly significant, given the computational challenges in training large language models. If these speedups can be reliably reproduced and scaled to larger models and datasets, IRE could meaningfully contribute to advancing the state-of-the-art in language model training.
To fully assess the significance of IRE, it would be valuable to see more comparisons with other state-of-the-art optimizers and regularization techniques. Additionally, evaluating the downstream performance of models trained with IRE (e.g., fine-tuning Llama on benchmarks) would provide a more comprehensive understanding of its impact.
Weaknesses: ### Comparison with Related Works
While the paper presents a novel approach to enhancing implicit regularization, it could benefit from a more detailed comparison with related works. The authors should discuss how IRE differs from and improves upon other techniques that aim to accelerate convergence to flat minima or reduce the computational cost of sharpness-aware minimization (e.g., Kwon et al., 2021; Liu et al., 2022). This would help to better highlight the novelty and advantages of IRE.
### Limitations and Failure Cases
The paper could be strengthened by providing a more in-depth discussion of the potential limitations and failure cases of IRE. For example:
- Are there scenarios where the diagonal Hessian approximation might be less effective or lead to suboptimal results?
- How sensitive is IRE to the choice of hyperparameters (e.g., $\lambda$ and $\gamma$)? Is there a risk of instability or divergence if these hyperparameters are not tuned properly?
- Are there any particular types of models, datasets, or tasks where IRE may not provide significant benefits or even hurt performance?
Addressing these questions would help readers better understand the scope and applicability of IRE.
### Experimental Setup and Implementation Details
The description of the experimental setup and implementation details could be improved. While the appendix contains additional information, it would be helpful to include key details in the main text, such as:
- The specific hyperparameter tuning ranges for $\lambda$ and $\gamma$ used in the experiments
- The architectures of the models used (e.g., ResNet and ViT variants)
- The data augmentation and preprocessing techniques applied
Including these details would make the experiments more self-contained and easier to interpret and reproduce.
### Comparisons with State-of-the-Art Optimizers
To fully demonstrate the significance of IRE, it would be valuable to include comparisons with a broader range of state-of-the-art optimizers and regularization techniques. While the paper shows consistent improvements over SGD, AdamW, and SAM, it would be informative to see how IRE performs compared to other recent approaches, such as:
- Adaptive gradient methods like Adagrad (Duchi et al., 2011), Adam (Kingma and Ba, 2014), and their variants
- Second-order optimization methods like K-FAC (Martens and Grosse, 2015) and Shampoo (Gupta et al., 2018)
- Other regularization techniques like weight decay, dropout, and label smoothing
These comparisons would provide a more comprehensive understanding of IRE's performance and potential advantages over existing methods.
### Downstream Performance Evaluation
While the paper demonstrates impressive speedups in Llama pre-training, it would be informative to evaluate the downstream performance of the models trained with IRE. For example, fine-tuning the pre-trained Llama models on benchmark tasks like language understanding, question answering, or text generation would provide insights into the practical impact of IRE on model quality and generalization.
However, given the computational cost and time constraints of the rebuttal period, it may not be feasible to conduct extensive downstream evaluations. In this case, the authors could discuss this limitation and propose it as a direction for future work.
### Theoretical Analysis
The theoretical analysis in Section 5 provides valuable guarantees for IRE's acceleration over SAM. However, the paper could benefit from a more intuitive explanation of the key assumptions and their implications. For example:
- Discussing the practical significance of Assumption 5.1 (manifold of minimizers) and how it relates to the empirical observations in deep learning
- Providing a high-level interpretation of the non-asymptotic bounds and their dependence on the hyperparameters (e.g., $\eta$, $\rho$, $\lambda$)
Additionally, exploring the theoretical connections between IRE and other optimization techniques (e.g., momentum, adaptive methods) could provide further insights into its behavior and potential extensions.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. **Comparison with related works:**
- Question: How does IRE differ from and improve upon other techniques that aim to accelerate convergence to flat minima or reduce the computational cost of sharpness-aware minimization, such as those proposed by Kwon et al. (2021) and Liu et al. (2022)?
- Suggestion: Provide a more detailed discussion comparing and contrasting IRE with these related approaches to better highlight the novelty and advantages of IRE.
2. **Limitations and failure cases:**
- Questions: Are there scenarios where the diagonal Hessian approximation might be less effective or lead to suboptimal results? How sensitive is IRE to the choice of hyperparameters (e.g., $\lambda$ and $\gamma$)? Are there any particular types of models, datasets, or tasks where IRE may not provide significant benefits or even hurt performance?
- Suggestion: Include a more in-depth discussion of the potential limitations and failure cases of IRE to help readers better understand its scope and applicability.
3. **Experimental setup and implementation details:**
- Question: What are the specific hyperparameter tuning ranges for $\lambda$ and $\gamma$ used in the experiments, the architectures of the models (e.g., ResNet and ViT variants), and the data augmentation and preprocessing techniques applied?
- Suggestion: Include these key details in the main text to make the experiments more self-contained and easier to interpret and reproduce.
4. **Comparisons with state-of-the-art optimizers:**
- Question: How does IRE perform compared to other recent optimization approaches, such as adaptive gradient methods (e.g., Adagrad, Adam), second-order methods (e.g., K-FAC, Shampoo), and other regularization techniques (e.g., weight decay, dropout, label smoothing)?
- Suggestion: Include comparisons with a broader range of state-of-the-art optimizers and regularization techniques to comprehensively understand IRE's performance and potential advantages.
5. **Downstream performance evaluation:**
- Question: How does the downstream performance of models trained with IRE compare to those trained with other optimizers when fine-tuned on benchmark tasks like language understanding, question answering, or text generation?
- Suggestion: If feasible within the rebuttal period, evaluate the downstream performance of the pre-trained Llama models to provide insights into the practical impact of IRE on model quality and generalization. If not feasible, discuss this limitation and propose it as a direction for future work.
6. **Theoretical analysis:**
- Questions: What is the practical significance of Assumption 5.1 (manifold of minimizers) and how does it relate to the empirical observations in deep learning? Can you provide a high-level interpretation of the non-asymptotic bounds and their dependence on the hyperparameters (e.g., $\eta$, $\rho$, $\lambda$)?
- Suggestion: Provide more intuitive explanations of the key assumptions and their implications in the theoretical analysis. Additionally, explore the theoretical connections between IRE and other optimization techniques (e.g., momentum, adaptive methods) to provide further insights into its behavior and potential extensions.
Addressing these questions and incorporating the suggestions in the rebuttal or a revised version of the paper would help to strengthen the work and provide a more comprehensive understanding of the proposed IRE framework, its novelty, effectiveness, and impact.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: 1. **Limitations:**
- Add a dedicated subsection discussing the assumptions made in the theoretical analysis, potential scenarios where IRE may not provide benefits or even hurt performance, and the sensitivity of IRE to hyperparameter choices.
2. **Potential negative societal impact:**
- Include a subsection discussing the environmental cost of training LLMs, the potential misuse of LLMs for generating harmful content, and the possible widening of the gap between well-resourced and under-resourced research groups due to IRE's computational advantages.
- Propose mitigation strategies or areas for future research to address these concerns.
3. **Reproducibility and transparency:**
- Provide clear instructions for reproducing the experiments, make the code and pre-trained models publicly available, and discuss any limitations or challenges in reproducibility.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's recognition of our work and helpful comments. Below, we offer detailed responses to the reviewer’s questions:
- **W\&Q1. Comparison with related works.**
**Response:** We thank the reviewer for this question and will provide a more detailed comparison with related algorithms in the revised version. First, we clarify that the mechanism behind IRE in accelerating sharpness reduction is fundamentally different from that of SAM. Moreover, IRE can be integrated with generic base optimizers, including SGD, Adam, and SAM. In contrast, the SAM variants, including Kwon et al. (2021) and Liu et al. (2022), are specifically designed to improve SAM.
- **W\&Q2. Limitations and failure cases.**
**Response:** We thank the reviewer for this insightful question. We acknowledge that the diagonal Hessian may not always provide accurate second-order information for identifying the flat (zero-curvature) directions. Recall that IRE iterates as $\theta_{t+1} = \theta_t - \eta (1+\kappa P) g_t$, where $P$ represents the projection onto the flat directions. When $\kappa=0$, it recovers the base optimizer. For problems where the diagonal Hessian provides relatively accurate second-order information, a large $\kappa$ can be used; conversely, where the diagonal Hessian is less informative, $\kappa$ can be set small or even to zero. This lets us control how strongly IRE relies on the diagonal Hessian approximation, analogous to the damping technique used in Newton's methods with an approximate Hessian. Specifically, we tune $\kappa\in$\{1,2\} for CNNs, $\kappa\in$\{2,3,4\} for Llama pre-training, and $\kappa\in$\{20,50\} for ViTs.
- **W\&Q3. Experimental setup and implementation details.**
**Response:** We thank the reviewer for this question. For the image classification tasks, the models, including ResNets and ViT-T/S, follow those in Muller et al. (2023). For LLM pre-training, we use the Llama models from HuggingFace. For image pre-processing, we apply the default normalization technique *transforms.Normalize(mean, std)*. The search ranges of the hyperparameters, as well as the data augmentation methods, are specified in Appendix B. In the revised version, we will carefully incorporate these experimental details into the main text.
- **W\&Q4. Comparisons with state-of-the-art optimizers.**
**Response:** We thank the reviewer for this question. We clarify that IRE is a generic framework for boosting an optimizer's implicit bias and can be integrated with general optimizers. Due to space and resource limits, we have only compared IRE with the popular algorithms SGD, SAM, and AdamW. Specifically, we show that IRE can improve generalization in image classification tasks and the convergence of AdamW in LLM pre-training. As for the other optimizers mentioned by the reviewer, Adagrad has largely been replaced by Adam(W) in most fields, while second-order algorithms like K-FAC and Shampoo are computationally expensive and less popular for training large models.
Regarding regularization, IRE is designed to enhance implicit regularization and is thus orthogonal to explicit regularization techniques such as weight decay, dropout, and label smoothing. It is worth mentioning that our experiments already incorporate various regularization techniques, including weight decay and label smoothing. Please refer to the Appendix for more details.
- **W\&Q5. Downstream performance evaluation.**
**Response:** Thank the reviewer for this constructive comment. It is valuable to explore the performance of the pretrained models in downstream tasks. However, due to the limited time for response, we could not fully evaluate the downstream tasks for pretrained models, but we conducted preliminary experiments to explore this issue.
- Notably, "Liu et al. (2023) Same pretraining loss..." points out a strong correlation between the flatness and downstream task performance among models. Specifically, it implies that *for models with the same pre-training loss, flatter solutions yield better performance on downstream tasks.*
- Inspired by this observation, we evaluated the flatness, measured by the trace of the Hessian, of models pre-trained by AdamW and AdmIRE. Due to time constraints, we focused only on the experiments in Fig. 3 (left), i.e., training Llama (60M) on the wiki-103 dataset using either AdamW or AdmIRE. The experimental results, shown in the **attached PDF** in our **Response to All Reviewers**, demonstrate that AdmIRE not only achieves the same loss in *only half the iterations* required by AdamW, but also finds solutions that are *significantly flatter* than those found by AdamW.
In the revised version, we will include a comprehensive evaluation of downstream tasks.
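As a side note on measuring flatness via the trace of the Hessian: a standard estimator (Hutchinson's) needs only Hessian-vector products. The sketch below is our own illustration on a toy quadratic, not the paper's exact measurement procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def hutchinson_trace(hvp, dim, n_samples=100):
    """Estimate tr(H) as the average of v^T H v over Rademacher vectors v,
    using only Hessian-vector products (no explicit Hessian needed)."""
    est = 0.0
    for _ in range(n_samples):
        v = rng.choice([-1.0, 1.0], size=dim)
        est += v @ hvp(v)
    return est / n_samples

# Toy quadratic loss with known diagonal Hessian H = diag(h): tr(H) = 10.
# (For a diagonal H the Rademacher estimate is exact, since v_i^2 = 1.)
h = np.array([4.0, 3.0, 2.0, 1.0])
trace_est = hutchinson_trace(lambda v: h * v, dim=4)
print(trace_est)  # 10.0
```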
- **W\&Q6. Theoretical analysis.**
**Response:** We thank the reviewer for the question.
- We clarify that Assumption 5.1 essentially only assumes: 1) the loss $\mathcal{L}(\cdot)$ is $C^4$ smooth and 2) the global minima are connected. For over-parameterized models, this connectivity assumption on the minima manifold has been empirically verified in works such as Draxler et al. (2018) and Garipov et al. (2018), and theoretically supported in Cooper (2018). We refer to lines 267-270 of our manuscript for more details.
- Our main theoretical results are explained in lines 286-369, where we provide an intuitive interpretation: IRE accelerates the "effective dynamics" of base optimizers in flat directions, achieving faster reduction of sharpness; moreover, IRE maintains stability by not affecting movement along sharp directions. This dual effect ensures IRE can improve generalization performance without compromising training stability.
- As for other optimization techniques, our experiments have incorporated momentum, weight decay, and the adaptive method (AdamW). However, these techniques pose new challenges to theoretical analysis, which we leave as future work.
---
Rebuttal Comment 1.1:
Comment: Thanks again for your valuable time and effort in reviewing our work!
We are wondering if our responses and new experiments address your questions or concerns.
We are happy to try to address any other comments in the time remaining. | Summary: The authors propose IRE to enhance the implicit regularization of base optimizers, thereby improving the generalization and convergence in deep learning. IRE decouples the dynamics of flat and sharp directions, reducing sharpness along flat directions while maintaining stability in sharp directions. The paper provides theoretical evidence that IRE can substantially expedite convergence towards flat minima in SAM.
Strengths: 1. IRE's ability to integrate with existing optimizers without major modifications makes it easily adoptable in current systems.
2. Performance improvement: Empirical results show that IRE enhances the generalization capabilities of popular optimizers across multiple tasks and datasets.
3. In the pre-training of large language models, IRE has demonstrated a significant acceleration in convergence speed.
4. The paper offers a theoretical foundation for IRE's effectiveness in minimizing the trace of the Hessian, reinforcing its practical applications.
Weaknesses: 1. The improvements of IRE in Tables 1 and 7 are not as significant as expected, which makes one wonder about its usefulness for CNN networks and its oversensitivity to hyperparameters.
2. A big concern for me is that the experiment in Figure 3 does not appear to have converged yet; the rate of convergence in the early part of an experiment does not equate to the rate of convergence throughout the training process. A more detailed analysis of the model's performance after convergence should also have been added to demonstrate that the final point of convergence is good.
3. Judging from the code, the cost of each training step is at least twice that of SGD.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Could IRE help in the finetuning phase of LLMs, including both convergence speed and convergence position properties?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: 1. The author did not provide information on the computing resources they used.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's recognition of our work and helpful comments. Below, we offer detailed responses to the reviewer's questions.
- **W1.** The improvements of IRE in Tables 1 and 7 are not as significant as expected, which makes one wonder about its usefulness for CNN networks and its oversensitivity to hyperparameters.
**Response:** We thank the reviewer for this insightful comment.
- **CNNs vs ViTs.** It is important to emphasize that **CNNs are designed with a strong image prior**, which limits the potential for further improvement through regularization. Consequently, it is not unexpected that the improvement achieved by IRE for CNNs is less significant. This aligns with observations for previous regularization techniques, such as ASAM [1] and SAM-ON [2], which similarly show limited improvements for CNNs. For instance, ASAM improves SAM by only 0.26 percentage points when training WRN-28-10 on CIFAR-100, and SAM-ON improves SAM by just 0.17 percentage points when training ResNet-50 on ImageNet. In contrast, ViTs, which have a much weaker image prior, benefit more from regularization, as shown in Table 3.
- **Sensitivity to hyperparameters.** 1) For image classification tasks, the optimal hyperparameters for IRE do vary depending on the dataset and model. Nevertheless, Table 7 demonstrates that IRE **consistently** improves performance even without tuning the hyperparameters, although the improvement is modest. 2) In contrast, when training Llama models, IRE with a fixed hyperparameter $\gamma=0.6$ performs effectively across varying model and data sizes, as shown in Fig. 3.
- **W2.** A big concern for me is that the experiment in Figure 3 does not appear to have converged yet; the rate of convergence in the early part of an experiment does not equate to the rate of convergence throughout the training process. A more detailed analysis of the model's performance after convergence should also have been added to demonstrate that the final point of convergence is good.
**Response:** We thank the reviewer for raising this concern.
- **Loss convergence in LLM pre-training.** It is important to note that in LLM pre-training, the training loss never decreases to a small value, due to the huge size of the dataset. For example, as reported in [3], the official Llama pre-training achieves a final loss of only around 1.55, despite the model size being 65B.
- **Final loss v.s. performance.** In LLM pre-training, it is widely observed that the performance on downstream tasks is predominantly determined by the final pre-training loss and has little correlation with other pre-training components such as model types. We refer to [4] for a detailed analysis of this issue. Therefore, the primary focus in LLM pre-training is to reduce the final loss as much as possible, given specific computational and data resources.
- **Final loss.** Our final loss aligns with expectations at our model scale. For instance, our baseline Llama (229M) was trained on openwebtext by AdamW for 100k steps, resulting in a final loss of 2.835. Fig. 1(d) in [5] shows that GPT models of varying sizes, trained on openwebtext by AdamW for 100k steps, achieve losses of 2.915 for GPT-small (125M) and 2.695 for GPT-middle (355M); our loss of 2.835 for Llama (229M) is consistent with these results. Notably, our proposed AdmIRE achieves the same loss of 2.835 in only 50k iterations, half the number required by AdamW.
- **W3.** Judging from the code, the cost of each training step is at least twice that of SGD.
**Response:** We clarify that the cost of an IRE step is twice that of the base optimizer only during the steps that estimate the diagonal Hessian and update the mask. These steps are triggered only once every $K$ steps (see Alg. 1). In all our experiments, we set $K=10$ (see l.168-l.172), so the average cost of IRE is only **1.1 times** that of the base optimizer. Table 4 further verifies this estimate empirically.
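To make the amortized-cost estimate concrete, here is a tiny sketch (the relative per-step costs are our assumption: a base step costs 1 unit, a Hessian/mask-update step costs 2):

```python
def avg_cost(K, base=1.0, hessian_step=2.0):
    """Average per-step cost when the diagonal-Hessian/mask update
    (costing ~2x a base step) fires once every K steps."""
    return ((K - 1) * base + hessian_step) / K

print(avg_cost(10))  # 1.1 -- i.e. 1.1x the base optimizer, as in the response
```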
- **Q1.** Could IRE help in the finetuning phase of LLMs, including both convergence speed and convergence position properties?
**Response:** We thank the reviewer for this constructive question. To address this, we have conducted a new Supervised Fine-Tuning (SFT) experiment. Specifically, we finetune pretrained Llama2-7B with LoRA on Stanford's Alpaca dataset. The results, shown in the **attached PDF** in our **Global Response to All Reviewers**, demonstrate that IRE can **accelerate the convergence in SFT**. This is consistent with the advantage of IRE in pretraining. Unfortunately, we do not have enough time to evaluate the performance of the convergent models on downstream tasks. In the revised version, we will include complete results for SFT.
- **L1.** The author did not provide information on the computing resources they used.
**Response:** Thank you for this reminder. For Section 4.1 (image classification), the experiments on Cifar-10/100 were conducted using a single A800 GPU, while the experiments on ImageNet were conducted using 4 A800 GPUs. Details regarding the computing resources for the experiments in Section 4.2 are provided in Appendix B (l.664 and l.678).
[1] Kwon et al. ASAM: Adaptive Sharpness-Aware Minimization for Scale-Invariant Learning of Deep Neural Networks. (ICML 2021)
[2] Muller et al. Normalization Layers Are All That Sharpness-Aware Minimization Needs. (NeurIPS 2023)
[3] Touvron et al., LLaMA: Open and Efficient Foundation Language Models. (2023)
[4] Du et al. Understanding Emergent Abilities of Language Models from the Loss Perspective. (2024)
[5] Liu et al. Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training. (ICLR 2024)
---
Rebuttal Comment 1.1:
Comment: Thanks again for your valuable time and effort in reviewing our work!
We are wondering if our responses and new experiments address your questions or concerns.
We are happy to try to address any other comments in the time remaining. | null | null | Rebuttal 1:
Rebuttal: ### **Global Response to All Reviewers.**
- We express our sincere gratitude to all reviewers for appreciating our results, i.e.,
- A novel algorithm framework (IRE), which can improve both generalization and optimization, by enhancing the implicit regularization.
- Experimentally, IRE consistently improves generalization performance for image classification. Remarkably, IRE achieves a $2\times$ speed-up in pre-training Llama models.
- Theoretically, we demonstrate that IRE substantially accelerates convergence towards flat minima in sharpness-aware minimization (SAM).
- We also thank the reviewers for their valuable comments and suggestions for improving our paper. In our revised version, we will correct all typos, provide complete experimental settings and results, and incorporate the discussions with the reviewers.
- **New experiments.** To further explore the performance of IRE in various settings, we have conducted 3 new experiments. As suggested by Reviewer 69nS and QWSd, we examined:
- (i) Whether IRE can accelerate the convergence of AdamW in the finetuning phase;
- (ii) During the pre-training phase, whether the solution found by AdmIRE has better properties than that found by AdamW.
Additionally, to supplement our results on generalization, we conducted an additional experiment:
- (iii) Whether IRE can improve the generalization performance of AdamW when training ViT on ImageNet.
All of these results are reported in the **attached PDF**, which further verifies the remarkable performance of IRE under more settings.
- We have addressed each concern raised by the reviewers through separate responses provided below.
Pdf: /pdf/19656e688a5dc5130000a53587d2223e13704e19.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Learning Macroscopic Dynamics from Partial Microscopic Observations | Accept (poster) | Summary: This paper introduces an efficient framework for learning macroscopic dynamics of complex systems. The training data consists of microscopic configurations with only partial dynamics, i.e., time derivatives of only a small subset of the microscopic variables. The paper shows how this information nevertheless allows unbiased estimation of dynamics at the level of low-dimensional macroscopic dynamics. Experiments show the data-efficiency of the scheme compared to learning on full microscopic dynamics.
Strengths: * The paper’s approach is novel, addresses an important bottleneck in the field, and has the potential to become an important contribution in the area of ML for dynamical systems
* The paper is generally well written and the exposition is easy to follow.
* The experiments cover a diversity of dynamical systems, including MD and PDE systems.
Weaknesses: * Although the theoretical results are nice and the experimental results empirically support them, I am still skeptical of the efficacy of $L_x$ as a proxy loss for $L_z$. To bolster the claims, I strongly recommend that authors present some empirical analysis on the spectrum of the Jacobian $\psi’$, or even better, plot loss curves over the course of a training run showing the behavior of $L_z$ versus $L_x$. This will give the reader a much better intuition for how these different loss functions behave.
* Without an understanding of how significant the final test errors are, it is not particularly insightful to merely show test errors after an arbitrarily selected amount of training data. Instead it would be nice to see scaling plots like in Figure 2 for the other systems in the analysis.
* Table 1 and Figure 2 are redundant and I would recommend eliminating the former.
* Theorem 2 could be further clarified. What is $\arg \min L_{x,p}$ since $L_{x,p}$ is also a random variable?
* Is it difficult or tricky to tune the weight of the conditioning number loss term? What happens if the weight on this term is not high enough? What if it is too high? Ablation analyses of these factors would improve the paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: See above
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review and thoughtful feedback. Below are our responses:
**Q: Plot loss curves over the course of a training run showing the behavior of $L_x$ versus $L_z$. This will give the reader a much better intuition for how these different loss functions behave.**
A: Thanks for this suggestion. We have attached a PDF file in the top-level Author Rebuttal. In Figure 1 of that file, we plot the training losses $L_z$ and $L_x$ and their test metrics. We observe that when training the model with $L_x$, the loss $L_x$ drops sharply in the first several epochs and then declines only slightly in the following epochs, while the test error continues to decrease.
**Q:Without an understanding of how significant the final test errors are, it is not particularly insightful to merely show test errors after an arbitrarily selected amount of training data. Instead it would be nice to see scaling plots like in Figure 2 for the other systems in the analysis.**
A: In Figure 3, each model is trained with the same number of force computations. For each latent structure, we compare the relative performance of $L_{x, p}$ versus $L_{z}$, and $L_{x, p}$ always outperforms $L_{z}$; thus our method is agnostic to the choice of latent structure. Due to time constraints, we were unable to complete the scaling plots, but we will continue to refine the paper based on your valuable suggestion.
**Q: Table 1 and Figure 2 are redundant**
A: Thanks for pointing this out. We have revised the paper to remove Table 1 from the main content.
**Q: Theorem 2 could be further clarified. What is $\arg \min L_{x,p}$ since $L_{x,p}$ is also a random variable?**
A: In statistical learning theory, the empirical risk is considered a random variable. Denote the data by $X$ and assume $X \sim \mathcal{D}$. Let $X^n = \{X_1, \cdots, X_n\}$ be a collection of $n$ i.i.d. samples from $\mathcal{D}$. Then the empirical risk is:
$$
L_n(\theta) = \frac{1}{n}\sum_{i=1}^n l_{\theta}(X_i)
$$
where $l_{\theta}$ is a loss function that depends on $\theta$. Note that the empirical risk is a random variable since $X^n$ is a random variable. We can also take the expectation of the empirical risk $L_n(\theta)$ to obtain the expected risk:
$$ E_{X^n} L_n(\theta) = E_X l_{\theta}(X) $$
When we write $\arg \min_{\theta} L_n(\theta)$, it is implicitly assumed that the $n$ data samples have already been drawn from the distribution, so $L_n(\theta)$ can be computed deterministically. In our case, $L_{x,p}$ is a random variable; when we write $\arg \min L_{x,p}$, we implicitly assume the data samples $\{x^i, f_{I(x^i)}(x^i)\}_{i=1, \cdots, K}$ have already been drawn, so $L_{x,p}$ can be computed deterministically.
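The point that the empirical risk is random through the samples but deterministic once they are fixed can be seen in a few lines (the squared loss and Gaussian data are our illustrative choices, not the paper's setting):

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_risk(theta, xs):
    """L_n(theta) = (1/n) * sum_i l_theta(x_i), with l_theta(x) = (x - theta)^2."""
    return np.mean((xs - theta) ** 2)

# Before drawing, L_n(theta) is a random variable (it depends on the sample).
# Once the n samples are drawn and fixed, L_n is a deterministic function of
# theta, so arg min_theta L_n(theta) is well defined (here: the sample mean).
xs = rng.normal(loc=2.0, scale=1.0, size=1000)
theta_hat = xs.mean()  # minimizer of this empirical risk
assert empirical_risk(theta_hat, xs) <= empirical_risk(2.0, xs)
```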
**Q: Is it difficult or tricky to tune the weight of the conditioning number loss term? What happens if the weight on this term is not high enough? What if it is too high? Ablation analyses of these factors would improve the paper**
A: In Eq 7, the loss $L_{AE} = L_{rec} + \lambda_{cond} L_{cond}$.
- If $\lambda_{cond}$ is too high, $L_{AE}$ will be dominated by $L_{cond}$. The autoencoder may then fail to reconstruct the microscopic dynamics well and hence may not capture the closure terms well. If the latent space is not sufficiently closed, we cannot learn the macroscopic dynamics well.
- If $\lambda_{cond}$ is too low, $\varphi^{\prime}(x)$ may be very ill-conditioned. In Theorem 1, the eigenvalues of $\varphi^{\prime}(x)^T \varphi^{\prime}(x)$ are lower bounded by $b_1$ and upper bounded by $b_2$, and we have:
$$
b_1(L_x + C) \leq L_z \leq b_2 (L_x + C)
$$
This theoretically guarantees the effectiveness of $L_x$ when $b_1$ and $b_2$ are close. If $\varphi^{\prime}(x)$ is ill-conditioned, then $b_1$ is much smaller than $b_2$, and Theorem 1 can no longer guarantee the effectiveness of $L_x$.
In our experiments, the value of $\lambda_{cond}$ is determined through a logarithmic grid search (specifically, over 1e-7, 1e-6, 1e-5, 1e-4, and 1e-3). Based on our experience, tuning $\lambda_{cond}$ is relatively straightforward and not tricky.
Below we show the test error of the Predator-Prey system. We vary $\lambda_{cond}$ and train the autoencoder, then learn the macroscopic dynamics with $L_{x,p}(p=1/5)$ and report the test error.
| $\lambda_{cond}$ | 0 | 1e-8 | 1e-7 | 1e-6 | 1e-5 | 1e-4 | 1e-2 |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| Test error of $L_{x,p} (p=1/5)$ | 6.42e-02 | 2.62e-03 | 2.84e-03 | *8.36e-04* | 3.66e-03 | 2.60e-03 | 4.24e-03 |
We can observe that the performance of $L_{x,p}$ may deteriorate when the parameter $\lambda_{cond}$ is either too high or too low.
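The role of $b_1$ and $b_2$ can also be checked numerically on a linear toy, where $\varphi^{\prime}(x) = W$ is constant and the Theorem-1-style bound holds with $C = 0$ (the dimensions and the linear map are our own illustration, not the paper's encoder):

```python
import numpy as np

rng = np.random.default_rng(0)

# For a linear encoder z = W x, phi'(x) = W everywhere, and the extreme
# eigenvalues b1, b2 of W^T W bound the ratio L_z / L_x (Rayleigh quotient).
# A generic square W keeps W^T W positive definite.
W = rng.normal(size=(5, 5))
eig = np.linalg.eigvalsh(W.T @ W)  # eigenvalues in ascending order
b1, b2 = eig[0], eig[-1]

dx = rng.normal(size=5)                 # a residual in microscopic (x) space
L_x = float(dx @ dx)
L_z = float((W @ dx) @ (W @ dx))        # the same residual measured in z-space
assert b1 * L_x <= L_z <= b2 * L_x      # the two-sided bound holds
```

When $W$ is ill-conditioned, $b_1 \ll b_2$ and the two-sided bound becomes uninformative, mirroring the discussion above.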
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' detailed response to my concerns and will raise the score to 7. However as I am not as familiar with existing literature on reduced order models as other reviewers, I will decrease my confidence to 3. | Summary: The authors aims to use ML to compute macroscopic dynamics of a system from partially observed microscopic dynamics. The paper defines the macroscopic dynamics to be the lower dimensional latent space of an autoencoder that encodes the microscopic system. Given the autoencoder they train a macroscopic dynamics model using the partially observed forces of the microscopic dynamics that are projected to the macroscopic space via the encoder. Experiments are carried out on toy systems and lennard jones potential systems that are scaled to a large number of particles.
Strengths: 1. The paper tackles an impactful problem with important scientific applications
2. The underlying idea is simple (training an autoencoder to obtain the mapping from fine to coarse dynamics and using it to project the microscopic forces). However, non-trivial technical problems arise which the authors solve with creative techniques that would be valuable to share and present at a conference.
3. Rigorous evaluation: The authors have several creative insightful evaluations that go beyond the experiments that I would have imagined or expected (this is with the caveat that I do not work on reduced order modeling and am not familiar with what the standard evaluations are).
Weaknesses: 1. Experiment system choices: The authors carry out experiments on small toy systems and large lennard jones systems of the same particles. It seems to me that the problem should become significantly harder with the "homogeneity" of the modeled system decreasing. It further seems to me that this would be the case in real scientific applications where we potentially have different atom types and multiple forces beyond lennard jones potential forces. Is this assessment correct? It would be great if you could put into perspective how close the experiments you carry out and the systems you choose are to actual scientific applications of interest. Thanks!
2. Presentation: you briefly mention closure modeling in the related work. Then in the methods section you say you follow closure modeling and define quantities such that a closed system arises. At this point I ask myself, what does closed mean? I hoped it becomes clear later in the paper but find there to be no sufficient explanation. Could you please let me know how you train the autoencoder in such a manner that the latent space captures specific desired macroscopic dynamics?
Minor:
1. Presentation: Motivation: What are some compelling application examples in which learning from partially observed microscopic dynamics is valuable?
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What if the particles to observe are not chosen uniformly at random? Is that also a common scenario where we do not have a uniform subsampling but our observations are biased by the specific subregion that we observe.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper prominently discusses two limitations (sparsity assumption and sampled distribution of microscopic dynamics) which are significant and put the paper into perspective of the field of work. I do not see further meaningful limitations that should be discussed apart from the possible limitation in terms of realistic evaluation which I mention in the weaknesses section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review and thoughtful feedback. Below are our responses:
**Q: What does closed mean?**
A: Let the state of a system be $z$. When we say the system is closed, we mean the dynamics of the system can be written in the following form:
$$
\dot{z} = f(z)
$$
where $f$ is a function that depends only on the system's state $z$, not on any external variables. Thank you for pointing this out; we will include an explanation of this in the paper.
**Q: Could you please let me know how you train the autoencoder in such a manner that the latent space captures specific desired macroscopic dynamics?**
A: The autoencoder is trained with Eq. 7 in the paper:
$$
L_{AE} = L_{rec} + \lambda_{cond} L_{cond}
$$
which is a reconstruction loss plus a condition-number regularization loss.
The minimization of the reconstruction loss ensures that the latent space captures almost all the key dynamics and structure of the high-dimensional system. The specific macroscopic observables, together with the latent space, can then be viewed as a closed system, since the latent space captures almost all the information; the autoencoder only finds the closure to the macroscopic observables. Next, we use a neural network to parametrize their dynamics and train the model with $L_{x, p}$ to capture the desired macroscopic dynamics.
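A minimal sketch of the Eq. 7 objective for a linear autoencoder, assuming a squared condition number as $L_{cond}$ (the function names, the linear architecture, and the exact form of the condition-number penalty are our illustrative assumptions, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def ae_loss(enc, dec, X, lam_cond=1e-4):
    """L_AE = L_rec + lambda_cond * L_cond for a linear autoencoder.

    L_rec is the mean squared reconstruction error; L_cond penalizes the
    squared condition number of the encoder Jacobian (here just `enc`).
    """
    rec = X @ enc.T @ dec.T                # decode(encode(x)) for each row x
    L_rec = np.mean((rec - X) ** 2)
    s = np.linalg.svd(enc, compute_uv=False)  # singular values, descending
    L_cond = (s[0] / s[-1]) ** 2              # squared condition number >= 1
    return L_rec + lam_cond * L_cond

X = rng.normal(size=(100, 8))              # microscopic states
enc = rng.normal(size=(3, 8))              # phi: R^8 -> R^3 (dimension reduction)
dec = rng.normal(size=(8, 3))              # psi: R^3 -> R^8
loss = ae_loss(enc, dec, X)
```

In training, both terms would be minimized jointly; a larger `lam_cond` pushes the encoder toward well-conditioned Jacobians at the expense of reconstruction.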
**Q: The problem should become significantly harder with the "homogeneity" of the modeled system decreasing, In real scientific applications where we potentially have different atom types and multiple forces beyond Lennard-Jones potential forces. Is this assessment correct? It would be great if you could put into perspective how close the experiments you carry out and the systems you choose are to actual scientific applications of interest**
A: Thank you for the insightful question. The application of our method to real scientific problems is also a primary focus of our next step. We agree that, compared to the Lennard-Jones system we chose, real scientific applications potentially involve 1. different atom types and 2. more complex forces.
In the Lennard-Jones experiment, we validated that our method can be applied to particle systems and scales to very large systems. We believe the main challenge posed by different atom types and more complex forces lies in the closure modeling; more advanced autoencoders can be used for efficient closure modeling. Once the closure to the desired macroscopic observables is found, our framework can be readily applied to learn the macroscopic dynamics in real large-scale scientific problems.
**Q: What are some compelling application examples in which learning from partially observed microscopic dynamics is valuable?**
A: For example, in the design of Li-ion batteries, the viscosity and ionic diffusivity of liquid electrolytes are usually of key concern. To obtain these macroscopic observables, existing methods require long-term microscopic simulations in which the calculation of the microscopic forces on all the atoms is extremely expensive [1]. In this situation, our method can learn the desired macroscopic dynamics from partial computation of the microscopic forces, significantly reducing the computational cost of force computations.
**Q: What if the particles to observe are not chosen uniformly at random? Is that also a common scenario where we do not have a uniform subsampling but our observations are biased by the specific subregion that we observe**
A: When we do not have uniform subsampling, we can use importance sampling to reweight the data so that the probability of each particle being chosen is the same. For example, assume a region $\Omega$ is divided into four subregions $\Omega_i, i=1,2,3,4$. The $i$-th region is chosen with probability $p(\Omega_i) = p_i > 0$, with $p_1 + p_2 + p_3 + p_4 = 1$, and we have $n$ samples drawn from this distribution. We can then reweight each sample from region $\Omega_i$ by $1/p_i$; in the reweighted data, each region effectively contributes with the same probability.
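The reweighting can be sketched in a few lines of numpy (the region probabilities and the per-region quantity $f$ are invented for illustration): dividing each sample's contribution by its selection probability $p_i$ recovers the uniform-over-regions average.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unequal region-selection probabilities p_i and a per-region quantity f_i.
# Reweighting each sample from region i by 1/p_i removes the sampling bias.
p = np.array([0.1, 0.2, 0.3, 0.4])
f = np.array([5.0, 1.0, 2.0, 8.0])          # uniform-region mean: 4.0

regions = rng.choice(4, size=200_000, p=p)  # biased sampling of regions
weights = 1.0 / p[regions]                  # importance weights
est = np.mean(weights * f[regions]) / 4.0   # approximates the uniform mean 4.0
print(est)
```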
---
References:
[1] Jia, W., Wang, H., Chen, M., Lu, D., Lin, L., Car, R., ... & Zhang, L. (2020, November). Pushing the limit of molecular dynamics with ab initio accuracy to 100 million atoms with machine learning. In SC20: International conference for high performance computing, networking, storage and analysis (pp. 1-14). IEEE.
---
Rebuttal Comment 1.1:
Comment: Sorry for the late response - I reread the paper and will be faster to respond from now on.
(and thanks for the careful responses and explanations)
I think your answers that you put under these headings best relate to my main concern:
1. "Q: Could you please let me know how you train the autoencoder in such a manner that the latent space captures specific desired macroscopic dynamics?"
2. "Q: What are some compelling application examples in which learning from partially observed microscopic dynamics is valuable?"
I think your answer to the first question misses my concern a little bit - I should have explained my question better. I understand that you train an autoencoder and a model operating on its latent space to capture the dynamics of the latent space.
The concern is: why would the latent space capture the DESIRED macroscopic dynamics. If it is closed, it will capture all dynamics - sure. Also, it will be lower dimensional and capture SOME macroscopic dynamics. But why would the latent dimensions necessarily correspond to any of the macroscopic observables that "we" as the practitioners were interested in?
For instance, considering your example that you gave for question 2 where we care about the viscosity of the system: what guarantees us that any of the latent variables correspond to viscosity? It seems to me that I am missing the entire point of the paper and that the latent representations remain uninterpretable and we cannot ensure that they correspond to certain macroscopic observables that we were interested in.
---
Reply to Comment 1.1.1:
Comment: Thanks for your insightful question. Let us address the question more clearly.
In l 124, we mention '*we will use an autoencoder to find the closure $\hat{z} = \hat{\varphi}(x)$ to $z^{\ast}$ such that $z = (z^{\ast}, \hat{z})$ forms a closed system*'.
Here $z^{\ast} = \varphi^{\ast}(x)$ is the macroscopic observables that we are interested in, and the dynamics of $z^{\ast}$ is the DESIRED macroscopic dynamics.
The function $\varphi^{\ast}$ **is determined beforehand** and contains no trainable parameters. The closure $\hat{z} = \hat{\varphi}(x)$ is learned by the autoencoder. During the training of the autoencoder, $\varphi^{\ast}$ is fixed all the time, and only the parameters of $\hat{\varphi}$ are updated.
For example, in the Lennard-Jones experiment, we choose the macroscopic observable $z^{\ast}$ as the Temperature $T$. $z^{\ast} = \varphi^{\ast}(x)$ is defined through Eq 18:
$$
T = \frac{2}{3(N_{atoms}-1)} \times \sum_{i=1}^{N_{atoms}} \frac{m_iv_i^2}{2}
$$
In l 250, we mention we find another 31 closure variables using the autoencoder.
We **directly concatenate** the macroscopic observable $z^{\ast}$ and the closure $\hat{z}$ to get the latent variable $z$:
$$z = (z^{\ast}, \hat{z})$$
Then the first dimension of $z$ is the desired macroscopic observable.
The method mentioned above is common in closure modeling. For example, the authors in [1] want to learn the stretching dynamics of a polymer; they fix $z^{\ast}$ to be the length of the polymer and find an additional two-dimensional closure variable $\hat{z}$. As for interpretability, $z^{\ast}$ is interpretable since it is exactly the set of macroscopic observables we want. Usually, the closure variables $\hat{z}$ are not easy to interpret, since they are learned by the neural network; for simple systems such as a nonlinear pendulum, it may be possible to interpret the $\hat{z}$ learned by the autoencoder [2, 3].
We hope that this explanation clarifies your question.
---
References:
[1] Chen, X., Soh, B.W., Ooi, ZE. et al. Constructing custom thermodynamics using deep learning. Nat Comput Sci 4, 66–85 (2024). https://doi.org/10.1038/s43588-023-00581-5
[2] Champion, K., Lusch, B., Kutz, J. N., Brunton, S. L. (2019). Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences, 116(45), 22445-22451.
[3] Evangelou, N., Giovanis, D. G., Kevrekidis, G. A., Pavliotis, G. A., & Kevrekidis, I. G. (2023). Machine Learning for the identification of phase-transitions in interacting agent-based systems. arXiv preprint arXiv:2310.19039. | Summary: The authors describe a method to efficiently obtain aggregate information about forces acting on all particles in a system, in an effort to compute dynamics of macroscopic (aggregated) quantities. The key idea described in the paper is to sub-sample the particles to a small set, and only compute the forces on the small set (instead of computing all forces). The forces are then used to train a vector field of a latent space variable (through the chain rule). The latent space is obtained by training an auto-encoder on the microscopic states, independent of the force computations. The authors demonstrate the efficiency on multiple numerical systems, and include theoretical insights into the loss functions on micro- and macro (latent) states.
Strengths: Obtaining dynamics of macroscopic variables from microscopic simulations is an important and challenging problem. Sub-sampling particles to reduce the burden on force computations and learning latent spaces of microscopic states automatically (instead of prescribing known quantities) are reasonable approaches to address this challenge. The chosen particle systems in the computational experiments are well-known and reasonable choices, and the reduction in the number of force computations (while keeping an accurate macroscopic state dynamics) are impressive.
Weaknesses: 1) The methods proposed in the paper are (a) sub-sampling particle indices to reduce number of force computations and (b) using a standard auto-encoder network to obtain a latent space. The former idea - especially when combined with the chain rule for the macroscopic dynamics - is noteworthy. Still, the general advancement of the state of the art is not large enough to warrant acceptance, in my opinion. In particular, the choice of auto-encoder is not suitable for the given problem (see question 6).
2) Some parts of the literature are not cited.
[A] l.65: reduced order modeling has not started in 2020 with deep learning methods. Appropriate literature from the decades before should be cited. A review can be found here:
[A1] Schilders, Wilhelmus H.A., Joost Rommes, and Henk A. van der Vorst, eds. Model Order Reduction: Theory, Research Aspects and Applications. Mathematics in Industry. Springer Berlin Heidelberg, 2008. https://doi.org/10.1007/978-3-540-78841-6.
[B] l.55: similarly, "learning from partial observations" is a decades (if not centuries) old problem. Key mathematical contributions are from Ruelle and Takens over 40 years ago, refined by Yorke:
[B1] Takens, Floris. “Detecting Strange Attractors in Turbulence.” Lecture Notes in Mathematics, 1981, 366–81. https://doi.org/10.1007/bfb0091924.
[B2] Ruelle, David, and Floris Takens. “On the Nature of Turbulence.” Commun. Math. Phys. 20, no. 3 (September 1971): 167–92. https://doi.org/10.1007/bf01646553.
[B3] Sauer, Tim, James A. Yorke, and Martin Casdagli. “Embedology.” Journal of Statistical Physics 65, no. 3 (1991): 579–616. https://doi.org/10.1007/BF01053745.
[C] In general, the idea of "sparse sampling" of the microscopic dynamics has also been published before. This must not just be cited but critically compared to, especially [C2].
[C1] Samaey, Giovanni, Ioannis G. Kevrekidis, and Dirk Roose. “Patch Dynamics with Buffers for Homogenization Problems.” Journal of Computational Physics 213, no. 1 (March 2006): 264–87. https://doi.org/10.1016/j.jcp.2005.08.010.
[C2] Liu, Ping, Giovanni Samaey, C. William Gear, and Ioannis G. Kevrekidis. “On the Acceleration of Spatially Distributed Agent-Based Computations: A Patch Dynamics Scheme.” Applied Numerical Mathematics 92 (June 2015): 54–69. https://doi.org/10.1016/j.apnum.2014.12.007.
[D] The idea of creating a latent space with an auto-encoder is of course also not new. The authors cite a few papers in l.143, but should do so in section 4.1.
3) l141: it is misleading to write "$g_\theta = \phi'(x)f(x)$". g depends on z, not x (correct?). It is not clear if the authors mean "$g_\theta(z) = \phi'(\phi^{-1}(z))f(\phi^{-1}(z))$", i.e., as a combination of $\phi'$ (encoder Jacobian), $\phi^{-1}$ (decoder) and $f$. At this point in the paper, it could also be that training of g happens directly on dynamics of z and avoids $\phi$. The latter is probably the case, looking at equation 9, but it is not clear from the text.
Minor:
1) l74: "where" instead of "here", and it is $f(x)\in\mathbb{R}^N$, not $f\in\mathbb{R}^N$ (the former is a vector, the latter a function).
2) l157: "we constrict the condition number..." (I think "constrain" is meant?) is not ideal wording; because the loss in eq. 6 does not constrain the condition number at all, it just penalizes it.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1) What could be done with quantities that do not rely on individual molecule evaluations (e.g. force), but on integral quantities (e.g. "density")?
2) I assume the authors require all particles for the force computation because even if a particle is not chosen for force computation, it still needs to be present because other particles require its position for "their" force computation? This would also answer question 1.
3) What is the core difference between reference C2 and the approach by the authors?
4) l159: "we get rid of the matrix-vector product...": why is this beneficial? The new loss also has a matrix vector product with phi' and g.
5) Equation 2 is not precise enough (and contains two typos: the parenthesis and no period at the end). What is the "almost equivalent" here? What is assumed, precisely? How accurate does the approximate force need to be?
6) The autoencoder described in section 4.1 seems to have a very simple architecture. How is it possible to train it for such high-dimensional microscopic input states, if the encoder is not permutation invariant? For example, if the model is trained using a fixed index for each molecule, and then the indices are shuffled at random, the input to the autoencoder completely changes while the conformation (independent of the index) does not change at all. Since the autoencoder is not permutation invariant, it would need to learn all conformations including permutations, which is an incredibly high-dimensional space (not possible to sample in reasonable time).
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors do not discuss the limitations of the auto-encoder they use to map to a latent space, in particular regarding permutation invariance of the molecular indices (see my question).
Societal concerns are not addressed at all; e.g. that speeding up force computations can speed up insights into materials that are harmful (not just beneficial).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review and thoughtful feedback. We have corrected all the minor issues and typos that you pointed out. We have also cited the previously missing literature you mentioned. Below are our detailed responses to the questions:
**Q: Equation 2 is not precise enough. What is the "almost equivalent" here? What is assumed, precisely? How accurate does the approximate force need to be?**
A: Thanks for pointing this out. In Eq 2, we mean that $f_i(x_1, \cdots, x_n)$ can be calculated accurately from the microscopic coordinates of at most $M$ particles. The approximation is accurate enough that the error is negligible. For example, it is common practice in molecular dynamics simulations to employ a cutoff distance for force fields. We have changed the '$\approx$' in Eq 2 to '$=$'.
If we want to formalize Eq 2 more rigorously, we can require the error to be bounded by a tolerance $\epsilon$:
$$
||f_i(x_1, \cdots, x_n) - \tilde{f}_i|| < \epsilon
$$
where $f_i$ is the exact microscopic force and $\tilde{f}_i$ is its approximation. $\epsilon$ will play a role in Theorem 1 such that Eq 11 becomes:
$$
b_1(L_x(\theta) + C) + \mathcal{O}(\epsilon) \leq L_z \leq b_2(L_x(\theta) + C) + \mathcal{O}(\epsilon)
$$
The performance of $L_x(\theta)$ can still be guaranteed if $\mathcal{O}(\epsilon)$ is much smaller compared to $b_1 C$.
**Q: I assume the authors require all particles for the force computation because even if a particle is not chosen for force computations, it still needs to be present because other particles require its position for "their" force computation?**
A: Not really. By the sparsity assumption mentioned in l 100, the microscopic force on each particle $i$ depends only on the microscopic coordinates of several particles that are in $J(x_i)$. If we want to calculate the forces on particles $x_1, \cdots, x_m$, we only need the coordinates of the particles that belong to $J(x_1), \cdots, J(x_m)$. The coordinates of particles in $J(x_i)$ are only used for the force calculation on particle $i$, not for any other particles. The computation cost increases linearly with the number of particles for which the microscopic forces are calculated.
We also want to clarify that, since the macroscopic observable $z^{\ast}$ depends on the microscopic coordinates of all the particles, the calculation of $z^{\ast}$ requires all the particles to be present. But the calculation of $z^{\ast}$ is fast and cheap, and its computation cost is almost negligible.
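The neighbour-limited computation described above could be sketched as follows (names and the brute-force neighbour search are ours; the point is that the cost scales with the number of sampled particles):

```python
import numpy as np

def forces_on_subset(coords, sample_idx, cutoff, pair_force):
    """Compute the microscopic force only for the sampled particles.

    For each sampled particle i, only its neighbours within `cutoff` -- the
    set J(x_i) from the sparsity assumption -- enter the sum, so the cost
    grows with len(sample_idx) rather than with all n^2 pair interactions.
    """
    coords = np.asarray(coords, dtype=float)
    forces = {}
    for i in sample_idx:
        dist = np.linalg.norm(coords - coords[i], axis=1)
        neighbours = np.where((dist > 0) & (dist < cutoff))[0]   # J(x_i)
        forces[i] = sum((pair_force(coords[i], coords[j]) for j in neighbours),
                        np.zeros(coords.shape[1]))
    return forces
```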
**Q: l159: "we get rid of the matrix-vector product...": why is this beneficial? The new loss also has a matrix-vector product with phi' and g.**
A: The $i$-th entry of $\varphi^{\prime}(x)f(x)$ is $\sum_{j}\varphi_{ij}^{\prime}(x)f_j(x)$. It is difficult to find an unbiased estimate of $\sum_{j}\varphi_{ij}^{\prime}(x)f_j(x)$ using a subset of {$f_j(x)$}$_{j=1, \cdots, n}$.
However, in the loss $L_x$ shown in Eq 10, $||f(x) - (\varphi^{\prime}(x))^{\dagger}g_{\theta}(z)||_2^2$ can be written as
$$\sum_{i=1}^n ||f_i(x) - ((\varphi^{\prime}(x))^{\dagger} g_{\theta}(z))_i||_2^2$$
For any uniformly sampled subset {$f_j(x)$}$_{j \in I(x)}$ with $|I(x)| = n \cdot p$, the quantity
$$ \frac{1}{p}\sum_{i\in I(x)}||f_i(x) - ((\varphi^{\prime}(x))^{\dagger}g_{\theta}(z))_i||_2^2$$
is a stochastic, unbiased estimate of $||f(x) - (\varphi^{\prime}(x))^{\dagger}g_{\theta}(z)||_2^2$. The new loss thus learns the macroscopic dynamics from a subset of the microscopic forces and requires fewer force computations than $L_z$; this is exactly where the computational saving comes from.
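The unbiasedness can be checked on a toy example (the residual values are made up; `residuals[i]` stands for the per-particle term $||f_i(x) - ((\varphi'(x))^{\dagger} g_\theta(z))_i||_2^2$):

```python
import itertools
import numpy as np

residuals = np.array([0.3, 1.2, 0.7, 2.0])   # toy per-particle squared errors
n, m = len(residuals), 2                      # sample half the particles
p = m / n

full_loss = residuals.sum()
estimates = [residuals[list(idx)].sum() / p
             for idx in itertools.combinations(range(n), m)]
# Averaging the subset estimator over all equally likely subsets recovers
# the full loss exactly, i.e. the estimator is unbiased.
mean_estimate = np.mean(estimates)
```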
**Q: What could be done with quantities that do not rely on individual molecule evaluations (e.g. force), but on integral quantities (e.g. "density")?**
A: Our method can learn the dynamics of any macroscopic observable that can be written as a differentiable function of the microscopic coordinates: $z^{\ast} = \varphi^{\ast}(x)$. The density within a fixed region $\Omega$ is defined as the ratio of the number of particles contained in $\Omega$ to the area of $\Omega$:
$$
\frac{\sum_{i} \mathbb{1}(x_i\in \Omega)}{|\Omega|}
$$
The above function is not differentiable, but we can use a differentiable function to approximate it. Then our method can be applied.
Currently, our method still cannot address the case where $\varphi^{\ast}$ is not differentiable, or not deterministic.
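A possible smooth surrogate for the indicator (a sketch under our own assumptions, taking $\Omega$ to be a disk; not from the paper):

```python
import numpy as np

def smooth_density(coords, center, radius, area, eps=0.1):
    """Differentiable surrogate for the density in a disk Omega.

    The hard indicator 1(|x_i - c| < R) is replaced by a sigmoid of
    steepness 1/eps; as eps -> 0 the exact, non-differentiable density
    is recovered. All names here are illustrative.
    """
    dist = np.linalg.norm(np.asarray(coords) - np.asarray(center), axis=1)
    soft_indicator = 1.0 / (1.0 + np.exp((dist - radius) / eps))
    return soft_indicator.sum() / area
```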
**Q: The autoencoder described in section 4.1 seems to have a very simple architecture. How is it possible to train it for such high-dimensional microscopic input states, if the encoder is not permutation invariant?**
A: Thank you for the insightful question. Yes, we use MLPs for both the encoder and the decoder, which indeed have a very simple architecture. In l 587, we mentioned that each configuration has the same atom positions and velocity directions, and we only vary the temperature of the system. The order of the particles is fixed in the input to the autoencoder. In this way, no permutation needs to be considered and our simple MLP is enough to learn a latent space. We mainly want to use the experiment on the Lennard-Jones system to validate that our method can be applied to particle systems and can scale to very large systems. Note that in the PDE experiments, the initial configuration is sampled from the initial distribution, and our method can handle such cases.
Thank you for pointing this out. We have added the limitation of the autoencoder in our paper. We will also try more advanced autoencoders such as graph neural networks that are permutation invariant to further improve our paper.
**Q: What is the core difference between reference C2 and the approach by the authors?**
A: We answer this question in the top-level Author Rebuttal [C].
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their answers and clarification. The discussion of the literature is adequate. Still, my concern about major advancement of the field of networks (and ML in general) stands - reinforced by the clarification that the auto-encoder indeed cannot deal with permutations of the individual atoms. I will keep the review score at 3.
Rebuttal: We would like to thank all reviewers for providing detailed reviews and constructive feedback that have improved the paper.
Reviewer d4vx mentioned some relevant literature. We would like to compare these studies with our research to provide deeper insights into our method.
**[A]** A1 includes many model order reduction methods such as the Krylov projection framework and proper orthogonal decomposition. A1 also mentions data-driven reduced order methods but considers the dimension reduction mapping to be a linear combination of several chosen basis functions. In this work, we use a neural network to parametrize the dimension reduction mapping, which has a more flexible form but leads to a harder optimization problem. We want to mention that our method can generally be combined with any dimension reduction method, as long as it can learn the closure to the desired macroscopic observables well.
**[B]** In B1, B2, and B3, the 'partial observation' refers to an observable of the microscopic state $x$ (e.g. the observable is a function of $x$ ). These studies aim to obtain information (e.g., attractor) about the microscopic system from these macroscopic observables.
In our work, 'partial observation' refers to the computation of microscopic forces on a subset of particles. Our work aims to learn the macroscopic dynamics from microscopic forces.
**[C]** The approach in C1, C2 belongs to the equation-free framework in [1]. The core difference between these studies and our approach is:
- C1 and C2 only consider partial differential equation (PDE) systems. The macroscopic observables in C1 and C2 are the solution of the PDE at the coarse spatial grid, so the macroscopic observables depend locally on the microscopic coordinates. In our work, the macroscopic observables depend globally on the microscopic coordinates of all the particles. In the patch dynamics scheme mentioned in C1 and C2, the lifting and restriction operators can be viewed as the encoder and decoder in our case. These operators are explicitly defined, whereas we need to train the autoencoder, which may cause some optimization problems.
- The approach in C1 and C2 does not learn the macroscopic dynamics. Our method explicitly learns the macroscopic dynamics parametrized by a neural network. When the closure to the macroscopic observables is difficult to learn, our method may not handle this case. However, the equation-free framework can still work by bypassing the derivation of the macroscopic evolution equations.
- In C1 and C2, during the simulation of macroscopic dynamics, microscopic simulation still needs to be performed in small spatial domains and for short times. In our method, once the neural network for parametrizing the macroscopic dynamics is trained, we can simulate the macroscopic dynamics directly in macroscopic space, without performing any microscopic simulation.
---
References:
[1] Gear, C. W., Hyman, J. M., Kevrekidis, P. G., Kevrekidis, I. G., Runborg, O., & Theodoropoulos, C. (2003). Equation-free, coarse-grained multiscale computation: Enabling microscopic simulators to perform system-level analysis.
Pdf: /pdf/f118c4254d28e122c676c44a5e4e9fb8d50887ea.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
IF-Font: Ideographic Description Sequence-Following Font Generation | Accept (poster) | Summary: This paper presents a method that generates Chinese glyphs using a token prediction approach. The core contribution is to leverage the concept of the Ideographic Description Sequence (IDS) and develop a network architecture that generates glyphs following the IDS representing the target character.
Strengths: The strengths of this paper are:
- The problem formulation using Ideographic Description Sequence is novel and interesting.
- The evaluation is thorough and the generated Chinese glyphs are convincing.
Weaknesses: The weaknesses of this paper are:
- The exposition is sometimes confusing. I would encourage the authors to explicitly show the output of the network before putting the output together as a glyph (I suppose the output sequence is composed of multiple tokens).
- I understand the proposed method aims to focus on generating Chinese characters, but it seems it cannot be generalized to other writing systems, including Roman characters?
Technical Quality: 3
Clarity: 3
Questions for Authors: My questions and concerns are:
- The exposition of the paper is quite unclear to me. In the paper, I did not find an explanation of what each token really represents. Is it a part of the glyph? The decoded results shown in the paper are always a completed glyph.
- It is unclear how to compose the predicted token into a completed glyph.
- The survey and discussion of ideographic representations in previous font manipulation work is not thorough; I recommend the authors expand it.
- e.g., Ariel Shamir, A. Rappoport, Compacting oriental fonts by optimizing parametric elements, The Visual Computer, 1999
- It is unclear what "IDS is not perfect enough to identify Chinese characters" in the limitation section means.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: - As mentioned above, I am curious how well the method can generalize to other writing systems? If it is hard, then I suggest the authors discuss this in the limitation section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Q1: what is the output of the network before putting the output together as a glyph? Is it composed of multiple tokens?
Yes, the output sequence consists of multiple tokens. Please refer to the right of Figure 1, where the small green squares around the Transformer Decoder are tokens (vector-quantized tokens), which are **the indexes of codes in the VQ-GAN codebook** (VQ-VAE or other vector quantization methods are also applicable).
Specifically, after encoding an image into a feature map, each feature vector is replaced by the closest code in the codebook. Since all vectors are selected from the codebook, we can use the corresponding indexes to replace them, yielding a 2D array of integers. This array is then flattened into a sequence where each integer represents a token.
### Q2: How to compose the predicted token into a completed glyph?
The token sequence is first restored to a 2D array and replaced by vectors from the codebook, resulting in a quantized feature map. This feature map is then **decoded by a VQ-GAN decoder** to restore the original image. Since our work focuses on modeling the tokens of glyphs, decoding the predicted tokens into a glyph falls beyond the paper's scope.
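The index lookup described in these two answers can be illustrated with a minimal numpy sketch (shapes and function names are ours; a real VQ-GAN learns the codebook and uses neural encoder/decoder networks):

```python
import numpy as np

def quantize_to_tokens(feature_map, codebook):
    """Replace each feature vector with the index of its nearest codebook
    entry and flatten to a token sequence.
    feature_map: (H, W, D); codebook: (K, D) -> tokens: (H*W,) ints."""
    flat = feature_map.reshape(-1, feature_map.shape[-1])
    dists = np.linalg.norm(flat[:, None, :] - codebook[None, :, :], axis=-1)
    return dists.argmin(axis=1)

def tokens_to_features(tokens, codebook, h, w):
    """Inverse lookup: the quantized feature map that a VQ-GAN decoder
    would then turn back into a glyph image."""
    return codebook[tokens].reshape(h, w, -1)
```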
### Q3: The survey and discussion about Ideographic in previous font manipulation is not thorough.
Thank you for providing a valuable reference, which we read carefully. It deals with the parametric representation of glyphs, allowing an elegant trade-off between glyph quality and the amount of compression. It seems to be more closely related to the VQ-GAN we used, as both try to compress glyph representations. In future work, using these parametric elements as an input/output format may be promising.
Following your suggestion, we will include relevant discussions and cite this reference in the final submission. Thanks for your kind response.
### Q4: what does "IDS is not perfect enough to identify Chinese characters" in the limitation section mean?
The shortcomings of IDS are reflected in two aspects:
1. **Rule conflicts**. A very small number of characters are too similar, and their stroke-granularity IDSs are identical.
2. **Insufficiently clear spatial descriptions**. For example, the left-right structure puts two components on the left and right, but the distance between them is not precisely specified, which requires the model to learn enough to distinguish.
### Q5: How well the method can generalize to other writing systems?
We focused on CJK characters because they have spatial structures, which better demonstrate our main contributions. We have not tested the method on other writing systems yet and are uncertain about its generalizability. Thank you for your valuable feedback; we will clearly state this in the manuscript.
---
Rebuttal Comment 1.1:
Comment: Thank you for your thorough response and I think most of my concerns are addressed. | Summary: This paper approaches the task of Few-shot Font Generation (FFG) for Chinese characters, proposing to model the target glyph with Ideographic Description Sequence (IDS) tokens to achieve style-content disentanglement. Reference font images and the target character’s IDS are fed into a VQGAN-based pipeline to decode the output image autoregressively as quantized codebook tokens. The results show that this method outperforms SOTA in one- and few-shot settings and for significantly differing font styles.
Strengths: The paper approaches an important problem and the idea of using IDS as input for FFG is clever and logical. The qualitative and quantitative results of the essential idea are convincing and the array of comparisons and attention to fair evaluation is appreciated. The presentation is mostly clear.
Weaknesses: The differences in most metrics when ablating components (Table 3) mostly seem quite small, compared to the larger differences in metrics when comparing to competing methods (Table 2). It’s not immediately convincing that the added complexity of these components (particularly IHA) are justified. On the other hand, the significant drop in FID when using I+S without C in Table 2 seems important and is not explained. If the use of these components gives a qualitative advantage that is not fully reflected in the metrics, this should be illustrated or included in the user study.
The design of the SSA (Sec 3.2) includes sub-components (global and local branches) which are not ablated. It would also help to indicate which is which in Figure 4. Claims about these controlling features such as stroke width, edges etc. (L134-147) could be justified. I also found Sec 4.4 hard to follow; how does Figure 7 show that SSA is effective? It’s not obvious where the attention should be focussed for positioning IDC’s or for those that don’t appear in the reference glyphs.
The method is tested on data collected by the authors. It is unclear if the proposed method would outperform the competing methods (Sec 4.2) if trained on datasets used in prior works.
Technical Quality: 2
Clarity: 3
Questions for Authors: Is this approach specific to CJK characters? If so, it seems like this should be mentioned in the title, abstract, and conclusion.
The use of “SCA” is confusing as in L248 it is mentioned as a “module”, but the acronym “SCA” does not appear before. Assuming this refers to “Style Contrast Augmentation” (Sec 3.3) this seems to be the contrastive learning loss term and not a module in the model architecture. I’m also not sure if “augmentation” is the right term since this is not performing data augmentation.
What does it mean on L111 that directly employing the character would be “impractical”? Isn’t this done in prior works? Additionally, L119 says that the long-tail distribution of IDC’s poses a challenge for training, but isn’t this shown in the first row of Table 3 with overall good results? (relative to the competing methods in Table 2)
Will the curated data and IDS table (L202) be made available?
The paper mentions a user study (L208 etc.). Who were the participants, were they paid (L236 mentions “volunteers”), and does this require IRB approval?
Typos: L19-20 incomplete sentence, L199 L131-136 L217 L311 L325-326 grammar, L139 L176 L250 formatting issues, L497 missing period.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 4
Limitations: Limitations and societal implications are adequately discussed.
Flag For Ethics Review: ['Ethics review needed: Research involving human subjects']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Q1: The differences in most metrics seem quite small
On one hand, we attribute this to **the advantages brought by the new paradigm**. The usage of IDSs and the VQ-token-based decoder already achieves good results, making our baseline strong.
On the other hand, this is due to **the marginal effect of the metrics**. Because the baseline's metrics are already quite strong, the improvement brought by the modules appears relatively small in terms of metrics.
We show the visualization in Figure 6, and these changes may not be important for all samples, but they are crucial for certain glyphs and effectively improve generation quality.
### Q2: The significant drop in FID when using I+S without C
Because FID has limitations and does not fully align with human perception. We provide the corresponding visualization in Figure 6, which shows that our module can effectively improve the quality of the glyphs.
### Q3: The sub-components of the SSA are not ablated
Thank you for your insightful comments and observations regarding our model. We have added below the ablated results for the two branches of SSA module:
| SSA (3shot, ufuc) | FID↓ | L1↓ | LPIPS↓ | RMSE↓ | SSIM↑ |
|------------|---|---|---|---|---|
| wo/ global | **7.9766** | 0.1607 | 0.1347 | 0.3789 | 0.4775 |
| wo/ local | 8.2578 | 0.1620 | 0.1364 | 0.3805 | 0.4756 |
| full | 8.4922 | **0.1597** | **0.1338** | **0.3775** | **0.4782** |
Figure R2 (in the rebuttal PDF) is our modified version of Figure 4, with the two branches highlighted. If you have any further suggestions, please feel free to continue discussing with us.
### Q4: How does Figure 7 show that SSA is effective?
Figure 7 shows the attention maps for the local branch in SSA. When target IDCs or components are present in the reference glyphs, more attention is paid to the corresponding areas. For example, the first, second, and fourth attention maps in the first row and the third and fifth attention maps in the second row are highlighted accordingly.
When a reference glyph does not contain any target elements, the local branch tends to pay less attention to it, which is why the third row appears almost blank.
Rather than forcing attention allocation to introduce interference, we think it is better to adopt the overall features extracted by the global branch.
### Q5: Why not test methods on datasets used in prior works?
This is a very reasonable concern. However, in the font generation field, due to **copyright protection**, there is no widely recognized open-source dataset. The current mainstream methods, such as FontDiffuser (AAAI 2024), CF-Font (CVPR 2023), VQ-Font (ICCV 2023), and LF-Font (TPAMI 2022), all use private datasets.
Our dataset is large enough and covers a variety of fonts and most commonly used Chinese characters. We believe that the comparison results are relatively fair. If we used other datasets, we are confident that the proposed method would still outperform the competing methods.
### Q6: Is this approach specific to CJK characters?
Our method is not specific to CJK characters. We focus on CJK characters because of their spatial structures, which can better reflect the characteristics of our method. If we expand the vocabulary and train with the corresponding data, IF-Font can also be applied to other characters.
### Q7: The use of “SCA” is confusing
Sorry for the confusion: we define SCA as a module because, in addition to the loss term, it contains network layers used to extract features. "SCA" refers to "Style Contrast Augmentation"; we apologize for not introducing the acronym in Section 3.3, and thank you for pointing it out.
The term "augmentation" might not be ideal; we intended to emphasize the improvement brought by the contrastive loss. Perhaps "enhancement" is a better choice? We will continue to refine the wording and revise the draft; any further discussion is welcome.
### Q8: Why is it “impractical” to directly employ the character
Some early font generation works have indeed used characters as input directly, but this approach is only suitable for those with a small vocabulary table, such as the Latin alphabet.
In this paper, we focus on CJK characters. Due to their large number, using them directly as input would require an **enormous vocabulary table**. At the same time, new characters are constantly being added, and low-frequency characters may lack font support, leading to **insufficient training data**. Directly employing the characters therefore makes it difficult for the model to generalize.
### Q9: Why is the problem caused by long-tail distribution not obvious in metrics?
Most Chinese characters have left-right or top-bottom structures, so the improvements made by the IHA module are not significant in terms of metrics. We demonstrate this in the fourth column (the column "+I") of Figure 6.
### Q10: Will the curated data and IDS table be made available?
Sure, we will release these data to ensure that our model can be reproduced.
### Q11: Who were the participants in user study, were they paid (L236 mentions “volunteers”), and does this require IRB approval?
Our user study was conducted through the Fuxi Youling Crowdsourcing Platform[1]. We paid the platform to publish a questionnaire with a limited number of 30 slots. After data collection was completed, we received anonymous responses from the platform. Therefore, the term "volunteers" might not be accurate; perhaps "platform users" or "participants" would be more suitable?
The platform users fulfill tasks to earn points, which can be redeemed for money. Since we collaborated with a third-party platform, IRB approval is not required.
[1] https://fuxi.163.com/solution/data
### Q12: Typos
Thank you for your detailed reading of our manuscript. We will correct these typos in our final version.
---
Rebuttal Comment 1.1:
Comment: Thank you for your thorough and lucid response. I believe this addresses my concerns so I have updated my rating accordingly. I encourage the authors to incorporate these findings and discussion into the final version of paper.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate the reviewer's insightful suggestions and the increased rating. We will incorporate these findings and discussion into the revised paper. | Summary: This paper proposed IF-Font handles the task of few-shot font generation via a VQ-GAN based framework. Compared to most existing methods that encode content images, IF-Font only encodes Ideographic Description Sequence (IDS) to convey content information of target characters. Experimental results show the proposed method excels in synthesizing glyphs with neat and correct strokes, and enables the creation of new glyphs based on provided IDS.
Strengths: (1) The presentation is good and this paper is easy to follow.
(2) The proposed method can generate visually pleasing glyph images, from Figure 5.
(3) It is an interesting idea to ONLY encode IDS to model the shape of target characters. However, I have concerns about whether this idea is fully substantiated (see Weaknesses).
Weaknesses: (1) The idea of utilizing component/stroke information of glyphs has already been exploited in many existing papers, such as LF-Font and XMP-Font. What is the key difference that distinguishes IF-Font from those methods?
(2) If the key difference is only encoding IDS to convey content information, I wonder if there is an ablation study verifying that this is a better design than encoding both modalities (i.e., component sequence and glyph images) as in XMP-Font? The current Section 4.4 (Ablation Studies) is a bit unclear to me.
(3) The authors mentioned that they construct multiple equivalent IDSs for the same character through random selection. Does it mean during training, a random IDS is selected to feed the IDS encoder if there are multiple equivalents? Can the IDS encoder resolve the ambiguities in representing the shape of a character? My guess is this is somewhat similar to many-to-one mapping so it can be done. Correct me if I am wrong.
Technical Quality: 3
Clarity: 3
Questions for Authors: What is the detailed structure of IDS-Encoder? Is it a Transformer to encode a sequence?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors discussed the limitations in Section 4.6. Regarding the first limitation (fancy and irregular font styles), I believe the discussion could be expanded further. One key reason might be that novel font styles do not always adhere to the typical topology of characters. For example, the second-to-last font in Figure 5 deviates from the norm, challenging the necessity of using IDS (or pre-defined component sequences). I would like to hear the authors' opinions on this.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Q1: The key difference between IF-Font and other methods that utilize component/stroke information
We believe the key difference lies in **whether style-content disentangling is performed**. Previous methods use component/stroke information but still rely on separating and combining the corresponding content and style features to generate glyphs. Our method uses IDS as a condition to directly generate characters, rather than morphing a content glyph.
### Q2: Does the key difference lie in only encoding IDS?
Perhaps the reviewer wonders why our method outperforms previous methods with only one input modality. As mentioned above, **the component/stroke information used in previous methods is not equivalent to IDS**. Hence, it is difficult to attribute the key difference to whether only IDS is used.
We are happy to include an ablation study that uses both modalities to address the reviewer's concerns. However, adding the new modality leads to a substantial increase in training costs. We respectfully ask the reviewer's permission to present the experimental results in the subsequent discussion period.
### Q3: Multiple equivalent IDSs for a single character
We apologize that our descriptions were not clear and caused confusion. As shown in Figure 2, an input character is first broken down into a single IDS, which is then fed into the IDS encoder during training. The IDS encoder analyzes the input IDS using certain rules, and if an IDS has multiple equivalents, it randomly selects one for encoding.
Strictly speaking, the IDS encoder is not responsible for resolving ambiguities. Since IDS is similar to text, there are cases where different descriptions refer to the same object. **We designed the IDS encoder to present as many diverse inputs as possible to the decoder**, in order to prevent overfitting. However, from the perspective of the decoder, this is indeed similar to a many-to-one mapping, as it needs to generate tokens of the same character for these equivalent IDSs.
### Q4: The detailed structure of IDS Encoder
The IDS encoder does not contain a Transformer; in fact, its learnable parameters are limited to a simple embedding layer. We apologize for any confusion that may have arisen from the name "encoder".
The IDS encoder does not perform complex encoding because **we want to preserve the independence of each element within an IDS**. This also allows the decoder to maintain fine-grained attention on each position of the input sequence.
Specifically, the IDS encoder consists of three parts: decomposition, equivalent construction, and embedding.
First, it recursively breaks down the input IDS into one IDC and the corresponding components at each level to determine whether there is an equivalent. Next, it produces all the equivalents according to certain rules. Finally, it pads to the maximum length and passes the sequence through an embedding layer to obtain the final features.
### Q5: More about the first limitation (fancy and irregular font styles)
Thank you for your thorough review of our work, and we are happy to discuss and clarify this issue further. We believe that fancy and irregular fonts are difficult to generate due to several reasons:
1. As pointed out by the reviewer, one reason is that **novel fonts have a great topological difference from typical ones**, making it challenging for models to learn.
2. **The subjective and random nature of font design**. For example, in Figure 9's first font, the "auspicious clouds" decoration appears on every glyph, but its position, proportion, and shape are carefully designed, making it difficult for the model to grasp the regular pattern.
3. **Limited reference samples**. More complex styles often require more reference samples to imitate, which requires a trade-off between reference quantity and generation quality.
Although we only show the results of our method in Figure 9, please note that **this limitation is common to all font generation methods**. LF-Font [1] has discussed this issue, and CF-Font [2] aims to reduce the difference between content and target styles through content fusion. Since the font in Figure 9 is too novel and lacks generality, we did not include it in the comparison in Figure 5.
We include all methods' results on those novel fonts in Figure R1 (in the rebuttal PDF) to support our conclusions. Figure R1 shows that methods based on disentanglement are limited by the content font and generate worse results when the content style is far from the target style. Unfortunately, this limitation has not been well addressed yet; we hope to leave it for future work.
[1] Park, S., Chun, S., Cha, J., Lee, B., & Shim, H. (2022). Few-shot Font Generation with Weakly Supervised Localized Representations. IEEE Transactions on Pattern Analysis and Machine Intelligence. https://doi.org/10.1109/TPAMI.2022.3196675
[2] Wang, C., Zhou, M., Ge, T., Jiang, Y., Bao, H., & Xu, W. (2023). CF-Font: Content Fusion for Few-shot Font Generation. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
---
Rebuttal Comment 1.1:
Title: Result of ablation study requested in Q2
Comment: Dear reviewer, we added the content glyph as an input in addition to the IDS. The experimental results are listed below:
| UFUC&3shot | FID↓ | L1↓ | LPIPS↓ | RMSE↓ | SSIM↑ |
|------------|---------|---|---|---|---|
| Both Modalities | **8.2603** | **0.1576** | **0.1316** | **0.3744** | **0.4830** |
| Only IDS | 8.4922 | 0.1597 | 0.1338 | 0.3775 | 0.4782 |
As expected, the additional information and parameters bring about an improvement in performance. However, please note:
1. This improvement is not significant compared to the increase in training costs (double the previous amount).
2. The style of the content font will affect the generated results to some extent.
Unfortunately, it appears that image uploads are not allowed during the author-reviewer discussion period, so we are unable to share the visualization. | Summary: IF-Font introduces a novel approach to few-shot font generation by using Ideographic Description Sequence (IDS) instead of traditional source glyphs to control the semantics of generated glyphs. This method quantizes reference glyphs into tokens and models the token distribution of target glyphs using IDS and reference tokens. IF-Font effectively synthesizes glyphs with neat and accurate strokes, significantly outperforming existing methods in both one-shot and few-shot settings, particularly when the target styles differ from the training font styles. The method redefines font generation as a sequence prediction task, enhancing the quality and consistency of the generated glyphs.
Strengths: 1. Novel Paradigm: IF-Font introduces a new approach by using IDS to control glyph semantics, eliminating the need for content-style disentanglement and reducing artifacts.
2. High-Quality Generation: The method excels in producing glyphs with neat and correct strokes, maintaining consistent style even with limited reference glyphs.
3. Cross-Linguistic Capabilities: IF-Font allows for the creation of new and non-existing Chinese characters, demonstrating flexibility and adaptability across different linguistic structures.
Weaknesses: 1. The author proposes a content-style disentanglement method where style extraction relies on the decomposition of glyphs. However, this setup may not be necessary. In the fields of diffusion models and generative AI, there are many methods that can extract styles from a single image with minimal content interference, such as IP-Adapter and Instant-Style. The work FontDiffuser, which generates text using diffusion models, does not decompose the text but still achieves excellent style and content disentanglement. Therefore, I have doubts about the advancement and necessity of the method proposed in this paper.
2. The compared methods are relatively old and do not include comparisons with the most advanced methods such as FontDiffuser.
3. Since the target glyph is reconstructed from a quantized sequence, the accuracy of this reconstruction sets a ceiling on IF-Font's potential performance.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Is there any cherry-picking of results?
2. What is the real usability rate of the proposed method?
3. Besides inference speed, what advantages does the VQ-GAN method have compared to diffusion-based methods?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: see weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Q1: The advancement and necessity of our method
Font generation differs from general image generation tasks. **Our contribution lies in finding a way to describe ideographic characters as "text" and successfully applying it to font generation**, solving the problem of low-quality generation in previous methods.
IP-Adapter and Instant-Style are diffusion-based text-to-image methods. They use text to control the generation and aim to generate results that strictly follow the text prompt while having the style of reference images. However, their target is much more complex than glyphs, and the corresponding information can usually be described in natural language. Their generation conditions are relatively loose, and there are multiple reasonable results, but with the problem of content leakage.
The content of a glyph is just a character, which is difficult to describe with text, so prior works use a glyph image as the content input. Since the glyph used as the content input also has a style, there is a problem of style leakage.
In summary, these two fields have some intersection but are not identical, and the problems they face are different. We believe that our method has outstanding advantages and contributions in font generation.
### Q2: The comparison with FontDiffuser
We have included comparisons with FontDiffuser, which show that although FontDiffuser performs well, our method still has an advantage:
| Model(ufsc) | FID↓ | L1↓ | LPIPS↓ | RMSE↓ | SSIM↑ |
|------------|---------|---|---|---|---|
| FontDiffuser (1shot) | 3.9969 | 0.1938 | 0.1371 | 0.4180 | 0.4076 |
| Ours (1shot) | 6.7695 | 0.1529 | 0.1307 | 0.3688 | 0.4915 |
| FontDiffuser (3shot) | **3.6979** | 0.1774 | 0.1248 | 0.3980 | 0.4370 |
| Ours (3shot) | 6.8359 | 0.1478 | 0.1258 | 0.3620 | 0.5021 |
| FontDiffuser (8shot) | 4.1017 | 0.1748 | 0.1234 | 0.3947 | 0.4420 |
| Ours (8shot) | 6.7383 | **0.1429** | **0.1216** | **0.3552** | **0.5140** |
| Model(ufuc) | FID↓ | L1↓ | LPIPS↓ | RMSE↓ | SSIM↑ |
|------------|---------|---|---|---|---|
| FontDiffuser (1shot) | 8.2524 | 0.1914 | 0.1527 | 0.4157 | 0.4163 |
| Ours (1shot) | 8.4844 | 0.1651 | 0.1387 | 0.3845 | 0.4676 |
| FontDiffuser (3shot) | **7.6444** | 0.1771 | 0.1413 | 0.3981 | 0.4418 |
| Ours (3shot) | 8.4922 | 0.1597 | 0.1338 | 0.3775 | 0.4782 |
| FontDiffuser (8shot) | 8.9166 | 0.1702 | 0.1367 | 0.3890 | 0.4543 |
| Ours (8shot) | 8.3203 | **0.1561** | **0.1305** | **0.3728** | **0.4864** |
### Q3: Performance ceiling due to quantization
Yes, our performance is limited by the reconstruction accuracy of VQ-GAN. However, glyph images are binary and relatively simple, so fine-tuning VQ-GAN can minimize the precision loss.
We use the original VQ-GAN checkpoints without fine-tuning in order to highlight our main contributions.
### Q4: Cherry-Picking & Real Usability Rate
In the paper, we selected representative samples to demonstrate our method's distinguishing features. In fact, almost all generated samples are of high quality, except for those shown in the failure cases section (Section 4.6).
### Q5: The advantages of the VQ-GAN method compared to diffusion-based methods
1. **Conforms to writing habits**: Using IDS and quantized tokens together in auto-regressive modeling implicitly incorporates writing order.
2. **Scalability**: The VQ-GAN method has fewer parameters and is easier to scale. In addition, it has advantages in training speed and memory usage, making it suitable for deployment on terminal devices.
3. **Robustness**: Due to vector quantization, characters are represented by a limited number of tokens (only 256 types), which reduces the difficulty of modeling for the decoder and makes it less likely to produce artifacts.
As is well known, diffusion models are excellent, but the focus of this paper is not on a specific model. Indeed, our key idea is model-agnostic and can also be applied to diffusion or other types of networks.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. Given the discussion regarding the paper's motivation and novelty, I am glad to raise my score to 6. Nevertheless, I hope this work will cite the following papers.
1. CLIPFont: Text Guided Vector WordArt Generation
2. FontDiffuser: One-Shot Font Generation via Denoising Diffusion with Multi-Scale Content Aggregation and Style Contrastive Learning
---
Reply to Comment 1.1.1:
Comment: Thank you for acknowledging our work. We would be glad to cite the two papers in Section 2 (Related Works) of the final version.
Just a gentle reminder: We have noticed that the score has not been updated yet (the reviewer mentioned that it would be raised to 6), which may be due to forgetting to save. :D | Rebuttal 1:
Rebuttal: We did our best to address the questions in the time allowed. We believe the comments and revisions have made the paper stronger, and we thank all the reviewers for their help. Please find individual responses to your questions below. The PDF file for the figures is attached to this general response.
Pdf: /pdf/a26874cdb15a42a5376ce233073873c1a08ce726.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Transformer Doctor: Diagnosing and Treating Vision Transformers | Accept (poster) | Summary: This paper proposes a vision transformer diagnosing and treating framework, namely Transformer Doctor, to reveal the problems that negatively impact network performance and fix them. The paper first proposes the information integration hypothesis, which argues that transformers refine and integrate information at the lower and higher layers, respectively. Based on this hypothesis, inter-token dynamic information integration and intra-token static information integration mechanisms are designed to reveal the inner mechanisms of ViTs and help to treat them.
Strengths: 1. This paper designs a possible way to explain why a transformer cannot work well and try to fix the potential problems to improve network performance.
2. The information integration hypothesis is proposed, and two situations (self-attention and fully-connected layers) are analyzed.
3. Based on the information integration hypothesis, a transformer diagnosis and treatment method is proposed.
4. The experimental results on several databases can support the proposed method.
Weaknesses: 1. The evidence for the correctness of the information integration hypothesis is not strong enough; it would be better to consider proving the hypothesis in a mathematical way.
Technical Quality: 4
Clarity: 3
Questions for Authors: N.A.
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: The authors consider the limitations of the method in the part of Conclusion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 9
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive feedback on this work. We are pleased that you highlighted the core of our research, which is designing a potential method to explain why Transformers may not perform well and attempting to address these issues to improve network performance.
We acknowledge that there are still some imperfections in our manuscript, but we have actively addressed these issues with the aim of improving the work. Below are our responses to your main comments (each of your comments is highlighted in italics).
> Q1: *The evidence for the correctness of the information integration hypothesis is not strong enough; it would be better to consider proving the hypothesis in a mathematical way.*
>
Thank you for your review and valuable feedback. We understand your concern about the mathematical proof of the information integration hypothesis. However, our study is primarily empirical, aiming to validate the hypothesis through experimental evidence rather than mathematical derivation.
Specifically, the motivation for this work stems from error mechanisms observed in biological vision systems. We chose an empirical approach to explore the practical effects and applications of the information integration hypothesis, which is a commonly used and accepted method in related research fields. In validating the information integration hypothesis, we conducted extensive experiments and data analyses, including numerous qualitative analyses (as detailed in Sections 4.1 and 4.2, Appendix B, and Appendix D) and thorough quantitative analyses (as shown in Appendix A, Appendix C, etc.). Additionally, in the error treatment based on the information integration hypothesis, we performed extensive qualitative analyses (in Section 6.3, Appendix H, Appendix I, and Appendix J) and substantial quantitative analyses (in Section 6.2, Appendix G, Appendix K, and Appendix L). These experiments provide substantial evidence for the effectiveness of the information integration hypothesis.
Nevertheless, we highly value your suggestion and will emphasize in the discussion section of the paper why we chose an empirical approach and explain its advantages and limitations relative to mathematical proof. Additionally, we will attempt to supplement the theoretical background and relevant mathematical models to further support our hypothesis. We hope this response addresses your concerns.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I would like to keep my original score unchanged.
---
Reply to Comment 1.1.1:
Comment: Thank you for recognizing the value of our work. We also greatly appreciate your thorough review and insightful comments. | Summary: Inspired by the information integration mechanisms and conjunction errors in the biological visual system, this paper investigates the error mechanisms within Transformers. Through a comprehensive analysis and experimental validation of the computational processes of the two core modules of Transformers, MHSA and FFN, the authors introduce the Transformer Doctor framework. The principal components of this framework are as follows:
1. Diagnosis: The paper identifies dynamic information integration among tokens within MHSA and static information integration within tokens in FFN, as well as conjunction errors occurring during the integration process.
2. Treatment: To mitigate the conjunction errors identified in MHSA and FFN, the paper proposes heuristic dynamic information integration constraints for MHSA and rule-based static information integration constraints for FFN.
To validate the efficacy of Transformer Doctor, the authors performed extensive quantitative and qualitative analyses on various Vision Transformer architectures. The findings indicate that Transformer Doctor effectively reduces conjunction errors in Transformers and enhances overall performance.
Strengths: 1. The investigation into the error mechanisms of Vision Transformers presented in this paper is uniquely motivated. The authors provide a novel perspective by drawing an insightful connection between error mechanisms in biological vision and machine vision. The paper conducts in-depth analyses of both MHSA and FFN, uncovering intriguing phenomena related to dynamic and static information integration, respectively.
2. The paper offers a comprehensive and detailed review of existing interpretability methods. The proposed approaches for diagnosing and treating Transformer errors are rigorous and persuasive. Extensive experiments, featuring abundant quantitative results and qualitative visual analyses, convincingly demonstrate that the proposed methods effectively mitigate errors and enhance model performance.
3. The structure of the paper is well-organized, clearly articulated, and highly accessible. Each section is meticulously arranged, providing a systematic and comprehensive flow from background and motivation to methods and experimental results.
4. This paper's comprehensive investigation into the error mechanisms of Vision Transformers addresses a relatively underexplored area in the existing literature. Drawing inspiration from error mechanisms in biological visual systems, the proposed methodologies for diagnosing and rectifying Transformer errors provide substantial insights. The elucidated integration mechanisms and identified error phenomena within Transformers are not only of considerable interest but also highly enlightening for the field.
Weaknesses: 1. The visualization from vertical lines to diagonals in the attention matrices presented in Fig. 2(a,c) lacks sufficient clarity. This visualization is essential for validating the authors' information integration hypothesis in MHSA. It is recommended that the authors refer to similar visualization results in studies such as [1].
2. In Table 1, the proposed method does not demonstrate significant improvements on certain datasets, particularly on the CIFAR-10 dataset. Regarding this phenomenon, the authors need to provide a detailed analysis.
3. In the treatment of dynamic information integration in MHSA (Eqn. (6)), it is noted that not all datasets contain annotated foregrounds. The necessity for manual foreground annotation presents a significant challenge to the practical implementation of this method.
4. Minor issues: \
a. This paper lacks a detailed introduction to Transformers. The authors should provide a conceptual overview and offer clearer explanations of certain variables in the equations. For instance, the meanings of $(a_{ij})$ in Eqn. (2) and $(z_{im})$ in Eqn. (4) should be explicitly stated.
b. If Eqn. (4) is correct, the symbol labeled as $(X)$ in Fig. 1 should be $(Y)$.
5. In Eqn. (8), the authors constrain the static information integration in FFN by specifying aggregation rules. However, it is worth considering whether the number of samples used to establish these rules could also impact the final results.
6. Lack of explanation why the performance improvements under other computational forms are not as significant in Table 2.
[1] Trockman, Asher, and J. Zico Kolter. "Mimetic initialization of self-attention layers." International Conference on Machine Learning. PMLR, 2023.
Technical Quality: 4
Clarity: 4
Questions for Authors: See weakness
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The paper has already discussed its limitations and potential impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and comments. We are pleased that you find our work combining Transformers with biological vision error mechanisms to be novel and that you find our methods and findings insightful and inspiring. Below are our responses to each of your comments (each of your comments is highlighted in italics).
> Q1: *The visualization from vertical lines to diagonals in the attention matrices presented in Fig. 2(a,c) lacks sufficient clarity. This visualization is essential for validating the authors' information integration hypothesis in MHSA. It is recommended that the authors refer to similar visualization results in studies such as [1].*
>
Thank you for pointing this out. The lack of clarity in Figures 2(a) and 2(c) is due to compression artifacts in the PDF version of the paper, which diminished the visibility of the diagonal and vertical lines. As suggested, similar observations can be found in the study you referenced [1]. Additionally, similar visualizations are presented in [2], but their focus is on parameter initialization or architectural design, which differs from our study on Transformer error mechanisms.
[1] Trockman, Asher, and J. Zico Kolter. "Mimetic initialization of self-attention layers." International Conference on Machine Learning. PMLR, 2023.
[2] Chang, Shuning, et al. "Making vision transformers efficient from a token sparsification view." *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*. 2023.
> Q2: *In Table 1, the proposed method does not demonstrate significant improvements on certain datasets, particularly on the CIFAR-10 dataset. Regarding this phenomenon, the authors need to provide a detailed analysis.*
>
Indeed. It is important to note that Transformer Doctor shows less improvement on smaller datasets such as CIFAR-10 compared to larger datasets. There are two potential reasons for this phenomenon. First, for smaller datasets, errors in Transformer recognition are not solely due to conjunction errors; they may also arise from insufficient feature extraction at early network stages (lines 283-289 of the paper). Second, during the treatment of Transformers, the small image size of CIFAR-10 limits the effectiveness of dynamic aggregation constraints. Consequently, we only employed static aggregation constraints for treatment. In contrast, larger datasets benefit from both dynamic and static aggregation constraints, resulting in better performance. More detailed results are provided in Table 3 of Appendix G.
> Q3: *In the treatment of dynamic information integration in MHSA (Eqn. (6)), it is noted that not all datasets contain annotated foregrounds. The necessity for manual foreground annotation presents a significant challenge to the practical implementation of this method.*
>
While we acknowledge that not all datasets come with annotated foregrounds, it is important to note that the amount of foreground annotation required for the dynamic information integration treatment is quite minimal. For instance, in our experiments using ImageNet-S, only 9,190 of the 1,200,000 training images have foreground annotations. Despite this small proportion, excellent results were achieved.
Moreover, our further experiments indicate that selectively annotating low-confidence images in the training set is more beneficial compared to randomly annotating some images. Therefore, in practical applications, one can selectively annotate a very small number of low-confidence images (e.g., 5-10 per class) in the training set to perform the Transformer treatment effectively.
> Q4: *This paper lacks a detailed introduction to Transformers. The authors should provide a conceptual overview and offer clearer explanations of certain variables in the equations. For instance, the meanings of ($a_{ij}$) in Eqn.(2) and ($z_{im}$) in Eqn.(4) should be explicitly stated.*
>
Thank you for your suggestion. We provide an explanation of $\mathbf{a}$ on line 129 of the paper, where $\mathbf{a}\_{ij}$ denotes the weight associated with the $j$-th token when integrating information to form the $i$-th token in inter-token information integration. Additionally, $\mathbf{z}$ is explained on line 154 of the paper, where $\mathbf{z}\_{im}$ represents the weight corresponding to the $m$-th dimension when integrating information to form the $i$-th token in intra-token information integration. We will include these clarifications in the Methods section of the revised manuscript.
> Q5: *If Eqn.(4) is correct, the symbol labeled as $X$ in Fig.1 should be $Y$.*
>
We apologize for this error. We have corrected the label and thoroughly reviewed the entire manuscript to ensure that all symbols are accurate.
> Q6: *Lack of explanation why the performance improvements under other computational forms are not as significant in Table 2.*
>
Thank you for your question. In Table 2, the computational forms that show the best performance improvements all involve gradient-based methods. As described on lines 223-225 of the paper, gradients help differentiate the importance of each head in MHSA, leading to more accurate multi-head integration weights $\hat{a}$. Other computational methods, such as using the minimum, maximum, or average across all heads, result in less accurate multi-head integration weights $a^h$ because they include weights from less important heads. Consequently, these methods do not achieve as good results when used for dynamic integration constraints.
Similarly, as noted on lines 238-240 of the paper, gradients establish a connection between integration weights $z$ and specific classes. This results in more precise integration weights $\hat{z}$ for each class and avoids the issue of constraining redundant dimensions during static integration constraints. Thus, gradient-based methods yield better performance improvements. We will include this explanation in the revised version of the paper.
---
Rebuttal 2:
Comment: Thank you for your reply; my concerns have been resolved, so I vote for acceptance.
---
Rebuttal Comment 2.1:
Comment: Thank you for your valuable review and suggestions. We are pleased to hear that your concerns have been addressed. | Summary: This study introduces a framework, namely Transformer Doctor, to reduce internal errors, e.g., conjunction errors, in a general vision transformer model. Building upon the information integration hypothesis, the proposed method applies several constraints, including heuristic dynamic constraints and rule-driven static constraints, to enhance information integration at higher layers. Experiments are conducted on five classification datasets using seven small-scale vision transformer models.
Strengths: -- This study investigates the topic of improving vision transformers, which seems interesting and of practical importance.
-- The proposed method is inspired by solutions in biological vision. The solution might be reasonable.
Weaknesses: -- This is a purely empirical paper with no theoretical results. However, the experimental results are insufficient to fully convince the reviewer.
* The vanilla transformer results reported in Table 1 do not align with their original paper. e.g. the DeiT-Tiny is reported at 72.2% on ImageNet, which is even higher than the improved results (66.8% $\to$ 70.6%).
* This study is only evaluated on small-scale vision transformers, with no results presented for large-scale versions.
* The evaluation is only conducted on 10 samples per class (Appendix F). The sample size seems insufficient.
* Table 1 can be further improved by presenting results with both the vanilla and blank training.
-- The reviewer is unclear about the potential mechanism/logic/reason of the proposed method on the following questions
* why the proposed method can reduce the conjunction error
* To what extent (fully/partially) can the proposed method reduce the error
* Besides the information integration hypothesis, do you rely on other assumptions?
* Is the information integration hypothesis valid in vision transformers, especially for large-scale versions?
The above questions might be potentially solved by adding theoretical analysis or more detailed explanations.
-- This paper has many typos, even in the result presentation (e.g. Table 1 BeiT/Imagenet-1K cell).
Technical Quality: 2
Clarity: 2
Questions for Authors: See above weakness.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: As I can see, this paper has no potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your diligent review and comments. We are pleased that you find this research on improving Vision Transformer both interesting and practically valuable. We are also glad that you consider our biologically inspired approach to be reasonable. Below are our responses to each of your comments (each of your comments is highlighted in italics).
> Q1: *The vanilla transformer results reported in Table 1 do not align with their original paper. e.g. the DeiT-Tiny is reported at 72.2% on ImageNet, which is even higher than the improved results (66.8% → 70.6%).*
>
We apologize for any confusion caused. To achieve the best possible performance for models such as ViT and TNT within our limited computational resources, we adhered to the following detailed experimental settings:
We used data augmentation methods including Auto Contrast, Equalize, Invert, Rotate, Posterize Increasing, Solarize Increasing, and Solarize Add. The AdamW optimizer was employed with a momentum setting of 0.9, weight decay of 0.05, and epsilon of 1e-8. The learning rate scheduler used was CosineLRScheduler with an initial learning rate of 1e-2, a minimum learning rate of 1e-5, no warmup steps, a cycle limit of 1, and T_max set to 300. The training batch size was 256, and the models were trained for 300 epochs.
These settings differ from the commonly used configurations for DeiT, such as a total batch size of 1024 and an initial learning rate of 1e-3. However, it is important to emphasize that this does not impact the fairness of exploring the internal error mechanisms of the models or addressing their errors. During the experimental phase, the comparison settings in the paper are sufficiently fair; except for the number of training epochs for the treated models being fewer, the experimental settings for the treated and baseline models are identical. The experimental results demonstrate that the proposed method has substantial potential for correcting model errors and enhancing performance (as shown in Table 3 in the appendix). We understand your concerns and will add a more detailed description and further experiments in the paper to clarify this matter. We sincerely hope this response addresses your concerns.
> Q2: *This study is only evaluated on small-scale vision transformers, with no results presented for large-scale versions.*
>
Thank you for your comment. Transformer Doctor is indeed applicable to large-scale Vision Transformers. To demonstrate this, we have additionally conducted preliminary experiments using ViT-Large on ImageNet-10. The results are as follows:
Base: 61.30; +Blank: 61.50; +IDI: 62.80; +ISI: 62.50; +IDI, +ISI: 62.90.
These results indicate that Transformer Doctor remains effective for large-scale ViTs. The reason is that for any Vision Transformer architecture, such as ViT, the fundamental structure is similar across different sizes, with variations primarily in the number of blocks, heads, and dimensionality of hidden features. Each block consists of MHSA and FFN components. The information integration hypothesis upon which Transformer Doctor is based comprehensively covers both MHSA and FFN (Sections 3.1 and 3.2). Thus, the size of the model does not affect the fundamental efficacy of Transformer Doctor in enhancing model performance.
> Q3: *The evaluation is only conducted on 10 samples per class (Appendix F). The sample size seems insufficient.*
>
Thank you for your comment. In fact, the amount of data used for evaluation is more than sufficient and far exceeds 10 samples per class. The "10 samples per class" mentioned refers not to the evaluation data but to the dynamic information integration constraints during training. This indicates that our proposed dynamic integration method requires only a very small amount of annotated foreground masks per class to achieve good results.
Of course, having more foreground annotations during training can provide more information for dynamic integration constraints, thereby further improving model performance. However, considering practical applications, we opted to use a minimal amount of foreground annotations in our experiments to demonstrate the practicality of our proposed method.
> Q4: *Table 1 can be further improved by presenting results with both the vanilla and blank training.*
>
Thank you for your valuable suggestion. Due to space constraints in the main text, detailed results for both vanilla and blank training have been presented in Table 3 of Appendix G in our original manuscript. As observed, blank training shows almost no improvement over vanilla training, indicating that the performance gains are largely attributed to the proposed method. We will make an effort to include these results in Table 1 of the main text if space permits.
---
Rebuttal 2:
Title: Rebuttal by Authors [Q5-Q9]
Comment: > Q5: *Why the proposed method can reduce the conjunction error*
>
Thanks. Taking the conjunction error in MHSA as an example, in the advanced stages of Transformer’s MHSA, the integration weights $\mathbf{a}$ dynamically integrate specific foreground information between tokens for high-confidence samples, while they incorrectly integrate background-related information for low-confidence samples (lines 175-182 of the paper). During the treatment phase, our heuristic dynamic information integration method constrains the integration weights $\mathbf{a}$ through the loss function to encourage integration of foreground token information (lines 231-234 of the paper).
Importantly, we introduced gradients to differentiate the importance of each head in MHSA, obtaining the integration weights $\hat{\mathbf{a}}$ in the multi-head scenario and constraining them (lines 223-227 of the paper). We then update the MHSA parameters through backpropagation of the loss function’s gradient, which helps MHSA produce more accurate integration weights $\mathbf{a}$ and reduces the occurrence of erroneous background information integration.
Similarly, for the conjunction errors in the inter-token information static integration within FFN, we apply constraints to the integration weights $\mathbf{z}$ using the loss function, and update FFN parameters through gradient backpropagation. This process helps FFN generate more accurate integration weights $\mathbf{z}$ and reduces the occurrence of conjunction errors (lines 244-247 of the paper).
We hope these explanations address your concerns.
> Q6: *To what extent (fully/partially) can the proposed method reduce the error*
>
This is an insightful question. The extent to which conjunction errors are reduced can be assessed through the decrease in the loss functions mentioned, specifically $\mathcal{L}\_{IDI}$ in Equation (6) and $\mathcal{L}\_{ISI}$ in Equation (9). In the experiments presented in the paper, both loss functions show a significant downward trend and ultimately converge, indicating that the conjunction errors are reduced to the maximum extent possible based on these loss functions.
However, depending on the dataset size and the choice of various hyperparameters, the loss functions do not decrease to zero but rather stabilize at certain values. Therefore, while conjunction errors are reduced significantly, they are not entirely eliminated. We appreciate your question and will update this discussion in the paper to clarify this aspect further.
> Q7: *Besides the information integration hypothesis, do you rely on other assumptions?*
>
Thank you for this insightful question. Yes, we do rely on additional assumptions. Specifically, the proposed Transformer Doctor is fundamentally based on the Information Integration Hypothesis. This hypothesis is inspired by conjunction errors observed in biological visual systems [1].
Studies such as [2] suggest that these conjunction errors can be mitigated through certain stimuli and cues. Similarly, during the treatment phase, we assume that such errors within the Transformer can also be improved using specific cues.
In summary, as we mentioned in Section 1, the motivation for this work stems from the error mechanisms observed in biological vision. Thus, Transformer Doctor is closely tied to insights from these related works.
[1] Treisman, Anne M., and Garry Gelade. "A feature-integration theory of attention." *Cognitive Psychology* 12.1 (1980): 97-136.
[2] Prinzmetal, William, David E. Presti, and Michael I. Posner. "Does attention affect visual feature integration?" *Journal of Experimental Psychology: Human Perception and Performance* 12.3 (1986): 361.
> Q8: *Is the information integration hypothesis valid in vision transformers, especially for large-scale versions?*
>
Thank you for your question. The Information Integration Hypothesis is indeed valid for large-scale Vision Transformers. As addressed in response to your second comment (Q2), the depth of the Transformer, the number of heads, and the dimensionality of features do not affect the information integration process in MHSA and FFN. Quantitative experiments also confirm that the hypothesis is effective for large-scale Vision Transformers, such as ViT-Large, demonstrating similar beneficial results.
> Q9: *This paper has many typos, even in the result presentation (e.g. Table 1 BeiT/Imagenet-1K cell).*
>
We sincerely apologize for the typographical errors. The issue you mentioned has been addressed in the revised manuscript, and we have conducted a thorough review of the entire paper to avoid similar issues.
---
Rebuttal 3:
Comment: The authors have addressed some of my concerns about experimental results. I have increased my rating.
---
Rebuttal Comment 3.1:
Comment: Thank you for your positive feedback and recognition of our work. We are pleased that our response addressed your concerns. We sincerely appreciate your valuable comments, which have helped improve this work. | Summary: This paper presents Transformer Doctor, which diagnoses issues with the Transformer attention mechanism and resolves them via several information integration hypotheses. The primary motivation of the paper is to identify the source of incorrect information aggregation, which leads to erroneous predictions, and the inspiration comes from biological groundings. The paper proposes Inter-token Information Static Integration, Intra-token Information Static Integration, Heuristic Information Dynamic Integration Therapy, Rule-based Information Static Integration Therapy, and Joint Therapy of Dynamic and Static Integration, along with an extensive evaluation. The paper improves the performance of the Transformer mechanism significantly.
Strengths: 1. The paper is well written.
2. Each proposed component is well-motivated and discussed.
3. The step-by-step introduction of the hypothesis from section 3 to section 5 provides a clear picture.
4. The inter-token and intra-token diagnosis hypotheses are very promising for understanding the internals of MHSA.
5. The improved attention maps show the potential of the proposed information integration mechanism.
6. The integration equations are straightforward and easier to understand.
Weaknesses: 1. It appears from sec6.1 that the proposed approach requires a pre-trained model. Please correct me if not because I could not find explicit training settings in the paper. If it is trained for longer epochs, it increases the training time.
2. Since the model is pre-trained, even though attention maps are improving, it is unclear whether the gains came from further fine-tuning or information integration. In other words, how much the integration mechanism contributed to the gains?
3. In Figure 4, the regions in red always lie on the object, i.e., both without and with Transformer Doctor applied. How can it be inferred that the attended region after using the doctor mechanism is the sole reason for improved performance? Similarly for Fig. 12.
4. Selecting the gradient based on the actual class label is possible with supervised learning. Hence, how can this method be applied to DINO-like self-supervised methods, given that DINO also uses a Transformer?
Note: I am open to adjusting ratings if my concerns are resolved, especially 1,2,3.
Technical Quality: 4
Clarity: 3
Questions for Authors: See weakness.
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Overall, the main limitations are the use of pre-trained models as a baseline, unclear hypothesis verification on the improved attention maps, and usability across self-supervised models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your diligent review and comments. We are pleased that you found the paper well-written and appreciated our step-by-step introduction of the proposed methods and hypotheses. Your recognition of our diagnostic hypotheses and the potential of the proposed information integration mechanism is highly encouraging. Below are our responses to each of your comments (each of your comments is highlighted in italics).
> Q1: *It appears from sec6.1 that the proposed approach requires a pre-trained model. Please correct me if not because I could not find explicit training settings in the paper. If it is trained for longer epochs, it increases the training time.*
>
We apologize for any confusion caused. The proposed Transformer Doctor indeed diagnoses and treats an already trained model, i.e., a model that has undergone normal training and fitting, as mentioned on lines 261-263 of the paper. Similar to most interpretability methods based on trained models, the proposed information integration hypothesis assumes that the trained model exhibits error mechanisms akin to those in biological vision (lines 126-129 of the paper). Specifically, in the diagnostic and therapeutic experimental setup, the pre-trained model was trained for 300 epochs, and continuing training does not improve performance (Table 3 in Appendix G). During the diagnosis and treatment phase, we retrain for an additional 100 epochs using the same training settings, which yields excellent results without significantly increasing the overall training time. We will emphasize this part in the paper to avoid any further confusion.
> Q2: *Since the model is pre-trained, even though attention maps are improving, it is unclear whether the gains came from further fine-tuning or information integration. In other words, how much the integration mechanism contributed to the gains?*
>
Thank you for your comment. The pre-trained model used in our study is a fully trained and fitted model, so further fine-tuning does not enhance its performance. However, significant improvements in accuracy are observed only after diagnosis and treatment with Transformer Doctor. This can be seen in Table 3 of Appendix G. For instance, in the case of ViT-Tiny on ImageNet-10, the baseline accuracy is 78.90%. After further fine-tuning ("*+Blank*"), the accuracy remains almost unchanged. However, with the application of information dynamic integration constraints ("*+IDI*"), information static integration constraints ("*+ISI*"), and both dynamic and static integration constraints ("*+IDI, ISI*"), the accuracy improves by 1.40%, 1.60%, and 2.00% respectively. Therefore, the performance gains are primarily due to the integration mechanism improvements rather than further fine-tuning. Similarly, the improvements in attention maps are also mainly attributable to the integration mechanism rather than further fine-tuning.
> Q3: *In Figure 4, the regions in red always lie on the object, i.e. without and with transformer doctor applied. How can it be inferred that the attended region after using the doctor mechanism is the sole reason for improved performance? Similarly Fig12.*
>
Thank you for your valuable comments. It is true that in Figures 4 and 12, some images show red regions on the object both before and after applying Transformer Doctor. However, after applying Transformer Doctor, the red regions on some objects become more focused (e.g., the "orange" in Figure 12), indicating that the model is attending to more useful features. Additionally, in some images, the red regions initially lie on the background, but after applying Transformer Doctor, they shift to the object (e.g., the "leopard" in Figure 4), suggesting that the model is now focusing on information most relevant to the prediction task. Furthermore, referring back to Table 3 in Appendix G, the performance improvement does not come from further fine-tuning, indicating that it is solely due to the proposed Transformer Doctor mechanism. We hope these explanations address your concerns.
> Q4: *Selecting the gradient based on the actual class label is possible with supervised learning. Hence, how can this method be applied to DINO-like self-supervised methods, given that DINO also uses a Transformer?*
>
This is an excellent question. While our method has been validated on most commonly used Transformer architectures, it is indeed challenging to directly apply the proposed method to self-supervised learning methods. As you correctly pointed out, the calculation of gradients in our method requires the availability of actual labels. In methods like DINO, which update parameters through contrastive learning and momentum updates for visual representation learning, it is difficult to compute gradients related to specific true labels, significantly limiting the applicability of our method.
A potential approach is to utilize the intrinsic mechanisms within self-supervised learning methods to find alternatives to the gradient-based approach. For instance, in DINO, the contrastive learning signal between the teacher and student networks can be used to estimate the importance of different heads in the Transformer. This signal could serve as a proxy for the gradients in our method.
Thank you for raising this important question again. Exploring the integration hypothesis and diagnostic and treatment methods for more complex tasks like self-supervised learning will be a direction of our ongoing research.
---
Rebuttal 2:
Title: Response to Authors
Comment: I thank the authors for the detailed responses to my comments.
Overall, I am convinced about my question on the source of the accuracy gains i.e. higher training epochs or information integration.
I have read comments from other reviewers and responses to them and they are in line with my understanding of the paper.
I have a particular concern about the reported results for DeiT, commented on by reviewer BM6N. I agree with the reviewer that resource constraints led to smaller training batch sizes and thus an unfair comparison. However, I also applaud the authors for training the baseline under the same settings. At the same time, I suggest the authors match the exact settings of the baselines, because the results presented in this paper will become a baseline for comparison in the future. Hence, to avoid confusion among readers, it is advisable to add a detailed training-settings section in the supplementary material, and also a small footnote or caption in the table stating the reasons for the accuracy difference of DeiT, while also adding the results from the original paper.
Based on the reviews, I vote for acceptance. I have increased my rating.
---
Rebuttal Comment 2.1:
Comment: Thank you very much for your thoughtful review and the positive feedback. We sincerely appreciate your acknowledgment of our efforts to address your concerns and your recognition of the improvements made in the paper.
We understand your concern about the reported results for DeiT, as highlighted by reviewer BM6N. We agree that ensuring fair comparisons is critical. To address this, we will include a detailed training-settings section in the supplementary materials and provide a footnote or caption in the relevant tables to explain the accuracy differences for DeiT. We will also add the results from the original paper for completeness.
Once again, we sincerely thank you for your valuable feedback and for recommending our work for acceptance. Your insights have been instrumental in improving the quality of our paper. | Rebuttal 1:
Rebuttal: Dear Reviewers ZXFK, BM6N, 3nqh, and KQuQ,
Thank you for your diligent reviews and constructive feedback. We particularly appreciate your recognition of the novelty and insightfulness of our work and are pleased that you find our approach of integrating error mechanisms from biological vision with Transformers both reasonable and inspiring.
Through our detailed responses to each of your comments, we believe the paper has significantly improved. We have addressed each comment individually and collected the following common concerns raised by the reviewers. We hope our responses address your concerns and would be grateful if you could consider raising your scores. If you have any additional questions, we are more than happy to engage in further discussion.
> Reviewer BM6N pointed out that the number of foreground annotations used for evaluation is insufficient, while Reviewer 3nqh noted that not all datasets have foreground annotations, which limits further usage.
>
We apologize for the confusion caused to both reviewers. To address Reviewer BM6N's concern, it is important to clarify that the foreground annotations are used for training, not evaluation. Considering that not all datasets have foreground annotations, our method achieves good performance with only a small number of foreground annotations. Regarding Reviewer 3nqh's concern, we emphasize that during the dynamic information integration treatment, very few manually annotated foregrounds are required to achieve significant improvements, which does not hinder the practical applicability of our method. For more details, please refer to our responses to each of your individual comments.
> Reviewer BM6N raised concerns about training baselines and the validation of large-scale Transformers.
>
We fully understand the reviewer's concerns. In response to these issues, we have provided detailed explanations and added relevant experiments below the reviewer's comments. It is important to emphasize that the experiments in the paper were conducted under fair experimental settings for comparison and analysis. We sincerely hope that the corresponding responses address the reviewer's concerns effectively.
> Reviewers ZXFK and 3nqh both raised issues related to gradients. One mentioned the applicability of gradients in self-supervised learning methods like DINO, while the other questioned why gradient-based methods are more effective.
>
We apologize for any confusion caused. In the responses to the reviewers' comments, we have provided detailed explanations on the role of gradients, their applicability in DINO, and potential alternatives to gradients. We believe these responses address the reviewers' concerns.
> Reviewers BM6N and 3nqh both pointed out several typos in the paper.
>
We appreciate the reviewers’ attention to detail in identifying these typos. We have corrected these errors and thoroughly reviewed the entire paper.
In addition to addressing the above issues, detailed responses to each reviewer’s comments can be found below their respective feedback. Thank you once again to all the reviewers for their meticulous review and valuable comments, which have significantly improved the paper. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Large Stepsize Gradient Descent for Non-Homogeneous Two-Layer Networks: Margin Improvement and Fast Optimization | Accept (poster) | Summary: This paper studies the effect of large learning rates for near-homogeneous logistic regression. In particular, it extends previous results on linear predictors under large learning rates (an EoS phase and a stable phase for the convergence of GD; faster convergence) to nonlinear predictors satisfying Lipschitzness and Lipschitz smoothness, for example, two-layer neural networks with the last layer fixed. The authors also prove margin improvement for large learning rates in this near-homogeneous model. In the end, they demonstrate faster convergence using large learning rates.
Strengths: 1. This work is the first to prove the margin improvement for large learning rates and this near-homogeneous model.
2. The authors analyze a nonlinear (near-homogeneous) model, which is closer to practice than previous works.
Weaknesses: The role and effect of large learning rates, especially compared to small learning rates, may need more illustration.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. What is the initial condition in Theorem 3.2?
2. For the margin improvement part (Theorem 2.2), it would be clearer if the dependence on the learning rates in the modified margin function could be explained in the main body. Also, it would be good if there were direct comparisons between small and large learning rates results of margin improvement, e.g., different dependence on learning rates or other parameters. This may help better illustrate the advantages of using large learning rates.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Although the authors improve the theory from linear models to near-homogeneous models, it is still far from the ones used in practice. If possible, some discussions on the intuitions of more nontrivial nonlinear models may be helpful.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your positive feedback. We answer your questions as follows.
---
**Q1**. The role and effect of large learning rates, especially compared to small learning rates, may need more illustration.
**A1**. Corollary 4.2 and Theorem 4.3 together show a separation between large and small stepsize, where a large stepsize enables $O(1/T^2)$ loss, but small stepsizes suffer from at least a $\Omega(1/T)$ loss. This illustrates the effect of large stepsize for fast optimization.
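As a self-contained toy illustration of this effect (our own sketch using a 1D convex logistic loss, not the paper's two-layer setting, so it only exhibits the stepsize-dependent loss scale roughly like $1/(\eta t)$ and no EoS oscillation):

```python
# Toy sketch: GD on the 1D separable logistic loss L(w) = log(1 + exp(-w)).
# Not the paper's construction; it only illustrates that a larger stepsize
# drives the loss down faster over the same number of iterations.
import math

def final_loss(eta, steps, w0=0.0):
    w = w0
    for _ in range(steps):
        grad = -math.exp(-w) / (1.0 + math.exp(-w))  # dL/dw = -sigmoid(-w)
        w -= eta * grad                               # w increases each step
    return math.log(1.0 + math.exp(-w))

loss_small = final_loss(eta=0.1, steps=200)
loss_large = final_loss(eta=10.0, steps=200)
# The large stepsize yields a much smaller loss after the same number of steps.
```

In this convex toy both stepsizes converge, but the attained loss after a fixed budget is far smaller for the large stepsize, consistent with the separation described above.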
---
**Q2**. What is the initial condition in Theorem 3.2?
**A2**. We do not have explicit assumptions on the initial condition $w_0$. For the bound in Theorem 3.2 to be non-vacuous, $\|w_0\|$ needs to be small compared to $\eta t$.
---
**Q3**. For the margin improvement part (Theorem 2.2), it would be clearer if the dependence on the learning rates in the modified margin function could be explained in the main body. Also, it would be good if there were direct comparisons between small and large learning rates results of margin improvement, e.g., different dependence on learning rates or other parameters. This may help better illustrate the advantages of using large learning rates.
**A3**. The modified margin is defined in Equation (10) in the appendix with explicit dependence on the stepsize. We will add this to the main paper.
The impact of the stepsize on the margin improvement is complex. Let us use $s$ to denote the starting time of the stable phase. If we assume $w_s$ is independent of the stepsize, then the loss and the parameter norm depend on the stepsize through the bounds $\Theta(1/\eta(t-s))$ and $\Theta(\log(\eta(t-s)))$, respectively. However, $w_s$ depends on the stepsize through a complex function, making it difficult to determine the quantitative effect of stepsize on margin improvement.
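For background, the standard normalized margin that such modified margins typically adapt (our own sketch of the textbook definition, not the paper's Equation (10)) is, for an $L$-homogeneous predictor $f$,

```latex
\bar{\gamma}(w_t) \;=\; \frac{\min_{i}\, y_i f(w_t; x_i)}{\|w_t\|^{L}} .
```

The modified margin additionally carries an explicit stepsize dependence, so the $\Theta(1/(\eta(t-s)))$ loss bound and $\Theta(\log(\eta(t-s)))$ norm bound above jointly control its growth in the stable phase.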
---
**Q4**. Although the authors improve the theory from linear models to near-homogeneous models, it is still far from the ones used in practice. If possible, some discussions on the intuitions of more nontrivial nonlinear models may be helpful.
**A4**. We believe our intuitions about the transition from an EoS phase to a stable phase extend to more general nonlinear models. Specifically, the initial EoS phase happens when GD oscillates within a sharp valley, and GD enters the stable phase when it navigates itself into a flat valley. Our theory of large stepsize is consistent with the celebrated flat minima intuition. We will add this discussion in the revision.
---
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for the rebuttal. Please consider adding the discussions on the initial condition in Theorem 3.2 to the paper. I will maintain my score. | Summary: This work analyzes the dynamics of large-stepsize GD under the logistic loss for non-homogeneous two-layer networks. The authors characterize two phases: a first phase in which the empirical risk oscillates, and a second in which it monotonically decreases. Additionally, they show:
1) normalized margin grows nearly monotonically in the second phase
2) in the oscillatory EOS phase the average empirical risk decreases under certain conditions
3) with a larger step-size, GD undergoes the phase transition and optimizes more efficiently.
Strengths: 1) The paper is well written and builds on top of existing works on EoS for logistic loss (https://arxiv.org/pdf/2305.11788 and https://arxiv.org/pdf/2402.15926), extending these two works from linear networks to two-layer non-linear networks.
2) This paper analyzes the large-learning-rate regime with non-homogeneous two-layer networks (for the logistic loss). This is a practical setting, since most works study gradient flow or linear overparameterized models.
Weaknesses: 1) I liked the way the authors introduced the three model conditions to extend their result from Theorem 1 in https://arxiv.org/pdf/2402.15926 in terms of the three constants (Lipschitzness, smoothness, and near-homogeneity). However, these conditions do not hold for ReLU networks (Assumptions 1.B and 1.C break), even though ReLU is probably the most used non-linear activation out there. In fact, there are a number of non-differentiable activation functions for which these assumptions, and hence the final result, do not hold.
2) By comparing the proofs of this work with those of https://arxiv.org/pdf/2402.15926, it seems that a similar proof technique can be utilized to incorporate the three assumptions and derive the same results involving the three constants.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1) Will the loss landscape with non-homogeneous layers be the same as in the linear-network case? For example, the authors show that in logistic regression, the landscape is a valley instead of a quadratic basin. The iterates converge quickly in the max-margin subspace direction, and the oscillations in the orthogonal direction become progressively smaller with time. Can you give an intuition of how the loss landscape may look for non-homogeneous layers?
2) Is it true that, unlike MSE losses, where there is a chance of divergence for very high learning rates, logistic regression losses always converge irrespective of how large a learning rate is used?
3) I missed the part in the paper where the authors discuss progressive sharpening. Usually for MSE losses, PS first takes place until the sharpness hits 2/lr, after which the loss oscillates around the walls of the basin. Does the logistic regression loss not exhibit PS? It seems that EOS starts from the onset, and then the oscillations stop when the iterates enter a valley of flatter oscillation directions.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: As discussed, the analysis breaks when considering non-differentiable activation functions like ReLU. The analysis presented in the paper is an extension of the previous work I mentioned, with some relaxation on the non-linearity (see the three conditions). It would be better if the authors emphasized this limitation and pointed out the novel directions in this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback. We address your concerns below.
---
**Q1**. However, these conditions do not hold for ReLU networks (Assumptions 1.B and 1.C break), which is probably the most used non-linear activation out there. In fact, there are a number of non-differentiable activation functions for which these assumptions, as well as the final result, do not hold.
**A1**. Note that Assumption 1.C holds for ReLU networks with $\kappa=0$ when replacing the gradients with subgradients.
Since our focus is non-homogeneous networks, we choose to work with differentiable predictors to simplify the analysis. While we do not cover ReLU activation, we have addressed many other commonly used activation functions, such as GeLU and SiLU. These two are especially interesting due to their non-homogeneity. Therefore, they cannot be covered by the prior theory. Given that our work is the first to prove margin improvement for non-homogeneous predictors, we believe our contributions are very significant.
Extending our results to non-differentiable predictors using tools from [Lyu and Li, 2020] is interesting, and we will comment on this as a future direction.
---
**Q2**. By comparing the proofs of this work with that of https://arxiv.org/pdf/2402.15926 , it seems like similar proof technique can be utilized to incorporate the three assumptions to derive the same results involving the three constants.
**A2**. We emphasize that our results are for nonlinear networks, whereas [Wu et al., 2024] only focused on linear predictors (or networks in the NTK regime). While our EoS analysis is motivated by [Wu et al., 2024], identifying the correct set of assumptions to enable the analysis for nonlinear networks requires nontrivial effort.
Besides, our stable phase analysis also proves margin improvement for non-homogeneous predictors, which partially solves an open problem noted by [Ji and Telgarsky, 2020]. To prove this result, we use techniques significantly different from those of [Wu et al., 2024].
---
**Q3**. Will the loss landscape with non-homogeneous layers be the same as in the linear networks case? For example, the authors show that in logistic regression, the landscape is a valley instead of a quadratic basin. The iterates converge quickly in the max-margin subspace direction, and the oscillations within the orthogonal direction become progressively smaller with time. Can you give an intuition of how the loss landscape may look for non-homogeneous layers?
**A3**. No, the loss landscape for non-homogeneous networks is significantly different from that of logistic regression. Note that the former is non-convex while the latter is convex. It is difficult to develop geometric intuitions about the non-convex landscape of non-homogeneous networks. This difficulty also demonstrates the significance of our contributions to establishing margin improvement and EoS theory for non-homogeneous networks.
---
**Q4**. Is it true that, unlike MSE losses, where there is a chance of divergence for very high learning rates, logistic regression losses always converge irrespective of how large a learning rate is used?
**A4**. This is a good question and we do not know the answer. In practice, the loss can diverge when training deep networks with very large stepsizes, even under logistic loss. However, we are not sure if this is a numerical issue or if there is a provable upper bound for the convergent stepsize.
---
**Q5**. I missed the part in the paper where the authors discuss progressive sharpening. Usually for MSE losses, PS first takes place until the sharpness hits 2/lr, after which the loss oscillates around the walls of the basin. Does the logistic regression loss not exhibit PS? It seems that EOS starts from the onset, and then the oscillations stop when the iterates enter a valley of flatter oscillation directions.
**A5**. Empirically, progressive sharpening also happens under logistic loss, as shown by [Cohen et al., 2020]. However, this is beyond the scope of this work, which focuses on the implicit bias and convergence of large stepsize GD.
---
---
Rebuttal Comment 1.1:
Title: Reviewer response-1
Comment: I thank the authors for their response. While most of the comments seem addressed, I think Q2 could have been addressed with more detail rather than "analysis for nonlinear networks requires nontrivial efforts" and "use techniques significantly different from those of [Wu et al., 2024]". I think this question is important since the nature of the results overlaps with those of https://arxiv.org/pdf/2402.15926. I would suggest that the authors explain or mention how different the proof techniques are from https://arxiv.org/pdf/2402.15926, at least briefly, in the manuscript.
---
Reply to Comment 1.1.1:
Title: A technical comparison with [Wu et al., 2024]
Comment: Thank you for the suggestions. We make a detailed technical comparison with [Wu et al., 2024] below. We will add these discussions in the revision.
Our stable phase analysis shows the margin improvement in non-homogeneous networks. In comparison, the stable phase analysis in [Wu et al., 2024] only concerns the convergence of the loss. As margin improvement is harder to show (especially in non-homogeneous cases), the techniques of [Wu et al., 2024] are insufficient to achieve our goal. To achieve our goal, we analyze the evolution of several modified versions of the margin. None of these quantities appear in [Wu et al., 2024]. So our techniques here are significantly different from [Wu et al., 2024].
Regarding our EoS and acceleration analysis, we use tools from [Wu et al., 2024], as there are not many tools that can deal with large stepsizes. Besides extending their results from linear models to networks, we make two innovations in our analysis. First, our comparator $u_1$ (see Equation 17 in Appendix B) contains an extra component to accommodate the non-homogeneity of the predictor. Second, our Lemma C.4 uses a sharper convergence analysis, which removes some logarithmic factors in Theorem 4.1 and Corollary 4.2 compared to the corresponding results in [Wu et al., 2024]. For instance, our Corollary 4.2 gets $O(1/T^2)$ while their Corollary 2 gets $O(\log(T)^2 / T^2)$. This is mentioned in Lines 224-227, but we will emphasize it more in the revision. | Summary: This paper studies GD for nearly-1-homogeneous neural networks with large stepsizes. It provides two main results.
The first one describes the late, *stable phase* of training and can be seen as an extension of Lyu and Li (2020) result to the large stepsize, nearly homogeneous setting. Yet it comes with weaker conclusions, showing that if the training loss becomes smaller than some threshold at some point, then afterwards:
- the training loss converges to 0 at a $\frac{1}{t}$ rate
- the normalized margin "increases", and hence converges
The second one studies the early-stage, *Edge of Stability* phase for linearly separable data for a more restricted one-hidden-layer neural network architecture (with fixed output weights) and shows that during this early stage, the loss decreases at some rate **on average**. This then allows the authors to provide a result illustrating the advantage of a large stepsize (with an optimization error of order $\frac{1}{T^2}$) vs. a small stepsize (optimization error $\frac{1}{T}$).
Strengths: The study of GD with large stepsizes is of high relevance. This paper puts in perspective the phenomenon of edge of stability and illustrates on a simple example how it can be beneficial to faster rates for the training loss. The claimed results seem sound and the paper is nicely written.
Weaknesses: The provided results and settings might be a bit too weak in my opinion. As a major drawback, the authors insist a lot on the importance of large step sizes for implicit bias in the paper. However, the only implicit bias result is claiming the normalized margin is increasing in the late stable phase. As commented by the authors in the paper, Lyu and Li provided a KKT point convergence result in their work, and I was hoping the same conclusion could be possible here. As a consequence, this paper mostly studies the convergence rates of the training loss, which I find less interesting from a personal point of view.
Actually, discrepancies in the normalized margins can be seen in the different experiments of the paper, but I think it should be discussed more. In the light of the literature on EoS, large stepsizes do not only lead to faster convergence rates, but also to a stronger implicit bias towards "nice" (e.g. sparse) features. Figure 1b) does not support this claim here, but 1c) still seems to suggest that larger stepsizes can get better test loss. In consequence, I would have also liked a similar comparison of the test accuracy in figure 2 for CIFAR-10.
Additionally, the setting focuses on nearly-1-homogeneous parameterization. While the near homogeneity assumption is mild and really nice, considering 1-homogeneous networks is very restrictive and simplistic in my opinion. Again, Lyu and Li provide a general result for $L$-homogeneous parametrizations. Having a $2$-homogeneous assumption (or even general one) would largely improve the current work.
Actually, combining this assumption with the derivative condition (Assumption 2.A) makes the considered parametrization nearly linear. As a consequence, I fear that the proof of the second main result, which depends on these assumptions, does not significantly differ from the linear regression case, from a high level, abstract point of view.
Technical Quality: 4
Clarity: 3
Questions for Authors: - Is there a specific mathematical challenge (e.g., wrt Lyu and Li) in proving a KKT point convergence type of result?
- Same question for extending $1$-homogeneity to $L$-homogeneity for any positive integer $L$
- Can we say more about the normalized margin in the restricted setting of Theorem 4.1?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback. We address your comments below.
---
**Q1**. Is there a specific mathematical challenge (e.g., wrt Lyu and Li) in proving a KKT point convergence type of result?
**A1**. Good question. The key difficulty is that KKT points with respect to an optimization problem are not even well defined for non-homogeneous predictors, so extending the KKT point convergence analysis of Lyu and Li is challenging.
Since our nearly homogeneous function $f(w;x)$ is asymptotically homogeneous, we conjecture that, under suitable conditions, GD may converge to KKT points given by a proxy homogeneous function
$$
\tilde f(w;x) := \left [ \lim_{t \to \infty} \frac{f(t w;x)}{t} \right] \cdot ||w||.
$$
There are still two obstacles to proving the above conjecture.
1. Verifying the (sub)differentiability of $\tilde f(w;x)$ is nontrivial.
2. The GD dynamics for learning $f$ are different from those for learning $\tilde f$. Specifically, there are examples such that $|f(w)-\tilde f(w)|$ is uniformly lower bounded by a fixed constant. Then the GD dynamics for $f$ stay away from those for $\tilde f$, and might not satisfy the corresponding $(\epsilon, \delta)$ KKT conditions in [Lyu and Li].
We believe additional regularity assumptions about $f$ are needed to show KKT convergence. Nonetheless, we think this is a great question, and we will discuss it in the revision.
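For intuition on why the limit in the definition of $\tilde f$ can exist, here is a minimal numerical sketch (the two-layer softplus network with fixed output weights $\pm 1/2$, and the specific weights and input, are illustrative assumptions): since $\mathrm{softplus}(z) \approx z$ for large $z$, the ratio $f(tw;x)/t$ converges to the value of the corresponding 1-homogeneous ReLU network.

```python
import math

def softplus(z):
    # numerically stable log(1 + e^z)
    return math.log1p(math.exp(-abs(z))) + max(z, 0.0)

def f(w, x):
    # two-layer net with fixed output weights +1/2 and -1/2 (illustrative)
    w1, w2 = w
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    return 0.5 * softplus(dot(w1, x)) - 0.5 * softplus(dot(w2, x))

w = ([1.0, -2.0], [0.5, 1.0])   # hidden-layer weights (illustrative)
x = [1.0, 0.3]

vals = []
for t in (10.0, 100.0, 1000.0, 10000.0):
    tw = ([t * c for c in w[0]], [t * c for c in w[1]])
    vals.append(f(tw, x) / t)

# the 1-homogeneous proxy: softplus replaced by ReLU
relu = lambda z: max(z, 0.0)
dot = lambda u, v: sum(a * b for a, b in zip(u, v))
limit = 0.5 * relu(dot(w[0], x)) - 0.5 * relu(dot(w[1], x))
# vals approaches `limit` as t grows
```

The approximation error decays like $e^{-t}$, which is consistent with the near-homogeneity conditions used in the paper.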
---
**Q2**. Is there a specific mathematical challenge (e.g., wrt Lyu and Li) in extending $1$-homogeneity to $L$-homogeneity for any positive integer $L$?
**A2**. As mentioned in Line 300, this is an avenue left for future exploration. Extending our results from near 1-homogeneity to near L-homogeneity will require non-trivial modifications to both the margin functions and the stable phase conditions outlined in Theorem 2.2. Furthermore, verifying that the proposed stable phase conditions can be satisfied (even in special cases) could be challenging. We will discuss this in the revision. We think our margin improvement results in this work, which extend prior results from 1-homogeneity to near 1-homogeneity, are already a significant step toward addressing the general case.
---
**Q3**. Can we say more about the normalized margin in the restricted setting of Theorem 4.1?
**A3**. Good question. Although we consider linearly separable data, the predictor is nonlinear and, therefore, can achieve a normalized margin larger than the maximum $\ell_2$-margin. Additionally, we observe that in practice, while the normalized margin tends to be positive and increasing, the normalized margins for individual neurons can stay negative (please check Figure 2 in the pdf). A full characterization of the normalized margin is technically challenging even for linearly separable data. We will comment on this as a future direction in the revision.
---
**Q4**. The provided results and settings might be a bit too weak in my opinion… the only implicit bias result is claiming the normalized margin is increasing in the late stable phase…. In the light of the literature on EoS, large stepsizes do not only lead to faster convergence rates, but also to a stronger implicit bias towards "nice" (e.g. sparse) features. Figure 1b) does not support this claim here, but 1c) still seems to suggest that larger stepsizes can get better test loss. I would have also liked a similar comparison of the test accuracy in figure 2 for CIFAR-10.
**A4**. We respectfully disagree that our results and settings are weak. In fact, extending the implicit bias results for homogeneous predictors to general non-homogeneous predictors is an open problem listed in [Ji and Telgarsky, 2020]. Our work, by establishing the margin improvement results to general near 1-homogeneous predictors, partially solves this open problem. Furthermore, we also extend prior neural network analysis for GD with small or infinitesimal stepsizes to an arbitrary large stepsize, which is more relevant to practice. We believe the contributions of this work are already significant.
While understanding the generalization benefits of large stepsizes is very interesting, this question is beyond the scope of the current paper, which focuses on margin improvement and fast optimization.
Please see the attached pdf for Figure 1 reporting the test accuracy for CIFAR-10. We will include it in the revision.
---
**Q5**. ….Actually, combining this assumption with the derivative condition (Assumption 2.A) makes the considered parametrization nearly linear….
**A5**. We would like to point out that Assumption 2.A is a sufficient technical condition to enable the EoS phase analysis. However, we only need Assumption 1.A for Theorem 2.2, which does not assume a lower bound on the derivative and allows the predictor to be highly non-linear.
---
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their answer. I now understand more clearly how these possible extensions still require extensive work. I think the additional discussions in the revised version will improve the quality of the paper.
I raise my score in consequence | Summary: This work studies the phase transition (from EoS phase to stable phase) of GD with large step sizes for training two-layer networks under logistic loss. Specifically, the authors proved the following:
- If the empirical risk is below a threshold depending on the step size, GD enters a stable phase where the loss monotonically decreases and the normalized margin nearly monotonically increases.
- For linearly separable datasets, GD with an arbitrarily large step size exits the EoS phase due to the convergence of the average loss across iterations. Moreover, a tighter bound on the phase transition time is also provided.
- For linearly separable datasets, GD with an appropriately chosen step size achieves an accelerated convergence rate of $O(1/T^2)$.
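The average-loss decrease during the EoS phase can be illustrated with a toy simulation (a linear logistic-regression analogue; the two-sample dataset, initialization, stepsize, and iteration count below are illustrative assumptions, not taken from the paper): on $x_1=(1,2)$, $x_2=(1,-2)$, both labeled $+1$, GD with a large stepsize makes the loss spike early, yet the loss decreases on average and eventually converges while the iterate aligns with the max-margin direction $(1,0)$.

```python
import math

def sigmoid(z):
    # numerically stable logistic function
    return 1.0 / (1.0 + math.exp(-z)) if z >= 0 else math.exp(z) / (1.0 + math.exp(z))

# two samples, both labeled +1; the max-margin direction is (1, 0)
X = [(1.0, 2.0), (1.0, -2.0)]
w = [0.0, 1.0]        # start off the margin direction
lr = 10.0             # large stepsize (illustrative)
losses = []
for _ in range(1000):
    margins = [w[0] * x0 + w[1] * x1 for (x0, x1) in X]
    # logistic loss log(1 + exp(-m)) in a stable form, averaged over samples
    losses.append(sum(math.log1p(math.exp(-abs(m))) + max(-m, 0.0) for m in margins) / 2)
    grad = [0.0, 0.0]
    for (x0, x1), m in zip(X, margins):
        s = sigmoid(-m)              # equals -dloss/dm
        grad[0] -= s * x0 / 2
        grad[1] -= s * x1 / 2
    w = [w[0] - lr * grad[0], w[1] - lr * grad[1]]
# the loss spikes after the first step, then decreases on average to near zero
```

The early spike followed by average decrease mirrors the EoS-phase behavior described in the summary above; the oscillations occur in the direction orthogonal to the margin and dampen over time.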
Strengths: This work makes significant contributions by extending existing results. The authors proved convergence with an accelerated convergence rate in the stable phase, whereas [Wu et al. (2024)] treated the linear predictor. Additionally, the authors proved margin improvement with non-homogeneous activation functions, while previous work focused on small step sizes and homogeneous activation.
Weaknesses: - Despite studying two-layer neural networks, the main theorems require linear separability of datasets, except for Theorem 2.2.
- The paper missed some relevant works. For instance, [D. Barrett and B. Dherin] showed that gradient descent (in discrete time) optimizes the loss plus gradient norm and studied the modified ODE capturing this property. A stochastic variant of this dynamics was also studied by [Q. Li, C. Tai, and W. E]. Additionally, [M. Andriushchenko, A. Varre, L. Pillaud-Vivien, and N. Flammarion] and [Y. Ren, C. Ma, and L. Ying] studied the benefits of large learning rates as well. Regarding optimization in the mean-field regime, [F. Chen, Z. Ren, and S. Wang] and [T. Suzuki, D. Wu, and A. Nitanda] proved the convergence of mean-field Langevin dynamics in the finite-neuron setting.
[D. Barrett and B. Dherin] IMPLICIT GRADIENT REGULARIZATION. ICLR, 2021
[Q. Li, C. Tai, and W. E] Stochastic Modified Equations and Dynamics of Stochastic Gradient Algorithms I: Mathematical Foundations. JMLR, 2019.
[M. Andriushchenko, A. Varre, L. Pillaud-Vivien, and N. Flammarion] SGD with Large Step Sizes Learns Sparse Features. ICML, 2023.
[Y. Ren, C. Ma, and L. Ying] Understanding the Generalization Benefits of Late Learning Rate Decay. AISTATS, 2024.
[F. Chen, Z. Ren, and S. Wang] Uniform-in-time propagation of chaos for mean field Langevin dynamic. 2022.
[T. Suzuki, D. Wu, and A. Nitanda] Convergence of mean-field Langevin dynamics: time-space discretization, stochastic gradient, and variance reduction. NeurIPS, 2023.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Equation (5) in Theorem 2.2 seems to implicitly make assumptions about the data structure/distribution and the number of neurons. Can you provide any non-trivial examples other than linearly separable data? I’m wondering if this theory covers XOR or k-parity datasets, as these could be benchmarks to see the separation from the kernel regime. For instance, see the following papers:
[M. Telgarsky] Feature selection and low test error in shallow low-rotation ReLU networks. ICLR, 2023.
[T. Suzuki, D. Wu, K. Oko, and A. Nitanda] Feature learning via mean-field Langevin dynamics: classifying sparse parities and beyond. NeurIPS, 2023.
- In Figure 1(c), GD with a small step size of 0.02 seems to achieve the best test accuracy at the beginning phase. What happened?
- Can Assumption 1-C be relaxed to $\kappa \leq 1$?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: The limitations have been well addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your support.
---
**Q1**. Despite studying two-layer neural networks, the main theorems require linear separability of datasets, except for Theorem 2.2.
**A1**. We acknowledge that linear separability is a strong assumption. We use this mainly as a sufficient condition for two-layer neural networks, regardless of width, to reach the initial bound of the stable phase when employing large stepsizes. Additionally, our stable phase results do not need this assumption. We believe the linear separability condition can be relaxed, which is left for future work. We will comment on this more in the revision.
---
**Q2**. The paper missed some relevant works. For instance, [D. Barrett and B. Dherin] showed that gradient descent (in discrete time) optimizes the loss plus gradient norm and studied the modified ODE capturing this property. A stochastic variant of this dynamics was also studied by [Q. Li, C. Tai, and W. E]. Additionally, [M. Andriushchenko, A. Varre, L. Pillaud-Vivien, and N. Flammarion] and [Y. Ren, C. Ma, and L. Ying] studied the benefits of large learning rates as well. Regarding optimization in the mean-field regime, [F. Chen, Z. Ren, and S. Wang] and [T. Suzuki, D. Wu, and A. Nitanda] proved the convergence of mean-field Langevin dynamics in the finite-neuron setting.
**A2**. Thank you for bringing our attention to these works. We will cite and discuss them in detail. Specifically, the stepsizes considered by [D. Barrett and B. Dherin] and [Q. Li, C. Tai, and W. E] are small or even infinitesimal, whereas our stepsize is large and causes EoS. The work by [Y. Ren, C. Ma, and L. Ying] studied special linear networks under MSE loss, while we focus on two-layer networks under logistic loss. The work by [M. Andriushchenko, A. Varre, L. Pillaud-Vivien, and N. Flammarion] studied the effect of large stepsizes through experiments, while we take theoretical approaches. The works by [F. Chen, Z. Ren, and S. Wang] and [T. Suzuki, D. Wu, and A. Nitanda] demonstrated the convergence of mean-field Langevin dynamics for two-layer networks with a finite but large number of neurons. In comparison, although we adopt the parameter scaling from mean-field theory, we use a different proof technique that allows any number of neurons.
---
**Q3**. Equation (5) in Theorem 2.2 seems to implicitly make assumptions about the data structure/distribution and the number of neurons. Can you provide any non-trivial examples other than linearly separable data? I’m wondering if this theory covers XOR or k-parity datasets, as these could be benchmarks to see the separation from the kernel regime.
**A3**. This is a good question. In fact, we only need to ensure that the dataset is realizable, that is, there exists a parameter direction $W$ such that $\inf_i y_i f(a W; x_i) \to \infty$ as $a\to \infty$. This indicates that the loss can be made arbitrarily small. Since XOR or $k$-parity datasets can be realized by two-layer networks (with Softplus, for instance), Equation (5) can be satisfied. We will discuss this in the revision.
---
**Q4**. In Figure 1(c), GD with a small stepsize of 0.02 seems to achieve the best test accuracy at the beginning phase. What happened?
**A4**. You are correct that GD with a small stepsize achieves the best accuracy early (after about 20,000 iterations). We think this is because of mild overfitting, since we only use a small dataset of 1,979 samples while the MLP has about 30,000 parameters.
---
**Q5**. Can assumption 1-C be relaxed to $\kappa\le 1$?
**A5**. In Assumption 1-C, $\kappa$ can be any fixed nonnegative number.
---
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I will maintain my positive evaluation. | Rebuttal 1:
Rebuttal: Thank you for all your feedback. We add two figures in the pdf.
- Figure 1 shows the test accuracy of two-layer networks for CIFAR-10 under the same setting as Figure 2(d)-(f). The results support the intuition that large stepsizes lead to stronger implicit biases with "nicer" features.
- Figure 2 shows the training loss and margins of a two-layer network with leaky softplus activations on a synthetic linearly separable dataset. There are five samples in the dataset, which are $$((0.05, 1,2),1), ((0.05, -2,1),1), ((-1,0,2),-1), ((0.05,-2,-2),1),((0.05, 1,-2),1).$$ The max-margin direction is $(1,0,0)$ with a normalized margin of $0.05$. The network only has two neurons with fixed weights $1/2$ and $-1/2$. The leaky softplus activation is $\tilde \phi(x) = ( x + \phi(x))/2$, where $\phi$ is the softplus activation. The stepsize is $3$. We can observe that both neurons have negative normalized margins during the training, while the network's normalized margin increases and becomes positive.
Pdf: /pdf/8229bbfd073fb9ef94a6076df1f6577a78e29d20.pdf | NeurIPS_2024_submissions_huggingface | 2024 | null | null | null | null | null | null | null | null |
Discretely beyond $1/e$: Guided Combinatorial Algorithms for Submodular Maximization | Accept (poster) | Summary: This paper studies the fundamental problem of improving the approximation factor for non-monotone submodular maximization subject to cardinality or matroid constraints. In particular, the focus of the work is on designing combinatorial algorithms as opposed to continuous methods based on the multilinear extension.
The paper has the following contributions:
for cardinality constraints:
- a combinatorial $0.385 - \varepsilon$ approximation is proposed, which runs in $O(nk/\varepsilon)$. The algorithm is randomized but can be derandomized with a multiplicative deterioration of the query complexity which is exponential in $1/\varepsilon$
- a deterministic $0.377 - \varepsilon$ approximation is obtained, with running time $O(n \log k\cdot C_{\varepsilon})$, where $C_{\varepsilon}$ is exponential in $1/\varepsilon$.
- for context, the previous best combinatorial algorithm provided a $0.367$ approximation, while the overall state of the art is a continuous $0.41$-approximation.
for matroid constraints:
- a combinatorial $0.305 - \varepsilon$ approximation is proposed, which runs in $O(nk/\varepsilon)$. The algorithm is randomized but can be derandomized with a multiplicative deterioration of the query complexity, which is exponential in $1/\varepsilon$
- the previous best combinatorial algorithm provided a $0.283$ approximation, while the overall state of the art is a continuous $0.41$-approximation.
Strengths: - Submodular maximization is an important problem to the ICML and NeurIPS community, with many practical applications and a rich literature in these conferences
- Closing the approximation gap for non-monotone submodular functions is an exciting research question. To get some context, the current state of the art, i.e., the 0.401 approximation by Buchbinder and Feldman, has recently been presented at STOC, one of the top conferences in theoretical computer science
- finding combinatorial algorithms is important as continuous algorithms are impractical to implement and extremely expensive in terms of queries to the submodular function
Weaknesses: the deterministic algorithms are impractical, as they feature an exponential dependence on the approximation parameter $1/\varepsilon$. Their relevance is mainly theoretical.
Technical Quality: 3
Clarity: 3
Questions for Authors: None
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: The paper has no potential negative societal impact
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | null | Summary: In this paper, the authors develop a combinatorial algorithm for non-monotone submodular maximization under both size and matroid constraints. Their algorithm uses local search to guide RANDOM GREEDY and INTERPOLATEDGREEDY, providing both a randomized algorithm and a deterministic algorithm. For size constraints, the algorithm achieves an approximation ratio of 0.385, and for matroid constraints, it achieves an approximation ratio of 0.305. It is the first combinatorial algorithm with an approximation ratio greater than 1/e for size constraints. The technique in this paper is similar to that in [6]. However, while the algorithm in [6] is a continuous algorithm requiring a query complexity of $\Omega(n^{10}) $, the algorithm presented in this paper only requires $O(kn)$ queries of a value oracle. They also propose another algorithm based on this approach, which replaces the greedy part in INTERPOLATEDGREEDY with the descending thresholds technique. This algorithm achieves an approximation ratio of 0.377 for size constraints in nearly linear query time $O(k \log n)$, which is slightly greater than 1/e.
Strengths: 1. The paper provides two new deterministic algorithms with approximation ratio better than 1/e.
2. The paper provides practical randomized algorithm with both theoretical analysis and experimental evaluation.
Weaknesses: 1. The results in the paper are not strong enough. For the randomized algorithm, the paper gives an algorithm with query complexity $O(kn/\epsilon)$ and approximation ratio 0.385. It is worse than the most recent paper (https://arxiv.org/abs/2405.13994). The two deterministic algorithms are impractical, since the dependence on $\epsilon$ is too large.
Technical Quality: 3
Clarity: 3
Questions for Authors: Both the results and techniques in this paper are somewhat similar to those in the recent paper (https://arxiv.org/abs/2405.13994). It might be beneficial to explain the relationship between your paper and theirs, especially the differences in the technical ideas.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Both the results and techniques in this paper are somewhat similar to those in the recent paper (https://arxiv.org/abs/2405.13994). It might be beneficial to explain the relationship between your paper and theirs, especially the differences in the technical ideas.
Thank you for bringing this paper to our attention; we note that it was released on arXiv after the submission deadline for NeurIPS 2024. We would also like to respectfully point out the NeurIPS policy concerning contemporaneous work.
Indeed, the authors of arXiv:2405.13994 had a goal similar to ours, namely to produce practical algorithms with a guarantee larger than $1/e$. It appears to be a case of independent, parallel work. Broadly speaking, the main idea of the two works is similar: to develop methods that guide the random greedy algorithm with a fast algorithm for finding a local optimum. Our local search procedure is deterministic, while theirs is randomized and uses different ideas. Their local search is faster for small $k$ values, and they also guide a faster variant of RandomGreedy to achieve their overall time complexity of $O(n + k^2)$.
On the other hand, in our work, we 1) develop a guided algorithm for matroid constraints; 2) for the size constraint, give an asymptotically faster algorithm that uses a novel way of guiding with partial solutions from random greedy itself, which are not local optima, thereby achieving ratio $0.377$ in $O_{\epsilon}(n \log k)$; and 3) derandomize.
For 1) and 2), we had to develop novel recurrences to analyze random greedy, which depart further from the original guiding and analysis of the continuous measured greedy algorithm to achieve ratio $0.385$. Random greedy for matroids is much more like a local search algorithm than a greedy algorithm, as the best swap is made at each iteration. In particular, we developed the guided recurrences stated in Lemmas C.5 and C.6 for the matroid constraint, together with their closed-form solution in Lemma C.7, as illustrated in Fig. 4. Further, derandomization for the matroid constraint required the development of Algorithm 8 and its analysis in Lemma D.1. For 2), we think the use of random greedy to guide itself is interesting, and we believe the resulting analysis (depicted in Fig. 2) aids in the understanding of the random greedy algorithm, perhaps opening up further uses such as bicriteria algorithms. Although the deterministic version has poor dependence on $\epsilon$, this idea might be used to produce a faster randomized version of this algorithm in subsequent work.
In future versions of our paper, we will add a citation to this independent, parallel work and a discussion of the technical relationship.
---
Rebuttal Comment 1.1:
Title: response to contemporaneous work
Comment: I want to echo the authors on this point. I do not think it is appropriate to compare their results with results from work that was clearly done in parallel. Here is the NeurIPS policy on contemporaneous work.
**Contemporaneous Work**: For the purpose of the reviewing process, papers that appeared online within two months of a submission will generally be considered "contemporaneous" in the sense that the submission will not be rejected on the basis of the comparison to contemporaneous work. Authors are still expected to cite and discuss contemporaneous work and perform empirical comparisons to the degree feasible. Any paper that influenced the submission is considered prior work and must be cited and discussed as such. Submissions that are very similar to contemporaneous work will undergo additional scrutiny to prevent cases of plagiarism and missing credit to prior work.
---
Rebuttal Comment 1.2:
Comment: Thank you for your response. I've increased my score. | Summary: This paper studies the classical problem of maximizing a non-monotone submodular function subject to a cardinality and a matroid constraint.
A long line of work has developed approximation algorithms for these problems which achieve a $0.401$-approximation; however, every algorithm which achieves better than $1/e \approx 0.368$ relies on the multilinear extension and is primarily of theoretical interest, as the run times are very large (but polynomial).
The more practical combinatorial algorithms have not been able to break the $1/e$ barrier.
The main contribution is a suite of algorithms which work via a "guided" local search procedure.
The main algorithm works as follows: a local search algorithm obtains a first solution and then a "guided" local search uses this first solution to obtain a second one.
If the parameters of the algorithm are appropriately set, at least one of these two solutions is of high (expected) quality.
The main result is that this (randomized) algorithm achieves a $0.385-\epsilon$-approximation for cardinality constraint and $0.305-\epsilon$ approximation for matroid constraint and runs in $\mathcal{O}(n k / \epsilon)$ time.
There are a variety of other results, including several de-randomizations of this algorithm, though those are mostly of theoretical interest because their run time is exponential in $\epsilon$.
Simulations are run comparing the algorithms to two reasonable benchmarks.
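As background for the summary above, here is a hedged sketch (our own illustration, not the paper's pseudocode) of the RandomGreedy subroutine that the guided procedure builds on, due to Buchbinder et al. (SODA 2014): each of the $k$ steps samples uniformly from the $k$ largest marginal gains, with zero-gain dummy elements standing in for negative gains. The toy cut objective below is an assumption made purely for the demo.

```python
import random

def random_greedy(ground, f, k, seed=0):
    """Sketch of RandomGreedy (Buchbinder et al., SODA 2014) for
    max f(S) s.t. |S| <= k, with f non-monotone submodular.
    Each of the k steps samples uniformly from the k largest marginal
    gains; zero-gain dummy elements (None) replace negative gains, so
    sampling a dummy means skipping the step."""
    rng = random.Random(seed)
    S = set()
    for _ in range(k):
        gains = sorted(((f(S | {e}) - f(S), e) for e in ground - S),
                       key=lambda t: -t[0])[:k]
        # dummy elements ensure the sampled set M_i has k non-negative gains
        top = [(g, e) if g > 0 else (0.0, None) for g, e in gains]
        top += [(0.0, None)] * (k - len(top))  # pad so |M_i| = k
        _, e = rng.choice(top)
        if e is not None:
            S.add(e)
    return S
```

On a small cut instance (cut functions are non-monotone submodular), the routine returns a feasible set whose accepted elements all had strictly positive marginal gain at the time of selection.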
Strengths: The main strength of the work is in the development of simple algorithms with improved approximation ratios for submodular maximization.
In particular, the randomized algorithm achieving $0.385-\epsilon$-approximation for cardinality constraint and $0.305-\epsilon$ approximation for matroid constraint is the main contribution of the paper.
Of particular note is that this algorithm is analyzed for both cardinality and matroid constraints, though different switching times $t$ should be used.
I really enjoyed the clarity of the writing in Section 2.2 which provided intuition for the theoretical results.
Weaknesses: **Paper Clarity**: For the most part, the paper is not well organized and can be challenging to read. With the exception of Section 2.2, the writing is a bit rushed and lacking focus. Many times I had to go back and forth in the paper just to read it. Perhaps the reason for this is that several algorithms are being presented with various guarantees and settings. While the results are great, the paper requires some serious efforts in improving the readability.
**Simulations**: While I appreciate the authors' work in constructing simulations, they are admittedly quite weak or at the very least poorly explained. This may be due in large part to space constraints. I summarize the issues here:
- The objective functions are not presented and the movie summarization dataset is not discussed.
- It is claimed that max-cut is run on 3 different random graph models -- but then how does only one line appear in Figure 3c?
- If FastLS + GuidedRG is random, then why does it not also have some uncertainty quantification (i.e. standard error bars) on its objective value and run time?
- The simulations are weak because in all of these instances, the more sophisticated algorithms provide very little improvement over the greedy algorithm. I'm sympathetic to this phenomenon, I know it can happen. But then I think that the experiments should focus on practical instances where the sophisticated algorithms do actually improve over the greedy algorithm.
Technical Quality: 3
Clarity: 1
Questions for Authors: I thank the authors for their manuscript and invite them to respond to the points raised in my review.
However, I do not feel that I have any questions to raise at this time.
Confidence: 4
Soundness: 3
Presentation: 1
Contribution: 3
Limitations: All limitations sufficiently addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful feedback. We respond to each point below.
> Paper Clarity: For the most part, the paper is not well organized and can be challenging to read. With the exception of Section 2.2, the writing is a bit rushed and lacking focus. Many times I had to go back and forth in the paper just to read it. Perhaps the reason for this is that several algorithms are being presented with various guarantees and settings. While the results are great, the paper requires some serious efforts in improving the readability.
We apologize for the difficulty. Due to space constraints, it was difficult to decide what
should be presented in the main text vs. the appendices. We decided 1) to limit the main
text to the size constraint, as it is more accessible than the matroid constraint,
and 2) to try to explain the most important ideas in the main text at a high level, and
leave the formal analysis to the appendix. However, this means we sometimes refer to
lemmas, etc., in the appendix, which decreases readability. In the next revision, we will
make further refinements in the hope of increasing readability and minimizing
the necessity of consulting back and forth between the appendices and the main text.
> The objective functions are not presented and the movie summarization dataset is not discussed.
Due to space constraints, the objective functions for each application
and the movie summarization dataset details are discussed in Appendix F.
We will make this more clear in the next version.
> It is claimed that max-cut is run on 3 different random graph models -- but then how does only one line appear in the Figure 3c?
In the main text, we only present the results on ER graphs.
The results on the other two models are provided in Fig. 6 in Appendix F.
We will add references in the experiment section to clarify.
> If FastLS + GuidedRG is random, then why does it not also have some uncertainty quantification (i.e. standard error bars) on its objective value and run time?
Thank you for asking this. Interestingly, it turns out that the objective value of our algorithm is usually dominated by
the local search subroutine, which is deterministic. Thus, there is little variance
in the objective value, which is why the error bars are not visible (they are there).
For the runtime, the number of queries of GuidedRG is deterministic.
Thus, the number of queries of our main algorithm is deterministic.
> The simulations are weak because in all of these instances, the more sophisticated algorithms provide very little improvement over the greedy algorithm. I'm sympathetic to this phenomenon, I know it can happen. But then I think that the experiments should focus on practical instances where the sophisticated algorithms do actually improve over the greedy algorithm.
In our results, we observe an improvement over Greedy of up to 2\%,
which in our opinion is significant.
However, we agree that we could have chosen applications, perhaps weakly submodular ones,
in which further improvement on Greedy is possible. Analyzing our algorithms in the weakly submodular case both theoretically and empirically is an interesting avenue for future work.
In addition, Greedy does not guarantee a constant approximation ratio for non-monotone submodular maximization problems.
Compared to those algorithms which do provide constant approximation ratio,
our algorithm outperforms RandomGreedy with respect to the objective value,
and outperforms Lee et al.'s LocalSearch with respect to the query complexity.
---
Rebuttal Comment 1.1:
Title: response to authors
Comment: I thank the authors for their thoughtful response to my review.
Their responses on the simulations have been satisfactory. I understand and agree with all points (except perhaps that 2% increase is significant, but we don’t have to discuss this further). I believe they will be addressed in a revision.
I understand that the space constraints make writing the paper difficult. I am not opposed to the idea of putting the size constraint in the main body and the matroid, together with all formal analysis in the appendix
But let me be a bit more specific about the issue of clarity. The way I see it, the main algorithm of the paper is Algorithm 2. However, when the reader reads the pseudocode of Algorithm 2 on top of page 4, it conveys very little meaning about the algorithm itself. Namely, GuidedRG is not defined nearby in the paper (indeed, it is only defined in the appendix). So, as the reader, it is quite confusing to read Section 2.0 as the algorithms aren’t defined sufficiently well for us to have an understanding yet. A similar thing happens in Definitions 2.1 and Lemma 2.2 — these are outputs of an algorithm which is still unfamiliar to the reader at that point. In this way, the reader is having to skip back and forth between parts of the paper upon a first read, which does detract from the clarity. You might consider shortening or moving Sec 2.3 and 3 to the appendix to describe GuidedRG in the main paper.
I mention this only to be more specific about what I meant by paper clarity and offer suggestions for improvement.
I like the theoretical results quite a lot and I think that the authors are capable of revising the paper so as to improve clarity. The only difficulty is that such revisions appear to require non-trivial re-organization of the paper. As a reviewer, this puts me in a tough place. I am uneasy with significantly raising my score because the authors could revise the paper to make it clearer. We should be reviewing the submitted paper rather than the paper that could be resubmitted. This is especially true given that there is no way to accept only after verifying the appropriate revisions have been made. On the other hand, it would be a shame for the paper to not appear in this conference, given its strong technical results. I would also feel bad if rejection of this paper meant it would no longer be viewed as contemporary to Tukan et al (2024).
I will raise my score to weak accept to signal to the AC that I am supportive of the technical results in the paper, but still have concerns about the presentation.
---
Reply to Comment 1.1.1:
Title: response
Comment: Thank you for the feedback and for the specific example concerning the presentation. We agree that our style of introducing the final algorithm or result first, before defining each piece, is difficult to read. However, we believe we can significantly improve the readability without a large amount of reorganization. For example, when introducing Alg. 2, we can rephrase the description to give a better sense of what each component is, and also that their precise description is deferred:
> In overview, Alg. 2 consists of two components, which are detailed below. The first component is a local search algorithm described in detail in Section 2.1. In brief, the local search starts from a constant-factor solution and efficiently improves it to nearly a local optimum. The second component is a random greedy algorithm that is guided by the output of the local search, described in detail in Section 2.2...
In any case, thank you again for the feedback, and we will improve the readability in the next version. | Summary: The paper investigates combinatorial approximation algorithms for constrained submodular maximization problems. Observing that the algorithms that pass the $1/e$ threshold generally carry the problem to the continuous domain, they investigate the answer to the following question: "Is it possible to obtain approximation ratios that are better than $1/e$ for constrained submodular maximization problems with combinatorial algorithms?" They answer this question affirmatively by combining Buchbinder et al.'s RandomGreedy algorithm with a novel fast local search algorithm. Additionally, they propose derandomized versions of these algorithms by altering the InterpolatedGreedy algorithm by Chen and Kuhnle.
Strengths: The paper improves the known existing approximation ratios without resorting to continuous algorithms while using the same number of queries. More specifically, they improve the combinatorial approximation ratios of Buchbinder et al. from $1/e \approx 0.367$ to $0.385 - \varepsilon \approx 1/e + 0.018$ for the cardinality constraint and from $0.283$ to $0.305 - \varepsilon$ for matroid constraints while still requiring $\mathcal{O}(kn)$ queries. The idea of guiding the RandomGreedy algorithm with the Fast Local Search (FastLS) is original. The ideas are conveyed clearly and the improvement over the state of the art for combinatorial algorithms is quantifiable.
Weaknesses: - On lines 73-75, you mention "Unfortunately, there is no known method to derandomize continuous algorithms, as the only known way to approximate the multilinear extension relies on random sampling methods." You may want to revise this sentence because as far as I know, there is a work that proposes estimating the multilinear relaxation of coverage-like functions with Taylor series expansions instead of sampling. Citation: Özcan, Gözde, Armin Moharrer, and Stratis Ioannidis. "Submodular maximization via Taylor series approximation." Proceedings of the 2021 SIAM International Conference on Data Mining (SDM). Society for Industrial and Applied Mathematics, 2021.
- On line 283, you mention an InterlaceGreedy algorithm for the first time as far as I can tell. Is this a typo or a different algorithm? If it is a different algorithm, where it is described?
Technical Quality: 3
Clarity: 3
Questions for Authors: - On lines 114-115, you mention using an LP method for derandomizing the RandomGreedy algorithm and how this method did not work in this case. Could you please elaborate more on why this was the case? Was the problem asymptotic query complexity or something else?
- Why are the dummy elements needed?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Both in Sections 1.1 and 5, the authors discuss the limitations of their work fairly.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive comments.
> On lines 73-75, you mention "Unfortunately, there is no known method to derandomize continuous algorithms, as the only known way to approximate the multilinear extension relies on random sampling methods." You may want to revise this sentence because as far as I know, there is a work that proposes estimating the multilinear relaxation of coverage-like functions with Taylor series expansions instead of sampling. Citation: Özcan, Gözde, Armin Moharrer, and Stratis Ioannidis. "Submodular maximization via Taylor series approximation." Proceedings of the 2021 SIAM International Conference on Data Mining (SDM). Society for Industrial and Applied Mathematics, 2021.
We thank the reviewer for pointing us to this work, which provides an interesting way to approximate the multilinear extension in the case of coverage functions. We will add it to the discussion, and modify the claim as the reviewer suggests.
> On line 283, you mention an InterlaceGreedy algorithm for the first time as far as I can tell. Is this a typo or a different algorithm? If it is a different algorithm, where it is described?
We apologize for the confusion. InterlaceGreedy is a subroutine of the InterpolatedGreedy of [11] (number 11 of the bibliography in the manuscript). We develop guided versions of it in Alg. 8 and 9 of the appendix.
> On lines 114-115, you mention using an LP method for derandomizing the RandomGreedy algorithm and how this method did not work in this case. Could you please elaborate more on why this was the case? Was the problem asymptotic query complexity or something else?
The LP method relies on a standard lemma for non-monotone submodular functions,
which needs a bound on the probability that any element is chosen into the solution.
Specifically, Lemma 2.2 of [A] and its generalization in [B]. When the algorithm switches back from the guided
behavior to unguided, we were only able to provide a loose upper bound on this probability
that wasn't good enough for the analysis. In particular, we had difficulty ordering
the probabilities of the elements in a way that would allow us to apply the lemma.
It is possible that a generalization of Lemma 2.2, or a clever way of considering
the probabilities is possible and that the LP method could be used.
[A]: Feige et al. Maximizing Non-monotone Submodular Functions. SIAM Journal on Computing 40 (4).
[B]: Buchbinder et al. Submodular Maximization with Cardinality Constraints. SODA 2014.
> Why are the dummy elements needed?
We introduce dummy elements to simplify both the pseudocodes and the analysis.
The simplification comes from the fact that we may assume that an optimal
solution is of size $k$, or a base of the matroid, by adding dummy elements.
Moreover, our set $M_i$ in RandomGreedy
of the top $k$ marginal gains has $k$ non-negative gains. Otherwise, we would
have to allow $M_i$ to potentially be smaller and adjust the probabilities of elements,
for example: choose no element with some probability, and one of the remainder
uniformly randomly. This would create more cases in the analysis and complicate the
picture unnecessarily. | Rebuttal 1:
Rebuttal: We thank all reviewers for the constructive comments. We hope that we were able to answer the questions of all reviewers in our individual responses. To summarize,
in the next version,
- we will make efforts to improve the readability of the paper by minimizing the necessity of consulting the appendix to understand the theoretical results.
- In the experimental section, we will add more pointers to the appendix to clarify settings and results that are omitted from the main text for space reasons.
- We will add a citation to the contemporaneous work [1].
We note that [1] appeared on arXiv on 22 May 2024,
20:56 UTC, after the NeurIPS 2024 submission deadline
of 22 May 2024, 20:00 UTC -- thus, there was no way for
us to address this paper in the submitted manuscript.
We discuss in detail the differences between the two works in our response to Reviewer qSWx, and a version of this discussion will be added to the paper.
[1]: Tukan et al. Practical 0.385-Approximation for Submodular Maximization Subject to a Cardinality Constraint. arXiv:2405.13994 | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
A Transition Matrix-Based Extended Model for Label-Noise Learning | Reject | Summary: This paper studies the problem of learning with noisy labels. To handle the instance-dependent noise, the authors propose an extended model for transition matrix-based methods. Specifically, their model combines a class-dependent transition matrix with a sparse implicit regularization term. The authors provide a theoretical analysis of the proposed method. Experiments conducted on both synthetic and real-world noisy label datasets verify the effectiveness of their method.
Strengths: 1. Theoretical analysis of the convergence and generalization are provided.
2. Experiments are conducted thoroughly, including experiments on synthetic and real-world datasets. The ablation study is also conducted.
Weaknesses: 1. The method proposed in this paper appears to be a straightforward combination of VolMinNet and SOP.
2. The experimental results for TMR are missing for the CIFAR-N, Clothing1M, and WebVision datasets.
3. An important baseline, CCR [1], which is the state-of-the-art among transition matrix-based methods, is absent.
4. The paper lacks an analysis of the estimation error of the transition matrix. It would be beneficial to compare the estimation errors of the transition matrix for TMR against those of other baselines.
**Reference**
[1] Cheng, De, et al. "Class-dependent label-noise learning with cycle-consistency regularization." *Advances in Neural Information Processing Systems* 35 (2022): 11104-11116.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Why is the residual term $r_i$ designed as $r_i = u_i \odot u_i - v_i \odot v_i$? If the residual term $r_i$ were designed as $r_i = u_i$, would there be any theoretical or empirical differences?
2. How does the residual term impact the estimation error of the transition matrix? Does it reduce or increase the error? An analysis or experimental results highlighting the effect of the residual term on the estimation error would strengthen the paper.
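A hedged aside on question 1 above (our own illustration, not the paper's argument): one commonly cited property of the difference-of-squares form is that it represents signed residuals while making the gradient dynamics of each coordinate multiplicative in its magnitude, which is the usual mechanism behind the implicit sparse regularization of such parameterizations. A minimal sketch, with all values chosen arbitrarily:

```python
def hadamard_step(u, v, grad_r, lr):
    """One gradient-descent step on the scalar parameterization
    r = u*u - v*v (one coordinate of u ⊙ u − v ⊙ v), given dL/dr.
    Chain rule: dL/du = 2u·grad_r and dL/dv = −2v·grad_r."""
    u_next = u - lr * 2.0 * u * grad_r
    v_next = v + lr * 2.0 * v * grad_r
    return u_next, v_next

# To first order, the induced change in r is −4·lr·(u² + v²)·dL/dr:
# the effective per-coordinate step size scales with the coordinate's
# magnitude, so coordinates near zero move slowly unless persistently
# pushed — multiplicative dynamics that bias r toward sparsity.
```

Unlike a plain parameterization $r_i = u_i$, the difference of squares keeps these magnitude-proportional dynamics while still representing negative residuals.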
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: I did not find that the authors have discussed the limitations and potential negative societal impact of their work. To improve the paper, the authors can provide a thorough analysis of the limitations of their method in an independent section. For example, scenarios where the method might not perform well can be included.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Comment: Since the authors did not post their responses, I maintain my initial rating. | Summary: In learning from noisy labels, existing methods generally focus on class-dependent (but instance-independent) noise that can be modeled by a transition matrix $\mathbf{T}$. Some methods have also been proposed for instance-dependent noise (modeled by $\mathbf{T}(x)$). This work belongs to the latter. In particular, it proposes to implicitly model $\mathbf{T}(x)$ using an extended model based on the transition matrix $\mathbf{T}$ and a residual term $\mathbf{r}(x)$. Some theoretical properties (e.g. convergence and generalization) of the proposed algorithm (TMR) are analyzed under certain conditions. Experiments show that the proposed algorithm outperforms baselines.
Strengths: **Originality**
The paper studies the challenging problem of instance-dependent label noise, which is less addressed in the literature compared to class-dependent noise. The proposed extended model for transition matrix, which is a combination of a transition matrix with residual terms, seems novel and effective. Related work is adequately cited.
**Quality**
The experiments are quite comprehensive. The paper compares the proposed method with multiple methods (including some state-of-the-art ones) on various datasets. The experimental results show that the proposed method outperforms all those baselines. Some theoretical properties (e.g. convergence and generalization) of the proposed algorithm are also analyzed under certain conditions.
**Clarity**
The description of the proposed method is clear. The experiment section is generally clearly written and well-organized.
**Significance**
The proposed method shows significant improvements compared with various baselines. Therefore, it has the potential to be adopted by other researchers and practitioners, advancing the state of the art in learning from noisy labels.
Weaknesses: **Originality**
- In Lines 120-125, the residual term $\mathbf{r}(x)$ is introduced. However, it is not clear to me how novel it is compared to the previous work [57,25,30,31]. The authors should elaborate on this point.
- I can see why residual term $\mathbf{r}(x)$ might be useful, but why is it modeled as in the form in Line 124? The motivation should be explained.
**Quality**
- The convergence analysis seems very restrictive to me because it requires too many assumptions (Lines 171-175, Lines 183-186, and Appendix B.2).
- The generalization analysis (Theorem 3.2) is w.r.t. the training loss (surrogate loss) under the noisy distribution $\tilde{\mathbb D}$, but the test accuracy under the clean distribution $\mathbb D$ is what people really care about. Is it possible to prove any consistency guarantees?
- Knowledge of the ground truth $R_*$ is required to derive Theorem 3.2, but we do not know $R_*$ in practice.
- Section 3 is not clearly written, and I found it hard to follow and assess its correctness (see below).
**Clarity**
Section 3 is not clearly written, and I found it hard to follow and assess its correctness. Specifically:
- In Lines 173-174, is $R_{\ast}$ assumed to be $U_{\ast} \odot U_{\ast} - V_{\ast} \odot V_{\ast}$?
- In Lines 203-205, $\mathcal F$ is a set of loss functions. What is the exact meaning of "about the data"? Why is $R$ not considered in $\mathcal F$? Is a fixed $R$ being used here?
- In Lines 206-207, what is the definition of $\epsilon$-cover?
- In Lines 207-208, what are the mathematical definitions of the "average losses"?
- In Lines 210-213, it seems that here $R_{\ast}$ is fixed. Yet, it does not make sense to me because $R$ should depend on the transition matrix $T$ and the distributions $\mathbb D$ and $\tilde{\mathbb D}$. What is "ground truth" w.r.t. here?
**Significance**
The significance of the proposed method could be further enhanced through a more rigorous theoretical analysis (see above).
Technical Quality: 3
Clarity: 2
Questions for Authors: Besides my questions listed above, here are my additional questions:
- For real-world noisy datasets, how did you get clean test labels?
- In Appendix C.3, what are symmetric noise and flip noise?
---
Minors:
- Eq. (20): $R^*$ ---> $R_*$.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: I did not see where the authors discussed the limitations of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Title: Maintain my initial rating
Comment: Since the authors did not post their responses, I maintain my initial rating. | Summary: In the noisy label learning problem, noise is often characterized by a confusion matrix. In contrast to instance-independent noise, this work considers a setting where confusion matrices could differ across samples. Under this setting, the authors propose to use a global confusion matrix shared by all instances and a residual term for each instance to account for the difference between the instance-dependent confusion matrix and the global confusion matrix. For learning, an MLE loss combined with an implicit sparsity regularizer is optimized.
Strengths: The work is tackling a challenging yet important setting in noisy label learning. The proposed model is a natural and intuitive extension to instance-independent confusion matrix as it allows for a wider range of noise. The proposed algorithm (TMR) is simple to implement, and demonstrated to be effective under synthetic and real-data experiments.
Weaknesses: - Motivation for the use of the sparsity regularizer is not clear. The authors do not discuss much about why the vector $\textbf{r}$, or the matrix $\textbf{R}$, in their model should be sparse. They did point out on page 3, line 117 that the difference between using the global transition matrix and the instance-dependent transition matrix should be small. However, that is not sufficient to promote sparsity, as any other $\ell_p$ ($p>1$) norm could have promoted that goal.
- The use of the implicit regularizer is also not clear. More importantly, since the output $\textbf{T}^T P(\textbf{Y} | X) + \textbf{r}(X)$ is a probability vector, $\textbf{r}(X)$ has to satisfy certain constraints. This is neither discussed nor specified anywhere in the paper, and hence it is questionable how the parameterization of $\textbf{r}(X)$ could produce a valid probability vector $\textbf{T}^T P(\textbf{Y} | X) + \textbf{r}(X)$.
- The analysis might contain a flaw. Equation (14) is incorrect: $\widetilde{\textbf{Y}}$ is a matrix composed of one-hot vectors while the RHS is a matrix composed of probability vectors. The two are not equal in general. This equation seems to be the key step to motivate the objective to be analyzed in (17), and also the key step in the proof of Theorem 3.1 (page 15, line 534).
- The analysis is based on linear model which is not very realistic.
Technical Quality: 1
Clarity: 2
Questions for Authors: - Should condition 1 be a design of the algorithm instead of a condition, since the learning rate should be in our control?
- In real-data experiments, should features extracted from self-supervised learning technique also be used for baselines, since it is applicable to most methods?
- How is noise generated in the synthetic setting?
- How does TMR combat larger noise? Are there hyper-parameters we can tune in such situations? If yes, how were they selected in the experiments? If no, can you provide some intuition on how different noise levels can be dealt with using the same algorithm configuration?
Confidence: 3
Soundness: 1
Presentation: 2
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Comment: Since the authors did not post their responses, I decide to maintain my initial rating. | Summary: This paper introduces a method that supplements the traditional estimate of a class-dependent transition matrix, which is popular in label-noise learning. Traditional transition matrix methods are less effective for instance-dependent noise. To overcome the limitation, the proposed method adds a residual term such that it can extend the projection of a class-dependent T on label predictions to fit the true one as if we have an instance-dependent T. Theoretical analyses of the algorithm confirm its convergence and generalization properties under specific assumptions. Experimental results on various synthetic and real-world noisy datasets such as CIFAR-N and Clothing1M show the performance.
Strengths: 1. The performance is eye-catching.
2. The method is proposed with both theoretical analyses and experimental results.
Weaknesses: 1. The intuition of the proposed residual is not clear. For example, why a sparse structure is preferable in this problem? Why do u and v enable a sparse structure? Why is a Hadamard product employed? Why not simply use a vector u?
2. The theoretical part of the main paper is heavy but the outcome is not convincing. Specifically, there is a huge gap between Eq. (17) and Theorem 3.1.
3. The assumption in Eq. (7) is too strong.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. Is Theorem 3.1 based on infinite data or finite data?
2. It is not clear how the convergence is guaranteed. The critical results refer to another paper.
3. How do you guarantee the uniqueness of $\mathbf \theta^*$, $\mathbf T^*$, and $\mathbf R^*$? I believe it is hard to prove and simply assuming the uniqueness is too strong.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Lookback Prophet Inequalities | Accept (poster) | Summary: This paper extends the standard online selection problem by enabling the decision-maker to choose previous items with some discount that is captured by decay functions. The authors analyze the competitive ratio for different observation orders by giving a reduction from general decay functions to simple decay functions.
Strengths: 1. This paper extends the standard prophet inequality to a more general and realistic form.
2. This paper gives a general reduction that characterizes the property of decay functions.
3. The competitive ratio for adversarial order is tight.
Weaknesses: 1. The analyses for random order and IID cases are not ideal. It seems neither upper nor lower bounds are tight for general $\gamma$.
2. I am not sure about the technical contribution of this paper. What is the biggest technical challenge in the analysis?
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What is the biggest technical challenge in the analysis?
2. I feel it is better to emphasize the choice of distribution with support $\{0,a,b\}$ when proving the upper bound of the $\gamma$-prophet inequality (maybe change the statement of theorems). This is because $CR^D(ALG) \le \sup_A \frac{\mathbb{E}[A^{\gamma_D}(I)]}{\mathbb{E}[OPT(I)]}$ for $I$ taking values in $\{0,a,b\}$, and $\sup_A CR^{\gamma_D}(A) = \sup_A \inf_I \frac{\mathbb{E}[A^{\gamma_D}(I)]}{\mathbb{E}[OPT(I)]} \le f(\gamma_D)$ (current statement in Theorem 4.4/4.5) does not directly imply $CR^D(ALG) \le f(\gamma_D)$.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and effort spent on our submission. Below we address the raised concerns and questions.
### Weaknesses
* **Upper bounds.** The reduction from the $D_\infty$- to the $\gamma$-prophet inequality relies on using distributions with support $\{0,a,b\}$, which is a limitation. Proving better upper bounds would require improving the reduction itself. We are not sure whether this is feasible with arbitrary decay functions.
* **Lower bounds.** The algorithms we provide for the IID and random order models match the optimal single-threshold algorithms for $\gamma = 0$. Better algorithms require using time-dependent thresholds, which are technically difficult to analyze even for $\gamma = 0$. Multiple prior works on prophet inequalities have studied either the IID or the random order model (see the related work section), incrementally improving the lower and upper bounds. Moreover, finding an optimal algorithm in the prophet inequality with random order is still an open question.
### Questions
* **Technical challenges.** The first big technical challenge lies in the reduction from the $\mathcal{D}$- to the $D_\infty$-prophet inequality, as we consider very general classes of decay functions with minimal assumptions. Moreover, the reduction relies on explicitly constructing hard instances for which the decay functions $\mathcal{D}$ and $D_\infty$ are equivalent, which requires carefully chosen concentration inequalities and rigorous tuning of the parameters in the construction. In addition, a separate proof is given for each order model, as different arguments are required. The second big technical challenge lies in analyzing the algorithms for the $\gamma$-prophet inequality. With single-threshold algorithms, as shown in Equation (4), there is an additional term of $E[\gamma (\max X_i) 1_{\max X_i<\theta}]$ compared to the case of $\gamma=0$, which cannot be lower bounded by $c E[\max X_i]$ for a universal constant $c$ independent of the distributions. Hence, manipulating this term to improve upon the competitive ratio of the case $\gamma = 0$ is challenging. Proving upper bounds is also more technical than in the case $\gamma = 0$.
* **Statement of upper bounds in the $\gamma$-prophet inequality.** We agree with the reviewer and thank them for the suggestion. There is a discussion on this point in Section 4.4. However, we will revise the statements of Theorems 4.3, 4.4, and 4.6 to place greater emphasis on this aspect.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the response. I do not have further questions now.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for getting back to us. We understand from their response that we have adequately addressed the concerns raised in their review. If this is the case, we hope the reviewer will consider raising their rating to reflect their satisfaction with our response. Otherwise, we are more than willing to engage in further discussion and provide the necessary clarifications. | Summary: **Problem Studied**
This paper introduces the problem of "lookback prophet inequalities", which is a variant of prophet inequalities. In this problem, at the stopping time, instead of always selecting the current item, the algorithm is allowed to select any item that has arrived up to now. However, items in the past have their value discounted by known decay functions. The goal is to design algorithms with a good competitive ratio, which is the ratio of the algorithm's expected value to the expectation of the highest-value item in hindsight.
**Main Results / Contributions**
The first main contribution of this paper is to define the problem of lookback prophet inequalities. The paper considers the problem in the adversarial order, random order, and IID models. It turns out that for all three models, when analyzing the worst-case competitive ratio, one can assume that the decay functions are all of the form $x \to \gamma x$, for some $\gamma$. For the adversarial model, matching upper and lower bounds are derived. For the random order and IID models, upper and lower bounds are derived, but they do not match.
Strengths: The paper is generally well-written. The definitions and theorem statements are clear, and the proof sketches in the first 9 pages of the paper give the reader a good intuition about why the statements are true. The paper introduces a new problem (lookback prophet inequalities), which to my knowledge has not been studied before. The problem definition is clean, and I think it is a reasonably natural formulation.
Weaknesses: There are already numerous papers on variants of the prophet inequality, and this paper introduces yet another one. At this point, one must question how interesting or significant this contribution is. However, the problem definition appears reasonable, and the proofs seem to be fairly clean, which is why I am recommending a weak acceptance.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. I think the real estate example doesn't make too much sense. If the seller returns to a previous buyer, and it turns out the buyer is no longer interested, the seller can continue to sell the house.
2. The restaurant example also seems a bit strange to me. If the cost of revisiting a restaurant depends on the distance you need to walk back to it, this doesn't seem to be capturable with the current definitions of the decay functions, no? (Currently, the decay functions only model how many steps back into the past you look. E.g. If you are going from restaurant 3 to restaurant 1, this uses the same decay function as going from restaurant 5 to restaurant 3.)
3. It seems like the reduction to $\mathcal{D}_\infty$ relies on the fact that $n$ goes to infinity. Are there better competitive ratios that are possible for a fixed $n$?
4. There are several typos in the paper, notably in the title ("Inequalitites" -> "Inequalities").
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and effort spent on our submission. Below we address the raised concerns and questions.
### Weakness
Indeed, many variants of the prophet inequality are used to model specific use cases. For example, 'Learning Online Algorithms with Distributional Advice' (ICML 2021) explores a situation where a decision-maker with access only to samples competes against an adversary with access to distributions, and 'Fairness and Bias in Online Selection' (ICML 2021) examines the case of items belonging to distinct, incomparable groups.
On the other hand, the model we study is a generalization of the prophet inequality, which removes the overly pessimistic assumption of irrevocable rejection decisions, and captures much more general and more realistic online selection problems, which is why we believe it is a highly relevant model.
### Questions
* **Real estate example.** In the introduction, we use examples to illustrate that rejection decisions can often be reversed in practical scenarios. While we acknowledge the reviewer's point that the lookback prophet inequality does not perfectly model a real estate scenario, we believe it offers a more realistic representation than the standard prophet inequality, and constitutes an important step towards closing the gap between theory and practice in online selection problems.
* **Restaurant example.** Imagine restaurants lined up along a street, with roughly equal distances between them. If there is a long distance between two consecutive restaurants, then it can be represented by empty restaurant spots, each with a zero reward. This can be modeled with decay functions of the form $D_j = cj$ with $c$ a constant. In a more general scenario, restaurants could be located anywhere on a map, which can be modeled as a decision-maker traversing a path in an arbitrary metric space. This problem is not captured by the lookback prophet inequality, and exploring it would be an interesting direction for future research.
* **Competitive ratio for fixed $n$.** The competitive ratio is defined as the worst-case ratio over instances of all sizes, which is why considering arbitrarily large instances for proving upper bounds is not a limitation. Nonetheless, for the random order and IID models, we believe that better ratios are possible for a fixed $n$, as suggested for example in Lemma 4.5. Some prior works, such as "Comparisons of stop rule and supremum expectations of iid random variables", focused on characterizing the optimal ratios for fixed $n$ in the IID model. The authors use dynamic programming to give a recursive characterization of these ratios, but the computations are cumbersome and highly technical, and yet they fail to give closed-form expressions or precise estimations. Investigating this question in the Lookback Prophet inequality with arbitrary decay functions is surely a very interesting research direction, but also very challenging.
* **Typos.** We thank the reviewer for their remark. We have reviewed the paper in detail and corrected all the typos.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response, I appreciate it! | Summary: This paper studies a variant of prophet inequalities where the agent can reuse a previous item at a decaying price. Analyzing three models, adversarial model, random order model, and iid model, this paper gives various lower and upper bounds about the competitve ratio that an algorithm may achieve in these setups.
The paper mainly follows a reduction idea. It first shows that for any decaying function family $\mathcal D$, the optimal competitive ratio is close to that for the limiting decaying function $D_{\infty}$, which reduces the $\mathcal D$-prophet problem to a $D_{\infty}$ one. This idea is further generalized so that it suffices to consider linear functions (i.e., $x\mapsto \gamma x$ for some $\gamma$) as the CR for $D_{\infty}$ is again close to that of $x\mapsto (\lim_{x\to \infty} \frac{D_{\infty}(x)}{x})x$. Equipped with these reductions, the authors
1. establish an upper bound on the CR in the adversarial model, where a matching online algorithm is also proposed;
2. develop an upper bound that is more general than previous works (considering classical prophet inequalities) in the random order model, but the lower bound algorithm is neither computationally feasible nor optimal; and
3. design an algorithm in the iid case whose asymptotic CR is the same as that in the random order model.
Strengths: 1. The setup is new, and it appears to be of importance.
2. The paper is well-written, so it is easy to get the intuitions behind the reductions.
3. The result for the adversarial case is optimal.
Weaknesses: 1. For the first reduction ($\mathcal D$ to $D_{\infty}$), it seems that only some order models mentioned in the main text allows such a reduction. Why you only consider these models, and are there any other models excluded from this reduction (or do you have any intuition on what kinds of order models can ensure the reduction)?
2. For random order models and iid models, the algorithms designed are sub-optimal and cannot be implemented easily.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. See Weakness 1
2. 'However, considering larger classes of algorithms, the competitive ratios achieved in the IID model are better than those of the random order model.' Can you give some examples? I see in the limitations you mentioned that such algorithms are not designed in your $\mathcal D$-prophet model. Why they do not generalize?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Clearly stated
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and effort spent on our submission. Below we address the raised concerns and questions.
### Weaknesses
* **Other models.** We studied the models where the only decision made by the algorithm is the stopping time. The only model not included in our analysis is the "order selection model", where the decision-maker additionally decides in which order to observe the samples. To prove the reduction in that model, our intuition is that hard instances can be constructed using a trick similar to the one we used for the IID model, but then the optimal order in which the samples of these instances are observed should also be characterized.
* **Optimality.** The algorithms we provide for the IID and random order models match the optimal single-threshold algorithms when $\gamma = 0$. Better algorithms require using time-dependent thresholds, which are difficult to analyze even for $\gamma = 0$. Multiple prior works on prophet inequalities have studied either the IID or the random order model (see the related work section), incrementally improving the lower and upper bounds. Moreover, finding an optimal algorithm in the prophet inequality with random order is still an open question.
* **Implementation.** The algorithms we propose only require computing the solution $p$ of the equation $1-(1-\gamma)p = \frac{1-p}{-\log p}$, which can be easily computed numerically with very high precision as explained in Line 307. The proposed threshold then depends on the distribution of the maximum reward, which is standard for prophet inequalities. Additionally, we give in Corollary 4.4.1 a simpler threshold that results in a slightly weaker competitive ratio.
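As an illustration of how cheap the computation mentioned above is, the equation from Line 307 can be solved by simple bisection. The helper below is a hypothetical sketch (not the authors' code); the bracketing argument assumes $\gamma < 1$, since the left-hand side minus the right-hand side is positive near $p = 0$ and negative near $p = 1$.

```python
import math

def solve_threshold(gamma, tol=1e-12):
    """Numerically solve 1 - (1 - gamma) * p = (1 - p) / (-log p) for p in (0, 1).

    f(p) below is positive near p = 0 and negative near p = 1 (for gamma < 1),
    so bisection on this bracket converges to a root.
    """
    def f(p):
        return 1 - (1 - gamma) * p - (1 - p) / (-math.log(p))

    lo, hi = 1e-12, 1 - 1e-12
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For $\gamma = 0$ one can verify by substitution that the root is $p = 1/e$, which the bisection recovers.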
### Questions
* **Separation between the IID and random order models.** For $\gamma = 0$, the optimal single-threshold algorithm for both the iid and random order models has a competitive ratio of $1-1/e$. However, as mentioned in the related work section, using time-dependent thresholds enables a competitive ratio of $\approx 0.745$ in the iid model, which is optimal. On the other hand, in the random order model, no algorithm has a competitive ratio better than $\sqrt{3}-1 \approx 0.732$, which shows the separation between the best competitive ratios in both models.
* **Generalizing the algorithms.** A key property satisfied by single-threshold algorithms is Equation $(4)$ in line 270, on which our analysis relies. For algorithms using time-dependent thresholds, this property is no longer true (see Line 272), and different analysis techniques must be used to generalize them. Studying such algorithms is difficult and very technical even for $\gamma = 0$, and multiple separate works have focused either on the IID or the random order model to advance the analysis of such algorithms (see the related work section).
---
Rebuttal Comment 1.1:
Comment: Thanks for your response! I'd like to keep my score unchanged. | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Graph Diffusion Transformers for Multi-Conditional Molecular Generation | Accept (oral) | Summary: In molecular generation tasks, it is often desirable for the generator to produce molecules that satisfy multiple properties simultaneously. Previous architectural designs did not pay particular attention to the scenario of optimizing multiple constraints in tandem. Therefore, the authors propose a Multi-Conditional Graph Diffusion Transformer. In the forward diffusion phase, the authors employ a novel noise model that establishes connections between the graph's nodes and edges. They also design a new condition encoder and a graph diffusion transformer architecture capable of handling both numerical and categorical conditions simultaneously. Experiments demonstrate that the proposed method performs well.
Strengths: - The writing is mostly clear and easy to follow
- The evaluation is comprehensive.
Weaknesses: - Some comparisons are not fair; see the questions for details.
- Some evaluation metrics are not suitable; see the questions for details.
Technical Quality: 3
Clarity: 4
Questions for Authors: **Major Issues**
- The comparison presented in Figure 1 is not fair. Molecules that rank low on a certain single property may not necessarily rank low on other properties. Therefore, for a fair comparison, the statistical graph shown in Figure 1(b) should follow the same format as in Figure 1(a). That is, the authors should calculate and visualize the minimal $ K $ at which a molecule can satisfy all the constraints within rank $ K $ for all the generated molecules.
- All the datasets used for drug design suffer from a severe class imbalance issue, i.e., the ratio of positive to negative samples is less than 1 to 20. Could the authors specify the approximate ratio of positive to negative samples in the input condition (label) when calculating the accuracy in Table 2? If the ratio of positive to negative samples follows the original dataset's proportion, using ROC-AUC seems to be a more reasonable metric.
- How is the Avg. Rank calculated in Table 2?
- In section 3.1, if the setting of $\mathbf{Q}\_V$ follows the setting of Digress, and $\mathbf{Q}\_{EV}$ is derived from the marginal distribution of $V$, then we have, for any $k$, $\sum\_{i=1}^{F_V}\mathbf{P}\_{k,i}=N+1 \neq 1$, where $\mathbf{P}=\mathbf{X}\_G^{t-1}\mathbf{Q}\_G^t$, $\mathbf{P}\_{k,i}$ represents the probability of node $k$ being of category $i$. The edges have the same issue. Therefore, $q(\mathbf{X}\_G^t|\mathbf{X}_{G}^{t-1})$, as defined by the formula in the article, does not seem to be a distribution. However, in the specific implementation process, the graph is sampled from the "distribution" instead of directly using the probability as the edge and node features, which means the flaws mentioned above theoretically should not affect the model's performance. But the authors need to make corrections in the writing.
**Minor Issues**
- Many diffusion-based generation methods for conditional generation only require the introduction of conditions during the generation process, without the need to incorporate conditions during the training process. Can the proposed method achieve the same?
**If you can address all the major issues, I would be very willing to raise my score**
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: I believe the societal impacts and limitations of this work are discussed to a sufficient extent.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Major 1: Ranking comparison in Figure 1
## Results are updated to show ~$2\times$ improvement
Thank you for your suggestion. We have created a new figure (attached in the rebuttal PDF) based on your feedback. We first identify the maximum ranking position among the three single-conditional generation sets for each multi-condition generated polymer. Then, we calculate the median of these maximum ranking positions, which is 16—approximately $2\times$ better than single-conditional generation, which has a median value greater than 30.
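The statistic described above can be sketched in a few lines. The helper name and the toy data below are hypothetical, purely for illustration; they are not the paper's results.

```python
from statistics import median

def median_max_rank(rankings):
    """Median over molecules of the worst (maximum) ranking position
    attained across the single-conditional generation sets."""
    return median(max(r) for r in rankings)

# Hypothetical toy data: one (r1, r2, r3) tuple per multi-condition polymer,
# giving its rank in each of three single-conditional generation sets.
toy = [(3, 16, 7), (20, 5, 9), (10, 12, 2)]
result = median_max_rank(toy)  # median of {16, 20, 12} -> 16
```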
# Major 2: Data imbalance issue
## The datasets are balanced, and the accuracy is reasonable
Thank you for your comment. Please refer to Line 200:
> For drug design, we create three class-balanced datasets from MoleculeNet [45]: HIV, BBBP, and BACE...
The positive-to-negative ratio is 1:1, making accuracy a reasonable metric in this case.
# Major 4: How the avg. rank is calculated
It is calculated by averaging the ranking positions of model performance for each condition (property).
# Major 5: The issue with $q(\mathbf{X}_G^{t} | \mathbf{X}_G^{t-1})$
## We correct this issue
Thank you for pointing this out! The term $\mathbf{X}_G^{t-1}\mathbf{Q}_G^{t}$ results in unnormalized probabilities. In the implementation, we need to split it into node and edge components and normalize them separately. We have corrected this issue and revised the description of Eq. (6) as follows:
We introduce a new diffusion noise model. At each forward diffusion step $t$, noise is applied to $\mathbf{X}_G^{t-1}$, resulting in an unnormalized probability $\mathbf{\tilde{p}}= \mathbf{X}_G^{t-1} \mathbf{Q}_G^t$. We first separate and normalize the first $F_V$ columns of $\mathbf{\tilde{p}}$ to obtain the noisy node states $\mathbf{X}_V^t$. We then reshape and normalize the remaining $N \cdot E$ dimensions to obtain the edge states $\mathbf{X}_E^{t}$. These components are combined to construct $\mathbf{X}_G^{t}$.
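As a sketch of the normalization procedure described above, assume (hypothetically, for illustration) that each node's unnormalized row simply concatenates its $F_V$ node entries with $N \cdot E$ edge entries; the helper name and layout are not the authors' implementation.

```python
def split_and_normalize(row, F_V, N, E):
    """Split one node's unnormalized row of length F_V + N * E into a
    normalized node-state distribution (length F_V) and N normalized
    edge-state distributions (each of length E)."""
    # Node part: first F_V entries, normalized to sum to 1.
    node_part = row[:F_V]
    s = sum(node_part)
    node_probs = [x / s for x in node_part]
    # Edge part: remaining N * E entries, one E-dim block per neighbor,
    # each block normalized separately.
    edge_probs = []
    for j in range(N):
        block = row[F_V + j * E : F_V + (j + 1) * E]
        b = sum(block)
        edge_probs.append([x / b for x in block])
    return node_probs, edge_probs
```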
# Minor 1: Can the proposed method be trained without conditions?
Yes, here are two strategies:
1. Replace the condition encoder with a molecular encoder that uses a self-supervised task during large-scale pretraining. In this case, the generation conditions should be the molecular structure rather than labels. For property conditions, we can retrieve molecular structures with similar labels.
2. Learn null embeddings of the conditions during pretraining. Although fine-tuning the condition encoder for specific tasks will still be necessary, pretraining can help reduce costs and label requirements.
All these strategies are promising directions for extending Graph DiT.
---
Rebuttal Comment 1.1:
Title: Further Questions about Major 2
Comment: Am I correct in assuming that the dataset employed in your study is a balanced subset of the HIV dataset, which includes an equal number of positive and negative samples, rather than the complete dataset?
---
Reply to Comment 1.1.1:
Title: Further response about Major 2
Comment: Thank you for your prompt feedback. Yes, it is the balanced subset with 2,372 examples. All the information about the dataset is provided in Table 4 (Appendix). | Summary: This paper presents Graph DiT, which initially learns conditions (including categorical and numerical properties) through clustering and one-hot encodings. Subsequently, Graph DiT utilizes a Transformer architecture during the diffusion denoising phase to refine noisy molecular graphs incorporating these conditions. Experimental results underscore Graph DiT's efficacy in multi-conditional generation and polymer inverse design, emphasizing its capability to innovate in molecule creation.
Strengths: - **This paper makes somewhat novel contributions**: For example, unlike previous studies that treat node and edge state transitions independently, potentially misaligning with the denoising process, this research proposes a graph-dependent noise model. Graph DiT constructs a transition matrix based on the joint distribution of nodes and edges, enhancing the coherence of the denoising process.
- **The experiments address three primary questions**: (1) The authors validated the generative capabilities of Graph DiT against baseline models from molecular optimization and diffusion. (2) The authors explored polymer inverse design for gas separation. (3) They conducted additional analysis to further examine the capabilities of Graph DiT.
- A portion of the experimental results showed superior performance compared to the baselines.
Weaknesses: - The paper should emphasize the main contributions and acknowledge its limitations.
- The presentation of the paper should be improved as it is difficult to follow.
- **Lack of experiments**: Similar to DiGress, the paper should compute statistics such as uniqueness, novelty, and VUN (Valid, Unique, and Novel graphs) to provide a comprehensive assessment of the method's efficacy and innovation in graph generation and optimization. These metrics are crucial for evaluating the quality and originality of the generated graphs, ensuring a thorough comparison with existing approaches in the field.
- The experimental results in Table 2 did not demonstrate superior performance compared to other baseline methods. The effectiveness of the proposed Graph DiT appears somewhat trivial. Only the results on conditional control show good performance.
Technical Quality: 3
Clarity: 2
Questions for Authors: - Please refer to the "weaknesses" section above.
- The authors argue in lines 25 to 27 that the diverse scales and units of properties present a significant challenge in multi-property optimization. This diversity can complicate the comparison and combination of different properties, potentially leading to skewed optimization results. In my view, a straightforward solution to this issue could involve scaling the properties to a common range, such as 0 to 1. This approach would normalize the scales and facilitate fair comparisons and effective optimization across different properties, thereby addressing the challenge effectively.
- What does "LCC" signify in DiT-LCC?
- What visualization method is used in Figure 6? Is it a linear technique like PCA, or a non-linear method such as UMAP, utilized for dimensionality reduction?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # W1, W2: Contribution, limitation, and paper presentation
Thank you for your comment. We can highlight lines 41-43 and 62-64 using italics to emphasize the contributions and organize them into bullet points. Any further suggestions or discussion are highly appreciated.
We discussed limitations in Section 4.4 (Lines 281, 288-289) regarding generation diversity and Oracle functions used for evaluation. We revise the main paper to acknowledge these again in the Conclusion.
We have corrected Figure (4) colors and revised Eq. (6) for clarity. Any further suggestions or discussions are welcome.
# W2: Lack of metrics (Novelty & Uniqueness)
## Results on required metrics
We show that Graph DiT achieves good values for novelty and uniqueness.
| Method | Validity | Novelty | Uniqueness | V * N * U |
|------------|----------|---------|------------|---------------------------------|
| Graph GA | 1.0000 | 0.9950 | 1.0000 | 0.9950 |
| MARS | 1.0000 | 1.0000 | 0.7500 | 0.7500 |
| LSTM-HC | 0.9910 | 0.9507 | 0.9550 | 0.8997 |
| JTVAE-BO | 1.0000 | 1.0000 | 0.6847 | 0.6847 |
| Digress | 0.9913 | 0.9908 | 0.9730 | 0.9556 |
| DiGress v2 | 0.9812 | 0.9799 | 0.9820 | 0.9442 |
| GDSS | 0.9190 | 0.9190 | 0.1532 | 0.1293 |
| MOOD | 0.9867 | 0.9867 | 0.9730 | 0.9473 |
| Graph DiT | 0.9760 | 0.9702 | 0.8919 | 0.8445 |
## Flaws in uniqueness and novelty
We did not choose uniqueness and novelty as major metrics because recent research shows they can be flawed [1]: randomly adding carbons to existing molecules can yield almost 100% novelty and uniqueness. Instead, we use internal diversity to better reflect generation diversity.
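For concreteness, one common formulation of internal diversity (conventions vary across benchmarks) is one minus the mean pairwise Tanimoto similarity of the generated set's fingerprints. The sketch below represents fingerprints as hypothetical sets of on-bits rather than toolkit objects.

```python
from itertools import combinations

def tanimoto(a, b):
    """Tanimoto similarity between two fingerprints given as sets of on-bits."""
    union = len(a | b)
    return len(a & b) / union if union else 1.0

def internal_diversity(fps):
    """One minus the average Tanimoto similarity over all distinct pairs."""
    pairs = list(combinations(fps, 2))
    return 1.0 - sum(tanimoto(a, b) for a, b in pairs) / len(pairs)

# Toy on-bit sets standing in for Morgan fingerprints:
fps = [{1, 2, 3}, {1, 2, 4}, {7, 8, 9}]
```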
## Current evaluation has 9 metrics, enhanced by expertise
We defend our nine metrics for evaluating molecular quality, practical utility, and diversity. These metrics cover chemical validity, distribution matching (diversity, distance, similarity), and multi-condition controllability.
Additionally, Section 4.3 includes case studies with domain experts, providing valuable assessments often lacking in previous studies.
Based on the above points, we respectfully request a reconsideration of the assessment if the reviewer finds these metrics sufficient. We welcome further discussion on this matter.
# W3: Results in Table 2
## From 0.6 to 0.9, Graph DiT significantly outperforms the baselines
Thank you for your comment. Condition control is a core goal in inverse molecular design, as we need to design drugs/materials that meet human requirements. As shown in Table 2, Graph DiT effectively improves existing baselines from 0.6 to 0.9 in this regard.
Based on the above points, we respectfully request a reconsideration of the assessment that "... results in Table 2 did not demonstrate superior performance..."
## More thoughts on effectiveness and practical utility
We value the reviewer's opinion that an ideal model should excel across all metrics. However, we respectfully argue that defining ground-truth diversity and distribution distance is still debated with many choices (e.g. uniqueness or internal diversity). Therefore, we use up to 9 metrics to comprehensively evaluate model performance from diverse perspectives. Comparing very similar numbers, such as a distance metric of 6.7 vs. 7.0, is less compelling for demonstrating a model's ability to generate practically useful molecules. Significant improvement in multi-conditional controllability is more indicative of practical utility. Additionally, in Section 4.3, we involve domain experts to verify the generated results in real applications.
We greatly appreciate the reviewer’s comments and invite further discussion on the matter for any remaining concerns.
# Q2: Diverse scales of properties
## Lines 217-218 align with the reviewer’s question
Thank you for the excellent question. As detailed in Lines 217-218, our implementation for molecular optimization baselines aligns with your question. Results in Tables 1 and 2 demonstrate that Graph DiT effectively outperforms these baselines.
# Q3 LCC meanings
## Lines 185-188: LCC denotes largest connected component
Thank you for your question. Please see Lines 185-188 for more:
> A common way of converting generated graphs to molecules selects only the largest connected component [42], denoted as Graph DiT-LCC in our model. For Graph DiT, we connect all components by randomly selecting atoms. It minimally alters the generated structure to more accurately reflect model performance than Graph DiT-LCC.
>
# Q4: Visualization method in Figure 6
We use PCA (Principal Component Analysis) to reduce the dimensionality of Morgan Fingerprints [2] to two dimensions for visualization.
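A minimal sketch of such a projection, assuming binary fingerprint vectors; the data here is random and the helper name is hypothetical, purely for illustration.

```python
import numpy as np

def pca_2d(X):
    """Project rows of X (n_samples x n_features) onto their first two
    principal components via SVD of the centered data matrix."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T

rng = np.random.default_rng(0)
fp = (rng.random((50, 128)) > 0.9).astype(float)  # toy binary "fingerprints"
coords = pca_2d(fp)  # 50 x 2 coordinates for a scatter plot
```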
# Reference
[1] Genetic Algorithms are Strong Baselines for Molecule Generation. 2023.
[2] Extended-Connectivity Fingerprints. JCIM 2010.
---
Rebuttal 2:
Title: Response to Authors' Rebuttal
Comment: Thank you for the authors' responses. The authors' rebuttal does not convince me.
Firstly, I disagree with the authors' claim that "randomly adding carbons to existing molecules can yield almost 100% novelty and uniqueness. Instead, we use internal diversity to better reflect generation diversity." Additionally, I could not find evidence of the alleged deficiencies in uniqueness and novelty as mentioned in reference [1]. Could the authors provide evidence supporting the above claims? On the contrary, simply adding carbon atoms, such as carbon chains, tends to result in nearly 100% validity. Uniqueness is an effective measure to assess the model's distribution learning capability (i.e., whether it only generates simple carbon chains), while novelty serves as an indicator of whether the model is experiencing overfitting.
Based on the additional experimental results, the proposed DiT model's uniqueness is not high, at only 0.89. Furthermore, the validity (second to last) and novelty (third to last) do not outperform the baseline models. This suggests that the model's ability to learn molecular graphs (validity), distribution learning (uniqueness), and resistance to overfitting (novelty) are relatively trivial.
---
Rebuttal Comment 2.1:
Title: Response to follow up comment
Comment: Thank you for your follow-up comment.
Regarding the evidence: Table 1 in Reference [1] indicates that the AddCarbon method achieves 99.94% Novelty and 99.86% Uniqueness, both approaching 100%. This supports the claim that "randomly adding carbons to existing molecules can yield almost 100% novelty and uniqueness."
Based on this, Uniqueness may not always work as expected, particularly in the case the reviewer mentions of evaluating "whether it only generates simple carbon chains".
For Novelty, consider three examples: `C` (a single carbon), `CC` (two carbons), and `c1ccccc1` (an aromatic ring). The Novelty metric produces a score of 1 when comparing `C` with either `CC` or `c1ccccc1`, yet the internal diversity metric, as used in the paper, better reveals structural differences when comparing `C` with `CC`/`c1ccccc1`.
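To make the contrast concrete, here is a toy sketch (illustrative only, not the paper's evaluation code) of set-membership Novelty versus internal diversity computed as one minus the mean pairwise Tanimoto similarity; the short bit vectors stand in for real molecular fingerprints:

```python
import numpy as np

def novelty(generated, training):
    """Fraction of generated molecules absent from the training set (exact match)."""
    train = set(training)
    return sum(s not in train for s in generated) / len(generated)

def internal_diversity(fps: np.ndarray) -> float:
    """1 - mean pairwise Tanimoto similarity over binary fingerprints."""
    n = len(fps)
    sims = []
    for i in range(n):
        for j in range(i + 1, n):
            inter = np.sum(fps[i] & fps[j])
            union = np.sum(fps[i] | fps[j])
            sims.append(inter / union if union else 0.0)
    return 1.0 - float(np.mean(sims))

# 'C' and 'CC' both count as fully novel against a training set without them,
# even though they are structurally near-identical...
print(novelty(["C", "CC"], ["c1ccccc1"]))   # 1.0
# ...while fingerprint-based diversity separates close from distant pairs
close = np.array([[1, 1, 0, 0], [1, 1, 1, 0]])
far   = np.array([[1, 1, 0, 0], [0, 0, 1, 1]])
print(internal_diversity(close) < internal_diversity(far))  # True
```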
We agree with the reviewer that Uniqueness and Novelty are important indicators and appreciate the feedback. Graph DiT has achieved a Validity of 0.9760, Novelty of 0.9702, and Uniqueness of 0.8919, which we believe demonstrates its ability to successfully model the data distribution of complex molecules. Additionally, we want to emphasize the outstanding performance of the condition control in Graph DiT. It is the primary focus of our paper, as previous methods have struggled to generate desirable small molecules or polymers for drug and material discovery.
For further discussion on distribution learning, we refer to Lines 229-235, which remain relevant with the updated Novelty and Uniqueness results:
>GraphGA is a simple yet effective baseline for generating in-distribution molecules, e.g., on BBBP and HIV generation datasets. Diffusion model baselines such as DiGress and MOOD could produce diverse molecules but often fail to capture the original data distribution in multi-conditional tasks. Graph DiT shows the competitive performance of diffusion models in fitting complex molecular data distributions. Using fragment-based similarity and neural network-based distance metrics, we achieve the best in the polymer task and rank second in the HIV small molecule task, involving up to 11 and 29 types of heavy atoms, respectively.
>
We appreciate the reviewer's follow-up questions and welcome further discussion on any concerns the reviewer feels have not yet been fully addressed. | Summary: This research introduces the Graph Diffusion Transformer (Graph DiT) for generating molecules with multiple properties, such as synthetic score and gas permeability. Unlike previous models, Graph DiT uses a new noise model and a Transformer-based denoiser to better handle molecular structures. Experiments show that it performs well in generating both polymers and small molecules.
Strengths: 1. The paper successfully applies the DiT framework from computer vision.
2. The method applies noise jointly to atoms and bonds.
Weaknesses: 1. **Limited Demonstration of Multi-Condition Capability:** The paper demonstrates conditional generation with only two conditions, such as Synth. and HIV, yet claims capability for multiple conditions. To substantiate this claim, it would be beneficial to test on more complex condition sets such as GSK3β+JNK3+QED+SA, as evaluated in the MARS framework and many other methods, or on at least three properties, like Synth., HIV, and BBBP, to validate the multi-conditional generative capacity.
2. **Unexplored Texture Conditions:** Given that the model incorporates strategies from DiT, it is imperative to assess how it performs under texture conditions. Testing under these conditions would provide a more comprehensive evaluation of the model's versatility and adherence to the principles derived from DiT.
3. **Incomplete Evaluation Metrics for Unconditional Generation:** The evaluation metrics employed for unconditional generation do not adequately measure key aspects such as uniqueness, novelty, and FCD (Fréchet ChemNet Distance). Including these metrics would offer a more rounded understanding of the model’s performance in generating novel and diverse molecular structures.
4. **Ambiguity in Graph-Dependent Noise Schedule Impact:** While the advantages of a graph-dependent noise schedule are discussed, Figure 4(c) does not provide clear empirical evidence of its impact, particularly in comparison with separate discrete diffusion schedules.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How do you determine the rank $K$ of molecules under single-condition constraints, and how does this compare to the rank $K$ in multi-condition scenarios?
2. Could you clarify the methodology used by the Oracle to rank these graphs, particularly how closely attributes of ranked molecules match the conditional attributes?
3. Does the graph-dependent noise schedule enhance the validity of your method compared to a separate discrete diffusion schedule? Could you provide comparative results?
4. Given the higher similarity and lower distance of your generated molecules to a reference set, how do you ensure that the novelty is not compromised?
5. What advantages does using a learnable dropping embedding offer over simply excluding the condition encoder from the training process in unconditional generation?
6. Have you explored generation with three or more conditions, such as Synth., HIV, and BBBP? If not, what constraints prevent such multi-condition generation?
7. Could you provide examples of true polymers corresponding to the conditions shown in Figure 3, to facilitate a direct comparison and enhance the evaluation of your model’s performance?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # W1: Limited Demonstration of Multi-Condition Capability
## We evaluate models on up to four conditions, not two
Thank you for your comment. In Table 1, we evaluate all models on up to four conditions, including $O_2$, $N_2$, and $CO_2$ permeability.
Based on the above points, we respectfully request a reconsideration of the assessment that "Limited Demonstration of Multi-Condition Capability".
# W2: Text condition
## Text condition is out of the scope of the paper
We appreciate your comment. Property conditions align with our research objectives in inverse molecular design.
## The DiT work [1] did not explore text conditions
The original DiT work [1] used classes from ImageNet rather than text conditions, leaving text conditioning for future work.
Based on the above points, we respectfully request a reconsideration of the assessment that "it is imperative to assess how it performs under texture condition" and "adherence to the principles derived from DiT".
# W3: Incomplete uniqueness, novelty, and FCD
## FCD was used in the paper
Line 208, Tables 1 and 2, and Figure 4 presented FCD as the distance metric.
## Uniqueness and novelty are provided
Here are results complementing Table 1:
| Method | Validity | Novelty | Uniqueness | V * N * U |
|------------|----------|---------|------------|---------------------------------|
| Graph GA | 1.0000 | 0.9950 | 1.0000 | 0.9950 |
| MARS | 1.0000 | 1.0000 | 0.7500 | 0.7500 |
| LSTM-HC | 0.9910 | 0.9507 | 0.9550 | 0.8997 |
| JTVAE-BO | 1.0000 | 1.0000 | 0.6847 | 0.6847 |
| Digress | 0.9913 | 0.9908 | 0.9730 | 0.9556 |
| DiGress v2 | 0.9812 | 0.9799 | 0.9820 | 0.9442 |
| GDSS | 0.9190 | 0.9190 | 0.1532 | 0.1293 |
| MOOD | 0.9867 | 0.9867 | 0.9730 | 0.9473 |
| Graph DiT | 0.9760 | 0.9702 | 0.8919 | 0.8445 |
Graph DiT achieves good results. However, we note these metrics alone don't necessarily indicate practical utility.
## Uniqueness and novelty may be flawed metrics
Recent research has shown uniqueness and novelty can be easily flawed [2]: Randomly adding carbons to existing molecules can yield almost 100% novelty and uniqueness. We have opted to use internal diversity, which offers a more nuanced reflection of generation diversity.
## Tables 1 and 2 present up to 9 comprehensive metrics
The nine metrics (Lines 204-211) are comprehensive, evaluating chemical validity, distribution matching, and multi-condition controllability. They assess diverse and useful molecule generation. Section 4.3's expert case studies provide additional valuable model assessment.
# W4: Ambiguity in graph-dependent noise
## Figure 4(c) shows ~2x improvement on controllability
Thank you for your feedback. Figure 4(c) shows that the non-dependent noise model achieves only 49-55% of the graph-dependent model's controllability for gas permeability, demonstrating a significant improvement. We respectfully request a re-examination of the results.
# Q1: Ranking and K value
## Details are in Lines 35-37, 56-57
For single-condition constraints (Lines 35-37):
> we check whether a shared polymer structure that meets multi-property constraints can be identified across different condition sets. If we find the polymer, its rank K (where K is between 1 and 30) indicates how high it appears on the lists, considering all condition sets. If not, we set K as 30.
>
For multi-condition generated graphs (Lines 56-57):
> The Oracle determines the rank of this graph among 30 single-conditionally generated graphs for each condition.
>
# Q2: How oracle rank the molecular graphs
## Details are in Appendix B.3 (Lines 520-522)
For how Oracle ranks polymers (Lines 520-522):
> We rank these polymers based on the mean absolute error between the generated properties (evaluated by a random forest model trained on all the data to simulate the Oracle function) and the conditional property.
We use ranking positions to measure closeness to target conditions. For multi-conditional generated polymers, their median ranks are 4, 9, and 11 for Synth., $O_2$, and $N_2$ permeability (Lines 57-58, Figure 1).
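The ranking step can be sketched as follows; `pred_props` stands in for the random-forest oracle's property predictions, and the function is an illustrative reconstruction rather than the actual implementation:

```python
import numpy as np

def oracle_rank(pred_props: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Rank candidates (1 = best) by mean absolute error to the target properties.

    pred_props: (n_candidates, n_properties) properties predicted by the oracle
    target:     (n_properties,) conditional property values
    """
    mae = np.abs(pred_props - target).mean(axis=1)
    order = np.argsort(mae)                  # lowest MAE first
    ranks = np.empty(len(mae), dtype=int)
    ranks[order] = np.arange(1, len(mae) + 1)
    return ranks

# Three candidates evaluated on two properties against one target condition
preds = np.array([[0.9, 10.0], [0.5, 12.0], [0.2, 30.0]])
target = np.array([0.9, 10.0])
print(oracle_rank(preds, target))   # [1 2 3]
```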
# Q3: Does Graph-dependent noise model enhance validity?
## Yes, it improves validity from 0.4946 to 0.8245
# Q4: Novelty
## Good similarity and distance do not imply bad novelty
Novelty is provided in response to W3.
In Line 204, the reference set consists of left-out test cases unknown during training. Therefore, good similarity and distance to the reference set do not indicate poor novelty, which is typically defined on the training set.
# Q5: Learnable dropping embeddings
It is widely used [1,3,4] by DiT [1] and DALL-E 2 [4]. It enhances flexibility for handling missing values and improves training stability by learning representations for null embeddings.
# Q6: Have you explored three or more conditions?
## Yes, we explored and reported results for up to four conditions in Table 1
The HIV and BBBP datasets contain only one overlapping molecule. We may use learnable dropping embeddings to handle missing condition values, but we cannot obtain enough test cases for multi-conditional evaluation.
# Q7: True polymer in Figure 3
The SMILES string is:
```
NC1=C(*)C=CC(=C1)C1=CC(N)=C(C=C1)N1C(=O)C2=CC=C(C=C2C1=O)C(C1=CC=C2C(=O)N(*)C(=O)C2=C1)(C(F)(F)F)C(F)(F)F
```
The figure is in the rebuttal PDF and will be updated in the paper.
# Reference
[1] Scalable Diffusion Models with Transformers. ICCV 2023.
[2] Genetic Algorithms are Strong Baselines for Molecule Generation. 2023.
[3] Classifier-Free Diffusion Guidance. NeurIPS 2021 Workshop on Deep Generative Models and Downstream Applications.
[4] Hierarchical Text-Conditional Image Generation with CLIP Latents. 2022.
---
Rebuttal 2:
Comment: Thank you for your response. The feedback addressed several of my concerns; however, I still have a few remaining issues. Regarding W1, the conditions $O_2$, $N_2$, and $CO_2$ represent similar conditions without any competitive dynamics. Consequently, I would appreciate further clarification on the performance of the proposed method in relation to GSK3β+JNK3+QED+SA. For W3, Uniqueness and Novelty may be flawed; however, while a high VUN value does not necessarily indicate high generative performance, a low VUN value does indicate underperformance in the generative process.
---
Rebuttal 3:
Title: Response to follow up comment
Comment: # W1: Conditions on Gas Permeability
Thank you for your feedback. We believe that conditional generation based on multiple gas permeability properties is a challenging task.
First, **the condition space for gas permeability is vast**, as noted in Line 26, where gas permeability varies widely, exceeding 10,000 Barrer units. This variability presents significant technical challenges in controlling polymer generation across such a broad condition space.
Second, **the task relatedness in multi-condition settings does not reduce the difficulty but rather increases it**. As detailed in Section 4.3, polymers often require high permeability for one gas and low permeability for another, making it challenging for the generation model to capture the crucial differences between various gas permeability properties. The GSK3β and JNK3 properties mentioned by the reviewer are also related, as both are serine/threonine protein kinases.
Finally, we have provided additional results to further clarify our model's performance (best results are highlighted for distribution learning and condition control).
| Method | Validity ↑ | Coverage ↑ | Diversity ↑ | Similarity ↑ | Distance ↓ | Synth. (MAE↓) | QED (MAE↓) | GSK3$\beta$ (Acc ↑) | JNK3 (Acc ↑) |
|------------|------------|------------|-------------|--------------|-------------|---------------|------------|---------------|--------------|
| Graph GA | 1 | 7/7 | 0.8727 | 0.9438 | **17.5076** | 0.8405 | 0.1864 | 0.6050 | 0.7240 |
| MARS | 1 | 7/7 | 0.7010 | 0.6568 | 39.2837 | 0.7961 | 0.2053 | 0.6440 | 0.7390 |
| LSTM-HC | 0.999 | 7/7 | 0.8739 | 0.9313 | 18.7856 | 0.8366 | 0.1864 | 0.6096 | 0.7608 |
| JTVAE-BO | 1 | 5/7 | 0.6695 | 0.8567 | 48.8672 | 0.8614 | 0.2346 | 0.6280 | 0.7170 |
| DiGress | 0.251 | 7/7 | **0.8977** | 0.6508 | 32.5904 | 3.1438 | **0.1670** | 0.6693 | 0.7331 |
| DiGress v2 | 0.265 | 7/7 | 0.8976 | 0.7475 | 31.4630 | 2.9744 | 0.1772 | 0.6566 | 0.7358 |
| Graph DiT | 0.852 | 7/7 | 0.8647 | **0.9458** | 19.7717 | **0.7430** | 0.1805 | **0.9416** | **0.9777** |
Due to time constraints, we sampled a subset of 600 data points from the kinase dataset provided by the MARS paper and split them into training, validation, and test sets in the same manner as described in the paper. We then generated 1,000 examples for evaluation. The results show that Graph DiT significantly outperforms other methods on the GSK3β and JNK3 properties, with strong distribution matching to the reference set.
# W3: VUN Metrics
Our model demonstrates reasonable values on the VUN metrics. We believe that a Validity of 0.9760, Novelty of 0.9702, and Uniqueness of 0.8919 illustrate Graph DiT's ability to successfully model the data distribution of complex molecules.
We agree with the reviewer that Novelty and Uniqueness are important perspectives in evaluating molecular generation models. We would also like to share additional thoughts on drug and material discovery: this field often focuses on individual instances with significant real-world impact. A model capable of suggesting a single valid, novel, and unique drug or material that satisfies diverse property requirements, even if it also produces many invalid ones (resulting in lower VUN metrics), may be as promising as models that generate numerous valid, novel, and unique suggestions but fail to meet specific condition requirements.
We appreciate the reviewer's follow-up questions and welcome further discussion on any concerns the reviewer feels have not yet been fully addressed.
---
Rebuttal Comment 3.1:
Comment: The results of W1 appear promising. However, I am uncertain whether these results demonstrate the model's ability to generate molecules that concurrently satisfy QED $\ge$ 0.6, SA $\le$ 0.4, the inhibition scores of GSK3β and JNK3 $\ge$ 0.5 [2]. Could you report the success rate (SR) of four properties as described in MARS [1] and RationalRL [2]; or alternatively, provide the top-$k$ average property score (APS) of four properties as described in DST[3] (Appendix C3).
MARS: Success rate (SR) is the percentage of generated molecules that are evaluated as positive on all given objectives (QED $\ge$ 0.6, SA $\ge$ 0.67 [this should be SA $\le$ 0.4], the inhibition scores of GSK3β and JNK3 $\ge$ 0.5);
[1] Xie, Yutong, et al. "MARS: Markov Molecular Sampling for Multi-objective Drug Discovery." International Conference on Learning Representations, 2021.
[2] Jin, Wengong, Regina Barzilay, and Tommi Jaakkola. "Multi-objective molecule generation using interpretable substructures." International conference on machine learning. PMLR, 2020.
[3] Fu, Tianfan, et al. "Differentiable Scaffolding Tree for Molecule Optimization." International Conference on Learning Representations, 2022.
---
Reply to Comment 3.1.1:
Title: Response to follow up comment
Comment: Thank you for your prompt response.
Our multi-conditional generation setting differs from [1,2] in that we use true property combinations from the test set, rather than focusing on optimizing molecules toward a single target combination, i.e., the reviewer suggested QED $\geq$ 0.6, SA $\leq$ 4 (before scaling, according to [2]), GSK3 $\geq$ 0.5, JNK3 $\geq$ 0.5. This approach allows us to flexibly condition on various property combinations, especially for continuous conditions.
We can adjust the input conditions for Graph DiT to generate molecules with specific properties as requested by the reviewer. Below, we present the results for 1,000 generated molecules based on the input conditions: (1) QED randomly sampled from [0.6, 0.9], (2) SA randomly sampled from [1, 4] (before scaling), (3) GSK3β=1, and (4) JNK3=1. The success rate is 93.37%.
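The success-rate computation itself is simple; in this sketch the property arrays are illustrative stand-ins for oracle scores of generated molecules, with the SA threshold of 4 taken before scaling as discussed above:

```python
import numpy as np

def success_rate(qed, sa, gsk3b, jnk3):
    """Fraction of generated molecules positive on all four objectives
    (QED >= 0.6, SA <= 4 before scaling, GSK3b >= 0.5, JNK3 >= 0.5)."""
    ok = (qed >= 0.6) & (sa <= 4.0) & (gsk3b >= 0.5) & (jnk3 >= 0.5)
    return float(ok.mean())

# Toy oracle scores for three generated molecules
qed   = np.array([0.7, 0.9, 0.4])
sa    = np.array([2.0, 3.5, 1.0])
gsk3b = np.array([0.8, 0.6, 0.9])
jnk3  = np.array([0.9, 0.7, 0.9])
print(success_rate(qed, sa, gsk3b, jnk3))   # 2 of 3 molecules pass -> 0.666...
```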
We greatly appreciate the reviewer's time and suggestions and hope this discussion helps address the concern.
# Reference
[1] Xie, Yutong, et al. "MARS: Markov Molecular Sampling for Multi-objective Drug Discovery." International Conference on Learning Representations, 2021.
[2] Jin, Wengong, Regina Barzilay, and Tommi Jaakkola. "Multi-objective molecule generation using interpretable substructures." International conference on machine learning. PMLR, 2020. | Summary: This work proposes Graph Diffusion Transformer (Graph DiT), which is a a molecular generation model based on multi-condition, and diffusion process. Graph DiT enables multi-conditional molecular generation, integrating multiple properties, eg. synthetic score and gas permeability.Graph DiT employs a graph-dependent noise model (instead of node level or edge level), and is claimed to improve noise estimation accuracy in molecules. Empirical validation across polymer and small molecule generation tasks shows Graph DiT’s better performance in condition control and distribution learning.
Strengths: - The topic of how diffusion models can be made more effective for multi-conditional molecular generation is focused, and the respective limitations of existing work are illustrated with examples. Multiple possible directions for improving the integration of conditions are also discussed.
- The use of a graph-dependent noise model forms the basis for enhancing noise estimation beyond what is seen in DiGress and similar models. Several challenges can arise in doing this, and the paper discusses them to a good extent.
- Empirical results show improvements in multiple metrics compared to Digress, MOOD and other baselines.
Weaknesses: - Due to the graph dependent noise, the model may be limited or unscalable to medium or large scale graphs.
- Minor format comment: the color shades in the charts like in Figure 4 are inconsistent with their labels.
Technical Quality: 3
Clarity: 2
Questions for Authors: - Since the model is named Graph DiT, the methodology is unclear on how the 'graph structure' is modeled within the architecture; otherwise, it would not qualify as a graph-specific Transformer-based model.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # W1: Scalability of graph dependent noise
## Subgraph-level noise model is possible for larger graphs
Thank you for your comment. The transition matrix is manageable for molecules, as we only need to model heavy atoms, which usually number less than 50 for a molecule [1].
For larger graphs, we could explore building the matrix at the subgraph level in the future, treating subgraphs as nodes and maintaining only inter-subgraph edges. This approach could handle larger-scale graphs more efficiently.
# W2: Color shades
Thank you for your comment. The discrepancy is due to different alpha values for transparency. We have updated Figure 4 with consistent alpha values and provided it in the rebuttal PDF.
# Q1: How graph structure is modeled
Thank you for your question. Graph DiT differs from vision and language Transformers by using graph tokens. Given a node on the graph, our new graph token concatenates node features with all related edge features, preserving node connectivity.
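One plausible reading of this tokenization, sketched in NumPy (the feature dimensions and the row-flattening layout are our assumptions, not the paper's exact implementation):

```python
import numpy as np

def graph_tokens(node_feats: np.ndarray, edge_feats: np.ndarray) -> np.ndarray:
    """One token per node: its node features concatenated with the features
    of all edges incident to it (its row of the edge-feature tensor),
    so node connectivity is preserved inside the token."""
    n = node_feats.shape[0]
    # flatten each node's row of edge features: (n, n, d_e) -> (n, n * d_e)
    edge_rows = edge_feats.reshape(n, -1)
    return np.concatenate([node_feats, edge_rows], axis=1)

n, d_v, d_e = 4, 8, 3                 # toy molecule: 4 atoms
nodes = np.random.rand(n, d_v)        # atom-type features
edges = np.random.rand(n, n, d_e)     # bond-type features, incl. "no bond"
tokens = graph_tokens(nodes, edges)
print(tokens.shape)                   # (4, 8 + 4*3) = (4, 20)
```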
# Reference
[1] Molecular sets (MOSES): a benchmarking platform for molecular generation models. Frontiers in Pharmacology. 2020. | Rebuttal 1:
Rebuttal: We appreciate the time and effort of all the reviewers in evaluating our work. In response to the following comments, we have attached a PDF with three figures:
1. **RRe9**: "Minor format comment: the color shades in the charts like in Figure 4 are inconsistent with their labels."
2. **hJ6K**: "Could you provide examples of true polymers corresponding to the conditions shown in Figure 3"
3. **rG3q**: "The authors should calculate and visualize the minimal $K$ at which a molecule can satisfy all constraints within rank $K$ for all generated molecules."
We believe that all concerns have been adequately addressed. Should there be any important issues that we may not have fully addressed, we would greatly appreciate the opportunity to discuss them further.
Pdf: /pdf/080a7dfd3e79e5f32f84095291c386b0ae4ca7e6.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Long-Tailed Out-of-Distribution Detection via Normalized Outlier Distribution Adaptation | Accept (poster) | Summary: The paper proposes a novel approach, namely normalized outlier distribution adaptation (AdaptOD), to tackle the distribution shift problem in long-tailed OOD detection, where ID classes are heavily imbalanced, i.e., the true OOD samples exhibit a very different probability distribution over the head and tail ID classes than the auxiliary outliers. One of its key components, dynamic outlier distribution adaptation, effectively adapts a vanilla outlier distribution learned from the outlier samples to the true OOD distribution by utilizing the OOD knowledge in the predicted OOD samples during inference. Further, to obtain a more reliable set of predicted OOD samples on long-tailed ID data, a novel dual-normalized energy loss is introduced in AdaptOD, which leverages class- and sample-wise normalized energy to enforce a more balanced prediction energy on imbalanced ID samples.
Strengths: 1. The paper is written well and is easy to understand.
2. The studied problem is very important.
3. The results seem to outperform state-of-the-art.
Weaknesses: 1. It would be more comprehensive if accuracy on ID data were also reported
2. I am curious why there is no weight coefficient before the DNE loss
3. It would be better to ablate the number of outlier data used during adaptation
Technical Quality: 3
Clarity: 3
Questions for Authors: see above
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your appreciation of the novelty of our approach, its empirical justification, the problem setting, and the paper's clarity, as well as your invaluable feedback on further enhancing the work. Please find our one-by-one responses to your concerns below:
> **W1: It would be comprehensive if the accuracy is reported on ID data**
We have reported the ID accuracy of all methods on CIFAR10-LT and CIFAR100-LT in Tab. 2 and on ImageNet-LT in Tab. 4. Notably, our AdaptOD achieves not only consistently substantial improvements over current SOTA methods on OOD detection metrics but also consistently better ID classification accuracy on all three widely used ID classification benchmarks.
> **W2: I am curious why there is not a weight coefficient before DNE loss**
This is mainly because the DNE loss works stably with the cross-entropy loss in Eq. 11. We provide some ablation study results of having a coefficient hyperparameter for regularizing the DNE loss in Tab. G below, from which it is clear that the DNE loss can work well within its importance hyperparameter set in [0.5, 2.0].
Table G: Averaged AUC results of OOD detection on CIFAR10/100-LT for AdaptOD with different importance weights of the DNE loss
| Weight | 0.5 | 1.0 | 1.5 | 2.0 |
| :---------: | :---: | :---: | :---: | :---: |
| CIFAR10-LT | 94.51 | 94.69 | 94.65 | 94.56 |
| CIFAR100-LT | 81.85 | 81.93 | 81.90 | 81.86 |
> **W3: It is better to ablate on the number of outlier data used during adaptation**
If we understand your question correctly, the results in Fig. 3 should be what you are looking for, where we report the performance of our proposed DODA and two existing TTA methods with an increasing number of OOD samples. It shows that our AdaptOD can utilize test-time OOD data to adapt to the true OOD distribution much faster and substantially more effectively.
Strengths: - The proposed method, reducing distribution shift of OOD on training and inference stage, is novel.
- The paper includes a detailed ablation study that demonstrates the generality of the proposed methodology.
- The paper's class-wise and sample-wise normalization techniques improve the balance in prediction energy for imbalanced ID samples, enhancing overall performance compared to existing methods.
- This framework shows good experimental results compare to several baselines.
Weaknesses: - The overview of Figure 2 is difficult to understand. Simplifying and highlighting only the necessary information would be beneficial. There are too many redundant details.
- While recent methodologies improve long-tailed OOD detection performance using synthetic outliers without real outliers, this approach still relies on auxiliary outliers. This dependency can limit the practical applicability of the method.
- The method's effectiveness is questionable when handling distribution shifts between training auxiliary outliers and actual real-world outliers, potentially compromising its performance if the shift is large.
Technical Quality: 2
Clarity: 2
Questions for Authors: N/A
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: see weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive feedback and appreciation of the novelty of our work and its empirical justification. We provide our response to your concerns one by one in the following.
> **W1: The overview of Figure 2 is difficult to understand. Simplifying and highlighting only the necessary information would be beneficial.**
Thanks for your advice, we have provided a revised Figure 2 in the attached PDF and simplified the overview of Figure 2 as follows.
- We will rewrite the DODA part as: Each test sample is assigned a global energy-based OOD score $\mathbb G(x)$, which is used to adapt the outlier distribution $\mathcal{P}^{out}$. DODA then uses the adapted outlier distribution $\mathcal{P}^{out}$ to calibrate the global energy score, obtaining the calibrated score $\mathbb G^{\mathcal{P}}(x)$ as the OOD score.
- We will also rewrite the DNE part as: In each iteration, DNE first applies Batch Energy Normalization to the logit output to obtain the normalized energy, and then uses this energy to optimize a dual energy loss at both the class and sample levels.
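For illustration, the energy here can be taken as the standard free-energy score $E(x) = -T\log\sum_k e^{f_k(x)/T}$; the batch-mean rescaling below is only a hypothetical stand-in for DNE's actual class- and sample-wise normalization scheme:

```python
import numpy as np

def energy(logits: np.ndarray, T: float = 1.0) -> np.ndarray:
    """Free-energy score E(x) = -T * logsumexp(f(x)/T), computed stably."""
    m = logits.max(axis=1, keepdims=True)          # shift for numerical stability
    lse = np.log(np.exp((logits - m) / T).sum(axis=1))
    return -T * lse - m.squeeze(1)

def batch_normalized_energy(logits: np.ndarray) -> np.ndarray:
    """Hypothetical batch energy normalization: rescale per-sample energies
    by the batch mean magnitude so the loss scale is comparable across
    batches and imbalance factors."""
    e = energy(logits)
    return e / np.abs(e).mean()

logits = np.random.randn(16, 10)   # a batch of 16 samples, 10 ID classes
e_norm = batch_normalized_energy(logits)
print(e_norm.shape)                # (16,)
```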
> **W2: While recent methodologies improve long-tailed OOD detection performance using synthetic outliers without real outliers, this approach still relies on auxiliary outliers. This dependency can limit the practical applicability of the method.**
As far as we know, existing SOTA long-tailed OOD detection methods all take the outlier exposure (OE) approach, relying on the availability of outlier data; non-OE-based methods still cannot work well on long-tailed ID data due to the heavy class imbalance. However, all these existing OE-based methods are challenged by the distribution gap between the outlier data and the true OOD data. The proposed AdaptOD effectively tackles this common, crucial challenge via the novel DODA and DNE modules. We agree that there may be application scenarios where no auxiliary outlier data is available. We will explore extending our method to this setting by training the initial outlier distribution on synthetic outlier data, or even randomly initializing the outlier distribution. Specifically, we experimented with randomly initialized Gaussian outliers, with each dimension sampled from an isotropic Gaussian distribution, as the auxiliary outliers to train the OOD models, as shown in Tab. E and Tab. F below.
Table E: Results on CIFAR10-LT using randomly Gaussian initializing as auxiliary outliers
| | AUC $\uparrow$ | AP-in $\uparrow$ | AP-out $\uparrow$ | FPR $\downarrow$ |
| :------: | :------------: | :--------------: | :---------------: | :--------------: |
| EnergyOE | 84.65 | 85.27 | 82.66 | 52.70 |
| COCL | 90.53 | 89.44 | 88.92 | 37.89 |
| AdaptOD | 93.06 | 92.75 | 92.87 | 29.31 |
Table F: Results on CIFAR100-LT using randomly Gaussian initializing as auxiliary outliers
| | AUC $\uparrow$ | AP-in $\uparrow$ | AP-out $\uparrow$ | FPR $\downarrow$ |
| :------: | :------------: | :--------------: | :---------------: | :--------------: |
| EnergyOE | 71.85 | 72.37 | 68.61 | 80.27 |
| COCL | 76.07 | 76.74 | 69.58 | 78.62 |
| AdaptOD | 79.21 | 81.25 | 74.89 | 70.48 |
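Generating such Gaussian auxiliary outliers is straightforward; a minimal sketch, with the sample count and feature dimensionality chosen arbitrarily:

```python
import numpy as np

def gaussian_outliers(n: int, dim: int, seed: int = 0) -> np.ndarray:
    """Synthetic auxiliary outliers with each dimension drawn i.i.d.
    from a standard (isotropic) Gaussian distribution."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal((n, dim))

outliers = gaussian_outliers(n=256, dim=512)
print(outliers.shape)   # (256, 512)
```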
> **W3: The method's effectiveness is questionable when handling distribution shifts between training auxiliary outliers and actual real-world outliers, potentially compromising its performance if the shift is large.**
To mitigate the influence of a large distribution shift between the training auxiliary outliers and actual real-world outliers, DODA utilizes statistics of the training ID data to guarantee effective optimization of the outlier distribution in Eq. 3 during the testing phase. This alleviates the reliance on priors from the training auxiliary outlier data during DODA's adaptation, thereby reducing the adverse effects of auxiliary outlier data that is highly different from the true OOD data. As a result, our AdaptOD achieves SOTA performance over six large, diverse OOD test datasets (including both near-OOD and far-OOD datasets) as well as three synthetic OOD datasets on the CIFAR10/100-LT ID benchmarks.
AdaptOD introduces two main components: DODA performs test-time adaptation to dynamically adjust the outlier distribution to better align with the true OOD distribution, using the OOD knowledge from predicted OOD samples during inference, while DNE is designed to balance the prediction energy for imbalanced ID samples during training, which helps in learning a more effective vanilla outlier distribution that is crucial for DODA's adaptation process.
The authors demonstrate the effectiveness of AdaptOD through extensive experiments on CIFAR10/100-LT and ImageNet-LT. The results indicate its superiority in handling the long-tailed OOD detection problem.
Strengths: 1. Exploring OOD detection in the context of long-tailed recognition has practical value, and the discovery of strong distribution shift between outlier samples and true OOD samples in LTR scenarios, as shown in the Fig. 1a, is meaningful.
2. The motivation is clear. AdaptOD utilizes DNE to learn a better vanilla outlier distribution first, and then performs DODA to dynamically adapt the outlier distribution to the true OOD distribution.
3. The method is elegant. DODA only calibrates the outlier distribution, effectively eliminating retraining or memory overheads while well approximating the upper-bound performance. DNE effectively adapts the standard energy loss to LTR scenarios with batch energy normalization, and further eliminates the dependence of the energy bar on imbalance factors and training datasets.
4. This paper is well-written, and the empirical evaluation is comprehensive.
Weaknesses: 1. The goal of the DNE component is to "enforce more balanced prediction energy for imbalanced ID samples" (line 88), which is similar to BERL [1]. The authors should further clarify their differences.
2. AdaptOD performs better in near OOD dataset CIFAR, which cannot be achieved by previous methods. It would be better to further discuss the reason.
[1] Choi, H., Jeong, H., Choi, J.Y.: Balanced energy regularization loss for out-of-distribution detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 15691–15700 (2023)
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. DNE is designed for more effective DODA, what's the most important factor in applying TTA methods on OOD detection in LTR during training stage?
2. The updated form of the outlier distribution in Eq.4 is sample-wise, how does it compare to batch-wise updates?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations have been discussed in Appendix E.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for appreciating our contribution and for your thoughtful and detailed feedback. Please find our responses to your concerns below:
> **W1: The goal of the DNE component is to "enforce more balanced prediction energy for imbalanced ID samples" as in line 88, which is similar to BERL**
BERL only balances the prediction energy at the sample level, while our DNE balances not only the sample level but also the class level, with the support of the novel batch energy normalization. This difference allows AdaptOD to achieve a much better energy balance and thus better OOD detection and ID classification accuracy, as shown in Tabs. 2 and 3. It also provides stable energy margins across different datasets, eliminating the need for manual tuning of these margins as in BERL and enabling better adaptation during DODA.
> **W2: AdaptOD performs better in the near-OOD dataset CIFAR, which cannot be achieved by previous methods. It would be better to discuss the reason in further detail**
The near-OOD dataset CIFAR is similar to the training ID data, so previous methods often confuse the two. However, DODA can utilize the test-time OOD data during inference to effectively adapt the outlier distribution and subsequently calibrate the global energy score. As a result, it helps AdaptOD discriminate near-OOD samples from ID samples better than previous methods. This is supported by the results in Fig. 1 in the attached PDF in the general response, where the performance of three TTA methods, including ours, increases significantly with an increasing number of OOD samples from CIFAR involved in the adaptation.
> **Q1: DNE is designed for more effective DODA, what's the most important factor in applying TTA methods on OOD detection in LTR during the training stage?**
Energy regularization is a mainstream approach to improving OOD performance, and in long-tailed OOD detection it is used to alleviate the bias of the global energy toward head samples in existing studies such as BERL. However, we observe that there is a class-level bias toward head classes in addition to the sample-level bias. Thus, one major contribution of DNE is to enable more balanced energy predictions over head and tail samples due to its sample- and class-level energy debiasing. Another important contribution lies in its Batch Energy Normalization, which addresses a largely ignored issue in long-tailed OOD detection: the need for careful manual tuning of energy margins to work well on different ID/OOD datasets. DNE provides stable energy margins for different long-tailed ID/OOD datasets, alleviating the aforementioned biases toward head classes without relying on data-specific manual margin tuning.
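As background for the energy discussion above, the vanilla (unnormalized) free-energy score commonly used in energy-based OOD detection can be sketched as follows. This is only an illustrative baseline: DNE's sample- and class-level normalized variant is more involved than this plain form.

```python
import numpy as np

def energy_score(logits):
    """Vanilla free-energy OOD score, E(x) = -logsumexp(f(x)).
    Lower energy typically indicates ID-like inputs; higher energy
    indicates OOD-like inputs."""
    m = logits.max(axis=-1, keepdims=True)  # subtract the max for numerical stability
    return -(m.squeeze(-1) + np.log(np.exp(logits - m).sum(axis=-1)))

confident_id = np.array([[10.0, 0.0, 0.0]])  # one dominant class logit
uncertain = np.array([[0.0, 0.0, 0.0]])      # flat, uncertain logits
# the confident ID-like sample receives much lower energy than the flat one
```

Energy-regularization losses push ID and outlier energies apart on top of this score; DNE additionally normalizes the energies within each batch so that the margins do not need dataset-specific tuning.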
> **Q2: The updated form of the outlier distribution in Eq.4 is sample-wise, how does it compare to batch-wise updates?**
In DODA we utilize the statistics of the training ID data to guarantee the optimization of the outlier distribution in Eq. 3 during the test phase, and the pseudo label assigned to each unlabeled test sample depends only on these statistics. Therefore, updating the outlier distribution once per predicted OOD sample is sufficient for our method to work well. If batch-wise updates are used, the optimization of the outlier distribution becomes more stable at the beginning of DODA, with only a minor difference in detection effectiveness, as shown in Tab. C and Tab. D below. Sample-wise dynamic updates as in Eq. 4 are more desirable than batch-wise approaches in the pursuit of instant OOD detection, so we focus on the former in this work.
Table C: Result of sample-wise and batch-wise update on CIFAR10-LT with AdaptOD.
| | AUC $\uparrow$ | AP-in $\uparrow$ | AP-out $\uparrow$ | FPR $\downarrow$ |
| :---------: | :------------: | :--------------: | :---------------: | :--------------: |
| Sample-wise | 94.69 | 93.89 | 94.12 | 27.26 |
| Batch-wise | 94.74 | 93.95 | 94.18 | 27.22 |
Table D: Result of sample-wise and batch-wise update on CIFAR100-LT with AdaptOD.
| | AUC $\uparrow$ | AP-in $\uparrow$ | AP-out $\uparrow$ | FPR $\downarrow$ |
| :---------: | :------------: | :--------------: | :---------------: | :--------------: |
| Sample-wise | 81.93 | 83.09 | 77.83 | 67.37 |
| Batch-wise | 82.06 | 83.14 | 77.88 | 67.32 |
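As a toy illustration of the sample-wise vs. batch-wise distinction (using a generic running-mean update as a stand-in; the actual adaptation rule of Eq. 4 is more involved):

```python
import numpy as np

def sample_wise_update(mu, x, count):
    """Incorporate one predicted-OOD sample into a running estimate,
    updating immediately after each sample arrives."""
    count += 1
    mu = mu + (x - mu) / count
    return mu, count

def batch_wise_update(mu, batch, count):
    """Incorporate a whole batch of predicted-OOD samples at once."""
    n = len(batch)
    mu = (count * mu + batch.sum()) / (count + n)
    return mu, count + n

xs = np.array([1.0, 3.0, 5.0, 7.0])  # hypothetical predicted-OOD statistics
mu_s, c_s = 0.0, 0
for x in xs:
    mu_s, c_s = sample_wise_update(mu_s, x, c_s)
mu_b, c_b = batch_wise_update(0.0, xs, 0)
# both schemes reach the same estimate over the same samples
```

Both schemes see the same predicted-OOD samples and converge to the same estimate; the batch-wise form merely averages within-batch noise before updating, which is consistent with the small stability and effectiveness differences reported in Tabs. C and D.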
---
Rebuttal Comment 1.1:
Comment: Thanks for your thorough responses, which have addressed my concerns. After reading your rebuttal and the other reviews, I have decided to maintain my rating.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer t7zA,
We're very please to know that our response has addressed your concerns. Thank you very much for the affirmative and positive comments on our work. | Summary: This paper introduced the normalized outlier distribution adaptation (AdaptOD) which adapts the outlier distribution to the true OOD distribution from both the training and inference stages. Such AdaptOD includes two components: one is dynamic outlier distribution adaption (DODA), which adapts the outlier distribution to the true OOD distribution. The other is dual-normalized energy loss (DNE), which includes class-level and sample-level normalized energy to enforce more balanced prediction energy for imbalanced ID samples. The experiments are conducted on three OOD LTR benchmarks.
Strengths: 1. The introduction of DODA and DNE is novel and addresses the key issue of distribution shift in OOD detection, especially in LTR scenarios.
2. The combination of DODA and DNE demonstrates significant improvements over existing methods, both individually and collectively, as evidenced by the ablation studies.
3. The paper is exemplarily structured and articulated. It offers a lucid and comprehensive elucidation of the AdaptOD methodology, its constituent elements, and the experimental framework, rendering it accessible to readers.
Weaknesses: The motivation and mechanism underlying dynamic outlier distribution adaptation (DODA) are unclear and lack a theoretical foundation.
(1) My primary concern is about the setting. The proposed AdaptOD, with its introduction of test-time adaptation (TTA), seems unfair. In previous work on OOD, the model's update process could not access the true OOD data and adopted outliers as an alternative, while in this paper, the true OOD data is used to update the outlier distribution in the DODA module. As described in Tab. 5, without the DODA module, the AUROC is 72.04 on ImageNet-LT, and such an increase is relatively small compared to SOTA. That indicates the main improvement depends on the knowledge leakage of true OOD samples during TTA.
(2) I am confused about the motivation of DODA under this particular OOD setting, which adapts the outlier distribution to the true OOD distribution. Suppose we could access the true OOD distribution in the test phase and update the outlier distribution with the predicted OOD samples. Why don't we use the true OOD distribution to calibrate the scores? DODA provides access to the true OOD samples and offers the perfect solution for auxiliary OOD data by directly using the predicted OOD samples as auxiliary OOD data. Moreover, as described in Fig. 1, the curves of the adapted outliers almost coincide with the true OOD. The authors should give more interpretation.
(3) I am confused about the mechanism of DODA. It is unclear whether DODA would adapt the outlier distribution to the true OOD distribution: as the initialized model detects OOD samples at the beginning, ID samples could be wrongly predicted as OOD, which would drive the optimization of the outlier distribution toward the ID data distribution rather than the OOD distribution. The direction of the outlier-distribution update is uncertain, and it is unclear how DODA guarantees the direction toward the true OOD distribution. The authors should give more theoretical analysis and support.
(4) Is adapting the outlier distribution to the true OOD distribution with specific OOD samples meaningful for OOD detection? As OOD samples can be drawn from any unknown distribution, the benchmarks are just built to evaluate the models’ ability at unknown detection. The specific adaptation of the outlier distribution would improve the performance on the corresponding benchmarks. However, this specific adaptation cannot generalize to other benchmarks and would be useless for OOD samples from an unknown distribution.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. The experimental comparisons in Tab. 1 seem unfair. COCL and EnergyOE cannot access the true OOD data, while the proposed AdaptOD updates the global energy score with true OOD data in the test phase.
2. For Tab. 2 and Tab. 3, the previous methods with DODA modules obeying the OOD setting should be considered.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The authors have discussed the limitations of this paper, and there is no negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for recognizing the novelty of our method and its improvement over existing methods. On the other hand, there might be some misunderstanding regarding the setting, which we will clarify in the following responses and further improve our writing.
> **W1: concern for the setting**
- **Test-time adaptation (TTA) for OOD detection**
TTA is a widely used technique that utilizes test data (without knowing their class labels) to help models improve performance and adapt to real-world scenarios. Its application to OOD detection is relatively less explored, but it can help quickly adapt to new OOD scenarios and improve OOD performance due to the **possibility** of utilizing real OOD samples. This has been demonstrated in recent studies, AUTO and AdaOOD, and the more recently published RTL at CVPR 2024 [Ref1]. These justify the importance of designing TTA methods to enhance OOD methods. However, please note that the TTA module is assumed NOT to be able to access the ground truth of the test data in all these methods, including ours, meaning that it handles unlabeled test data. The ground truth of the test data is used only in detection performance evaluation. Thus, there is no data (label) leakage issue.
- **Unfair comparison to methods that could not access the true OOD data**
It is true that our method and other TTA-driven methods have the advantage of accessing samples drawn from the true OOD distribution. To ensure a fair, comprehensive comparison, in Tab. 2 we compare our full method AdaptOD with four DODA-enabled SOTA non-TTA methods. Furthermore, in Tab. 3, we compare our DODA with the two existing TTA methods for OOD detection, where each of them is respectively combined with different current OOD detection methods. Additionally, we further compare our AdaptOD with RTL [Ref1] in the general response. All of these experiments demonstrate the superiority of AdaptOD in long-tailed OOD detection.
- **Small AUC improvement on ImageNet-LT compared to SOTA without the DODA module**
Both of our modules, DODA and DNE, make major contributions to the overall detection performance of our method AdaptOD. We agree that the DNE module leads to a relatively small AUC improvement on ImageNet-LT, but it consistently improves not only all OOD detection metrics but also ID classification accuracy across all three benchmarks. Additionally, DNE also provides stable energy margins on different datasets, eliminating the need for manual tuning of these margins.
[Ref1] Fan, Ke, et al. "Test-Time Linear Out-of-Distribution Detection." Proceedings of CVPR, 2024.
> **W2: the motivation of DODA**
- **Why don't we use the true OOD distribution to calibrate the scores?**
There may be a setting misunderstanding. Although we can access the test data that contains the true OOD data, we do not know the true OOD distribution since we cannot access the test labels. Therefore, DODA assigns the pseudo labels to test data at first, and then uses the predicted OOD samples (maybe involving false predictions) to gradually estimate the true OOD distribution.
- **The curves of the adapted outliers almost coincided with the true OOD**
This occurs because of highly effective outlier distribution adaptation in our method. This is also justified by the results in Tab. 5, where AdaptOD has only a small performance gap to the Oracle model that utilizes the ground truth labels of the OOD test data to update the outlier distribution. This indicates that AdaptOD can well approximate the true OOD distribution by the predicted labels of the OOD samples.
> **W3: the mechanism of DODA. Wrong predictions may lead to bad optimization of the outlier distribution**
Yes, it is true that wrongly predicted OOD samples would lead to a poor outlier distribution during TTA, as shown in Fig. 3.
Therefore, our DODA designs a Z-score-based method in Eq. 3 to implement an OOD filter based on the training ID data, where $\alpha=3$ (corresponding to a 99\% confidence interval) is set so that a predicted OOD sample is used as a true OOD sample only with very high confidence. There may be a few wrongly predicted OOD samples, but their influence in DODA is limited, as they are often similar to true OOD data; moreover, the optimization of the outlier distribution is dominated by the many correctly predicted OOD samples. As a result, our AdaptOD can quickly adapt the outlier distribution and perform very stably thereafter (see Fig. 3).
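The confidence gating described above can be sketched in a few lines (illustrative only: the ID statistics, the sign convention, and the exact form of Eq. 3 are placeholders, not the paper's implementation):

```python
import numpy as np

def zscore_ood_gate(energy_scores, id_mean, id_std, alpha=3.0):
    """Admit a test sample as a confident OOD prediction only when its
    energy deviates from the training-ID energy statistics by more than
    alpha standard deviations (alpha=3 in the paper)."""
    z = (energy_scores - id_mean) / id_std
    return z > alpha  # boolean mask over test samples

# Hypothetical numbers: ID training energies average -8.0 with std 1.0;
# OOD samples tend to show noticeably higher energy.
scores = np.array([-7.5, -4.0, -8.2, -3.1])
mask = zscore_ood_gate(scores, id_mean=-8.0, id_std=1.0)
# only the two high-energy samples pass the gate and are used for adaptation
```

Because only samples far outside the ID statistics pass the gate, most wrongly predicted OOD samples are filtered out before they can influence the outlier-distribution update.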
> **W4: the meaning of adapting the outlier distribution to true OOD distribution with specified OOD samples**
Yes, OOD samples can be drawn from any unknown distribution in different deployment stages and/or applications. This is exactly the main motivation of our work that we aim to continuously adapt the outlier distribution by utilizing dynamic, unknown OOD samples in the target application scenarios to tackle this challenge. Our comprehensive results with different OOD data across three ID data benchmarks show that AdaptOD can generalize well regardless of the difference in ID or OOD datasets.
> **Q1: The experimental comparisons for Tab. 1 seem unfair**
COCL and EnergyOE are previous SOTA methods, so we directly compare with them in Tab. 1 to show the significant improvement of AdaptOD as a whole. For a fairer comparison, in Tab. 2 and Tab. 3 we report the average performance across the six OOD datasets of Tab. 1, combining COCL and EnergyOE with DODA and other TTA methods, so that COCL and EnergyOE can also access the test-time true OOD data.
> **Q2: for Tab. 2 and Tab. 3, the previous methods with DODA modules obeying the OOD setting should be considered**
DODA is a TTA method that can be combined with previous OOD methods to utilize the unlabeled test data for tackling the distribution shift problem. For a fair comparison, we have compared AdaptOD with the previous OOD detection methods enabled by DODA in Tab. 2, and we have also compared DODA with other TTA methods in Tab. 3 and the table above with the recent RTL.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer KD4x,
As far as we understand, you might have some major misunderstanding of our setting and method. In our rebuttal, we have provided detailed clarifications to address these issues. Since the author-reviewer discussion is coming to an end soon, please kindly advise whether our response has addressed your concerns. We're more than happy to address any further concerns you might have. Thank you very much for helping enhance our paper! | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewers' time and invaluable feedback. We are encouraged that there is a unanimous consensus among all reviewers highlighting our method's effectiveness through extensive experiments. The reviewers also appreciate the importance of the addressed problem (KD4x, t7zA, udMg) and the novelty of our method (KD4x, t7zA, eeVQ). Additionally, we are pleased that the reviewers found the paper clear and easy to understand (KD4x, t7zA, udMg).
We respond to each reviewer's comments in detail below. We will incorporate the reviewers' suggestions into the manuscript revisions, which we believe will significantly enhance the paper's quality. We also include an additional PDF containing a revised Figure 2 and visualization results on the near-OOD dataset CIFAR.
To further demonstrate the improvement of our AdaptOD, we conduct extra experiments comparing it with RTL [Ref1]. We present the new results in Tab. A and Tab. B.
Table A: Result of AdaptOD and RTL on CIFAR10-LT.
| | AUC $\uparrow$ | AP-in $\uparrow$ | AP-out $\uparrow$ | FPR $\downarrow$ |
| :-------: | :--------------: | :------: | :-------: | :----: |
| AdaptOD | 94.69 | 93.89 | 94.12 | 27.26 |
| RTL | 92.69 | 92.73 | 92.34 | 30.68 |
Table B: Result of AdaptOD and RTL on CIFAR100-LT.
| | AUC $\uparrow$ | AP-in $\uparrow$ | AP-out $\uparrow$ | FPR $\downarrow$ |
| :-------: | :--------------: | :----------------: | :-----------------: | :----------------: |
| AdaptOD | 81.93 | 83.09 | 77.83 | 67.37 |
| RTL | 79.58 | 81.06 | 75.04 | 72.34 |
**References**
- [Ref1] Fan, Ke, et al. "Test-Time Linear Out-of-Distribution Detection." Proceedings of CVPR, 2024.
Pdf: /pdf/8427a8bf937dff0360733a98f384607085e14397.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
OmniJARVIS: Unified Vision-Language-Action Tokenization Enables Open-World Instruction Following Agents | Accept (poster) | Summary: The paper introduces a novel approach for behavior tokenization in agent domains, utilizing a unified token transformer. The key contributions include the development of a self-supervised behavior encoder that learns a vocabulary of actions. The generated discrete action tokens augment the vocabularies of MLMs for autoregressive modeling. The authors curate a large-scale Minecraft and multimodal QA dataset, including synthetic annotation generation, to train their model. The experiments demonstrate that the model surpasses baselines that go through a text bottleneck or directly map from pixels to low-level actions.
Strengths: - The introduction of a quantized action codebook and the use of FSQ for behavior tokenization represent a novel approach to representing sub-goals, as opposed to text subgoals which may be limiting or require a predefined set.
- The authors beat strong baselines DEPS and GROOT on long-horizon tasks in Minecraft, and show strong performance in open-ended instruction following and question answering.
- The paper is generally clear and well-organized.
- The proposed approach has potential applications beyond Minecraft, offering insights into behavior tokenization and hierarchical learning.
Weaknesses: - The impact of the new dataset compared to the proposed architecture is unclear. Further analysis is needed to isolate the effects of the dataset from the architectural innovations.
- Given the focus throughout the paper on the architecture contribution, the paper lacks comprehensive ablations to validate the contributions of individual components, such as FSQ and curated dataset.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. The paper proposes a new dataset and architectural methods. Can you clarify the individual contributions of each and their combined impact?
2. More comprehensive ablations to demonstrate the utility of the proposed architecture and the new dataset would strengthen the paper. Additionally, comparing against multimodal LLM training methods like QFormer or Perceiver to understand the advantages of the encoder approach and training stages.
2. The ablations in Table 6 need more detail. Does the language goal include memory and caption text?
3. How many episodes per task are used in Table 6? Why not ablate the full subtasks to provide a larger sample size? Additionally, why are ablations of the training phases and behavior tokenizer architecture not included?
4. Why are Jarvis-1 and Voyager not included in the comparisons in Table 2?
5. Can the codebook be used to derive interpretable skills?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The authors address some limitations, but additional suggestions for improvement include:
- More comprehensive ablations to demonstrate the utility of the proposed architecture and the new dataset would strengthen the paper. Comparing against multimodal LLM training methods like QFormer or Perceiver could highlight the advantages of the proposed approach.
- Ensuring consistent training data across baselines would provide a clearer comparison of model performance, or providing more experiments on the contributions of each component to back up paper claims.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > W1, Q1, and L2: Impact of interaction dataset and model architecture.
Thank you for your insightful comments. To clarify the individual contributions and combined impact of the new dataset and architectural methods, we conducted specific ablation studies detailed in our paper.
**Dataset Contributions**: OmniJARVIS utilizes two key datasets: a text QA dataset based on Minecraft knowledge and the Contractor dataset of human gameplay in Minecraft. The text QA dataset enhances OmniJARVIS’s understanding of Minecraft knowledge and high-level planning for complex tasks. The Contractor dataset, on the other hand, improves mastery of basic Minecraft skills such as “mine oak log.” We explored the impact of these datasets on different task types through new ablation experiments presented in **Table 2 of the supplementary PDF**. Results indicate that the QA dataset significantly influences OmniJARVIS’s performance on long-horizon programmatic tasks, while the Contractor dataset facilitates success in short-horizon atom tasks.
**Architectural Innovations**: To assess the contributions of architectural innovations, we also conducted experiments on the effects of the vision encoder (Fuyu, LLaVA, isolated Captioner), behavior tokenizer (VQ-VAE, FSQ, Language), and various scales and architectures of the language model (Gemma-2B, LLaMA-7B, LLaMA-13B). These are detailed in Tables 4 and 6, and Figure 4 of the paper.
Our findings demonstrate that both the datasets and architectural enhancements significantly contribute to OmniJARVIS’s ability to perform a wide range of tasks, from simple atom tasks to complex programmatic challenges. The synergy between the diverse datasets and innovative architectural elements is key to the robust performance of OmniJARVIS across different task domains.
> W2: Comprehensive ablations on FSQ and dataset.
Thank you for your comments. We’ve conducted several ablation studies to validate the individual contributions of our architecture:
1. FSQ Ablations: We’ve added new experiments on FSQ codebook size and code context length, detailed in the PDF-Table-1 and **General Response**.
2. Dataset Ablations: We add new dataset ablation in **Table 2 of the supplementary PDF** to investigate the effectiveness of the QA and interaction dataset. Detailed in Table 5 of the original paper, these experiments evaluate the impact of different dataset segments on OmniJARVIS’s performance.
3. Behavior Tokenizer Ablations: Experiments on different tokenizer types (language, VQ-VAE, FSQ) are added to **Table 3 of supplementary PDF**, highlighting their effects on behavior generation.
These studies rigorously assess the influence of each component on the model’s effectiveness.
> Q2 and L1: Ablations on multimodal LLM.
Thank you for the suggestion to include more comprehensive ablations and comparisons with multimodal LLMs. We have conducted experiments using various visual encoders, including FUYU (patch encoder), LLAVA (ViT), and an independent vision captioning model (ShareCaptioner+). The results of these experiments are detailed in Table 4 of the main paper.
Additionally, we are currently finetuning the OmniJARVIS model using the QFormer-based BLIP2-OPT-2.7B architecture. This variant of OmniJARVIS has not yet been finished; we will update the manuscript with the results once they are available.
> Q3: Details on behavior tokenizer ablation in Table 6.
Thanks for your comments. This table showcases the success rates of OmniJARVIS in completing programmatic tasks using different behavior tokenizers.
Regarding your specific question about language goals, they indeed include memory and caption text derived from the meta-information in the contractor data. These are converted into language goals that serve as decoder policy prompts for the language-conditioned STEVE-I model. For example, a language goal could be "chop down the tree to mine oak logs."
A note on the VQ-VAE tokenizer: during its training, we encountered posterior collapse, where only one code from the VQ codebook was utilized, leading to the failure of the VQ-GROOT training. FSQ-GROOT, in contrast, is the final setting used for OmniJARVIS.
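For context on why FSQ avoids this failure mode: its codebook is an implicit fixed grid obtained by bounding and rounding each latent channel, with no learned embedding table that could collapse to a single code. A minimal generic FSQ sketch (an illustration of the general technique, not our tokenizer; the level counts are arbitrary):

```python
import numpy as np

def fsq_quantize(z, levels):
    """Generic finite scalar quantization: bound each latent channel,
    snap it to a fixed integer grid, and map the per-channel integers
    to a single discrete code index."""
    levels = np.asarray(levels)
    half = (levels - 1) / 2.0                  # e.g. 7 levels -> grid {-3, ..., 3}
    bounded = np.tanh(z) * half                # squash each channel into (-half, half)
    quantized = np.round(bounded)              # snap to the nearest grid point
    shifted = (quantized + half).astype(int)   # shift channel values into [0, levels-1]
    radices = np.cumprod(np.concatenate(([1], levels[:-1])))
    codes = (shifted * radices).sum(axis=-1)   # mixed-radix index into the implicit codebook
    return quantized, codes

rng = np.random.default_rng(0)
z = rng.standard_normal((4, 3))                    # 4 latents, 3 channels
q, codes = fsq_quantize(z, levels=[7, 7, 7])       # implicit codebook of 7*7*7 = 343 codes
```

In actual training, a straight-through estimator makes the rounding step differentiable; the sketch only shows the forward quantization and the code mapping. Since every grid point is a valid code by construction, there is no codebook to collapse.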
All models in this comparison, including LLaVA-7B as the base model, used the same synthetic memory, thought, and caption data to ensure fairness in our evaluations. We continue to expand the scale of the ablation experiments (more tasks and test repetitions) and have placed the latest results in **supplementary PDF Table 3**.
> Q4: More evaluation times in Table 6.
In Table 6, each task was evaluated 10 times. Due to the time-intensive nature of Programmatic Tasks, which require 6000-12000 timestamps to complete, this number of evaluations was deemed a practical balance between comprehensiveness and feasibility. To expand the scale of our experiments, we selected 1-2 representative tasks from each group of Programmatic Tasks and tested each 20 times. The results of this expanded testing are presented in Table 3 of the supplementary PDF, offering a larger sample size and further validation of our findings.
Due to word limit restrictions, we have added more responses to the official comments, and we hope you can see them.
---
Rebuttal Comment 1.1:
Title: Thank you to the authors for their comments
Comment: I thank the authors for addressing my clarifications and comments. My concerns have been mostly addressed with the additional experiments. Given the impact of the data used, I think the authors should be more transparent in the writing of the paper about the impact of the dataset versus proposed architecture, and how the training data differs from that used by baselines. I have raised my score to a 6.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response and for increasing the score. We will incorporate these dataset details and additional experiments into the main text.
If you have any further questions, please feel free to discuss them with us.
---
Rebuttal 2:
Title: More responses due to the char limitation.
Comment: > Q5: Results comparison with Jarvis-1 and Voyager.
Thank you for your inquiry. Jarvis-1 and Voyager were not included in Table 2 due to their testing frameworks and controller:
1. Testing Settings: Jarvis-1 and Voyager operate under few-shot settings, utilizing multiple inference cycles and an explicit textual memory module for life-long learning. In contrast, our experiments are conducted in a zero-shot setting, focusing on each agent’s ability to follow instructions and generalize without prior exposure.
2. Controller Differences: Voyager employs scripted actions (offered by Mineflayer) and accesses privileged environmental information (voxel and lidar data), which provides it with capabilities not shared by the baseline agents in our study; those baselines primarily use policy-based controllers with visual perception.
These differences make direct comparisons between the models unfair and uninformative. We will clarify this in the updated manuscript.
> Q6: Using Codebook to derive interpretable skills.
Thanks for the suggestions. As mentioned in the **General Response**, OmniJARVIS primarily aims at offering a more compact representation of skills in VLA models compared to counterparts like RT-2 that employ language annotations, which can be expensive to obtain. We commit to exploring the interpretability of our behavior codebook as part of future work. | Summary: This work presents OmniJARVIS, an instruction-following agent for open-world Minecraft. The agent works by first learning a behavior encoder that generates behavior tokens conditioned on textual, visual, and action inputs via self-supervised learning; then a multimodal interaction sequence is packed with the learned tokenizer, and a policy decoder is trained autoregressively with the objective of directly predicting action sequences. The proposed method achieves good performance on atomic action tasks, significantly better performance on programmatic tasks compared with baselines, and better performance on open-ended tasks where instructions are given creatively. Comprehensive ablation experiments are conducted on behavior tokenizers, input modalities, and vision tokenizers.
Strengths: 1. The agent has a similar structure to GROOT, but replaces the VAE-based approach with Finite Scalar Quantization (FSQ).
2. The agent demonstrated significant performance gain on proposed experiments over all baselines.
Weaknesses: 1. Since this work draws insights from GROOT, a slightly more comprehensive compare-and-contrast is preferred; in particular, the difference between the two works appears not limited to how the trajectory representation is learned, but also extends to the input data format.
2. Writing could be improved:
1. The description for 2nd stage of training seems incomplete (line 175)
2. Typos in general
3. The description of how to handle longer trajectories (> 128) is not clear to me.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Is OmniJARVIS trained considering programmatic tasks? If yes, in Table 2, are the other baselines also trained with programmatic task data? If not, how do you handle this difference when creating those baselines?
2. Just to confirm, in Table 2, are the baselines retrained/finetuned with the data constructed in this work?
3. GROOT and OmniJARVIS have close evaluation results on atom tasks in Table 1 but far-apart results on programmatic tasks; could you provide more discussion of the reason? Clarifying the first two questions would be helpful here.
4. Figure 5: the left and right sides seem visually similar; more explanation and description of what those behaviors are would be helpful for understanding.
I will consider raising score if above questions are addressed.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors provided a discussion of the scalability of the agent: a scaling law applies to OmniJARVIS's instruction-tuning process, with the eval loss exhibiting a log-linear decrease as the tuning data scale increases. Scaling up the VLM improves performance, but saturation is observed at 7B parameters.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: > W1: difference between GROOT and OmniJARVIS
As detailed in the **General Response**, our core contribution is to propose a novel Vision-Language-Action (VLA) model architecture termed OmniJARVIS, as shown in **Figure 1 of the supplementary PDF**, aiming at resolving the issues including inference efficiency and language and annotation bottleneck.
GROOT is limited to short-horizon tasks (atom tasks) and **cannot accept language instructions**, as demonstrated in Tables 1 and 2 in the original paper. OmniJARVIS, on the other hand, uses the FSQ-GROOT decoder as a de-tokenizer, enabling it to generate environment-acceptable actions from discrete FSQ codes that are produced by the VLM. This allows OmniJARVIS to handle complex, long-horizon tasks (programmatic tasks) with advanced reasoning and planning abilities.
Please let us know if you still have further concerns about the novelty and we're more than happy to assist!
> W2: Descriptions for OmniJARVIS training.
Sorry for the confusion. We utilized the SFTTrainer class from the TRL library by Hugging Face to train the VLM model. The learning rate was set at 1.4e-5, and a cosine learning rate scheduler was employed. The weight decay parameter was set to 0 with a warm-up ratio of 0.03. Training took place on 8 A800 GPUs with FSDP, with a batch size of 2 and gradient accumulation steps of 4 using bf16 precision. The training lasted for one epoch on our generated dataset.
> W3: typos
Sorry for the typos. We've revised the manuscript accordingly.
> W4: How to handle longer trajectories (>128).
Thank you for your question. OmniJARVIS is specifically designed to model trajectories that exceed this length, which is a limitation of models like GROOT that can only handle fixed 128-frame segments for instruction-following.
Given a long trajectory (>128 frames), we first slice it into multiple segments of 128 frames each. These segments are then encoded into action tokens, $D^{beh}$, and integrated into an interaction dataset formulated as sequences of $\{D^{instruct}, D^{mem}, D^{obs}, D^{th}, D^{beh}, D^{obs}, D^{th}, D^{beh}, \ldots\}$. OmniJARVIS, as an autoregressive transformer model, supports a maximum token length of 8k, allowing it to model sequences that contain multiple $D^{beh}$ tokens and thus effectively handle long sequences beyond 128 frames.
Additionally, during *training*, we employ a sliding window technique to manage sequences that exceed 8k tokens in VLM context length. In *inference*, OmniJARVIS outputs a new behavior token every 128 frames to address long-horizon tasks such as programmatic and creative tasks.
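The slicing-and-packing step described above can be sketched as follows. This is an illustrative Python sketch, not the authors' code: `encode_segment` is a hypothetical stand-in for the FSQ behavior tokenizer, and the 5-number code length simply mirrors the FSQ levels mentioned in this rebuttal.

```python
CHUNK = 128  # frames per behavior segment, as in the rebuttal

def slice_trajectory(frames, chunk=CHUNK):
    """Split a long trajectory into consecutive segments of at most `chunk` frames."""
    return [frames[i:i + chunk] for i in range(0, len(frames), chunk)]

def encode_segment(segment):
    """Illustrative stand-in for the behavior tokenizer: one 5-number code per segment."""
    return [len(segment) % 8, 0, 0, 0, 0]  # NOT the real tokenizer, shape only

def pack_interaction(instruction, observations, segments):
    """Interleave (obs, behavior) pairs after the instruction, following the
    {D^instruct, D^obs, D^beh, D^obs, D^beh, ...} layout from the text."""
    seq = [("instruct", instruction)]
    for obs, seg in zip(observations, segments):
        seq.append(("obs", obs))
        seq.append(("beh", encode_segment(seg)))
    return seq
```

For a 300-frame trajectory this yields three segments (128, 128, and 44 frames), each contributing one behavior code to the packed sequence.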
> Q1 and Q2: Programmatic tasks are included in OmniJARVIS training datasets?
OmniJARVIS leverages two datasets for training: a Question-answering dataset enriched with Minecraft knowledge and the Contractor dataset of human gameplay trajectories. While the instructions in the Contractor dataset don’t exclusively pertain to programmatic tasks, they support OmniJARVIS in generalizing from learned knowledge to generate new plans and behaviors in varied instructional contexts. For the comparisons in Table 2, all models, including the baselines, were trained using the same Minecraft dataset to ensure fairness in evaluating generalization capabilities across different tasks.
Additionally, the baseline models in Table 2 were not retrained or finetuned with the data constructed specifically for OmniJARVIS. However, we conducted further tests, shown in Table 3, finetuning open-source models on a Minecraft-specific Question-Answering dataset derived via ChatGPT self-instruct. This helped evaluate the impact of injecting specialized Minecraft knowledge on model performance, where the finetuned models showed improved understanding, aligning their capabilities closer to those of ChatGPT.
> Q3: Discussion on GROOT performance on Atom and Programmatic tasks.
Atom tasks are short-horizon tasks that can usually be completed within 600 timesteps (30s). These tasks primarily assess the agent’s mastery of basic skills in Minecraft, such as “mine oak log” and “kill sheep.” In contrast, programmatic tasks are long-horizon tasks, often requiring more than 6000 timesteps (5 min) to complete. These tasks demand complex reasoning and planning. For example, the programmatic task “obtain diamond” involves multiple intermediate steps, such as mining oak logs, crafting a wooden pickaxe, and acquiring stone and iron ore.
GROOT and STEVE-I models are trained on the Minecraft Contractor data for instruction-following tasks of fewer than 128 frames. As a result, they possess certain atom skill proficiency but lack complex planning and reasoning capabilities as opposed to VLA models like OmniJARVIS. Consequently, they perform well on simple atom tasks but struggle with programmatic tasks that require sophisticated planning and reasoning.
> Q4: Explanation of Figure 5.
Figure 5 aims to visualize the behavior produced by our FSQ-GROOT decoder (the low-level policy adopted by OmniJARVIS) when conditioned on certain behavior codes. On the left is a screenshot of the input reference video to be encoded into behavior codes, and on the right is a screenshot of the FSQ-GROOT decoder policy's rollout in a new environment.
The behavioral similarity between left and right screenshots verifies that the code produced by the behavior tokenizer can be consistently decoded into the same behavior as the input by decoder policy, even in novel environments.
---
Rebuttal Comment 1.1:
Comment: I think my comments are mostly addressed by the author response. I think it is important to include the author responses about training details, discussion, and comparisons in the main content in the future. I raise my score to 6.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response and for increasing the score. We will incorporate the content including training details, discussion, and comparisons into the main text.
If you have any further questions, please feel free to discuss them with us.
---
Summary: This work introduces OmniJARVIS, which jointly reasons over visual observations, instructions, self-generated text, and actions. OmniJARVIS models actions via behavior tokens, which are discrete embeddings that are separately learned on a behavior dataset. A policy decoder converts these behavior tokens to a sequence of low-level actions. OmniJARVIS also has a pipeline for synthesizing instructions, memory to track what happened in a long trajectory, and chain-of-thought text from offline observation action data. OmniJARVIS outperforms baselines in Minecraft on atomic, programmatic, and open-ended tasks.
Strengths: 1. To the best of my knowledge, the behavior tokenization is a novel way to connect VLA models to actions. This approach has the advantage that the large LLM model does not need to be run for every action generation. Instead, the lighter weight policy decoder can generate a sequence of actions.
1. The pipeline of using interleaved self-generated memory with a VLA is also novel. The paper shows the value of this additional data in Table 5 and presents a scalable way to generate it for Minecraft.
1. OmniJARVIS presents a way to scale end-to-end policies for complex long-horizon tasks like Minecraft. Prior approaches like Voyager, while able to operate on long-horizon tasks, assume primitives that can be called via language. Other methods, like STEVE-1, directly output keyboard actions but struggle with long-horizon tasks (as shown in Table 2). OmniJARVIS operates directly from pixel inputs and outputs keyboard and mouse actions yet can complete long-horizon tasks with a high success rate.
1. The paper shows extensive results in Minecraft with multiple tasks in 3 task setups of atomic, programmatic, and open-ended tasks. In each setting, OmniJARVIS mostly outperforms the relevant baselines.
1. OmniJARVIS can scale to larger models and datasets, as demonstrated in Fig 4.
Weaknesses: 1. The paper lacks many important details. Little detail is given about the encoder and policy decoder architectures (see (2) under the questions section).
1. Important behavior token ablation experiments are missing. The context length for the behavior tokens is never analyzed and only 128 is used throughout the paper. However, this could be an important setting for the behavior tokens. Additionally, the paper does not ablate the FSQ settings to determine the necessary codebook size for the behavior tokens. The behavior tokens are also only conditioned on observation sequences without actions (L92). However, this decision is never justified.
1. Experimental result details are unclear. On L215, the paper states it uses "30 programmatic tasks to evaluate the performance of different agents". But as far as I can tell, these 30 tasks are never described.
1. The value of including the question answering dataset for instruction following is not verified. It is possible including this additional training source is the primary cause of the OmniJARVIS outperforming baselines considering it constitutes a third of the examples.
1. OmniJARVIS performs worse than the GROOT baseline in collecting harder resources like wood and cobblestone (Table 1). However, I do not see this as a large weakness because OmniJARVIS is capable of programmatic tasks unlike GROOT.
1. It's unclear how the OmniJARVIS can be used beyond Minecraft. OmniJARVIS exploits that the OpenAI Minecraft data includes oracle meta information for synthesizing the instruction, memory, and thought.
1. The paper does not clearly discuss limitations.
1. Agent behavior and failure modes are not analyzed. See point (6) of my questions.
Minor:
1. Table 2 should clarify what the numbers in parentheses next to the task type mean. I assume they are the number of programmatic tasks per category.
1. A checkmark representing that a training setting is _removed_ is confusing. I suggest changing to an "x" mark instead.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In regards to weakness (6), how can the OmniJARVIS be applied to non-Minecraft domains?
1. What are the policy architectures for the encoder and policy decoder? How many parameters are there? How long are they trained for?
1. For the behavior tokens, why introduce new tokens into the LLM vocabulary as opposed to reusing infrequently used tokens in the vocabulary as in RT-2?
1. How does the paper arrive at 1T tokens on L187, given the previous token counts add up to 1B tokens?
1. In Table 5, how can OmniJARVIS work without the instruction?
1. What are the failure modes of OmniJARVIS? I am specifically interested in where OmniJARVIS fails in the programmatic tasks in Table 2. For the challenging task of Diamond, qualitatively, what behaviors does OmniJARVIS exhibit to succeed at this task?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No, the paper does not clearly state its limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: > W1 and Q2: Details on encoder and decoder of behavior tokenizer.
Sorry for the confusion.
As shown in Fig. 2, the Behavior Tokenizer consists of three parts: Encoder, Decoder, and FSQ quantizer. We perform quantization based on the original GROOT, and the Encoder and Decoder use a design consistent with GROOT.
Specifically, the encoder includes a Convolutional Neural Network (CNN) to extract spatial information from image states $s_{1:T}$ and a non-causal transformer to capture temporal information from videos. The CNN is an EfficientNet-B0 network, and the non-causal transformer is built on the code from minGPT-2 with the causal mask removed.
The decoder consists of 4 identical blocks, where each block contains a Transformer-XL block and a Flamingo gated-attention dense layer to condition the decoder on FSQ code embeddings. More details can be found in GROOT paper Section 4.1 and Section 4.2.
The FSQ quantizer consists of 5 levels of [8, 8, 8, 6, 5], giving a codebook size of 8×8×8×6×5 = 15360.
The training hyperparameters of FSQ-GROOT are as follows:
Parallel strategy: DDP; accumulate gradient batches: 8; batch size: 2; precision: bf16; image size: 224×224; encoder blocks: 8; decoder blocks: 8; hidden dimension: 1024; chunk size: 128; attention memory size: 256; optimizer: AdamW; weight decay: 0.001; learning rate: 1.81e-5; warmup steps: 2k.
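As an illustration of the quantizer described above, the following Python sketch (not the authors' implementation; function names are ours) shows how FSQ with levels [8, 8, 8, 6, 5] rounds each bounded latent dimension to a fixed grid, and why the implicit codebook size is the product of the levels, 15360.

```python
import math

LEVELS = [8, 8, 8, 6, 5]  # FSQ levels from the rebuttal; product = 15360

def fsq_quantize(z):
    """Round each latent dim to one of L integer levels (no learned codebook).
    Assumes z is already squashed to [-1, 1], e.g. by tanh, as in FSQ."""
    codes = []
    for zi, L in zip(z, LEVELS):
        half = (L - 1) / 2
        codes.append(int(round(zi * half + half)))  # map [-1, 1] -> {0, ..., L-1}
    return codes

def code_to_index(codes):
    """Flatten a per-level code into a single codebook index (mixed-radix)."""
    idx = 0
    for c, L in zip(codes, LEVELS):
        idx = idx * L + c
    return idx
```

The training-time straight-through estimator is omitted here; the sketch only demonstrates the rounding and the mixed-radix indexing that gives `math.prod(LEVELS)` distinct codes.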
> W2: Ablation experiments on behavior tokenizer (codebook size and context length).
Thanks for your comments.
We conduct an in-depth investigation of behavior tokenizers with varying **codebook sizes**, utilizing recommended sets of FSQ levels to approximate specified codebook dimensions in the supplementary PDF Table-1. *Codebook Usage* is quantified as the proportion of codewords utilized at least once when encoding the validation datasets. *Reconstruction FSD* is measured by the FSD scores derived from the MineCLIP encoder, processing 1,000 different demonstration videos through the FSQ-GROOT and subsequent rollout in a randomly generated environment. Additionally, we measure *Resampling FSD*, which is the FSD score obtained when the environment rollout is conditioned on representations sampled from the codebook. Finally, we assess the *average rewards* for the task “collect wood” using OmniJARVIS across varying codebook sizes. Our findings indicate that increases in codebook size correlate with enhanced average rewards and reduced FSD scores, suggesting a scalable performance in OmniJARVIS with larger codebooks.
We also conduct an ablation experiment on different context lengths of the behavior token in PDF Table 1. We select the codebook size e14 as the default setting and explore context lengths of 128 and 64.
The performance under context lengths of 64 and 128 is similar. Our insight is that although shorter context lengths may offer finer behavior granularity, they also require more frequent behavior-code switching during inference, resulting in slower inference for OmniJARVIS.
> W2: Behavior tokens are only conditioned on observation sequence.
The reason we use observation sequence as tokenizer input is as follows: we inherit the self-supervised learning scheme from GROOT to train the behavior tokenizer, and the learning objective is to estimate missing actions from the observation-only sequence. Including actions in the input will break this learning scheme.
> W3 and Minor1: Evaluation details of programmatic tasks.
Sorry for the confusion. The number behind each task category is the number of tasks in that group. The evaluated programmatic tasks are taken from DEPS, and the detailed task descriptions and settings are shown in supplementary PDF Table 5. All tasks are evaluated over 20 times, starting from an empty inventory in a randomly generated open-ended Minecraft environment. We will add this section to the updated manuscript.
> W4: Ablation experiments on the training dataset.
Thank you for your comments.
To validate the effectiveness of including the QA dataset, we conducted ablation studies (detailed in **Supplementary PDF Table 2**). These studies revealed that while OmniJARVIS performs similarly on Atom Tasks with or without the QA dataset, there is a significant performance drop in Programmatic Tasks when the QA dataset is omitted. This drop underscores the necessity of the QA dataset for enhancing reasoning and planning capabilities essential for Programmatic Tasks.
We will further clarify this aspect in the updated manuscript, ensuring a comprehensive understanding of the QA dataset’s contribution to OmniJARVIS’s enhanced performance compared to baselines.
> W5: Performance drop of OmniJARVIS in Table 1.
Thank you for your comments. We apologize for the confusion caused by an error in Table 1; the success rate for OmniJARVIS on the stone task should be 15.8, not 5.8 as mistakenly listed. This typo has skewed the comparison.
Furthermore, it’s important to note that GROOT relies on manually selected reference videos as instructions, which can significantly influence its performance. In contrast, OmniJARVIS autonomously generates behavior tokens for the policy decoder to execute, providing a more consistent and scalable approach.
We further tested more atom tasks to compare GROOT and OmniJARVIS in the following table. Our tests show that OmniJARVIS performs comparably to GROOT across most Atom tasks, demonstrating its robustness not just in programmatic tasks but also in short-horizon atom tasks.
| | log | dirt | stone | seeds | wheats | wool | llava |
|-----|-----|-----|-----|-----|-----|-----|-----|
| GROOT | **14.3±4.7** | 19.7±8.7 | **19.0±11.3** | 7.3±0.6 | 8.7±2.2 | 1.9±1.2 | **1.0±0.5** |
| OmniJARVIS | 10.8±5.2 | **20.3±9.2** | 15.8±2.9 | **8.2±3.6** | **10.3±1.2** | **2.1±1.8** | 1.0±0.7 |
Due to word limit restrictions, we have added more responses to the official comments, and we hope you can see them.
---
Rebuttal 2:
Title: More responses due to the char limitation.
Comment: > W6 and Q1: OmniJARVIS on other environments.
Thank you for your suggestions. We have indeed begun extending OmniJARVIS to other environments, starting with Atari Montezuma’s Revenge game, where it achieved a score of 3600. This initial success illustrates the model’s potential for generalization. Details on adapting OmniJARVIS to different environments are further elaborated in the **General Response**. We are committed to ongoing efforts to expand its applicability across various domains, demonstrating its versatility and broader utility.
> W7: Add limitations.
Thanks for your comments. We discuss the limitations of OmniJARVIS in the **General Response**, which will be inserted in the updated manuscript.
> W8 and Q6: More analysis on agent failure modes.
Thank you for your inquiry regarding the failure modes of OmniJARVIS and its behaviors, particularly in challenging tasks such as obtaining a diamond. The primary failure modes can be categorized into the following:
1. **Planning Errors**: During programmatic tasks, OmniJARVIS occasionally produces incorrect thoughts or plans. For instance, it might attempt to mine stone without a wooden pickaxe. These errors stem from inaccuracies within the QA dataset and hallucinations inherent in the language model.
2. **Action Execution Failures**: The FSQ-GROOT decoder (our low-level control policy) sometimes fails to translate the behavior code into actions properly, particularly for less frequently occurring FSQ codes, due to data imbalance in the interaction dataset.
3. **Hallucinations in perception**: Errors in visual recognition by the Llava model can lead to incorrect scene interpretations, such as mistaking a stone for iron ore. These hallucinations hinder the agent’s reasoning and decision-making processes.
In the specific task of obtaining a diamond, these failure modes manifest when OmniJARVIS incorrectly plans or executes sequences due to the above issues. Nevertheless, OmniJARVIS has a high success rate on such tasks and correctly sequences the steps from mining basic materials to crafting necessary tools and finally obtaining the diamond, demonstrating effective integration of vision, language, and action within one unified model.
> Q3: why not re-using language tokens?
Thank you for your question about integrating behavior tokens into the LLM vocabulary. We employ different strategies based on the tokenizer used:
1. Reserved Tokens: When possible, we use reserved tokens from the language tokenizer for behavior tokens to maintain semantic integrity. This is the case with the tokenizer of the Fuyu VLM.
2. Reusing Infrequently Used Tokens: In the absence of suitable reserved tokens, we repurpose infrequently used tokens from the Llama tokenizer. This is the case with the LLaVA tokenizer. We apologize for not making this clear and have updated the manuscript.
Furthermore, our FSQ design compresses an extensive codebook into at most 35 new vocabulary tokens (8+8+8+6+5 when the FSQ levels are [8, 8, 8, 6, 5]), optimizing vocabulary use without semantic loss. These approaches balance semantic preservation with efficient vocabulary management.
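To make the 35-token accounting concrete, here is a small illustrative sketch (the function names and the base vocabulary id are our assumptions, not the paper's actual values): each of the 5 FSQ levels gets its own sub-vocabulary of 8, 8, 8, 6, or 5 tokens, so any 5-number code maps to 5 token ids drawn from only 35 added tokens.

```python
LEVELS = [8, 8, 8, 6, 5]  # 8+8+8+6+5 = 35 added vocabulary tokens in total

# Offset of each level's sub-vocabulary within the 35 added tokens.
OFFSETS = [sum(LEVELS[:i]) for i in range(len(LEVELS))]  # [0, 8, 16, 24, 30]

def code_to_tokens(codes, base_id):
    """Map a 5-number FSQ code to 5 token ids appended after `base_id`
    (the size of the original LLM vocabulary; 32000 used below is illustrative)."""
    return [base_id + OFFSETS[i] + c for i, c in enumerate(codes)]
```

This is why a codebook of 15360 entries never needs 15360 vocabulary slots: the per-level digits reuse the same 35 tokens.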
> Q4: Confusion on training dataset tokens.
Thank you for your attention to detail. The 1T tokens were indeed a typo, the correct calculation should be 1B behavior and language tokens in total. We apologize for the confusion and will correct this in the revised manuscript.
> Q5: How OmniJARVIS works without instruction?
Thank you for your question. In the absence of instructions, OmniJARVIS effectively models a dataset with gameplay video only -- the videos are segmented into chunks with a default length of 128, and all chunks are then converted to codes by the behavior tokenizer. OmniJARVIS is tasked with predicting these codes based on the initial visual observations of the corresponding segments. The goal of this ablation is to verify the necessity of including plans, thoughts, and other means of instruction (the "language" modality) in the VLA modeling of OmniJARVIS.
> Minor 2: checkmark in training settings
Apologies for the confusion. We use a checkmark to indicate when the model's training data includes specific information. In Q5, the first row without an instruction represents unconditional OmniJARVIS, which is unable to follow human language instructions. With richer synthetic data, including thought, memory, and caption data, OmniJARVIS can better follow instructions, leading to improved learning with lower loss. This demonstrates the effectiveness of synthetic data. We will follow your suggestion to replace the symbols.
---
Rebuttal Comment 2.1:
Comment: Thank you for the response and clarifications. It is essential to include all these details in the paper after the rebuttal. I also recommend directly including all details in the paper rather than referring to the GROOT for the full details. With these added details, I raised my score.
It is encouraging to see OmniJARVIS working on Montezuma's revenge, but it's unclear how the data formation described in Section 3.1 is used in this environment.
---
Reply to Comment 2.1.1:
Comment: Thank you for your response and for increasing the score. We will incorporate these details into the main text.
OmniJARVIS in Minecraft utilizes a dataset comprising QA datasets, synthetic instructions, memory, captions, thoughts, observations, and behavior data.
However, on Montezuma's Revenge, due to time constraints, we have not yet constructed synthetic memory and thought data. Initially, we retrained the Behavior Tokenizer (specifically FSQ-GROOT) on Montezuma's data and encoded the dataset to obtain behavior data $D^{beh}$. The instruction $D^{instruct}$ is set as "*Play Atari Montezuma's Revenge and get a higher score.*" Observations $D^{obs}$ consist of raw image data, while captions utilize the foundational capabilities of pretrained visual language models. We used this data to train OmniJARVIS; no Memory or Thought data was incorporated during this training process yet.
Despite not using the complete synthetic datasets, OmniJARVIS performed well in playing Montezuma's Revenge, earning rewards up to 3600 and showing good generalization across different long-horizon games. The relevant synthesis work continues, and we believe a full set of training data would improve performance even more.
If you have any further questions, please feel free to discuss them with us.
---
Summary: This paper presents a Vision-Language-Action (VLA) model, OmniJARVIS, for open-world instruction-following agents in Minecraft. OmniJARVIS leverages unified tokenization of multimodal interaction data to enable strong reasoning and efficient decision-making capabilities. This work introduces a behavior tokenizer to encode behavior trajectories into compact representations that could be effectively modeled with other modality tokens via autoregressive transformers. The experiments are conducted in a variety of tasks in the Minecraft environment, along with some analysis of design choices and scaling properties.
Strengths: 1. This paper is well-written and easy to follow.
2. Illustrations of the pipeline are informative and clear. The authors demonstrate their proficiency in pipeline illustrations in Fig.1 and Fig.3, which contribute to improving readability.
3. The proposed approach is scalable with respect to model sizes. As depicted in Fig. 4, a larger model results in lower evaluation loss, showcasing the scalability of this approach.
4. The authors made great use of established LLMs to construct a large-scale multi-modal interaction dataset. The proposed data-collection strategy could provide inspiration for future research in multi-modal embodied understanding.
Weaknesses: 1. Incremental design with limited novelty. To my knowledge, the key idea of this work is tokenizing observations into compact behavior representations. However, the tokenizer works very similarly to the previous work GROOT. The most notable difference between the two methods is that GROOT utilized a continuous latent while this work uses a quantized one (from lines 88-89). In my view, the idea of quantizing the action space seems incremental to the original GROOT and cannot adequately support the innovative quality of this paper. The authors are encouraged to add a separate section/subsection to discuss in detail the difference between these two works.
2. Lack of training/inference efficiency analysis and comparison. The authors claimed the efficiency of this approach multiple times throughout the paper, such as in lines 102-104. However, there is no direct comparison of the actual training/inference speed/memory cost between different approaches. Some quantitative supporting evidence could be helpful.
Technical Quality: 2
Clarity: 3
Questions for Authors: Since the authors leverage GPT3.5 to augment previous datasets with more language labels, what is the price/cost for constructing such a large dataset with GPT APIs?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: There’s no separate "Limitations" section in this paper. The authors are encouraged to point out that the proposed approach is only validated in the Minecraft environment, and the generalization and transferability to other scenarios are not explored in this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: > W1: Novelty concerns.
Thank you for your comments. As detailed in the General response, our core contribution is to propose a novel Vision-Language-Action (VLA) model architecture termed OmniJARVIS, as shown in **Figure 1 of the supplementary PDF**, aiming at resolving the issues including inference efficiency and language and annotation bottleneck. Please let us know if you still have further concerns about the novelty and we're more than happy to assist!
> W2: Training/Inference efficiency analysis and comparison.
Thank you for your valuable feedback. We’ve added comparisons of inference speed, memory usage, and model parameters to **Table 4 in the supplementary PDF**:
**Inference Efficiency Comparison**:
The inference efficiency experiments are conducted on the same workstation with one RTX3090 GPU.
1. Inference Speed: Given Minecraft’s rendering speed of approximately 25 fps, we found that the inference speed of models using a native action tokenizer (direct action output), such as GROOT and VPT, inversely correlates with model size: VPT models of the same architecture show decreasing inference speeds as parameters scale from 1x to 2x to 3x.
2. Memory Usage: Models based on language tokenizers, which require in-context learning facilitated by ChatGPT, exhibited the lowest inference speeds due to their complex computational requirements.
3. OmniJARVIS Performance: Thanks to its self-supervised behavior tokenizer and hierarchical architecture, OmniJARVIS achieves faster inference speeds (16.62 fps) even with a larger parameter count (7.2 billion) compared to native tokenizer models like VPT-3x (0.5 billion parameters at 12.34 fps).
**Training Efficiency**:
OmniJARVIS enhances training efficiency by encoding N=128 frames of trajectory data into k=5 behavior tokens. This compression allows the model to handle data more effectively compared to models like RT-2 and OpenVLA, which predict actions directly. This results in a significant reduction in training data volume and corresponding training time.
These quantitative results have been included in the revised paper to substantiate our claims regarding the efficiency of OmniJARVIS compared to other approaches.
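As back-of-the-envelope arithmetic for the training-efficiency claim above (using only the N and k stated in this response, and assuming a direct-action model emits at least one token per frame):

```python
N_FRAMES = 128  # frames encoded per behavior segment (N in the text)
K_TOKENS = 5    # FSQ behavior tokens emitted per segment (k in the text)

# Lower bound on how much shorter the action portion of a training
# sequence becomes compared with one-token-per-action prediction.
compression = N_FRAMES / K_TOKENS
```

That is, at least a 25.6x reduction in action tokens per segment; direct-action models such as RT-2 typically emit several tokens per action, which would make the effective reduction larger still.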
> Q: cost for synergized dataset.
Due to the relatively certain generation of Thought, Memory, and Instruction, we used `gpt-3.5-turbo` to synthesize this data, and processing the Contractor dataset cost around $400.
> L: Add a separate "Limitations" section and Generalization of OmniJARVIS.
Thank you for your suggestion. We have added a dedicated "Limitations" section in the **General Response** to address the scope of validation for OmniJARVIS. Initially, the model was validated solely within the Minecraft environment. Recognizing the need for broader applicability, we have also conducted preliminary generalization experiments in Atari Montezuma’s Revenge environment. These results, discussed in the **General Response**, demonstrate OmniJARVIS’s potential for adaptation to different scenarios. We will provide further details on these aspects in the updated manuscript to ensure a comprehensive understanding of the model’s limitations and capabilities.
---
Rebuttal Comment 1.1:
Comment: Thanks for the sufficient response in rebuttal, which greatly addressed my concerns. Moreover, I think the framework comparison in Fig.1 of the rebuttal pdf is clear and valuable, which should be highlighted and added to the revised paper together with other required results. Overall, I'm happy to see the paper being accepted, thus I would raise my score to 7 (accept).
---
Reply to Comment 1.1.1:
Comment: Thank you for your response and for increasing the score. Following your suggestions, we will add the framework comparison in Fig.1 and other experimental results to the revised paper.
If you have any further questions, please feel free to discuss them with us.
---
Rebuttal 1:
Rebuttal: Thank you all. We would like to first address some shared concerns.
* Comparative Framework of existing Vision-Language-Action (VLA) Models.
According to the structure of the model and the training methods, the current mainstream VLA models can be roughly divided into three categories (**Figure 1 of the supplementary PDF**):
**(a)** depicts a model that, upon receiving a language instruction, directly produces motor commands (actions) based on the environmental state, facilitating immediate interaction with the environment at a unified frequency. Smaller models with <1B parameters like *VPT* maintain higher interaction speed (>20Hz), though their capability for complex reasoning tasks is limited. Larger models with >7B parameters, such as *RT-2*, offer enhanced performance but operate at significantly reduced speed (2-3Hz).
**(b)** illustrates a common approach utilizing large vision-language models (VLMs) for planning, which subsequently output language goals, e.g., *DEPS* and *PaLM-E*. A language-conditioned policy (controller) then translates these language goals into actions at a real-time interaction rate of 20Hz, with the high-level models re-planning at less than 1Hz. This hierarchical structure balances interaction frequency and performance, but it requires language as an intermediary and additional language annotations. Moreover, the high-level VLMs and language-conditioned policies are trained separately and can thus struggle on tasks that cannot be easily described in language (e.g., building tasks, for which language descriptions become tedious).
**(c)** OmniJARVIS (ours) mirrors the hierarchical structure of (b) but differs by employing a self-supervised encoder-decoder policy (GROOT) and FSQ quantization as a behavior tokenizer to connect the planner and the controller. The upper-level VLM produces behavior tokens, which are effectively **compact** representations of tasks, and drives a policy decoder to output actions. The behavior tokens are produced automatically by tokenizing interaction data with the behavior tokenizer (see Fig.1 in the main paper), eliminating the need for the language annotations on tasks/behaviors used by previous VLA schemes and therefore scaling more easily.
* Differences between GROOT and OmniJARVIS
Our core contribution is the construction of a brand-new Vision-Language-Action structure, as shown in **Figure 1 of the supplementary PDF**. GROOT can be classified as an (a)-class model. In addition to the aforementioned characteristics, GROOT is unable to accept language instructions and can only complete short-horizon tasks (atomic tasks), as shown in Tables 1 and 2 in the main paper. In OmniJARVIS, the decoder of FSQ-GROOT serves as the de-tokenizer: it conditions on the discrete FSQ codes generated by OmniJARVIS to output environment-acceptable actions $a_t$ based on observation $o_t$.
* Limitations of OmniJARVIS
1. More Evaluation Environments: While OmniJARVIS’s training method does not have specific requirements for the environment or data, allowing it to potentially generalize to other environments given the appropriate data, it has so far only been tested in the Minecraft environment. Collecting data from other environments to further validate its generalizability is ongoing work. We have finetuned OmniJARVIS in the Atari Montezuma environment and show preliminary results in **Supplementary PDF Figure 2**.
2. Large-Scale Data Requirements: Training large-scale Vision-Language-Action (VLA) models like OmniJARVIS requires a significant amount of interaction data. This can be resource-intensive and may limit the feasibility of deploying such models in environments where extensive data collection is difficult.
3. Model Hallucinations: Large models like OmniJARVIS may experience hallucinations, where the model generates incorrect or nonsensical outputs. This can lead to failures in decision-making processes for certain tasks, impacting the overall reliability of the agent.
* Generalization to Other Environments
Thank you for your suggestion. The primary validation of OmniJARVIS has been within the Minecraft environment as detailed in our paper. However, extending our approach to other environments to test generalization capabilities is an ongoing work. For instance, we have recently adapted OmniJARVIS to the Atari game Montezuma’s Revenge. Here, we created a dataset from 500 episodes played by an agent trained with Random Network Distillation, supplemented by random actions in early frames to enhance diversity. This dataset contains 1,823,699 transitions.
We then trained the FSQ-GROOT tokenizer on this new dataset and subsequently trained OmniJARVIS on the tokenized data. In initial tests, the finetuned OmniJARVIS achieved a score of 3600 in Montezuma’s Revenge, indicating promising transferability. We visualize a rollout trajectory in the **supplementary PDF Figure 2**.
The training components of OmniJARVIS include a self-supervised Behavior Tokenizer for short-horizon tasks and a Vision Language Transformer that leverages a long-horizon multimodal interaction dataset. This setup aligns with formats used in other embodied datasets, suggesting potential for broader applicability.
Due to time constraints, comprehensive experiments in additional environments are planned but not yet complete. Future work will focus on demonstrating OmniJARVIS’s generalization and robustness across diverse settings.
Pdf: /pdf/d96c7f564df43a26c399cea53682432040665276.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Local Differential Privacy for Mixtures of Experts | Reject | Summary: This paper introduces a novel mixture of experts model that applies local differential privacy to the gating mechanism. Their method leverages the one-out-of-n gating mechanism and provides specific generalization bounds.
Strengths: 1. The overall insight of the paper is clear and strong.
2. Improve the tightness of bounds on the risk for mixtures of experts models.
3. Reduce complexity by relying on fewer parameters.
Weaknesses: 1. Section 3 discusses PAC-Bayesian bounds for mixtures of experts, but it lacks insight into why PAC-Bayesian bounds are applied instead of other bounds. More explanation is needed here, similar to the explanation needed in Section 4 regarding Rademacher bounds.
2. In the experiment section, it is unclear why the chosen dataset is used for the experiments and why only 5 epsilon values were selected.
3. Even though there are very few existing guarantees, the experiment should include other methods as baselines and compare the results.
4. For the experiment section, considering only mixtures of n linear experts on binary classification tasks seems too easy. Other classification tasks should be added.
5. There is no description about the datasets used in experiments.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. For Section 3, why is there no reference for PAC-Bayes theory when you first mention it?
2. Why is there no reference for the Kullback-Leibler divergence?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: 1. Many references are missing, as I mentioned in the questions section.
2. The paper lacks a smooth flow, making it difficult to follow. Specifically, there is no clear insight or reasoning provided to explain why the existing mechanisms or bounds were chosen, as highlighted in weakness 1.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s feedback. Here, we address the concerns raised and provide clarifications to improve the overall evaluation of our work.
1. Regarding the choice of PAC-Bayesian bounds discussed in Section 3, we would like to clarify that PAC-Bayesian bounds have been extensively studied in recent literature and are known for providing tight theoretical guarantees on the risk of various models. This makes them particularly suitable for our analysis. The decision to use PAC-Bayesian bounds was motivated by their ability to offer robust generalization guarantees in probabilistic models, which aligns well with our focus on examining MoE models under LDP. We will revise Section 3 to provide a more detailed explanation of why PAC-Bayesian bounds were selected over other methods, including a discussion of recent advancements and their relevance to our work.
2. Similarly, in Section 4, we will expand on the use of Rademacher bounds. We will clarify that Rademacher bounds are employed to compare our results with existing studies, such as the one by Azran et al. cited in the paper. This comparison will help elucidate the role of Rademacher bounds in our analysis and justify their application. Additionally, we will note that our method can be extended to various other bounds, including mutual information-based bounds and others. This will be included in the revised version to provide a broader context for our theoretical approach.
3. Concerning the experiments conducted, they are primarily designed to demonstrate that the conditions imposed by our method are reasonable and that the chosen epsilon values are consistent with the theoretical framework. The aim is to validate the practicality of applying LDP within MoE models and to ensure that it does not adversely affect model performance for reasonable privacy parameters. Our main contribution is theoretical, focusing on presenting bounds that are significantly tighter than existing ones. We acknowledge that the current title and abstract may be misleading, and we propose to revise them to more accurately reflect the theoretical nature of our contributions and clarify any potential confusion.
4. Also, we will include a reference to the foundational PAC-Bayesian theory when it is first introduced in Section 3, to ensure that readers have a clear understanding of the theoretical background as well as a reference to the Kullback-Leibler divergence in the relevant section of the paper. | Summary: This paper introduces a novel approach to regularize mixtures of experts by imposing local differential privacy (LDP) on the gating mechanism. The authors provide theoretical justifications and derive PAC-Bayesian and Rademacher bounds tailored to this approach. Experiments conducted on various datasets demonstrate that using LDP as a regularizer improves the generalization ability of the models, especially in cases prone to overfitting. The method offers a balance between leveraging neural networks for gating and maintaining robust theoretical guarantees, making it a valuable contribution to the field of machine learning.
Strengths: This paper demonstrates originality by integrating local differential privacy (LDP) into the mixture of experts model, addressing privacy concerns while improving model generalization. The theoretical contributions, including PAC-Bayesian and Rademacher bounds, are rigorously derived and tailored to the new approach. The clarity of exposition makes complex concepts accessible, and the experiments validate the practical benefits of the method. The significance lies in enhancing the robustness and scalability of mixture of experts models, making them more applicable to real-world scenarios prone to overfitting.
Weaknesses: The primary concern regarding this paper lies in its significance. There have been previous works that incorporated differential privacy (DP) into the construction of mixture of experts models with privacy considerations. This paper, however, utilizes local differential privacy (LDP) to analyze the theoretical aspects of mixture of experts models. The introduction of LDP significantly alters the generalization behavior of these models because both LDP and DP are methods that inherently enhance algorithm robustness, thereby affecting generalization. If the main goal of the paper is to enhance privacy, it is imperative to compare this approach with existing DP-based methods and highlight what specific aspects LDP protects that traditional DP cannot. Without this comparison, the added value of using LDP over existing DP methods remains unclear. On the other hand, if the focus is on analyzing the generalization of mixture of experts models, the paper must justify the rationale behind incorporating LDP for this analysis, as LDP is not inherently required for mixture of experts models. The paper needs to elaborate on why LDP is a suitable and necessary tool for this analysis and how it fundamentally impacts the generalization properties of the models in a meaningful way. Additionally, while the theoretical contributions are substantial, the practical implications need to be demonstrated more robustly through experiments. Comparing the results directly with models using traditional DP methods would strengthen the paper by showing the practical improvements and specific scenarios where LDP outperforms DP. Furthermore, using a broader range of datasets could better illustrate the claimed benefits in robustness and scalability. By addressing these concerns, the paper can more convincingly argue the necessity and advantages of using LDP in mixture of experts models, thereby enhancing its significance in the field.
Technical Quality: 2
Clarity: 1
Questions for Authors: Please refer to weakness.
Confidence: 4
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: No.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s insightful feedback regarding the significance of our paper.
1. We acknowledge that there have been previous works incorporating differential privacy (DP) for regularization. Our paper, however, focuses on utilizing local differential privacy (LDP) on a part of the model (the gating network) to analyze the theoretical aspects of MoE models, with a particular emphasis on the generalization properties under LDP conditions. The strength of our bounds lies in the fact that LDP makes them tighter and ensures they depend logarithmically on the number of experts $n$, whereas existing bounds typically depend linearly on $n$. We leverage the one-out-of-$n$ mechanism to achieve these tighter bounds.
2. While traditional DP methods also add noise to regularize the model, they do so in a "global" manner. In contrast, our approach imposes LDP specifically on the gating network. We believe that global noise would result in looser bounds that, once again, depend linearly on $n$. Therefore, we believe our method of applying LDP locally is crucial for maintaining the tightness of the bounds.
3. Also, we are interested in understanding why the presentation was rated as 1 (poor). This feedback would help us improve the quality of our future presentations. We would greatly appreciate it if you could provide specific details or suggestions so that we can better prepare our work moving forward.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their efforts in clarifying and revising their manuscript.
That said, I will keep my original score. I do not feel that the rebuttal is able to resolve my concerns in a meaningful way.
I'm happy to discuss my opinion with other reviewers and area chairs. | Summary: This paper provides generalization bounds for a particular type of mixture of experts (MoE) networks. They focus on MoE architectures where an input $x$ first goes through a gating function $g$, and then gets routed to a single (one out of n) expert $i \in [n]$ according to the $g(x) \in [0,1]^n$ distribution. The final output of the MoE network is the output of the expert $h_i(x)$.
The authors observe that when the gating function $g$ has certain regularization properties, which correspond to local differential privacy (LDP), the resulting network has better generalization bounds than what appears in the existing MoE literature. The authors provide such bounds.
Finally, the authors evaluate LDP-regularized routing on binary classification tasks with mixtures of linear models. The results show that LDP regularization outperforms an un-regularized baseline.
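The one-out-of-$n$ routing scheme summarized above can be sketched in a few lines; the `moe_predict` helper and its `gate`/`experts` arguments are hypothetical placeholders for illustration, not the paper's actual code:

```python
import numpy as np

def moe_predict(x, gate, experts, rng=None):
    """One-out-of-n mixture of experts as described above: the gating
    function g maps x to a distribution over the n experts, a single
    expert i is sampled from g(x), and the model outputs h_i(x)."""
    rng = rng or np.random.default_rng()
    probs = np.asarray(gate(x), dtype=float)   # g(x), a point in the n-simplex
    i = rng.choice(len(experts), p=probs)      # route to exactly one expert
    return experts[i](x)
```

With a degenerate gate (all mass on one expert) the routing becomes deterministic; otherwise each call draws a fresh expert from $g(x)$.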
Strengths: Mixture of experts models are still understudied, especially from a theoretical standpoint, so I appreciate the new analysis provided by this paper. The connection with differential privacy is creative and seems fruitful, although I have some reservations about it (see below).
The bounds do improve on existing generic bounds for this type of MoE models. The paper is well-written and easy to follow, and the experimental code is available.
Weaknesses: First, it is worth emphasizing that the MoE networks in this paper do not satisfy local differential privacy themselves. The gating network is not even *trained* with differential privacy. This paper only uses local differential privacy as a regularization condition on the *gating network only* at *inference* time, which does not provide meaningful privacy guarantees. That is not immediately clear from the title of the paper. To be fair, this work does not try to achieve any privacy goals, and is entirely focused on generalization bounds. But if privacy is not needed, then it is not clear why DP is the right tool for the job. The paper directly uses LDP (it could have been called something like "exponentially regularized routing") but does not motivate this choice. Are there other forms of regularization that could achieve similar or better bounds? While there are some known connections between robustness, differential privacy, and generalization, they are not mentioned here. In the context of MoEs, some large models already regularize their gating functions (e.g., the Switch Transformer adds some "jitter" noise to the routing logits).
Next, the paper motivates the study of MoE models by mentioning recent progress with LLMs such as the Switch Transformer, which uses multiple layers of experts (deep MoE) and combines the output of different experts. All the modern LLM MoE models I am aware of are such deep MoEs. Meanwhile, the paper focuses on simple shallow MoE models with a single gating network followed by a single layer of experts, which limits the potential impact of the paper in my opinion. While theoretical bounds may be of interest even on shallow MoE models, I would appreciate at least some discussion about whether the authors' approach can generalize to deeper models.
Finally, the experiments have some limitations, which mostly stem from the two previous concerns.
* The authors only evaluate a single, rather simplistic (3-layer MLP gating network followed by linear experts), MoE architecture. More concerningly, they use a fixed number of experts (n = 100), thereby missing an opportunity to evaluate their claim that "we can have many more experts with almost no penalty from the theoretical point of view".
* The only baseline is "No LDP", which I think is a quite weak baseline. It is not entirely surprising that adding some regularization, in the form of LDP routing, improves generalization compared to a completely un-regularized baseline. How about other forms of regularization, such as dropout, clipping, or jitter noise (which already exists in the context of MoEs)?
* Another baseline would be a non-MoE model, with a comparable number of parameters, e.g., even a simple, dense, multi-layer perceptron. Showing that shallow MoEs outperform dense models would alleviate concerns about the practical relevance of this work.
Minor comments:
* Table 1 might be more readable as a graph.
* It is quite surprising to see MNIST being qualified as a "large" dataset, for which a 4-layer network takes 3 hours to train on a GPU, in 2024.
* Also, it is unclear why MNIST has to be broken down into 3 binary classification tasks.
Technical Quality: 2
Clarity: 4
Questions for Authors: Have you considered other forms of regularization (including the noisy routing techniques that are already used in practice by MoE)? Is there any justification why local differential privacy would be the type of regularization that gives the most desirable generalization bounds?
Confidence: 2
Soundness: 2
Presentation: 4
Contribution: 2
Limitations: The authors adequately addressed the limitation they identified (difficulty of tuning epsilon), even though this is not the main limitation of this work in my opinion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their thoughtful and detailed feedback. Your insights have highlighted several important areas for improvement and clarification in our work. We appreciate the opportunity to address the concerns raised.
1. The critique regarding the use of local differential privacy (LDP) as a regularization technique rather than as a strict privacy guarantee is well taken. Our primary contribution is the application of LDP specifically to the gating network within the mixture of experts (MoE) framework. This targeted use of LDP allows us to derive tight theoretical bounds on the model’s risk, which supports the impressive empirical performance of MoE models. We acknowledge that the title and abstract could better reflect this emphasis on generalization. We will revise these sections in future versions to more clearly convey our focus on theoretical bounds and generalization, rather than privacy. However, it is important to mention that the LDP condition can be achieved by ensuring that each expert is also $\epsilon$-LDP. In this case, the overall model would then provide $2 \epsilon$-LDP due to the composition of privacy guarantees.
2. Regarding the baselines, while it would indeed be beneficial to compare LDP with other regularization methods such as dropout, clipping, or jitter noise, our current bounds are specifically tied to the LDP condition. Incorporating other forms of noise might not yield bounds with the same tightness and may not align with the theoretical analysis we have established. This is an interesting direction for future research, and we plan to explore how these alternative methods impact generalization bounds in subsequent studies.
3. The focus on shallow MoE models was intentional, aimed at providing clear insights into the theoretical bounds. However, we recognize the value in extending our analysis to deeper MoE architectures, such as those used in modern large language models (LLMs) like the Switch Transformer. We are planning follow-up studies to investigate whether our approach generalizes to more complex and deeper MoE models and will include discussions on this in future revisions.
---
Rebuttal Comment 1.1:
Comment: Thank you for the answer. I appreciate the clarification regarding the title and the use of differential privacy.
While composing LDP experts with a LDP gating function does indeed give LDP guarantees for the whole network, these guarantees only hold for a single inference. Since inference-time DP is rarely used and less relevant than DP training of neural networks, I think that this is an interesting side remark but not a particularly strong contribution from the paper, so I agree with the new framing/title/abstract you propose.
I still believe that other baselines or more context about why LDP regularization is a natural choice would strengthen the paper, but I am no longer strongly opposed to accepting this paper.
---
Reply to Comment 1.1.1:
Comment: Thank you for your insightful feedback. We acknowledge your remark regarding the baselines, and we will compare our regularization method with other techniques such as dropout and adding noise in future revisions.
Regarding the use of the LDP condition, we recognize the importance of justifying our choice and would like to provide some clarification: Our motivation was to find a middle ground between input-independent mechanisms, such as weighted majority vote classifiers, which offer tight theoretical guarantees on risk, and input-dependent aggregation mechanisms, such as the best-known versions of mixtures of experts, which excel in practical performance. Our analysis of existing bounds on mixtures of experts revealed that they can be loose in certain cases—particularly when the weights provided by the gating network do not depend, or depend only *minimally*, on the input. We aimed to bridge these two approaches by introducing a unified framework that quantifies this dependence and incorporates it into the theoretical bounds. | Summary: The authors consider the mixtures of experts models, in particular the one-out-of-n gating mechanism for ease of theoretical analysis, and show that applying a soft-max, which is also the exponential mechanism, on the gating mechanism gives LDP and can improve generalization. The privacy techniques are largely the same as previous work, PATE, but specifically applied to mixtures of experts. The authors then provide theoretical analysis showing generalization bounds for this approach.
Unfortunately, I’m not familiar enough with the mixture of experts literature to evaluate the novelty of applying the soft-max and the corresponding theoretical guarantees. I am rather surprised, though, that the soft-max has never been applied, and I am still somewhat confused about what the previous techniques were for one-out-of-n mixtures of experts.
Strengths: The authors show that choosing an expert through the soft-max provides better generalization. The privacy guarantee is then essentially proportional to the regularization factor (\beta) for the soft-max application. Further they give theoretical generalization bounds for this approach.
Weaknesses: Unless I am mistaken (please correct me if I’m wrong) the authors are not providing privacy guarantees for the model itself, but only one inference call to the model. In particular, if feature vector x is input to the model, then the expert is chosen randomly according to the soft-max / exponential mechanism, which is \epsilon-LDP. If inference was then run again, suppose even on the same feature vector, then the random draw from experts would occur again. This is known as composition in the privacy literature, and the privacy guarantees would now be 2*\epsilon.
Providing privacy guarantees on only one inference call to a model is both not very useful nor interesting due to the composition properties.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weaknesses section.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: See weaknesses section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for providing valuable feedback. We recognize the need to clarify our main contributions. The softmax function and LDP are well-known techniques that have been used in previous works. However, we believe that our contributions are distinct and significant since they use these techniques as an original way of getting theoretical guarantees on the risk of our models. Here are some explanations regarding the review:
1. The reviewer is correct in noting that while our gating networks satisfy the $\epsilon$-LDP condition, our models do not inherently guarantee this privacy level since we do not impose constraints on the experts. However, LDP of the whole model can be achieved by ensuring that each expert is also $\epsilon$-LDP. Under these circumstances, the overall model would then provide $2\epsilon$-LDP due to the composition property. Our innovation is in applying LDP specifically to the gating network within the mixture of experts. This strategic application allows us to derive tight theoretical bounds on the model's risk, substantiating the impressive empirical performance of mixtures-of-experts models.
2. We realize that the title and abstract of our paper may have led to some confusion regarding our contributions. We will revise these sections to better reflect the novelty and significance of our theoretical analysis and the specific application of LDP in gating networks.
We hope these clarifications help convey the originality and scientific value of our work.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their comments and clarifications! Regarding point 1), I can appreciate that this can be part of ensuring LDP, but ensuring that each expert is LDP is the much harder part here, not the exponential mechanism for choosing the expert. Further, the idea of using the exponential mechanism or noisy max on $\epsilon$-DP (or LDP) experts/models to choose one for inference is not a new concept at all. Also, the $\epsilon$ parameter here essentially just acts as a regularizer for the softmax function, which is also not a novel concept.
The authors have some nice novel theoretical contributions! But I think the necessary re-write is too substantial at this time and in my opinion the connection to LDP is more of a remark given the well understood connection between exponential mechanism and softmax with a regularizer. I will keep my score. | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for their thorough analysis and valuable feedback on our paper. The comments have allowed us to take a step back and reflect on our work, especially regarding the presentation of our contributions. The reviews have made us realize that our title and abstract are misleading and should be changed to make it clear that our article is about theoretical guarantees on the risk of mixtures of experts.
Indeed, very little theoretical work has been done on mixtures of experts, and we wanted to provide a mathematical analysis to back up the impressive empirical results achieved by mixtures of experts. Note that the existing generalization bounds depend linearly on the number of experts, whereas ours only depend on the logarithm of the number of experts, which makes them much tighter than the existing bounds, under reasonable conditions.
Our guarantees on the risk of mixtures of experts are obtained by imposing local differential privacy (LDP) on the gating network of our models. The motivation for this is that imposing LDP amounts to making the outputs of the gating network less dependent on the particular example being classified, which can be seen as a way of controlling its complexity. This has the benefit of entirely eliminating the complexity/KL divergence term associated with the gating network from our risk bounds; instead, generalization is controlled by the parameter $\epsilon$. This novel approach bridges the gap between two existing ensembling methods: one that uses an input-independent aggregation mechanism, as in weighted majority vote classifiers, and another that employs an input-dependent mechanism without any constraints regarding the dependence between the input and the output of the gating mechanism, as in the best-known mixtures of experts models. Our method generalizes these approaches, with $\epsilon = 0$ replicating the first method and $\epsilon \to \infty$ replicating the second.
Note that LDP is just a formal condition we have found helpful in our theoretical analysis, and we are *not* claiming that our models satisfy any privacy guarantees, nor was this the point of our work. We were only looking to understand the generalization of mixtures of experts by providing bounds on their risk. However, LDP can be satisfied by the whole model by ensuring that our experts satisfy $\epsilon$-LDP. In this case, the mixture of experts would satisfy $2\epsilon$-LDP. To clarify this point, we propose including this explanation in Section 2.2.
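The two limiting regimes of the gating mechanism can be illustrated with a small exponential-mechanism (softmax) gate; the `ldp_gate` helper and its `eps/2` scaling are our own illustrative assumptions (the textbook exponential-mechanism construction for scores bounded in [0, 1]), not necessarily the paper's exact formulation:

```python
import numpy as np

def ldp_gate(scores, eps, rng=None):
    """Sample one expert out of n via the exponential mechanism.

    Scaling bounded gating scores by eps/2 before the softmax is the
    standard exponential-mechanism construction; for scores in [0, 1]
    the sampled index is then an eps-LDP view of the input.
    (Illustrative sketch only.)"""
    rng = rng or np.random.default_rng()
    scaled = (eps / 2.0) * np.asarray(scores, dtype=float)
    scaled -= scaled.max()                        # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs), probs
```

Here `eps = 0` collapses to a uniform, input-independent gate (majority-vote-style aggregation), while a large `eps` approaches the usual unconstrained, argmax-style input-dependent routing.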
Also, please note that our experiments have been conducted solely to support our theory and to show that the conditions imposed on our gating networks are reasonable. As we showed, in addition to providing theoretical guarantees, the imposition of LDP on the gating network does not deteriorate the quality of the learning, but rather acts as a form of regularization. The aim of this paper was not to beat state of the art mixtures-of-experts models. Instead, our aim was to take steps toward a theoretical analysis of their performance.
For the reasons mentioned above, we suggest changing the name of our article to *Tighter Risk Bounds on Mixtures of Experts* and editing our abstract as follows:
*In this work, we provide theoretical guarantees on the risk of mixtures of experts by imposing local differential privacy on their gating mechanism. These bounds are specifically tailored for mixtures of experts provided with the one-out-of-$n$ gating mechanism rather than the more conventional $n$-out-of-$n$ mechanism and depend on the number of experts only logarithmically. This makes them much tighter than the existing bounds, under reasonable conditions. Experimental results support our theory, demonstrating that our approach enhances the generalization capability of mixtures of experts and validates the feasibility of the imposed conditions.* | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The authors take the first step (I thought so at first) toward MoE under LDP theoretically. Reading again, I found that the authors seem to have raised the utility lower bound of existing studies. Few experiments could be found. Perhaps I am not an expert in MoE, but the paper really gave me a hard time.
Strengths: Important problem.
Weaknesses: 1. Perhaps because I am not an expert in MoE, I cannot tell from the authors' introduction what the challenges are.
2. The experimental results and application scenarios are not clear.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. According to my rough understanding, the proposed approach seems to have some similarity with ensemble learning. So what is the difference between the proposed approach and existing ensemble learning under LDP?
2. MoE seems to be widely used in deep learning, so what are the differences between the proposed method and DP-SGD or LDP-SGD? Is there LDP-SGD-MoE?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: see weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for reading our submission and providing valuable feedback. We believe that we should have emphasized more that our main contribution lies in the theoretical risk bounds presented in the article, which can be much tighter than existing bounds. In light of the review, we realize that the title and abstract of our paper are confusing and that certain points regarding our contribution have to be clarified.
1. To the best of our knowledge, there are no works that impose local differential privacy on gating networks in mixtures of experts or comparable models. Our method bridges the gap between two existing ensemble methods: one that aggregates the outputs of experts using a mechanism that does not depend on the input, and another that aggregates the outputs of predictors using an input-dependent mechanism without any restrictions on this dependency. Typical mixtures of experts are an example of the latter case. Our method is a generalization of these two methods, since the first case can be obtained by setting $\epsilon$ to 0 and the second can be obtained for an $\epsilon$ that tends to infinity.
2. Existing methods such as DP-SGD and LDP-SGD share similarities with our methods in the sense that they add noise to satisfy privacy conditions or to regularize models. However, in our case, we impose this condition on only one part of our model, which is the gating network. We use this well-known technique (LDP) as a tool to obtain theoretical bounds on the risk of our models in order to support the impressive empirical results of Mixture of Experts.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. I would keep my score. | null | null | null | null | null | null |
DeTeCtive: Detecting AI-generated Text via Multi-Level Contrastive Learning | Accept (poster) | Summary: This paper presents DeTeCtive, a new algorithm for detecting AI-generated text. The key insight in this paper is that instead of treating AI-generated text detection as a binary classification problem, it should be treated as a multi-class "author style" classification problem. Using this insight, the authors develop a contrastive learning algorithm which learns style vectors for each LLM style, which are then stored in a style vector database for inference via KNN classification. The learnt style vector database can easily be expanded at inference to accommodate new text generated from potentially OOD LLMs.
The authors perform experiments on 3 existing datasets with outputs from 8 to 27 different LLMs, and show that DeTeCtive outperforms some existing methods. The authors complement their work with ablation studies on their loss function, and analysis experiments in OOD settings.
Strengths: 1. The paper presents an interesting reformulation of AI-generated text detection, as a multi-label "LLM style" classification problem. The paper presents an intuitive contrastive learning algorithm to learn style vectors for each LLM style, which are then stored in a style vector database for inference.
2. It seems to be easy to adapt the proposed algorithm towards OOD LM-generated text. By simply computing vector representation on an OOD LLM's generated text, the algorithm can support AI-generated text detection on this OOD LLM in future cycles. The authors perform analysis experiments to confirm its effectiveness.
3. While I do have numerous concerns with the experiments, the authors do make an attempt at a large-scale empirical evaluation of their method, considering 3 existing datasets with outputs from 8 to 27 different LLMs.
Weaknesses: I had some concerns about the experiments in this paper.
1. **Baselines seem to be quite weak / over a year old**. Most of the baselines used are either pre-2022 methods or DetectGPT, which seems unsuitable for a cross-model dataset like the one used in evaluation. How do newer methods like Binoculars [5] perform on the same benchmark? Also, why not compare against watermarking algorithms like KGW [1], EXP-Edit [2], SemStamp [3], or [4]? Also how well do commercial tools like GPTZero perform on the same datasets (https://gptzero.me)?
2. **Limited experiments on newer GPT-4 class LLMs**. AI-generated text detection is the hardest on the most human-like LLMs, which are the GPT-4 class models. However, almost no experiments were done on GPT-4 class models (except for a small subset of the M4 dataset). I encourage the authors to stress test their OOD generalization to outputs from GPT-4 class models like Claude Opus, Gemini 1.5 and the newer variants of GPT-4.
3. **Limited emphasis on attack robustness**. There are almost no experiments in the paper (besides the mention of OUTFOX paraphrases) showing the robustness of the DeTeCtive algorithm to paraphrasing [6, 7], text mixing attacks [8], and translation attacks. This is critical to establish the robustness of the method compared to alternative detectors.
4. **It's not clear whether gains over baselines are coming from the SimCSE initialization or training**. Looking at the ablation studies in Table 3 and baselines in Table 1, all ablated variants outperform all baselines. This makes me question the need for the complexity in the DeTeCtive method's loss function, and how much of the gains can be attributed to the SimCSE initialization rather than the proposed DeTeCtive algorithm. An ablation study that would help here is using the DeTeCtive loss function on alternative encoders like RoBERTa / BERT / SentenceBERT.
[1] - https://arxiv.org/abs/2301.10226
[2] - https://arxiv.org/abs/2307.15593
[3] - https://arxiv.org/abs/2310.03991
[4] - https://arxiv.org/abs/2305.08883
[5] - https://arxiv.org/pdf/2401.12070
[6] - https://arxiv.org/abs/2303.11156
[7] - https://arxiv.org/abs/2303.13408
[8] - https://openreview.net/pdf?id=DEJIDCmWOz
----
**After rebuttal**: I've decided to increase my score from 3 to 5 due to extra experiments on paraphrase robustness, and baseline comparisons against Binoculars.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. AI-generated text detectors typically need to operate in a low FPR setting, to minimize the risks of labeling innocent text as AI-generated. Given this, what are the Table 1 true positive rates at a low FPR of say 0-1%? TPR at low FPR ranges (0-1%) is a standard metric for evaluating AI-generated text detectors which has been used in many previous papers and blogs: https://arxiv.org/abs/2303.13408, https://openreview.net/pdf?id=DEJIDCmWOz, https://arxiv.org/pdf/2401.12070, https://foundation.mozilla.org/en/blog/who-wrote-that-evaluating-tools-to-detect-ai-generated-text/
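As a reference point for the metric named here, TPR at a fixed low FPR can be computed from raw detector scores roughly as in the numpy sketch below (illustrative; it assumes the convention that higher scores mean "more likely AI-generated"):

```python
import numpy as np

def tpr_at_fpr(scores_human, scores_ai, max_fpr=0.01):
    """TPR achievable while keeping the FPR on human text <= max_fpr.

    Picks the detection threshold as a high quantile of the human
    (negative-class) scores, then measures recall on the AI-generated
    (positive-class) scores at that threshold.
    """
    scores_human = np.asarray(scores_human, dtype=float)
    scores_ai = np.asarray(scores_ai, dtype=float)
    # Threshold such that at most max_fpr of human scores exceed it.
    thresh = np.quantile(scores_human, 1.0 - max_fpr)
    return float(np.mean(scores_ai > thresh))
```

For a full curve one would sweep thresholds (e.g. via `sklearn.metrics.roc_curve`) rather than fix a single quantile.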
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors acknowledged weakness #3 in their limitations section. But I think this is a critical limitation and analysis on paraphrase / attack robustness is necessary in AI-generated text detection papers.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *We would like to thank Reviewer Tv1c for the valuable feedback. The comments and suggestions have greatly helped us in improving the quality of our work. Please see below for our responses to your comments.*
***
## Baselines seem to be quite weak/over a year old.
1. We need to clarify that the comparison methods presented in our paper are commonly used baselines in the task of AI-generated text detection, and most of them are the latest state-of-the-art (SOTA) approaches, **as the table in the top-level Author Rebuttal shows**. All of the baseline schemes we identified compare themselves against DetectGPT. To align with related works in the field, we also include the corresponding results.
2. Watermarking algorithms rely on embedding specific markers **during the generation process**. In contrast, our neural-network-based approach does not require any intervention in the generation process and can universally detect all texts. Since our goal is to provide a general detection method **independent of the generation process**, rather than intervening in and controlling the model's generation process, a comparison with watermarking algorithms is neither suitable nor feasible. Moreover, watermarking algorithms are difficult to implement in closed-source models, and the flexible modifiability of open-source models also limits their widespread application.
3. The comparisons with Binoculars and GPTZero can be found in **Table 2 and Table 3 of the uploaded PDF file**, respectively. The experimental results confirm that our method continues to outperform these two supplementary approaches.
***
## Limited experiments on newer GPT-4 class LLMs.
In the out-of-distribution (OOD) detection experiment on the M4 dataset, a subset of the testing data is from GPT-4, and is utilized to evaluate the model's OOD detection capability. Our method demonstrates a performance advantage over comparative approaches on this dataset. Additionally, we supplement OOD experiments based on the "Unseen Domains & Unseen Models" dataset from the paper "MAGE: Machine-generated Text Detection in the Wild" presented at ACL 2024. **The testing data is sourced entirely from GPT-4**, and the results indicate that our method performs better than the comparative schemes. (Longformer† indicates that Longformer uses data from the testing set to determine the detection threshold.)
| Methods | HumanRec | MachineRec | AvgRec | F1 |
|---|---|---|---|---|
| Unseen Domains & Unseen Model | | | | |
| FastText | 71.78 | 68.88 | 70.33 | 70.21 |
| GLTR | 16.79 | 98.63 | 57.71 | 28.40 |
| Longformer | 52.50 | 99.14 | 75.82 | 68.45 |
| Longformer† | 88.78† | 84.12† | 86.54† | 86.42 |
| **DeTeCtive** | **75.46** | **97.88** | **86.67** | **84.93** |
***
## Limited emphasis on attack robustness.
As you mentioned, a portion of the M4 dataset has undergone paraphrasing enhancement by OUTFOX, and our method outperforms all comparative solutions on this dataset. Additionally, we have supplemented our experiments with assessments of attack robustness, with the results shown **in the top-level Author Rebuttal and Table 4 within the uploaded PDF file**. The results demonstrate that our approach possesses strong resistance to attacks, indicating robust performance.
***
## It's not clear whether gains over baselines are coming from the SimCSE initialization or training.
As described in **Section 4.2 of the paper**, we conduct experiments with **a variety of text encoders** to verify that our approach can enhance the performance of AI-generated text detection. **The results are already presented in Table 4 within the original submission**. This indicates that our method is not limited to a single encoder like SimCSE but is applicable to various encoders, and the performance gains come from our training.
***
## AI-generated text detectors typically need to operate in a low FPR setting, to minimize the risks of labeling innocent text as AI-generated.
Our classification approach is based on dense retrieval and KNN (K-Nearest Neighbors) classification. This approach differs from traditional probability-based classifiers in that **it does not rely on probability outputs**. Specifically, the KNN classifier makes decisions based on the proximity of neighboring samples, rather than estimating the probability distribution of categories. Due to the absence of probability outputs, our method cannot define and adjust the threshold for the False Positive Rate (FPR). Typically, a low FPR range (such as 0-1%) is calculated by selecting different thresholds on the model's output probabilities, thereby measuring the True Positive Rate (TPR) at a given FPR. In our setup, the classification outcome is derived from the distance measurement of the nearest neighbors, **without any involvement of probability scoring or threshold adjustment**, providing only a global FPR and TPR. Therefore, we are unable to provide the corresponding TPR data within a specific low FPR range, such as 0-1%.
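For illustration, the distance-based decision rule described here can be sketched as follows (`db_vecs`, `db_labels`, and `knn_predict` are hypothetical names for the stored style-embedding database and its lookup, not identifiers from our code):

```python
import numpy as np

def knn_predict(db_vecs, db_labels, query, k=3):
    """Label a query embedding by majority vote of its k nearest
    database embeddings under L2 distance. Note that no probability
    score is produced at any point, only distances and votes."""
    d = np.linalg.norm(db_vecs - query, axis=1)   # distance to every stored vector
    nearest = np.argsort(d)[:k]                   # indices of the k closest
    votes = [db_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)       # majority label
```

As described above, the output is a discrete label derived from neighbor distances, so there is no probability threshold to sweep for an FPR target.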
---
Rebuttal Comment 1.1:
Title: Thank you for the detailed response and experiments, raising score to 5
Comment: Thank you for the detailed response and additional experiments! I've decided to increase my score from 3 to 5 due to extra experiments on paraphrase robustness, and baseline comparisons against Binoculars/GPTZero. I still encourage the authors to do add watermarking baselines in the paper, perhaps in the supplementary material.
---
Rebuttal 2:
Comment: Thank you for your response and acknowledgment of our supplementary experiments. We are currently conducting experiments on watermarking dataset, this experiment requires a certain amount of time because we need to conduct evaluations on watermarking specific dataset, which entails some preliminary data processing work. We will include the results in the final version of the supplementary material within our paper. | Summary: This paper proposes to learn a text encoder with contrastive learning to cluster texts from different sources for fine-grained classification. A training-free incremental adaptation method is designed for detecting OOD data.
Strengths: 1: The innovation of TFIA is inspiring for OOD detection. Detection in information retrieval style is interesting and exhibits great performance.
2: The solid experiments across multiple datasets prove the effectiveness of the proposed method.
Weaknesses: 1: The proposed multi-level contrastive loss shares some similarities with existing work [1] so it is not exciting enough.
2: Although the authors conduct extensive experiments on multiple datasets, only few of them are designed specifically for AI text detection (GLTR, DetectGPT). They should provide more comparisons with SOTA methods [2,3].
3: As a work towards detecting AI-generated texts, more analysis of the features of texts should be provided. Otherwise, the framework could be applied to any binary classification problem (with hierarchical labels).
[1] Liu S, Liu X, Wang Y, et al. Does DetectGPT Fully Utilize Perturbation? Selective Perturbation on Model-Based Contrastive Learning Detector would be Better[J]. arXiv e-prints, 2024: arXiv:2402.00263.
[2] Verma V, Fleisig E, Tomlin N, et al. Ghostbuster: Detecting text ghostwritten by large language models[J]. arXiv preprint arXiv:2305.15047, 2023.
[3] Chen Y, Kang H, Zhai V, et al. Gpt-sentinel: Distinguishing human and chatgpt generated content[J]. arXiv preprint arXiv:2305.07969, 2023.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1: I am wondering if this framework would do a good job in authorship attribution [1]?
2: For the classification approach in ablation studies, is the classification head trained with your own data or initialized in other ways?
3: Could you please provide some explanation on the advantage of multi-level contrastive loss over pcl? It is confusing to me that fine-grained clustering could help with binary classification.
[1] Uchendu A, Le T, Shu K, et al. Authorship attribution for neural text generation[C]//Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). 2020: 8384-8395.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The authors mention they do not explore the robustness of the proposed method. The validation could be conducted under different attacks like paraphrasing, editing, prompting.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *We would like to thank Reviewer W7ZQ for the valuable feedback. The comments and suggestions have greatly helped us in improving the quality of our work. Please see below for our responses to your comments.*
***
## The proposed multi-level contrastive loss shares some similarities with existing work so it is not exciting enough.
We would like to elaborate on the differences between our method and previous methods from the following aspects:
1. The cited work [1] utilizes a form of contrastive learning that adheres to the traditional definition of positive and negative sample pairs, which is also the plain contrastive learning (PCL) approach that we compared against in our paper. Their use of contrastive learning in conjunction with a selective masking strategy aims to achieve better text representations. However, one of the core contributions of our work is the proposal of a multi-level contrastive learning (MCL) framework. We define multi-level positive and negative sample pairs and derive a loss function for multi-level contrastive learning, which is different from previous studies.
2. Additionally, we perform classification based on feature clustering, distinguishing it from traditional binary classification methods. Such a classification approach effectively enhances the model's detection capabilities and robustness.
3. Lastly, within the multi-level contrastive learning framework, our model also has the ability of Training-Free Incremental Adaptation, effectively improving out-of-distribution (OOD) detection capability. This is also an important capability that has not been explored or discovered in previous methods.
Comprehensive ablation studies on our multi-level contrastive learning framework are performed in Section 4.3. From the experimental results, it can be seen that our method surpasses plain contrastive learning framework. Each component improves the model's performance, thereby validating the effectiveness of our proposed framework.
***
## Although the authors conduct extensive experiments on multiple datasets, only few of them are designed specifically for AI text detection (GLTR, DetectGPT). They should provide more comparisons with SOTA methods.
Thank you for acknowledging the extent of our experiments. We want to clarify that the methods we compared across each dataset are commonly used baseline methods in the field of AI-generated text detection. The two comparison methods you referenced, Ghostbuster and GPT-Sentinel, have been factored into our research. In our paper, we make a comparison against T5-Sentinel, the latest work proposed by the authors of GPT-Sentinel, demonstrating superior performance relative to GPT-Sentinel. Detailed results can be observed in **Table 1 of our paper**. As for Ghostbuster, testing requires the OpenAI API, and because our test dataset is quite voluminous, it is not an appropriate choice for comparison. Instead, we select Binoculars as a stand-in comparison method, which outperforms Ghostbuster in their paper. Our approach outperforms all compared methods; see **Table 2 in the uploaded PDF** for details. Recently, an AI-generated text detection competition was held based on the M4 dataset. We also incorporate all the solutions from this competition into our list of comparison methods, details of which can be found in **Table 1 of the uploaded PDF file**. In the monolingual test set of the M4 dataset, we achieve the top score, surpassing all other participating schemes. In the multilingual test set, we also secure fifth place.
***
## As a work towards detecting AI-generated texts, more analysis of the features of texts should be provided.
In Section 4.3 of the submission, we provide a visualization analysis of learned text embeddings. Through the UMAP dimensionality reduction visualization, it can be seen that text embeddings of different types have successfully clustered together, which demonstrates the effectiveness of our method.
***
## I am wondering if this framework would do a good job in authorship attribution?
We have supplemented the performance of our proposed method on the task of authorship attribution detection on TuringBench dataset, **the results can be found at the top-level Author Rebuttal**. The experimental results indicate that our approach achieves state-of-the-art performance, demonstrating its ability to accomplish this task.
***
## For the classification approach in ablation studies, is the classification head trained with your own data or initialized in other ways?
Our classification head is trained with the training data and then tested.
***
## Could you please provide some explanation on the advantage of multi-level contrastive loss over pcl?
Please see more explanation in the comment below your review block.
***
## The authors mention they do not explore the robustness of the proposed method. The validation could be conducted under different attacks like paraphrasing, editing, prompting.
Thanks for your suggestions. In this paper, our focus is on proposing a general method for AI-generated text detection. As introduced in Section 4.1 of the paper, the M4 dataset includes testing data that has undergone paraphrasing attack, which can be used to verify the model's ability to resist attacks. We have also conducted additional experiments regarding attack robustness which can be found at **the top-level Author Rebuttal and the uploaded PDF file**, and the experimental results demonstrate that our method possesses strong resilience against attacks.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thank you for the detailed explanation and additional experiments. Here are some after-rebuttal questions:
* Q1
My major concern is that this work focuses too much on improving binary classification performance instead of providing insights about synthetic text, which deviates from the research topic -- do you think SCL is targeted at detecting synthetic data even if it is a commonly used baseline (even a strong baseline) in this area?
Many existing works about AI-generated text detection discuss the differences between AI-generated and human texts, which deepens the community's understanding of AI-generated texts. For example, DetectGPT finds the difference in log probability between the two categories. GPT-Sentinel traces integrated gradients to tokens to decide which tokens contribute to the classification. CoCo discovers the coherence structure difference between the two categories -- these are examples from the related work introduced in this paper.
I agree that the visualization results prove the effectiveness of the learned encoder. However, I argue that this is a success in classification (I think SCL is also able to cluster the data embedding well) but not in explaining the intrinsic difference between AI-generated and human texts -- the research problem in this paper.
This is a methodologically inspiring work. For the above-mentioned reasons, I lean toward rejecting this paper because of its weak relation to the problem it claims to solve. I am willing to increase my score if the authors address my concern.
* Q2
I have a question about the authorship attribution experiment. If I do not misunderstand, the authors indicate the result is in Table 2 of the attached PDF, which is the same (except for the addition of Binoculars) as Table 1 in the original paper. I assume the results in Table 1 of the original paper are for binary classification? But authorship attribution is a multi-class classification problem, and I did not expect the same results between the two tables. Please tell me if I am wrong about anything.
---
Rebuttal 2:
Title: Answer for the comment "Could you please provide some explanation on the advantage of multi-level contrastive loss over pcl?"
Comment: Suppose we liken our task to appreciating paintings, needing to categorize them into "Impressionist" and "non-Impressionist" genres. If we focus solely on these two broad categories, we might overlook the stylistic differences between works by Monet and Renoir within the Impressionist category. Fine-grained clustering, however, is akin to not only distinguishing between Impressionist and non-Impressionist works but also to further differentiating the styles of various artists within the Impressionist genre itself. When we can differentiate between Monet's and Renoir's works, it means we have gained a deeper understanding of Impressionist pieces, thus making us more adept at distinguishing between the "Impressionist" and "non-Impressionist" categories. This more nuanced classification aids in better understanding and appreciating the artwork, enhancing the precision of our discernment. Similarly, in AI-generated text detection, incorporating multi-level contrastive learning allows the model to more precisely identify and differentiate texts from various sources.
Our ablation study, as shown in Table 3 within the paper, also confirms the multi-level contrastive learning outperforms plain contrastive learning.
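To make the intuition above concrete, a minimal numpy sketch of a two-level contrastive objective is given below. This is an illustrative construction only, not the exact loss derived in our paper: the weights `w_model` and `w_family` and the loss form are assumptions chosen to show how same-model positives can be pulled together more strongly than same-family ones.

```python
import numpy as np

def two_level_info_nce(z, model_ids, family_ids, tau=0.1,
                       w_model=1.0, w_family=0.5):
    """Illustrative two-level InfoNCE-style loss (NOT the paper's exact
    formulation): each anchor treats same-model embeddings as strong
    positives and same-family embeddings as weaker positives, against
    all remaining embeddings in the batch."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity
    sim = np.exp(z @ z.T / tau)
    n = len(z)
    loss = 0.0
    for i in range(n):
        denom = sim[i].sum() - sim[i, i]               # exclude self-similarity
        for j in range(n):
            if j == i:
                continue
            if model_ids[j] == model_ids[i]:           # same model: strong pull
                loss += -w_model * np.log(sim[i, j] / denom)
            elif family_ids[j] == family_ids[i]:       # same family: weak pull
                loss += -w_family * np.log(sim[i, j] / denom)
    return loss / n
```

Setting `w_family = 0` recovers a plain single-level contrastive loss over model labels, which is the PCL baseline's level of granularity.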
---
Rebuttal 3:
Title: (Q1-part1) Response to the after-rebuttal questions raised by Reviewer W7ZQ.
Comment: We sincerely appreciate your acknowledgment of our proposed method as an inspiring work and for the commendable performance of our framework. This recognition serves as an encouragement in our research endeavors. Next, we will address your after-rebuttal questions in detail.
## Q1
Thank you for your thoughtful and detailed feedback. We appreciate the points you've raised and would like to address your concerns directly from the following perspectives.
* **The research objective and insights of our work.**
1. **Performance vs. Insights:** As stated in the Abstract and Introduction of the paper, our insight is that the key to accomplishing AI-generated text detection resides in distinguishing the multi-level writing styles of different "authors", rather than just modeling this task as a binary classification problem. Therefore, based on this key insight, we design the DeTeCtive framework. Specifically, we define multi-level positive and negative sample pairs according to the affinity relations between different language models and derive a multi-level contrastive learning loss function. Eventually, we accomplish the learning of multi-level features through contrastive learning. Moreover, under the DeTeCtive framework, we do not focus on improving binary classification performance. Instead, within our framework, the model's learning objective is to capture multi-level features among different large language models. The principal objective function is the multi-level contrastive learning loss we designed, rather than a simple binary cross-entropy loss. Furthermore, our classification method is based on the K-Nearest Neighbors (KNN) clustering of embeddings. Therefore, the improvement in model performance results from successful learning of multi-level features rather than relying on the classification head. Hence, both the initial intent of the framework design and the way the model learns reflect our insights. We believe that the end-to-end method we proposed, which outperforms baselines (e.g., SCL/PCL) and all the comparison methods (e.g., DetectGPT, GPT-Sentinel, CoCo) by a significant margin, is not just a trivial improvement. The substantial performance gains suggest that our method is capturing internal patterns and nuances that are specific to AI-generated text. These internal insights could offer valuable perspectives on how machines differentiate between human and AI-generated content effectively.
2. **Human-Friendly Insights vs. Machine Efficiency:** As elaborated above, our understanding and insights into this task are reflected in the design of the method and framework. The visualization of learned text features also reveals that features at different levels (e.g., by model family, by individual model) cluster well, which is something typical supervised contrastive learning (SCL) cannot achieve due to the lack of multi-level relationship constraints. We believe that the insight upheld by our paper, namely, that the key to accomplishing the AI-generated text detection task resides in distinguishing the multi-level writing styles of different "authors", is intuitive and human-friendly. The effectiveness of our approach has also been proven through comprehensive experiments, and the results are consistent with our insights.
* **Weaknesses of existing works.**
1. **Unsatisfactory generalizability:** As the out-of-distribution (OOD) detection results in Table 2 of our original submission show, the performance of existing methods on OOD data is quite poor. This suggests that the handcrafted features and the insights they discovered are challenging to generalize to OOD data, significantly hindering the practical application of the algorithm. Considering the rapid development of language models, this presents a problem.
2. **Performance bottlenecks and lack of comprehensive evaluations:** Compared to our work, existing solutions lack a comprehensive evaluation on various benchmarks, testing scenarios and other applications. The experimental results of existing methods on multiple benchmarks are not ideal, indicating that there are performance bottlenecks. However, our method surpasses the existing state-of-the-art solutions by a large margin on each individual dataset.
---
Rebuttal Comment 3.1:
Title: Follow-up Response
Comment: I appreciate the detailed response from the authors. I totally understand the claimed novelty of this work. But I do not think the authors understand my point. As mentioned in the comments before, if this work is targeted at detecting AI-generated text, analysis of sequence patterns or special tokens should be provided to help the community gain insights about the AI-generated texts. Otherwise, I think this work is also applicable to and should be evaluated on datasets with similar label structures. I will increase my score a bit based on the additional experiments but I encourage the authors to conduct the analysis mentioned above.
---
Reply to Comment 3.1.1:
Comment: Dear Reviewer,
We would like to express our sincere gratitude for your response and acknowledgment of our additional experiments. The issue you have raised gives us a new perspective and a direction for reflection. Upon careful consideration, we think that the issue you pointed out could serve as a promising direction for future research. Once again, we appreciate your valuable comments.
Authors
---
Rebuttal 4:
Title: (Q1-part2) Response to the after-rebuttal questions raised by Reviewer W7ZQ.
Comment: * **The motivation and contributions of our research work.**
1. **Motivation:** In response to the aforementioned weaknesses of existing works, the motivation of our study is to propose an AI-generated text detection algorithm that can be widely applied to various language models and scenarios. It could effectively adapt to newly released large language models and other unseen domains, demonstrating good generalization and robustness.
2. **Contributions:** Our work contributes to the field of AI-generated text detection by proposing a novel method that can effectively detect AI-generated text. Based on our framework, we introduce Training-Free Incremental Adaptation (TFIA), a scheme to enhance the model's out-of-distribution (OOD) detection capabilities. We validate our method on multiple benchmarks, where it consistently outperforms existing solutions, especially in terms of OOD detection performance. The visualization of text embeddings also validates that our model captures multi-level text features, which aligns perfectly with our insights. Additionally, we supplement experimental results on attack robustness and authorship attribution detection, further demonstrating our method's superior generalization and robustness surpassing existing solutions. We believe such a novel framework, paired with comprehensive experimental evaluations and state-of-the-art performance, contributes to the community, fostering the application of our algorithm in real-world scenarios.
In conclusion, we believe that our approach offers a complementary perspective that enhances the performance of AI-generated text detection and also aligns well with our key insights. Moreover, we have conducted corresponding analysis and exploration based on our method, including extensive experiments under various testing scenarios (e.g., multiple benchmarks, OOD detection, authorship attribution detection, attack robustness) and the Training-Free Incremental Adaptation (TFIA) capabilities. These solid experimental results and findings are contributions of our work to the research community. Finally, we are open to incorporating further discussion on the interpretability of our method into the final version.
---
Rebuttal 5:
Title: (Q2) Response to the after-rebuttal questions raised by Reviewer W7ZQ.
Comment: ## Q2
Please allow us to clarify a possible misunderstanding about our supplementary results for the task of **authorship attribution detection**. These experimental results are detailed in the section titled ***"Supplementary experiments on other applications. (Reviewer yy53, Reviewer W7ZQ)"***, which can be found in the top-level block of ***Author Rebuttal by Authors***. Table 2 of the attached PDF file shows the results of the additional baseline schemes on several existing datasets for the task of AI-generated text detection, so those results are consistent with Table 1 of the original submission. Due to the length constraint of the uploaded PDF file, we only list the additional results of authorship attribution detection in the rebuttal reply. We commit to updating these results in the final version.
***
Thank you again for your comments. We sincerely hope that our responses can address your questions.
---
Rebuttal 6:
Comment: Dear reviewer,
We kindly request your feedback on whether our response has addressed your questions. If you have any remaining questions or concerns, we are happy to address them. Thank you for your time and consideration!
---
Rebuttal Comment 6.1:
Comment: Dear Reviewer W7ZQ,
Thanks again for helping review this paper! Since we are approaching the end of the author-reviewer discussion period, would you please check this author response regarding your concerns? We really appreciate it!
Best,
AC | Summary: The paper discusses the challenges of current AI-generated text detection methods, which often suffer from performance issues and poor adaptability to new data and models. The authors introduce a new framework called DeTeCtive, which uses multi-level contrastive learning to distinguish different writing styles rather than just classifying text as human-written or AI-generated. This approach improves the effectiveness of various text encoders, achieving top results across multiple benchmarks, particularly in out-of-distribution scenarios.
DeTeCtive's framework includes a dense information retrieval pipeline and a Training-Free Incremental Adaptation (TFIA) mechanism, enhancing performance without extra training when encountering new data. The method fine-tunes text encoders using a new multi-task auxiliary, multi-level contrastive learning loss to capture detailed features of different writing styles.
Extensive experiments show that DeTeCtive surpasses existing methods in detecting AI-generated text and excels at handling data from unseen models and domains. The paper also emphasizes the method's compatibility with various text encoders and its strong performance in diverse scenarios, making a significant contribution to the field of AI-generated text detection and promoting the safe use of large language models.
Strengths: The use of multi-level contrastive learning to distinguish writing styles is a novel method that goes beyond traditional binary classification. This allows for a more nuanced detection of AI-generated text, improving the overall detection performance.
The proposed method consistently outperforms existing techniques across multiple benchmarks, establishing new state-of-the-art results. This indicates the effectiveness of the multi-level contrastive learning framework.
Weaknesses: While the paper shows strong performance on the chosen benchmarks, it remains unclear how well the model would perform on other datasets or in different application contexts. This limits the generalizability of the findings to some extent.
Technical Quality: 3
Clarity: 3
Questions for Authors: How does the quality and diversity of the training data affect the performance of DeTeCtive? Are there specific datasets that are more beneficial for training the model?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The focus on distinguishing writing styles might overlook content-based cues that could also indicate AI-generated text. This might lead to missed detections if an AI successfully mimics human writing style while generating misleading content.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *We would like to thank Reviewer yy53 for the valuable feedback. The comments and suggestions have greatly helped us in improving the quality of our work. Please see below for our responses to your comments.*
***
## The paper's strong performance may not generalize to other datasets or applications.
We appreciate your acknowledgment of our proposed framework and experimental results. In the submission, we conduct evaluations on three widely-used datasets for AI-generated text detection. These include a variety of testing scenarios, such as multi-model, multi-domain, multi-language, text-paraphrase attack, and out-of-distribution (OOD) detection, among others. The experimental results demonstrate that our method achieves state-of-the-art performance in these various scenarios. Furthermore, we supplement our experiments on **attack robustness** and **authorship attribution detection**, and we also add performance comparisons with **more baseline methods** on existing datasets. Please refer to **the tables in the top-level Author Rebuttal** and **the uploaded PDF file** for details. Additionally, we supplement **OOD evaluation** based on the "Unseen Domains & Unseen Models" dataset from the paper "MAGE: Machine-generated Text Detection in the Wild" presented at ACL 2024. The testing data is entirely derived from GPT-4, with the corresponding results presented in the table below. The results also indicate that our method performs better than other solutions. These additional experimental results will be incorporated into the final version of our paper.
To sum up, our approach exhibits state-of-the-art performance across various datasets, testing scenarios, and applications.
| Methods | HumanRec | MachineRec | AvgRec | F1 |
|---|---|---|---|---|
| Unseen Domains & Unseen Model | | | | |
| FastText | 71.78 | 68.88 | 70.33 | 70.21 |
| GLTR | 16.79 | 98.63 | 57.71 | 28.40 |
| Longformer | 52.50 | 99.14 | 75.82 | 68.45 |
| Longformer† | 88.78† | 84.12† | 86.54† | 86.42 |
| **DeTeCtive** | **75.46** | **97.88** | **86.67** | **84.93** |
(Longformer† indicates that Longformer uses data from the testing set to determine the detection threshold.)
***
## How does the quality and diversity of the training data affect the performance of DeTeCtive?
In the experiments, we use publicly available datasets and strictly follow the divisions of the training and testing sets without introducing any additional data to assist model training.
Moreover, we believe that increasing data diversity and quality can effectively enhance the performance of our method. Under the proposed multi-level contrastive learning framework, diverse data yield more positive and negative sample pairs, enabling the model to learn more fine-grained writing-style features and relationships, thereby improving performance. We believe that scaling up the data volume within our proposed framework holds promise as a future research direction.
***
## Are there specific datasets that are more beneficial for training the model?
Within our proposed framework, the process is streamlined, requiring only texts together with labels indicating their source (human or a particular large language model). Following this, our method facilitates the construction of multi-level contrastive learning sample pairs. Consequently, no specific datasets are necessary to train our model.
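For concreteness, the pair-construction step described above can be sketched as follows. This is a minimal illustration assuming each training sample is a (text, source-label) tuple; the three levels shown (same model, both machine-generated, human vs. machine) are a hypothetical instantiation of "multi-level", not necessarily the paper's exact scheme.

```python
from itertools import combinations

def build_pairs(samples):
    """Group samples into multi-level positive/negative pairs.

    samples: list of (text, source) tuples, where source is e.g.
    "human", "gpt-4", "llama" (hypothetical labels).
    Returns (i, j, level) index pairs: "pos_model" (same source),
    "pos_class" (both machine-generated, different models), or
    "neg" (human vs. machine).
    """
    pairs = []
    for (i, (_, si)), (j, (_, sj)) in combinations(enumerate(samples), 2):
        if si == sj:
            pairs.append((i, j, "pos_model"))
        elif si != "human" and sj != "human":
            pairs.append((i, j, "pos_class"))
        else:
            pairs.append((i, j, "neg"))
    return pairs

data = [("a", "human"), ("b", "gpt-4"), ("c", "gpt-4"), ("d", "llama")]
pairs = build_pairs(data)
```

Under this toy labeling, every pair of samples contributes to some level of the contrastive objective, which is why no special-purpose dataset is required.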
***
## The focus on distinguishing writing styles might overlook content-based cues that could also indicate AI-generated text. This might lead to missed detections if an AI successfully mimics human writing style while generating misleading content.
We appreciate you pointing out the potential limitation of our method. We will address this issue from the following two aspects, hoping to alleviate your concerns:
1. As our approach is based on fine-tuning pre-trained text encoders, which already possess the capability of content-based semantic understanding (such as BERT, SimCSE, etc.), we believe that the model can understand the content and semantics of the text itself.
2. Further, we conduct paraphrasing-attack experiments for validation. Specifically, OUTFOX and DIPPER paraphrase texts generated by large language models to mimic human-written texts, employing this as a paraphrasing attack strategy. We train our model on the training set provided by OUTFOX and test under the following three scenarios: Non-attacked, OUTFOX paraphrasing attack, and DIPPER paraphrasing attack. We also compare with several baseline methods, and the results are as follows:
| Attacker | Detector | HumanRec | MachineRec | AvgRec | F1 |
|---|---|---|---|---|---|
| Non-attacked | RoBERTa-base | 93.8 | 92.2 | 93.0 | 92.9 |
| | RoBERTa-large | 91.6 | 90.0 | 90.8 | 90.7 |
| | HC3 detector | 79.2 | 70.6 | 74.9 | 73.8 |
| | OUTFOX | 99.0 | 94.0 | 96.5 | 96.4 |
| | **DeTeCtive** | **98.2** | **100.0** | **99.1** | **99.1** |
| DIPPER | RoBERTa-base | 93.8 | 89.2 | 91.5 | 91.3 |
| | RoBERTa-large | 91.6 | 97.0 | 94.3 | 94.4 |
| | HC3 detector | 79.2 | 3.4 | 41.3 | 5.5 |
| | OUTFOX | 98.6 | 66.2 | 82.4 | 79.0 |
| | **DeTeCtive** | **97.4** | **97.9** | **97.7** | **97.5** |
| OUTFOX | RoBERTa-base | 93.8 | 69.2 | 81.5 | 78.9 |
| | RoBERTa-large | 91.6 | 56.2 | 73.9 | 68.3 |
| | HC3 detector | 79.2 | 0.4 | 39.8 | 0.7 |
| | OUTFOX | 98.8 | 24.8 | 61.8 | 39.4 |
| | **DeTeCtive** | **95.4** | **98.6** | **97.0** | **96.9** |
The results indicate that even when AI-generated texts are subjected to paraphrasing attack to mimic human writings, DeTeCtive experiences only a minimal performance decline compared to the non-attacked scenario and remains effective in detecting AI-generated text. In contrast, other solutions we compared all exhibit significant performance degradation. Therefore, we believe that our framework demonstrates good robustness against mimicry of human writing style. | null | null | Rebuttal 1:
Rebuttal: We sincerely thank all the reviewers for their reviews and constructive feedback. Taking into account each concern and question posed by the reviewers, we have given thorough responses within our rebuttal. It is our hope that our responses will be kindly considered during the evaluation of our submission by the reviewers and the AC. Here, we summarize the primary concerns raised by the reviewers and provide a consolidated response.
***
## The elaboration on the effectiveness and timeliness of the comparison methods. (Reviewer W7ZQ, Reviewer Tv1c)
The benchmarks and comparison methods used in our submission are state-of-the-art research works in the field of AI-generated text detection. Here, we summarize all the comparison methods and benchmark information for your reference. (Note: the symbol "*" denotes methods supplemented during the rebuttal phase.)
| Method | Paper title | Conference |
|---|---|---|
| FastText | Bag of Tricks for Efficient Text Classification | EACL 2017 |
| GLTR | GLTR: Statistical Detection and Visualization of Generated Text | ACL 2019 |
| SCL | Supervised Contrastive Learning for Pre-trained Language Model Fine-tuning | ICLR 2021 |
| DetectGPT | DetectGPT: Zero-Shot Machine-Generated Text Detection using Probability Curvature | ICML 2023 |
| DIPPER | Paraphrasing evades detectors of AI-generated text, but retrieval is an effective defense | NeurIPS 2023 |
| T5-Sentinel | Token Prediction as Implicit Classification to Identify LLM-Generated Text | EMNLP 2023 |
| OUTFOX | OUTFOX: LLM-Generated Essay Detection Through In-Context Learning with Adversarially Generated Examples | AAAI 2024 |
| Longformer (MAGE) | MAGE: Machine-generated Text Detection in the Wild | ACL 2024 |
| GPT-who* | GPT-who: An Information Density-based Machine-Generated Text Detector | NAACL 2024 |
| Binoculars* | Spotting LLMs With Binoculars: Zero-Shot Detection of Machine-Generated Text | ICML 2024 |
| Benchmark | Conference |
| --------------- | ---------- |
| TuringBench | EMNLP 2021 |
| M4 | NAACL 2024 |
| Deepfake (MAGE) | ACL 2024 |
| OUTFOX | AAAI 2024 |
***
## The study of robustness of our methodology against attacks. (Reviewer W7ZQ, Reviewer Tv1c)
As stated in **Section 4.1 of our submission**, the testing data of the M4 dataset has undergone paraphrase attacks using the OUTFOX method. We believe evaluations on this dataset can serve as a reliable indication of our model's robustness against such attacks. Further, as acknowledged in the **Limitation and Future Work section of our paper**, our focus primarily lies in proposing a detection algorithm rather than an extensive study of defensive robustness, the latter being a specialized realm of study. Nevertheless, based on the valuable feedback from the reviewers, we supplement our research with experiments on attack robustness using the OUTFOX dataset, and the results are as follows.
| Attacker | Detector | HumanRec | MachineRec | AvgRec | F1 |
|---|---|---|---|---|---|
| Non-attacked | RoBERTa-base | 93.8 | 92.2 | 93.0 | 92.9 |
| | RoBERTa-large | 91.6 | 90.0 | 90.8 | 90.7 |
| | HC3 detector | 79.2 | 70.6 | 74.9 | 73.8 |
| | OUTFOX | 99.0 | 94.0 | 96.5 | 96.4 |
| | **DeTeCtive** | **98.2** | **100.0** | **99.1** | **99.1** |
| DIPPER | RoBERTa-base | 93.8 | 89.2 | 91.5 | 91.3 |
| | RoBERTa-large | 91.6 | 97.0 | 94.3 | 94.4 |
| | HC3 detector | 79.2 | 3.4 | 41.3 | 5.5 |
| | OUTFOX | 98.6 | 66.2 | 82.4 | 79.0 |
| | **DeTeCtive** | **97.4** | **97.9** | **97.7** | **97.5** |
| OUTFOX | RoBERTa-base | 93.8 | 69.2 | 81.5 | 78.9 |
| | RoBERTa-large | 91.6 | 56.2 | 73.9 | 68.3 |
| | HC3 detector | 79.2 | 0.4 | 39.8 | 0.7 |
| | OUTFOX | 98.8 | 24.8 | 61.8 | 39.4 |
| | **DeTeCtive** | **95.4** | **98.6** | **97.0** | **96.9** |
The analysis is as follows:
Through training with our proposed multi-level contrastive learning framework, we can discern fine-grained features of AI-generated and human-written texts. Furthermore, our usage of the K-Nearest Neighbours (KNN) algorithm for classification affords our approach a level of fault tolerance. Thus, minor disturbances prompted by certain attacks do not engender significant feature drift. Consequently, our method remains both effective and robust in detection.
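The fault-tolerance argument can be illustrated with a minimal KNN sketch (a toy example with synthetic Gaussian "embeddings", not the actual DeTeCtive encoder): a small perturbation of a query embedding rarely changes the majority vote among its nearest neighbors, because the neighborhood structure is stable under minor drift.

```python
import numpy as np

def knn_predict(query, bank_embs, bank_labels, k=3):
    """Majority vote over the k nearest (Euclidean) embeddings."""
    dists = np.linalg.norm(bank_embs - query, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = [bank_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)

rng = np.random.default_rng(0)
# Two well-separated synthetic clusters standing in for style features.
human = rng.normal(loc=0.0, scale=0.3, size=(20, 8))
machine = rng.normal(loc=2.0, scale=0.3, size=(20, 8))
bank = np.vstack([human, machine])
labels = ["human"] * 20 + ["machine"] * 20

clean = machine[0]
perturbed = clean + rng.normal(scale=0.2, size=8)  # mild attack-like drift
```

Here `knn_predict` returns "machine" for both the clean and the mildly perturbed query, mirroring the claim that small attack-induced disturbances do not flip the classification.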
***
## Supplementary experiments on other applications. (Reviewer yy53, Reviewer W7ZQ)
Taking into account the valuable feedback from the reviewers, we conduct **authorship attribution detection** on the TuringBench dataset. The results confirm that our method also demonstrates state-of-the-art performance on this task.
| Model | Precision | Recall | F1 | Accuracy |
|---|---|---|---|---|
| Random Forest | 58.93 | 60.53 | 58.47 | 61.47 |
| SVM (3-grams) | 71.24 | 72.23 | 71.49 | 72.99 |
| WriteprintsRFC | 45.78 | 48.51 | 46.51 | 49.43 |
| OpenAI detector | 78.10 | 78.12 | 77.41 | 78.73 |
| Syntax-CNN | 65.20 | 65.44 | 64.80 | 66.13 |
| N-gram CNN | 69.09 | 68.32 | 66.65 | 69.14 |
| N-gram LSTM-LSTM | 66.94 | 68.24 | 66.46 | 68.98 |
| BertAA | 77.96 | 77.50 | 77.58 | 78.12 |
| BERT-Multinomial | 80.31 | 80.21 | 79.96 | 80.78 |
| RoBERTa-Multinomial | 82.14 | 81.26 | 81.07 | 81.73 |
| **DeTeCtive** | **84.04** | **82.59** | **83.05** | **82.75** |
***
Finally, we address each reviewer's comments in detail below their reviews. **The attached PDF file** further includes comprehensive comparisons of more baseline methods on three existing datasets **(see Table 2)**, including the **Binoculars** method. Additionally, we have added comparisons with the **GPTZero** and **GPT-who** methods on the Deepfake dataset **(see Table 3)**. Furthermore, it includes a performance comparison with a recent competition (more than 100 participating teams) based on the M4 dataset **(see Table 1)**. Please kindly check them out. We will include all additional experimental results **in our final version**. Thank you, and we hope that our submission can be fully discussed in the next stage.
Pdf: /pdf/1b4d46c2fd0861e5eac6969e6ac2c96f9ef0bbe6.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
BELM: Bidirectional Explicit Linear Multi-step Sampler for Exact Inversion in Diffusion Models | Accept (poster) | Summary: This paper introduces a systematic framework, referred to as BELM, that is designed for the specific task of exact inversion of diffusion sampling. This framework encompasses several existing intuitive exact inversion samplers as its special cases. Subsequently, the authors derive an optimal variant within this framework via local truncation error minimization, named O-BELM, and investigate its theoretical properties, including zero-stability and convergence. Experimental results demonstrate that O-BELM can achieve exact inversion and high-quality sampling. The authors further explore its potential applications in downstream computer vision tasks.
Strengths: 1 This paper represents the first attempt to formalize the task of exact inversion of diffusion sampling as a rigorous mathematical problem. It highlights that bidirectional explicit property is a sufficient condition for a general sampler to achieve exact inversion.
2 This paper introduces O-BELM as an efficient sampler intended for practical use. Additionally, the paper offers theoretical guarantees for the newly proposed sampler, albeit under mild assumptions.
3 This paper derived the form of local truncation error for general linear multistep samplers within the context of diffusion models.
4 A comprehensive range of experiments has been conducted to confirm that the O-BELM exhibits both exact inversion and high-quality sampling capabilities.
5 This paper is well-articulated and clearly presented.
Weaknesses: 1 There is a typographical error in the caption of Table 3.
2 The information regarding the scheduler setting used in the experiments has not been included.
3 Some mathematical derivations in the paper are too succinct, making them difficult to follow. For instance, the transition from equation (30) to equation (31) is not straightforward as it involves a series of Taylor's expansions.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1 This paper examines the zero-stable property of the O-BELM algorithm and baseline samplers. However, I've noticed that there are other stability properties, such as Absolute-stability or Butcher-stability, for ODE solvers. I'm interested in knowing how the samplers discussed in this paper fare in regards to these stability terms.
2 I'm curious to understand how adopting this new sampler might affect the marginal distribution of the diffusion models. Could you elaborate on the potential impacts this could have?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have acknowledged the unexplored semi-linear structure as a limitation in the design of the BELM.
Also, the effects of utilizing O-BELM in advanced image editing like P2P remain unexplored, which is a limitation of O-BELM as well.
Additionally, the impact of the paper is primarily confined to the sampling sub-area within the diffusion-based model community.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your appreciation of our work, we will answer your questions one by one regarding these weaknesses/questions.
> **Weaknesses 1:** ''There is a typographical error in the caption of Table 3.''
We apologize for the oversight. We have now corrected the error and ensured that the caption accurately reflects the contents of the table. The caption of Table 3 should read "Comparison of different samplers on FID score ($\downarrow$) for the task of unconditional generation." We have conducted a thorough second revision and fixed all typographical errors.
> **Weaknesses 2:** ''The information regarding the scheduler setting used in the experiments has not been included.''
We understand the importance of this detail for the complete understanding and replication of our experiment. For these pretrained diffusion models, we utilize the noise scheduler delineated in their respective configurations, applying it uniformly across all our experiments. Given that our experiments do not necessitate the training or fine-tuning of diffusion models, there is no need to devise a new scheduler setting.
We have now added the necessary information about the scheduler setting in the Appendix C.6 section of the paper.
> **Weaknesses 3:** ''Some mathematical derivations in the paper are too succinct, making them difficult to follow. For instance, the transition from equation (30) to equation (31) is not straightforward as it involves a series of Taylor's expansions.''
We acknowledge that some of the transitions, such as from equation (30) to equation (31), may not be easy to follow. We have provided more detailed explanation of these derivations to ensure that the logic and process of our mathematical derivations are clear and easy to follow.
To elucidate the derivation from Equation (30) to Equation (31), we have included an intermediate illustration as follows:
$$
\begin{aligned}
& \sum\_{j=1}^{k} a\_{i,j} \bar{x}\_{i-1+j} +\sum\_{j=1}^{k-1}b\_{i,j} h\_{i-1+j}\bar{\boldsymbol{\varepsilon}}\_\theta(\bar{x}\_{i-1+j},\bar{\sigma}\_{i-1+j})\\\\
=&\sum\_{j=1}^{k} a\_{i,j} ( \bar{x}\_{i-1} + \sum\_{l=1}^{2k-2} \frac{1}{l!}(\sum\_{m=0}^{j-1}h\_{i+m})^{l}\bar{\boldsymbol{\varepsilon}}^{(l-1)}\_\theta(\bar{x}(t\_{i-1}),\bar{\sigma}\_{i-1}) ) \\\\
&+\sum\_{j=1}^{k-1}b\_{i,j} h\_{i-1+j}( \sum\_{l=1}^{2k-2} \frac{1}{(l-1)!}(\sum\_{m=0}^{j-1}h\_{i+m})^{l-1}\bar{\boldsymbol{\varepsilon}}^{(l-1)}\_\theta(\bar{x}(t\_{i-1}),\bar{\sigma}\_{i-1}) )\\\\
&+\mathcal{O}(({\sum\_{m=0}^{k-1}h\_{i+m}})^{(2k-1)})\\\\
=& \sum\_{j=1}^{k} a\_{i,j} \bar{x}\_{i-1} + \sum\_{l=1}^{2k-2}(\frac{1}{l!}\sum\_{j=1}^{k}a\_{i,j}( \sum\_{m=1}^{j}h\_{i+m-1})^{l})\bar{\boldsymbol{\varepsilon}}^{(l-1)}\_\theta(\bar{x}(t\_{i-1}),\bar{\sigma}\_{i-1})\\\\
&+\sum\_{l=1}^{2k-2}(\frac{1}{(l-1)!}\sum\_{j=1}^{k-1}b\_{i,j}h\_{i+j-1}( \sum\_{m=1}^{j}h\_{i+m-1})^{l-1})\bar{\boldsymbol{\varepsilon}}^{(l-1)}\_\theta(\bar{x}(t\_{i-1}),\bar{\sigma}\_{i-1})\\\\
&+\mathcal{O}(({\sum\_{m=0}^{k-1}h\_{i+m}})^{(2k-1)})
\end{aligned}
$$
We have also conducted a comprehensive review of our mathematical derivations, enhancing them with additional details to improve their clarity and ease of understanding.
> **Questions 1:** ''This paper examines the zero-stable property of the O-BELM algorithm and baseline samplers. However, I've noticed that there are other stability properties, such as Absolute-stability or Butcher-stability, for ODE solvers. I'm interested in knowing how the samplers discussed in this paper fare in regards to these stability terms.''
Zero-stability and Absolute-stability (abbreviated as A-stability) are the most commonly used stability conditions. A-stability is closely related to zero-stability; the primary difference is that A-stability concerns behavior at finite step sizes, while zero-stability concerns the limit of vanishing step size. However, A-stability is extremely demanding: by Dahlquist's Second Barrier Theorem [1], no explicit linear multistep method can be A-stable. Given the heavy cost of using implicit methods in the diffusion model area, A-stability is not a suitable criterion for analyzing diffusion samplers.
On the other hand, Butcher-stability (abbreviated as B-stability) is typically used to analyze the stability of Runge-Kutta type ODE solvers. For these solvers, B-stability implies zero-stability [2]. However, for non-Runge-Kutta ODE solvers, the implications of the B-stability condition remain unclear.
Therefore, it is reasonable to select zero-stability as the criterion for assessing stability in the analysis of diffusion samplers.
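As background, zero-stability of a linear multistep method is usually verified via the root condition on its first characteristic polynomial $\rho(\zeta)=\sum_j a_j \zeta^j$: all roots must lie in the closed unit disk, and any root on the unit circle must be simple. A minimal numeric check might look like the sketch below; the polynomials used ($\zeta^2-\zeta$ from the classical two-step Adams-Bashforth method, and the unstable $\zeta^2-2\zeta+1$) are textbook examples for illustration, not O-BELM's actual coefficients.

```python
import numpy as np

def is_zero_stable(rho_coeffs, tol=1e-6):
    """Root condition: every root of rho lies in |z| <= 1, and
    roots with |z| = 1 are simple. Coefficients highest-degree first."""
    roots = np.roots(rho_coeffs)
    for r in roots:
        if abs(r) > 1 + tol:
            return False  # root outside the closed unit disk
        if abs(abs(r) - 1) <= tol:
            # a boundary root must have multiplicity one
            if np.sum(np.abs(roots - r) <= tol) > 1:
                return False
    return True

# rho(z) = z^2 - z (roots 0 and 1): satisfies the root condition
stable = is_zero_stable([1, -1, 0])
# rho(z) = z^2 - 2z + 1 (double root at 1): violates it
unstable = is_zero_stable([1, -2, 1])
```

The same check applies to any BELM-type sampler once its coefficients are fixed, which is why zero-stability is a convenient criterion to verify in practice.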
> **Questions 2:** ''I'm curious to understand how adopting this new sampler might affect the marginal distribution of the diffusion models. Could you elaborate on the potential impacts this could have?''
The marginal distribution of the proposed sampler will also conform to the marginal distribution implied by the diffusion ODE.
Intuitively, the approximated $x_t$ obtained from the discretized sampler serves as a proxy for the underlying truth $\mathbf{x}(t)$ of the diffusion initial value problem (IVP).
Theoretically, our convergence analysis ensures that our approximated $\mathbf{x}_t$ will indeed converge to this underlying truth. The DDIM and other deterministic diffusion samplers, such as the DPM-solver, are also constructed to simulate this diffusion IVP problem. Their marginal distributions are identical, when omitting discretization errors.
Experimental results show that our O-BELM can generate high-quality samples, effectively demonstrating the capability of O-BELM to accurately model the underlying data distribution at time $T$.
**References**
[1] G. Dahlquist, A special stability problem for linear multistep methods, BIT 3, 27–43, 1963.
[2] Theorem 357D, Butcher, John Charles. Numerical methods for ordinary differential equations. John Wiley & Sons, 2016.
---
Rebuttal Comment 1.1:
Comment: After reading the careful response and other reviews, I will keep my score. Thank the authors for their response.
---
Reply to Comment 1.1.1:
Comment: Thanks for your valuable questions regarding our theory and for your generous appreciations to our work! | Summary: This paper introduces a novel method for inverting real images into a diffusion model. It presents Bidirectional Explicit Linear Multi-step (BELM) samplers aimed at minimizing the mismatch in DDIM inversion.
Strengths: The proposed Bidirectional Explicit Linear Multi-step (BELM) samplers seem reasonable and outperform the DDIM sampler.
Weaknesses: 1. The paper lacks discussion and comparison with several related works such as NMG, EDICT, DirectingInv, ProEdit, ReNoise, and others. These works also aim to address the limitations of DDIM inversion through various proposed solutions. It is essential for the authors to provide a thorough discussion and comparative analysis with these existing methods.
2. The evaluation metrics used in the paper may not be sufficiently convincing. DirectingInv, for example, introduces several metrics specifically tailored for evaluating editing tasks, which are generally considered more reliable. Furthermore, DirectingInv also establishes a standard benchmark dataset for diffusion inversion tasks. It would be beneficial for the authors to conduct experiments using this benchmark dataset to ensure a comprehensive study and rigorous evaluation of their proposed method.
3. In terms of reconstruction performance compared to AE, EDICT, and BDIA, O-BELM does not demonstrate any improvement.
Technical Quality: 2
Clarity: 3
Questions for Authors: Please see in weakness
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable reviews, we will answer your questions one by one regarding these weaknesses/questions.
> **Weaknesses 1:** The paper lacks discussion and comparison with several related works such as NMG, EDICT, DirectingInv, ProEdit, ReNoise, and others. These works also aim to address the limitations of DDIM inversion through various proposed solutions. ...
We understand that there are several techniques proposed to address the inexact inversion issue of DDIM within the context of **classifier-free-text-guided image editing**. These include the methods you mentioned, such as NMG[1], DirectingInv[2], ProEdit[3], and ReNoise[4], as well as others like NPT [5] or NT [6]. We argue that the proposed O-BELM and these techniques should not be considered as comparative algorithms for following reasons.
- **These methods are orthogonal.** O-BELM modifies the discretization formula to achieve exact inversion, while these techniques adjust the classifier-free-guidance mechanism. They address this problem from different directions.
- **They can be used together in the classifier-free-text-guided image editing.** Take DirectingInv as an instance, its inversion is just DDIM inversion and its forward process encompasses two state-interacting DDIM forward processes with different prompts. We can substitute the DDIM inversion/forward in DirectingInv to be O-BELM inversion/forward and get O-BELM+DirectingInv. We conduct experiments of origin DDIM+DirectingInv and O-BELM+DirectingInv in `Table 1 in attached PDF`, showing that O-BELM can be integrated with DirectingInv to further enhance the performance.
- **Their working scenarios differ.** O-BELM is built on the general diffusion IVP (Equation 11) and can guarantee exact inversion (Proposition 2) and minimized error (Proposition 4) for all tasks based on the diffusion ODE (PF-ODE). O-BELM always converges to the underlying IVP solution, as demonstrated by Proposition 5. This means that the BELM framework is compatible with a wide variety of diffusion-based tasks, irrespective of the data type (images or words), the task type (editing or interpolation), the guidance method (unconditional, classifier-free, classifier-based, or adjoint ODE-based), or the network structure (whether it includes an attention layer or not). In contrast, these techniques are developed specifically for the classifier-free-text-guided image editing task.
We will add a paragraph to discussion these works.
> **Weaknesses 2:** The evaluation metrics used in the paper may not be sufficiently convincing. DirectingInv, for example, introduces several metrics specifically tailored for evaluating editing tasks, which are generally considered more reliable. Furthermore, DirectingInv also establishes a standard benchmark dataset for diffusion inversion tasks. ...
Although we have conducted a thorough theoretical analysis of the BELM framework and use experiments to validate our theory, we agree that comprehensive evaluation metrics on a standard benchmark dataset are beneficial. To illustrate the efficiency of our proposed O-BELM on the downstream image editing task, we follow your valuable advice and compare O-BELM with baseline methods on the standard benchmark PIE-Bench established by DirectingInv [2]. We follow the experiment design of DirectingInv to evaluate in terms of eight metrics covering four aspects: structure distance, background preservation (PSNR, LPIPS, MSE, and SSIM outside the annotated editing mask), edit prompt-image consistency (CLIPSIM of the whole image and regions in the editing mask), and inference time. These results can be found in `Table 1 in the attached PDF` and `Table 2 in the Author Rebuttal`. It is shown that O-BELM outperforms the baselines on this benchmark.
> **Weaknesses 3:** In terms of reconstruction performance compared to AE, EDICT, and BDIA, O-BELM does not demonstrate any improvement.
This is because O-BELM, BDIA, and EDICT all fall within the BELM framework, and as Proposition 2 proves, any BELM method possesses the exact inversion property. However, since SD-1.5 is a latent diffusion model (LDM), errors are inevitably introduced during the encoding/decoding process of the AutoEncoder (AE) component of the SD model. It is a prevailing practice to take the reconstruction error of the AE as a lower bound of the LDM pixel-level reconstruction error [7]. O-BELM has already reached this lower bound.
We have conducted an additional experiment to assess the reconstruction error in the latent space of O-BELM and other baseline methods.
| Method| MSE on latents (10 steps) | MSE on latents (20 steps) | MSE on latents (100 steps) |
| :-- | :-- | :-- | :-- |
|DDIM|0.414|0.243|0.041|
|EDICT|0.000|0.000|0.000|
|BDIA|0.000|0.000|0.000|
|**O-BELM (Ours)**|**0.000**|**0.000** |**0.000**|
It is demonstrated that O-BELM (along with BDIA and EDICT) achieves exact inversion at the latent level, resulting in zero reconstruction error. O-BELM outperforms BDIA and EDICT in terms of sampling accuracy due to reduced local error (as shown in Table 1 of the original paper), not in terms of reconstruction error.
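To make the exact-inversion property concrete, the following is a minimal, hypothetical numpy sketch of a generic two-step bidirectional-explicit relation. The coefficients `a, b, c` and the dummy noise network are placeholders for illustration only, not the actual O-BELM coefficients or a diffusion U-Net: because the inversion and sampling passes solve the same algebraic relation, with the noise network evaluated at the same point, the latent-level reconstruction error is zero up to floating-point roundoff.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dummy "noise network": any function of (x, i) works, because exact
# inversion only needs the inversion and sampling steps to evaluate it at
# the same point x_i (the bidirectional-explicit property).
W = rng.standard_normal((4, 4)) * 0.1
def eps(x, i):
    return np.tanh(W @ x + 0.01 * i)

# Placeholder coefficients of a generic two-step relation
#   x_{i-1} = a * x_{i+1} + b * x_i + c * eps(x_i, i)
a, b, c = 0.9, 0.05, 0.02

def inversion_step(x_prev, x_cur, i):      # solve the relation for x_{i+1}
    return (x_prev - b * x_cur - c * eps(x_cur, i)) / a

def sampling_step(x_next, x_cur, i):       # the same relation, forward form
    return a * x_next + b * x_cur + c * eps(x_cur, i)

N = 10
x0, x1 = rng.standard_normal(4), rng.standard_normal(4)
xs = [x0, x1]
for i in range(1, N):                      # invert: build x_2 ... x_N
    xs.append(inversion_step(xs[i - 1], xs[i], i))

rec = {N: xs[N], N - 1: xs[N - 1]}         # reconstruct back down to x_0
for i in range(N - 1, 0, -1):
    rec[i - 1] = sampling_step(rec[i + 1], rec[i], i)

err = np.abs(rec[0] - x0).max()            # zero up to float roundoff
```

Since each sampling step is the exact algebraic inverse of the corresponding inversion step, `err` is at machine-precision level regardless of the dummy network used.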
**References**
[1] Cho, Hansam, et al. "Noise map guidance: Inversion with spatial context for real image editing." arXiv:2402.04625 (2024).
[2] Ju, Xuan, et al. "Direct inversion: Boosting diffusion-based editing with 3 lines of code." arXiv:2310.01506 (2023).
[3] Han, Ligong, et al. "Proxedit: Improving tuning-free real image editing with proximal guidance." WACV 2024.
[4] Garibi, Daniel, et al. "ReNoise: Real Image Inversion Through Iterative Noising." arXiv:2403.14602 (2024).
[5] Miyake, Daiki, et al. "Negative-prompt inversion: Fast image inversion for editing with text-guided diffusion models." arXiv:2305.16807 (2023).
[6] Mokady, Ron, et al. "Null-text inversion for editing real images using guided diffusion models." CVPR 2023.
[7] Wallace, Bram, Akash Gokul, and Nikhil Naik. "Edict: Exact diffusion inversion via coupled transformations." CVPR 2023.
---
Rebuttal 2:
Title: Looking forward for further discussions.
Comment: Dear Reviewer [eBdS],
Thank you again for your constructive feedback.
Our research is the first to construct a theoretically well-posed IVP model for the general inversion problem in diffusion sampling, as outlined in **Equation 11** and **Proposition 1**. Based on this IVP view, we have innovatively identified the Bidirectional Explicit condition, a sufficient prerequisite for achieving a mathematically precise inversion, as stated in **Proposition 2**. This condition does not merely reduce inversion error, but ensures exactness. Building on this condition, we have developed a generic formula for general exact inversion samplers, which we term Bidirectional Explicit Linear Multi-step (BELM) samplers, as detailed in **Equation 14**. These samplers incorporate several previous methods as special cases, as noted in **Remark 1**. We have conducted a thorough analysis of the Local Truncation Error (LTE) within the BELM framework, as described in **Proposition 3** and **Corollary 1**, and have proposed the optimal variant, O-BELM, as delineated in **Proposition 4**. We have also demonstrated that O-BELM possesses the advantageous property of zero-stability, as outlined in **Proposition 5**, which ensures its robustness to initial values. Additionally, O-BELM exhibits the beneficial characteristic of global convergence, also stated in **Proposition 5**, which prevents O-BELM from diverging during sampling.
In regard to downstream tasks, we have taken your insightful advice into account and conducted experiments on the **standard image editing benchmark**, **PIE-Bench**. Additionally, during our interactions with Reviewer [QGyL] and Reviewer [brf4], we have conducted further experiments to provide support for the effectiveness and robustness of our methods. These additional tests include the **ControlNet-based editing task** and the **style transfer task**, further showcasing the effectiveness and robustness of our approach.
We sincerely hope to engage in further discussions with you. Thank you for your time and consideration!
---
Rebuttal 3:
Comment: Dear Reviewer [eBdS],
Since the discussion period will end in a few hours, we will be online waiting for your feedback on our rebuttal, which we believe has addressed your concerns. We would highly appreciate it if you could take into account our response when having discussions with AC and other reviewers.
Thank you so much for your time and efforts. We apologize for our repetitive messages, but we are eager for your feedback.
Authors of Submission 2649
Title: Dear Reviewer [eBdS], we are eager for your feedback | Summary: The paper introduces the Bidirectional Explicit Linear Multi-step (BELM) sampler framework for exact inversion in diffusion models. The authors systematically investigate the Local Truncation Error (LTE) within the BELM framework and propose an optimal variant, O-BELM, which minimizes LTE for high sampling quality. Comprehensive experiments validate O-BELM's effectiveness in tasks like image reconstruction, editing, and interpolation.
Strengths: Novel Framework: The BELM framework generalizes existing exact inversion samplers and introduces a bidirectional explicit constraint to ensure exact inversion.
Theoretical Rigor: The paper provides a thorough theoretical analysis of the Local Truncation Error (LTE) and the stability and convergence properties of the proposed samplers.
Practical Applications: Demonstrates the practical potential of O-BELM in various tasks such as image reconstruction, editing, and interpolation.
Weaknesses: 1. As for downstream applications of diffusion inversion, style transfer [1] should also be included. I encourage the authors to apply this method to some style transfer tasks to show the robustness and effectiveness of this framework.
2. Many punctuation marks at the end of the formulas are omitted by the authors. It is suggested to carefully revise this article.
3. The comparative analysis of qualitative and quantitative results is deficient. It is suggested that the authors provide a detailed comparative analysis and conclusions in the experiment section.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weakness.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: This paper does not explore the integration of high-accuracy exact inversion samplers such as O-BELM with more powerful image editing pipelines. Additionally, the application of high-accuracy exact inversion samplers like O-BELM to tasks beyond image processing remains uninvestigated in this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable reviews; we will answer your questions one by one regarding these weaknesses/questions.
> **Answer to Weaknesses 1:** ''As for downstream applications of diffusion inversion, style transfer [1] should also be included. I encourage the authors to apply this method to some style transfer tasks to show the robustness and effectiveness of this framework.''
Although we have conducted a thorough theoretical analysis of the BELM framework and used experiments on downstream tasks to validate our theory, we agree that more downstream applications of diffusion inversion, such as style transfer, are beneficial. Therefore, we have evaluated O-BELM and the baseline methods on the style transfer sub-dataset of the PIE-Bench dataset [1].
- The qualitative comparative analysis can be found in `Figure 2 in Attached PDF`. The results demonstrate that the O-BELM sampler ensures consistency and produces high-quality results in the style transfer task.
- We also carry out a quantitative analysis on PIE-Bench style transfer sub-dataset. We employ Structure Distance [2] to evaluate whether the edited image preserves the structure of the original image during style transfer and use CLIP Similarity [3] to measure whether the edited image accurately reflects the meaning of target prompts. The following table shows that O-BELM achieves superior structure preservation according to the Structure Distance metric, validating that our O-BELM achieves the exact inversion property (as per Proposition 2 of the original paper). It is also demonstrated that O-BELM achieves better CLIP Similarity, validating that O-BELM reduces sampling error (as per Proposition 4 of the original paper).
| Style Transfer Methods | Structure Distance$\times10^3$ ($\downarrow$) | CLIP Similarity ($\uparrow$) |
| :---------------- | :------ | :---- |
| DDIM | 71.1 | 24.82 |
| EDICT | 19.5 | 24.39 |
| BDIA | 25.3 | 23.20 |
| **O-BELM (Ours)** | **18.0** | **25.09** |
*The details of the experiment can be found in the Global Rebuttal section.*
> **Answer to Weaknesses 2:** ''Many punctuation marks at the end of the formulas are omitted by the authors. It is suggested to carefully revise this article.''
Thank you for your detailed review. We have conducted a thorough second revision and corrected all typographical errors. We assure you that all necessary punctuation marks will be included in the formulas in our revised version.
> **Answer to Weaknesses 3:** ''The comparative analysis of qualitative and quantitative results is deficient. It is suggested that the authors provide a detailed comparative analysis and conclusions in the experiment section.''
Our theoretical investigation and comparison are thorough and precise. We not only establish the exact inversion property (Proposition 2 of the original paper), local error property (Proposition 4 of the original paper), stability property, and global convergence property (Proposition 5 of the original paper) of O-BELM, but also scrutinize the theoretical properties of other methods. This comprehensive analysis allows us to provide a rigorous theoretical comparison of different samplers in Table 1 of the original paper.
We understand that more comparative analysis of qualitative and quantitative results on downstream tasks is still beneficial. Therefore, we not only conduct qualitative and quantitative comparative experiments on style-transfer tasks as mentioned above, but also carry out a thorough quantitative comparison on image editing tasks. We follow the experimental design of DirectingInv [1] to evaluate methods on the PIE-Bench image editing dataset using eight metrics covering four aspects: structure distance [2], background preservation (PSNR, LPIPS [4], MSE, and SSIM [5] outside the annotated editing mask), edit prompt-image consistency (CLIPSIM [3] of the whole image and of regions in the editing mask), and time cost. These results can be found in `Table 1 in the attached PDF` and `Table 2 in the attached PDF`. They show that O-BELM outperforms the baselines on this benchmark.
**References**
[1] Ju, Xuan, et al. "Direct inversion: Boosting diffusion-based editing with 3 lines of code." arXiv preprint arXiv:2310.01506 (2023).
[2] Tumanyan, Narek, et al. "Splicing vit features for semantic appearance transfer." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
[3] Wu, Chenfei, et al. "Godiva: Generating open-domain videos from natural descriptions." arXiv preprint arXiv:2104.14806 (2021).
[4] Zhang, Richard, et al. "The unreasonable effectiveness of deep features as a perceptual metric." Proceedings of the IEEE conference on computer vision and pattern recognition. 2018.
[5] Wang, Zhou, et al. "Image quality assessment: from error visibility to structural similarity." IEEE transactions on image processing 13.4 (2004): 600-612.
---
Rebuttal Comment 1.1:
Title: Official Comment by Reviewer brf4
Comment: Thank you for your detailed reply. It addresses most of my concerns.
I will keep my rating to weak accept.
---
Rebuttal 2:
Comment: Thank you again for your times and efforts, your suggestions have been very helpful for us! | Summary: This manuscript introduces a generic formulation named ``Bidirectional Explicit Linear Multi-step'' (BELM) samplers for exact inversion of diffusion sampling. In contrast to DDIM Inversion, BELM inversion establishes the relationship between $x_{i-1}$, $x_i$, $x_{i+1}$, and $\epsilon_\theta(x_i, i)$. The authors prove that BELM is the generic version of EDICT and BDIA.
Strengths: 1. The authors propose a new variable-stepsize-variable-formula (VSVF) linear multi-step scheme for exact inversion.
2. The paper investigates the Local Truncation Error (LTE) within the BELM framework.
3. O-BELM is designed by minimizing LTE to ensure minimized local error, and the experiments on COCO validate the exact inversion property of O-BELM.
Weaknesses: 1. How about the computation cost and latency of 2-step O-BELM compared with BDIA and EDICT?
2. Lack of more applications such as ControlNet-based Image Editing exemplars, and more failure cases is better for analyzing the limitation of BELM.
3. The local error of O-BELM seems higher than BDIA. Besides, for BDIA, $\gamma$ is tuned for different quality and effect, when $\gamma=1$, BDIA seems zero-stable and global convergence.
Technical Quality: 3
Clarity: 4
Questions for Authors: Please see the weakness part.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback; we will answer your questions one by one regarding these weaknesses/questions.
> **Answer to Weaknesses 1:** ''How about the computation cost and latency of 2-step O-BELM compared with BDIA and EDICT?''
- Theoretically, the computation cost bottleneck of diffusion sampling is the number of accesses to the noise network, $\boldsymbol{\varepsilon}_\theta(\mathbf{x}_i,i)$, which is also referred to as the Number of Function Evaluations (NFE). In each iteration, O-BELM only accesses the noise network once, as demonstrated in equation 18. Therefore, for a number of steps equal to $N$, O-BELM requires an NFE equal to $N$, which is the same as DDIM and BDIA. However, EDICT doubles this requirement to $2N$.
Moreover, O-BELM only requires the additional storage of one state, which is a relatively small extra memory cost compared to methods that need to store the entire chain, such as NMG [1] or P2P [2]. As there is no parallelism in diffusion sampling, the latency is equivalent to the computational time cost.
- Experimentally, we've conducted additional tests to compare the average time cost of different methods across sampling, editing, and reconstruction tasks. The results show that O-BELM does not incur any additional computational overhead compared to DDIM across all these tasks. Detailed information about these experiments can be found in the `Table 2 in attached PDF` and `Author Rebuttal`.
| Method | Time-cost (s) of Image Sampling | Time-cost (s) of Image Editing |Time-cost (s) of Image Reconstruction |
| :---------------- | :------ | :---- |:---- |
| DDIM | 6.67 | 13.30 | 13.20 |
| EDICT | 12.67 | 25.77 | 25.72 |
| BDIA | 6.59 | 13.37 | 13.28|
| **O-BELM (Ours)** | **6.53** | **13.22** | **13.20** |
*The details of the experiment can be found in the Global Rebuttal section.*
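The NFE accounting above can be illustrated with a toy counter. The update rules below are dummies for illustration only, not the actual samplers; the point is purely the bookkeeping: a scheme that queries the noise network once per step (as DDIM, BDIA, and O-BELM do) incurs NFE = N, while a scheme with two coupled evaluations per step (as EDICT does) incurs NFE = 2N.

```python
calls = 0

def eps(x, i):
    """Dummy noise prediction; the counter stands in for an expensive U-Net call."""
    global calls
    calls += 1
    return 0.1 * x

def one_eval_per_step(x, n):          # DDIM / BDIA / O-BELM-style accounting
    for i in range(n):
        x = x - 0.01 * eps(x, i)
    return x

def two_evals_per_step(x, n):         # EDICT-style accounting (coupled updates)
    for i in range(n):
        x = x - 0.01 * eps(x, i)
        x = x - 0.01 * eps(x, i)
    return x

calls = 0; one_eval_per_step(1.0, 50);  nfe_single = calls   # N = 50
calls = 0; two_evals_per_step(1.0, 50); nfe_double = calls   # 2N = 100
```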
> **Answer to Weaknesses 2:** ''Lack of more applications such as ControlNet-based Image Editing exemplars, and more failure cases is better for analyzing the limitation of BELM.''
- **ControlNet-based Image Editing exemplars:** Although we have conducted a thorough theoretical analysis of the BELM framework and used experiments to validate our theory, we agree that practical applications such as ControlNet-based image editing examples are beneficial for demonstrating the robustness of our O-BELM. Our BELM framework is built on the general diffusion IVP (equation 11) and can guarantee exact inversion (Proposition 2) and minimized error (Proposition 4) for all tasks based on the diffusion ODE (PF-ODE). This means that the BELM framework is compatible with a wide variety of diffusion-based tasks, irrespective of the data type (such as image or language), the guidance method (unconditional, classifier-free, classifier-based, ControlNet-based, or adjoint ODE-based), or the network structure (whether or not it includes an attention layer). Following your valuable advice, we conducted experiments on ControlNet-based image editing using O-BELM and the baseline methods, as shown in `Figure 1 in the Attached PDF`. The results demonstrate that O-BELM is effective in ControlNet-based image editing, preserving features from the original images while maintaining high-quality samples.
- **failure cases:** Compared to EDICT and BDIA, one major advantage of O-BELM is its ability to avoid hard-to-tune hyperparameters and reduce the occurrence of failure cases, as demonstrated in Figure 8 of the original paper. However, we recognize the importance of analyzing failure cases and will include this in our revised version.
> **Answer to Weaknesses 3:** ''The local error of O-BELM seems higher than BDIA. Besides, for BDIA, $\gamma$ is tuned for different quality and effect, when $\gamma=1$, BDIA seems zero-stable and global convergence.''
- **The local error:** Our analysis, as outlined in Corollary 1 and Proposition 4, demonstrates that the asymptotic convergence rate of the local truncation error (LTE) for O-BELM is of order 3 with respect to the small variable $h$, whereas for BDIA it is of order 2. In mathematical terms, a larger order number signifies faster convergence to zero and smaller error [3]. Therefore, the local error of O-BELM is smaller than that of BDIA.
- **zero-stable and global convergence:** The zero-stability and global convergence properties are rigorous and strict mathematical properties, serving as sufficient conditions for achieving stability and high accuracy in practical tasks. While BDIA performs well on some datasets for a certain number of steps when $\gamma = 1$, there is no evidence to suggest that BDIA satisfies these stringent properties. In fact, in Figure 6 of BDIA's original paper [4], BDIA with $\gamma = 1$ can lead to deformed edited images. In Table 5 of our original paper, BDIA with $\gamma = 1$ leads to low-quality sampling results on the CIFAR-10 and CelebA-HQ datasets (FID $\approx 100$). Furthermore, there is no solid report indicating that the $\gamma$ of BDIA has a clear high-level meaning.
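As a generic, paper-independent illustration of how a higher LTE order translates into faster error decay, the toy sketch below integrates the test ODE $dx/dt = -x$ with a textbook two-step explicit method whose LTE is $O(h^3)$ (the explicit midpoint multi-step rule, chosen purely as an example and unrelated to the diffusion samplers themselves) and estimates the global convergence order by halving the step size:

```python
import math

def f(x):
    return -x                      # test ODE dx/dt = -x, exact solution e^{-t}

def two_step_error(h, T=1.0):
    """Explicit midpoint multi-step rule x_{n+1} = x_{n-1} + 2*h*f(x_n); LTE is O(h^3)."""
    n = round(T / h)
    x_prev, x_cur = 1.0, math.exp(-h)          # seed the second point exactly
    for _ in range(n - 1):
        x_prev, x_cur = x_cur, x_prev + 2 * h * f(x_cur)
    return abs(x_cur - math.exp(-T))

e1, e2 = two_step_error(0.01), two_step_error(0.005)
order = math.log2(e1 / e2)        # ≈ 2: global order sits one below the LTE order
```

Halving the step roughly quarters the global error here, matching the standard result that an $O(h^{p+1})$ LTE yields $O(h^p)$ global error; a method with LTE of order 3 therefore converges faster than one with LTE of order 2.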
**References**
[1] Cho, Hansam, et al. "Noise map guidance: Inversion with spatial context for real image editing." arXiv:2402.04625 (2024).
[2] Hertz, Amir, et al. "Prompt-to-prompt image editing with cross attention control." arXiv preprint arXiv:2208.01626 (2022).
[3] Rudin, Walter. Principles of mathematical analysis. Vol. 3. New York: McGraw-hill, 1964.
[4] Zhang, Guoqiang, Jonathan P. Lewis, and W. Bastiaan Kleijn. "Exact diffusion inversion via bi-directional integration approximation." arXiv preprint arXiv:2307.10829 (2023).
---
Rebuttal Comment 1.1:
Comment: Thank you for your rebuttal, my concern about experimental results has been well addressed.
After reading the other comments and the corresponding rebuttal, I prefer to raise my score to accept.
---
Rebuttal 2:
Comment: Thank you again for your patience and kindness, your suggestions on experiments have been very helpful! | Rebuttal 1:
Rebuttal: ## To All Reviewers
We sincerely appreciate the time and effort you have dedicated to reviewing our paper! Your valuable feedback has been carefully considered, and we have provided point-by-point responses to your reviews in the respective rebuttals. We remain open to any additional feedback you may have. If you feel it is appropriate based on our responses, we would be extremely grateful if you could consider raising our score.
## Attached PDF
In the attached PDF document, we have provided several figures and tables. The details of the experiments are listed as follows:
- **Figure 1 -- ControlNet-based Editing Results**
We evaluated O-BELM and baseline algorithms on ControlNet-based image editing tasks, which included canny-based and depth-map-based editing. The editing hyperparameters were chosen to match those in our original paper. The ControlNet hyperparameters were kept at their default values, consistent across all methods. We set the number of steps to 100. The canny images were obtained using the Canny function from the opencv-python library, and the depth-map model used was Intel/dpt-large (https://huggingface.co/Intel/dpt-large). We use the stable-diffusion-v1-5 model (https://huggingface.co/runwayml/stable-diffusion-v1-5) as our base model.
- **Figure 2 -- Style Transfer Results**
We evaluated O-BELM and baseline algorithms on style transfer tasks using the style transfer sub-dataset of the PIE-Bench dataset (https://paperswithcode.com/dataset/pie-bench). The editing hyperparameters were selected to match those in our original paper. We use stable-diffusion-2-base model (https://huggingface.co/stabilityai/stable-diffusion-2) as our base model.
- **Table 1 -- Quantitative Evaluation in Image Editing on PIE Benchmark**
We evaluated O-BELM and baseline algorithms on image editing tasks using the PIE-Bench dataset (https://paperswithcode.com/dataset/pie-bench). The editing hyperparameters were selected to match those in our original paper. We use stable-diffusion-2-base model (https://huggingface.co/stabilityai/stable-diffusion-2) as our base model. We follow the experimental design of DirectingInv [1] to evaluate methods on the PIE-Bench image editing dataset using seven metrics covering three aspects: structure distance [2], background preservation (PSNR, LPIPS [3], MSE, and SSIM [4] outside the annotated editing mask), edit prompt-image consistency (CLIPSIM [5] of the whole image and regions in the editing mask). We utilize the dino_vitb8 model (https://huggingface.co/facebook/dino-vitb8) and clip-vit-large-patch14 (https://huggingface.co/openai/clip-vit-large-patch14) for metric evaluations.
- **Table 2 -- Time Costs Comparison on PIE Benchmark**
We assessed the time costs of O-BELM and baseline algorithms using the PIE-Bench dataset (https://paperswithcode.com/dataset/pie-bench), which included tasks such as image generation, image editing, and image reconstruction. The number of steps was set to 50. We employed the stable-diffusion-2-base model (https://huggingface.co/stabilityai/stable-diffusion-2) as our base model and conducted tests on a single NVIDIA V100 chip and an Intel Xeon Platinum 8255C CPU.
**Reference**
[1] Ju, Xuan, et al. "Direct inversion: Boosting diffusion-based editing with 3 lines of code." arXiv preprint arXiv:2310.01506 (2023).
[2] Tumanyan, Narek, et al. "Splicing vit features for semantic appearance transfer." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
[3] Zhang, Richard, et al. "The unreasonable effectiveness of deep features as a perceptual metric." Proceedings of the IEEE conference on computer vision and pattern recognition. 2018.
[4] Wang, Zhou, et al. "Image quality assessment: from error visibility to structural similarity." IEEE transactions on image processing 13.4 (2004): 600-612.
[5] Wu, Chenfei, et al. "Godiva: Generating open-domain videos from natural descriptions." arXiv preprint arXiv:2104.14806 (2021).
Pdf: /pdf/0a2e2ae1952aa4d031110a8bb7cf2798095aa083.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Mitigating Backdoor Attack by Injecting Proactive Defensive Backdoor | Accept (poster) | Summary: This paper focuses on the in-training backdoor defense by proactively injecting a defensive backdoor into the model during training. During inference, PDB embeds a defensive trigger in the inputs and reverses the model prediction. This operation effectively suppresses malicious backdoors and ensures the model utility on the original task. Various experiments demonstrate the effectiveness.
Strengths: Various experiments demonstrate the proposed method is effective.
Weaknesses: This paper has several limitations.
1. The claimed novelty has already been explored by existing methods. I assume that the authors truly share an interesting insight towards backdoor defense. However, [2] (Usenix Security 2024) observes that “backdoor samples are OOD samples compared to benign samples from the target class” and “designs [an] indicator task leveraging OOD samples to identify and rule out backdoor updates”, yet I do not see the authors discuss this in the paper. I wonder whether the authors could explain whether directly utilizing [2] in your setting has any problem.
2. Missing comparison and confusing results. As shown in your Table 1 (results (%) on CIFAR-10 with PreAct-ResNet18 and a poisoning ratio of 5.0%), for the Trojan attack, ABL achieves 18.64/100.00, but the result reported in [1] is different: 70.70/0.02. Why do you not use the same setting as the current related works? I wonder whether you could compare your work in your own setting or in the scenario of [1].
3. Ablation for trigger pattern. "Regarding the trigger's position, it should be crafted to preserve the core visual patterns of the original image". I wonder whether the authors could conduct experiments on the trigger pattern position to demonstrate this conclusion.
[1] Enhancing Fine-Tuning Based Backdoor Defense with Sharpness-Aware Minimization. ICCV 2023.
[2] BackdoorIndicator: Leveraging OOD Data for Proactive Backdoor Detection in Federated Learning. Usenix Security 2024
Technical Quality: 3
Clarity: 2
Questions for Authors: Your work shares a high similar motivation with the BackdoorIndicator. I encourage authors to provide detailed explanations.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Yes, the authors have discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. Novelty and comparison with backdoorIndicator [2]**
**R1:** Since [2] first became accessible on arXiv on May 31, one week after the submission deadline of NeurIPS 2024, we did not have the opportunity to review and compare our work with it. We would like to mention that **our submission is not expected to compare to work that appeared only a month or two before the deadline, according to the latest policy of NeurIPS 2024 (see NeurIPS-FAQ)**.
However, we appreciate the opportunity to highlight the distinctions between our approach and BackdoorIndicator [2] to further improve our submission.
* **First**, we would like to clarify the following differences between our work and [2]:
* **Threat model**: [2] targets *decentralized training (FL) setting*, where multiple clients train models locally and contribute updates to a central server. Our work considers a *centralized training setting* where only a central server is used.
* **Task**: [2] focuses on *detecting malicious clients*, whereas our method aims to *train a secure model* on a poisoned dataset without clients.
* **Motivation**: [2] is built on the motivation that planting subsequent backdoors with the same target label *enhances* previously planted backdoors, therefore, providing a way to detect the poisoned clients, while our method is based on the motivation that planting a concurrent reversible backdoor can help to *mitigate* the malicious backdoor.
* **Methodology**: [2] utilizes *OOD samples* for backdoor client detection while our method constructs a *proactive defensive poison dataset*, following well-designed principles.
Based on above analysis, we think our work and [2] are significantly different from each other. If the reviewer can identify specific similarities in the conceptual ideas between our work and [2], we would be more than happy to engage in a deeper discussion regarding these points.
* **Second**, we would like to discuss the challenges in direct utilizing [2] in our setting:
* BackdoorIndicator [2] is designed to detect malicious clients within a federated learning (FL) context. This makes it challenging to apply [2] directly to our centralized environment since *the task of identifying backdoored clients does not naturally fit into this setting (only a central server)*.
* For comparison between [2] and our method, we need to emulate an FL scenario by assigning each image to a separate client (ensuring existence of benign client), thereby *creating 50,000 local models from the CIFAR-10 dataset to defend a single attack with PreAct-ResNet18. This would require an impractical amount of computational resources, estimated at over 30,000 hours (1,250 days) of training time and 30TB of storage space using a server with a single RTX 3090 GPU*. That's why we cannot compare our method with [2].
**Q2. Confusing results in Table 1.**
**R2:** Thank you. Firstly, it is important to note that:
- All attack checkpoints were sourced directly from the official BackdoorBench website.
- All experimental results for baselines align with those reported by BackdoorBench (refer to the official leaderboard).
This ensures that the comparisons presented in our paper are both fair and reliable.
It should be noted that, as we use different checkpoints from those in [1], the results presented in our paper may differ from those in [1] due to inherent randomness and the potential instability of some baselines. To investigate the failure of ABL in Table 1 of the main manuscript, we conducted a detailed analysis of its training process and found that ABL successfully detected 437 of the 2,500 poisoned samples. During the unlearning phase, a sudden increase in the loss for clean samples occurs, leading to a notable degradation in model performance, which ultimately causes ABL to fail (see Figure 3 of the Supplementary PDF for the loss curves).
Additionally, we anonymously requested the Trojan attack checkpoint for PreAct-ResNet18 with CIFAR-10 and a poisoning ratio of 5% from [1]. For this checkpoint, PDB can still mitigate the backdoor with ACC of 91.84% and ASR of 0.47%.
**Q3. Missing comparison to FT-SAM [1]**
**R3:** Thanks. To address the comparison with FT-SAM [1], we have adapted their method to our experimental setting. It's worth noting that in [1], the authors employ the Blended attack with a blending ratio of 0.1 (Blended-0.1), whereas we use a blending ratio of 0.2 (Blended-0.2). For consistency and completeness, we have now included experiments using both blending ratios, and the results are shown below:
**Table 1: Results on PreActResNet-18**
||Attack →|BadNet|BadNet|Blended-0.2|Blended-0.2|Blended-0.1|Blended-0.1|Sig|Sig|SSBA|SSBA
-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:
Poisoning ratio ↓|Defense ↓|ACC|ASR|ACC|ASR|ACC|ASR|ACC|ASR|ACC|ASR
5%|FT-SAM|92.66|1.22|92.87|31.54|92.76|2.87|92.82|1.80|92.83|3.27
5%|PDB|91.08|0.38|91.36|0.70|91.85|0.22|91.79|0.06|91.58|0.46
From Table 1 above, we find that FT-SAM achieves higher accuracy, as it fine-tunes an already backdoored model while PDB trains a model from scratch. Consistent with [1], Table 1 shows that FT-SAM can mitigate backdoor attacks in most cases, except for Blended-0.2: FT-SAM struggles to defend against blended attacks with higher blending ratios, such as 0.2. Notably, PDB achieves a significantly lower ASR across all cases, with an average ASR below 0.5%.
**Q4. Ablation study for trigger position.**
**R4:** Thanks. We would like to refer you to the **Common Response** for more comprehensive ablation studies of PDB. From the **Common Response**, we can see that placing a trigger at the center of an image significantly degrades accuracy, as the trigger masks the core patterns of the image.
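The trigger-position intuition can be made concrete with a hypothetical numpy sketch. The function name, trigger size, and "core region" definition below are our own illustration, not PDB's actual implementation: a corner-placed trigger leaves the central content of a CIFAR-10-sized image untouched, while a center-placed trigger overwrites part of it.

```python
import numpy as np

def stamp_trigger(img, patch, pos):
    """Return a copy of img with the trigger patch pasted at top-left coordinate pos."""
    out = img.copy()
    r, c = pos
    ph, pw = patch.shape[:2]
    out[r:r + ph, c:c + pw] = patch
    return out

rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))            # CIFAR-10-sized image
patch = np.ones((4, 4, 3))               # white 4x4 trigger patch

corner = stamp_trigger(img, patch, (0, 0))
center = stamp_trigger(img, patch, (14, 14))

# Fraction of a central 16x16 "core" region overwritten by each placement:
core = np.s_[8:24, 8:24]
occluded_corner = float(np.mean(corner[core] != img[core]))   # corner leaves the core intact
occluded_center = float(np.mean(center[core] != img[core]))   # center overwrites part of it
```

This matches the ablation conclusion: a corner trigger preserves the core visual patterns, whereas a central trigger occludes them and hence degrades clean accuracy.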
Reference:
[1] Enhancing Fine-Tuning Based Backdoor Defense with Sharpness-Aware Minimization.
[2] BackdoorIndicator: Leveraging OOD Data for Proactive Backdoor Detection in Federated Learning.
---
Rebuttal 2:
Title: Official Comment by Authors
Comment: Dear Reviewer wZZe,
We sincerely appreciate your valuable insights and suggestions on our work. We have made our best efforts to address the concerns and queries you raised during the rebuttal process. We would greatly appreciate confirmation on whether our response has effectively resolved your doubts. Your feedback will be instrumental in improving the quality of our work. As the end of the discussion period is approaching, we eagerly await your reply before the end.
Sincerely,
The Authors
---
Rebuttal Comment 2.1:
Comment: Thanks for the authors' response. I expect the authors to add the related discussion of [1] in the final version. Besides, I hope the authors can provide a more detailed experimental setting in the main paper or supplementary material.
[1] BackdoorIndicator: Leveraging OOD Data for Proactive Backdoor Detection in Federated Learning.
---
Reply to Comment 2.1.1:
Title: Thanks for your feedback
Comment: Dear Reviewer wZZe,
We sincerely appreciate your thoughtful response and the time you have dedicated to reviewing our paper. We are pleased to hear that our rebuttal addressed your concerns satisfactorily, and we are strongly encouraged by your recognition of our efforts. **In the next revision, we will include a detailed discussion contrasting our approach with BackdoorIndicator, highlighting both the similarities and differences to further strengthen our paper. Additionally, we will enhance *Appendix A: Experiment Details* to provide a more thorough description of our experimental methodology, including all necessary details for reproducibility and clarity.**
Thank you again for your valuable input and support.
Best regards,
The Authors | Summary: This paper investigates in-training backdoor defense via a proactive defensive backdoor. The authors introduce a defensive poisoned dataset and train the model on it together with all of the (potentially poisoned) training data. By attaching the defensive trigger to the input sample, any potential malicious backdoor attack is foiled by the proactive defensive backdoor. Extensive experiments evaluate the effectiveness of the method and compare it with five SOTA defense methods across seven challenging backdoor attacks.
Strengths: The author offers a straightforward solution to eliminate the need for costly detection and relabeling processes, thus improving the efficiency.
The experiments seem comprehensive; strong suppression of the attack success rate and a high defense effectiveness rate are achieved.
Weaknesses: The author should elaborate why DBD, NAB and V&B result in a substantial increase in training costs in Introduction. Experimental analyses are highly desired.
The contribution (1) is overclaimed. This is not a novel paradigm, because this work builds on several proactive attack-based works such as NAB.
The sensitivity of the proposed method to the defensive poisoned dataset should be evaluated. Though the authors claim the proposed method follows Principle 4, more details and thorough analyses are required.
Despite the better defense effectiveness rate of the proposed method, its accuracy on benign samples is clearly inferior to other methods. As the poisoning ratio decreases, the proposed method may unexpectedly act as an attack on the original benign dataset, resulting in degraded accuracy. How can this situation be avoided?
Technical Quality: 3
Clarity: 3
Questions for Authors: Why the proposed method can meet Principle 4? The explanation in Method is not clear.
What is the difference between the malicious poisoned dataset and the defensive poisoned dataset when preparing the data?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The generalization of the proposed method to diverse backdoor attacks remains unknown. When the attack is invisible or the poisoning rate is marginal, the proposed method may degenerate.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. Training cost for DBD, NAB and V&B**
**R1:** Thanks. Here, we report both training complexities and the empirical runtime of these methods in Table 1. For simplicity, we first define the following notations:
- $C_{sl}$: Supervised training cost.
- $C_{ssl}$: Self-supervised training cost.
- $C_{semi}$: Semi-supervised training cost.
- $C_{fc}$: Training cost for fully connected (FC) layers.
- $N_{tr}$: Size of the training dataset.
- $N_{def}$: Size of the defensive poisoned dataset.
- $F$: Frequency of sampling defensive poisoned samples.
**Table 1: Training complexity and empirical runtime on CIFAR-10 with PreAct-ResNet18**
|Method|Complexity|Empirical Runtime (s)|
:-:|:-:|:-:
No Defense|$O(C_{sl})$|919
DBD|$O(C_{ssl} + C_{fc} + C_{semi})$|7495
NAB|$O(C_{ssl} + C_{sl})$|3081
V&B|$O(2 \cdot (C_{ssl} + C_{sl}))$|6144
PDB (Ours)|$O\left(\frac{N_{tr} + F \cdot N_{def}}{N_{tr}} \cdot C_{sl}\right)$|1853
**Analysis:** From Table 1, we find that since $\frac{F \cdot N_{def}}{N_{tr}}$ is set to a small value, the training complexity of PDB is not much larger than the baseline (*i.e.*, No Defense). In contrast, as $C_{ssl}$ is often several times $C_{sl}$, the complexities of DBD, NAB, and V\&B are much higher than the baseline. This is reflected in the empirical runtimes, where PDB takes about twice as long as the baseline but remains much faster than the other methods.
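As a quick sanity check on the complexity expression above, one can compare the sample-count overhead factor $\frac{N_{tr} + F \cdot N_{def}}{N_{tr}}$ with the empirical runtime ratio; the values of $N_{def}$ and $F$ below are assumed for illustration, not taken from the rebuttal.

```python
# Illustrative arithmetic for PDB's relative training complexity,
# O(((N_tr + F * N_def) / N_tr) * C_sl). N_def and F are assumed values,
# since the rebuttal does not state the exact setting.

def pdb_overhead_factor(n_tr: int, n_def: int, freq: int) -> float:
    """Ratio of samples seen per epoch under PDB vs. plain supervised training."""
    return (n_tr + freq * n_def) / n_tr

# Hypothetical setting: CIFAR-10 (50k training images) with a small
# defensive poisoned set sampled several times per epoch.
print(pdb_overhead_factor(n_tr=50_000, n_def=5_000, freq=10))  # 2.0
print(round(1853 / 919, 2))  # empirical runtime ratio from Table 1: 2.02
```

Under these assumed values the predicted factor (2.0) is close to the observed runtime ratio between PDB and No Defense.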
**Q2. Difference between PDB and NAB**
**R2:** Thanks for this insightful comment. A comprehensive analysis can be found in **Appendix C.5** of our submitted manuscript. Here, we would like to highlight that although both NAB and PDB share the idea of proactive backdoor, PDB has essential differences from NAB, as detailed below:
* **PDB does not rely on poisoned sample detection**, while NAB still relies on accurate poisoned sample detection, falling into the "detection-then-mitigation" pipeline.
* **PDB does not depend on suspicious sample relabeling**, while NAB relies on accurate relabeling of detected suspicious samples.
**Q3: Sensitivity to the defensive poisoned dataset and why the proposed method can meet Principle 4.**
**R3:** Thanks. We would like to refer you to the **Common Response** for more comprehensive experiments and analysis for PDB.
**Q4: Difference between the malicious poisoned dataset and the defensive poisoned dataset**
**R4:** Thanks. We highlight the differences as follows:
* **Malicious poisoned dataset** is provided by the **attacker** and may contain **unknown** malicious poisoned samples.
* **Defensive poisoned dataset** is crafted by the **defender** with **known** reversible defensive poisoned samples.
**Q5: Generalization to invisible backdoor attack and low-poisoning ratio attack.**
**R5:** Thanks. As discussed in our paper, the proposed method, PDB, does not rely on specific assumptions about the type of attack, making it effective for defending against both invisible backdoor attacks and attacks with low poisoning ratios. To demonstrate this effectiveness, we conducted experiments using low poisoning ratios (0.5% and 0.1%) for both *Visible* and *Invisible* attacks. The results are summarized in Table 2, from which we can find that PDB can consistently mitigate backdoor attacks.
**Table 2: Results on PreAct-ResNet18 and CIFAR10**
||Trigger Type →|Visible|Visible|Invisible|Invisible|Invisible|Invisible|Invisible|Invisible|Invisible|Invisible|
-|-|-|-|-|-|-|-|-|-|-|-
|Attack →||BadNet|BadNet|Blended|Blended|Sig|Sig|SSBA|SSBA|WaNet|WaNet
Poisoning ratio ↓|Defense ↓|ACC|ASR|ACC|ASR|ACC|ASR|ACC|ASR|ACC|ASR
0.10%|No Defense|93.61|1.23|93.80|56.11|93.73|41.27|93.89|1.62|92.18|0.78
0.10%|PDB|91.55|0.87|91.91|0.36|91.59|0.39|91.72|0.42|91.87|0.89
0.50%|No Defense|93.76|50.06|93.68|93.30|93.80|82.43|93.41|35.67|91.27|1.12
0.50%|PDB|91.62|0.60|91.66|0.31|91.72|0.12|91.65|0.54|91.72|0.92
**Q6: Degraded accuracy when the poisoning ratio is decreasing.**
**R6:** Thanks. For such concern, we would like to first clarify that PDB can still achieve high accuracy when the poisoning ratio is decreasing, as shown in Table 2 above. Then, we would like to discuss the factors that influence the accuracy of PDB:
* **Model capacity and data complexity:**
* **Model capacity:** Since PDB introduces an additional task, *i.e.*, injecting a defensive backdoor, increasing the model capacity helps increase the accuracy of PDB, as evidenced in Table 3.
**Table 3: Results on different models**
|Model|ResNet-18|ResNet-18|ResNet-34|ResNet-34|ResNet-50|ResNet-50|
:-:|:-:|:-:|:-:|:-:|:-:|:-:
Metric|ACC|ASR|ACC|ASR|ACC|ASR
No Defense|92.54|76.27|93.08|82.48|93.76|87.26
PDB|91.81|0.29|92.63|0.28|93.67|0.18
* **Dataset complexity:** By comparing defense results with different datasets (*e.g.*, Table 1 and Table 6 in the main manuscript), we can find that by decreasing the dataset complexity, the accuracy of PDB increases significantly.
* **Strength of defensive backdoor:**
* **Strength of augmentation:** From **Fig. 2 of Supplementary pdf**, we can find that there exists a tradeoff between ACC and ASR. Therefore, the accuracy of PDB can be boosted by reducing the strength of augmentation.
* **Sampling frequency:** From **Table 4 in common response**, we can find that by increasing the sampling frequency of defensive poisoned samples, the accuracy of PDB can be boosted.
* **Trigger size:** **Table 1 in Common Response** shows that a proper choice of trigger size can also help to increase the accuracy. Therefore, if a validation set is accessible, a proper trigger size can be chosen to increase accuracy.
**In summary**, due to the "home field advantage" of PDB, there are several ways to maintain high accuracy even with a low malicious poisoning ratio, such as increasing the model capacity, simplifying the dataset, reducing the augmentation strength applied to defensive poisoned samples, increasing the sampling frequency, and choosing a proper defensive trigger size.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response. Most of my concerns have been addressed. I strongly suggest the authors clarify the motivation and the differences from existing methods in the introduction. The complexity comparison in Table 1 should be included in the final version. Overall, I tend to maintain my positive score.
---
Rebuttal 2:
Title: Official Comment by Authors
Comment: Dear Reviewer 8vxw,
We sincerely appreciate your valuable insights and suggestions on our work. We have made our best efforts to address the concerns and queries you raised during the rebuttal process. We would greatly appreciate confirmation on whether our response has effectively resolved your doubts. Your feedback will be instrumental in improving the quality of our work. **As the end of the discussion period is approaching, we eagerly await your reply before the end.**
Sincerely,
The Authors
---
Rebuttal Comment 2.1:
Comment: Dear Reviewer 8vxw,
We truly appreciate the thoughtful feedback you provided on our work. We've taken your comments to heart and have worked diligently to address them. We're reaching out again because we're nearing the end of the discussion period, and we hope to hear your thoughts on our rebuttal. Your insights are incredibly valuable to us and will help us enhance the quality of our paper.
Thank you so much for your time and support!
Best regards,
The Authors | Summary: The paper introduces a novel method called Proactive Defensive Backdoor (PDB) to counter backdoor attacks in deep neural networks. PDB differs from traditional methods, which focus on detecting and eliminating suspicious samples. Instead, PDB proactively injects a defensive backdoor into the model during the training phase. PDB operates by embedding a trigger in the model's inputs during prediction, which neutralizes the effects of any malicious trigger present. The authors designed the defensive backdoor in such a way that it can reverse the model's prediction to its true label, thus maintaining its utility on benign tasks. Through extensive experimentation, the authors have demonstrated PDB's ability to outperform existing defense methods by achieving a balance between suppressing malicious backdoors and preserving model performance across various datasets and model architectures.
Strengths: - The paper is well-structured and clearly written.
- Introduces a novel approach for mitigating backdoor attacks.
- Detailed experiments and comparison with state-of-the-art.
Weaknesses: - The authors evaluated adaptive attacks only by increasing the trigger size of the BadNets attack. A more appropriate adaptive attack would increase the poisoning ratio to strengthen the backdoor effect compared to the defensive backdoor.
- It would be interesting to see if PDB can scale for clean label backdoor attacks, where the adversary does not change the trigger label during the backdoor attack.
- It is not clear whether the running time in Table 4 refers to training time or inference time. Comparing inference time would be more appropriate, as PDB requires the model to make two predictions for a single image. How does the inference time of PDB compare to other defenses in the literature?
- It is not clear why the specific blocks, namely Block 1 and Block 4, were chosen to illustrate the impact of PDB. Also, the plots related to TAC are difficult to understand from the text.
Technical Quality: 3
Clarity: 4
Questions for Authors: - Will PDB be effective against adaptive attacks that increase the poisoning ratio?
- How does the inference time of PDB compare with other defenses in the literature?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors adequately addressed the limitations and potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. Defend adaptive attacks that increase the poisoning ratio.**
**R1:** Thanks for this suggestion. To provide a more comprehensive evaluation of the proposed method PDB against adaptive attacks, we conduct experiments with poisoning ratios from 10% to 30%, and malicious trigger size from 4x4 to 10x10 using PreAct-ResNet18 on CIFAR-10. The results are summarized below:
**Table 1: Results for adaptive attacks**
|Poisoning ratio →|10%|10%|10%|10%|20%|20%|20%|20%|30%|30%|30%|30%|
:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:
Defense →|No Defense|No Defense|PDB|PDB|No Defense|No Defense|PDB|PDB|No Defense|No Defense|PDB|PDB
Malicious trigger size ↓|ACC|ASR|ACC|ASR|ACC|ASR|ACC|ASR|ACC|ASR|ACC|ASR
4x4|92.39|96.83|90.66|0.18|91.14|97.67|90.01|0.21|90.38|98.13|89.65|0.49
5x5|93.11|97.69|91.28|0.29|92.79|97.98|90.97|0.28|92.20|98.30|90.02|0.56
6x6|93.26|98.16|91.62|0.27|92.48|98.68|90.83|0.33|92.01|98.83|90.03|0.69
7x7|93.65|98.66|91.46|0.31|93.07|99.03|91.03|0.56|92.56|99.23|90.48|0.67
8x8|93.51|99.24|91.16|0.37|92.82|99.38|91.14|0.58|92.53|99.50|90.27|0.74
9x9|93.45|99.53|91.12|0.51|92.76|99.67|90.84|0.56|92.15|99.72|90.39|0.67
10x10|93.20|99.66|91.37|0.54|93.17|99.74|90.76|0.78|92.58|99.81|90.45|0.82
From Table 1, we can find that PDB consistently mitigates backdoors under adaptive attacks with various malicious trigger sizes and poisoning ratios. Note that to keep the malicious backdoor stealthy, its poisoning ratio and trigger size are expected to be constrained. In contrast, the defensive backdoor can utilize a large trigger size and a high sampling frequency to meet **Principle 4: Resistance against other backdoors**, thereby mitigating the malicious backdoor effectively.
**Q2. Can PDB scale for clean label backdoor attacks?**
**R2:** Thank you for your question. As discussed in our paper, the proposed method, PDB, does not rely on specific assumptions about the type of attack. The key idea of PDB is to use a proactive defensive backdoor to suppress the malicious backdoor, making it effective for defending against clean-label attacks, even with large models and datasets.
Note that we have shown that PDB can defend against SIG on the CIFAR-10 dataset (Table 1 in the main manuscript). To demonstrate PDB's effectiveness against clean-label attacks on a large-scale dataset and model, we conduct experiments on SIG and LC [1] with ViT-B-16 and Tiny-ImageNet. Note that for Tiny-ImageNet (200 classes), the poisoning ratio is at most 0.5%. The results are summarized below:
**Table 2: Defending results against clean label attack, on ViT-B-16 and Tiny-ImageNet**
||Attack →|LC|LC|SIG|SIG|
-|:-:|:-:|:-:|:-:|:-:
Poisoning ratio ↓|Defense ↓|ACC|ASR|ACC|ASR
0.10%|No Defense|75.39|1.32|75.86|9.10
0.10%|PDB|74.75|0.23|74.49|0.52
0.50%|No Defense|76.15|32.65|75.29|69.21
0.50%|PDB|75.07|0.37|74.25|0.03
From Table 2, we find that the clean-label attacks fail to implant a backdoor at a poisoning ratio of 0.1%. At a poisoning ratio of 0.5%, PDB effectively defends against the clean-label attacks on this large-scale dataset and model.
**Q3: Comparing training and inference runtime between PBD and other baselines**
**R3:** Thanks for your constructive suggestion.
* **Training runtime:** Firstly, we would like to clarify that the running time reported in Table 4 of the main manuscript refers to the training time. Furthermore, to better understand the training complexity of our method, please refer to our response **R1 to Reviewer 8vxw**.
* **Inference runtime:** As shown in Algorithm 1 (see the bottom four rows), during inference, our PDB, like standard inference, requires only one forward pass. The additional costs involve adding the defensive trigger onto each input image (*i.e.*, $x \oplus \Delta_1$) and applying the inverse mapping (*i.e.*, $h^{-1}(\cdot)$). Compared with the cost of the forward pass, these additional costs are negligible. Thus, our inference cost is nearly identical to that of other baselines.
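A minimal sketch of this inference path, assuming a square corner trigger and a simple label-shift choice of $h$ (the names `stamp_trigger`, `inverse_map`, and `pdb_predict` are illustrative, not the authors' code):

```python
import numpy as np

def stamp_trigger(x: np.ndarray, size: int = 7, pixel: float = 2.0) -> np.ndarray:
    """Plant a square defensive trigger (Delta_1) in the top-left corner."""
    x = x.copy()
    x[..., :size, :size] = pixel  # broadcasts over the channel dimension
    return x

def inverse_map(pred: int, num_classes: int, shift: int = 1) -> int:
    """h^{-1} for an example reversible label mapping h (a fixed shift)."""
    return (pred - shift) % num_classes

def pdb_predict(model, x: np.ndarray, num_classes: int = 10) -> int:
    """One forward pass on x (+) Delta_1, then remap the prediction."""
    logits = model(stamp_trigger(x))
    return inverse_map(int(np.argmax(logits)), num_classes)
```

The only extra work beyond a standard forward pass is the corner write and the modular shift, which is consistent with the negligible-overhead claim above.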
**Q4: Detailed explanation of TAC plot**
**R4:** Thanks. We noticed a typo in the legend of the TAC plots (Figure 4) in our main manuscript, which may confuse the reviewer, and we will fix it in our revisions. Due to the page limit, we only show the visualization of TAC for the 1st and 4th blocks of PreAct-ResNet18 (4 blocks in total) in our main manuscript. To provide a more comprehensive explanation for TAC plots, we visualize the TAC for all blocks of PreAct-ResNet18 and show the plots in **Fig 1 of Supplementary pdf**. Now, we provide a more detailed explanation for the plots:
* **Definition of TAC [2]:** TAC is designed to measure the change of activation values for each neuron when comparing maliciously poisoned samples to their benign counterparts. Let $\phi$ be a feature extractor which maps an input image $x$ to the latent activations. For an input image $x$, we can construct the malicious poisoned sample $x\oplus \Delta_0$. In PBD, a defensive trigger is added to the malicious poisoned sample, crafting sample $x\oplus\Delta_0\oplus\Delta_1$, aiming to suppress the malicious backdoor. Therefore, for dataset $D$, we define
$$\text{TAC w/o } \Delta_1 = \frac{\sum_{x\in D}(\phi(x\oplus\Delta_0)-\phi(x))}{|D|},$$
$$\text{TAC w/ } \Delta_1 = \frac{\sum_{x\in D}(\phi(x\oplus\Delta_0\oplus\Delta_1)-\phi(x))}{|D|}.$$
* **Analysis of the plots:** From the TAC plots in **Fig 1 of Supplementary pdf**, we can find that by planting a defensive trigger to a malicious poisoned sample, the activation changes from a malicious backdoor are substantially suppressed, indicating that defensive backdoor can suppress the malicious backdoor, therefore, reducing the ASR.
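The two TAC quantities defined above could be computed as sketched below, where each trigger operator $\oplus$ is modeled as a callable; all names are illustrative, not from the paper's code.

```python
import numpy as np

def tac(phi, dataset, add_malicious, add_defensive=None) -> np.ndarray:
    """Per-neuron mean activation change between poisoned and benign inputs.

    With add_defensive=None this is TAC w/o Delta_1; passing a defensive
    trigger function gives TAC w/ Delta_1.
    """
    diffs = []
    for x in dataset:
        x_p = add_malicious(x)          # x (+) Delta_0
        if add_defensive is not None:
            x_p = add_defensive(x_p)    # x (+) Delta_0 (+) Delta_1
        diffs.append(phi(x_p) - phi(x))
    return np.mean(np.stack(diffs), axis=0)
```

A small TAC w/ $\Delta_1$ relative to TAC w/o $\Delta_1$ would then correspond to the suppression effect described in the analysis above.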
Reference:
[1] Poison frogs! targeted clean-label poisoning attacks on neural networks. NeurIPS 2018
[2] Data-free backdoor removal based on channel lipschitzness. ECCV 2022
---
Rebuttal 2:
Title: Official Comment by Authors
Comment: Dear Reviewer 8hm3,
We sincerely appreciate your valuable insights and suggestions on our work. We have made our best efforts to address the concerns and queries you raised during the rebuttal process. We would greatly appreciate confirmation on whether our response has effectively resolved your doubts. Your feedback will be instrumental in improving the quality of our work. **As the end of the discussion period is approaching, we eagerly await your reply before the end.**
Sincerely,
The Authors
---
Rebuttal Comment 2.1:
Comment: I appreciate the authors' thorough rebuttal, which addressed most of my concerns. I have raised my score. I recommend that the authors incorporate these clarifications into the final version of the paper.
---
Reply to Comment 2.1.1:
Title: Thanks for your feedback
Comment: Dear Reviewer 8hm3,
We sincerely appreciate your thoughtful response and the time you've dedicated to reviewing our paper. We are pleased to hear that our rebuttal addressed your concerns satisfactorily and we are strongly encouraged by your recognition of our efforts.
**We will ensure that the clarifications discussed in the rebuttal are incorporated into the next revision of the paper.**
Thank you again for your valuable input and support.
Best regards,
The Authors | Summary: This paper proposes a proactive defense approach called PDB, which aims to combat malicious backdoor attacks by injecting an active defensive backdoor introduced by the defender. The main goal of PDB is to suppress the impact of malicious backdoors while preserving the utility of the model for its original tasks. PDB first analyzes the objectives for effective backdoor defense and introduces four fundamental design principles: reversibility, inaccessibility to attackers, minimal impact on model performance, and resistance against other backdoors. Then, an additional defensive poisoned dataset is constructed, and the model is trained using both this dataset and the entire poisoned dataset. To evaluate its effectiveness, the paper compares PDB with five SOTA in-training defense methods against seven SOTA data-poisoning backdoor attack methods involving different model architectures and datasets. Experimental results show that PDB performs comparably to or even better than existing baseline methods.
Strengths: 1. The perspective of the paper is novel.
2. This paper is well-written and easy to understand.
Weaknesses: 1. The paper lacks an effective explanation for PDB.
2. The paper lacks sufficient and reasonable explanations for the experimental results.
Technical Quality: 3
Clarity: 3
Questions for Authors: The perspective of the paper is innovative, and it is well-written and easy to understand. However, this paper still has the following issues.
1. The paper lacks sufficient and reasonable explanations for the experimental results, particularly those in Table 2. In Table 2, the ASR results for PDB under different attacks are all 0, while the results for other baseline are close to 100 (i.e., they all fail to defend successfully). The paper should provide a reasonable explanation for the significant difference in ASR between PDB and the other baselines.
2. In the Section of Resistance to Adaptive Attack, the paper evaluates PDB's resistance to malicious backdoor attacks with trigger sizes ranging from 3x3 to 6x6. In this section, the paper should set the backdoor attack trigger size to greater than 7x7 to match the trigger size set by PDB. Additionally, it raises the question of why the paper sets the trigger size for PDB to 7x7?
3. In the Section of Influences of Augmentation, different strengths of augmentation do not show significant changes in defense-related ACC and ASR, even in cases without any augmentation. Therefore, what is the role of augmentation during the training process? Based on this, it is natural to question why PDB's defensive triggers can be stronger than the attack's triggers without relying on augmentation.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have adequately addressed the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. Explanations for experimental results in Table 2**
**R1:** Thanks. Firstly, it's important to note that
* **All attack checkpoints in our experiments were sourced directly from the official BackdoorBench website**
* **All experimental results for baselines in our main manuscript align with those reported by BackdoorBench (refer to the official leaderboard)**
The above operation guarantees that the comparisons presented in our paper are both fair and reliable.
Regarding the performance of the baselines in Table 2, we provide the following analysis:
* **AC and Spectral:** Both methods rely on the latent representation of images to detect poisoned samples. AC identifies poisoned samples through clustering in the latent space, considering smaller clusters as likely to contain poisoned data. Spectral Signature detects outliers in the latent space to identify such samples. However, *with a poisoning ratio of 5% for Tiny ImageNet (200 classes, each class accounts for 0.5%), the poisoned samples become the majority within the target class, breaking the underlying assumptions of these methods*. Therefore, reducing the poisoning ratio to 0.1% helps maintain AC's and Spectral's effectiveness, as shown in Table 1. Furthermore, Table 1 also shows that PDB exhibits robustness across various poisoning ratios and outperforms AC and Spectral even at lower poisoning rates.
**Table 1: Results on ViT-b-16 with Tiny ImageNet**
|Poisoning ratio ↓|Attack →|BadNet|BadNet|WaNet|WaNet|BPP|BPP|
-|-|-|-|-|-|-|-
||Defense ↓|ACC|ASR|ACC|ASR|ACC|ASR
0.10%|No Defense|76.67|54.03|61.70|51.33|62.89|69.14
0.10%|AC|76.50|24.15|75.65|2.77|76.79|3.55
0.10%|Spectral|75.55|25.47|75.48|2.47|76.09|3.14
0.10%|PDB|75.53|0.02|73.09|0.38|73.58|0.08
* **ABL:** This method leverages the early learning phenomenon of poisoned samples. Upon closer inspection, however, we find that ABL struggles to correctly identify poisoned samples when used with the ViT-B-16 model. Unlike CNN models, Vision Transformers have some intriguing properties, *e.g.*, weaker biases toward backgrounds and textures [1] and robustness to severe image noise and corruption [2], which may prevent them from learning the triggers in the early stages. Moreover, as ABL takes additional stages to train the model (with a lower learning rate) and unlearn the isolated samples, it achieves higher accuracy.
* **NAB:** It struggles to correctly detect and relabel poisoned samples for large datasets and models, resulting in a low ACC and a high ASR in Table 2. A more detailed analysis can be found in Appendix C.5.
**Q2. Defend adaptive attacks with larger trigger and the choice of defensive trigger size**
**R2:** Thank you for the insightful question. We would highlight the following points:
* **For adaptive attacks with larger trigger sizes**, please refer to **Q1 of Reviewer 8hm3**, where we conducted experiments with poisoning ratios ranging from 10% to 30% and malicious trigger sizes varying from 4x4 to 10x10. These experiments demonstrate that PDB effectively defends against adaptive attacks with various trigger sizes and poisoning ratios. Note that to keep the malicious backdoor stealthy, its poisoning ratio and trigger size are expected to be constrained. In contrast, the defensive backdoor can utilize a large trigger size and a high sampling frequency to meet **Principle 4: Resistance against other backdoors**, thereby mitigating the malicious backdoor effectively.
* **For the choice of defensive trigger size**, we emphasize that the proposed method PDB is not tied to any specific choice of defensive trigger (see Appendix C.2 for PDB with other triggers). We would like to refer you to the **Common Response** for a detailed analysis of the role of defensive trigger size, from which we can find that a larger trigger size is preferred to ensure defense effectiveness, and a size between 5x5 and 8x8 is recommended for square triggers.
**Q3. More explanation for PDB and the role of augmentation**
**R3.** Thanks. We would like to refer you to the **Common Response** for a more comprehensive explanation of and experiments on PDB, as well as the role of augmentation. In the **Common Response**, we show that augmentation plays an important role in further enhancing PDB and reducing the ASR.
Reference:
[1] Delving Deep into the Generalization of Vision Transformers under Distribution Shifts, CVPR 2022
[2] Intriguing Properties of Vision Transformers, NeurIPS 2021
---
Rebuttal Comment 1.1:
Title: Official Comment by Authors
Comment: Dear Reviewer 1B7j,
We sincerely appreciate your valuable insights and suggestions on our work. We have made our best efforts to address the concerns and queries you raised during the rebuttal process. We would greatly appreciate confirmation on whether our response has effectively resolved your doubts. Your feedback will be instrumental in improving the quality of our work. **As the end of the discussion period is approaching, we eagerly await your reply before the end.**
Sincerely,
The Authors
---
Rebuttal Comment 1.2:
Comment: Thank you for the detailed responses in the rebuttal.
My concerns have been addressed. Please add the rebuttal content to the final version of the paper.
I agree to accept this paper.
---
Reply to Comment 1.2.1:
Title: Thanks for your feedback
Comment: Dear Reviewer 1B7j,
We sincerely appreciate your thoughtful response and the time you've dedicated to reviewing our paper.
We are strongly encouraged by your recognition of our efforts. **We will incorporate your suggestions and insights into the revised manuscript.** Thank you once again for your thorough review and your positive evaluation.
Sincerely,
Authors | Rebuttal 1:
Rebuttal: # Common Response
**Q1. Explanation for PDB, including the design of the defensive trigger and how it satisfies Principle 4**
**R1.** Thank you for your insightful comments. We would like to clarify that the proposed method, PDB, is not tied to any specific choice of defensive trigger or backdoor enhancement strategy as long as these strategies adhere to the principles outlined in our work. While we primarily focus on a 7x7 square trigger in Section 4 of the main manuscript, the effectiveness of PDB with other configurations has been validated in Appendix C.2 and C.3. For experiments and discussions here, we adopt the implementation of PDB presented in Section 4 unless otherwise specified.
Now, we explain PDB and discuss how to meet **Principle 4 (Resistance against other backdoors)** from the following perspectives:
* **Design of defensive trigger:**
* **Defensive trigger size:** Trigger size directly contributes to the strength of the defensive backdoor. In Table 1, we evaluate PDB with square defensive triggers of sizes ranging from 1x1 to 9x9. From Table 1, we can find that **a larger trigger leads to a stronger defensive backdoor, resulting in a higher ACC and a lower ASR; however, as the trigger size increases, it may interfere with the visual content of the image, leading to a slight decrease in ACC.** Notably, as the square trigger has strong visibility, even a 1x1 trigger can still alleviate the malicious backdoor to some extent.
* **Defensive trigger position:** As discussed in Section 3.2, the position of the defensive trigger is essential for Principle 3, i.e., minimal impact on model performance. Table 2 shows that triggers placed in different positions (corner, random, and center) achieve a similar effect in defending against backdoor attacks. However, placing a trigger at the center of an image significantly degrades accuracy, as the trigger masks the core patterns of the image.
* **Pixel value:** For a square trigger, pixel value is also an important parameter for PDB. In Table 3, we evaluate PDB with a square defensive trigger with different pixel values, from which we can find that **PDB can achieve high effectiveness across different pixel values.**
* **Backdoor enhancement strategy during training:**
* **Increasing sampling frequency:** Given a fixed number of defensive poisoned samples, the defensive backdoor can be further enhanced by increasing the sampling frequency of poisoned samples, forcing the model to pay more attention to defensive poisoned samples. Table 4 shows that **a larger sampling frequency leads to a stronger defensive backdoor**, resulting in a higher ACC and a lower ASR. Note that for the malicious attacker, the poisoning ratio is expected to be low to ensure the stealthiness of the attack.
* **Data augmentation:** To demonstrate the role of augmentation in PDB, we first provide a more detailed visualization of results with different strengths of augmentation in **Fig 2 of Supplementary pdf**. From Fig 2, we can find that PDB, without any sample augmentation ($\alpha=0$), already exhibits significant efficacy, with an ASR lower than 2%. As the augmentation strength increases, the ASR decreases, indicating that **stronger augmentation can further enhance PDB's effectiveness**. However, a tradeoff between augmentation intensity and model performance is also observed.
In summary, a visible trigger with **larger trigger size**, **higher sampling frequency**, and **data augmentation** contribute to meeting Principle 4.
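As a concrete illustration of how the three trigger parameters ablated above (size, position, pixel value) enter the pipeline, here is a minimal sketch of stamping a square defensive trigger onto an image; the function name and its defaults are our own illustrative choices, not PDB's actual implementation:

```python
import numpy as np

def apply_defensive_trigger(image, size=7, value=2.0, position="corner"):
    """Stamp a constant-value square trigger onto a (C, H, W) image.

    Hypothetical sketch of the trigger parameters discussed above
    (size, pixel value, position); not the authors' exact code.
    """
    img = image.copy()
    _, h, w = img.shape
    if position == "corner":
        y0, x0 = h - size, w - size                  # bottom-right corner
    elif position == "center":
        y0, x0 = (h - size) // 2, (w - size) // 2    # masks core content
    else:                                            # random placement
        rng = np.random.default_rng()
        y0 = int(rng.integers(0, h - size + 1))
        x0 = int(rng.integers(0, w - size + 1))
    img[:, y0:y0 + size, x0:x0 + size] = value       # constant-value patch
    return img
```

A "corner" placement keeps the trigger away from the core image content, consistent with the Table 2 observation that center placement degrades ACC.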
**Table 1: Results on PreAct-ResNet18 with Poisoning Ratio 5% and different defensive trigger size**
|Attack →|BadNet|BadNet|Blended|Blended|Sig|Sig|SSBA|SSBA|WaNet|WaNet|
:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:
Defensive Trigger size ↓|ACC|ASR|ACC|ASR|ACC|ASR|ACC|ASR|ACC|ASR
1x1|48.75|6.88|52.34|5.06|53.15|5.22|52.03|6.20|58.04|4.26
2x2|74.39|3.37|81.38|2.71|81.13|2.32|77.49|3.87|76.49|3.98
3x3|86.08|0.26|85.60|0.67|86.94|0.07|87.01|0.49|85.70|0.97
4x4|89.46|0.28|90.07|0.56|90.38|0.07|89.60|0.44|89.98|0.92
5x5|91.51|0.33|92.22|0.31|92.35|0.06|92.14|0.64|92.05|0.97
6x6|90.78|0.32|91.93|0.49|92.04|0.04|91.82|0.41|91.52|0.91
7x7|91.08|0.38|91.36|0.70|91.79|0.06|91.58|0.46|91.47|0.92
8x8|90.48|0.33|91.56|0.40|91.59|0.02|91.41|0.39|91.44|0.86
9x9|90.21|0.32|91.24|0.39|90.79|0.03|90.92|0.32|90.87|0.56
**Table 2: Results on PreAct-ResNet18 with Poisoning Ratio 5% and different positions**
|Attack →|BadNet|BadNet|Blended|Blended|Sig|Sig|SSBA|SSBA|WaNet|WaNet|
:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:
Defensive Trigger Position ↓|ACC|ASR|ACC|ASR|ACC|ASR|ACC|ASR|ACC|ASR
Corner|91.08|0.38|91.36|0.70|91.79|0.06|91.58|0.46|91.47|0.92
Random|88.79|0.69|90.10|0.81|90.12|0.16|89.39|0.66|89.49|0.97
Center|87.35|0.63|87.82|0.44|88.19|0.06|87.93|0.89|87.70|0.93
**Table 3: Results on PreAct-ResNet18 with Poisoning Ratio 5% and different pixel values**
|Attack →|BadNet|BadNet|Blended|Blended|Sig|Sig|SSBA|SSBA|WaNet|WaNet|
:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:
Pixel ↓|ACC|ASR|ACC|ASR|ACC|ASR|ACC|ASR|ACC|ASR
1.50|90.69|0.57|91.40|0.56|91.74|0.09|91.54|0.60|91.44|0.83
2.00|91.08|0.38|91.36|0.70|91.79|0.06|91.58|0.46|91.47|0.92
2.50|90.99|0.48|91.39|0.50|91.77|0.04|91.48|0.80|91.54|0.54
-0.50|90.94|0.48|91.78|0.91|91.56|0.01|91.64|0.61|91.69|0.60
-1.00|90.84|0.23|92.31|0.07|91.81|0.00|91.85|1.07|91.83|0.90
-1.50|90.86|0.29|91.62|1.00|91.88|0.00|91.43|0.66|91.86|0.78
**Table 4: Results on PreAct-ResNet18 with poisoning ratio 5% and different sampling frequencies**
|Attack →|BadNet|BadNet|Blended|Blended|Sig|Sig|SSBA|SSBA|WaNet|WaNet|
:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:
Frequency ↓|ACC|ASR|ACC|ASR|ACC|ASR|ACC|ASR|ACC|ASR
1|91.01|0.69|91.19|1.48|91.38|4.62|91.19|1.24|91.13|1.07
3|91.06|0.57|91.27|1.39|91.73|0.10|91.28|0.72|91.44|0.97
5|91.08|0.38|91.36|0.70|91.79|0.06|91.58|0.46|91.47|0.92
7|91.34|0.27|91.56|0.59|91.98|0.04|91.89|0.43|91.84|0.27
9|92.15|0.20|92.19|0.50|92.27|0.02|92.30|0.31|92.48|0.16
Pdf: /pdf/fe1a55ef4b3a1552bde6b638cae04e01dc0a5bf3.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Faster Diffusion: Rethinking the Role of the Encoder for Diffusion Model Inference | Accept (poster) | Summary: The authors analyze the encoder features of the diffusion UNet and find that they share many similarities across certain time steps. Based on this observation, the authors propose to reuse the encoder features and perform parallel computation during sampling, which significantly speeds up the generation process. Additionally, they introduce a prior noise injection method to improve generation quality. Extensive experiments demonstrate their effectiveness across a wide range of generation tasks, including T2I, T2V, DreamBooth, and ControlNet. Their method can also be seamlessly combined with various noise schedulers.
Strengths: 1 This paper is well-written and easy to understand. The analysis in section 1 is insightful. Fig.4 clearly shows the encoder propagation methods.
2 The empirical evaluation is thorough. The authors validate their method on various datasets and architectures, using various metrics. They also make extensive comparisons to SOTA acceleration methods, such as DeepCache.
3 Sec. A4 shows that the parallel denoising leads to an acceptable memory cost, which addresses my concerns.
4 This approach can be seamlessly combined with other acceleration methods such as novel noise schedulers and distillations.
Overall, this is an excellent work that significantly reduces diffusion sampling cost in a training-free manner.
Weaknesses: I think the manual selection of key time-steps may not be optimal. Is it possible to use an automatic strategy to determine the key time-steps in your work? Otherwise, when using a new architecture, we would have to redo the encoder feature analysis experiment and manually define the key time-steps again. Several works have explored the selection of key time-steps; discussing them would make your work even stronger.
[1] OMS-DPM: Optimizing the Model Schedule for Diffusion Probabilistic Models
[2] AutoDiffusion: Training-Free Optimization of Time Steps and Architectures for Automated Diffusion Model Acceleration
[3] Denoising Diffusion Step-aware Models
Technical Quality: 4
Clarity: 3
Questions for Authors: Please see the weakness section.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Yes, they discuss their limitation in section 5.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your feedback and will incorporate the discussions mentioned below to enhance the quality of our paper. Note that we utilize the numerical references to cite sources within the main paper.
**W1. Automatic selection for key time steps**
Thank you for your insightful suggestion. We find that when using uniform $\textit{key}$ time-steps with the same time interval, the sampling results are acceptable ($t^{key} = \\{50, 44, 38, 32, 26, 20, 14, 8, 2\\}$), though not as good as those obtained with non-uniform time-steps (see Table.5 in the main paper).
To identify key time steps, we conducted empirical analysis of feature changes at adjacent time steps in multistep Stable Diffusion models (50-step model as an example).
This analysis is based on statistics from the distribution of features across 100 random prompts, which does not impose a high time cost.
We found that the encoder features change minimally in later time steps, whereas the encoder features in earlier time steps exhibit substantial variations compared to later ones.
Based on analysis as shown in Section 3.2, we determine the $\textit{key}$ time-step as $t^{key}=\\{50, 49, 48, 47, 45, 40, 35, 25, 15\\}$ for the 50-step Stable Diffusion model.
As a general case, we also applied this $\textit{key}$ time-step configuration to the ControlNet [10], Text2Video-zero [4], VideoFusion [5], Dreambooth [7] and Custom Diffusion [8] models based on the 50-step SD models. That will not impose any additional searching time for the key time steps for these downstream application scenarios.
As per your suggestions, we explored OMS-DPM [A], AutoDiffusion [B], and DDSM [C]. These methods use reinforcement learning or other search algorithms to find the optimal schedule, time steps, model size, etc.
Each search iteration involves frequent image generation and FID calculation.
Executing these search algorithms is extremely time-consuming, often exceeding the training time, as claimed in DDSM.
For example, the NSGA-II search algorithm is utilized in DDSM [C]. The search cost for the DDSM network is approximately 1.1 to 1.2 times that of training a standard diffusion model (**see the table below**).
Due to the short rebuttal period, we have not been able to apply these algorithms to our method, however, we will discuss the use of the NSGA-II algorithm for automatically searching $t^{key}$ time-steps in the final version and include it as an improved version for FasterDiffusion in our future research.
| | CIFAR-10 | CelebA |
|-----------------|----------|--------|
| **DDPM-train** | 278 | 435 |
| **DDSM-train** | 502 | 1036 |
| **DDSM-search** | **320** | **524**|
**Table:** Total GPU hours of DDSM.
---
[A] OMS-DPM: Optimizing the Model Schedule for Diffusion Probabilistic Models
[B] AutoDiffusion: Training-Free Optimization of Time Steps and Architectures for Automated Diffusion Model Acceleration
[C] Denoising Diffusion Step-aware Models | Summary: This paper presents an approach to accelerate diffusion model inference by capitalizing on the minimal change in encoder features across time steps. The proposed encoder propagation strategy reduces encoder computations by reusing encoder features from previous time steps. The proposed method is comparable to existing acceleration methods and demonstrates effective acceleration across diverse tasks.
Strengths: 1. The proposed method offers a promising solution to speed up diffusion models.
2. The experiments are conducted from various tasks, e.g., text to image; text to video; personalized generation and reference-guided generation.
Weaknesses: 1. The evaluation metrics are insufficient, as concerns exist regarding the universality of FID in assessing generative models. It is recommended to validate performance on additional metrics, such as the newly proposed ImageReward [1] and Pick Score [2].
2. It is unclear whether the latency is comparable when fewer sampling steps are used (Table 11 and Table 12). Moreover, the performance improvement is marginal compared to results with fewer sampling steps, and mainly stems from prior noise injection, which is not the core contribution of the paper. I am also curious whether fewer sampling steps, if equipped with prior noise injection, could achieve similar performance as the results presented in this paper.
3. Comparing multi-GPU results with those single-GPU results from DeepCache seems unfair (Table 1). Besides, the acceleration outcomes on a single GPU are marginally improved when compared with other acceleration methods.
4. The DiT does not inherently include concepts of Encoder and Decoder; thus, categorizing based solely on observations into Encoder and Decoder is somewhat imprecise.
5. The layout of the paper is somewhat disorganized; the order of tables does not align with the sequence of their introduction in the text (e.g., Table 1 and Table 3), making it hard to follow.
[1] Xu J, Liu X, Wu Y, et al. Imagereward: Learning and evaluating human preferences for text-to-image generation. NeurIPS 2023.
[2] Kirstain Y, Polyak A, Singer U, et al. Pick-a-pic: An open dataset of user preferences for text-to-image generation. NeurIPS 2023.
Technical Quality: 3
Clarity: 2
Questions for Authors: See weakness for details.
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors have discussed both limitations and potential negative social impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your feedback and will incorporate the discussions mentioned below to enhance the quality of our paper. Note that we utilize the numerical references to cite sources within the main paper.
**W1. Additional metrics**
Thanks for your advice. We showcase our experimental results with the ImageReward and PickScore evaluation metrics over the MS-COCO2017 10K dataset, as shown in the **Table below**.
Our method FasterDiffusion enhances sampling efficiency while preserving the original model performance.
| | **ImageReward**$\uparrow$ | **PickScore**$\uparrow$ | **s/image** |
|----------------------------|---------------------------|-------------------------|-------------|
| **SD (DDIM)** | 0.149 | 52.10% | 2.42 |
| **SD (DDIM) w/ Ours** | 0.162 | 47.90% | 1.42 |
**Table:** Metrics in ImageReward and PickScore.
---
**W2. Few step generation with noise injections**
To extend the content illustrated in Table 11 and Table 12,
we present the full results with fewer sampling steps in **Table below**, specifically with 9 and 25 steps as in Table 11.
We mainly present the generation results with 9 steps because it is the number of $key$ time-steps for the 50-step SD model determined in our approach.
We increase the number of sampling steps (i.e., 25 steps), and find that the sampling results do not perform as well as DDIM with 50 steps and its variation with FasterDiffusion (Figure.12).
This demonstrates that our method is not simply reducing the number of sampling steps.
As shown in the **Table below**, we report DDIM scheduler inference times at various step counts.
Although FasterDiffusion takes slightly longer than the 9-step DDIM scheduler, our sampling results are much closer to the 50-step DDIM generation quality, whereas the 9-step DDIM results are markedly inferior (see Table.11, Table.12, and Figure.12 for examples).
For directly applying noise injection to the generation phase, we show image generation examples in **Figure.27** in the rebuttal file.
Fewer sampling steps equipped with prior noise injection result in almost no improvement in image quality. **Table below** further supports this conclusion.
In our case, by contrast, FasterDiffusion without the noise injection leads to smoother textures, as shown in Figure.6.
The noise injection technique helps in preserving fidelity in the generated results.
| Sampling method | T | **FID** $\downarrow$ | **Clipscore** $\uparrow$ | **s/image** $\downarrow$ |
|------------------------------|----|-----------------------|--------------------------|--------------------------|
| **DDIM** | 50 | 21.75 | 0.773 | 2.42 |
| **DDIM** | 25 | 22.16 | 0.761 | 1.54 |
| **DDIM w/ noise injection** | 25 | 21.89 | 0.761 | 1.54 |
| **DDIM** | 9 | 27.58 | 0.735 | 0.96 |
| **DDIM w/ noise injection** | 9 | 27.63 | 0.736 | 0.96 |
| **DDIM w/ ours** | 50 | **21.08** | **0.783** | **1.42** |
**Table:** Quantitative comparison for DDIM with fewer steps.
---
**W3. Comparison with single-GPU methods**
Our advantage over other methods (e.g., DeepCache) is the ability to perform inference on non-key time steps in parallel, therefore enabling parallel processing across multi-GPUs.
This characteristic helps our method FasterDiffusion achieve better speedup performance than other methods.
In this paper, we notice that DeepCache is not parallelizable on multiple GPUs since DeepCache needs to use all or part of the encoder and decoder at every time step.
FasterDiffusion, on the other hand, only uses the encoder at the $\textit{key}$ time steps, which enables parallel processing at non-key time steps and faster inference time cost.
It is also worth noting that FasterDiffusion can be combined with a wide range of diffusion-model-based tasks, including Text2Video-zero, VideoFusion, Dreambooth, and ControlNet. In these cases, DeepCache is often much slower when applied to such tasks. As an example shown in **Table.14 (in 'global' response)**, DeepCache applied to ControlNet is 24\% slower than the FasterDiffusion-based ControlNet.
More specifically, since the ControlNet model requires an additional encoder, our method FasterDiffusion is able to execute this extra encoder in parallel and reuse its features, making the additional time negligible.
DeepCache, on the other hand, reuses the decoder features, which forces ControlNet to wait for the additional encoder to finish its computation at every time-step, so skipping the encoder of the standard SD model saves almost no time.
Please refer to our 'global' response (**General Response 1**) for **more details**.
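The key-step mechanism described in this response can be sketched with a toy planner (the function and labels below are our own illustration, not the paper's API): only the key time-steps require an encoder pass, while the remaining steps can reuse cached features and hence be processed in parallel.

```python
def encoder_call_plan(timesteps, key_steps):
    """For each sampling time step, decide whether the encoder must be run
    ("full") or its cached features can be reused ("reuse").

    Toy illustration of encoder propagation; not the paper's actual code.
    """
    key = set(key_steps)
    plan = []
    have_cache = False
    for t in timesteps:
        if t in key or not have_cache:
            plan.append((t, "full"))    # key step: run encoder, refresh cache
            have_cache = True
        else:
            plan.append((t, "reuse"))   # non-key step: decoder only
    return plan
```

For a 50-step schedule with 9 key time-steps, only 9 encoder passes are needed; the other 41 steps reuse cached features and can, in principle, be batched across GPUs.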
**W4. Applicability to DiT based models**
The DiT includes 28 transformer blocks.
Through visualization and statistical analysis (see **Figure.30** in the rebuttal PDF and **Figure.11** in main paper),
we observe that the features in the first several transformer blocks change minimally, similar to the Encoder in SD, while the features in the remaining transformer blocks exhibit substantial variations, akin to the Decoder in SD.
For ease of presentation, we refer to the first several transformer blocks of DiT as the Encoder and the remaining transformer blocks as the Decoder. In **Table 2**, we demonstrate the acceleration performance of our method when applied to DiT-based generation models.
**W5. Paper layout**
We will carefully redesign the layout in a future version for a better reading experience.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response.
I'm still a bit confused, as typically, the PickScore should be a specific numerical value, not a percentage.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer wo3A:
We are grateful for your expeditious response and we are glad to clarify and address your concerns further.
For the PickScore, we used the calculation method recommended by the official github repository reference:
> For a prompt $x$ and an image $y$, the scoring function $s$ computes a real number by representing $x$ using a transformer text encoder and $y$ using a transformer image encoder as $d$-dimensional vectors, and returning their inner product:
$$
s(x,y) = E_{\mathrm{txt}}(x) \cdot E_{\mathrm{img}}(y) \cdot T
$$
Where $T$ is the learned scalar temperature parameter of CLIP.
Following that, assuming the original image generated by SD is $y_1$, and the image generated after combining with Ours is $y_2$, we can calculate the preference distribution vector $p$:
$$
p_i = \frac{\exp s(x, y_i)}{\sum_{j=1}^2 \exp s(x,y_j)}
$$
The official PickScore implementation recommends computing this probability when multiple images are available for selection.
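For concreteness, the preference distribution above is just a softmax over the two inner-product scores; a minimal sketch (the helper name is ours):

```python
import numpy as np

def preference_distribution(scores):
    """p_i = exp(s_i) / sum_j exp(s_j), matching the formula above;
    `scores` are the CLIP-style inner products s(x, y_i)."""
    s = np.asarray(scores, dtype=float)
    e = np.exp(s - s.max())   # subtract max for numerical stability
    return e / e.sum()
```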
We also calculated the original PickScore scores, as shown in the table below:
| | PickScore$\uparrow$ | s/image |
| ---------------- | ------------------- | ------- |
| SD(DDIM) | 21.43 | 2.42 |
| SD(DDIM) w/ Ours | 21.35 | 1.42 |
Our method FasterDiffusion enhances sampling efficiency while preserving the original model performance.
We hope that these clarifications have addressed your concerns satisfactorily and look forward to any further feedback you may have. We are committed to enhancing our work based on your expert guidance. | Summary: This paper presents an extensive study of the evolution of internal activations in diffusion U-Nets and uses their findings to motivate a training-free approach for accelerating sampling from diffusion models. The method is demonstrated to successfully speed up inference in a variety of settings, including different architectures (including non-hierarchical DiTs), different samplers, and some modified inference processes.
Strengths: - The method is demonstrated to work well on a wide range of networks & tasks, including U-Net-based LDMs (SD), Imagen-style cascaded pixel-space U-Net diffusion models (IF), and DiTs. Especially DiT is notable, as previous methods do not work on homogeneous transformer architectures out-of-the-box.
- The extensive appendix provides a lot of useful additional information and experimental results and supplements the empirical study about the evolution of internal features in diffusion models well.
- The proposed method opens up avenues for further accelerating sampling speed as measured in wall clock delay for single samples by utilizing multiple GPUs to distribute the proposed parallel encoder propagation strategy.
- The proposed "prior noise injection method" helps alleviate artifacts incurred from the optimized sampling process.
Weaknesses: - No details are provided about the user study, and false claims relating to it are made in the checklist [questions 14 and 15].
- The main method seems to only be a minor incremental evolution of DeepCache's methodology (encoder features are cached and reused without adaptation of features or network in following steps, optionally multiple times, with the main difference being what is defined as the encoder), and seems to provide a *reduced* speedup when compared in a fair single-GPU setting [Tab. 1, Sec. 4.1 l. 271: 41% speedup vs. 56% speedup, at comparable quality levels].
- The presentation of the paper needs some work. Font sizes in some figures are excessively small [cf. Figs. 1, 2, 3, 7], Tabs. 2 and 3 are excessively compressed up to a point at which readability is impaired. The paper would also benefit from a thorough re-read and corrections for the camera-ready version. Especially grammar errors that substantially affect the claims made in the paper (e.g., sampling time reduction "to 41%" [l. 271] instead of "by 41%") and typos that change the meaning of words (e.g., "rudely" [l.105]), but also names of methods (e.g., "DeepFolyd-IF" [l. 60], "Stable Diffuison" [l. 525]).
Technical Quality: 3
Clarity: 2
Questions for Authors: - Can the authors provide examples and evidence of practically relevant settings where DeepCache fails or underperforms significantly compared to the proposed method (such as a combination with DPM-Solver(++), ControlNet, or Dreambooth etc)? These would help address the main method-related weakness mentioned.
- What do the ellipses in the key time step steps mean? Is every time step between 80 and 25 included for $\{80,...,25\}$?
- Is there a relevant difference between the proposed "prior noise injection method" and the various methods that artificially increase the noise level slightly during ODE sampling, such as ``churn'' in (Karras et al., Elucidating the Design Space of Diffusion-Based Generative Models, NeurIPS 2022)?
- Are all the videofusion samples in the supplementary material just corrupted or is there something to be demonstrated there?
- Does parallel-batch encoder propagation limit the possible batch size on a single GPU compared to other sampling methods? If so, could the authors provide a performance comparison that individually maxes out the batch size for the various approaches (including competing ones) and then reports the samples/second? As generating multiple variants simultaneously is a standard approach when generating images with diffusion models in practice, this is an important avenue for optimization. Specifically, I am worried that at a maximal batch size, parallel encoder propagation will only offer a negligible speedup compared to encoder propagation (due to larger generation batch sizes being possible with non-parallel encoder propagation, as VRAM usage is substantially different between the two methods [Tab. 8]), at which point this method would provide less than half of the speedup provided by prior art.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors have adequately addressed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your feedback and will incorporate the discussions mentioned below to enhance the quality of our paper. Note that we utilize the numerical references to cite sources within the main paper.
**W1. User study details.**
The study participants were volunteers from our college. The questionnaire consisted of 35 questions, each presenting two images: one from the baseline methods (including ControlNet, SD, DeepFloyd-IF, DPM-Solver++, Dreambooth, Custom Diffusion, Text-to-Video, etc.) and the other from our method FasterDiffusion (one example shown in **Figure.29**).
Users were required to select the image where the target was more accurately portrayed, or choose “both equally good”. A total of 18 users participated in the questionnaire, yielding 630 responses in total (35 questions × 1 option × 18 users). As the final results in **Figure.8** show, the chosen percentages for SD, DeepFloyd-IF, DPM-Solver++, Text-to-Video, ControlNet, and Personalize were 48\%, 44\%, 51\%, 52\%, 46\%, and 39\%, respectively.
These results show that our method performs on par with the baseline methods.
**W2\&Q1. Comparison with DeepCache**
DeepCache is developed based on the observation over the temporal consistency between high-level features.
In this paper, we conduct a more thorough analytical study of the SD model features, as shown in **Figure.3**, where we find that the encoder features change little, whereas the decoder features exhibit substantial variations across time steps. This insight motivates us to omit encoder computation at certain adjacent time-steps and reuse encoder features from previous time-steps as input to the decoder over multiple time-steps. We further determine the key time steps of the T2I inference stage, which allows our method to skip time steps in a more principled way.
It is also important to note that DeepCache is not parallelizable on multiple GPUs since DeepCache needs to use all or part of the encoder and decoder at every time step.
FasterDiffusion, on the other hand, only uses the encoder at the $\textit{key}$ time steps, which enables parallel processing at $\textit{non-key}$ time steps and faster inference time cost.
It is also worth noting that FasterDiffusion can be combined with a wide range of diffusion-model-based tasks, including Text2Video-zero, Dreambooth, and ControlNet. In these cases, DeepCache is often much slower when applied to such tasks. As an example shown in the **Table below**, when combined with ControlNet, DeepCache is **24\%** slower than FasterDiffusion.
More specifically, since ControlNet requires an additional encoder, our method FasterDiffusion is able to execute this extra encoder in parallel and reuse its features, making the additional time negligible.
DeepCache, on the other hand, reuses the decoder features, which forces ControlNet to wait for the additional encoder to finish its computation at every time-step, so skipping the encoder of the standard SD model saves almost no time.
As shown in **Figure.28**, FasterDiffusion successfully preserves the given structure information and achieves similar results as the original ControlNet.
| | **Clipscore**$\uparrow$ | **FID**$\downarrow$ | **s/image**$\downarrow$ |
|--------------------------|-------------------------|---------------------|-------------------------|
| **ControlNet** | 0.769 | 13.78 | 3.20 |
| **ControlNet w/ DeepCache** | 0.765 | 14.18 | 1.89 (1.69x) |
| **ControlNet w/ Ours** | 0.767 | 14.65 | **1.52 (2.10x)** |
**Table 14:** When combined with ControlNet (Edge) 50-step DDIM, our inference time shows a significant advantage compared to DeepCache.
---
**W3. Paper presentation**
We will carefully correct these figures, tables, grammar errors and typos in the future version.
**Q2. Ellipsis meaning in the key time step**
The ellipsis in $t^{key}$ denotes each time-step between the time-steps on either side of the ellipsis. For example, 80...25 means that every time-step between 80 and 25 is included.
**Q3. Relevant difference**
In EDM [A] (Karras et al.), they increase the noise level during ODE sampling to improve deterministic sampling, which is referred to as stochastic sampling (i.e., 'churn'). Compared to deterministic sampling, it injects noise into the image at each step, which helps to enhance the quality of the generated images. Song et al. [B] first observed that perturbing data with random noise makes the data distribution more amenable to score-based generative modeling. Increasing the noise level in EDM is based on Song's paper.
Their purpose for injecting noise is to perturb the data, whereas our purpose for injecting noise during sampling is to preserve high-frequency details in the image during the denoising process, preventing the diffusion model from removing high-frequency information as noise (see **Figure.6** in the main paper).
In addition, our method FasterDiffusion differs from theirs in that it injects noise only in the later stage of the diffusion steps, whereas they inject noise at all time-steps.
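A hedged sketch of this late-stage injection, with placeholder threshold and scale (illustrative values chosen by us; the paper's actual settings may differ):

```python
import numpy as np

def inject_prior_noise(latent, prior_noise, t, t_thresh=25, alpha=0.003):
    """Add back a small fraction of the initial sampling noise, but only
    at later time steps (t < t_thresh), to help preserve high-frequency
    detail. `alpha` and `t_thresh` are illustrative placeholders."""
    if t >= t_thresh:
        return latent               # earlier steps: leave latent untouched
    return latent + alpha * prior_noise
```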
**Q4. Video corruption**
The files (including the videofusion samples) in the supplementary material are not corrupted. They can be opened correctly using the default video players on Windows or Mac operating systems.
However, on the Linux operating systems with graphical interfaces (e.g., Ubuntu), the default video player cannot open these files normally. You may need to download an alternative video player from the software store (e.g., Celluloid) to read these videos.
**Q5. Parallel or Serial**
Please refer to our 'global' response (**General Response 1**) for details.
---
[A] Elucidating the design space of diffusion-based generative models
[B] Generative modeling by estimating gradients of the data distribution
---
Rebuttal Comment 1.1:
Comment: > **W1. User study details.**
>
> The study participants were volunteers from our college. The questionnaire consisted of 35 questions, each presenting two images: one from the baseline methods (including ControlNet, SD, DeepFloyd-IF, DPM-Solver++, Dreambooth, Custom Diffusion, Text-to-Video, etc.) and the other from our method FasterDiffusion (one example shown in Figure.29).
>
> Users were required to select the image where the target was more accurately portrayed, or choose “both equally good”. A total of 18 users participated the questionnaire, resulting in totally 630 samples (35 questions × 1 option × 18 users). As the final results shown in Figure.8, the chosen percentages for SD, DeepFloyd-IF, DPM-Solver++, Text-to-Video, ControlNet, and Personalize were 48%, 44%, 51%, 52%, 46%, and 39%, respectively. These results show that our method performs on par with the baseline methods.
Thanks for providing these details and the screenshots in the rebuttal PDF, but this does still not resolve the major concerns about question 15 (IRB board approval, potential risks for participants).
---
Reply to Comment 1.1.1:
Title: IRB considerations
Comment: I appreciate your emphasis on the significance of IRB approvals for user studies in current computer vision research. It is indeed crucial to ensure that research involving human subjects adheres to ethical standards and guidelines, and obtaining IRB approval is a key step in this process.
The Institutional Review Board (IRB) is a formalized entity tasked with upholding research ethics by conducting a rigorous assessment of the methodologies proposed for research endeavors that involve human subjects. The primary objective of these reviews is to safeguard the wellbeing of study participants, ensuring that they are not subjected to harm and that their rights and welfare are duly protected. This is achieved through a meticulous examination of research protocols and associated documentation, with the intention of identifying and mitigating potential risks.
In the context of this research paper, it is noteworthy that the study design does not pose any risks of harm to the participants. Instead, we employ a user study methodology wherein participants are invited to express their opinions regarding the images generated by our approach in comparison to those produced by alternative methods. To further safeguard the participants, we have implemented rigorous safety checks within our image generation pipelines, ensuring that no harmful or inappropriate content is included in the output images. Consequently, the study adheres to ethical principles by minimizing potential risks and prioritizing the protection of participant rights and welfare.
In this context, we address checklist item 15 as outlined by the regulatory body, as our study does not involve research concerning human subjects. Rather, it may be construed as an online questionnaire administered for data collection purposes.
We endeavored to address all your concerns comprehensively to ensure clarity in the explanations provided. If you have further questions about our research, please feel free to engage with us for further discussion. | Summary: This method can be applied at inference time without requiring retraining to improve sampling efficiency while maintaining high image quality and can be combined with other approaches to speed up diffusion models. The approach is effective across various conditional diffusion tasks, including text-to-video generation (e.g., Text2Video-zero, VideoFusion), personalized image generation (e.g., Dreambooth), and reference-guided image generation (e.g., ControlNet). Using encoder features from previous time steps as input for decoding multiple later time steps allows for concurrent decoding, further accelerating SD sampling by 41%, DeepFloyd-IF sampling by 24%, and DiT sampling by 34%. Besides, a prior noise injection strategy is introduced to preserve image quality, maintaining texture details in the generated images.
Strengths: Novelty: The three techniques to design efficient diffusion models are novel.
Significance: The problem of efficient diffusion transformer inference acceleration is significant in diffusion models.
Methodology: The proposed algorithm is well-formulated and well-explained.
Results: The experimental results show significant improvements over existing methods over SD, DiT, and DeepFloyd-IF and are widely applied to text-to-image, text-to-video generation, and personalization.
Weaknesses: The visualization results for smaller steps appear suboptimal, potentially due to the absence of consistency models or adversarial distillation models for verification. This raises concerns about the foundational assumption of maintaining high similarity throughout the process. It is crucial to explore whether incorporating consistency models or adversarial distillation methods could enhance the quality of results and ensure the assumption of high similarity holds even in smaller steps. Providing more robust verification methods would strengthen the validity and reliability of the visual outputs generated by the proposed approach.
Technical Quality: 3
Clarity: 3
Questions for Authors: Extra Cost to Identify Key Time-Steps: What are the additional costs associated with identifying key time steps in the diffusion process? Is this identification process capable of being learned dynamically, or is it independent of the data? Understanding the computational overhead or resource allocation needed for this identification process is crucial for evaluating the overall practicality of the method.
Enhancing Application Area with Smaller Steps Diffusion Models: Applying this approach to smaller steps diffusion models has the potential to significantly expand its applicability. By effectively handling smaller time steps, the method could be utilized in a wider range of scenarios, enhancing its versatility and impact.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Please refer to the weakness and questions part.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your feedback and will incorporate the discussions mentioned below to enhance the quality of our paper. Note that we utilize the numerical references to cite sources within the main paper.
**W1\&Q2: Cooperation with few-step T2I models**
In the **Limitations** part of the main paper, we present that our method faces challenges in generating quality results when using a smaller number of sampling steps.
The underlying reason is that our method, FasterDiffusion, accelerates the T2I generative model based on an empirical study of the feature changes at adjacent time-steps in multi-step DMs (e.g., Stable Diffusion with 50 steps), as shown in Fig.3. We observe that the encoder features change minimally, whereas the decoder features exhibit substantial variations across different time steps. This insight motivates us to omit the encoder computation at certain adjacent time-steps and reuse the encoder features of previous time-steps as input to the decoder at multiple time-steps.
To check whether our method can be combined with few-step T2I models, including LCM [A] and SDXL-Turbo [B], we visualize the features of both the encoder and decoder of the UNet in these models.
In 4-step LCM, the features of the encoder and decoder of the UNet vary greatly at adjacent time steps (see **Figure.22**). The encoder feature changes are not as subtle as they are in 50-step Stable Diffusion models, which means that the encoder features cannot be shared across adjacent time-steps (see **Figure.24**).
The 4-step SDXL-Turbo [B] shares the same problem as the 4-step LCM, as can be seen in **Figure.23** and **Figure.25**.
Therefore, these few-step T2I generative models (LCM and SDXL-Turbo) cannot be combined with FasterDiffusion.
However, as a future task, we aim to apply these findings to model distillation. We hypothesize that incorporating the encoder only once and the decoder across multiple time-steps can help capture richer semantic information and accelerate the diffusion model distillation training.
For example, in **Figure.26**, we compare the feature visualizations of 1-step LCM and 4-step LCM; the decoder feature maps show more diversity with 4 steps. This implies that we can share one encoder across four decoders to capture the semantic information encapsulated in these decoders. In this way, we can achieve faster distillation for T2I models and quicker parallel T2I generation.
In this paper, we focus only on training-free acceleration techniques and will make this training-required distillation method our future work.
**Q1: Additional cost to identify key time steps**
To identify key time steps, we conducted an empirical analysis of feature changes at adjacent time steps in multi-step Stable Diffusion models (taking the 50-step model as an example).
This analysis is based on statistics from the distribution of features across 100 random prompts, which does not impose a high time cost.
Based on analysis as shown in **Section 3.2**, we determine the $\textit{key}$ time-step as $t^{key}=\\{50, 49, 48, 47, 45, 40, 35, 25, 15\\}$ for the 50-step Stable Diffusion model.
As a general case, we also applied this $\textit{key}$ time-step configuration to the ControlNet [10], Text2Video-zero [4], VideoFusion [5], Dreambooth [7] and Custom Diffusion [8] models based on the 50-step SD models.
Moreover, once we determine the $\textit{key}$ time steps through analysis, the T2I diffusion models and their applications using FasterDiffusion always employ the same determined $\textit{key}$ time steps.
This is a **once-for-all** solution, with no additional costs for each application scenario.
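To make the analysis above concrete, here is a minimal sketch of how key time steps could be selected from feature statistics. The function name, the relative-change criterion, and the threshold are illustrative assumptions for this sketch, not the paper's exact procedure.

```python
import numpy as np

def select_key_steps(encoder_feats, threshold=0.05):
    """Pick 'key' time steps where encoder features change noticeably.

    encoder_feats: list of np.ndarray, one (flattened) encoder feature
    per sampling step, ordered from the first step to the last.
    A step is 'key' when its feature differs from the previous key
    step's feature by more than `threshold` (relative L2 change);
    non-key steps can then reuse the cached encoder output.
    """
    key_steps = [0]              # the first step is always key
    ref = encoder_feats[0]
    for i, feat in enumerate(encoder_feats[1:], start=1):
        rel_change = np.linalg.norm(feat - ref) / (np.linalg.norm(ref) + 1e-8)
        if rel_change > threshold:
            key_steps.append(i)
            ref = feat           # compare subsequent steps to this one
    return key_steps
```

Under this sketch, all steps that are not selected would skip the encoder and reuse the most recent key step's features, matching the once-for-all nature of the configuration described above.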
---
[A] Latent consistency models: Synthesizing high-resolution images with few-step inference.
[B] Adversarial diffusion distillation. | Rebuttal 1:
Rebuttal: **“global” response**
We appreciate all reviewers (**R1=wXBv**, **R2=BYFv**, **R3=wo3A**, **R4=i4aK**) for their positive feedback. They note that this paper is well-written (R4) and easy to understand (R4); that the technique is novel (R1) and promising (R3); that the proposed algorithm is well-formulated and well-explained (R1); that we present an insightful analysis of the UNet (R4); that we provide extensive experiments over various network architectures (R1, R2, R4) and tasks (R1, R2, R3), and use various metrics (R4); and that we also work on diffusion transformer inference acceleration (R1, R2). Below we respond to general questions raised by reviewers; we use **W** to abbreviate **Weaknesses**, **Q** to represent **Questions**, and **L** for **Limitations**.
Note that we utilize the numerical references to cite sources within the main paper.
**General Response 1. Parallel on multi-GPU or Serial on single-GPU (R2-Q5, R3-W3)**
Our advantage over other methods (e.g., DeepCache) is the ability to perform inference on *non-key* time steps in parallel, thereby enabling parallel processing across multiple GPUs.
This characteristic assists our method FasterDiffusion in achieving better speedup performance compared to other methods.
It is also noticeable that DeepCache is not parallelizable, since it needs to use all or part of the encoder and decoder at every time step. For this reason, DeepCache cannot be further accelerated by deploying it on multiple GPUs.
Deep learning models rely heavily on parallel training and inference across multiple GPUs. Using only a single GPU can result in excessive time consumption when dealing with large models. Therefore, parallelizing existing algorithms and unlocking the speed-up available from multi-GPU computation is a very relevant and impactful contribution. In this paper, our proposed FasterDiffusion parallelizes the processing of multiple time-steps, enhancing processing efficiency.
In addition, our findings can also be applied to model distillation by using the decoder at multiple time steps to capture richer semantic information, which DeepCache cannot achieve. As a future task, we aim to apply these findings to model distillation. We hypothesize that incorporating the encoder only once and the decoder across multiple time-steps can help capture richer semantic information, and accelerate the diffusion model distillation training.
For example, in **Figure.26** in the rebuttal PDF, we compare the feature visualizations of 1-step LCM and 4-step LCM; the decoder feature maps show more diversity with 4 steps.
This implies that we can share one encoder across the 4-step decoders to capture the semantic information they encapsulate. In this way, we can achieve faster distillation for the T2I models and quicker parallel T2I generation.
In this paper, we focus only on training-free acceleration techniques and will make this training-required distillation method our future work.
| | **Clipscore**$\uparrow$ | **FID**$\downarrow$ | **s/image**$\downarrow$ |
|--------------------------|-------------------------|---------------------|-------------------------|
| **ControlNet** | 0.769 | 13.78 | 3.20 |
| **ControlNet w/ DeepCache** | 0.765 | 14.18 | 1.89 (1.69x) |
| **ControlNet w/ Ours** | 0.767 | 14.65 | **1.52 (2.10x)** |
**Table 14:** When combined with ControlNet (Edge) 50-step DDIM, our inference time shows a significant advantage compared to DeepCache.
Pdf: /pdf/b15ddcfddfc6a27c1a480ae5462b91845789134d.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Abstract Reward Processes: Leveraging State Abstraction for Consistent Off-Policy Evaluation | Accept (poster) | Summary: The authors introduce a new framework that groups the states with a similar reward, hence reduces variance of any applied OPE estimator.
Strengths: The framework is technically sound, well discussed, the analysis is on point and addresses important questions, for example that regardless of the choice of state abstraction function \phi, the overall framework remains asymptotically consistent. The discussion on how the framework translates to existing OPE methods when varying its parameters (\phi, c) help show where the improvements of this framework lie.
The experiments are complete and show a strong empirical performance of this method.
Weaknesses: Minor: [L548, L560, L586, L600, L609] "See Appendix B" is obsolete, we are already in Appendix B.
Technical Quality: 3
Clarity: 3
Questions for Authors: In [L212] you claim this is the first model-based OPE method that converges to a true policy value without any model class assumptions. How does your method compare to action clustering method of Peng et al. (2023)? This method factors action space instead of the state space, and regardless of the model class, it seems to converge to a true policy value.
In [L304], you suggest that performance can be improved even further by applying estimator selection methods. Moreover, in [L250], you mention there is no approach to discovering good abstraction functions \phi. The recent work of Cief et al. (2024) might solve both these issues.
References:
Peng, Jie, Hao Zou, Jiashuo Liu, Shaoming Li, Yibao Jiang, Jian Pei, and Peng Cui. “Offline Policy Evaluation in Large Action Spaces via Outcome-Oriented Action Grouping.” In Proceedings of the ACM Web Conference 2023, 1220–30. WWW ’23. New York, NY, USA: Association for Computing Machinery, 2023. https://doi.org/10.1145/3543507.3583448.
Cief, Matej, Michal Kompan, and Branislav Kveton. “Cross-Validated Off-Policy Evaluation.” arXiv, May 24, 2024. http://arxiv.org/abs/2405.15332.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors extensively discuss the limitations throughout the work, for example, in Conclusions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for taking time to review our paper and appreciate you recognizing the strength of the work’s theoretical and empirical analysis. We appreciate you pointing out additional relevant references, and address related questions below.
> In [L212] you claim this is the first model-based OPE method that converges to a true policy value without any model class assumptions. How does your method compare to action clustering method of Peng et al. (2023)? This method factors action space instead of the state space, and regardless of the model class, it seems to converge to a true policy value.
GroupIPS performs IPS on the clustered actions and is thus a model-free method, whereas our work involves approximating a model of the system, specifically, an abstract Markov reward process. The use of discrete state abstractions, in conjunction with tabular models, frees our method from requiring knowledge about the model class that can represent the underlying MDP. While the referenced paper does not seem to provide a guarantee for convergence to the true policy value (as per Proposition 3.3, the method is biased), our approach offers the theoretical guarantee of asymptotic convergence to the true policy value.
> In [L304], you suggest that performance can be improved even further by applying estimator selection methods. Moreover, in [L250], you mention there is no approach to discovering good abstraction functions \phi. The recent work of Cief et al. (2024) might solve both these issues.
Thank you for highlighting this reference on estimator selection for OPE. Holding out some off-policy data to estimate a proxy of the true policy value, using a validation OPE estimator, is a promising approach to abstraction selection. We will incorporate this reference and discuss its relationship to our work in the updated version of the paper.
> [L548, L560, L586, L600, L609] "See Appendix B" is obsolete, we are already in Appendix B.
We shall remove that text in the updated version of the paper.
We hope we have addressed your concerns, kindly let us know if you have any additional questions.
---
Rebuttal 2:
Comment: Thank you for taking the time to answer my questions. After reviewing all discussions, I am keeping my score and will advocate for accepting the paper (if the discussion will be needed, it seems the vote is unanimous). | Summary: The paper introduces a new framework, STAR, for off-policy evaluation (OPE). OPE uses a chain with rewards (MRP) to conduct the evaluation. The challenge of this methodology is that the MRP estimation can introduce bias, given the shift between the behavior policy and the policy for evaluation. STAR is designed to address this challenge. STAR projects continuous state space to a tabular space and introduces the abstract reward process (ARP), which refers to the MRP over an abstract state space. The tabular abstract state space reduces the difficulty of reward prediction, thus eliminating the model class mismatch in prediction. It is proved in the paper that projecting states to a discrete space preserves the return.
Strengths: * The paper makes good contributions by proposing a new framework for off-policy evaluation. The proposed method is analyzed in detail with both mathematical proofs and empirical evaluations.
* The introduction clearly lists the challenges and contributions, and provides a subsection with an overview of the new method. The way the introduction is organized is helpful for understanding.
* For the introduced step, abstracting continuous states with a tabular space, the paper explains the advantages clearly. The paper also provides mathematical proof to explain why this step is not considered as a source of error.
Weaknesses: * Theorem 3.1 proves why state abstraction preserves the return with a known reward function on the abstraction state space, but it remains unclear how the state abstraction increases the difficulty of learning the transition model for predicting the reward and the next state, and possibly also introduces stochasticity. Assuming the true MDP has $P(r_0, s_1| s_0, a_0) = 1$ and $P(r_1, s_3|s_1, a_0)=1$, if the state abstraction projects $s_0$ and $s_1$ to the same representation $\phi_0$, which is possible because the latent space is tabular. Then, the model will have to simulate a stochastic transition with probability $P(r_0, s_1 | \phi_0, a_0)=0.5$ and $P(r_1, s_3 | \phi_0, a_0)=0.5$. Therefore, it seems how accurate the evaluation is heavily depends on how good the state abstraction is. If so, it would be good to point out in the paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see the concern above
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for taking the time to review our paper, and appreciate your recognition of our work's contributions and analyses. Your observation regarding differing ease of model estimation across various abstractions is insightful, as detailed next. We have updated the paper to include this point.
> Theorem 3.1 proves why [...] in the paper.
As correctly pointed out, in some cases abstraction by aggregation of states can increase the difficulty of estimation of the transition function (ref: `Note A`). However, in general, state aggregation tends to simplify estimation by increasing the effective sample size [1].
`Note A:` For example, aggregation of two states with deterministic transitions -- which can be estimated perfectly from a single observation of those transitions -- creates stochastic transitions between abstract states. For these states, the abstraction function being the identity mapping is preferable to one that is a many-to-one mapping.
As per Theorems 3.1 and 4.1, MSE for OPE using STAR converges to zero as the amount of data increases, regardless of the abstraction function used. However, the rates of convergence can vary with the choice of abstraction. This highlights the need to consider additional factors, such as the ease of estimation and dataset size, while selecting an abstraction in STAR.
We appreciate you pointing out this subtlety and hope we addressed your concern. We have updated the paper to include this point - kindly let us know if you have any additional questions.
---
[1] Jiang, Nan. "Notes on state abstractions." (2018) URL: https://nanjiang.cs.illinois.edu/files/cs598/note4.pdf
---
Rebuttal Comment 1.1:
Title: Reply
Comment: I would like to thank the authors for their reply. The reply addressed my concern. After reading the reply and other reviews, I increased the score. | Summary: This paper studies the problem of Off-Policy Evaluation (OPE), which consists of estimating the value of a policy $\pi_e$ from an input dataset generated from another behaviour policy $\pi_b$. One naive way of constructing the estimated return of $\pi_e$ would be to compute its associated empirical Markov Reward Process (MRP) in the MDP, then apply importance sampling to it. Instead, the technique proposed in the paper is to apply importance sampling to an abstract MRP obtained via some discrete abstraction function. This technique has been validated with theoretical results, and with an experimental comparison against other baselines.
Strengths: (Idea and contribution) The core idea, that is marginalizing states into abstract MRP, is simple. However, it requires special care in selecting appropriate abstract representations that allow lossless evaluation of arbitrary policies. As confirmed by Theorem 3.1, this is indeed the case for the ARP defined in the paper (though it should be noted that this strongly relies on $\gamma = 1$). From Theorem 3.1 all the other results follow, which verify that the proposed technique is sound for *any mapping function*. The fact that "actions propagate through the abstract state transitions" (line 227) is also a very interesting addition, that contributes to the application of importance sampling. Summarizing, the theory developed in the paper is an elegant display of the abstraction process for OPE, and a sound use of HRL principles.
(Evaluation) The technique has been compared with a good number of relevant baselines from the literature. The results show that the proposed estimators have competitive or better performance than their competitors. The authors also reported the median estimator. Even though it is not always the best estimator, this addition allows to have a clearer picture, and it was evaluated positively.
(Presentation) The quality of presentation is high. Each new technical definition is well motivated, and the paper is reasonably self contained.
(Reproducibility) The authors provided the full source code at the time of submission. The hyperparameters are also provided.
Weaknesses: 1. The theoretical results show that the method is sound and any estimator constructed as explained in the paper is an unbiased estimate of the true value.
However, the paper does not provide any theoretical result regarding the reduction in variance, which is the second motivation of this work. This is only confirmed experimentally in some domains.
1. No finite samples analysis has been conducted. Thus, when considering datasets of finite size, no theoretical result regarding the approximation error is available.
1. The method assumes to explicitly know the behaviour policy that generated the data. This is a strong limitation that has been also listed by the authors in the limitation section. This did not contribute heavily to the final score.
Minor:
- There are some typos in the use of z and z' in equation (2) and in the proof of Theorem 3.1.
- If accepted, I suggest to use the additional page for explaining the missing details of the experimental evaluation, such as:
- How is the MSE for OPE exactly defined
- How is the variance-bias measure is constructed for Figure 3.
- List Algorithm 1 in the main body
Technical Quality: 4
Clarity: 4
Questions for Authors: 4. How did you obtain the probabilities of the behaviour policy for the ICU sepsis dataset?
5. Which of the baselines assume complete knowledge about the behaviour policy?
The authors may also address any of the points raised above.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors listed all the most important limitations of this work. The contribution has no direct societal impact and it does not require further discussion regarding negative impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your careful review of our paper and insightful comments. We appreciate your positive assessment of our contributions and the clarity of our presentation. We address your questions and comments below.
> How did you obtain the probabilities of the behaviour policy for the ICU sepsis dataset?
The behavior policy is constructed by increasing the temperature parameter of the logits of an expert policy, making it more stochastic. The expert policy weights are provided with the ICU-Sepsis benchmark. Behavior probabilities are obtained by querying this modified behavior policy.
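As an illustration, temperature scaling of policy logits can be sketched as follows; `softened_policy` and its default temperature are hypothetical names for this sketch, not the benchmark's actual API.

```python
import numpy as np

def softened_policy(logits, temperature=2.0):
    """Make an expert policy more stochastic via temperature scaling.

    logits: per-action logits of the expert policy for one state.
    Higher temperature flattens the distribution toward uniform,
    producing a more exploratory behavior policy.
    """
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()                  # subtract max for numerical stability
    probs = np.exp(z)
    return probs / probs.sum()    # softmax over temperature-scaled logits
```

Querying this softened policy at each logged state then yields the behavior probabilities needed by the importance-sampling baselines.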
> Which of the baselines assume complete knowledge about the behaviour policy?
We thank the reviewer for pointing out the shared assumption of a known behavior policy among the baselines. All baselines, other than fitted Q-evaluation (FQE), assume knowledge of the behavior policy. Variants of some methods have been proposed in which the behavior policy is empirically estimated [1], introducing some bias into the estimation; we expect these to be a straightforward practical extension of STAR.
> Theoretical proof for variance reduction
The framework of STAR encompasses a range of estimators, controlled by $(\phi, c)$, each configuration of which instantiates a specific estimator. The efficacy of variance reduction of these estimators is inherently linked to the problem of *abstraction discovery*. Different combinations of abstractions and weight clipping factors yield varying degrees of variance reduction compared to standard OPE, with some configurations potentially not offering any reduction. For instance, weighted per-decision importance sampling (WPDIS) and approximate-model MRP (AM-MRP), representing the end points within STAR (ref: Section 4.2), cannot, in general, be ordered by their variance. Similarly, estimators that lie in between may exhibit significantly lower or higher variance and MSE. We assert the *existence* of such low variance estimators and provide empirical support for this claim.
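For context, a minimal sketch of the weighted per-decision importance sampling (WPDIS) estimator mentioned above, assuming an undiscounted finite horizon and known behavior probabilities; the function and argument names are illustrative, not the paper's implementation.

```python
import numpy as np

def wpdis(trajectories, pi_e, pi_b):
    """Weighted per-decision importance sampling estimate of policy value.

    trajectories: list of [(s, a, r), ...] collected under behavior policy.
    pi_e(s, a), pi_b(s, a): action probabilities of the evaluation and
    behavior policies. Assumes gamma = 1 (undiscounted finite horizon).
    """
    horizon = max(len(tau) for tau in trajectories)
    value = 0.0
    for t in range(horizon):
        weights, rewards = [], []
        for tau in trajectories:
            if t >= len(tau):
                continue
            # cumulative importance ratio up to and including step t
            rho = 1.0
            for s, a, _ in tau[: t + 1]:
                rho *= pi_e(s, a) / pi_b(s, a)
            weights.append(rho)
            rewards.append(tau[t][2])
        if weights:
            w = np.array(weights)
            # self-normalized average of rewards at step t
            value += float(w @ np.array(rewards) / w.sum())
    return value
```

Within STAR's framing, this estimator corresponds to one end point of the $(\phi, c)$ spectrum, with the abstraction being the identity mapping.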
> Finite sample analysis
A finite sample analysis would necessitate an extension of the simulation lemma [2] for ARPs defined from finite-horizon MDPs. Subsequently, the policy value estimation error can be bounded in terms of the estimation errors in the transition and reward functions of the estimated ARP. Such an extension to the simulation lemma would be a valuable theoretical contribution independent of its application to our work, and represents a promising direction for future exploration.
> Minor
Thank you for the suggestions, we shall incorporate these points in the additional space in the updated version.
We hope we have addressed your concerns, kindly let us know if you have any additional questions.
---
[1] Hanna, Josiah, Scott Niekum, and Peter Stone. "Importance sampling policy evaluation with an estimated behavior policy." International Conference on Machine Learning. PMLR, 2019.
[2] Kearns, Michael, and Satinder Singh. "Near-optimal reinforcement learning in polynomial time." Machine learning 49 (2002): 209-232. | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
SC3D: Self-conditioned Generative Gaussian Model with 3D-aware Feedback | Reject | Summary: This paper (SC3D) proposes a single-image-to-3D reconstruction method. It combines a multi-view diffusion model and a 3D reconstruction model, and uses the 3D reconstruction results as a self-condition to improve the multi-view generation process. The motivation of the proposed method is to improve the geometric consistency of the previous single-image reconstruction pipeline, namely first generating multi-view images and then performing sparse-view reconstruction. The core idea proposed in the paper, 3D-aware feedback, is reasonable and also appears in the concurrent works IM-3D and VideoMV. Several ablation studies need to be included to prove that the proposed 3D feedback (including RGB and coordinate maps) improves the reconstruction quality. The authors also need to further justify the contribution w.r.t. the related work VideoMV. Furthermore, there is still room to improve the readability of the submission.
Strengths: Major:
- The idea of using 3D reconstruction rendering as condition to improve the geometry consistency of multi-view diffusion models is reasonable.
- The experiments are comprehensive. The results demonstrate that the proposed feedback mechanism is solid in the multi-view reconstruction approach.
Weaknesses: - Claim about key contributions: the 3D-feedback idea appears already in VideoMV. Since the VideoMV is already available on Arxiv in March, authors need to justify more clearly about the difference and contribution w.r.t. VideoMV.
- lacks generalization results: the method is evaluated on Google Scanned Objects, which is standard. However, I am curious to see whether the approach generalizes to real-world images.
- lacks one ablation: SVD+RGBs feedback, which is missing in Tab. 2 and Tab. 3.
- the readability of the Alg.1 and Alg.2 can be improved. Currently it is too specific and looks like python program. A more abstract algorithm is expected in a scientific paper.
- A typo: in line 213, after comma, "we" instead of "We" (wrong capitalized "W")
Technical Quality: 3
Clarity: 3
Questions for Authors: - Why there is no SVD+RGB feedback in the ablation study? I think that is important to see how does the coordinates map contribute compared to multi-view RGB images as feedback.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: - No obvious limitations are found in the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ***1. Claim about key contributions: the 3D-feedback idea appears already in VideoMV. Since the VideoMV is already available on Arxiv in March, authors need to justify more clearly about the difference and contribution w.r.t. VideoMV.***
We will more clearly justify the difference with VideoMV in our revised paper.
- VideoMV aims to generate **multi-view consistent** images through 3D-aware denoising sampling. However, in the overall image-to-3D pipeline, its (a) lack of joint training and (b) inability to use geometric information hinder its capacity to fully leverage 3D-aware knowledge and unify the two stages. Moreover, this method fails to address the **"data bias"** between multi-view generation and 3D reconstruction. While its 3D-aware sampling method, employed in the inference phase, can improve multi-view consistency, it also introduces **biased information from reconstructed 3D models**, resulting in outputs that are misaligned with the input image (see **Fig.3 and Fig.4 in our attached PDF**).
- Our approach begins with a two-stage image-to-3D generation pipeline, aiming to address the **"data bias"** problem in the two stages, and thus proposes a unified self-conditioning framework to achieve high-quality 3D generation. We utilize both geometry and appearance information from the reconstructed results, incorporating joint training of the two stages. **The feedback loops in both the training and inference stages** help reduce the data bias and generate 3D assets with high-quality geometry and textures that adhere to the input image.
***2. Lacks generalization results: the method is evaluated on google scan objects, which is standard. However, i am curious to see if the approach generalizes to real world images.***
We provide the generalization results in **Fig.2 of our attached PDF**. Our SC3D can also generate high-quality 3D models from out-of-distribution images and real-world images.
***3. lacks one ablation: SVD+RGBs feedback, which is missing in Tab. 2 and Tab. 3.***
We acknowledge the importance of ablating coordinate map feedback. As demonstrated in **Fig. 6 and Tab. 1 in our attached PDF**, the reconstructed results in the 4th column show that relying solely on the color map leads to poor geometric quality in fine details. The combination of RGB and coordinates map feedback provides the best results by enhancing both geometry and texture quality, showcasing superior performance.
***4. The readability of the Alg.1 and Alg.2 can be improved. Currently it is too specific and looks like python program. A more abstract algorithm is expected in a scientific paper.***
We revise the two algorithms and the updated algorithm is shown **in the common response**.
***5. A typo: in line 213, after comma, "we" instead of "We" (wrong capitalized "W")***
Thanks for pointing out this typographical error. We will correct it in the revised version.
***6. Why there is no SVD+RGB feedback in the ablation study? I think that is important to see how does the coordinates map contribute compared to multi-view RGB images as feedback.***
We apologize for our oversight. We **have included** the qualitative and quantitative results of this setting in **Fig.6 and Tab.1 of our attached PDF**. The experimental results confirm the necessity of our two feedback types, demonstrating that CCM feedback enhances detailed geometry, while RGB feedback contributes to high-quality texture. The combination of these two feedback types brings greater benefits.
---
Rebuttal Comment 1.1:
Comment: Thank you for providing a detailed rebuttal and for conducting additional experiments and the new ablation study, which demonstrates how integrating RGB and CCM images from 3D reconstructions into the 2D diffusion process can enhance reconstruction capabilities.
While the improvement in reconstruction ability is appreciated, my primary concern regarding the innovative contribution of the proposed approach remains unaddressed. The core concept of utilizing 2D multiview diffusion supplemented with 3D-aware features closely mirrors methodologies already explored in SyncDreamer, where a similar technique of constructing a 3D feature volume and projecting it to each view as an additional 3D-aware condition is applied. Additionally, the 3D reconstruction model employed closely resembles the sparse view reconstructor used in LGM, further diluting the distinctiveness of the proposed method.
The idea of combining a 3D reconstruction model to provide 3D-aware features for a 2D multiview generation process is conceptually sound and holds potential. However, for this approach to make a significant impact and to truly advance the field, it requires further development to distinguish it from existing methods and to deepen its methodological insights.
In light of these considerations, I have decided to maintain my initial evaluation score. I believe that with significant refinements, particularly in developing unique aspects and insights beyond those provided by SyncDreamer and LGM, the proposed method could potentially offer a noteworthy contribution to the field.
---
Rebuttal 2:
Comment: Thank you for your feedback and for raising these concerns. Our previous response to your questions may have caused a misunderstanding of the core contribution of our approach. We would like to clarify the distinct contributions and the unique value of our method.
- **Comparison with SyncDreamer.** In terms of specific practices, our 3D-aware feedback differs from the volume features used in SyncDreamer[1]. While many multi-view generation methods, such as MVDream[2], EpiDiff[3], and SPAD[4], employ "3D-aware" features to enforce multi-view consistency, the 3D information they use does **not derive from explicit and accurate 3D models.** Instead, these methods typically rely on implicit multi-view interactions—like full multi-view self-attention in MVDream, epipolar-constraint attention in EpiDiff and SPAD, or SyncDreamer’s use of latent volumes and projection features. However, these approaches do not accurately reflect the true 3D geometric structure. **Our SC3D framework focuses on explicitly modeling 3D geometric structures with more accurate 3D-aware information derived from the reconstruction module.** Moreover, the explicit geometric structure provided by our SC3D framework makes it adaptable and generalizable across various network designs, unlike implicit geometric features, which must be trained or fine-tuned for each applied scenario.
- **Discussion about LGM.** The SC3D framework is designed to seamlessly integrate multi-view generation with reconstruction, demonstrating robust adaptability across different models. While the LGM[5] model is employed in our reconstructions, SC3D's inherent versatility allows it to also **incorporate alternative models** such as GRM[6], GS-LRM[7], among others. This flexibility enhances the framework's ability to adapt to a wide range of reconstruction models, distinguishing it from more rigid systems that are tightly coupled to specific models.
- **Our key contributions.** Our SC3D **framework** integrates multi-view generation with 3D reconstruction, aiming to reduce the "data bias" between these two stages. To address these challenges, we implement a self-conditioning mechanism and employ a joint training approach. Our multi-view generator in our framework models explicit 3D geometric features by incorporating **accurate 3D-aware** information from the reconstruction model. Our framework is **highly extensible** and can accommodate various multi-view generation networks and reconstruction networks.
We sincerely hope that you will reconsider the contributions and value of our approach. We look forward to your further feedback.
[1] SyncDreamer: Generating Multiview-consistent Images from a Single-view Image (ICLR 2024)
[2] MVDream: Multi-view Diffusion for 3D Generation (ICLR 2024)
[3] EpiDiff: Enhancing Multi-View Synthesis via Localized Epipolar-Constrained Diffusion (CVPR 2024)
[4] SPAD : Spatially Aware Multiview Diffusers (CVPR 2024)
[5] LGM: Large Multi-View Gaussian Model for High-Resolution 3D Content Creation (ECCV 2024)
[6] GRM: Large Gaussian Reconstruction Model for Efficient 3D Reconstruction and Generation
[7] GS-LRM: Large Reconstruction Model for 3D Gaussian Splatting | Summary: This paper proposes a method for 3D asset generation conditioned on a single image. The approach follows the recent trend of a two-stage feed-forward model – first generating multi-view images and then using a sparse-view reconstructor to reconstruct the 3D object (specifically, LGM in this paper). This two-stage model has a significant drawback: the inconsistency of the multi-view generation model may result in an imperfect input for the reconstructor, thus causing quality degradation of the final generated 3D assets.
To address this issue, the authors propose adding a 3D-aware feedback mechanism to improve multi-view consistency and enhance the final reconstructed results. Specifically, a self-conditioned mechanism is introduced, where the output of the reconstruction model is fed into the diffusion model. This output is involved in the diffusion process, leading to better 3D consistency.
Overall, the method seems sound to me.
Strengths: (1) The problem definition and the motivation for the project are very clear.
(2) The paper is well-written and easy to understand.
(3) The method seems sound. By adding the rendering results of a reconstructor as input, which present strong multi-view consistency, the diffusion model is also capable of generating multi-view-consistent images.
(4) Table 3 appears reasonable and as expected.
(5) The appendix provides helpful details on training and network architectures, aiding in the reproduction of the results.
Weaknesses: (1) Some related works lack citation and discussion:
(a) In "Dmv3d: Denoising Multi-View Diffusion Using 3D Large Reconstruction Model" [ICLR 2024], the paper uses a similar mechanism (though not entirely the same) by employing a 3D reconstructor as a multi-view image denoiser.
(b) “Carve3D: Improving Multi-View Reconstruction Consistency for Diffusion Models with RL Finetuning” [CVPR 2024] enhances multi-view consistency through RL fine-tuning.
(2) I encourage the authors to provide more visual results to help readers understand and appreciate the diffusion/reconstruction process. For example, could the authors provide some visual results of $\tilde{x}_0$ at different denoising steps?
(3) In the comparisons, although quantitative results are provided, could the authors include some qualitative (visual) comparisons to the baseline methods?
Other minor issues:
(1) Line 118: the Plücker coordinate should be (d, o × d)
(2) Line 206, We -> we
(3) Line 225, meshe -> meshes
Technical Quality: 3
Clarity: 3
Questions for Authors: Not applicable
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: (1) The method is limited to object-level reconstruction with a clean background. Though this is a common limitation in recent related works, I encourage the authors to explore this issue in future work.
(2) As discussed in the paper, extracting high-quality surface geometry from the Gaussian model remains an open problem. This is an interesting topic for future research.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ***1. Some related works lack citation and discussion: DMV3D and Carve3D.***
Thanks for your suggestion, and we will cite DMV3D and Carve3D with more discussions in our revised paper:
- DMV3D employs a 3D reconstruction model as the 2D multi-view denoiser in a multiview diffusion framework, to achieve generic end-to-end 3D generation. However, it does not leverage the powerful capabilities of pretrained image or video diffusion models, and training from scratch on 3D data limits its generalization. In contrast, our unified framework can leverage the visual prior from large image and video datasets, offering greater potential and capacity to tackle complex 3D modeling challenges.
- Carve3D employs an RLFT algorithm with a multi-view consistency metric to enhance the consistency of multi-view diffusion models. This metric is computed by comparing the generated multi-view images with those rendered from the reconstructed NeRF. However, Carve3D does not address the challenge of poor reconstruction quality resulting from limited and differently distributed training data for the reconstruction model, which is the second argument for "data bias" we put forward in line 32 of the original paper.
***2. I encourage the authors to provide more visual results to help readers understand and appreciate the diffusion/reconstruction process. For example, could the authors provide some visual results of x~0 at different denoising steps?***
Thanks for the helpful advice. We visualize reconstruction results at different denoising steps in **Fig.5 of the attached PDF**. Results show that floaters and distorted geometries are generated in the early stages due to multi-view inconsistency. The quality of geometry and appearance improves progressively throughout the denoising process.
***3. In the comparisons, although quantitative results are provided, could the authors include some qualitative (visual) comparisons to the baseline methods?***
We add more visual comparisons to the baseline methods, with results in **Fig.3 and Fig.4 of our attached PDF**. We compare our SC3D with SOTA image-to-multiview generation methods and image-to-3D generation methods. The results show that our SC3D generates more consistent and higher-quality multi-view images, and produces 3D assets with superior geometry and textures. We provide our detailed analysis **in the common response**.
***4. Other minor issues about typos.***
Sorry for our carelessness. We will fix it in the revised version. | Summary: The paper observes that the current state-of-the-art image-to-3D generation models consist of two separate parts: generate multi-view images from a single image and run on top the 3D reconstruction. This process has no feedback loop, i.e. the reconstruction does not inform the image generation which in turn leads to a worse quality of reconstruction. They propose a method that builds in a feedback to loop back the feedback of the reconstruction into the diffusion process. They report superior 3D reconstruction quality over the usual two separate step method.
Strengths: 1. The overall idea that drives the paper to provide a link between 3D reconstruction and diffusion at training and inference time is very powerful and novel, and has not been explored in existing text-to-3D papers. I think this is a significant asset of the paper in a crowded area.
2. The results presented seem to improve quite a bit over the existing state-of-the-art for the results shown.
Weaknesses: 1. The presentation of the paper is not clear.
- In Fig. 1 the paper is describing an iterative process. Fig. 1 has also no output, but suggests the 3D representation is the output. However, Fig. 2 suggests the (multi-view) images are the final output? Are the two decoders the same? In the paper you are referring to different models G and F. They are not mapped to the figure to get a better picture.
- Lines 169-172: this is describing the training strategy. That should be moved to the part starting from line 180 where you are actually describing the training strategy.
- Equation 1: c_skip is not explained
- The paper has many typos. Especially in part 4, they appear in almost every paragraph.
- Lines 227-228: This statement seems contradicting: “Directly employing a NeRF-based feed-forward model during the training process significantly reduces training speed due to the computational demands of volumetric rendering.”
- Replacing the algorithm code with more concise pseudo code may make it much easier for more readers to understand.
2. Comparisons are not very comprehensive
- None of the methods in Figure 1 are qualitatively compared.
- In Figure 3, it seems different views are compared in the first and second column
- Minor: It would be also really helpful to introduce some visual cues into figure 9 to easier grasp the results.
3. Some claims are not justified
- Is the section on augmenting the diffusion model with camera control in 3.1 a claimed contribution of the paper? The statement that: “This approach allows for more detailed and accurate 3D rendering, as pixel-specific embedding enhances the model’s ability to handle complex variations in depth and perspective across the video frames.” Is not justified at all, as other types of embeddings are not ablated.
Technical Quality: 3
Clarity: 2
Questions for Authors: My questions are described in the weaknesses section, but the most important is fleshing out the comparisons to understand what sort of improvement this method makes. The quantitative comparisons are dependent on the generated multi-view images to compare with for PSNR, so the reconstructed results aren’t necessarily good, but rather just match what the model generated. Having qualitative comparisons to existing methods would go a long way in answering my question of whether or not the proposed method is an improvement.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Overall the paper does describe some limitations of the method, but it’s not clear if they are relevant. For example, is using the gaussian splatting method really a limitation in this case? I’d be interested to know how long this method takes (is it much slower than LGM baseline), how computationally intensive it is, or how sensitive it is to the initial generation by the multi-view diffusion model.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ***1. The presentation of the paper is not clear.***
- *Confusion about Fig.1 and Fig.2 in the original paper.* We slightly adjust these two figures to ensure that both outputs are 3D representations (see **Fig.1 in the attached PDF** for the updated figure). The two decoders are the same VAE decoder, and the two reconstruction models are the same as well. We also include the symbols $\mathcal{G}$ and $F$ in the updated figure.
- *Move the lines 169-172 to where we actually describe the training strategy.* We agree with the suggestion and the mentioned lines are moved to the "Training Strategy" section in the revised version.
- *Explanation of c_skip in Equ.1.* $c_\text{skip}$ is a coefficient that controls how much of the original $x_0$ is retained. Its form is similar to a skip connection and is defined as $c_\text{skip}(\sigma)=\frac{1}{1+\sigma^2}$. The definition can be found in Appendix A.1 of our original paper.
- *Typos.* We commit to fixing these errors and have conducted a thorough proofreading of the manuscript.
- *Contradictory statement in lines 227-228.* Sorry that our description causes some misunderstanding. The statement aims to show that the low speed of volumetric rendering makes it challenging to replace our current Gaussian Splatting reconstruction model with a NeRF-based reconstruction model. We will make it clearer.
- *Replacing the algorithm code with more concise pseudo code.* We provide the pseudo code **in the common response**, and will update the manuscript.
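For concreteness, the EDM-style preconditioning behind the $c_\text{skip}$ explanation above can be sketched in Python. The form $c_\text{skip}(\sigma)=\frac{1}{1+\sigma^2}$ is quoted from Appendix A.1; the $c_\text{out}$ coefficient below follows the standard EDM parameterization with $\sigma_\text{data}=1$ and is an illustrative assumption, not necessarily the paper's exact configuration.

```python
# Sketch of EDM-style preconditioning (assumes sigma_data = 1).
import math

def c_skip(sigma: float) -> float:
    """Fraction of the noisy input x that is passed through directly."""
    return 1.0 / (1.0 + sigma ** 2)

def c_out(sigma: float) -> float:
    """Scale of the network's learned residual (assumed EDM form, sigma_data = 1)."""
    return sigma / math.sqrt(1.0 + sigma ** 2)

def denoise(network_output: float, noisy_x: float, sigma: float) -> float:
    """EDM denoiser: D(x, sigma) = c_skip(sigma) * x + c_out(sigma) * F_theta(...)."""
    return c_skip(sigma) * noisy_x + c_out(sigma) * network_output
```

At low noise ($\sigma \to 0$), $c_\text{skip} \to 1$ and the input passes through almost unchanged; at high noise, the network's prediction dominates.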
***2. Comparisons are not very comprehensive.***
- *None of the methods in Figure 1 are qualitatively compared.* We include more comprehensive qualitative comparisons **in our attached PDF**. As shown in **Fig.4**, two-stage baseline methods like LGM and InstantMesh often generate incorrect or incomplete geometry due to the inconsistency of the generated multi-view images. We also conduct ablation studies on our feedback mechanism in **Fig.6**. The reconstructed results show that our SC3D with full 3D-aware feedback achieves superior performance in both geometry and texture. Please check **the common response** for detailed analysis and visualization.
- *Comparison of different views in Figure 3.* We apologize for the confusion and have fixed it in **Fig.4 of the attached PDF**.
- *Add visual cues in Figure 9.* We appreciate the suggestion and will add visual cues as we did in our attached PDF.
***3. Some claims are not justified.***
- *Is the section on augmenting the diffusion model with camera control in 3.1 a claimed contribution of the paper?* The usage of Plücker embedding is not one of our paper’s contributions. It is a common positional encoding utilized in several recent works [1~4]. Our approach simply adopts this design.
[1] DMV3D: Denoising Multi-View Diffusion Using 3D Large Reconstruction Model
[2] SPAD: Spatially Aware Multi-View Diffusers
[3] EpiDiff: Enhancing Multi-View Synthesis via Localized Epipolar-Constrained Diffusion
[4] Free3D: Consistent Novel View Synthesis without 3D Representation
***4. My questions are described in the weaknesses section, but the most important is fleshing out the comparisons to understand what sort of improvement this method makes. Having qualitative comparisons to existing methods would go a long way in answering my question.***
We've provided our detailed responses in the above reply and the common response. In summary, we conduct detailed visual comparisons in **Fig.3, Fig.4 and Fig.6 of our attached PDF**. Qualitative comparisons for image-to-multiview (Fig.3) and image-to-3D (Fig.4) generation show that our SC3D generates more consistent and higher-quality 3D assets than other methods. The ablation study on feedback (Fig.6) shows the importance of our feedback loop in improving texture quality and geometric details. We provide the detailed analysis in **the common response**.
***5. Overall the paper does describe some limitations of the method, but it’s not clear if they are relevant. For example, is using the gaussian splatting method really a limitation in this case? I’d be interested to know how long this method takes (is it much slower than LGM baseline), how computationally intensive it is, or how sensitive it is to the initial generation by the multi-view diffusion model.***
- *Is using the GS method a limitation?* We consider 3D meshes to be more commonly used in downstream applications, and there are still challenges in converting Gaussian Splatting into high-quality meshes.
**Table B**
|Model|Inference time|
|-|-|
|LGM baseline (ImageDream + LGM)|1.225s|
|SV3D + LGM|24.18s|
|SC3D (SVD + LGM + Feedback) |25.19s|
- *Computational cost.* We acknowledge that our SC3D has limitations in computation cost, and will include it in the revised version. In **Tab.B**, we compare the inference time of baseline methods under the same setting. The LGM baseline employs ImageDream[5] to generate 4 views of 256x256 resolution, and reconstruct them into 3DGS. Our SC3D approach uses SVD to generate 8 views of 512x512 resolution. **For a fair comparison**, we report the inference time of "SV3D + LGM", where SV3D[6] is a multi-view generator fine-tuned from SVD. Compared to "SV3D + LGM", our additional overhead mainly arises from the feedback mechanism at each step, involving VAE decoding, 3D reconstruction and rendering, and conditioning injection. As shown in **Tab.B**, the inference time remains within acceptable bounds.
- *Sensitivity to initial multi-view generation.* SC3D is not sensitive to the initial multiviews. Our reconstruction model may produce poor results in the early steps due to the inconsistency of the initial multi-view images. But the quality of the reconstructed results rapidly increases in the denoising process (see **Fig.5 in the attached PDF**).
[5] ImageDream: Image-Prompt Multi-view Diffusion for 3D Generation
[6] SV3D: Novel Multi-view Synthesis and 3D Generation from a Single Image using Latent Video Diffusion
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed rebuttal. The additional comparisons and ablations are very insightful, and have increased my perception of the paper. Thus, I am increasing my score. However, the paper will require a significant amount of improvement in the presentation, which makes me less certain that it is ready for publication now.
---
Reply to Comment 1.1.1:
Comment: We appreciate your feedback and recognition of the improvements in our revised submission. We've revised the figures for clarity, improved textual organization and clarity, provided additional technical explanations, and thoroughly proofread the manuscript to enhance overall quality. Thank you for your guidance in improving our submission! | Summary: This paper proposes SC3D for the single-image-to-3D generation, which integrates the diffusion-based multi-view generation and Gaussians-based 3D reconstruction through a self-conditioning mechanism. Specifically, during each denoising step, SC3D injects the rendered image and geometric map from the reconstruction model into the denoising process to enhance the multi-view consistency of the multi-view generated images. Experiments on GSO dataset demonstrates its superiority over existing methods mentioned in this paper.
Strengths: 1. SC3D integrates multi-view image generation and 3D reconstruction into a single framework, ensuring a similar data distribution between the two modules and thereby improving reconstruction quality during the inference process.
2. SC3D proposes a self-conditioned 3D-aware feedback mechanism to bridge the multi-view image generation and 3D reconstruction, in which the rendered images and geometric maps are injected into the multi-view generation network. Such a design makes sense and could improve the consistency of the results generated by the multi-view generation network.
Weaknesses: 1. Lack detailed visual comparisons with baseline methods. The authors only compare SC3D with LGM but do not show results generated from other baselines, making the visual comparison results less convincing.
2. The paper suffers from poor organization. For example, Figure 4 and Figure 5 are not referenced anywhere in the text. The purpose of Figure 6 is confusing, as its caption suggests it shows results from another work, and it is difficult to discern differences among the three rows. Additionally, the paper's typesetting is of poor quality. There are many blank spaces in the text.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. Will jointly training the multi-view generation network and 3D reconstruction network make the training unstable? Will the jointly training mechanism increase the training time? More details about potential disadvantages of jointly training should be discussed in the paper.
2. As mentioned in Line 211-218, the settings of the two ablated experiments seem the same; what is the difference between them?
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Please refer the weaknesses and questions above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ***1. Lack detailed visual comparisons with baseline methods.***
We conduct comparisons with more baseline methods, and show the visualization results in **Fig.3 and Fig.4 of the attached PDF**. We compare SC3D with the SOTA image-to-multiview and image-to-3D generation methods.
- *Image-to-multiview generation.* We compare SC3D with SyncDreamer[1], SV3D[2] and VideoMV[3], and present the qualitative results in **Fig.3 of our attached PDF**. SyncDreamer and SV3D fine-tune image or video diffusion models on 3D datasets but do not use explicit 3D information, often resulting in blurry textures or inconsistent details. VideoMV aggregates rendered views from reconstructed 3D models at the inference stage, but it fails to take into account the "data bias" between these two stages. Although VideoMV improves the multi-view consistency, it introduces biased information from reconstructed 3D models, leading to results that are unaligned with the input image. Our SC3D framework involves the joint training of the two stages and uses geometry and appearance feedback for multi-view generation, enhancing the consistency and quality of the generated multi-view images.
- *Image-to-3D generation.* We compare SC3D with TripoSR[4], VideoMV[3], LGM[5] and InstantMesh[6], with visualization results in **Fig. 4 of the attached PDF**. TripoSR reconstructs a 3D model from a single image without using generative models, resulting in low-quality geometry/appearance and limited generalizability. VideoMV reconstructs 3DGS from its generated multi-view images. Due to its biased multiview generation (as indicated in Fig.3 in the attached PDF), it may generate inconsistent texture against the input image and somewhat distorted geometry. Moreover, two-stage baseline methods like LGM and InstantMesh (i.e., an off-the-shelf image-to-multiview generation method plus LGM or InstantMesh to fulfil the image-to-3D generation) produce incomplete or inconsistent geometry due to the gap between the two stages. Our SC3D bridges multiview generation and 3D reconstruction so that the benefits of each module can be transferred to the other, thereby generating high-quality 3D assets.
[1] SyncDreamer: Generating Multiview-consistent Images from a Single-view Image (ICLR 2024)
[2] SV3D: Novel Multi-view Synthesis and 3D Generation from a Single Image using Latent Video Diffusion (Arxiv 2403.12008)
[3] VideoMV: Consistent Multi-View Generation Based on Large Video Generative Model (ECCV 2024)
[4] TripoSR: Fast 3D Object Reconstruction from a Single Image (Arxiv 2403.02151)
[5] LGM: Large Multi-View Gaussian Model for High-Resolution 3D Content Creation (ECCV 2024)
[6] InstantMesh: Efficient 3D Mesh Generation from a Single Image with Sparse-view Large Reconstruction Models (Arxiv 2404.07191)
***2. The paper suffers from poor organization.***
We acknowledge the issues with our paper's organization and typesetting, and will seriously polish the manuscript.
- *Figures 4 and 5 are currently not referenced in the text.* In the revised version, we will ensure that these figures, which illustrate the comparison with baseline methods and out-of-distribution (OOD) testing results, are properly integrated into Section 4 to strengthen our analysis and conclusions.
- *The purpose of Figure 6 is confusing, as its caption suggests it shows results from another work.* We included Figure 6 in the original version to show that VideoMV's fusion of reconstruction results at the inference stage leads to a deviation in appearance from the input. To avoid any confusion, we will remove Figure 6 and provide a clearer and more comprehensive comparison like Fig.3 and Fig.4 in the attached PDF.
- *Typesetting and Formatting.* We will carefully re-evaluate the typesetting to eliminate blank spaces and enhance the overall layout.
***3. Will jointly training the multi-view generation network and 3D reconstruction network make the training unstable? Will the jointly training mechanism increase the training time? More details about potential disadvantages of jointly training should be discussed in the paper.***
- *Training stability.* Our joint training method is stable, which benefits from the following two aspects. (We will include additional training details in the revised version.)
1. The pretrained video diffusion model effectively utilizes its powerful visual generation capabilities to produce high-quality initial images.
2. We initialize the output layers of the condition encoders to zero, ensuring that even suboptimal initial reconstruction results do not adversely affect the network significantly.
- *Training time.* Jointly training the multi-view generation network and the 3D reconstruction network increases training time and requires more GPU memory, as it involves training two models simultaneously with the feedback mechanism. We've measured the time required for 1,000 training steps on a single A100 GPU with the setting listed in Appendix B of our paper, as detailed in **Tab.A**. We will further discuss the computation requirements in Section 4.3 (Limitations) of the revised paper. It is worth noting that SC3D has minimal impact on inference speed, as the 3D feedback mechanism incurs only a slight overhead.
**Table A**
| Setting | Training Time |
|-|-|
| train multi-view diffusion model only (SVD) | 15 min |
| train reconstruction model only (LGM) | 10 min |
| train SC3D (SVD + LGM + Feedback) | 36 min |
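The zero-initialization trick from point 2 above can be illustrated with a minimal plain-Python stand-in for the condition encoder's output layer (the class name and shapes are ours, not from the implementation): because the layer starts at exactly zero, early low-quality reconstructions contribute nothing to the pretrained backbone until training moves the weights away from zero.

```python
# Illustrative stand-in for a zero-initialized output layer
# (cf. ControlNet-style "zero convolution"); not the actual implementation.
class ZeroInitOutputLayer:
    def __init__(self, dim: int):
        # Weights and bias start at exactly zero, so the layer's output is
        # zero for any input; gradients move them away during training.
        self.weight = [[0.0] * dim for _ in range(dim)]
        self.bias = [0.0] * dim

    def __call__(self, features):
        return [
            sum(w * f for w, f in zip(row, features)) + b
            for row, b in zip(self.weight, self.bias)
        ]

layer = ZeroInitOutputLayer(dim=3)
out = layer([5.0, -2.0, 7.0])  # any input maps to all zeros at initialization
```

In a real framework this would be a convolution or linear layer with weights and bias zeroed at initialization; the key property is that the feedback branch is a no-op at step zero.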
***4. As mentioned in Line 211-218, the setting of two ablated experiments seems the same, what is the difference between them?***
Sorry for the confusion of the notations in the table. Actually, in the "Variant" column, "SVD" means the metrics for multi-view generation results, while "GS" shows the metrics for the reconstructed 3D results. In the revised version, we will merge Tab.2 and Tab.3 from the original paper to provide a clearer comparison. The updated table refers to **Tab.1 in our attached PDF**.
---
Rebuttal Comment 1.1:
Title: Official Comment by Reviewer KsJi
Comment: Thanks for providing the detailed rebuttal and additional experiments. My concerns are addressed. I will change my rating to Borderline accept.
Rebuttal: We thank all reviewers for the constructive comments and for recognizing the novelty and effectiveness of our self-conditioned image-to-3D generation method with 3D-aware feedback. We also extend our gratitude to the reviewers for identifying shortcomings in our paper's presentation and organization. We will revise it carefully to enhance its readability and presentation. Here, we provide responses to common questions raised by reviewers.
- **Generalizability.** As shown in **Fig.2 of our attached PDF**, SC3D has strong generalizability, generating high-quality 3D assets from out-of-distribution images, including real-world images.
- **Visual comparison with baseline methods.** Our SC3D can generate multi-view images and 3D models that are consistent with each other. To further assess its effectiveness, we compare SC3D with the SOTA image-to-multiview and image-to-3D generation methods.
- **Image-to-multiview generation.** We compare SC3D with SyncDreamer[1], SV3D[2] and VideoMV[3], as shown in **Fig.3** of the attached PDF. SyncDreamer and SV3D fine-tune image or video diffusion models on 3D datasets but lack explicit 3D information, often resulting in blurry textures or inconsistent details. VideoMV aggregates rendered views from reconstructed 3D models at the inference stage but fails to take into account the "data bias" between the two stages. Although VideoMV improves multi-view consistency, it introduces biased information from the reconstruction stage, leading to results that are unaligned with the input image. Our SC3D jointly trains the two stages and uses geometry and appearance feedback for multi-view generation, producing consistent and high-quality multi-view images.
- **Image-to-3D generation.** We compare SC3D with TripoSR[4], VideoMV[3], LGM[5] and InstantMesh[6], as visualized in **Fig.4** of the attached PDF. TripoSR struggles with high-quality geometry and appearance due to lacking large pre-trained generative models. VideoMV reconstructs 3DGS from its generated multi-view images, but its inherent biases in multiview generation can lead to misaligned textures and distorted geometries. Two-stage methods such as LGM and InstantMesh (comprising an off-the-shelf image-to-multiview generation method followed by reconstruction models for the image-to-3D generation process) often yield incomplete geometry due to the disparity between multiview generation and 3D reconstruction. In contrast, our SC3D framework integrates multiview generation and 3D reconstruction, enhancing each module's strengths to produce high-quality 3D assets.
- **Performance analysis.** To effectively analyze our method's effectiveness, we visualize the generation process and conduct comprehensive ablation experiments.
- **Denoising process.** **Fig.5** of the attached PDF shows the reconstruction results at various denoising steps, demonstrating our self-conditioning denoising process. The geometry and texture of objects progressively improve through iterative refinement.
- **Ablation study.** We've reorganized the ablation study and its presentation (see qualitative results in **Fig.6** and quantitative results in **Tab.1** of the attached PDF). Visualization results show that the baseline without feedback generates low-quality and inconsistent results. Using only coordinates-map feedback results in blurry textures, while only RGB feedback leads to poor geometric details. Combining both significantly enhances geometry and texture quality. We've also added an ablation experiment using only RGB and merged the original tables into **Tab.1** in the attached PDF for better comparison. Quantitative results show that feedback using both RGB and the coordinates map achieves superior outcomes. Furthermore, our framework reduces the performance gap between the generated multi-view images and the 3D representation, enhancing overall performance.
- **Presentation and organization.** We take the issues with our paper's organization and typesetting seriously and are committed to improving the manuscript to enhance its readability and presentation.
- **More concise pseudo code.** Our revised pseudo code is shown as Algorithm 1 and Algorithm 2.
```
Algorithm 1: Train_loss
Input: x, cond_image, cameras, timestep
Output: loss
Description: Returns the loss on a training example x. Details about EDM are omitted here.
Begin
    noise <- Sample from Normal Distribution
    noisy_x <- Add_Noise(x, noise, timestep)
    pred_x <- F(noisy_x, cond_image, timestep, cameras)
    pred_i <- VAE_Decoder(pred_x)
    self_cond <- G(pred_i, cameras, timestep)
    if Random_Uniform(0, 1) > 0.5 then
        pred_x <- F(noisy_x, cond_image, timestep, cameras, self_cond)
    End If
    loss_mv <- MSE_Loss(pred_x, x)
    loss_recon <- MSE_Loss(self_cond, x) + LPIPS_Loss(self_cond, x)
    loss <- loss_mv + loss_recon
    Return loss
End
```
```
Algorithm 2: Inference
Input: cond_image, cameras, timesteps
Output: images, 3d_model
Description: Generates multi-view images and a 3D model from a condition image.
Begin
    self_cond <- None
    x_t <- Sample from Normal Distribution
    for each timestep in timesteps do
        pred_x <- F(x_t, cond_image, timestep, cameras, self_cond)
        pred_i <- VAE_Decoder(pred_x)
        self_cond <- G(pred_i, cameras, timestep)
    End For
    Return pred_i, self_cond
End
```
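As a rough illustration only (not the actual implementation), the loop of Algorithm 2 can be sketched in plain Python with toy stand-ins for the denoiser `F`, the VAE decoder, and the reconstructor `G`. The latent update `x_t <- pred_x` is our simplification, since the EDM sampling update is omitted in the pseudocode above; all functions here are mock placeholders.

```python
import random

def F(x_t, cond_image, timestep, cameras, self_cond=None):
    # Mock multi-view denoiser: pulls the current latent toward the
    # condition, blended with the 3D-consistent feedback when available.
    guide = cond_image if self_cond is None else \
        [(c + s) / 2 for c, s in zip(cond_image, self_cond)]
    return [(x + g) / 2 for x, g in zip(x_t, guide)]

def vae_decoder(pred_x):
    return list(pred_x)  # identity stand-in for the VAE decoder

def G(pred_i, cameras, timestep):
    return list(pred_i)  # identity stand-in for the 3D reconstructor

def inference(cond_image, cameras, timesteps):
    self_cond = None
    x_t = [random.gauss(0.0, 1.0) for _ in cond_image]  # initial noise
    for timestep in timesteps:
        pred_x = F(x_t, cond_image, timestep, cameras, self_cond)
        pred_i = vae_decoder(pred_x)
        self_cond = G(pred_i, cameras, timestep)
        x_t = pred_x  # simplified latent update (EDM step omitted)
    return pred_i, self_cond  # multi-view images, 3D-consistent views
```

With these toy stand-ins, the iterates contract toward the condition, mirroring the progressive refinement shown in Fig.5.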
[1] SyncDreamer: Generating Multiview-consistent Images from a Single-view Image (ICLR 2024)
[2] SV3D: Novel Multi-view Synthesis and 3D Generation from a Single Image using Latent Video Diffusion (arXiv:2403.12008)
[3] VideoMV: Consistent Multi-View Generation Based on Large Video Generative Model (ECCV 2024)
[4] TripoSR: Fast 3D Object Reconstruction from a Single Image (arXiv:2403.02151)
[5] LGM: Large Multi-View Gaussian Model for High-Resolution 3D Content Creation (ECCV 2024)
[6] InstantMesh: Efficient 3D Mesh Generation from a Single Image with Sparse-view Large Reconstruction Models (arXiv:2404.07191)
Pdf: /pdf/a2ab954bab2b29f5b880fdd6fc0dad9956c59c07.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Extracting Training Data from Molecular Pre-trained Models | Accept (poster) | Summary: This paper tackles the problem of extracting private training molecular data from pre-trained models. To address this problem, the authors propose a machine learning method based on a model-independent scoring function and a molecule extraction policy network. The privacy of the training data is an emerging issue in scientific applications, and this paper showed that the proposed method has practical potential in the problem of extracting private training molecular data.
Strengths: 1. Well-organized manuscript.
2. Clear motivation and problem definition.
3. Comprehensive experiments to demonstrate the effectiveness of the proposed method.
Weaknesses: 1. Although the authors made a scenario to clarify the problem, it is not realistic because we can easily hide the molecular representations of classification and regression models.
---
2. If the authors claim the importance of the training data privacy in the molecular representation learning tasks, I cannot agree on it because extensive molecular structures for representation learning are already available in public databases, such as PubChem and ChEMBL. Note that the molecular data for supervised learning, rather than representation learning, is expensive because measuring the target physical and chemical properties of the molecules is time-consuming.
---
3. The score function in Eq. (1) is trivial, and the manuscript does not present some theoretical or chemical justification for the score function.
---
4. The proposed method should be evaluated in the regression tasks to demonstrate its practical potential in real-world chemical applications.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the Weaknesses section.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Please refer to the Weaknesses section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Response to Reviewer qKJK.
We greatly appreciate reviewer qKJK’s insightful feedback and critical comments to help us refine our work.
>W1\&4: Although the authors made a scenario to clarify the problem, it is not realistic ...; The proposed method should be evaluated in the regression tasks ...
Thanks for your comment. We first clarify that our research centers on model-sharing collaboration for molecular data. In this context, sharing a graph representation learning model is more realistic than sharing classification or regression models, for several reasons. On one hand, from the perspective of the data owner, sharing a classification/regression model poses a higher privacy risk due to the potential leakage of confidential label information, such as properties of molecules. These labels are highly valuable and sensitive, as highlighted by your insightful comment, making data owners hesitant to share models trained on such confidential data. Under this consideration, research efforts have proposed collaborative approaches based on sharing graph representation learning models [1,7]. On the other hand, from the model user's standpoint, there is often a need for a general pre-trained model to facilitate various downstream tasks, e.g., property prediction, drug-target interaction, etc. Since it is impractical for the data owner to share a different classification/regression model for each task, sharing a general representation learning model is more beneficial. Previous efforts also highlight the necessity of such a general pre-trained representation learning model [2, 3].
We also agree with you that in some target-specific collaboration cases, a classification/regression model is shared. Therefore, as suggested, we further explored data extraction attacks on regression models. To integrate regression tasks, we adapted our model by replacing the final output of the regression with the representation in Eq. (2). In our implementation, we chose the real-world chemical dataset FreeSolv [4], which regresses the hydration free energy, utilizing 5\% of the molecules from FreeSolv as an auxiliary dataset. The detailed results are as follows. We find that our model still performs well against the regression model.
|~|One Step||||Two Step||||
|-|-|-|-|-|-|-|-|-|
|~|K = 50||K = 100||K = 100||K = 200||
|~|Prec.|FCD|Prec.|FCD|Prec.|FCD|Prec.|FCD|
|MLP|0.39|19.00|0.21|**16.82**|0.22|**17.26**|0.16|17.30|
|Ours|**0.39**|**17.38**|**0.28**|17.42|**0.29**|18.31|**0.33**|**16.63**|
>W2: If the authors claim the importance of the training data privacy in the molecular representation learning tasks, I cannot agree on ...
We would like to clarify that privacy protection of molecular structures is also significant.
As a real-world example, consider the MELLODDY (MachinE Learning Ledger Orchestration for Drug Discovery) project, where 10 pharmaceutical companies collaborated to train a model for structure-activity prediction. These companies had substantial privacy concerns regarding their molecule structures, treating them as confidential information that cannot be shared [6]. In addition, previous research has demonstrated that molecular structure often involves sensitive intellectual property information and must be controlled by the owning company at all times [4]. Moreover, information on the structural similarity between partners' compounds is also considered sensitive [5]. These underscore the privacy issue of molecular structures.
We also thank the reviewer for reminding us that the labels for molecular data, such as target physical and chemical properties, are also private. We have therefore extended our model's application to shared classification/regression models trained on labels (see our response to W1). We will include these discussions in the revised version and explore this aspect further in the future. Thank you again for providing this important perspective.
>W3: The score function in Eq. (1) is trivial...
Thank you for your valuable and insightful feedback. Here, we illustrate the rationale behind the scoring function from a theoretical perspective.
Assume that graph $G$ is composed as $G:=G_{1} \cup G_{2}$ and the graph pre-trained model is denoted by $f$. Let the loss function for the pre-training task be $\mathcal{L}$, which takes a graph representation as input. Further, we assume that $\mathcal{L}$ is a linear bijection. Without loss of generality, we assume that the loss function is a weighted sum, that is, $\mathcal{L}(f(G)) = \alpha_1 \mathcal{L}(f(G_1)) + \alpha_2 \mathcal{L}(f(G_2))$, where $\alpha_1$ and $\alpha_2$ serve as hyper-parameters (this assumption is common across various tasks; for instance, in the common case of cross-entropy for classification, $\alpha_1 = |G_1|/|G|$ and $\alpha_2 = |G_2|/|G|$).
We can infer that $f(G)=\mathcal{L}^{-1}(\alpha_1 \mathcal{L}(f(G_1)) + \alpha_2 \mathcal{L}(f(G_2)))=\alpha_{1}f(G_{1})+\alpha_{2}f(G_{2})$.
Given this theoretical analysis, we observe that the relationship between $f(G)$, $f(G_{1})$, and $f(G_{2})$ is akin to a weighted combination, which justifies the design of our score function.
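Under the stated assumptions, the key identity $\mathcal{L}^{-1}(\alpha_1 \mathcal{L}(f(G_1)) + \alpha_2 \mathcal{L}(f(G_2))) = \alpha_1 f(G_1) + \alpha_2 f(G_2)$ holds for any linear bijection. A small numerical sanity check, with an illustrative invertible 2x2 matrix standing in for $\mathcal{L}$ and made-up representation vectors for $f(G_1)$ and $f(G_2)$:

```python
def apply(M, v):
    # Matrix-vector product for a 2x2 matrix M and 2-vector v.
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

def inverse(M):
    # Closed-form inverse of a 2x2 matrix (assumes det != 0).
    det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return [[ M[1][1]/det, -M[0][1]/det],
            [-M[1][0]/det,  M[0][0]/det]]

L = [[2.0, 1.0], [1.0, 3.0]]           # a linear bijection (det = 5)
f_G1, f_G2 = [1.0, -2.0], [0.5, 4.0]   # toy subgraph representations
a1, a2 = 0.3, 0.7                      # mixture weights

mixed = [a1*x + a2*y for x, y in zip(apply(L, f_G1), apply(L, f_G2))]
lhs = apply(inverse(L), mixed)                       # L^{-1}(a1 L(f1) + a2 L(f2))
rhs = [a1*x + a2*y for x, y in zip(f_G1, f_G2)]      # a1 f1 + a2 f2
```

Both sides coincide up to floating-point error, as linearity predicts.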
Reference:
[1] Xie, Han, et al. "Federated graph classification over non-iid graphs." NeurIPS, 2021.
[2] Hu, Weihua, et al. "Strategies for pre-training graph neural networks." ICLR, 2019.
[3] Xia, Jun, et al. "A systematic survey of chemical pre-trained models." IJCAI, 2023.
[4] Mobley, David L., and J. Peter Guthrie. "FreeSolv: a database of experimental and calculated hydration free energies, with input files." 2014.
[5] Simm, Jaak, et al. “Splitting chemical structure data sets for federated privacy-preserving machine learning.”, 2021.
[6] MELLODDY: Machine Learning Ledger Orchestration for Drug Discovery. https://www.melloddy.eu/.
[7] Tan, Yue, et al. "Federated learning on non-iid graphs via structural knowledge sharing." 2023.
---
Rebuttal 2:
Title: Any unanswered questions yet?
Comment: Dear Reviewer qKJK,
Thanks again for your detailed review and questions. We have provided detailed answers to them in the rebuttal. As we are approaching the end of the discussion phase, we would like to kindly ask if you still have any unanswered questions about our paper?
Best, Authors
---
Rebuttal 3:
Comment: Thank you for the careful response.
However, as a researcher in chemical science, I still do not agree with the scenario of this work.
Researchers and engineers in academic and industrial chemistry want to build a classification or regression model for their molecular datasets because they need to skip time-consuming chemical experiments to observe the physical and chemical properties of target molecules.
The graph representation of the molecule is just intermediate information for classification and regression in most real-world chemical applications. Furthermore, as I mentioned, many large molecular databases containing extensive 2D and 3D molecular structures are publicly accessible, such as PubChem, ChEMBL, and QM. For this reason, training data for molecular representation learning is not expensive and private.
The authors need to handle the validity of their problem definition before developing the method.
---
Rebuttal Comment 3.1:
Title: Response to Reviewer qKJK.
Comment: We appreciate your perspective in chemical science and understand the concerns you have raised. Firstly, we recognize the importance of developing classification or regression models in reducing the need for time-consuming experiments. Correspondingly, we have conducted experiments on the FreeSolv regression task and achieved promising results, indicating that our approach is also suitable for regression-type models.
Secondly, we believe that, due to the confidential nature of label information and the need to ensure the transferability of the model to out-of-distribution data, releasing the intermediate-layer representations of a classification/regression model is a reasonable approach in practical scenarios. Specifically, our attack on graph representations is suited to this scenario.
Moreover, when the distributions of publicly available datasets diverge from the private data, and data owners are unwilling to disclose any information about the labels, the choice would still be to employ SSL pre-training to obtain graph representations.
Lastly, we will revise the problem formulation of our manuscript and incorporate the three cases discussed above. In conclusion, we are committed to refining our manuscript to better align with the concerns and expectations of the chemical science research community. We believe that our work can make a significant contribution to the field, and we are grateful for the opportunity to improve our submission based on your valuable feedback. | Summary: This paper explores the vulnerabilities of molecular pre-trained models to data extraction attacks. The authors introduce a novel molecule generation approach and a model-independent scoring function to identify molecules potentially originating from private datasets. They also present a Molecule Extraction Policy Network to optimize the search process for high-scoring molecules.
Strengths: 1. The introduction of a novel approach to generate molecules and a scoring function specific to molecular data extraction offers a unique perspective on security concerns associated with pre-trained models.
2. The paper presents a sophisticated technical framework, including reinforcement learning to refine the search for molecular candidates.
3. The availability of the codebase and datasets enhances the reproducibility of the research and facilitates further investigation by the community.
Weaknesses: 1. The experiments might not fully capture the diversity and complexity of real-world datasets, which could affect the generalizability of the findings. I recommend conducting additional experiments using a wider variety of datasets to better evaluate the model's performance in real-world scenarios.
2. The scoring function is based on G, which is a linear combination of the representations of R and M. However, considering that the target model is treated as a black-box model, it is unclear whether there is a linear relationship between the representations of R and M obtained by the black-box model and the representation of G. To clarify this point, I suggest providing a thorough analysis of the relationship between the representations, as well as discussing any assumptions made in the scoring function.
Technical Quality: 3
Clarity: 2
Questions for Authors: See above
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['Ethics review needed: Safety and security']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Response to Reviewer o6iJ.
We sincerely thank reviewer o6iJ for detailed feedback and we address the reviewer’s concerns as follows.
>Q1: The experiments might not fully capture the diversity and complexity of real-world datasets, which could affect the generalizability of the findings. I recommend conducting additional experiments using a wider variety of datasets to better evaluate the model's performance in real-world scenarios.
Thank you for your valuable suggestions. We have further evaluated our approach on other real-world datasets. We utilized the PubChem dataset from [1]: we sampled 20,000 molecules from it to serve as an auxiliary dataset and employed the remaining molecules as the pre-training dataset. The specific results are as follows, and the performance on the PubChem dataset indicates the broad applicability of our method.
|~|One Step||||Two Step||||
|-|-|-|-|-|-|-|-|-|
|~|K = 50||K = 100||K = 100||K = 200||
|~|Prec.|FCD|Prec.|FCD|Prec.|FCD|Prec.|FCD|
|MLP|0.43|18.26|0.30|**16.75**|0.35|18.32|0.25|**16.52**|
|Ours|**0.46**|**17.29**|**0.37**|17.42|**0.42**|**16.54**|**0.34**|17.25|
>Q2: The scoring function is based on G, which is a linear combination of the representations of R and M. However, considering that the target model is treated as a black-box model, it is unclear whether there is a linear relationship between the representations of R and M obtained by the black-box model and the representation of G. To clarify this point, I suggest providing a thorough analysis of the relationship between the representations, as well as discussing any assumptions made in the scoring function.
Thank you for your valuable and insightful feedback. Here, we illustrate the rationale behind the scoring function from a theoretical perspective.
Assume that graph $G$ is composed as $G:=G_{1} \cup G_{2}$ and the graph pre-trained model is denoted by $f$. Let the loss function for the pre-training task be $\mathcal{L}$, which takes a graph representation as input. Further, we assume that $\mathcal{L}$ is a linear bijection. Without loss of generality, we assume that the loss function is a weighted sum, that is, $\mathcal{L}(f(G)) = \alpha_1 \mathcal{L}(f(G_1)) + \alpha_2 \mathcal{L}(f(G_2))$, where $\alpha_1$ and $\alpha_2$ serve as hyper-parameters (this assumption is common across various tasks; for instance, in the common case of cross-entropy for classification, $\alpha_1 = |G_1|/|G|$ and $\alpha_2 = |G_2|/|G|$).
We can infer that $f(G)=\mathcal{L}^{-1}(\alpha_1 \mathcal{L}(f(G_1)) + \alpha_2 \mathcal{L}(f(G_2)))=\alpha_{1}f(G_{1})+\alpha_{2}f(G_{2})$.
Given this theoretical analysis, we observe that the relationship between $f(G)$, $f(G_{1})$, and $f(G_{2})$ is akin to a weighted combination, which justifies the design of our score function.
Reference:
[1] Y. Wang, J. Wang, et al. "MolCLR: Molecular Contrastive Learning of Representations via Graph Neural Networks." Nat. Mach. Intell., 2022.
---
Rebuttal 2:
Title: Any unanswered questions yet?
Comment: Dear Reviewer o6iJ,
Thanks again for your detailed review and questions. We have provided detailed answers to them in the rebuttal. As we are approaching the end of the discussion phase, we would like to kindly ask if you still have any unanswered questions about our paper?
Best, Authors
---
Rebuttal Comment 2.1:
Comment: Thank you for the responses. And I have also read other reviewer's comments. The rebuttal has addressed some of my concerns. I have raised my rating accordingly. | Summary: This paper investigates the issue of data leakage risks when using pre-trained molecular models in a shared environment and proposes a method to extract training data from such pre-trained models. Specifically, the authors employ a molecular generation method based on templates and a candidate motif bank to attempt to generate potential training data. By designing a scoring function, the model can distinguish substructures that belong to the training data, and using a policy-based reinforcement learning method, the exploration space for generation is narrowed. Finally, the authors evaluated the method on various models trained using ZINC15 dataset.
Strengths: - The issue mentioned in this paper is important and currently overlooked in the community. I appreciate the authors for bringing up this problem and making an initial attempt to address it.
- The proposed method is innovative and satisfactory. Although it does not involve complex model structures, the problem is clearly defined, and the designed modules are targeted at solving the problem. Particularly, analyzing and understanding potential training data from the representation space and then generating based on certain rules make this modeling design interesting to me.
- The experiments selected various pre-training strategies, demonstrating the effectiveness of private data extraction in a cross-model scenario. The conclusions drawn by the authors indicate a significant risk of data leakage.
Weaknesses: While I recognize the research significance and model solution of this work, some design aspects could be improved, specifically:
- Regarding the selection of template structures, the paper ultimately chooses rings as the starting structures for subsequent generation/growth. However, a proportion of molecular data does not contain rings, so this heuristic choice of starting templates might not be the best strategy. The authors could consider constructing a template bank to avoid this issue.
- Are the 20k molecules used in the experiments on line 269 also obtained from ZINC15? Is this a strong iid assumption? I would like to see the effect of data extraction when the auxiliary dataset is replaced with molecules having significantly different distributions, for example, selecting some molecules from the PubChem dataset.
- The meaning of "step" in Table 1 seems unclear. Is it referring to the time-step mentioned earlier? If so, why only explore the case of one and two steps? What characteristics would molecules generated with more steps have? This seems to lack discussion and analysis.
- The author explores few model backbones, so I suggest the authors also conduct experimental evaluation on Graph Transformer or other architectures. And the pertaining data set is also limited, which would be more convincing if another one dataset was added.
Technical Quality: 3
Clarity: 3
Questions for Authors: A key metric in the experimental evaluation is Precision. How do you determine whether the generated molecules exist in the dataset? Since molecules lack unique identifiers and it is easy to consider two identical molecules as different mistakenly, I hope the authors can provide clearer details on how the presence of data is judged.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: Please refer to Weaknesses and Questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Response to Reviewer JnnP.
We greatly appreciate reviewer JnnP for the time and effort you have dedicated to reviewing our paper. We address the reviewer's concerns as follows:
>Q1: Regarding the selection of template structures, the paper ultimately chooses rings as the starting structures for subsequent generation/growth. However, a proportion of molecular data does not contain rings, so this heuristic choice of starting templates might not be the best strategy. The authors could consider constructing a template bank to avoid this issue.
Thanks for your valuable suggestion. We first clarify that the rationale behind choosing ring structures as starting templates is that rings are very common in chemical datasets. In the ZINC15 dataset of 2 million unlabeled molecules, we identified 1,990,890 molecules containing ring structures, constituting the majority of the dataset. Furthermore, the diversity of ring structures offers a wide range of options for template structures (such as tetrahydrofuran and cyclobutane). Therefore, we have implemented a template bank that incorporates the 84 most frequently occurring rings from the auxiliary dataset.
We also appreciate your suggestion to explore template structures that do not contain rings. We further incorporated five ring-free scaffold templates (i.e., alkanes) into the template bank. The performance is shown as follows. It can be observed that incorporating these ring-free templates leads to a slight improvement in performance.
|~|One Step||||Two Step||||
|-|-|-|-|-|-|-|-|-|
|~|K = 50||K = 100||K = 100||K = 200||
|~|Prec.|FCD|Prec.|FCD|Prec.|FCD|Prec.|FCD|
|Ours|0.50|19.22|0.35|**19.85**|0.31|23.57|0.51|**23.09**|
|Ours-ring|**0.51**|**19.11**|**0.36**|20.18|**0.37**|**23.27**|**0.52**|23.18|
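As an aside, the prevalence of rings can be estimated cheaply even without full cheminformatics tooling: in SMILES notation, ring bonds are written as ring-closure digits. The rough filter below is illustrative only (RDKit's ring perception is the proper tool); bracket atoms are masked first, since digits inside brackets denote isotopes or charges rather than rings.

```python
import re

def has_ring(smiles):
    # Ring bonds in SMILES appear as ring-closure digits (e.g. the "1"s
    # in c1ccccc1). Digits inside brackets ([13C], [NH3+]) are isotopes
    # or charges, so bracket atoms are replaced before checking.
    return bool(re.search(r"\d", re.sub(r"\[[^\]]*\]", "A", smiles)))

mols = ["CCO", "c1ccccc1", "C1CCOC1", "CC(C)C", "[13C]CO"]
ring_templates = [m for m in mols if has_ring(m)]  # benzene and THF only
```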
>Q2: Are the 20k molecules used in the experiments on line 269 also obtained from ZINC15? Is this a strong iid assumption? I would like to see the effect of data extraction when the auxiliary dataset is replaced with molecules having significantly different distributions, for example, selecting some molecules from the PubChem dataset.
We first clarify that the 20k molecules employed for the auxiliary dataset were also sourced from ZINC15. In Appendix A.3, we illustrated the overlap between the pre-training dataset and the auxiliary dataset: we found only 103 identical molecules, indicating a relatively small overlap ratio, which suggests the setting does not strongly adhere to the IID assumption.
Besides, following the reviewer's insightful suggestion, we sampled 20,000 molecules from PubChem [1] as the auxiliary dataset to ensure a difference from the ZINC pre-training dataset. The results based on the GraphCL pre-trained model are shown below. We observed that under this scenario the performance of our method declines slightly; however, it still demonstrates comparable efficacy and showcases robustness, and this observation aligns with what we presented in Appendix A.4.
|~|One Step||||Two Step||||
|-|-|-|-|-|-|-|-|-|
|~|K = 50||K = 100||K = 100||K = 200||
|~|Prec.|FCD|Prec.|FCD|Prec.|FCD|Prec.|FCD|
|Ours|0.50|19.22|**0.35**|19.85|**0.31**|23.57|**0.51**|**23.09**|
|Ours-PubChem|**0.52**|**18.52**|0.33|**17.90**|0.26|**20.57**|0.31|24.41|
>Q3: The meaning of "step" in Table 1 seems unclear. Is it referring to the time-step mentioned earlier? If so, why only explore the case of one and two steps? What characteristics would molecules generated with more steps have? This seems to lack discussion and analysis.
We apologize for not being clear and appreciate the opportunity to clarify. The "step" mentioned here refers to the time-step, as understood by the reviewer. The reason for selecting only one and two steps is that the baseline methods have difficulty managing molecular structures beyond two steps, due to the increase in complexity. Consequently, we confined our comparisons to one and two steps.
However, our reinforcement-learning-based approach can be extended to multiple steps. Here, we extend the time-step, with results as follows (we only take "Random" as the baseline, considering the runtime of the other baselines). It can be observed that as the time-step increases, performance may decline due to the increased difficulty of extraction caused by molecular complexity. Nevertheless, our model still outperforms the baseline in precision, indicating the effectiveness of our extraction method.
|~|One Step||Two Step||Three Step||Four Step||
|-|-|-|-|-|-|-|-|-|
|~|$K=50$|$K=100$|$K=100$|$K = 200$|$K=100$|$K = 200$|$K=100$|$K = 200$|
|Random|0.05|0.09|0.09|0.07|0.02|0.00|0.00|0.00|
|Ours|**0.50**|**0.35**|**0.31**|**0.52**|**0.27**|**0.25**|**0.17**|**0.18**|
Table: Precision of molecular extraction results across different time steps
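For clarity on how the precision numbers above are computed: precision is the fraction of the K extracted molecules that actually occur in the private pre-training set. A minimal sketch, assuming molecules are compared via some canonical string key (plain strings stand in here for a real canonicalization routine such as RDKit canonical SMILES):

```python
def extraction_precision(extracted, private_set, canonical=lambda s: s):
    # Fraction of extracted molecules whose canonical key appears in the
    # private training set. `canonical` stands in for a real
    # canonicalization routine; here it is the identity on strings.
    private_keys = {canonical(m) for m in private_set}
    hits = sum(1 for m in extracted if canonical(m) in private_keys)
    return hits / len(extracted)

extracted = ["CCO", "c1ccccc1", "CCN", "CCC"]   # K = 4 candidates
private = ["CCO", "CCC", "CNC", "c1ccco1"]      # hidden training set
prec = extraction_precision(extracted, private)  # 2 of 4 hits -> 0.5
```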
---
Rebuttal 2:
Title: Additional Rebuttal
Comment: >Q4: The author explores few model backbones, so I suggest the authors also conduct experimental evaluation on Graph Transformer or other architectures. And the pertaining data set is also limited, which would be more convincing if another one dataset was added.
Sorry for not explaining this clearly. In our experiments, we employed various encoder architectures for molecular pre-trained models. The Grover pre-trained model included in Table 1 is based on GTransformer [2], a Graph Transformer. Its performance demonstrates the effectiveness of our model across various pre-trained model architectures.
As for the limited pre-training dataset, we further introduced a new one. We employed the PubChem dataset used in [1]: we sampled 20k molecules from it to serve as the auxiliary dataset and employed the remaining molecules as the pre-training dataset. The specific results are as follows, and the performance on the PubChem dataset indicates the broad applicability of our method.
|~|One Step||||Two Step||||
|-|-|-|-|-|-|-|-|-|
|~|K = 50||K = 100||K = 100||K = 200||
|~|Prec.|FCD|Prec.|FCD|Prec.|FCD|Prec.|FCD|
|MLP|0.43|18.26|0.30|**16.75**|0.35|18.32|0.25|**16.52**|
|Ours|**0.46**|**17.29**|**0.37**|17.42|**0.42**|**16.54**|**0.34**|17.25|
>Q5: A key metric in the experimental evaluation is Precision. How do you determine whether the generated molecules exist in the dataset? Since molecules lack unique identifiers and it is easy to consider two identical molecules as different mistakenly, I hope the authors can provide clearer details on how the presence of data is judged.
Sorry for not explaining this clearly. We utilized the RDKit library to determine molecular identity [3]. Specifically, it assesses whether there exists an atom mapping such that each atom in the query molecule can be paired with a corresponding atom in the target molecule, with the same connectivity as in the query. A detailed explanation will be included in the revised version.
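Conceptually, this identity test is a graph-isomorphism check over element-labeled atoms and bonds. The toy brute-force version below is illustrative only (exponential in molecule size; RDKit does this efficiently via canonical atom ranking):

```python
from itertools import permutations

def same_molecule(atoms1, bonds1, atoms2, bonds2):
    # atoms: list of element symbols; bonds: set of frozensets of atom
    # indices. Returns True iff some relabeling of molecule 1 matches
    # molecule 2 with identical elements and connectivity.
    if sorted(atoms1) != sorted(atoms2):
        return False
    n = len(atoms1)
    for perm in permutations(range(n)):
        if all(atoms1[i] == atoms2[perm[i]] for i in range(n)) and \
           {frozenset(perm[i] for i in b) for b in bonds1} == bonds2:
            return True
    return False

# Ethanol's heavy-atom skeleton (C-C-O) with two different atom orderings.
m1 = (["C", "C", "O"], {frozenset({0, 1}), frozenset({1, 2})})
m2 = (["O", "C", "C"], {frozenset({0, 1}), frozenset({1, 2})})
# Dimethyl ether's skeleton (C-O-C): same atoms, different connectivity.
ether = (["C", "O", "C"], {frozenset({0, 1}), frozenset({1, 2})})
```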
Reference:
[1] Y. Wang, J. Wang, et al. "MolCLR: Molecular Contrastive Learning of Representations via Graph Neural Networks." Nat. Mach. Intell., 2022.
[2] Rong, Yu, et al. "Self-supervised graph transformer on large-scale molecular data." NeurIPS, 2020.
[3] Landrum, Greg. "RDKit: A software suite for cheminformatics, computational chemistry, and predictive modeling." 2013.
---
Rebuttal 3:
Title: Any unanswered questions yet?
Comment: Dear Reviewer JnnP,
Thanks again for your detailed review and questions. We have provided detailed answers to them in the rebuttal. As we are approaching the end of the discussion phase, we would like to kindly ask if you still have any unanswered questions about our paper?
Best, Authors
---
Rebuttal Comment 3.1:
Comment: Thanks for the response. I've read through other reviewers' feedback and responses as well. I believe this work holds practical value, so I have provided continued support. Although, as other reviewers have mentioned, the method itself may not be particularly fancy, I think it offers a new perspective and a feasible solution for safely applying molecular pretraining models in collaborative scenarios.
Therefore, I will increase my rating to `7`.
Best regards,
Reviewer JnnP | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
On Statistical Rates and Provably Efficient Criteria of Latent Diffusion Transformers (DiTs) | Accept (poster) | Summary: In this paper, the authors explore the statistical and computational limits of latent Diffusion Transformers (DiTs) under the assumption of a low-dimensional linear latent space. Their contributions include an approximation error bound for the DiTs score function, which is sub-linear in the latent space dimension, as well as a sample complexity bound demonstrating convergence of the data distribution generated from the estimated score function. Additionally, they identify efficient criteria for forward inference and backward computation, achieving almost-linear time complexity for both processes.
Strengths: - The authors derived approximation error bounds for transformer-based score estimators in latent DiTs, providing practical structural guidance.
- The paper also provided sample complexity bounds for score estimation and demonstrated recovery of initial data distribution.
- The authors provided efficiency criteria and characterized efficient algorithms for latent DiTs, including almost-linear time algorithms for both forward inference and training.
Weaknesses: - I have reviewed Appendix C. How does the proof for DiTs in this paper differ from that presented in [Chen et al., 2023a]? The assumption of a low-dimensional linear latent space in the context of DiTs is somewhat unclear to me. Is this assumption widely accepted within the DiTs community, or has it been adopted primarily to simplify technical analysis and proofs? Additionally, is there potential for the proof to be generalized beyond this assumption? My concerns are further reinforced by the nontrivial gap suggested in Corollary 3.1.1.
Technical Quality: 3
Clarity: 2
Questions for Authors: See above.
Confidence: 1
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: See above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Reviewer's Question 1:** I have reviewed Appendix C. How does the proof for DiTs in this paper differ from that presented in [Chen et al., 2023a]?
**Response:**
Thanks for the question. Here are some clarifications.
* To prove **score approximation** (Theorem 3.1), our approach utilizes the universal approximation of the Transformer network, while the work [Chen23] relies on the approximation capabilities of the ReLU neural network.
* To prove **score estimation** (Corollary 3.1.1) and distribution estimation (Corollary 3.1.2), our work uses the covering number of the Transformer network, while the work [Chen23] uses the covering number of the ReLU neural network.
* Our work provides the analysis of the **computational limits** for all possible efficient DiT algorithms/methods for both forward inference and backward training under the strong exponential time hypothesis, while the work [Chen23] does not consider the computational limit analysis.
> **Reviewer's Question 2:** The assumption of a low-dimensional linear latent space in the context of DiTs is somewhat unclear to me. Is this assumption widely accepted within the DiTs community, or has it been adopted primarily to simplify technical analysis and proofs?
**Response:**
Thank you for your question. Let us provide some additional details.
The assumption is widely accepted within the DiTs community. In practice, existing works [Peebles23, Ma24] use the DiTs with an autoencoder to compress input data into a low-dimensional latent space, where the autoencoder can be a simple linear layer. This aligns with the low-dimensional linear latent space assumption.
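As a loose illustration of this assumption (our sketch, not the paper's code; the dimensions are arbitrary), a column-orthonormal matrix $B$ plays the role of the linear "autoencoder": projecting by $B^\top$ compresses the data, and multiplying by $B$ recovers it exactly when the data lies in the subspace:

```python
import numpy as np

rng = np.random.default_rng(0)
D, d0, n = 64, 8, 500            # ambient dim, latent dim, sample count

# Lift latent codes with a column-orthonormal B (Assumption 2.1 style).
B, _ = np.linalg.qr(rng.standard_normal((D, d0)))  # B has orthonormal columns
H = rng.standard_normal((n, d0))                   # latent variables h
X = H @ B.T                                        # data on a d0-dim subspace

# The "linear autoencoder" is projection by B^T followed by lifting by B.
X_rec = (X @ B) @ B.T
print(np.allclose(X, X_rec))     # True: a single linear layer suffices
```

Since $B^\top B = I_{d_0}$, compression followed by lifting is lossless on the subspace, which is why a simple linear layer aligns with the assumption.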
> **Reviewer's Question 3:** Additionally, is there potential for the proof to be generalized beyond this assumption? My concerns are further reinforced by the nontrivial gap suggested in Corollary 3.1.1.
**Response:**
We are grateful for your question. Here is some further explanation.
Yes. We can generalize our proof beyond the low-dimensional linear latent space assumption by setting the matrix $B$ in Assumption 2.1 as an identity matrix. However, the linear subspace assumption leads to a more robust conclusion, suggesting that the latent DiTs have the potential to bypass the challenges associated with the high dimensionality of initial data.
---
We hope these points address the reviewer's questions.
We're open to any further questions or clarifications you might have about our work. Thank you!
===
* [Chen23] Minshuo Chen, Kaixuan Huang, Tuo Zhao, and Mengdi Wang. Score approximation, estimation and distribution recovery of diffusion models on low-dimensional data. In International Conference on Machine Learning (ICML), 2023.
* [Peebles23] William Peebles and Saining Xie. Scalable diffusion models with transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023.
* [Ma24] Xin Ma, Yaohui Wang, Gengyun Jia, Xinyuan Chen, Ziwei Liu, Yuan-Fang Li, Cunjian Chen, and Yu Qiao. Latte: Latent diffusion transformer for video generation. arXiv preprint arXiv:2401.03048, 2024. | Summary: This paper explores the statistical and computational limits of latent DiTs:
1. It proves that Transformers are sufficient as universal approximators for the score function in DiTs, with their approximation capacity depending on the latent dimension.
2. Transformer-based score estimators converge to the true score function, indicating that the Transformer architecture is adequate for estimating the original data distribution.
3. It provides provably efficient criteria to demonstrate the existence of almost-linear time algorithms for forward inference and backward computation, offering a theoretical basis for the efficient training and inference of DiTs.
Strengths: The advantages of the paper are as follows:
1. This paper derives an approximation error bound for the score network that is sub-linear in the latent space dimension. This finding not only explains the expressiveness of latent DiTs (under mild assumptions) but also offers guidance for the structural configuration of the score network in practical implementations.
2. This paper proves that the learned score estimator is able to recover the initial data distribution, which provides a theoretical basis for the feasibility of using neural networks to estimate the score.
3. This paper proves the existence of almost-linear time DiT training algorithms for forward inference and backward computation, providing a theoretical foundation for efficient training and inference.
4. All statistical and computational results are analyzed in a low-dimensional subspace, demonstrating the feasibility of using VAE for dimensionality reduction in latent DiTs.
Weaknesses: The weaknesses of the paper are as follows:
1. It is meaningful to further elucidate why existing DiTs models are difficult to train.
2. How will these theoretical proofs contribute to the exploration of fast training and sampling algorithms?
Technical Quality: 4
Clarity: 1
Questions for Authors: see weaknesses.
Confidence: 1
Soundness: 4
Presentation: 1
Contribution: 3
Limitations: The paper has outlined its limitations and broader impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Reviewer's Comment 1:** It is meaningful to further elucidate why existing DiTs models are difficult to train.
**Response:**
Thanks for your comment. We'd like to clarify a few points.
This relates to the high-dimensional latent data representation, which increases both approximation and estimation errors. To demonstrate this, we can generalize our proof to the setting without the low-dimensional linear latent space assumption by using an identity matrix for matrix $B$ in Assumption 2.1, implying a large $d_0$. According to Theorem 3.1, and the proof details of Corollaries 3.1.1 and 3.1.2 (pages 48 and 52), we see that the errors in score approximation, score estimation, and distribution estimation all depend on $d_0$. A larger $d_0$ leads to greater errors.
> **Reviewer's Question 1:** How will these theoretical proofs contribute to the exploration of fast training and sampling algorithms?
**Response:**
Thanks for the question.
In essence, our hardness results provide necessary conditions for designing efficient methods:
* The latent dim should be small enough $d=O(\log L)$ (Thm 4.1 & Prop 4.1, 4.2)
* Normalization of $K,Q,V$ in DiT attention heads is beneficial for performance and efficiency. For example:
* For efficient inference: $\max\{\|W_K A_1\|,\|W_Q A_2\|,\|W_{OV} A_3\|\}\le B$ with $B=o(\sqrt{\log L})$ (Prop 4.2)
* For efficient training: $\max\{\|W_K A_1\|,\|W_Q A_2\|,\|W_{OV} A_3\|\}\le \Gamma$ with $\Gamma=o(\sqrt{\log L})$ (Thm 4.1)
We want to emphasize that these conditions are necessary but not sufficient. Sufficient conditions should depend on the detailed designs of specific methods.
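As a hypothetical illustration of checking these necessary conditions (our own sketch: the slack constant `c` stands in for the asymptotic $o(\sqrt{\log L})$ bound, and the $A_i$ matrices are folded into the weights, neither of which is the paper's exact criterion):

```python
import numpy as np

def meets_norm_criterion(W_K, W_Q, W_OV, L, c=1.0):
    """Check the necessary norm-bound condition max ||W|| <= c * sqrt(log L).

    Spectral (operator 2-) norms are used; c is an illustrative slack
    constant approximating the o(sqrt(log L)) asymptotic requirement.
    """
    bound = c * np.sqrt(np.log(L))
    norms = [np.linalg.norm(W, ord=2) for W in (W_K, W_Q, W_OV)]
    return max(norms) <= bound

rng = np.random.default_rng(0)
L, d = 1024, 8
small = [0.1 * np.eye(d) for _ in range(3)]    # well-normalized weights
large = [10.0 * np.eye(d) for _ in range(3)]   # norms blow past the bound
print(meets_norm_criterion(*small, L=L))  # True
print(meets_norm_criterion(*large, L=L))  # False
```

Passing this check is necessary but not sufficient for efficiency, mirroring the caveat above.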
We hope these points address the reviewer's concerns. We have revised the latest version of our paper accordingly.
We're open to any further questions or clarifications you might have about our work. Thank you!
---
Rebuttal 2:
Comment: Thank you for your response.
In summary, the paper is overly theoretical, and I would appreciate it if it could include some toy examples. For instance, it could demonstrate improvements in DiTs by adhering to the design principles for d and QKV you mentioned in R2, and show a certain performance boost as a result. I would be glad to see such experiments incorporated into the paper.
Considering the limited evaluation as a criterion for scoring, I will deduct one point. I look forward to further discussions that can alleviate my concerns.
---
Rebuttal Comment 2.1:
Title: Why no experiments? It is uncommon to accompany computational hardness results with experiments
Comment: Thanks for your feedback.
We’d like to remind the reviewer that computational hardness results (provable criteria) are “there exists” types of results. It is widely accepted that such results neither require nor are meaningfully supported by specific experiments, for the following reasons:
* **General Applicability:** These results are designed to be general and widely applicable, not specific to particular datasets or experimental setups.
* **Purpose:** The purpose of universality/hardness results is to show the limits of what is feasible. This makes any empirical experiment vacuous, incomplete, and hence unnecessary to establish them.
Please refer to standard ML/TCS material for more details, for example, [1] from CMU for the nature of such fundamental limits and why they make empirical validation redundant.
[1] "A Theorist's Toolkit," Lecture 24: Hardness Assumptions, CMU lecture notes, 2013.
---
### **Experiment Sketch**
If you would like a toy example, we can sketch two well-known methods that meet our efficiency criteria (assuming you are referring to Prop 4.1) when norm-bounded conditions are satisfied:
* DiTs using alternative attention like Performer (random feature transformer) [Choromanski20] can achieve subquadratic time computation under norm-bounded conditions.
* DiTs using alternative attention like linear attention [Katharopoulos20] can also achieve subquadratic time computation under norm-bounded conditions.
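To make the second sketch concrete, here is a minimal NumPy version of linear attention in the style of [Katharopoulos20] (our own illustration, not the authors' code); it never materializes the $L \times L$ score matrix, which is the source of its subquadratic cost:

```python
import numpy as np

def linear_attention(Q, K, V, eps=1e-6):
    """O(L * d^2) attention via the kernel trick phi(q) . (phi(k)^T V).

    phi = elu(x) + 1 as in Katharopoulos et al. (2020); no L x L score
    matrix is ever formed.
    """
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))  # elu(x) + 1
    Qp, Kp = phi(Q), phi(K)
    KV = Kp.T @ V                    # (d, d_v) summary, cost independent of L^2
    Z = Qp @ Kp.sum(axis=0) + eps    # (L,) normalizers
    return (Qp @ KV) / Z[:, None]

rng = np.random.default_rng(0)
L, d = 256, 16
Q, K, V = (rng.standard_normal((L, d)) for _ in range(3))
out = linear_attention(Q, K, V)
print(out.shape)  # (256, 16)
```

Reassociating $(\phi(Q)\phi(K)^\top)V$ into $\phi(Q)(\phi(K)^\top V)$ is the entire trick: the result is mathematically identical, but the intermediate is $d \times d_v$ rather than $L \times L$.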
From these examples, it can be easily observed that:
* A large temperature parameter $\beta$ will hurt performance, even though these variants remain efficient. Note that a large $\beta$ corresponds to the low-temperature region of attention, which is generally known to perform poorly in practice. Thus, efficiency is maintained, but performance suffers when norm-bounded conditions are violated.
This aligns with our theory because $\beta$ scales the norms of $K$, $Q$, and $V$ beyond our norm-bounded conditions.
Although not necessary given their self-explanatory nature, we can still include these toy experiments in the final version for completeness. We hope this clarifies everything. Thank you for your time!
---
[Choromanski20] Choromanski, Krzysztof, et al. "Rethinking attention with performers." arXiv preprint arXiv:2009.14794 (2020).
[Katharopoulos20] Katharopoulos, Angelos, et al. "Transformers are rnns: Fast autoregressive transformers with linear attention." ICML, 2020. | Summary: The paper studies the statistical and computational limits of latent diffusion transformers.
Strengths: The results seem to be new and non-trivial (though I'm not an expert in the field, so I might have a wrong impression).
Weaknesses: The paper is too technical (e.g., there are many long formal definitions), and the results are hard to understand for a non-expert in the area. The formulations of the theorems contain a lot of parameters, which makes them very hard to read. I suggest that you write down informal versions that would be clear and would illustrate the results (with a reference to the formal versions in the appendix).
In addition, while the comparison with prior works is present, it is unclear to me and seems to be imprecise. It would be nice to see something like (for example) "we have this error bound, while all prior works [..., ...] have asymptotically worse errors".
UPDATE: Increased the score and the confidence after the rebuttal.
Technical Quality: 3
Clarity: 1
Questions for Authors: Could you write here some informal versions of the theorems (or some simple corollaries) and the clear comparison with prior works? I'll be happy to increase the score if I see that the results are better than the state of the art.
Confidence: 2
Soundness: 3
Presentation: 1
Contribution: 2
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Reviewer's Comment 1:** The paper is too technical (e.g. there are many long formal definitions), and the results are hard to understand for a non-expert in the area. The formulations of the theorems contain a lot of parameters, it makes them very hard to read. I suggest that you may write down informal versions that would be clear and would illustrate the result (with a reference to the formal version in the appendix).
> **Reviewer's Question 1:** Could you write here some informal versions of the theorems (or some simple corollaries) and the clear comparison with prior works? I'll be happy to increase the score if I see that the results are better than the state of the art.
**Response:**
Thanks for the suggestion. **We agree that adding informal versions of our results will be beneficial for the general audience. We have added them in the introduction accordingly**, which are quoted as follows:
* > **Theorem 1.1 (Informal Version of Theorem 3.1).** For any approximation error $\epsilon>0$ and any initial data distribution $P_0$ under Assumptions 2.1 to 2.3, there exists a DiT score network such that for any time $t\in [T_0,T]$ ($T$ is the stopping time in forward process, and $T_0$ is the early stopping time in backward process), the upper bound of the approximation error is
$\epsilon \cdot \sqrt{d_0}/\sigma(t)$,
where $\sigma(t)=1-e^{-t}$.
* > **Corollary 1.1.1 (Informal Version of Corollary 3.1.1).** Under Assumptions 2.1 to 2.3 and using $\epsilon \in (0,1)$, we choose the characterized structural configuration of the DiT score network from the proof details of Theorem 1.1. Let $n$ denote the sample size; then with probability $1-1/\mathrm{poly}(n)$, the sample complexity for score estimation is
$\mathcal{O}(\xi(n, \epsilon))$,
where $\xi(n, \epsilon)=C_1 \cdot 2^{\epsilon^{-C_2}} \cdot n^{-0.5}+C_3 \cdot \epsilon^2+n^{-1}$, and $C_1, C_2, C_3$ are positive constants.
* > **Corollary 1.1.2 (Informal Version of Corollary 3.1.2).** With the estimated DiT score network in Corollary 1.1.1, we denote $\hat{P}_{T_0}$ as the generated distribution at time $T_0$. We have the following with probability $1-1/\mathrm{poly}(n)$.
> 1. The distribution estimation error of $\hat{P}_{T_0}$ in the latent subspace (see Assumption 2.1) is $\mathcal{O}(\sqrt{\xi(n, \epsilon)})$.
> 2. The estimated distribution $\hat{P}_{T_0}$ in the orthogonal subspace degenerates to a point mass at origin as $T_0 \rightarrow 0$.
* > **Theorem 1.2 (Informal Version of Theorem 4.1).** Assuming SETH and all numerical values are in $O(\log L)$ encoding, there exists an algorithm for approximating gradient computation of optimizing DiT loss up to $1/\mathrm{poly}(L)$ accuracy that runs in $L^{1+o(1)}$ time, if certain norm bound conditions are satisfied.
* > **Proposition 1.1 (Informal Version of Proposition 4.1).** Assuming SETH, the existence of sub-quadratic time algorithm for approximating DiT inference depends on the norm bounds of $K,Q,V$ of the attention heads in DiT.
* > **Proposition 1.2 (Informal Version of Proposition 4.2).** Assuming SETH, $L^{1+o(1)}$ time DiT inference is possible.
---
> **Reviewer's Comment 2:** In addition, while the comparison with prior works is present, it is unclear to me and seems to be imprecise. It would be nice to see something like (for example) "we have this error bound, while all prior works [..., ...] have asymptotically worse errors".
**Response:**
Thanks for the comment. We acknowledge the current draft is not precise enough and have made modifications accordingly. We quote them as follows:
* `line 179` After Theorem 3.1:
> **Remark 3.1 (Comparing with Existing Works.)** We are the first to prove the score approximation capability of DiT, while the prior theoretical works about DiT [Benton24, Wibisono24] merely assume that the score function is well approximated.
* `line 200` After Corollary 3.1.1:
> **Remark 3.2 (Comparing with Existing Works.)** We are the first to provide a sample complexity for DiT, while the prior theoretical works on the sample complexity of diffusion models [Zhu23, Chen23] only focus on ReLU-based diffusion models and do not include the attention mechanism.
* `line 223` After Corollary 3.1.2:
> **Remark 3.5 (Comparing with Existing Works.)** We are the first to provide a distribution estimation for DiT, incorporating the tail behavior assumption of the latent variable distribution (Assumption 2.2). However, the prior work [Oko23] does not address DiT and relies on assumptions about the initial data distribution that are far from empirical realities.
**These are all first-of-their-kind DiT results (i.e., the state of the art).**
For computational limits, since ours is the first such analysis of DiT, there is no prior work to compare with.
---
We hope the revisions and clarifications provided in this response address the reviewer's concerns.
We look forward to further feedback and discussion. Thank you for your time and effort!
===
* [Benton24] Joe Benton, Valentin De Bortoli, Arnaud Doucet, and George Deligiannidis. Nearly d-linear convergence bounds for diffusion models via stochastic localization. In The Twelfth International Conference on Learning Representations (ICLR), 2024.
* [Wibisono24] Andre Wibisono, Yihong Wu, and Kaylee Yingxi Yang. Optimal score estimation via empirical bayes smoothing. arXiv preprint arXiv:2402.07747, 2024.
* [Zhu23] Zhenyu Zhu, Francesco Locatello, and Volkan Cevher. Sample complexity bounds for score-matching: Causal discovery and generative modeling. Advances in Neural Information Processing Systems (NeurIPS), 2023.
* [Chen23] Minshuo Chen, Kaixuan Huang, Tuo Zhao, and Mengdi Wang. Score approximation, estimation and distribution recovery of diffusion models on low-dimensional data. In International Conference on Machine Learning (ICML), 2023.
* [Oko23] Kazusato Oko, Shunta Akiyama, and Taiji Suzuki. Diffusion models are minimax optimal distribution estimators. In International Conference on Machine Learning (ICML), 2023.
---
Rebuttal 2:
Comment: Dear Reviewer jLPK,
As the discussion period coming to its end, we want to check if our rebuttal has addressed your concerns.
Please let us know if you have any further questions or need clarification. Thank you!
Best regards,
Authors | Summary: This paper establishes the statistical rates and provably efficient criteria of Latent Diffusion Transformers (DiTs). Specifically, there are three main theoretical results:
* the approximation error bound for the transformer-based score estimator,
* the sample complexity bound for score estimation,
* and the provably efficient criteria for latent DiTs in both forward inference and backward training.
These results closely rely on the low-dimensional linear subspace assumption on input data.
Strengths: 1. Given the popularity of generative AI these days, the theoretical understanding of DiTs are of great interest to the community.
2. Most of the concepts and the results are clearly presented.
Weaknesses: 1. The practical insights of the provably efficient criteria (Section 4) are relatively scarce.
2. A few noticeable typos were encountered in the paper. For example,
* At line 120 on page 3, 'knonw' should be corrected to 'known' in the sentence 'This is also knonw as ...';
* At line 242 on page 7, there is an extra 'full'.
3. Including simulation results on error bounds against related parameters could enhance the credibility and reliability of theoretical results.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Can the linear subspace assumption on input data be relaxed to a more general one, such as the manifold data? Since it's more natural to consider the intrinsic geometric structures of data.
2. One of the backgrounds of this work comes from the quadratic complexity of transformer blocks with respect to sequence length. While in literature, many alternative attention mechanisms that have a linear complexity in sequence length have been proposed (though not in the context of diffusion). I'm wondering whether it is possible to evaluate these methods' efficiency in your framework.
3. In the right-hand side of equation (3.1), it seems that the second term does not involve $n$ and thus will not go to zero even when $n$ goes to infinity. Does it mean that the estimation can never be consistent?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations of this work have been stated in Section 5 and are left for future work as claimed by the authors.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Reviewer's Question 1:** Can the linear subspace assumption on input data be relaxed to a more general one, such as the manifold data? Since it's more natural to consider the intrinsic geometric structures of data.
**Response:**
Thanks for the question. Yes, but not trivial. Here are some clarifications:
* If we consider that the initial data lies on a manifold, without the subspace assumption, we can relax our proof to the manifold data by using an identity matrix for matrix $B$ in Assumption 2.1.
* If we consider that the low-dimensional latent representation lies on a manifold, with the subspace assumption, the price to pay is that we no longer have the score function decomposition (Lemma 2.1). We need new techniques to characterize the rates.
In addition, the linear subspace assumption yields stronger results. For example, [Chen23], with the subspace assumption, obtained a sharper rate than [Oko23] on both score estimation and distribution estimation.
> **Reviewer's Question 2:** One of the backgrounds of this work comes from the quadratic complexity of transformer blocks with respect to sequence length. While in literature, many alternative attention mechanisms that have a linear complexity in sequence length have been proposed (though not in the context of diffusion). I'm wondering whether it is possible to evaluate these methods' efficiency in your framework.
**Response:**
Thanks for the question. We would like to clarify a few points.
Yes.
For the statistical limits (approximation and estimation theory), we can generalize our proof by deriving the universal approximation capability and the covering number of the attention mechanism with linear complexity.
For the computational limits, these alternative attention variants (efficient or not) have already fallen into our framework. This is because we consider “all possible” algorithms in **Problem 1**. That is, any attempt to compute the DiT gradient faster is considered in our analysis. This includes using different attention mechanisms as score estimators.
> **Reviewer's Question 3:** In the right-hand side of equation (3.1), it seems that the second term does not involve $n$ and thus will not go to zero even when $n$ goes to infinity. Does it mean that the estimation can never be consistent?
**Response:**
We appreciate your question. Here are the clarifications.
Yes. This stems from the double exponential factor $2^{\epsilon^{-2L}}$ mentioned in Remark 3.4, where $\epsilon$ denotes any given approximation error. We plan to explore the possibilities noted in Remark 3.4 to avoid the double exponential factor. By doing so, we can choose $\epsilon$ as a function of the variable $n$ to balance the first and second terms on the right-hand side of the equation (3.1). This approach is expected to lead to consistent estimation.
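As a rough back-of-the-envelope sketch of this balancing (ours, not the authors'; $c$ stands in for the exponent in the double exponential factor), one consistent choice of $\epsilon_n$ is:

```latex
% Choose \epsilon as a function of n so the double exponential factor
% 2^{\epsilon^{-c}} grows only polynomially in n:
2^{\epsilon_n^{-c}} = n^{1/4}
\;\Longleftrightarrow\;
\epsilon_n = \Big(\tfrac{\log_2 n}{4}\Big)^{-1/c} \xrightarrow[n\to\infty]{} 0.
% The three terms of (3.1) then become
C_1 \cdot 2^{\epsilon_n^{-c}} \cdot n^{-1/2} = C_1\, n^{-1/4},
\qquad
C_3 \cdot \epsilon_n^{2} = C_3 \Big(\tfrac{\log_2 n}{4}\Big)^{-2/c},
\qquad
n^{-1},
% all of which vanish as n -> infinity, yielding consistency.
```

Under this hypothetical schedule all three terms tend to zero, illustrating how coupling $\epsilon$ to $n$ could restore consistency.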
---
> **Reviewer's Comment 1:** The practical insights of the provably efficient criteria (Section 4) are relatively scarce.
**Response:**
Thanks for your comment. We acknowledge the importance of enhancing the applicability of our findings. Here are some clarifications.
Our provably efficient criteria offer some insights for the design of more efficient methods. Specifically, we demonstrate that:
* The latent dim should be small enough $d=O(\log L)$ (Thm 4.1 & Prop 4.1, 4.2)
* Normalization of $K,Q,V$ in DiT attention heads is beneficial for performance and efficiency. For example:
* For efficient inference: $\max\{\|W_K A_1\|,\|W_Q A_2\|,\|W_{OV} A_3\|\}\le B$ with $B=o(\sqrt{\log L})$ (Prop 4.2)
* For efficient training: $\max\{\|W_K A_1\|,\|W_Q A_2\|,\|W_{OV} A_3\|\}\le \Gamma$ with $\Gamma=o(\sqrt{\log L})$ (Thm 4.1)
We want to emphasize that these conditions are necessary but not sufficient. Sufficient conditions should depend on the detailed designs of specific methods.
We hope these points address your concern and enhance the utility of our criteria in practical settings.
> **Reviewer's Comment 2:** A few noticeable typos were encountered in the paper.
**Response:**
Thanks for the comment. We apologize for the typos. In response, we have conducted 3 more rounds of proofreading and fixed all typos identified in our latest version. This includes:
* `line 109`, we corrected “denosing” to “denoising”.
* `line 120`, we corrected “knonw” to “known”.
* `line 155`, we corrected “a” to “an”.
* `line 192`, we corrected “an” to “a”.
* `line 228`, we corrected “depend” to “depends”.
* `line 242`, we deleted the extra “full”.
* `line 253`, we corrected “analyze” to “analyzing”.
* `line 303`, we corrected “exists” to “exist”.
* `line 320`, we corrected “motivate” to “motivates”.
* `line 322`, we corrected “do” to “does”.
* `line 954`, we corrected “subset” to “subsets”.
* `line 977`, we corrected “part” to “parts”.
* `line 989`, we corrected “a” to “an”.
* `line 1168`, we added “at” after “arrive”.
Your feedback is greatly appreciated and has been essential in enhancing the clarity and readability of our paper. Thank you for your help.
---
We hope the revisions and clarifications provided in this response address the reviewer's concerns.
We welcome additional feedback and look forward to further discussions. Thank you for your time and valuable inputs!
---
* [Chen23] Minshuo Chen, Kaixuan Huang, Tuo Zhao, and Mengdi Wang. Score approximation, estimation and distribution recovery of diffusion models on low-dimensional data. In International Conference on Machine Learning (ICML), 2023.
* [Oko23] Kazusato Oko, Shunta Akiyama, and Taiji Suzuki. Diffusion models are minimax optimal distribution estimators. In International Conference on Machine Learning (ICML), 2023. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Are Multiple Instance Learning Algorithms Learnable for Instances? | Accept (poster) | Summary: This paper aims to discuss the theoretical learnability of multi-instance learning. It first gives the necessary conditions for bag-level learnability and instance-level learnability. Then it discusses how the results can be used to verify the learnability of existing MIL algorithms. Finally, some empirical experiments are adopted to validate the theoretical findings.
Strengths: This paper discusses the theoretical aspects of multi-instance learning (MIL). As MIL has attracted increased interest for applications in medical image analysis and time series analysis, it is nice to have some time to pause the development of heuristic algorithms and think about the theoretical foundations.
Weaknesses: 1. Many statements are not rigorous, which significantly reduces readability. For example, in Condition 4 the text states that the optimal hypothesis for the independent instance domain must equal the sum of hypotheses for the individual instances. However, the formula is about the risks, and the two are not equivalent. In Theorem 4, it just says "the MIL algorithm" without specifying which algorithm or the defining characteristics of the algorithm, which is not rigorous.
2. For the general bag domain discussed in the paper, it is not the multi-instance bags that allow dependency among instances as commonly discussed and adopted by previous literature. According to the paper, this is essentially a weighted sum of the instances. It is better to claim this as such, instead of claiming it as a "general" bag domain, which it is not.
3. The categorization of pooling methods, as discussed in the appendix, is not consistent with how they are used in the referenced paper. See questions for details.
4. The empirical validation is somewhat lacking. Although this is mainly a theoretical discussion, I don't think the empirical are well designed to support the theoretical claims.
Technical Quality: 2
Clarity: 1
Questions for Authors: 1. In Section A.2, the attention pooling is different from Ilse et al., 2018, which is the attention-based MIL that most follow-up works are based on. Instead, additive pooling is actually the attention mechanism in Ilse et al., 2018. Why?
2. I am not very clear about the multi-dimension MIL discussed briefly in 4.3.2. Can you explain this sentence "For instance, in video data, each frame is composed of patches in multi-dimensional structures for each frame dimension."?
3. In Table 2, the title is validation of theorem 4, but in the text it says validating theorem 5?
4. Also in Table 2, do you have results for more recently proposed instance pooling approach, such as [21]? This also applies to the results in Table 3.
Confidence: 4
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: Some limitations are discussed in the appendix. However, it seems to contain some grammatical mistakes which hinder understanding. For example, what is the meaning of this sentence: "to address the problem that current MD-MIL algorithms [6, 20] extend non-learnable algorithms for instances to multidimensional cases"?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We will address their feedback in the following:
**(W1)**
- **Condition 4**: The interpretation provided in Condition 4 aligns with PAC Learning theory. Specifically, $\inf R$ denotes the infimum of the risk within the given hypothesis space, representing the optimal hypothesis's risk. Thus, Eq. (9) refers to the risk of the optimal hypothesis concerning $D^{Ind}_{XY}$, consistent with PAC learning theory as seen in previous works [1, 2].
- **Theorem 4**: All theorems in this paper, including Theorem 4, are proven under Assumption 1, which states that any MIL algorithm must be learnable at the instance level. Consequently, Theorem 4 applies to all MIL algorithms satisfying Assumption 1.
**(W2)**
- Many previous studies assume all instances are independent. However, recent studies such as [3-6] consider relationships between instances, improving prediction performance. These relationships are common in Deep MIL, particularly in practical environments like images, natural language, and time series. Therefore, our study treats the Independent bag domain space $D_{XY}^{Ind}$ as a special case and uses the term ‘General’ to define a space including $D_{XY}^{Ind}$.
- To reduce misunderstandings, we propose defining the Dependent Bag Domain Space $D_{XY}^{Dep}$, which includes relationships between instances, and defining the General Bag Domain Space as $D_{XY}^{Gen} = D_{XY}^{Ind} \cup D_{XY}^{Dep}$.
**(W4)**
- To enhance empirical support for our theoretical framework, we added comparisons between theoretical applications and empirical results for various Deep MIL algorithms. The additional experimental results are presented in Tables A and B of the Global Rebuttal. Please refer to item 1 in the Global Rebuttal for details.
- Additionally, we conducted experiments assuming an MD-MIL (Multi-Dimensional Multiple Instance Learning) problem. The results, shown in Table C and item 2 of the Global Rebuttal, demonstrate that using information from instances in other bags during the computation of attention weights yields effective learning outcomes. This suggests our framework can guide future MIL research.
**(Q1)**
- According to Section 2.4 of Ilse et al. (2018) [7], attention pooling involves computing attention weights for each instance, multiplying these by instance features, summing the weighted features to obtain a bag-level feature, and then using this for bag classification. In contrast, Javed et al. (2022) [8] describe additive pooling, where predictions are made based on each instance's weighted features, and these individual predictions are summed for a bag-level prediction.
- Our paper represents attention pooling as averaging the weighted features (Line 651), while additive pooling is represented by summing individual predictions (Line 655). Although our attention pooling formula includes a factor of $\frac{1}{N}$, this does not fundamentally change the operation. To prevent misunderstanding, we will remove this factor to align it exactly with [7].
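To make the two aggregation orders concrete, here is a minimal NumPy sketch (a hypothetical toy with a linear attention head `w_att` and linear classifier `w_cls`; not the actual implementations of [7] or [8]):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_pooling(H, w_att, w_cls):
    """Ilse et al. [7]-style: weight the instance features, sum them
    into a single bag feature, then classify that feature."""
    a = np.exp(H @ w_att)
    a = a / a.sum()                   # attention weights, sum to 1
    z = (a[:, None] * H).sum(axis=0)  # bag-level feature (weighted sum)
    return sigmoid(z @ w_cls)         # one bag prediction

def additive_pooling(H, w_att, w_cls):
    """Javed et al. [8]-style: score each weighted instance feature
    individually, then sum the per-instance scores for the bag."""
    a = np.exp(H @ w_att)
    a = a / a.sum()
    inst_logits = (a[:, None] * H) @ w_cls  # explicit per-instance predictions
    return sigmoid(inst_logits.sum()), inst_logits
```

In this purely linear toy the two bag outputs coincide, since summation commutes with the linear classifier; the practical difference of additive pooling is the exposed per-instance logits.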
**(Q2)**
- The sentence provides an example to understand Multi-Dimensional MIL (MD-MIL). MD-MIL extends the traditional MIL problem to make predictions for bags-of-bags, using the outer bag's label to infer inner bags and instances.
- For example, in video anomaly detection (VAD), the goal is to detect the exact frame with an anomaly based on the video label. By interpreting VAD as an MD-MIL problem, each frame can be considered a bag of image patches, and the video as a bag-of-bags.
- We conducted experiments on such cases, as detailed in item 3 of the Global Rebuttal. Table C shows that MD-MIL algorithms satisfying our framework's theoretical conditions achieved the best performance.
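As a purely illustrative sketch (hypothetical data layout, not our experimental code), a video can be stored as a bag-of-bags and scored under the standard MIL assumption by nested max-pooling:

```python
def md_mil_predict(video, instance_score):
    """Score a bag-of-bags: a frame is as anomalous as its most
    anomalous patch, and the video as its most anomalous frame."""
    frame_scores = [
        max(instance_score(patch) for patch in frame["patches"])
        for frame in video["frames"]
    ]
    return max(frame_scores)

# Only the outer bag (the video) carries a training label; frames
# (inner bags) and patches (instances) are unlabeled.
video = {
    "label": 1,
    "frames": [
        {"patches": [[0.1, 0.2], [0.3, 0.1]]},  # frame 0
        {"patches": [[0.8, 0.9], [0.2, 0.1]]},  # frame 1 holds the anomaly
    ],
}
```

With a toy `instance_score` such as `max`, the video score here is 0.9, driven by the anomalous patch in frame 1.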
**(Q3)**
- You are correct; the title should be "Validation of Theorem 5." We have reviewed all similar instances and will correct this error in the final version of the paper.
**(Q4)**
- The results for the requested instance-pooling algorithm [21] are included in Table A of the Global Rebuttal PDF. These results show that other recent instance-pooling algorithms are also not learnable on $D_{XY}^{Gen}$ as they are on $D_{XY}^{Ind}$.
- Applying these results to Table 3, we found that if an algorithm is not learnable for bags, it is also not learnable for instances, as confirmed by our experimental results in line with Theorem 1.
**(Limitation 1)**
- The sentence explains that current MD-MIL research adopts pooling methods that are not learnable for instances according to our theoretical framework. Specifically, embedding-pooling and attention-pooling methods used in [9] and [10] are not learnable for instances, making the MD-MIL algorithms using these methods also unlearnable.
- We will revise and clarify these points to ensure they accurately reflect the limitations and are grammatically correct.
**References**
[1] Mohri et al., *Foundations of Machine Learning*, MIT Press, 2018.
[2] Fang et al., "Is out-of-distribution detection learnable?", *NeurIPS*, 2022.
[3] Angelidis et al., "Multiple instance learning networks for fine-grained sentiment analysis", *TACL*, 2018.
[4] Shao et al., "TransMIL: Transformer based correlated multiple instance learning for whole slide image classification", *NeurIPS*, 2021.
[5] Early et al., "Inherently interpretable time series classification via multiple instance learning", *ICLR*, 2024.
[6] Chen et al., "TimeMIL: Advancing Multivariate Time Series Classification via a Time-aware Multiple Instance Learning", *ICML*, 2024.
[7] Ilse et al., "Attention-based deep multiple instance learning", *PMLR*, 2018.
[8] Javed et al., "Additive MIL: Intrinsically interpretable multiple instance learning for pathology", *NeurIPS*, 2022.
[9] Tibo et al., "Learning and interpreting multi-multi-instance learning networks", *Journal of Machine Learning Research*, 2020.
[10] Fuster et al., "Nested multiple instance learning with attention mechanisms", *ICMLA*, 2022.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. Some of my questions/concerns have been addressed, but I think you misunderstood some of my original comments.
For example, regarding W1, a hypothesis is a function, while a hypothesis space is a set of functions. In the paper, it was written as "the optimal hypothesis"... "equal to the sum of hypotheses", which refers to a summation of functions. But Eq. 9 refers to a summation of risks (of the functions). They are not the same.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback. As you correctly pointed out, Condition 4 contains inaccuracies in its expression. While the formula in Condition 4 is accurate, the phrase "the optimal hypotheses" should be revised to "the risks of the optimal hypotheses."
According to Definition 6, we proposed Conditions 4 and 7 as necessary prerequisites for satisfying Condition 2. We have demonstrated, through mathematical proofs based on these conditions, that if Conditions 4 and 7 are met, then Condition 2 is also satisfied. We have confirmed that the formulas and proofs are presented correctly. In light of your comments, we plan to revise the expressions in both Condition 4 and Condition 7 as follows (modifications are shown in bold):
Condition 4: **The risk of** the optimal hypothesis for $D_{XY}^{Ind}$ must ensure that it equals the sum of **the individual risks** of the optimal hypotheses within $D_{XY}^{Ind}$:
Condition 7: **The risk of** the optimal hypothesis for $D_{XY}^{Gen}$ must ensure that it equals the weighted sum of **the individual risks** of the optimal hypotheses within $D_{XY}^{Gen}$:
We will make these revisions to address the errors and improve the clarity of our paper. If you have any further questions or concerns, please let us know so that we can clarify them as well. Thank you again for your valuable feedback. | Summary: This paper mainly studies the instance-level learnability of common weakly-supervised MIL algorithms. With the PAC theoretical framework, it shows the conditions for MIL algorithms to be learnable for instances. Two general cases of instance distribution, IID instances and non-IID instances, are discussed and analyzed in the proposed theoretical framework. In addition, the proposed theoretical framework covers a wide range of MIL algorithms, which could provide valuable insights for many MIL-based applications in the real world.
Strengths: - This paper is overall well-written and easy to follow. All key definitions and theorems’ implications are given and explained clearly.
- This paper studies an interesting and valuable problem, *i.e.*, instance-level learnability in weakly-supervised MIL. This problem is still in the stage of empirical exploration, so a formal study from a theoretical perspective, which this work presents, is needed and could be valuable to the community of deep MIL.
- The proposed theoretical framework covers a wide range of MIL algorithms, which could provide valuable insights for many MIL-based applications in the real world.
Weaknesses: - In Definition 6, the sufficient condition for instance-level PAC learnability, i.e., Condition 2, is given. However, the authors do not prove that Condition 2 is necessary. From my understanding, instance-level PAC learnability cannot directly deduce Condition 2, as it (Eq. 7) contains an additional bag-level constraint.
- The authors mainly study one type of MIL algorithm in which pooling (instance embedding level or instance score level) is only performed once. In fact, there is another type of MIL algorithm with multiple pooling operations [1, 2, 3]. Most of these algorithms aim at making instance-level predictions and could be viewed as a combination of the classical algorithms discussed in the paper. The authors are encouraged to discuss them and indicate whether these MIL algorithms could also be incorporated into the proposed theoretical framework.
Minor issues:
- Table 3: Please consider using a clearer way to present the performance of instance-level prediction and bag-level prediction, as well as their difference.
- To meet the basic publication requirements of NeurIPS, the authors are encouraged to carefully and repeatedly check those easy-to-avoid errors in their paper, *e.g.*, line 346 - “pr,edication” and “Additve” on page 33.
References:
[1] Shi et al., Loss-based attention for deep multiple instance learning. AAAI, 2020.
[2] Liu et al., Weakly-Supervised Residual Evidential Learning for Multi-Instance Uncertainty Estimation. ICML, 2024.
[3] Wang et al., Rethinking Multiple Instance Learning for Whole Slide Image Classification: A Bag-Level Classifier is a Good Instance-Level Teacher. IEEE Transactions on Medical Imaging, 2024.
Technical Quality: 3
Clarity: 3
Questions for Authors: See my comments above.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations have been indicated by the authors and written in the paper. There is no additional limitation that needs to be explicitly highlighted here.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your recognition and feedback on our work. We've integrated your suggestions to enhance our manuscript. For more information, see the Global Rebuttal. We are addressing your specific comments as follows:
**(W1)** Reason for Condition 2 Being a Necessary Condition for Instance-Level Learnability
- According to Theorem 1, if a model is not learnable at the bag level, it cannot be learnable at the instance level. Therefore, for a model to be learnable at the instance level, learnability at the bag level must be ensured, making Condition 2 a necessary condition for instance-level learnability. In response to the reviewer's comment, we will add a clear explanation in Definition 6 to improve clarity.
**(W2)** Applicability of the Framework to Various Latest Deep MIL Algorithms
- Our study aims to propose a theoretical framework that serves as a baseline for MIL. Therefore, we did not discuss all methods that use various strategies for pooling to enhance performance. However, Theorems 1, 2, and 3, along with Theorems 4-12, which verify the learnability of different pooling methods at the instance level, can be applied to most MIL algorithms. Specifically, for the special MIL algorithms mentioned by the reviewer [1, 2, 3], we can determine instance-level learnability as follows:
- **[1]**: The MIL algorithm proposed in [1] uses an attention mechanism to derive features for the bag from the features of each instance. It then makes predictions at the bag level and uses attention to make instance-level predictions. Thus, because the algorithm in [1] does not satisfy Condition 10 of Theorem 8, it is not learnable at the instance level.
- **[2, 3]**: The methodologies in [2, 3] obtain predictions for instances based on their individual features and then use these features to make bag-level predictions through an independent classifier. The predictions for instances are trained to approximate the bag-level predictions, making these algorithms a type of instance-pooling. Since the optimal hypothesis space for individual instances approximates the optimal hypothesis space for bags, these MIL algorithms are learnable at the instance level according to Theorem 6.
- As mentioned in item 1 of the Global Rebuttal, we have added detailed analyses and indications for various latest Deep MIL algorithms and will include these in the paper. Furthermore, we will strengthen the robustness of our theory through empirical validation of recent MIL algorithms, as shown in Tables A and B of the attached PDF.
**(Minor Issues)**
- To make the experimental results in Table 3 clearer, we will revise the table to separately show the Macro-F1 Score and AUROC results, and display the bag-level and instance-level prediction results together, as shown in Table B of the attached PDF. This should provide clearer performance metrics.
- We have reviewed and corrected the mentioned typographical errors and will reflect these corrections in the paper.
**References:**
[1] Shi et al., Loss-based attention for deep multiple instance learning. AAAI, 2020.
[2] Liu et al., Weakly-Supervised Residual Evidential Learning for Multi-Instance Uncertainty Estimation. ICML, 2024.
[3] Wang et al., Rethinking Multiple Instance Learning for Whole Slide Image Classification: A Bag-Level Classifier is a Good Instance-Level Teacher. IEEE Transactions on Medical Imaging, 2024.
---
Rebuttal Comment 1.1:
Title: Reply to the Author's rebuttal
Comment: Thanks for the author's efforts and detailed responses. After reading the rebuttal, I still have some concerns as follows:
- **W1**: The authors mention that Theorem 1 makes Condition 1 a necessary condition for instance-level PAC learnability since instance-level learnability implies bag-level learnability according to Theorem 1. So, does this mean that Condition 2 can be simplified, *i.e.*, just removing the bag-level constraint from Eq. (7)? Further, if Condition 2 can be simplified as I said, what is the difference between Eq. (7) and Eq. (4)? From my understanding, they are the same. That is to say, Condition 2 seems meaningless and not mentionable as it is just a straightforward variant of Definition 5.
- **W2**: For [2,3], I don't think the optimal hypothesis space for individual instances approximates the optimal hypothesis space for bags. Could the authors explain or justify this?
- **Experimental results in Table A**: I wonder how the authors implement the bag-level prediction for the methods [1,2]. To my knowledge, all of [1,2,3] make bag-level predictions using their respective bag-level branch.
---
Rebuttal 2:
Title: (W1)
Comment: **Answer:**
Thank you for taking the time to thoughtfully read our study and rebuttals. Please find the additional answers to your further questions and concerns.
**(W1)**
- Condition 2 is a necessary condition for a MIL algorithm to be learnable at the instance level.
- Under Assumption 1, if it is learnable at the bag level, Condition 2 ensures learnability at the instance level.
- Hence, this is an essential condition defining the relationship between the hypothesis spaces of the bag and the instance, which is the theoretical foundation for proving the learnability of instances in our study.
- The differences between Eq. (4) and Eq. (7) are as follows:
- Eq. (4) deals with the definition of learnability for individual instances by a MIL algorithm.
- Since MIL is a problem of learning instances based on bag-level labels, learnability at the instance level alone does not sufficiently explain the algorithm's ability to learn instances.
- On the other hand, Eq. (7) defines the necessary and sufficient condition for a MIL algorithm to be learnable at the instance level, which is that the relationship between instance-level and bag-level learnability must be "equivalent."
- Therefore, the proofs of Conditions 4 and 7 in Section 3, which are necessary for each MIL algorithm to be learnable at the instance level, are based on Eq. (7) in Condition 2.
To clarify the necessity and significance of Condition 2, distinct from Eq. (4), we will reflect this in the paragraph after Condition 2 to improve the paper.
---
Rebuttal 3:
Title: (W2)
Comment: **(W2)**
- The reasons why the methods proposed by Liu et al. [2] and Wang et al. [3] can be interpreted through the theoretical framework proposed in this study are explained as follows:
- **Liu et al. [2]**
- MIREL (Multiple Instance Regression with Embedding Learning) [2] is composed of two main modules:
1. A module that extracts features for the bag by mean-pooling the features of instances within the bag and then predicts the label for the bag based on these features.
2. A module that predicts the label for each instance using the individual features of the instances within the bag.
- Additionally, MIREL includes a residual evidence module that calculates the difference between bag and instance predictions, ensuring that the instance and bag prediction modules work complementarily. This means the instance prediction module learns based on the bag prediction value.
- The instance prediction module performs based on the instance’s own features and does not use weighted sums from additional information. Therefore, MIREL's instance prediction module can learn for the bag on $D_{XY}^{Ind}$.
- The learnability of MIREL’s instance prediction module, satisfying Condition 4 of our framework, can be explained as follows:
1. The loss function of MIREL can be expressed as the sum of the loss functions generated by the two main prediction modules for the bag and instance:
- $L_{bag} = \ell_{bag}(\hat{y}_{bag}, y_{bag})$, $L_{inst} = \sum_{i=1}^{N} \ell_{inst}(\hat{y}_i, y_i)$, where $\hat{y}_{bag}$ is the label predicted by the bag prediction module, $y_{bag}$ is the actual label of the bag, $\hat{y}_i$ is the label predicted by the instance prediction module, and $y_i$ is the label of the instance obtained from $\hat{y}_{bag}$ and the residual evidence
- $L_{total} = L_{bag} + \lambda L_{inst}$, where $\lambda$ is a weight
2. The risk for each bag, each instance, and total can be defined as follows:
- $R_{bag} = E_{(X, Y) \sim D_{XY}^{Ind}}[\ell_{bag}(\hat{y}_{bag}, y_{bag})]$
- $R_{inst_i} = E_{(X, Y) \sim D_{X_{inst_i}Y}^{Ind}}[\ell_{inst}(\hat{y}_i, y_i)]$
- $R_{total} = R_{bag} + \lambda \sum_{i=1}^{N} R_{inst_i}$
3. Since our theoretical framework evaluates learnability at the instance level under Assumption 1, it can be expressed as follows:
- $\inf R_{total} = \inf R_{bag} + \lambda \inf \sum_{i=1}^{N} R_{inst_i}$
- Since $R_{bag}$ is learnable on $D_{XY}^{Ind}$, $\inf_{h \in H} R_{D_{XY}^{Ind}} = \inf R_{bag}$ holds.
- $\inf \sum_{i=1}^{N} R_{inst_i}$ learns from the instance labels generated by the bag prediction module, so $\inf_{h \in H} R_{D_{XY}^{Ind}} = \lambda \inf \sum_{i=1}^{N} R_{inst_i}$.
- Since $\lambda$ is a fixed constant, $\inf_{h \in H} R_{D_{XY}^{Ind}} = \inf \sum_{i=1}^{N} R_{inst_i}$ (i.e., Condition 4) holds.
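The decomposition in steps 1-3 can be sketched as follows (a toy NumPy version with a shared linear weight `w`, and instance targets taken directly from the bag label for simplicity; MIREL itself derives instance targets via residual evidence):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bce(p, y):
    """Binary cross-entropy, elementwise over arrays."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def total_loss(H, y_bag, w, lam=0.5):
    """L_total = L_bag + lambda * L_inst for one bag of features H."""
    p_bag = sigmoid(H.mean(axis=0) @ w)  # module 1: mean-pooled bag prediction
    p_inst = sigmoid(H @ w)              # module 2: per-instance predictions
    L_bag = bce(p_bag, y_bag)
    L_inst = bce(p_inst, np.full(len(H), y_bag)).sum()
    return L_bag + lam * L_inst
```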
- **Wang et al.[3]**
- Unlike the instance pooling in [2], we have confirmed that ICMIL (Iteratively Coupled Multiple Instance Learning) [3] employs an attention-pooling method. Therefore, we need to explain how our theoretical framework interprets ICMIL separately from [2].
- ICMIL is an algorithm where a teacher MIL model performs attention-pooling to pseudo label each instance based on its attention confidence score, followed by a student model performing supervised learning using these pseudo labels.
- The teacher MIL model assigns an attention weight between 0 and 1 to each instance, with the sum of these weights equal to 1. This space, multiplied by the hypothesis space of each instance, satisfies Condition 8 and, according to Theorem 5, is learnable for the bag on $D_{XY}^{Gen}$. This can be verified through proofs in Appendix C.5.
- Therefore, the hypothesis space for instances in the teacher model of ICMIL is included in the hypothesis space for the bag-level generated space of individual instances. However, since it does not comply with Condition 10, as proven in Appendix C.9, we cannot guarantee that the teacher model in ICMIL is learnable for instances.
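A minimal sketch of the teacher step (hypothetical linear attention and a uniform-weight threshold; ICMIL's actual confidence mechanism is more elaborate):

```python
import numpy as np

def teacher_pseudo_labels(H, w_att, bag_label):
    """Attention weights in [0, 1] summing to 1 act as confidence
    scores; in a positive bag, instances weighted above uniform
    (1/N) are pseudo-labeled positive for the student model."""
    logits = H @ w_att
    a = np.exp(logits - logits.max())
    a = a / a.sum()                       # attention weights, sum to 1
    if bag_label == 0:                    # negative bag: every instance negative
        return a, np.zeros(len(H), dtype=int)
    return a, (a > 1.0 / len(H)).astype(int)
```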
- We have rigorously presented terms, definitions, and theories related to Deep MIL algorithms based on PAC theory to cover all MIL algorithms under Assumption 1. We will improve the paper by including the cases of [2] and [3] as representative examples of how our framework interprets specific MIL algorithms.
---
Rebuttal 4:
Title: (Experimental results in Table A)
Comment: **(Experimental results in Table A)**
- **Shi et al.[1]**
- [1] uses a loss-based attention mechanism to address the issue where the attention mechanism in attention-pooling assigns high weights to irrelevant instances, leading to incorrect predictions. [1] performs predictions for both instances and bags simultaneously, so we can measure its performance in the same way as for other MIL models that use attention-pooling.
- **Liu et al. [2]**
- MIREL [2] separates the prediction modules for the bag and the instances. In our study, in accordance with Condition 2, we aim to evaluate the prediction performance for the bag based on the prediction results for the instances. Therefore, instead of directly utilizing the performance of the prediction module for the bag, we calculated the bag predictions by applying mean-pooling and max-pooling on the scores from the instance prediction module, and then measured the average performance.
- This experimental design was intended to demonstrate that MIREL [2] shares characteristics with other instance-pooling methods when bag-level predictions are derived from instance-level predictions.
- Now, as shown in (W2), we verified that MIREL [2] can indeed be interpreted as an instance-pooling method, and we have also sufficiently experimented with other instance-pooling methods in Table A. To avoid the unnecessary misunderstanding you pointed out, we will exclude these results from Table A.
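For reference, the bag predictions for [2] in Table A were derived along these lines (an illustrative sketch, assuming instance scores in [0, 1] and a 0.5 decision threshold):

```python
import numpy as np

def bag_predictions_from_instances(inst_scores, threshold=0.5):
    """Derive two bag predictions from instance-module scores: one by
    mean-pooling and one by max-pooling the scores; the reported
    metric averages performance over the two settings."""
    s = np.asarray(inst_scores, dtype=float)
    return int(s.mean() > threshold), int(s.max() > threshold)
```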
Thanks again for your rigorous feedback, which helps us improve the quality of the paper. If you have further concerns or questions, please let us know so that we can respond to them.
---
Rebuttal 5:
Title: Reply to the Authors' Rebuttal
Comment: Thanks for the authors' comprehensive responses. These have addressed my concerns raised before. The authors are encouraged to include their responses (to Condition 2 and the theoretical interpretation and experimental settings of [1,2]) into the final version of the paper.
In view of these, I am glad to keep my positive score. I believe this paper would impact the theoretical study of instance-level learnability in MIL.
---
Rebuttal Comment 5.1:
Comment: We deeply appreciate your positive evaluation of our paper and your kind acknowledgment of our comprehensive responses. We are pleased to have addressed all of your concerns. We will incorporate your suggestions into the final version of the paper, including our responses to Condition 2 and the theoretical interpretation and experimental settings of [1,2]. We believe that this will enhance the contribution and quality of the paper. Thank you once again for your valuable feedback, and please do not hesitate to let us know if you have any additional questions or concerns. | Summary: The paper provides theoretical considerations regarding Multiple Instance Learning.
Strengths: Theoretical consideration supported by the experimental results.
Weaknesses: There are no images depicting the intuition behind those definitions and theorems.
Paper is extremely hard to follow because it is not giving the intuition and implication for practitioners.
Technical Quality: 4
Clarity: 1
Questions for Authors: Is it possible to improve presentation of the paper?
Confidence: 4
Soundness: 4
Presentation: 1
Contribution: 4
Limitations: Discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your recognition and feedback on our work. We've integrated your suggestions to enhance our manuscript. For more information, see the Global Response. We are addressing your specific comments as follows:
**(W1, 2, Q1)** Improving the Presentation of the Paper
- To address the feedback that the paper is difficult to understand due to its focus on theoretical content and lack of figures, we will add example figures illustrating the problems addressed in our study. Additionally, we will include Figure A to summarize the relationships between the theorems.
- **Attached PDF Figure A:** Figure A summarizes the relationships between all the theorems that constitute the theoretical framework of our paper. It not only shows which theorems are derived from others but also helps to identify which MIL pooling methods or algorithms are learnable or not at the instance level. Figure A will help readers understand the theoretical framework by showing the connections and outcomes between the theorems.
- Furthermore, since the paper focuses primarily on theoretical validation, we have significantly strengthened the empirical validation to show how the framework can be applied in practical environments, as detailed in items 1 and 2 of the Global Rebuttal. This will help readers understand the real-world impact of our framework.
- Finally, we will enhance the presentation by thoroughly reviewing and correcting grammatical errors, table captions, typos, and other issues identified by reviewers.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response; it addressed my concerns. Additionally, I recommend discussing this work in the context of the theory you have developed, as I found it intriguing. It introduces a novel MIL assumption, which adds significant value.
Struski, Łukasz, et al. "ProMIL: Probabilistic multiple instance learning for medical imaging." ECAI 2023. IOS Press, 2023. 2210-2217.
---
Reply to Comment 1.1.1:
Comment: Thanks for adjusting your score. To further respond to your valuable feedback, we would like to add a discussion of ProMIL to the paper according to the following analysis.
ProMIL performs a prediction on each instance and then uses the classification results of the instances within a specified quantile to predict the bag label with a Bernstein Polynomial Estimator. This approach enables more accurate quantile estimation on datasets with complex or uneven distributions, thereby improving predictive performance for the bag.
- Let the prediction module for an instance be $f(x_i)$, and let the module's prediction value be $c_i$ (where $0 \leq c_i \leq 1$). The algorithm's prediction value for the bag, $c_q$, based on the $c_i$ values within the *q*-quantile, is given by:
- $c_q = \sum_{k=0}^{n} \binom{n}{k} q^{n-k} (1-q)^k \cdot c_k$
- If $c_q$ is greater than 0.5, the bag is predicted to be positive; otherwise, it is predicted to be negative.
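A small sketch of this estimator (we assume $c_k$ indexes the instance scores sorted in descending order; the exact ordering convention in ProMIL may differ):

```python
from math import comb

def bernstein_bag_score(instance_scores, q):
    """c_q = sum_k C(n,k) q^(n-k) (1-q)^k c_k -- a convex combination
    of the sorted instance scores, since the binomial weights sum to
    (q + (1-q))^n = 1."""
    c = sorted(instance_scores, reverse=True)  # assumed ordering of c_k
    n = len(c) - 1
    return sum(comb(n, k) * q**(n - k) * (1 - q)**k * c[k]
               for k in range(n + 1))

def bag_prediction(instance_scores, q=0.5):
    """Positive bag iff the quantile estimate exceeds 0.5."""
    return 1 if bernstein_bag_score(instance_scores, q) > 0.5 else 0
```

Because the weights form a convex combination, identical instance scores are returned unchanged regardless of `q`.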
ProMIL can be interpreted through our proposed theoretical framework as follows:
- ProMIL makes predictions for the bag based on individual instance predictions. Thus, it does not satisfy Condition 10 of our framework and can only be learned on $D_{XY}^{Ind}$, not on $D_{XY}^{Gen}$.
- According to our Definitions 2 and 4, the relationship between ProMIL and the model's risk on $D_{XY}^{Ind}$ can be expressed as:
- $\inf R_{bag} = \inf \left( \sum_{k=0}^{n} \binom{n}{k} q^{n-k} (1-q)^k \cdot R_{inst_k} \right)$
- According to our Assumption 1, if learning on the bag is possible, the following holds:
- $\inf_{h \in H} R_{D_{XY}^{Ind}} = \inf \left( \sum_{k=0}^{n} \binom{n}{k} q^{n-k} (1-q)^k \cdot R_{inst_k} \right) = \sum_{i=1}^{N} \inf R_{inst_i}$
- Therefore, ProMIL ensures satisfaction of Condition 4 by optimizing the total bag risk through the sum of each instance's optimal risk. This demonstrates that learning on instances is feasible on $D_{XY}^{Ind}$.
These analytical results illustrate that our framework based on PAC Learning theory can effectively analyze specific real-world algorithms like ProMIL. This serves as a representative example demonstrating the utility of our proposed framework.
Thank you once again for your valuable feedback. If you have any further questions or concerns, please let us know so that we can respond to them. | Summary: The main contributions of the paper are theoretical:
- Proof of the fact that MIL algorithms that are not learnable for bags do not guarantee learnability for instances.
- Using PAC learning theory (under some assumptions) learnability conditions are derived. These conditions are sufficient and necessary.
- Basic experiments are carried out to support the theoretical findings empirically.
The most important results of the paper are theorems 1, 3, 5, 7, 10, 11.
Strengths: The strongest part of the paper is the theoretical framework and mathematical setup resulting in some key results about PAC learnability at the instance level for various MIL algorithms.
- Explicit results connecting bag learnability with instance learnability.
- The paper derives theoretical guarantees about which kinds of pooling maintain PAC learnability for instances.
- I think the paper could have broader impact in the development of instance level MIL methods.
- The paper, despite being quite dense with theorems and definitions, flows reasonably well and is understandable when read. I checked the proofs and I can follow the arguments line by line. I appreciate that the authors take the time to set up clear notation and a proper mathematical formulation of their problem definition.
Weaknesses: Despite the strengths of the paper there are some weaknesses.
- From a motivation standpoint, it has been the case for a while now that the strongest methods in MIL are attention pooling based and operate at the bag/embedding level, not the individual instance level. How do the results presented in this paper affect methods such as [1]? Why should we care about instance based methods in 2024? Explainability or interpretability motivations? The paper should make the case for why these results are important given the current state-of-the-art in the field.
- The methodology is well supported by the proofs. However, the empirical evaluation is not very thorough. It would be nice to see if the methodology has practical implications in standard MIL benchmarks used in new methods. Datasets such as the ones used in [1] are basically standard for MIL research. If the authors want to go one step further they could incorporate more elaborate testing frameworks too [2]. More experiments would help to convince me that the theoretical results have grounding in empirical findings and encourage me to raise my score.
References:
[1] Attention-based Deep Multiple Instance Learning, Maximilian Ilse, Jakub M. Tomczak, Max Welling, ICML (2018).
[2] Reproducibility in multiple instance learning: a case for algorithmic unit tests. Raff, Edward, and James Holt. NeurIPS (2023).
Technical Quality: 3
Clarity: 3
Questions for Authors: - The assumptions (referred to as conditions) 4, 6, 7 are crucial for the theorem proofs in the paper. Are these conditions assumed to be true? Or are they proven to be true somewhere? Is there a way to check if these conditions are true for a given model / dataset practically?
- The framework analyzes independently distributed instances within a bag $\mathcal{D}^{\text{Ind}}_{XY}$ as well as cases where instances within the same bag are statistically dependent. However, in many cases it is important to model the relation between different bags (or instances inside different bags). For example, thinking of an image as a set of bags where each bag (patch of pixels) is a set of feature instances (pixels), we might want to learn relationships between nearby pixels that fall in different bags. Do the authors have any insight on if their analysis could be expanded in this direction in future work?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The paper discusses its limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your recognition and feedback on our work. We've integrated your suggestions to enhance our manuscript. For more information, see the Global Rebuttal. We are addressing your specific comments as follows:
**(W1)**
- **Importance of Instance Pooling:**
- Instance-Pooling is a fundamental method still used in recent studies [2-4].
- The Independent bag domain space assumes instance independence, common in medical domains and video anomaly detection. In such cases, Instance-Pooling is effective due to lower computational and memory requirements compared to other pooling methods.
- For example, Causal MIL (NeurIPS, 2022) improves prediction by identifying influential features while assuming instance independence within a bag on $D_{XY}^{Ind}$.
- **Learnability of Attention MIL[1]:**
- Our paper presents Theorem 8 to evaluate the learnability of instance-based attention pooling, a popular MIL method. According to Theorem 8, if the hypothesis space for bags exceeds that of instances, Attention MIL (Pooling) is not learnable at the instance level. This conclusion applies to methods like [1], TransMIL [6], SA-AbMIL [7], and TimeMIL [8].
- Our framework also evaluates other attention-based methods like Additive MIL and Conjunctive MIL. We will include Figure A in the Global Rebuttal PDF for better understanding.
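To make the contrast between the pooling families concrete, here is a minimal NumPy sketch (illustrative only, not the paper's implementation): instance pooling scores each instance and then aggregates, whereas attention pooling in the spirit of Ilse et al. [1] collapses the bag into one weighted embedding, so the instance scores are never supervised directly.

```python
import numpy as np

def instance_pooling(instance_scores):
    """Instance pooling: score each instance, then aggregate (max) for the bag."""
    return np.max(instance_scores)

def attention_pooling(H, V, w):
    """Attention pooling in the spirit of Ilse et al. [1]:
    a = softmax(w^T tanh(V h_i)); bag embedding z = sum_i a_i h_i.
    H: (n_instances, d) instance features; V: (h, d); w: (h,)."""
    scores = w @ np.tanh(V @ H.T)        # one attention score per instance
    a = np.exp(scores - scores.max())
    a = a / a.sum()                      # attention weights sum to 1
    return a @ H, a                      # bag embedding and instance weights
```

The bag label is predicted from the weighted embedding `a @ H` alone, which is why supervision reaches individual instances only indirectly through the attention weights.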
**(W2)**
- Our study proposes a framework to overcome the limitation of previous studies, which focused only on experiments without theoretical validation of instance learnability. We prioritized empirical verification of the pooling-method theories rather than results for specific algorithms.
- The benchmark from [1], mentioned by the reviewer, is based on real datasets where generalization is not assumed, making theoretical results difficult to observe. Various algorithms have already been tested on these datasets.
- The benchmark from [5] confirms reproducibility of bags composed of vector datasets sampled from a normal distribution on the Independent bag domain space, evaluating MIL algorithm adherence to MIL assumptions. Our study differs by proposing a framework for evaluating instance learnability and defining the General Bag Domain space to assess learnability in environments with instance relationships.
- The synthetic dataset using MNIST data, similar to [2], is widely used for specific cases. It accurately verifies the theory without performance variations due to encoder differences, which is why we used it.
- Reflecting the reviewers' suggestion for more empirical evaluations, we conducted verifications on various latest Deep MIL algorithms in the Global Rebuttal. The results, shown in Tables A and B, align with our theoretical results and will be included in the revised paper.
**(Q1)**
- For Conditions 4 and 7, we have proven they are necessary and sufficient based on Definition 6 in our paper, as shown in Appendix C.2 and C.3. Conditions 4 and 7 hold true for any model as long as Assumption 1 is satisfied. Specifically, experimental validation of Theorems 8-11, derived from Theorem 3, shows that Condition 7 is necessary and sufficient for learnability at the instance level on $D_{XY}^{Gen}$.
- Condition 6 is assumed to be true in our paper. When relationships exist between instances, their contributions to the bag's domain space vary. The total contribution sum is 1. For example, in the **Experimental Validation of Theorems 5, 8, and 9**, if the digits 3 and 5 appear together in a bag, they are positive, but individually they are not. Thus, contributions change based on relationships.
**(Q2)**
- The scenario you mentioned relates to Multi-Dimensional MIL (MD-MIL). For example, in video data, a video is composed of snippets, and each snippet consists of image patches. For video anomaly detection, information from a snippet’s patches may require information from previous snippets’ patches.
- Our study clarified that to make predictions at the instance level using relationships between instances, Condition 10 must not be violated, as stated in Theorem 8. External information should not be used in the prediction stage itself but only in the calculation of weights, such as attention weights.
- We conducted additional experiments on such cases, as detailed in item 2 of the Global Rebuttal. The results confirmed that theoretically learnable algorithms achieved the best performance, demonstrating the potential for extending our theoretical framework to various domains. These findings will be included in the revised paper.
**References:**
[1] Ilse et al., Attention-based deep multiple instance learning. ICML, 2018.
[2] Zhang et al., Multi-instance causal representation learning for instance label prediction and out-of-distribution generalization. NeurIPS, 2022.
[3] Liu et al., Weakly-supervised residual evidential learning for multi-instance uncertainty estimation. ICML, 2024.
[4] Wang et al., Rethinking multiple instance learning for whole slide image classification: A bag-level classifier is a good instance-level teacher. IEEE Transactions on Medical Imaging, 2024.
[5] Raff et al., Reproducibility in multiple instance learning: a case for algorithmic unit tests. NeurIPS, 2024.
[6] Shao et al., TransMIL: Transformer based correlated multiple instance learning for whole slide image classification. NeurIPS, 2021.
[7] Rymarczyk et al., Kernel self-attention in deep multiple instance learning. arXiv, 2020.
[8] Chen et al., TimeMIL: Advancing multivariate time series classification via a time-aware multiple instance learning. ICML, 2024.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for the thorough response.
Most of my concerns were addressed and thus I am raising my score.
Though the paper is technically solid I still have some doubts about the impact. I understand that this is my subjective opinion of the work and encourage the authors to expand on how they think their work might impact subsequent research.
---
Rebuttal 2:
Comment: Thank you for your positive consideration and for raising valuable further concerns regarding subsequent research. We believe the answers below will help clarify the implications of our work, and we address your concerns as follows:
1. The theoretical framework proposed in this study can verify, for any MIL algorithm satisfying Assumption 1, whether that algorithm is learnable at the instance level.
- We derived Condition 1, a necessary and sufficient condition for learning on instances for MIL that satisfies Assumption 1, and through proofs, we demonstrated that all MIL algorithms satisfying Condition 1 can be applied to the proposed framework.
- As concrete examples, in additional responses to other reviewers (**bVsY(W2)**, **W4rs**), we showed that specific MIL algorithms such as MIREL, ICMIL, and ProMIL can be interpreted using the proposed framework. By demonstrating how the proposed framework can be applied to actual MIL with these representative cases, we aim to improve the paper.
2. It enables the appropriate selection of MIL algorithms across various real-world applications where MIL can be used.
- MIL is utilized in fields such as Medical Image Analysis, Video Anomaly Detection, and Biological Activity Modeling, where detailed labeling is difficult but accurate prediction for instances (e.g., cancer-affected regions, anomaly detection time points, molecules constituting new drugs) is essential. These applications include cancer diagnosis, enhancing the efficiency of security and surveillance systems, and drug development.
- Particularly in the medical field, where accurate prediction for instances is required, Attention-pooling-based MIL has been utilized recently due to its high prediction performance on bags [1,2,3,4]. However, according to our theoretical validation of the framework, Attention-pooling does not satisfy Condition 10 and therefore is not learnable for instances. This has been experimentally verified through various Attention-pooling methods in Table B of Global Rebuttal 1. Consequently, these algorithms may yield inaccurate predictions for instances, indicating the need for the introduction of learnable pooling methods for instances, such as Conjunctive-Pooling.
3. Theorem 8-11 and Condition 9 can provide direction for future MIL research in more practical problem settings.
- In the second item of Global Rebuttal and Table C, we demonstrated that Multi-Dimensional-MIL (MD-MIL), which combines MIL algorithms satisfying Theorem 8-11 and Condition 9, achieves the best performance in tasks predicting multi-dimensional instances.
- MD-MIL assumes a more practical problem by considering not only the relationships between instances within a bag, which most current MIL algorithms consider, but also the relationships between bags. Although this is still in the early stages of research, it is expected to be actively studied in the future. In Section 4.3.2, we showed that MD-MIL can be interpreted through our framework, and its potential has been experimentally proven in the rebuttal.
- These results demonstrate the scalability of the proposed framework and provide a theoretical basis for future research in MD-MIL.
- Additionally, Theorem 8-11 and Condition 9 theoretically interpret how external information to instances should be applied in MIL algorithms and provide direction for practical application.
- For example, in tasks predicting cancer-affected regions based on medical images for cancer diagnosis, additional medical information of the patient that could be used for diagnosis should not be directly used for instance prediction according to Theorem 9-11 and Condition 9, but should be used as weights according to Theorem 9.
Your comments significantly contribute to enhancing the utility of the paper by considering the future development and practicality of the proposed framework. Thank you once again for your valuable feedback. If you have any further questions or concerns, please let us know so that we can address them.
**Reference:**
[1] Ilse et al., (2018, July). Attention-based deep multiple instance learning. In *International conference on machine learning* (pp. 2127-2136). PMLR.
[2] Xu et al., (2023). Classification of colorectal cancer consensus molecular subtypes using attention-based multi-instance learning network on whole-slide images. *Acta Histochemica*, *125*(6), 152057.
[3] Han et al., (2020). Accurate screening of COVID-19 using attention-based deep 3D multiple instance learning. *IEEE transactions on medical imaging*, *39*(8), 2584-2594.
[4] Li et al., (2020, December). Deep multi-instance learning with induced self-attention for medical image classification. In *2020 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)* (pp. 446-450). IEEE.
---
Rebuttal Comment 2.1:
Comment: I thank the authors for their response, it has made a good case about how the paper fits into the broader field - this analysis would be a good addition to the paper for a camera ready version should the paper be accepted. My concerns have been addressed. I am bumping my score by one more point.
---
Reply to Comment 2.1.1:
Comment: We deeply appreciate your positive evaluation of our paper and your kind acknowledgment of our responses. We are pleased to have addressed all your concerns. We will incorporate your suggestions into the final camera-ready version, and we believe that this will enhance the contribution and quality of the paper. Thank you once again for your valuable feedback, and please do not hesitate to reach out if you have any additional questions or concerns. | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback and insightful comments. We appreciate the time and effort you have invested in reviewing our work. K1fQ and W4rs recognized our focus on the theoretical foundations of our study. dSpL and 2msW acknowledged the clarity and rigor of our theoretical framework and arguments. Finally, bVsY appreciated the clarity and potential impact of our work within the MIL community.
We did our best to address all of your comments and will make the necessary revisions to improve the clarity, rigor, and validity of our manuscript. We kindly ask for your consideration in adjusting the review score if you find that our response has effectively addressed your concerns. In the revised manuscript, we will make the following major modifications.
1. **Extensive Expansion of Experimental Validation**
- We have expanded experimental validation to a wider range of Deep MIL algorithms to demonstrate the practical applicability of our theoretical framework.
- Tables A and B of the attached PDF show experimental validation results for Deep MILs, which confirm the same conclusions as the theoretical framework of this paper. We will include detailed analysis and experimental results for these conclusions in the revised manuscript.
- Based on experimental validation, it was confirmed that MIL algorithms using the same pooling method exhibit a similar overall trend to the theoretical analysis of the framework, despite slight performance differences.
2. **Practical usage of Multi-Dimensional MIL (MD-MIL)**
- MD-MIL involves learning from bag-of-bags labels, predicting labels for the bags that comprise these bag-of-bags, and for the instances within each bag.
- According to Theorems 9, 10, and 11, using information from instances outside the target instance for prediction should be limited to the attention mechanism. Thus, in predicting instances within a specific bag, the relationships with instances from other bags should be utilized only through attention operations.
- As the research on MD-MIL is still in its early stages, this study does not consider all possible pooling combinations for MD-MIL. Instead, it aims to demonstrate the practical effectiveness of MD-MIL by investigating whether a model that theoretically considers the relationships between bags can directly enhance performance. To this end, experiments on MD-MIL were conducted using the ShanghaiTech Video Anomaly Detection (VAD) dataset.
- We evaluated the prediction performance of frames (i.e., bags) and image patches (i.e., instances) in videos, comparing three methods according to the instances used from other bags for instance prediction: 1) **None-Attention,** which does not utilize attention mechanisms, 2) **Attention**, which uses attention mechanisms within the same bag, and 3) **Cross-Attention**, which uses attention mechanisms that leverage features from instances in other bags.
- These three attention mechanisms are theoretically used for **Conjunctive Pooling** in the **General Domain Space**, at each dimension (instance and bag).
- The experimental results are in Table C of the attached PDF. The Cross-Attention method showed the highest performance by effectively utilizing relationships between instances across different bags, confirming the practical usage of MD-MIL.
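To illustrate the constraint stated above — external instances may shape the per-instance weights, but must not enter the prediction stage directly — here is a minimal NumPy sketch (hypothetical names and shapes, not the paper's code):

```python
import numpy as np

def weights_from_external(X_cur, X_other):
    """Per-instance weights driven by similarity to instances of OTHER bags;
    in this sketch, this is the only place external information enters.
    X_cur: (n_cur, d) current-bag instances; X_other: (n_other, d)."""
    s = (X_cur @ X_other.T).max(axis=1)   # best cross-bag match per instance
    e = np.exp(s - s.max())
    return e / e.sum()                    # softmax over current-bag instances

def predict_instances(X_cur, X_other, clf_w):
    """A linear predictor clf_w sees only the (reweighted) current-bag
    features, so external instances never reach the prediction itself."""
    a = weights_from_external(X_cur, X_other)
    return (a[:, None] * X_cur) @ clf_w   # one score per current-bag instance
```

The sketch mirrors the Cross-Attention setup in spirit: relationships to instances from other bags are expressed purely through the attention-style weights `a`.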
3. **Overall presentation enhancement**
- We will add the figures in the attached PDF to the paper to help readers understand.
- **Figure A**: A summary diagram showing the relationships between theorems. This will help readers understand the flow and connections between the theorems and the corresponding pooling methods.
- We believe these revisions address the reviewers' concerns and enhance the overall quality and clarity of the manuscript. Thank you for considering our response.
**References:**
[1] Shi et al., Loss-based attention for deep multiple instance learning. AAAI, 2020.
[2] Liu et al., Weakly-supervised residual evidential learning for multi-instance uncertainty estimation. ICML, 2024.
[3] Wang et al., Rethinking multiple instance learning for whole slide image classification: A bag-level classifier is a good instance-level teacher. IEEE Transactions on Medical Imaging, 2024.
[4] Shao et al., TransMIL: Transformer based correlated multiple instance learning for whole slide image classification. NeurIPS, 2021.
[5] Zhang et al., Multi-instance causal representation learning for instance label prediction and out-of-distribution generalization. NeurIPS, 2022.
[6] Rymarczyk et al., Kernel self-attention in deep multiple instance learning. arXiv, 2020.
[7] Wang et al., Revisiting multiple instance neural networks. Pattern Recognition, 2018.
[8] Ilse et al., Attention-based deep multiple instance learning. ICML, 2018.
[9] Javed et al., Additive MIL: Intrinsically interpretable multiple instance learning for pathology. NeurIPS, 2022.
[10] Early et al., Inherently interpretable time series classification via multiple instance learning. ICLR, 2024.
Pdf: /pdf/b938802b811573fd87fd428999ee0df6996ef83b.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper addresses a critical gap in Multiple Instance Learning research by proposing a theoretical framework to evaluate instance-level learnability of MIL algorithms. Utilizing PAC learning theory, the authors derive conditions under which Deep MIL algorithms can achieve instance-level learnability. The framework is then applied to evaluate the learnability of several existing Deep MIL algorithms, and the findings are validated through empirical studies.
Strengths: 1. The paper introduces a novel theoretical framework to assess instance-level learnability in MIL, filling a significant gap in current research.
2. The authors provide clear and precise definitions of MIL domain spaces, hypothesis spaces, and risk, which aids in understanding the theoretical constructs.
3. The theoretical conditions and their proofs are well-articulated and grounded in PAC learning theory, providing a strong foundation for the proposed framework.
4. The paper evaluates a range of Deep MIL algorithms against the proposed theoretical conditions, offering a broad validation of the framework.
5. The study addresses practical concerns in domains requiring expert labeling, such as pathology, highlighting the potential impact of instance-level learnable MIL algorithms.
Weaknesses: 1. The paper’s theoretical sections are dense and may be challenging for readers not well-versed in PAC learning theory.
2. The framework relies on several assumptions, such as the independence of instances in certain conditions, which may not always hold in real-world applications.
3. Details on experimental setups and reproducibility of empirical studies are sparse, which could hinder replication efforts.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How do the proposed theoretical conditions translate to improvements in practical MIL applications? Can you provide more detailed real-world examples?
2. How robust are the theoretical conditions under different assumptions, such as dependent instances within bags?
3. Can the framework be extended to other types of machine learning algorithms beyond Deep MIL?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your recognition and feedback on our work. We've integrated your suggestions to enhance our manuscript. For more information, see Global Response. We are addressing your specific comments as follows:
**(W1)** Complexity of the Theoretical Framework
- To facilitate understanding of the proposed theoretical framework, we will add a summary figure and explanation of the relationships between the theorems to the paper. This figure, shown as Figure A in the Global Rebuttal PDF, will illustrate how the theorems are derived and their implications.
- Figure A explains which theorems are derived from others and their outcomes, particularly whether certain pooling methods are learnable at the instance level. This should help readers unfamiliar with PAC learning theory to understand the goals and theoretical flow of our paper.
**(W2)** Applicability of the Theoretical Framework to Real-World Applications
- We addressed the learnability of MIL algorithms in independent scenarios because many studies assume independence when implementing MIL algorithms. Most studies using instance-pooling require instances to be independent.
- To assess the learnability of algorithms in broader scenarios, we defined the General Bag Domain Space, which includes both independent and dependent bag domain spaces. This ensures that our framework is valid under Assumption 1, even when relationships between instances exist.
**(W3)** Limited Empirical Validation
- The experiments in our paper are reproducible, with code made available and a detailed readme file explaining the experimental setup and parameters. Appendix D provides extensive details on the experimental environment, hyperparameters, and network structures to facilitate replication.
- Additionally, reflecting reviewers' suggestions for more empirical validation with recent Deep MIL algorithms, we conducted further experiments detailed in item 1 of the Global Rebuttal, shown in Tables A and B. These results will be added to the paper, demonstrating that our theoretical framework applies to all Deep MIL algorithms, not just representative pooling methods, and confirming its empirical validity.
**(Q1)** Practical Applications of the Framework
- Our proposed framework offers the advantage of theoretically evaluating whether MIL algorithms are learnable at the instance level. The Global Rebuttal PDF's Figure A clearly demonstrates the theoretical learnability of individual pooling methods, and we have empirically validated these theories through experiments on specific pooling algorithms, as shown in Global Rebuttal item 1.
- For example, in the medical domain, MIL algorithms have been adopted to address labeling issues. These algorithms often use attention pooling to reflect current trends. However, our study shows that attention pooling is not learnable at the instance level, indicating it is unsuitable for medical applications where accurate instance-level predictions are crucial for patient diagnosis.
- Our study identifies two pooling methods that are learnable at the instance level for various MIL domain spaces. This can help select appropriate MIL algorithms for specific environments.
- Additionally, our framework can be applied to individual techniques aimed at improving MIL performance.
- In Section 4.4.3, we applied our framework to positional encoding, commonly used in MIL research, showing in Table 4 that positional information should be used only in attention computation to improve instance-level prediction performance.
- As shown in Table C of the Global Rebuttal, our framework also theoretically guarantees the best performance for models that use relationships between instances from different bags, such as in MD-MIL scenarios.
**(Q2)** Theoretical Robustness of the Framework in the General Bag Domain Space
- We proposed a framework under Assumption 1, which assumes MIL algorithms are learnable at the bag level, to facilitate the theoretical evaluation of instance-level learnability. Under this assumption, our framework includes both the Independent bag domain space ($D_{XY}^{Ind}$) and Dependent bag domain space ($D_{XY}^{Dep}$) in the General Bag Domain Space ($D_{XY}^{Gen} = D_{XY}^{Ind} \cup D_{XY}^{Dep}$) and proposes Theorem 3 to determine instance-level learnability. Based on Theorem 3, we derived Theorems 5, 8, 9, 10, and 11.
- Specifically, Theorem 3 is proven using the Azuma-Hoeffding inequality [1], which applies to dependent random variables, demonstrating the robustness of our theoretical conditions. Empirical validation of our theorems confirms the robustness of the framework in $D_{XY}^{Gen}$.
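For reference, the bounded-difference form of the Azuma–Hoeffding inequality invoked above states that for a martingale $(Z_k)_{k \ge 0}$ with $|Z_k - Z_{k-1}| \le c_k$,

$$\Pr\big(|Z_n - Z_0| \ge t\big) \;\le\; 2\exp\!\left(-\frac{t^2}{2\sum_{k=1}^{n} c_k^2}\right),$$

which, unlike the plain Hoeffding bound, does not require the increments to be independent — this is what allows the concentration argument to carry over to the dependent bag domain space.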
**(Q3)** Extending the Framework to Other Types of Machine Learning Algorithms
- Our framework can be applied to machine learning-based MIL algorithms, provided they satisfy Assumption 1.
- For example, the well-known machine learning-based MIL algorithm MI-SVM [2] performs individual instance predictions and then predicts the bag based on these results, satisfying Condition 4 and being learnable at the instance level.
- Recent MIL research focuses more on Deep MIL than on machine learning-based methods because deep networks can better extract important features from instances, achieving higher performance [2]. Thus, we refer to our framework as a framework for Deep MIL.
**Reference:**
[1] Pelekis et al., Hoeffding's inequality for sums of dependent random variables. Mediterranean Journal of Mathematics, 2017.
[2] Andrews et al., Support vector machines for multiple-instance learning. NeurIPS, 2002.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. The reply is appreciated. I hence raised my score to 6.
---
Reply to Comment 1.1.1:
Comment: Thank you for your positive evaluation of our paper and for raising your score. Your feedback has been invaluable in enhancing our research. If you have any further concerns or questions, please do not hesitate to let us know. We are more than happy to address them. | null | null | null | null | null | null |
Learning Truncated Causal History Model for Video Restoration | Accept (poster) | Summary: This paper proposes a TURTLE to learn the truncated causal history model for video restoration tasks.
The proposed turtle's causal history model consists of two sub-modules: a State Align Block and a Frame History Router.
The state align block has a similarity-based retrieval mechanism that implicitly accounts for inter-frame motion and alignment from previous frames, and those retrieved representations are summarized and stored into a truncated history.
Then the frame history router generates output frames by cross-frame channel attention with the motion-compensated history states.
In experiments, the proposed method shows a promising result in a number of video restoration tasks including deraining, desnowing, deblurring, and super-resolution.
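The retrieval step summarized above can be sketched roughly as follows (a NumPy illustration under assumed patch shapes; names are hypothetical, not the paper's code): each current-frame patch attends only to its top-k most similar history patches, and the retrieved patches are aggregated with softmax-normalized similarities.

```python
import numpy as np

def topk_patch_retrieval(cur_patches, hist_patches, k=4):
    """cur_patches: (n, d) current-frame patch features;
    hist_patches: (m, d) patch features from the truncated history.
    Returns an (n, d) motion-compensated summary per current patch."""
    sim = cur_patches @ hist_patches.T                   # (n, m) similarities
    idx = np.argsort(sim, axis=1)[:, -k:]                # top-k per patch
    topk_sim = np.take_along_axis(sim, idx, axis=1)      # (n, k)
    w = np.exp(topk_sim - topk_sim.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                    # softmax over top-k
    # weighted sum of the retrieved history patches: (n, k) x (n, k, d) -> (n, d)
    return np.einsum('nk,nkd->nd', w, hist_patches[idx])
```

Restricting attention to the top-k matches is what lets the alignment account for inter-frame motion without computing full attention over every history patch.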
Strengths: - The proposed methods outperforms others in a number of video restoration tasks and benchmarks.
- The proposed causal history model has a capability to consider spatio-temporal-channel correlation by the state align block (spatio-temporal) and the frame history router (channel-temporal).
- It theoretically shows the link with the space state model.
- It seems to increase efficiency by designing the encoder to process single frame only while the decoder to process multiple frames.
Weaknesses: - Finding similarity from history frames, as done in the state align block, has been proposed previously [1], and the frame history router (channel-wise attention) appears to be the new component introduced in this paper. However, there is no ablation study with and without the frame history router, so it is hard to see the improvement it contributes.
[1] Learning Trajectory-Aware Transformer for Video Super-Resolution (CVPR22)
- There is no detailed model architectures and model parameter (and runtime) comparisons with others.
Technical Quality: 4
Clarity: 4
Questions for Authors: - Does the size of p1 and p2 affect the results?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: It is surprising that the proposed method shows excellent performance across all the various experiments.
One aspect that could be improved is that there is no strong guidance for temporal consistency; I noticed background flickering in the deraining scene in the attached video.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for finding our work important and for their insightful and constructive comments.
**W1** Frame history router ablation study.
In the main paper, we conducted ablation experiments to analyze the effects of different components in CHM. We investigated three setups: No CHM, No State Align Block (only Frame History Router), and Turtle (both FHR and SAB), as shown in Table 9 (in the main paper). For clarity, we have reproduced the table below. The results demonstrate that using the Frame History Router alone yields a +0.23 dB increase in PSNR, and adding SAB on top of FHR provides an additional +0.19 dB in the ablation experiment.
| **Methods** | **PSNR** |
|---|---|
| No CHM | 31.84 |
| No State Align Block (only Frame History Router) | 32.07 |
| Turtle (both State Align Block and Frame History Router) | **32.26** |

**Table 1**: _Ablating the CHM block to understand whether both the State Align Block and the Frame History Router are necessary._
The paper "Learning Trajectory-Aware Transformer for Video Super-Resolution (TTVSR)" [36] processes the current frame and multiple neighboring frames together, which is inefficient. Both TTVSR and Turtle use attention to find similar patches, but Turtle has distinct advantages. Turtle uses a history state $H_t$ and reuses features from past frames, reducing computational inefficiencies. While TTVSR learns the entire trajectory of each pixel (or group of pixels) in the video, Turtle limits the history and relevant patches. This is beneficial because the entire trajectory is often not useful in video restoration due to rapid changes caused by degradations like snow, rain, and raindrops.
Additionally, Turtle's Frame History Router (FHR) efficiently performs channel attention to correlate each patch in the current frame with the most relevant historical features, avoiding the overhead of full attention between frames while maintaining high restoration quality. The superiority of our proposed method is supported by the results in Table 7 and Table 5 in the main paper, which show that Turtle significantly outperforms TTVSR. Specifically, Turtle achieves a +3.96 dB increase in PSNR for Video Raindrop and Rain Streak Removal and a +1.7 dB increase in video super-resolution.
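A minimal sketch of the channel-wise (transposed) attention idea — illustrative, with assumed shapes, not Turtle's actual implementation: the attention map is C x C over channels rather than N x N over spatial positions, which is why its cost stays independent of frame resolution.

```python
import numpy as np

def channel_attention(F_cur, F_hist):
    """F_cur, F_hist: (C, N) feature maps flattened over N spatial positions.
    Queries come from the current frame, keys/values from history features."""
    C, N = F_cur.shape
    A = F_cur @ F_hist.T / np.sqrt(N)          # (C, C) channel correlations
    A = np.exp(A - A.max(axis=1, keepdims=True))
    A /= A.sum(axis=1, keepdims=True)          # softmax over history channels
    return A @ F_hist                          # (C, N) history summary
```

Because `A` is only C x C, this avoids the quadratic-in-resolution cost of full spatial attention between frames while still correlating current features with the most relevant historical ones.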
**W2.** Runtime comparisons with other methods.
We profile Turtle and compare it with three previous state-of-the-art video restoration methods, ShiftNet, VRT, and RVRT on varying spatial resolution sizes in terms of runtime per frame (ms), GPU memory usage, FLOPs (G), and MACs (G), refer to Table 1 in global rebuttal. Turtle can process videos at varying spatial resolutions on a single 32 GB GPU, while all three other methods throw out-of-memory errors since the memory requirements exceed the total available memory.
**Q1.** Effect of patch size.
CHM (specifically SAB) is effective with reasonable patch sizes. If the patch size is too small, attention computation shifts to a pixel-to-pixel level, increasing the computational cost and making it harder to find similar patches, since each pixel holds limited information. Conversely, if the patch size is too large, top-k filtering suffers because of excessive similarities across the frame's spatial resolution. Based on our experiments, a patch size of 3x3 or 4x4 yields optimal results.
**L1.** No strong temporal guidance.
For the purpose of this work, we opted for a simple L1 loss function without any additional auxiliary losses. However, coupling some temporal consistency-based loss functions with the restoration loss can potentially alleviate such flickering [86] in very high frame-rate videos.
[86] Dai, Peng, et al. "Video demoireing with relation-based temporal consistency." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.
---
Rebuttal Comment 1.1:
Comment: Thanks for your detailed rebuttal and my concerns have been well addressed.
---
Rebuttal 2:
Comment: Dear Reviewer PswG,
We are glad that our clarifications have addressed your concerns. Your input has been invaluable throughout the review process. We thank you for your effort in reviewing our paper and for finding it important. | Summary: This paper proposes a truncated causal history model (TURTLE) for video restoration. TURTLE is in a U-Net manner with a historyless encoder and a history-based decoder. The decoder has a causal history model (CHM), which is the core part of the TURTLE. The CHM injects history frames into a hidden state $\mathbf{H}_t$ with the state align block, which is based on a cross-attention mechanism and a top-k selection strategy. The current frame is then fused with the hidden state with cross-frame attention. Experiments on several tasks demonstrate that the proposed methods can outperform the comparable methods.
Strengths: 1. The hidden state strategy provides a new thought for video processing tasks.
2. The proposed method outperforms comparable methods on several video restoration tasks.
Weaknesses: 1. The organization of ablation studies is inappropriate.
2. Some of the datasets are uncommon.
3. Some key information is lacking.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The organization of ablation studies is inappropriate.
1.1 It's improper to put the tables of ablation studies into the appendix.
The ablations of some parts are missing.
1.2 The ablation study of top-k fusion in the state align block is missing. Most of the time, the information fusion operation in the top-k selection position of CHM is implemented by a softmax. How about using a softmax to replace top-k, i.e., remove the top-k selection strategy?
1.3 The implementation of the ''No CHM'' in the ablations is unclear. It states ''two frames are concatenated and fed to the network directly'', which is unclear. What network are the two frames fed to?
1.4 What is the patch size $p$ in the state align block? How about using different $p$?
2. There are important video restoration methods absent in the comparable tables.
[1] K. Zhou, et al., Revisiting Temporal Alignment for Video Restoration. CVPR 2022.
(Link of codes: https://github.com/redrock303/Revisiting-Temporal-Alignment-for-Video-Restoration)
[2] D. Li, et al., A Simple Baseline for Video Restoration with Grouped Spatial-Temporal Shift. CVPR 2023.
(Link of codes: https://github.com/dasongli1/Shift-Net)
3. Recurrent structures in neural networks can bring considerable costs. How costly is the CHM? In other words, comparisons of running time and GPU memory cost are absent.
4. Some of the datasets are uncommon. For example, in the video super-resolution experiments, this paper selects MVSR4X as the dataset. However, recent comparable methods (such as BasicVSR and VRT) use REDS and Vimeo-90k as benchmarks. It seems that in Table 7, TURTLE outperforms BasicVSR++ by a large margin, but was the BasicVSR++ model used in this paper trained to be evaluated on MVSR4X?
5. Do all the models for each task in the paper use the same number of input frames? If not, how to explain the rationality of the experiments?
6. What are the training datasets for each task? Are all the methods in the same comparison table trained on the same dataset (or under the same settings)?
7. It looks like the hidden state $\mathbf{H}$ in the CHM needs some frames to start. How to handle the boundary frames of videos?
8. This paper emphasizes the proposed causal history model, but in the method part, it looks like the core part is a module for alignment (the state align block). For extracting features from hidden states for restoration, what is the design motivation? It would be better to introduce the components of CHM other than the state align block in more detail.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Please see the Questions part.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and insightful comments.
**Q1.1.**
We placed ablation studies in appendix given space limit, but can move them to the main paper.
**Q1.2.**
During the rebuttal, we added ablation experiments comparing softmax and top-k (k=5), as shown in Table 2 of the one-page PDF. The main paper included experiments on the effect of k in top-k patch selection in Table 11. Softmax is a special case of the top-k method with k equal to the total number of patches in the frame (retrieving all patches). We can see that top-k is better than softmax, as it prevents the inclusion of unrelated patches and contributes critically to performance and computational efficiency.
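To make the special-case relationship concrete, here is a minimal NumPy sketch (our own illustration, not the paper's code) of attention restricted to the top-k keys; setting k to the total number of patches recovers ordinary softmax attention:

```python
import numpy as np

def topk_attention(query, keys, values, k):
    """Softmax attention restricted to the k keys most similar to the
    query. With k == len(keys), this is ordinary softmax attention.
    Shapes: query (d,), keys (n, d), values (n, d)."""
    scores = keys @ query                # similarity of the query to each patch
    top = np.argsort(scores)[-k:]        # indices of the k most similar patches
    w = np.exp(scores[top] - scores[top].max())
    w /= w.sum()                         # softmax over the kept set only
    return w @ values[top]

rng = np.random.default_rng(0)
q = rng.standard_normal(8)
K = rng.standard_normal((16, 8))
V = rng.standard_normal((16, 8))
full = topk_attention(q, K, V, k=16)     # equivalent to plain softmax attention
sparse = topk_attention(q, K, V, k=5)    # unrelated patches are excluded
```

The sparse variant both skips the unrelated patches and reduces the aggregation cost from n to k value rows per query.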
**Q1.3.**
Turtle takes a single frame from the video at time $t$ as input; the encoder processes this frame, while the middle and decoder stages condition restoration on historical frames. In `No CHM` setting, we concatenate frames at times $t$ and $t-1$, which are fed into the same architecture as presented in Fig. 1(a) and Sec. 3.1, without CHM blocks in middle or decoder stages—only Transformer blocks are used. We used the same-sized models by adjusting the dimensions/blocks in the `No CHM` setting to match the size of Turtle.
**Q1.4.**
We use a patch size of 3x3. CHM is effective as long as the patch size is reasonable. If the patch size is too small, attention computations shift to a pixel level, making it harder to find similar patches since each pixel holds limited information. Conversely, if the patch size is too large, top-k filtering becomes less effective due to excessive similarities across the frame's spatial resolution. Therefore, a patch size of 3x3 or 4x4 yields optimal results.
**Q2.**
We added comparisons to RTA (CVPR'22) for video denoising at two noise levels, sigma = 30, 50, in Table 3 of the one-page PDF. Unlike RTA, which requires both the frame and an additional noise map as input, Turtle is completely blind to noise levels when processing a degraded input frame, making the task harder. We did not include ShiftNet as it is significantly more costly than Turtle, although runtime and other comparisons for a reduced ShiftNet on a single GPU are presented in Table 1 of the Global Rebuttal.
**Q3.**
We profile Turtle and compare it with 3 state-of-the-art video restoration methods, ShiftNet, VRT, and RVRT on varying spatial resolutions, in Table 1 of Global Rebuttal. Turtle can process videos at varying spatial resolutions on a single 32GB GPU, while others report out-of-memory errors as the resolution increases. A major benefit of Turtle is that it processes a single frame and uses the truncated history, instead of processing several frames in parallel in multiple branches, and thus is significantly faster and more efficient.
**Q4.**
Our focus is on video restoration, while we used MVSR4x to show Turtle's generalizability to more tasks. In fact, MVSR4x is a more recent real-world SR dataset from CVPR 2023, while VRT and BasicVSR were introduced earlier in 2022 and 2021 and thus didn't evaluate on MVSR4x. MVSR4x is also a challenging dataset, featuring lower resolution frames from phone cameras, resulting in lower PSNR scores compared to REDS and Vimeo90K.
Yes, BasicVSR++ is also trained on the MVSR4x dataset and tested on the held-out test set of MVSR4x. We have carefully ensured all comparisons are fair. In all tables, all the methods are directly trained on the same dataset and follow the same procedure as Turtle. We either take results from the original work, or from the work that introduced the dataset and retrained these methods on it after careful tuning (the case of the MVSR4x paper, which retrained BasicVSR++ when introducing the dataset).
**Q5.**
They do not. ShiftNet uses a context length of 50 frames, while VRT and RVRT utilize 16 frames. RTA uses 5 context frames. These methods compare against each other according to standard practice in the video restoration literature. Although Turtle uses a truncated history of only 3 frames, it achieves state-of-the-art results across different tasks, demonstrating its efficiency.
**Q6.**
Yes, to ensure complete fairness in comparisons, all methods we compare are trained on the same dataset and follow the same train/test splits recommended in the literature. The datasets used are listed in Appendix F of the main paper.
**Q7.**
Boundary conditions occur in the first $\tau$ frames. The first frame is restored without conditioning, the second frame using the first frame as history, the third frame using the first two frames, and so on. From frame $\tau+1$ onward, the history is truncated to the set truncation factor $\tau$.
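This warm-up schedule can be written down directly (a hypothetical 0-indexed helper for illustration, not taken from the released code):

```python
def history_for_frame(t, tau):
    """Indices of the history frames conditioning frame t: all earlier
    frames, until the truncation factor tau is reached."""
    return list(range(max(0, t - tau), t))

# tau = 3: frame 0 -> [], frame 1 -> [0], frame 2 -> [0, 1],
# frame 3 -> [0, 1, 2], frame 4 -> [1, 2, 3] (truncated to the last 3)
schedule = [history_for_frame(t, tau=3) for t in range(5)]
```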
**Q8.** As Sec. 3.2 describes, the overall rationale of CHM is best explained by Eqs. (1) and (2): Eq. (1) retrieves the relevant information from the prior frames $H_t$ and transforms it into channels of (succinct) hidden features $\hat{H_t}$ via $\phi_t$, while Eq. (2) attends the current frame $F_t$ to this succinct historical feature to restore $F_t$ via $\psi_t$. In detail, Eq. (1) is implemented by the State Align Block (SAB) $\phi_t$, which, for each patch in $F_t$, retrieves and keeps only the top-k relevant patches from a prior frame (in the truncated history) and "moves" them spatially to align with this patch as additional channels. This ensures that each patch in $F_t$ attends neither to all patches in a prior frame nor to the patch at the same position in a prior frame (since pixels have moved), but only to the most relevant ones. Eq. (2) is implemented by the Frame History Router (FHR) $\psi_t$, which achieves temporal correlation across frames through efficient channel attention with the succinct historical features (instead of full attention between frames). For each patch in $F_t$, FHR "routes" attention to the prior historical features that are most helpful in restoring that patch. Therefore, both Eq. (1) and Eq. (2), i.e., both SAB and FHR, are core to this design and work together to achieve the stated feature reduction and computational efficiency of Turtle.
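For readers following along, the two-step design described above can be rendered schematically in NumPy. This is our own simplified sketch: the shapes (patches as rows, channels as columns), the similarity measure, and the residual fusion are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def state_align_block(F_t, H_prev, k):
    """Eq. (1), schematically: for each current-frame patch (a row of F_t),
    keep only the top-k most similar patches of a prior state H_prev and
    'move' them to align with that patch as extra channels."""
    sims = F_t @ H_prev.T                          # (n, m) patch-to-patch similarities
    topk = np.argsort(sims, axis=1)[:, -k:]        # k most relevant prior patches per patch
    return H_prev[topk].reshape(F_t.shape[0], -1)  # (n, k*d) spatially aligned channels

def frame_history_router(F_t, H_hat):
    """Eq. (2), schematically: channel attention between the current frame
    and the aligned history, so cost scales with channel count rather
    than with spatial resolution."""
    A = softmax(F_t.T @ H_hat)     # (d, k*d) channel-to-channel affinities
    return F_t + H_hat @ A.T       # historical channels routed back to F_t

rng = np.random.default_rng(0)
F_t = rng.standard_normal((64, 16))        # 64 patches, 16 channels
H_prev = rng.standard_normal((64, 16))     # a prior frame's hidden state
H_hat = state_align_block(F_t, H_prev, k=5)
restored = frame_history_router(F_t, H_hat)
```

Note how the attention in the second step is computed between channel vectors, never between all spatial positions, which is the efficiency point made above.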
---
Rebuttal Comment 1.1:
Comment: Thanks for your rebuttal. I raised my rating to borderline reject.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer BXMC,
Thank you for your reply. We appreciate your comments and time spent on reviewing this paper again. In the rebuttal, we have addressed your main questions/concerns raised in your review, including:
1. Ablation Studies: We provided the ablation study for the proposed top-k fusion in the state align block vs. softmax as suggested, and demonstrated the effectiveness of top-k.
2. Comparisons to more baselines: We added comparisons to ShiftNet and RTA as suggested, highlighting Turtle’s advantages on performance and efficiency.
3. Efficiency/cost evaluation: We provided detailed comparisons of running time and GPU memory cost, showing Turtle’s efficiency across varying spatial resolutions.
4. Dataset for SR: We explained our use of the MVSR4X dataset as a recent and challenging benchmark for super-resolution task, as an additional task we evaluate on, and clarified our fair comparison to BasicVSR++ trained on the same dataset.
5. Extensive and fair comparisons with baselines: We ensured fairness by using the same settings, training/testing data splits, and careful tuning across all the methods evaluated. We have done extensive experimental studies and comparisons on a range of tasks and datasets.
6. CHM Design Rationale: We provided a detailed explanation of the design rationale behind the Causal History Model (CHM), and clarified all its critical components.
We hope the above rebuttal has addressed all the major questions and doubts raised in your review, and that these responses will be taken into account in reaching your conclusion. Please let us know if you have any remaining concerns or questions about this work. Thanks again for your time!
---
Rebuttal 2:
Title: Inquiring about additional questions/concerns.
Comment: Dear Reviewer BXMC,
Thank you very much for your time spent on reviewing our work. In response to your questions, we had provided detailed explanations to clarify the points discussed and addressed the concerns you highlighted. As the deadline for the discussion period is quickly approaching, we are keen to know if our responses have addressed all your concerns. We are committed to answering any further questions that you may have.
Thank you again for your valuable time and expertise.
---
Rebuttal 3:
Comment: Dear authors,
Thank you for your reply. Some of my questions are not addressed well.
1. Comparisons: Shiftnet is not compared. Since the re-training is in the same settings as the proposed method, why not modify the number of input frames?
2. Cost: There are many methods in this paper, only the costs of ShiftNet, VRT, RVRT, and Turtle are shown.
---
Rebuttal 4:
Title: Addressing Further Questions
Comment: Dear Reviewer BXMC,
Thank you for your detailed feedback and questions. We appreciate the opportunity to address your queries and provide further clarification.
**1. Comparison to ShiftNet.**
We had indeed trained ShiftNet with a context of 3 frames to match the input settings of the proposed method, Turtle, on synthetic video deblurring (GoPro). However, we did not initially include these results in the manuscript because the original ShiftNet was designed for a context length of 50 frames (or even more, according to their paper and repo) and relies on multiple high-end GPUs, a setting not straightforwardly comparable to our method and the other baselines evaluated in this work. Per the reviewer's request, we now provide these results in comparison to ShiftNet (retrained with a 3-frame context on GoPro) in the following table.
| **Method** | **PSNR** | **SSIM** | **MACs (G)** | **Inf. Time (ms)** |
|-|-|-|-|-|
|ShiftNet|33.20| 0.962|399|165|
|Turtle|**34.50**|**0.972**|**181**|**95**|
_**Table 1:**_ Comparison to ShiftNet on the video Deblurring task (GoPro) in terms of PSNR/SSIM, and profiling on input resolution of 256x256x3 on a single 32GB GPU.
We can see that ShiftNet-3frame is almost 1 dB lower than Turtle and is not competitive with the other baselines in Table 4 that are runnable on a single V100 GPU. The much lower performance of ShiftNet with a 3-frame context is because its original design needs to perform restoration jointly for a larger number (50 or more) of frames to incorporate motion-compensated neighboring information.
Additionally, ShiftNet's high computational demands are notable in the community. Although their paper does not specify the compute requirements, their GitHub repository indicates that training a model with a batch size of 1 and 13 frames required 8x 32GB V100 GPUs. The full 50-frame context model's compute and memory requirements at inference remain unspecified, and there have been multiple comments on the computational challenges faced when running their open-source implementation:
[1] https://github.com/dasongli1/Shift-Net/issues/1 {/7,/9}
Another nuance to be considered is that ShiftNet by design assumes the availability of the future frames and restores all frames in the context jointly, while the proposed method Turtle has the strength that it does not rely on future frames and can be applied in real-time scenarios (e.g., video streaming).
**2. Cost Comparisons**
In Table 1 of the global rebuttal, we chose to compare the compute costs, inference time, and memory requirements of Turtle to VRT [33] (2022, published in 2024), ShiftNet [28] (2023), and RVRT [32] (2022) because, like Turtle, they are all general video restoration techniques rather than being designed for a specific task. Methods designed for specific restoration tasks are optimized for those tasks and, therefore, are not intended to perform competitively across a broad range of restoration tasks as Turtle does.
We can see from Table 1 in global rebuttal that while methods like ShiftNet, VRT, and RVRT exhibit exponential growth in GPU memory requirements as resolutions increase, Turtle features linear scaling in GPU memory usage for higher resolutions, underscoring its computational efficiency advantage.
However, in Table 8 of the main paper, we have also provided MACs comparisons to more different video restoration methods other than the ones listed in global rebuttal, including some task-specific ones.
The methods for cost comparison in Table 8 are selected as follows. First, we included two general methods, RVRT and VRT, due to their high citation counts and popularity, recent publication, and competitive performance. Second, for the task-specific methods, our selection was based on their respective performance (focusing on those that ranked second or third best in the respective tables), as well as the availability of open-source implementations (e.g., code is not available for SVDNet [9] and MetaRain [46], both of which also underperform Turtle). Furthermore, if the performance gap between a method and Turtle was larger than 1 dB PSNR (a substantially large gap on a log scale), we did not consider that method for cost comparison, since its task performance is not even close.
Table 8 and the extensive task results presented in Tables 1-7 suggest that Turtle either substantially outperforms the baselines or achieves the leading results on all the major video restoration tasks evaluated, while still being more computationally efficient. For example, EDVR, BasicVSR, and RDDNet have relatively low GMACs in Table 8 (though still higher than Turtle), but they achieve much lower performance than Turtle in Tables 7 and 5. These results verify the effectiveness and efficiency of Turtle compared to the baselines.
Thank you again, and we hope these clarifications address your concerns.
---
Rebuttal Comment 4.1:
Comment: Dear authors,
Thank you for your clarification, I've raised my rating to be positive.
---
Rebuttal 5:
Comment: Dear Reviewer BXMC,
Thank you for your valuable input, insight, and feedback, which helped improve our work. We appreciate your positive stance on our work and your decision to raise the score. | Summary: This work presents a video restoration framework named TURTLE, which stands for truncated causal history model. The key innovation of TURTLE is its ability to efficiently model the transition dynamics of video frames governed by motion, a critical challenge in video restoration. Unlike traditional methods that process multiple contextual frames simultaneously, TURTLE enhances efficiency by storing and summarizing a truncated history of the input frame's latent representation into an evolving historical state.
Strengths: 1. The paper introduces a novel video restoration technique that leverages a truncated causal history model, which is a unique approach to handling the transition dynamics of video frames influenced by motion. This represents a significant advancement in the field of video processing.
2. The TURTLE framework achieves state-of-the-art results across a wide range of video restoration tasks, including desnowing, deraining, raindrop and rain streak removal, super-resolution, deblurring, and blind video denoising. The consistent high performance across different tasks underscores the effectiveness of the proposed method.
3. TURTLE is designed to be computationally efficient, which is a critical consideration for practical applications. The framework reduces computational costs compared to existing methods while maintaining high performance, making it suitable for resource-constrained environments.
Weaknesses: 1. The section on related work is seriously lacking in content; the author should revise this section to include a more detailed explanation of related work, especially in the areas of temporal modeling and Causal Learning.
2. There is a scarcity of Visual Comparisons, and many tasks lack real-world sample comparisons. This is a serious issue, and I hope the author can provide a complete set of comparisons.
3. Why is it necessary to use a Causal History Model in each decoder stage? I believe applying CHM in the latent space should be sufficient. This raises concerns about the actual inference efficiency of the model.
4. There is a lack of comparison regarding actual inference time, as well as a crucial study on the actual VRAM usage at high resolutions, both of which are very important; the reference value of GMacs alone is limited.
Technical Quality: 3
Clarity: 2
Questions for Authors: Why is it necessary to use a Causal History Model in each decoder stage? I believe applying CHM in the latent space should be sufficient. This raises concerns about the actual inference efficiency of the model.
There is a lack of comparison regarding actual inference time, as well as a crucial study on the actual VRAM usage at high resolutions, both of which are very important; the reference value of GMacs alone is limited.
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Please refer to the weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback and reading our work.
**W1.** The section on related work is seriously lacking in content.
Due to space constraints, we had to be selective in the related work section. In this rebuttal, we would like to add a literature review on temporal modeling and causal learning for videos, as suggested.
**Temporal Modelling:** In video restoration, temporal modeling mainly focuses on how the neighboring frames (either in history or in the future) can be utilized to better restore the current frame.
For such a procedure, the first step usually involves compensating for motion either through explicit methods (such as using optical flow [32], [33], [35], [5]), or implicitly (such as deformable convolutions [61], search approaches [62], or temporal shift [28]). A few works in the literature focus on reasoning at the trajectory level (i.e., considering the entire frame sequence of a video) [36] through learning to form trajectories of each pixel (or some group of pixels). The motivation is that in this case, each pixel can borrow information from the entire trajectory instead of focusing on a limited context. The second step is then aggregating such information, where in the case of Transformers, self-attention is employed, while MLPs are also used in other cases.
**Causal Learning for Videos:** In videos, causal learning is generally explored in the context of self-supervised learning to learn representations from long-context videos with downstream applications to various video tasks such as action recognition, activity understanding, etc. In [85], causal masking of several frames at various spatio-temporal regions as a strategy to learn the representations is explored. To the best of our knowledge, almost all of the state-of-the-art video restoration methods are not causal by design since they rely on forward and backward feature propagation (i.e., they consider both frames in history and in the future) either aligned with the optical flow or otherwise [32], [33], [5].
[85] Bardes, Adrien, et al. "Revisiting feature prediction for learning visual representations from video." arXiv preprint arXiv:2404.08471 (2024).
**W2.** Visual Comparisons.
We have already provided extensive visual comparisons between Turtle and state-of-the-art baseline methods in the main paper for a range of tasks on the respective benchmark datasets, including desnowing in Figures 3 and 11, night deraining in Figure 3, deblurring in Figures 4 and 9, raindrop removal in Figure 4, and blind video denoising in Figure 5. These visual comparisons are done on the benchmark datasets on which the methods are evaluated, in order to align with the numerical performance comparisons reported in the tables, following standard practice in the literature. We have also included a visual comparison on real-world video super-resolution in Figure 5.
During the rebuttal, we added more real-world visual results, including real-world deblurring from the BSD 3ms-24ms dataset and snow removal from real snowy videos taken from www.pexels.com (a free stock video website). Please refer to Figure 1 in the one-page PDF for the visual results.
Taken together, these results show the competence of Turtle not only numerically but also visually.
**W3.** Is CHM in Latent sufficient?
In rebuttal, we add an ablation study for the potential benefits of adding CHM at the decoder stage compared to adding it only at the latent stage. The ablation results are included in Table 1 of the one-page PDF. This table is also shown below:
| **Ablations** | **PSNR** |
|----------------------------------|----------|
| No CHM | 31.84 |
| CHM in Latent | 32.05 |
| CHM in Latent & Decoder (Turtle) | **32.26** |
**Table 1**: _Ablation experiments on CHM placement in latent, and latent and decoder._
Our experiment indicates that having CHM in both the latent and decoder stages is necessary for optimal performance. In the latent stage, the spatial resolution is minimal, and CHM provides greater benefit in the following decoder stages as the spatial resolution increases.
Furthermore, we also calculated the computational overhead of CHM: out of Turtle's 181 MACs (G), all CHM blocks collectively contribute 11.8 MACs (G), about 7% of the overall computation cost, to achieve temporal modeling. In contrast, processing and restoring multiple frames in parallel as in ShiftNet [28], learning trajectories as in TTVSR [26], or deploying an additional optical flow network [32, 33] entail considerably higher costs.
**W4.** Lack of comparison at different resolutions.
We profiled the proposed method Turtle and compared it with previous video methods ShiftNet, VRT, and RVRT. We compute per-frame inference time (ms), MACs (G), FLOPs (G), and GPU memory usage (MBs) at varying input resolutions (refer to Table 1 in global rebuttal).
In practice, note that ShiftNet considers a context of 50 frames, VRT considers a context of 16 frames, and RVRT considers a context of 16 frames for denoising and deblurring; for super-resolution, a total of 30 frames is fed. Note that both VRT and RVRT also rely on an optical flow architecture (SpyNet) and fine-tune it during training for the restoration procedure. These factors significantly increase their computational costs and limit their deployability.
However, Turtle stands out by only considering a single frame and conditioning the restoration on the history of the current frame up to a truncation factor (τ =3). This setup significantly enhances efficiency, allowing Turtle to perform inference at varying spatial resolutions on a single GPU. This capability is crucial for the successful deployment of video restoration models on hardware-constrained devices, where the availability of multiple GPUs is often limited or impractical.
---
Rebuttal 2:
Title: Follow up and Inquiry for Further Questions/Concerns.
Comment: Dear Reviewer SsXx,
Thank you very much for your time and insightful comments. We have provided detailed explanations to clarify the points and questions raised. As the deadline for the discussion period is quickly approaching, we would like to know whether all of your questions have been addressed. We are committed to answering any questions or concerns that you may have.
Thank you again for your valuable time. | Summary: The paper presents a novel framework TURTLE for video restoration. TURTLE aims to improve video restoration tasks by modeling and utilizing truncated historical data of input video frames to enhance the restoration quality while maintaining computational efficiency. The proposed method demonstrates state-of-the-art results across various video restoration benchmarks, including desnowing, deraining, deblurring, and super-resolution.
Strengths: - The introduction of a truncated causal history model is novel and addresses the limitations of both parallel and recurrent video processing methods.
- The paper thoroughly explains the TURTLE architecture and its components, including the encoder, decoder, and Causal History Model (CHM).
- The method achieves superior results on multiple video restoration benchmarks, showing clear improvements over existing techniques. The method is evaluated across various video restoration tasks quantitatively and qualitatively.
Weaknesses: - This paper did not discuss the parameters and FLOPs of this model.
- Lack of comparison with Restormer: Efficient transformer for high-resolution image restoration. (CVPR2022), A Simple Baseline for Video Restoration with Grouped Spatial-temporal Shift. (CVPR2023)
- Lack of visualization results of the effectiveness of truncation factor and value of topk.
Technical Quality: 3
Clarity: 2
Questions for Authors: - Line 24, the author claims recurrent designs often result in cumulative errors. I think it would be better to explain why a truncated causal history model can avoid this phenomenon.
- As shown in Fig. 7, how many frames are affected by incorrect tracking query points? Will this reduce the restoration quality of subsequent frames?
- I noticed a latent cache block in your code that is barely mentioned in the paper. Could you explain its function and impact on the model's performance?
- Please refer to the weaknesses above.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 4
Limitations: The authors have largely addressed their limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for reviewing our work and providing constructive comments.
**W1.** FLOPs of Turtle.
We have provided the Turtle's MACs (G) in Table 8 of the main paper and discussed its comparison to several previously published video restoration methods in Section 4.8. Additionally, during rebuttal, we have profiled the proposed method, Turtle, and compared it with previous methods, including ShiftNet [28], VRT [32], and RVRT [33], as reported in Table 1 of the global rebuttal. We computed per-frame inference time (ms), MACs (G), FLOPs (G), and GPU memory usage (MB).
From these comparisons, we can see Turtle is significantly more computationally efficient. ShiftNet considers a context of 50 frames, VRT considers 16 frames, and RVRT also uses 16 frames. Both VRT and RVRT rely on fine-tuning an optical flow model (SpyNet) to restore a video. In contrast, Turtle only considers a single frame and conditions its restoration on the recent history of the current frame up to a truncation factor (τ = 3), which reduces computational costs and enhances efficiency.
This computational efficiency is achieved by the proposed CHM which contains a state align block (SAB) to retrieve only top-k relevant patches from recent frames to help restore each patch, and a Frame History Router (FHR) to further route attention to features in the most relevant historical frame. Both SAB and FHR together achieve efficient temporal cross-frame attention calculation.
**W2.** Lack of comparison with Restormer, and ShiftNet.
Restormer [76] is designed for Image Restoration and does not account for the temporal nature of video frames. Despite this difference, here we provide a comparison with Restormer on the GoPro video deblurring task. Unlike video restoration methods, Restormer treats video frames as individual images. Consequently, it is uncommon in the literature to compare video restoration methods with image restoration methods as it is not a fair comparison.
| **Methods** | **Task** | **PSNR** | **SSIM** |
|-------------|-------------------|----------|----------|
| Restormer | Image Restoration | 32.92 | 0.961 |
| Turtle | Video Restoration | **34.50** | **0.972** |
**Table 2**: _Comparison of Turtle with Restormer on the GoPro deblurring task in terms of PSNR and SSIM metrics._
We have not compared our model with ShiftNet due to the immense difference in computational cost, where a major benefit of Turtle is its low computation cost. In Table 1 (refer to the global rebuttal), we provide a detailed comparison of Turtle with ShiftNet [28] in terms of FLOPs, GPU memory consumption, and inference time for a single frame on a 32 GB GPU. To compute the numbers in Table 1, we considered an 8-frame context for ShiftNet, as it was not feasible to run even the smallest ShiftNet model with its full 50-frame context length, which was used to report the performance in their paper, on a single 32 GB GPU. Additionally, it is important to note that ShiftNet considers a 50-frame context, both in the future and in history. Therefore, unlike Turtle, ShiftNet is not suitable for online video restoration (a typical use case in streaming scenarios) since it relies on future frames.
**W3.** Lack of visualization results of the effectiveness of truncation factor and value of top-k.
During rebuttal, we have added the visualization results for the value of top-k (with k = 5 and k = 20); refer to Fig. 2 in the one-page PDF. As for the effect of the truncation factor, increasing the truncation factor will increase computational costs. Our experiments, presented in Table 10, revealed that raising the truncation factor from 3 to 5 did not enhance PSNR scores or visual quality and merely increased computational expenses. Conversely, increasing the truncation factor from 1 to 3 led to visual improvements and higher PSNR scores, which justifies the slight rise in computational cost. Thus, we have chosen a truncation factor of 3 to be used in Turtle.
**Q1.** How can the truncated causal history model avoid cumulative errors?
This can be seen through Eqs. (1) and (2). Although, like in RNN models, the causal history state $\hat{H}_t$ given by (1) recursively depends on the prior causal history state $\hat{H}_{t-1}$, Eqs. (1) and (2) show that when decoding $y_t$, Turtle only needs the current frame $F_t$ and the recent truncated history $H_t$: $\hat{H}_t$ (although recursive itself) is extracted by retrieving the top-k patches from $H_t$ that align with each patch in $F_t$ and stacking them as additional channels. Therefore, the errors depend only on $F_t$ and $H_t$ with a small truncation factor and do not accumulate.
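The decoding dependence described above can be illustrated with a minimal sketch (hypothetical patch shapes and a plain cosine similarity; this illustrates top-k history retrieval in general, not Turtle's actual CHM implementation):

```python
import numpy as np

def retrieve_topk_history(f_t, h_t, k=5):
    """For each patch (row) of the current-frame features f_t, retrieve
    the k most similar patches from the truncated history h_t and stack
    them as additional channels.

    f_t: (n_patches, dim) -- current frame F_t
    h_t: (m_patches, dim) -- truncated causal history H_t
    returns: (n_patches, (k + 1) * dim)
    """
    # cosine similarity between every current patch and every history patch
    f_norm = f_t / np.linalg.norm(f_t, axis=1, keepdims=True)
    h_norm = h_t / np.linalg.norm(h_t, axis=1, keepdims=True)
    sim = f_norm @ h_norm.T                 # (n, m)
    topk = np.argsort(-sim, axis=1)[:, :k]  # indices of the k best matches
    gathered = h_t[topk]                    # (n, k, dim)
    # concatenate each current patch with its k retrieved history patches
    out = np.concatenate([f_t[:, None, :], gathered], axis=1)
    return out.reshape(len(f_t), -1)

rng = np.random.default_rng(0)
out = retrieve_topk_history(rng.normal(size=(16, 8)), rng.normal(size=(64, 8)))
print(out.shape)  # (16, 48)
```

Because the decoded output is built only from `f_t` and the small truncated `h_t`, errors cannot propagate through the recursion at decoding time.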
**Q2.** As shown in Fig. 7, how many frames are affected by incorrect tracking query points? Will this reduce the restoration quality of subsequent frames?
In some cases, where the input frames are extremely degraded or occluded (for instance, with snow covering the point of interest), the points found in the previous frames can be slightly wrong. In Fig. 7, we see this phenomenon with the front part of the zebra's torso. In the frames at times t-1, t-2, etc., the region of interest is blurred and occluded due to snow and haze. Therefore, in such cases, some mismatch can occur. However, notice that the most similar points are still somewhere on the zebra's torso, and not in other random regions (like the grass).
**Q3.** Latent Cache Block in code.
The latent block (or latent cache block, as in the code, or Middle Block in Figure 1 of the main paper) is in principle no different from the decoder blocks. The only difference is the number of CHM blocks.
---
Rebuttal 2:
Title: Please let us know if you have further questions/concerns
Comment: Dear Reviewer KLsv,
Thank you very much for your time spent on reviewing our work.
We have provided detailed explanations to clarify the points raised. As the deadline for the discussion period is quickly approaching, we are wondering whether all your questions have been addressed. We are committed to addressing any questions/concerns that you may have.
Thank you again for your valuable time.
---
Rebuttal Comment 2.1:
Comment: Thank you for your rebuttal. Most of my concerns have been addressed. I've raised my rating.
Best regards.
---
Rebuttal 3:
Comment: Dear Reviewer KLsv,
We thank you for your decision to increase the score and your effort in reviewing our paper. We appreciate your acknowledgement that our rebuttal has addressed your concerns. Your insights have helped improve our paper. | Rebuttal 1:
Rebuttal: ## **Profiling Turtle**
We profile the proposed method, Turtle, in terms of per-frame inference time (in ms), MACs (G), FLOPs (G), and GPU memory usage (in MBs) on a single 32GB Nvidia V100 GPU. ShiftNet uses a context length of 50 frames and restores all 50 frames together, VRT uses a context length of 16 frames, and RVRT uses 16 frames for deblurring and denoising and 30 frames for super-resolution. However, these settings are not runnable on a single V100 GPU and result in out-of-memory (OOM) errors. Thus, for the purpose of generating this table, we chose a context length of 8 frames for ShiftNet and 2 frames for RVRT and VRT.
| Methods | Resolution | Per Frame Inference Time (ms) | MACs (G) | FLOPs (G) | GPU Memory Usage (MBs) |
|:--------:|:-----------:|:------------------------:|:--------:|:---------:|:----------------------:|
| ShiftNet | 256x256x3 | 190 | 989 | 1978 | 2752 |
| | 640x480x3 | 510 | 5630 | 11260 | 7068 |
| | 1280x720x3 | OOM | OOM | OOM | OOM |
| | 1920x1080x3 | OOM | OOM | OOM | OOM |
| VRT | 256x256x3 | 455 | 1631 | 3262 | 3546 |
| | 640x480x3 | 2090 | 7648 | 15296 | 11964 |
| | 1280x720x3 | OOM | OOM | OOM | OOM |
| | 1920x1080x3 | OOM | OOM | OOM | OOM |
| RVRT | 256x256x3 | 252 | 1182 | 2364 | 5480 |
| | 640x480x3 | 1240 | 5294 | 10588 | 21456 |
| | 1280x720x3 | OOM | OOM | OOM | OOM |
| | 1920x1080x3 | OOM | OOM | OOM | OOM |
| Turtle | 256x256x3 | 95 | 181 | 362 | 2004 |
| | 640x480x3 | 380 | 812 | 1624 | 4826 |
| | 1280x720x3 | 1180 | 2490 | 4980 | 11994 |
| | 1920x1080x3 | 2690 | 5527 | 11054 | 24938 |
**Table 1**: _We profile the proposed method, Turtle, on a single 32 GB V100 GPU, and compare with 3 recent video restoration methods, namely ShiftNet [28], VRT [33], and RVRT [32]. We consider different input resolutions and compute the per-frame inference time (ms), total MACs (G), FLOPs (G), and the GPU memory usage of the model. OOM denotes Out-Of-Memory error i.e., the memory requirement exceeded the total available memory of 32GB._
**Note on References in Reviewer-specific Rebuttals**
We use the same reference numbering as in the main paper. For example, ShiftNet is reference [28] in the main paper. For papers that we cite additionally, we provide the references in the specific replies.
Pdf: /pdf/544606f0a9a7dd820887d873347c02a317f33a73.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Causal discovery with endogenous context variables | Accept (poster) | Summary: The authors tackle the challenging causal discovery task, namely, causal discovery from the pooled dataset collected under different environments, where the environment (a.k.a., the context) can be dependent on the system (endogenous) variables (i.e., the variables whose causality we are interested in). The proposed algorithm is simple and intuitive: It employs a modified PC algorithm using context-dependent independence tests. Although I am not so familiar with this topic, overall, I have enjoyed reading the paper. However, there are several clarity issues, so I hope that my comments will be helpful for paper revision.
Strengths: - Addressing the context that is dependent on system variables is an important problem.
- Theoretical results seem sound (although I am non-expert of this field and did not follow the details).
- Overall, the paper is well written (though there is much room for improvement).
Weaknesses: Below I will enumerate several clarity issues.
Section 1
* While the first and the second paragraphs in Section 1 are very clear, the third one was a bit disappointing. The authors suddenly introduce several technical terms, such as context-specific models, single causal union graph, and cyclic union graph, without definitions or explanations. Readers cannot understand, for instance, which graph in Figure 1 is a "single causal union graph" because there is no description of it.
* "as exemplified in Example 3.1": Example 3.1 is very far away from the Introduction. Thus, this paragraph is not well structured.
Section 3
* Relationship between the notion of intervention and the context indicator is unclear. I am not expert of causal discovery from multiple datasets like JCI algorithm, but I understand that the context is a more general notion than intervention: Some interventions can be represented using context variables, but not vice versa. Am I correct? For instance, how can non-stationary time series be formulated using context variables? Please elaborate the context variables more (in Appendix, if there is not enough space).
* Related to the above, Example 3.1 can be regarded as soft intervention on $Y$?
* Section 3.1 involving Example 3.2 is very hard to follow, although the example is intuitive. Lines 141-149 are very difficult to follow. Please introduce each variable (e.g., ice cream sale) together with its notation (e.g., $Y$). Notation is inconsistent: sometimes $R=0$, and sometimes $R=$ "ice". This paragraph should be clearer to clarify the significance of this work.
* In Section 3.2, each graph is not clearly explained. For instance, is the mechanism graph "a causal graph in a standard sense"? If so, state it clearly.
* Many notations are introduced without definition. Examples include $f_i|_{support P(Pa_i)}$, $\mathcal{F}^M$, $P_M$, and $G_{R=r}$ (Section 4.1). This makes it hard to follow the paper. Please proofread the paper before submission.
Section 4
* I am not so familiar with the statistical test for context-specific independence, but could you elaborate the statistical procedure for the testing? Are you simply picking up only the subset of data instances with $R=r$? Is it statistically reliable? Is the test statistic identical with the usual independence testing?
Technical Quality: 2
Clarity: 2
Questions for Authors: NA
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and effort in reviewing our paper and for acknowledging the importance of the problem we are working on.
We address the mentioned weaknesses as follows:
- **Section 1:** We agree that the third paragraph should be thoroughly reworked. It fails to make the point it is supposed to due to excessive jargon and details. We propose the following changes: 1) shorten the paragraph into two paragraphs, one discussing the challenges of only looking into context-specific independencies and one highlighting the contribution of our method. 2) remove jargon unless necessary, and explain any introduced jargon in more detail. 3) change Figure 1 as described in the response to reviewer 9Xqc, introducing the definitions of context-specific graphs and union graphs. We will also move Example 3.1, possibly to the introduction, or avoid referring to it, depending on what is best for the reading flow.
- **Section 3:** We agree with the reviewer that some confusion might arise from our definition of a context variable. In our case, the choice of context variable is primarily restricted by the regime-sufficiency assumptions. However, the context variable can be selected from or introduced into the dataset by expert knowledge or any regime detection algorithm, as discussed in the response to reviewers 9Xqc and J2yL. We focus specifically on understanding under which assumptions on the context variable we can obtain reliable results and how these results can be interpreted.
If we understand correctly, the reviewer is considering context variables to be something like Pearl's $\mathcal{F}$-variables. These $\mathcal{F}$-variables are functional variables (taking values in function spaces) that allow writing any (hard or soft) interventions implicitly: If $f_Y(x)$ is the mechanism at $x$, depending on parents $x$ (possibly multivariate), one can add a parent $\mathcal{F}_Y$ taking a function (e.g., the original $f_Y$) as a value and replace the mechanism at $Y$ by the evaluation map eval($f, x$) $:= f(x)$. Since $\mathcal{F}_Y$ can take a value for $f$ that represents an intervention (e.g., a constant function for a hard intervention), this generalizes interventions. Example 3.1 can indeed be regarded as a kind of soft intervention on $Y$.
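The evaluation-map construction can be illustrated with a toy sketch (the mechanisms here are hypothetical; the point is only how a functional parent $\mathcal{F}_Y$ generalizes interventions):

```python
def f_Y_observational(x):        # the original mechanism f_Y
    return 2.0 * x

def f_Y_hard_intervention(x):    # do(Y = 5): a constant function
    return 5.0

def eval_map(f, x):
    """The mechanism at Y after adding the functional parent F_Y:
    eval(f, x) := f(x), where f is the value taken by F_Y."""
    return f(x)

# the same structural equation covers both regimes, depending on F_Y's value
print(eval_map(f_Y_observational, 2.0))      # 4.0
print(eval_map(f_Y_hard_intervention, 2.0))  # 5.0
```

Changing the value of $\mathcal{F}_Y$ from `f_Y_observational` to `f_Y_hard_intervention` switches the model from the observational to the interventional regime without altering the graph over the system variables.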
Context variables are an orthogonal concept: Given multiple distinct datasets over the same set of variables, the context injects information about the specific associations in each dataset to the analysis by appending a pseudo-variable to the dataset. The value of this pseudo-variable typically is a dataset index. This context could be functional, but in general, both contexts and functional nodes coexist independently of each other.
We are interested in context-specific changes. A possible viewpoint on the strong regime-sufficiency assumption (cf. Example 3.1) is that the context variable behaves like a functional node that implements a soft intervention removing some of the parents' effects on certain variables. In comparison to JCI [17], our method reveals the internal structure of some functional nodes: Beyond discovering the children of the functional node, it additionally uncovers the graphical changes induced by the implemented soft intervention.
Generally, phenomena like drifts can be modeled using (possibly continuous) contexts but may not have a simple interventional interpretation. Temporal regimes were an important motivation for our study of context-specific behavior. To make the connection to non-stationary time series, these can be formulated using context variables. For the context to be categorical (which we and others assume), the non-stationarity cannot be a drift, etc., but must be driven by a regime change, i.e., there must be temporal regions sharing a model or context. By finding the contexts in which the time series is stationary and describing the causal relationships for those stationary parts of the time series, we can describe the non-stationary system as a multi-context system.
We will introduce these discussions and clarify the differences between other types of special variables, such as $\mathcal{F}$-variables, in the Appendix.
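As a toy illustration of viewing a regime-switching time series as a multi-context system (the two-regime series and the regime labels are hypothetical; in practice the labels would come from expert knowledge or a regime detection algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(1000)

# two temporal regimes with different mechanisms X -> Y
regime = (t >= 500).astype(int)   # context pseudo-variable R (dataset index)
x = rng.normal(size=1000)
y = np.where(regime == 0, 0.8 * x, -0.8 * x) + 0.1 * rng.normal(size=1000)

# pooled dataset: append the context indicator R as an extra column
pooled = np.column_stack([x, y, regime])

# within each context the series is stationary with a fixed mechanism
for r in (0, 1):
    sub = pooled[pooled[:, 2] == r]
    slope = np.polyfit(sub[:, 0], sub[:, 1], 1)[0]
    print(f"regime {r}: estimated slope {slope:+.2f}")  # approx +0.80 / -0.80
```

The pooled table with the appended context column is exactly the input format a context-aware discovery method can consume, while each context-specific slice remains a stationary dataset.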
- **Section 4:** Our method combines testing on the pooled and context-specific data. Context-specific independence (CSI) is indeed simply tested on the samples satisfying $R=r$. Since $R$ is categorical and $P(R=r)>0$ by assumption, $P_r := P(X,Y,Z|R=r)$ is a well-defined distribution. Therefore, testing conditional independencies on $P_r$ is well-defined and no different from any other independence test (albeit assumptions like linearity need some thought, see §B.12). The difficulty lies in the interpretation of the result, as fixing $R$ can lead to selection bias. Overcoming this problem is one of the main contributions of our paper and method.
There is a difference between testing links involving $R$ and not involving $R$ ($R$ is categorical), as discussed in §B.12. Beyond this, a further difference between testing on the pooled and context-specific data is the number of samples available for each test. This approach is as reliable as the finite-sample properties of the tests being used allow. Lastly, the more general problem of multiple testing, from which constraint-based causal discovery methods generally suffer, can also impact reliability. For this reason, we consider it advantageous that our approach reduces the number of tests.
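A minimal sketch of such a context-specific independence test (a Fisher-z partial-correlation test on the subsample with $R=r$; the test choice and the demo variables are illustrative, not the exact tests used in the paper):

```python
import numpy as np
from math import erfc, sqrt

def fisher_z_ci_test(x, y, z=None):
    """Fisher-z partial-correlation test of X _||_ Y | Z; returns a p-value."""
    n = len(x)
    if z is None or z.size == 0:
        r = np.corrcoef(x, y)[0, 1]
        dim_z = 0
    else:
        # residualize X and Y on Z (with intercept), then correlate residuals
        z1 = np.column_stack([np.ones(n), z])
        rx = x - z1 @ np.linalg.lstsq(z1, x, rcond=None)[0]
        ry = y - z1 @ np.linalg.lstsq(z1, y, rcond=None)[0]
        r = np.corrcoef(rx, ry)[0, 1]
        dim_z = z.shape[1]
    zstat = sqrt(n - dim_z - 3) * np.arctanh(np.clip(r, -0.999999, 0.999999))
    return erfc(abs(zstat) / sqrt(2.0))  # two-sided p-value

def context_specific_ci(data, R, r, i, j, cond=()):
    """Test X_i _||_ X_j | X_cond within context R = r by simply
    restricting to the samples satisfying R = r."""
    sub = data[R == r]
    z = sub[:, list(cond)] if cond else None
    return fisher_z_ci_test(sub[:, i], sub[:, j], z)

# demo: X -> Y holds only in context R = 1
rng = np.random.default_rng(42)
n = 2000
R = rng.integers(0, 2, size=n)
X = rng.normal(size=n)
Y = np.where(R == 1, 0.9 * X, 0.0) + rng.normal(size=n)
data = np.column_stack([X, Y])
print(context_specific_ci(data, R, 1, 0, 1))  # p-value within context R = 1
print(context_specific_ci(data, R, 0, 0, 1))  # p-value within context R = 0
```

With this setup one would expect a tiny p-value in context $R=1$ and a non-significant one in context $R=0$; interpreting such results correctly despite the possible selection bias from fixing $R$ is what the paper's method addresses.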
---
Rebuttal 2:
Title: Response
Comment: Thank you for your response.
I roughly understand what the authors mean, but still, I cannot clearly understand the definition of context variables and related graphical concepts. Since they are strongly relevant to the study, their poor clarity will reduce the impact of this work. For this reason, I will maintain my overall rating, and I hope that the authors can clearly explain these notions more. | Summary: This paper proposes a constraint-based algorithm for context-specific causal discovery, which accommodates endogenous context variables.
Strengths: 1. This paper is well-motivated. In particular, I agree that it is important to investigate the case where the context variable is endogenous.
2. This paper provides theoretical guarantee for their proposed method.
3. This paper provides extensive experimental results to demonstrate the effectiveness of the proposed method.
Weaknesses: 1. The proposed method assumes that the context variable is both observed and known. In real-world applications, we may not know which observed variable is the context variable. Furthermore, the context variable might be a latent variable.
2. This paper lacks readability. It entails too many definitions and notations. It might be better if the authors could summarize these things in Appendix for reference. Besides, the authors would better defer more details to appendix rather than present almost all results in the main text.
Technical Quality: 3
Clarity: 2
Questions for Authors: In line 80, the authors claim that "where the context can also be endogenous (in [17])", but in line 89, the authors also claim that "all above-mentioned methods assume that the context variable is exogenous.", which is quite confusing. If [17] allows endogenous context variables, please detail the difference between [17] and this work.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Please see weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for reviewing our paper and for acknowledging the strengths of our work.
We now address the weaknesses as follows:
* **Context variable:** We agree with the reviewer that the assumptions we have made may not always reflect real-world use cases. However, as we have pointed out in the answer to reviewer 9Xqc, we believe that our work lays a first step towards a more general solution while considering important possible implications, such as the problems arising from shifting observational supports. Since regime detection algorithms exist, it is possible to discover $R$ rather than having a hard-coded specific choice. Our method is modular and can be combined with expert knowledge or any detection algorithm providing a context. In our work, we give assumptions to clearly define what would make a good choice for a context variable and consider that the detection part is out of scope for our work. We also believe that further extensions, such as to the case with latent context variables, are out of scope here, as we have detailed in the answer to reviewer 9Xqc. We wish for our work to further encourage the community to think about extensions.
* **Readability:** We agree with the reviewer that our paper may still need improvement in terms of readability. While we have tried to reduce the notations and definitions in the main paper to a minimum, we will consider how to further improve this. Furthermore, we will introduce the changes that we have also mentioned in the answers for the other reviewers 9Xqc, DjMB, and 7t5U.
* **Question:** We agree with the reviewer that this might be confusing. What we meant is: In the most general form of the formulation of context-specific graphs, [17] allows the context variable to be endogenous, see eq (3) and JCI assumption 0 in [17]. The core difference can be seen, e.g., from eq. (3): [17] describes the distribution over the union of the context-specific graphs, whereas we obtain more (also context-specific) information compared to this by analyzing the context-specific graphs. In the end, [17] proposes an algorithm for causal discovery from the pooled data, while we go beyond this information to further obtain context-specific causal graphs, using both the pooled and the context-specific tests. We will add this discussion to the main paper, as we believe that it is important to more clearly differentiate our work from [17].
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: Thanks for your hard work. I have read your rebuttal and I'm now sure that I have no misunderstanding about this paper. I agree that this paper has provided some novel insights. But considering that this work is not the first one investigating the problem of causal discovery with endogenous context variables and this work remains at a relatively elementary stage (both detection and latent context variables are out of your scope), I decide to maintain my borderline accept score.
---
Reply to Comment 1.1.1:
Comment: We are sorry we could not change your mind. Nevertheless, we do not want to leave the new input unaddressed: After the introduction, we give a list of four main points where the paper makes novel contributions. Indeed one of those points is "[...] adapting existing constraint-based CD algorithms to extract context-specific information, in addition to joint information [...]". We are unsure which previous work the reviewer is referring to. Assuming this extends on the original question about [17], we want to clarify again that [17] is concerned with joint information and does not consider context-specific information. They employ the same setup to study a different problem. | Summary: The authors consider an SCM $M$ with a labelled and observed context variable $R$ that is endogenous, i.e., causally depends on other variables in the model. In this setting it is generally not true that $P_{M}(\ldots \mid R=r) = P_{M}(\ldots \mid \mathrm{do}(R=r))$. Therefore, selecting only data for which $R=r$ and applying constraint based causal discovery algorithms - as one might do with exogenous context variables - will introduce selection bias when estimating context-specific graphs $G^{\text{phys}}_{R=r} := G[M \mid \mathrm{do} (R=r)]$.
The authors propose an algorithm that adaptively selects whether to do a conditional independence test (CIT) using context specific or pooled data. Their method is provably complete (recovering the population graph of the underlying SCM with oracle CITs) under a set of assumptions that relate both to the underlying SCM and to the (context specific) observational distribution.
Strengths: The authors motivate the challenges of efficient causal discovery with endogenous context variables clearly. They thoroughly document the challenges to their approach, especially in the appendix, and highlight potential avenues for future work.
While their submission needs refining, particularly on the clarity of their definitions, the authors provide structure for - and a novel solution to - an understudied causal discovery problem with many significant real-world applications.
Weaknesses: (Presentation) The authors should present their results more carefully by using terms only once they are defined and being consistent with notation. For instance, the motivation behind a "$G^{\text{phys}}_{R=r}$" is considered informally in the discovery goals section (a few pages before the notation is used), but the notation itself is defined neither formally nor informally in the main text. The definition is left to appendix but is not cross-referenced in the main text. Similarly, I have no idea what the "AR" in their method "PC-AR" actually stands for.
The structure of the paper unfortunately makes simple typos (e.g., "$G^{\text{descr}}_{R=r} \subset G^{\text{union}} = G^{\text{union}}$" on line 262) difficult to interpret or correct by the reader. If the authors could restructure the paper to assist the reader in understanding their technical contributions this would greatly impact my evaluation of the submission's presentation. For instance, the physical and descriptive context graphs could have simply been defined in the discovery goals section.
(Interpretation of strong context-sufficiency) It is not clear to me what the authors mean by "injective" in the definition of strong context-sufficiency. I will assume that "$f_i$" refers to the structural equation for variable $X_i$ (the authors did not define this in the text) and take "injective" to literally mean $f_i$ has at most one input that corresponds to any output in $\mathrm{dom}(X_i)$. This seems to me to be a very restrictive assumption: say $f_i$ had one parent $A$ and a noise term $e$ and both were, e.g., normally distributed, wouldn't the "injective" property preclude any model like $f_i(A, e) = A + e$?
(Experimental design and results) In the experiments, cycle length in the union graph was not controlled despite this being required for the authors' method to recover $G^{\text{phys}}_{R=r}$. This complicates comparison with the brute-force baseline "PC-B". Furthermore, details on the time taken to run PC-B versus PC-AR should have been included to understand the computational advantage of the authors' approach.
Technical Quality: 3
Clarity: 2
Questions for Authors: (Algorithm 1) There is no definition of the dummy variable $i$ in Algorithm 1. Is it meant to be looped over in line 4? Is line 12 in Algorithm 1 rather meant to say "remove $X_i - X_j$ from $G_{R=r}$"?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes - the authors make good effort to document the challenges to their approach and motivate potential future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and effort in reviewing our paper. We also thank the reviewer for acknowledging the strengths of our paper, and that we have made an effort to document all possible limitations of our approach. We address the weaknesses as follows:
* **Notation and typos:** We realize that in some places our notations have not been introduced, which we will fix in the final version of the paper (main text and appendix). We will revise all notations to ensure that no notation comes without a proper introduction. We will also fix the typo to simply say $G^{\text{descr}}_{R=r} \subset G^{\text{union}}$ (which was intended in that place). Some notations remain unexplained, such as AR, which stands for "adaptive regimes"; we will address these problems as well.
* **Restructuring for clarity:** We will introduce a table that better explains the definitions of the graphical objects (see the general post). We believe this will considerably help the reader obtain an overview. We will also revise the Discovery goals section and combine it with the Graphical objects section to improve the reading flow. Further, we will change the introduction so that it is easier to read, as described in the general post.
The third paragraph of the introduction does not seem to make the point we actually wanted to illustrate, and instead contains too many details and jargon (as also pointed out by reviewer 7t5U). We will improve this paragraph. Generally, we will reduce jargon in both the abstract and introduction, and we will more clearly introduce required terms such as system and context variables.
* **Strong context sufficiency:** Thank you for this remark! Indeed, this definition needs a revision. The following is what is supposed to happen when strong context-sufficiency is fulfilled: if the observational support of, say, $X$ given $R=r$ is restricted to a region where a mechanism $f_Y(X, \ldots)$ is constant in $X$, the link from $X$ to $Y$ can disappear when doing independence testing.
Excluding CSI arising like this requires a way of saying what "constant in a region" means. The single-graph sufficiency assumption formally captures this requirement and restricts it to the most intuitive class of models that do not suffer from the support issues mentioned in Example 3.2. Example 3.1 shows that there clearly is a (rather large) class of intuitive models which look like $f_Y(X,R,\ldots) = 1(R)g(X,\ldots) + h(\ldots)$. Now, one still has to exclude that $g$ or $h$ are constant in a region. The strong sufficiency condition aims to capture this intuitive notion, and one way of ensuring a map is *not* constant on any region is to require it to be injective: Then region simply means any set containing more than one point.
An example of a model that *should* fulfill this assumption is a model which becomes linear after fixing $R$. However, as correctly pointed out by the reviewer, a linear map $\mathbb{R}^n \rightarrow \mathbb{R}$ can only be injective if $n\leq1$. This is not the case here, since every mechanism also depends on its noise term. This lack of applicability can be fixed by requiring injectivity "for each argument" (i.e., while holding all other parents at any fixed value; for example, for an additive mechanism $f_Y(X_1, X_2, \ldots)=f^1_Y(X_1) + f^2_Y(X_2) + \ldots$ this means each $f_Y^i$ has to be injective). This actually includes all models which are linear after fixing (i.e., intervening to set) $R=r$. We will fix this and probably add the linear and general additive cases as examples to the appendix as well.
The strong sufficiency assumption provides a class of examples for which we can obtain reliable outcomes with our method. However, there are many possible choices for the precise formal statement of a strong sufficiency assumption in this sense. For example, one could require, that the noise distributions are absolutely continuous and that the causal mechanisms are differentiable, and such that regions where any partial derivative is zero have measure zero. We will add a remark to the appendix discussing our choice of assumption more clearly.
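One tentative way to state the per-argument condition (our phrasing; $x_{-i}$ denotes all arguments of the mechanism other than $x_i$):

```latex
f_Y \text{ is injective in its } i\text{-th argument}
\;:\Longleftrightarrow\;
\forall\, x_{-i}\;\; \forall\, x_i \neq x_i':\quad
f_Y(x_i, x_{-i}) \neq f_Y(x_i', x_{-i}).
```

For an additive mechanism $f_Y(X_1, X_2, \ldots) = f^1_Y(X_1) + f^2_Y(X_2) + \ldots$ this reduces to each $f^i_Y$ being injective, and a linear term $f^i_Y(x) = a_i x$ satisfies it exactly when $a_i \neq 0$.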
* **Simulation study:** We thank the reviewer for this remark; we agree that we should have specified this. Of 71 samples, only two have a cycle of length 3, while the rest only have cycles of length 2. Therefore, we believe that the comparison is still informative. We will also include a plot comparing the run times of all evaluated methods in the final version of the paper.
* **Question to Algorithm 1:** There are indeed some typos in the algorithm. First, $j$ is iterated, then $i$ is iterated over the adjacencies of $j$, and $S$ over subsets of a specific size of the adjacencies of $j$ excluding $i$. This encounters every unordered pair (thus edge) $i, j$ twice (once with the order exchanged), so searching for $S$ only in the adjacencies of $j$ (but not of $i$, whose adjacencies are searched when the pair is encountered the second time, in reversed order) suffices. As the reviewer pointed out, line 12 should indeed say $G_{R=r}$, and we will change this as well.
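The corrected iteration order can be sketched as a generic PC-style skeleton search (a sketch of the loop structure only, with a placeholder oracle `ci_test`; it omits the context-specific branching of Algorithm 1):

```python
from itertools import combinations

def pc_skeleton(nodes, ci_test):
    """Skeleton search with the iteration order described above: for each
    conditioning-set size, iterate j, then i over Adj(j), then S over the
    subsets of Adj(j) \\ {i}.  Every unordered pair {i, j} is encountered
    twice (once in each order), so searching S only within Adj(j) suffices."""
    adj = {j: set(nodes) - {j} for j in nodes}  # start fully connected
    sepset = {}
    size = 0
    while any(len(adj[j]) - 1 >= size for j in nodes):
        for j in nodes:
            for i in list(adj[j]):
                if len(adj[j] - {i}) < size:
                    continue
                for S in combinations(sorted(adj[j] - {i}), size):
                    if ci_test(i, j, S):       # X_i _||_ X_j | X_S ?
                        adj[i].discard(j)      # remove edge i - j
                        adj[j].discard(i)
                        sepset[frozenset((i, j))] = S
                        break
        size += 1
    return adj, sepset

# toy oracle for the chain 0 - 1 - 2: only X_0 _||_ X_2 | X_1 holds
def oracle(i, j, S):
    return {i, j} == {0, 2} and 1 in S

adj, sepset = pc_skeleton([0, 1, 2], oracle)
print(adj)  # {0: {1}, 1: {0, 2}, 2: {1}}
```

Here the edge 0 - 2 is removed with separating set $\{1\}$, while the chain edges survive, matching the intended behavior of the described loop.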
---
Rebuttal Comment 1.1:
Comment: Many thanks to the authors for their responses, their clarifications and their revisions.
I find the authors' approach quite exciting, especially their insights into challenges for large cycles and unknown endogenous context variables. However, it is difficult to understand from rebuttals alone what impact their (quite heavily) revised presentation will have on the overall clarity of their work. For these reasons, I am willing to revise my score for the soundness of their approach to 3: Good, and my overall score to 6: Weak Accept. | Summary: The authors address the problem of causal discovery in situations where causal relationships change across different contexts. They propose a modified version of the PC algorithm, which either performs a conditional independence test (CIT) on pooled data or context-specific data, depending on the scenario. The paper presents several theoretical results for this modified algorithm and evaluates its performance through various empirical studies.
Strengths: The authors tackle an important and relevant research problem, and their theoretical findings are noteworthy, especially the set of sufficient assumptions under which they derive their results. These assumptions could be valuable for future research in this area.
Weaknesses: The organization of the paper makes it difficult to follow. For instance, numerous graphical definitions are presented without examples or figures, even in the appendix, which would aid understanding. I strongly suggest at least a running simple graphical example to show different graphs. Additionally, the definitions of the main assumptions are not included in the main text, further complicating comprehension.
The results rely on the assumption of causal sufficiency, but it is unclear how these results could be generalized to scenarios with hidden variables. Another significant limitation of the proposed framework, as acknowledged by the authors, is the requirement for knowledge of the context indicator.
The proposed algorithm requires $Adj(X_j)$ and whether R belongs to this set for all $j$. This knowledge is obtained by a standard PC algorithm using pooled data. What happens if the output of this PC algorithm is noisy, e.g., it returns that R belongs to the conditioning set when in reality it does not?
While the proposed algorithm clearly outperforms the baselines in certain settings, the margin of improvement is relatively small, particularly when the conditional independence test is non-parametric.
Technical Quality: 3
Clarity: 2
Questions for Authors: Please see above.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Please see above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for reviewing our paper, and for acknowledging the importance of the problem. Furthermore, we thank the reviewer for appreciating the value of the set of assumptions we have derived for a first step towards solving the problem of endogenous context variables.
We address the weaknesses as follows:
* **Graphical definitions:** We agree with the reviewer that our paper should be improved to aid understanding; we address this with the changes listed in the general response. In particular, we provide a table to better systematize the graphical objects.
* **Main assumptions:** We decided to only informally introduce the assumptions, as we believed that technical detail might actually hinder intuition. We will have another look at this paragraph.
* **Causal sufficiency:** As we are the first to consider the implications of changing observational support for context-specific causal discovery with endogenous context variables, we started to tackle the problem with a simple setup. Hidden confounders are a problem very different from the one we study. It is common in the literature to initially assume causal sufficiency, despite some doubts about its realism. Difficulties in generalizing to hidden confounders could arise on multiple levels:
* One has to understand how the ideas underlying the graphical objects we are currently using must be adapted.
* The Markov property combines independence tests from the pooled data and context-specific data into a single graphical object. To this end, the standard d-separation argument is modified. One has to verify that this approach still works when causal sufficiency does not hold.
* One has to ensure that the right tests are executed in an efficient way, which should be possible, by replacing the PC algorithm, for example, by the FCI algorithm.
* **Context indicator:** We agree with the reviewer that the fact that we must know $R$ in advance can be a limitation. However, we would not necessarily consider it a weakness. Regime-detection algorithms can be used to discover $R$ rather than relying on a hard-coded specific choice. Our method is *modular* and can be combined with expert knowledge or any detection algorithm providing a context.
* **Noisy PC output:** Typically, with the PC algorithm, errors in the early stages of the CI testing propagate, and our algorithm is subject to this flaw as well. It is helpful to discuss false negatives and false positives separately, because they lead to different downstream effects:
False negatives (a link involving $R$ is not found): After the false negative, the algorithm behaves like standard causal discovery, because with $R$ removed from the adjacencies (thus from possible separating sets) no CSI is tested. It does not find additional context specific information, but it also does not incur further errors (beyond those from the causal discovery algorithm e.g. PC). So here finite-sample results can be interpreted in a spirit similar to how assumption-violations are understood in Rmk. 4.2: The result is still as correct as it would have been without our modification, but false negatives can lead to incomplete discovery of context-specific information.
False positives (a link involving $R$ is detected to a non-adjacent $Y$): Assuming no simultaneous violation of regime-sufficiency assumptions, if our method discovers false positives, it behaves like the baseline (intersection-graph) method. Thus, in the presence of false positives, our method executes more tests than strictly necessary, because more adjacencies are found. Moreover, if R is erroneously found as adjacent, it will be added to the conditioning set, thereby running the tests for $R=r$, i.e., using the context-specific samples only. Conditioning on $R=r$ wrongly can induce selection bias and can also lead to testing on a reduced sample size, leading to further finite sample effects. Therefore, the subsequent error-rate should be slightly higher than for standard causal discovery.
* **Marginal improvement over baseline:** As we are not sure which baseline the reviewer refers to, we discuss:
(a) Comparison including context-specific information (here *non-*parametric tests actually performed better; we believe we understand why, see §B.11): Indeed, the improvement may seem small with the parametric test. However, using the non-parametric test shown in Figure 2, the CIT is seen to significantly impact performance: we see much larger differences, especially in FPR, when using a non-parametric CIT compared to masking, testing on the pooled data, or the (intersection-graph) baseline. Our baseline here is the intersection graph (§B.10), which to our knowledge has not been theoretically studied before. Technically, one of the advantages over this baseline is scalability (§B.9), even though in practice one would need a sufficiently stable causal discovery method, and this is not always the case.
(b) Comparison to existing (pooled) methods: Compared to algorithms not using CSI, we provide additional information in the form of context-specific causal graphs. | Rebuttal 1:
Rebuttal: ## General Response to the Reviewers
We thank all reviewers for their valuable comments, and we are happy that they acknowledge the importance of our work and the value of our results. We agree with the reviewers that the presentation of the paper can be improved, and we summarize the main changes we aim to add to the final version of the paper as follows. Further questions are answered in the individual rebuttals.
* **Introduction:** We will improve the introduction by modifying Figure 1 to a figure which clearly explains the setup where a context variable is endogenous, and will move the discussion around the current Figure 1 to Section 4. We will then move Example 3.1 to the introduction to better illustrate our goals. We will considerably improve the third paragraph of the introduction by removing unnecessary details and jargon, and rather explain better how our work relates to methods describing context-specific independencies (CSI), and how our assumptions allow us to give an interpretation in terms of structural causal models (SCMs) to the CSIs.
* **Graphical objects:** We agree that the graphical objects may be hard to understand at first sight in the current form of the paper. We therefore propose to add the following table (formatted as best as possible) which underlines the differences and connections between the different graphical objects:
| | Observable... | ...graphs... | ...$G[M,P]$ |
| :----------------: | :------: | :----: | :----: |
| Symbol | $G^{\\text{descr}}_{R=r}$ | $G^{\\text{phys}}_{R=r}$ | $G^{\\text{union}}$ |
| Name | descriptive | physical | union ('standard') |
| Model $M$ | $M$ or $M_{\\text{do}}$ | $M_{\\text{do}}$ | $M$ |
| Observational Support $P$ | $P_M(...,R=r)$ | $P_M(...)$ | $P_M(...)$ |
| Captured Information | independence-structure | altered mechanisms | union mechanisms |
| Context-Specific ($r$-dep.) | yes | yes | no |
| Used here primarily for | discovery | proofs | relation to literature |
| Node Sets | system $\\cup \\{R\\}$ | system $\\cup \\{R\\}$ | system $\\cup \\{R\\}$ |
| Edge Sets | active in context $r$ $\\subset$ | present in context $r$ $\\subset$ | in any context |
We will also combine the graphical objects and the discovery goals sections for better reading flow.
* **Overall readability:** We will revise notation and try to further simplify it. We will better explain the assumptions. However, we believe that the exact definitions are better left to the Appendix. We will fix all discovered typos, including the typos in Algorithm 1.
* **Context variable:** We will revise our definition of a context variable to make it clearer that our method is modular and can be combined with any anomaly or regime-detection method. This change also makes it easier to explain how the current setup, where the context variable is measured and known, is plausible also for real-world scenarios.
## Redraft of third paragraph of introduction
The third paragraph of the introduction did not illustrate well what we actually wanted to say, which is (the following is an early redraft and has to be fit together with the remainder of the introduction, so is likely subject to change):
Multiple context-specific graphs can contain more qualitative information than a single union graph, as illustrated below:
**Example 3.1:** Given a binary context indicator variable $R$ and a *multivariate* mechanism of the form $ f_Y(X,R,\eta_Y) = \mathbb{1}(R) g(X) + \eta_Y,$ the dependence $X \rightarrow Y$ is present in the context $R=1$, but absent for $R=0$. This entails a context-specific independence (CSI) $X \perp Y | R=0$.
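A minimal simulation of this example (our own illustrative sketch, not the authors' code, with $g$ taken to be the identity) shows the context-specific independence empirically:

```python
import numpy as np

# Simulate Y = 1(R) * g(X) + eta with g = identity: Y depends on X only in
# the context R = 1, so the CSI  X _||_ Y | R = 0  holds.
rng = np.random.default_rng(0)
n = 20000
R = rng.integers(0, 2, size=n)      # binary context indicator
X = rng.normal(size=n)
eta = 0.1 * rng.normal(size=n)
Y = R * X + eta

corr_r0 = np.corrcoef(X[R == 0], Y[R == 0])[0, 1]  # ~0: CSI holds in R=0
corr_r1 = np.corrcoef(X[R == 1], Y[R == 1])[0, 1]  # strong dependence in R=1
print(corr_r0, corr_r1)
```

The union graph would only record the edge $X \rightarrow Y$; the two context-specific graphs additionally record its absence for $R=0$.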
As indicated in the example, such additional information can be captured via context-specific independence (CSI). Graphical independence models describing the CSI structure of a dataset, for example, LDAGs, have been studied before [22]. However, a causal analysis, i.e., understanding interventional properties of the context-specific model, requires knowledge about the causal model properties.
In the single-context case, under the faithfulness assumption, knowledge about the causal properties of models is directly connected to the independence structure. As will be explored in detail in §3.1, this simple model to independence correspondence cannot generally hold in the multi-context case. Thus, an important open problem for the causal analysis of multi-context systems is the connection of CSI structure to the underlying causal model.
We provide a connection between the CSI structure and the underlying causal model and specify assumptions under which an efficiently computable subset of CSI and independencies on the pooled data together can be given an interpretation in terms of structural causal model (SCM) properties. The obtained context-specific graphs are of interest due to several desirable properties. For instance, context-specific modeling can avoid spurious edges that arise from cycles in the union graph, as shown in Figure 1.
Furthermore, CSI testing poses multiple finite-sample challenges: It dramatically increases the search space of independence testing. Additionally, CSIs are only tested on a subset of samples, thereby increasing the per-test error rate. Our approach, which requires the framework connecting CSI to causal properties, adaptively decides whether a specific test can be run on the pooled data, or must run on a subset of samples associated with a specific context. It executes only one of the two tests and uses as many samples as possible, thereby improving finite-sample performance. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Distributional regression: CRPS-error bounds for model fitting, model selection and convex aggregation | Accept (poster) | Summary: The paper studies distributional regression, a statistical technique that models not just the conditional mean of the response variable given the predictors (as in traditional regression) but the entire conditional distribution. The learning task minimizes the risk function, measured by the Continuous Rank Probability Score (CRPS). The authors aim to estimate the error bounds for the expectation of the predictive distribution. They also investigate model selection and aggregation via CRPS minimization on a validation set.
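For context, the CRPS of a discrete (ensemble) predictive distribution $F_m$ with members $x_1,\dots,x_m$ admits the standard closed form $\mathrm{CRPS}(F_m, y) = \frac{1}{m}\sum_i |x_i - y| - \frac{1}{2m^2}\sum_{i,j} |x_i - x_j|$; a minimal sketch (our own illustration, not code from the paper):

```python
import numpy as np

def crps_ensemble(members, y):
    """CRPS of the empirical distribution of `members` at observation y."""
    x = np.asarray(members, dtype=float)
    term1 = np.abs(x - y).mean()                       # E|X - y|
    term2 = np.abs(x[:, None] - x[None, :]).mean() / 2  # 0.5 * E|X - X'|
    return term1 - term2

# For a degenerate (point) forecast, the CRPS reduces to the absolute error.
print(crps_ensemble([1.0], 3.0))  # 2.0
```

Minimizing the average of this score over a training set is the ERM problem whose error bounds the paper studies.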
Strengths: * Comprehensive Approach: The use of distributional regression provides a more holistic view of the data by modeling the entire conditional distribution, offering more insights compared to traditional regression methods. This work explores various aspects of this model and provides a thorough analysis.
* Theoretical Contributions: The authors present a robust theoretical framework, including the estimation of error bounds for the expectation of the predictive distribution. This adds valuable understanding and reliability to the model's predictions.
* Model Selection and Aggregation: The paper addresses practical aspects such as model selection and aggregation using CRPS minimization on a validation set, which is crucial for effectively applying the methodology in real-world scenarios.
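For intuition, model selection by CRPS minimization on a validation set can be sketched as follows (a hypothetical illustration: `select_model`, the toy models, and all parameters are our own, not from the paper; the CRPS is computed via the standard ensemble identity):

```python
import numpy as np

def crps_ensemble(members, y):
    x = np.asarray(members, dtype=float)
    return np.abs(x - y).mean() - np.abs(x[:, None] - x[None, :]).mean() / 2

def select_model(models, X_val, y_val):
    """Return the index of the model with lowest mean CRPS on validation data.
    Each model maps a covariate x to an ensemble of predictive samples."""
    scores = [np.mean([crps_ensemble(predict(x), y)
                       for x, y in zip(X_val, y_val)])
              for predict in models]
    return int(np.argmin(scores))

rng = np.random.default_rng(1)
X_val = rng.normal(size=50)
y_val = X_val + 0.1 * rng.normal(size=50)         # truth: Y | X=x ~ N(x, 0.1^2)
good = lambda x: x + 0.1 * rng.normal(size=200)   # well-specified model
bad = lambda x: rng.normal(size=200)              # ignores the covariate
print(select_model([bad, good], X_val, y_val))    # selects the good model: 1
```

The paper's regret bounds quantify how close the CRPS risk of the selected (or convexly aggregated) model is to that of the best model in the collection.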
Weaknesses: * Typos in the Text: The text should be checked again by the authors to remove the typos (e.g., line 88: "on can show" should be "one can show").
* Math Symbols and Abbreviations: Several math symbols or abbreviations are used in the text without prior definition, reducing the readability of the text. For instance, "EMOS" in the abstract or $\mathcal{G}$ in line 69. Additionally, some symbols are used multiple times with different meanings. For example, $\mathcal{D}$ in line 19 indicates the training set, while $\mathcal{D}$ in line 71 denotes a subset of the set of all probability measures on $\mathcal{R}$. It would be better to use symbols more carefully and ensure consistency throughout the text.
* Comparison with Existing Methods: While the paper focuses on the advantages of the proposed CRPS-error bounds for distributional regression, it would be beneficial to include a comparison with existing techniques, both in terms of theoretical properties and empirical performance. Related works and available baselines have not been properly discussed in the paper. Including such comparisons would provide a clearer context for the contributions of the proposed method and its relative performance.
* Complexity Analysis: A detailed complexity analysis is needed to understand the computational cost of the proposed method, especially in comparison with other techniques.
* Numerical Analysis Clarity: The results presented in the numerical analysis should be clearer, with a more detailed explanation of what the numbers represent and how they support the claims made in the paper. The numerical analysis is not sufficient, and not enough sensitivity analysis has been provided. Only two small datasets have been used, and the simulation does not cover all aspects of the model. It would be better if the authors designed new experiments with different datasets and settings to comprehensively validate the proposed method. This would help demonstrate the robustness and applicability of the approach across various scenarios.
Technical Quality: 2
Clarity: 2
Questions for Authors: See Weaknesses.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: The limitations have not been properly discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and work in reviewing the paper. We believe indeed that distributional regression is an important technique providing a more holistic view of the data, and we stress that, despite its importance in practice, CRPS minimization in distributional regression has been very little investigated from a theoretical point of view. Please find below answers to your questions and concerns.
1) We have run again a spell checker and corrected a few typos.
2) Following your advice, we have used a different notation for $\mathcal{D}$ (replaced by $\mathcal{P}_0$) in order to avoid confusion with the training set $\mathcal{D}_n$. We have also checked that abbreviations are correctly explained when introduced for the first time.
3) We have chosen to focus in this work on the CRPS minimization technique because it is the most widely used in applications of distributional regression (see the references in the introduction), in particular in the context of ensemble forecasts where the log-score and likelihood methods are not available. Despite its common application, there is a shortage of theoretical results regarding CRPS minimization for distributional regression, which is the main motivation for the paper. Comparison with existing methods (e.g., minimization of other scoring rules) is left for further research.
4) Strictly speaking, we do not propose a new method but rather provide solid theoretical ground for existing methods. The original part of the paper is thus the concentration bound establishing the consistency of commonly used method in distributional regression. This is why our paper does not provide a detailed numerical analysis (only an illustration) nor a complexity analysis, but rather focuses on the theoretical aspects.
---
Rebuttal Comment 1.1:
Title: Response to the authors' feedback
Comment: Thank you for your revisions and the effort you’ve put into improving the manuscript. The gap between theoretical claims and numerical experiments still exists. After careful consideration, I have decided to retain my original score. | Summary: This paper considers the problem of conditional distribution prediction. For covariate-response pair $(X, Y)$, the objective is to estimate the conditional distribution of $Y|X=x$ for all $x$. The paper provides concentration bounds for the empirical risk minimization (ERM) estimator with continuous rank probability score (CRPS) as loss. It also provides regret bounds for the model selection and model aggregation with CRPS minimization.
Strengths: This paper is well-structured and clearly written. The proofs are rigorous, and the notations are well-organized. It offers theoretical guarantees that will benefit future research involving the CRPS minimization technique.
Weaknesses: 1. The proving technique for the theorems presented in the paper is pretty standard. The work lacks technical contribution.
2. In Theorem 1, the definition of $R$ which denotes the boundary of the parameter space $\Theta$ should be added in the main text.
3. The theorems have not provided insights for the algorithm design. The paper has not proposed new algorithms, but in line 262 it says "demonstrate the effectiveness of our proposed methods".
4. The entire experimental section does not utilize the theoretical guarantees established earlier. Additionally, there is a lack of experiments to validate the effectiveness and tightness of the theoretical bounds.
Technical Quality: 2
Clarity: 3
Questions for Authors: As given in the "Weakness" section.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: As given in the "Weakness" section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for overall positive appreciation of the paper! Please find below answers to your questions and concerns.
Weaknesses:
We agree that the technical development based on the Hoeffding inequality is quite standard. Still, the paper provides the first detailed analysis of the CRPS risk for distributional regression, and original contributions appear in this direction: Proposition 5 and Lemmas 2 and 3 provide original results for the analysis of the CRPS, while Propositions 1 and 2 provide original results for the analysis of various models (distributional nearest neighbours, EMOS, distributional neural networks). Our point of view is that the value of the work is to apply (standard) concentration techniques in an original context and to provide new insight into distributional regression, which is a fundamental task in statistics and machine learning.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for your detailed response. I will keep my original score. | Summary: The paper considers the problem of distributional regression, i.e., learning the distribution of Y conditional on X. In particular, the distributional regression problem is formulated as an empirical risk minimization problem, where standard concentration techniques are applied to obtain non-asymptotic bounds on the excessive risks.
Strengths: 1. The paper is well-written and easy to follow.
2. The question is well-motivated, and the technical development is solid.
Weaknesses: After formulating the problem as an empirical risk minimization problem, applying concentration techniques appears to be very standard --- the technical contribution of the current paper is relatively weak.
Technical Quality: 3
Clarity: 3
Questions for Authors: The current paper mainly considers the case of iid data, and correspondingly leverages concentration tools for iid data. I wonder if it could be generalized to adaptively collected data, since there are parallel concentration tools for such data.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The paper has discussed its limitation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for overall positive appreciation of the paper! Please find below answers to your questions and concerns.
Weaknesses:
We agree that the technical development based on the Hoeffding inequality is quite standard. Still, the paper provides the first detailed analysis of the CRPS risk for distributional regression, and original contributions appear in this direction: Proposition 5 and Lemmas 2 and 3 provide original results for the analysis of the CRPS, while Propositions 1 and 2 provide original results for the analysis of various models (distributional nearest neighbours, EMOS, distributional neural networks). Our point of view is that the value of the work is to apply (standard) concentration techniques in an original context and to provide new insight into distributional regression, which is a fundamental task in statistics and machine learning.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I will maintain my initial score. | Summary: The paper provides theoretical guarantees for distributional regression, which aims to estimate the conditional distribution of a target random variable Y given covariates X. These theoretical guarantees hold when the regression is learned by minimizing a particular proper scoring rule, the Continuous Rank Probability Score (CRPS), in the presence of i.i.d. data (X,Y). Concentration bounds on the estimation error are provided for model fitting, model selection, and model averaging.
Strengths: 1.) The paper gives perhaps the clearest presentation I've ever seen when reviewing for one of the top ML conferences. Its acceptance is almost warranted from this fact alone; there is significance in the exceedingly clear overview of distributional regression.
2.) The provided concentration bounds cover multiple modeling goals (model fitting, model selection, and model averaging) and a wide range of popular models for distributional regression (both parametric and non-parametric).
Weaknesses: 1.) The experiments do not seem strongly related to the theory, other than that the experiments investigate distributional regression. I expected the theoretical results to play a stronger role here. Is there some way in which the theory guides the data analysis? If not, what is the purpose of including an experiment at all? If nothing else, the experiments would seem to provide an opportunity to perform sanity checks. For example, can the authors provide empirical evidence of consistency to the theoretical limiting values?
2.) Generalizing #1, the authors do not provide guidance on what their theoretical results can be used for.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1.) The authors write: "[...] distributional regression is way stronger and more challenging than standard regression, where only a point prediction for Y given X=x is provided which typically reduces to an estimation of the conditional expectation." I found this statement a bit puzzling. For instance, both frequentist and Bayesian GLM's would seem to estimate conditional distributions of a target given explanatory variables, rather than simply providing point estimates. Could the authors clarify?
Editing notes:
1.) "Way stronger and more challenging" (line 20) is redundant, and the first phrase is too colloquial. Streamline to "more challenging" or perhaps "much more challenging".
2.) It is unclear why "we will mostly use the CRPS for discrete predictive distributions" (line 87). Please clarify.
3.) Can the Wasserstein space `P_1(\R)` (defined between lines 83 and 84) be equivalently described as the set of probability distributions over `\R` with finite first moment? If so, this might be worth stating explicitly.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: No limitations were provided.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for the positive assessment of the quality of the paper presentation! Please find below answers to your questions and concerns.
Weaknesses:
1) We agree that the experiment is only loosely related to the theoretical results; our aim is to illustrate that the methods of interest (model selection, model aggregation) work well in practice. Ideally, it would be interesting to illustrate the convergence to zero of the estimation error, but we have to take into account that the theoretical limit value for the error (oracle) is unknown in our setting.
2) The main message is that the empirical risk minimization widely used in practice (be it for model fitting or model selection) is mathematically grounded. We have clarified and strengthened this message in the sections "Numerical Experiments" and "Conclusion". See in particular the sentence "These new theoretical results are solid mathematical justifications for common practices in the framework of distributional regression".
Questions:
1) Indeed, generalized linear models can be seen as distributional regression techniques since the conditional distribution of Y given X=x is modelled by a distribution belonging to an exponential family.
2) Editing notes have been taken into account in the revised version.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. My initial assessment of strengths and weaknesses persists after the rebuttal. Therefore, I maintain my original score. | Rebuttal 1:
Rebuttal: We thank the referees for their work and the time spent on the paper. Overall, the comments are positive, but two main concerns emerge from the reports, which we address below. All other (minor) suggestions have been taken into account.
1) Technical contributions --
We agree that the technical developments and concentration inequalities based on the Hoeffding inequality are quite standard. Still, the paper provides the first detailed analysis of the CRPS risk for distributional regression, and original contributions appear in this direction: Proposition 5 and Lemmas 2 and 3 provide original results for the analysis of the CRPS, while Propositions 1 and 2 provide original results for the analysis of various models (distributional nearest neighbours, EMOS, distributional neural networks). Our point of view is that the value of the work is to apply (standard) concentration techniques in an original context and to provide new insight into distributional regression, which is a fundamental task in statistics and machine learning. We believe the work will benefit future research involving CRPS minimization techniques in distributional regression.
2) Numerical experiments --
We agree that the experiment is only loosely related to the theoretical results; our aim is to illustrate that the methods of interest (model selection, model aggregation) work well in practice. Ideally, it would be interesting to illustrate the convergence to zero of the estimation error, but we have to take into account that the theoretical limit value for the error (oracle) is unknown in our setting.
We very much hope that the reviewers will appreciate this point of view and support the work for presentation at NeurIPS. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Training-Free Adaptive Diffusion with Bounded Difference Approximation Strategy | Accept (poster) | Summary: The paper presents a novel approach titled "AdaptiveDiffusion" for accelerating diffusion models used in high-quality image and video synthesis. The core issue addressed is the high computational cost and latency associated with existing denoising techniques in diffusion models, which are typically based on step-by-step noise predictions.
**Contributions of the Paper:**
1. **Adaptive Diffusion Process**: The paper introduces AdaptiveDiffusion, which adaptively reduces the number of noise prediction steps during the denoising process. This is achieved by skipping steps where the potential redundancy is high, guided by the third-order latent difference that indicates stability between timesteps.
2. **Plug-and-Play Criterion**: A new criterion is proposed to decide whether to infer new noise predictions or reuse previous results based on the third-order difference distribution. This allows for an adaptive acceleration paradigm that is prompt-dependent.
3. **Extensive Experiments**: The method's effectiveness is demonstrated through extensive experiments on both image and video diffusion models. The results show significant speedups of 2 to 5 times on average in the denoising process without quality degradation.
4. **Error Analysis**: The paper provides a theoretical analysis of the upper bound of the error induced by the step-skipping strategy, ensuring that the quality of the final output is maintained.
5. **Adaptive Acceleration**: The approach is designed to be adaptive to different input prompts, offering a practical solution to the high computational costs of sequential denoising techniques.
6. **Generalization Capability**: AdaptiveDiffusion shows a strong generalization capability, being able to adapt to different models and tasks, including text-to-image, image-to-video, and text-to-video generation.
In summary, the paper offers a substantial advancement in efficient diffusion model acceleration, with the potential to enable real-time and interactive applications of diffusion models.
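A hypothetical sketch of the skip criterion summarized above (the function `should_skip`, the names `delta` and `c_max`, and the exact rule are our own illustration mirroring the paper's hyperparameters; the paper's criterion may differ in detail):

```python
import numpy as np

def should_skip(latents, delta, skips_in_a_row, c_max):
    """Decide whether to reuse the previous noise prediction.
    latents: list of the most recent latent arrays (newest last).
    The discrete third-order difference of the latents serves as a stability
    signal: when it is small, the denoising trajectory is locally smooth and
    the noise prediction step can be skipped (up to c_max times in a row)."""
    if len(latents) < 4 or skips_in_a_row >= c_max:
        return False
    x3, x2, x1, x0 = latents[-4], latents[-3], latents[-2], latents[-1]
    third_diff = x0 - 3 * x1 + 3 * x2 - x3
    return np.abs(third_diff).mean() < delta

# A nearly linear latent trajectory has vanishing third-order differences,
# so the criterion votes to skip the next noise prediction.
traj = [np.full((4, 4), 0.1 * t) for t in range(4)]
print(should_skip(traj, delta=1e-3, skips_in_a_row=0, c_max=3))  # True
```

In this sketch, `delta` trades speed against fidelity (larger values skip more aggressively) and `c_max` caps consecutive reuses so accumulated error stays bounded.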
Strengths: ### Strengths Assessment of the Paper
#### Originality
The paper demonstrates a high degree of originality through the introduction of the AdaptiveDiffusion method, which offers a novel perspective on accelerating diffusion models. The approach creatively addresses the computational inefficiency inherent in traditional denoising techniques by adaptively reducing noise prediction steps. This innovation is not just a technical tweak but a strategic rethinking of the denoising process itself. The use of the third-order latent difference as a criterion for deciding when to skip steps is an ingenious way to balance efficiency and quality, which has not been explored in prior works.
#### Quality
The quality of the paper is reflected in its rigorous theoretical foundation and comprehensive empirical validation. The authors have provided a detailed error analysis to support their method's robustness, ensuring that the acceleration does not compromise the output quality. The experiments are thorough, covering a range of models and tasks, which substantiates the method's effectiveness and generalizability. The paper also discusses the relationship between different orders of latent differences and the optimal skipping path, which adds depth to the understanding of the proposed technique.
#### Clarity
The paper is well-structured, with a clear progression from the introduction of the problem to the explanation of the proposed solution, followed by a detailed methodology and extensive experimental results. The figures and tables are used effectively to illustrate the method and results, enhancing the readability and comprehension of the paper. The theoretical proofs and algorithm descriptions are presented in a manner that is accessible to readers with a background in the field.
#### Significance
The significance of this paper lies in its potential to transform the applicability of diffusion models. By significantly reducing the computational cost and latency, AdaptiveDiffusion opens up new possibilities for real-time and interactive applications of diffusion models, which are currently limited by their resource-intensive nature. The ability to tailor the denoising process to different prompts is also significant, as it allows for more flexible and responsive generative models that can cater to diverse content creation needs.
In summary, the paper is a substantial contribution to the field of generative modeling, offering a creative, high-quality, and clearly articulated solution to a pressing problem. Its significance extends beyond technical advancement, promising to enable new applications and use cases for diffusion models.
Weaknesses: While the paper presents a compelling approach to accelerating diffusion models, there are areas where it could be further strengthened:
### Theoretical Depth
- **Assumption Limitations**: The paper relies on certain assumptions for its theoretical analysis, such as the Lipschitz continuity of the noise prediction model. It would be beneficial to discuss how violations of these assumptions might impact the method's effectiveness and under what conditions these assumptions hold true.
### Experimental Scope
- **Diversity of Models**: Although the method is tested on various tasks, the paper could benefit from testing on a broader range of diffusion models to further establish the generalizability of AdaptiveDiffusion.
- **Real-World Applications**: Demonstrating the method's effectiveness in real-world applications or use cases would provide additional context and significance to the work.
### Hyperparameter Sensitivity
- **Threshold δ and Cmax**: The paper discusses the impact of these hyperparameters on performance but could provide more guidance on how to select these values in practice, especially given their critical role in balancing speed and quality.
### Computational Complexity
- **Memory Usage**: While the method aims to reduce computational cost, it would be insightful to have a discussion on memory usage, especially since diffusion models can be memory-intensive.
### Long-Term Viability
- **Adaptability to Model Updates**: The paper could address how well AdaptiveDiffusion might adapt to future updates in diffusion model architectures or training regimes.
### Societal Impact Consideration
- **Ethical Considerations**: Although the paper does not explicitly discuss societal impacts, it would be beneficial to include a brief discussion on potential ethical considerations, especially given the generative capabilities of the models involved.
### Reproducibility
- **Code and Data Availability**: Ensuring that the code and data used for experiments are publicly available would greatly enhance the reproducibility of the results.
### Documentation
- **Algorithm Pseudocode**: Providing pseudocode or flowcharts for the algorithms could help readers better understand the step-skipping strategy and its integration into the overall process.
### Future Work
- **Extensions and Limitations**: While the paper outlines future directions, a more detailed discussion on the limitations and potential extensions of the current work would be valuable.
By addressing these points, the paper could provide a more comprehensive understanding of AdaptiveDiffusion's capabilities and limitations, setting the stage for further research and development in this area.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. **Assumption Validity**: Could the authors elaborate on the conditions under which the Lipschitz continuity assumption for the noise prediction model holds? How do they ensure this assumption is valid across different models and datasets?
2. **Generalization Across Models**: The paper demonstrates results on a few models. What steps have been taken to ensure that AdaptiveDiffusion can generalize across a wider variety of diffusion models, especially those that may not conform to the same architectural patterns?
3. **Hyperparameter Selection**: The paper mentions the importance of hyperparameters δ and Cmax. Can the authors provide more detailed guidelines or methods for selecting these hyperparameters in different contexts or suggest any automated tuning processes?
4. **Memory Usage Discussion**: Given that diffusion models can be memory-intensive, could the authors discuss the memory usage implications of AdaptiveDiffusion, especially when scaling up to larger models or datasets?
5. **Ethical Considerations**: Although the paper focuses on a technical advancement, could the authors comment on any potential ethical implications of the work, particularly related to the generative capabilities of the models?
6. **Reproducibility Assurance**: To ensure the reproducibility of the results, will the authors commit to making their code and datasets publicly available, and if so, when?
7. **Algorithm Visualization**: For better understanding, especially for readers who may be less familiar with the proposed methods, can the authors provide pseudocode or flowcharts illustrating the step-skipping strategy?
8. **Long-Term Viability**: How does the authors' method accommodate or adapt to potential future changes in the architecture or training of diffusion models?
9. **Limitation Discussion**: The paper outlines future work but could benefit from a more explicit discussion of current limitations. Are there specific scenarios or model types where AdaptiveDiffusion might underperform?
10. **Statistical Significance**: The paper reports χ2 stats and p-values for the correlation between estimated and optimal paths. Could the authors provide more details on the statistical tests used and the rationale behind choosing these tests?
11. **Real-World Application**: While the method shows promise in controlled experiments, are there any real-world scenarios or use cases where AdaptiveDiffusion has been tested or is planned to be tested?
12. **Societal Impact**: The paper could be strengthened by a brief discussion on the potential societal impacts, both positive and negative, of the technology. This includes considerations of how the method might be used or misused.
13. **Comparison with State-of-the-Art**: How does AdaptiveDiffusion compare with the state-of-the-art in terms of computational efficiency and quality of results? Are there any specific advantages or disadvantages in particular scenarios?
14. **Documentation and API**: For practical adoption, what level of documentation and API support is available or planned for AdaptiveDiffusion to facilitate its integration into existing systems?
Confidence: 1
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Based on the information provided and the typical guidelines of the NeurIPS Paper Checklist, it appears that the authors have made an effort to address limitations and societal impacts. However, I offer general advice on how authors can improve their discussion of these topics:
1. **Clear Acknowledgment**: Authors should explicitly acknowledge the limitations of their work in the main text of the paper. This includes potential constraints on the generalizability of their findings, assumptions made, and any conditions under which the method may not perform as expected.
2. **Depth of Discussion**: While acknowledging limitations, authors should provide a thorough explanation of how these limitations might affect the results and the applicability of their method. This could include a discussion of how the method behaves under different conditions or with different types of data.
3. **Societal Impact Analysis**: Authors should consider the broader societal impacts of their work, including both positive and negative outcomes. This discussion should be grounded in the context of the work and consider potential misuse, ethical concerns, privacy issues, and fairness.
4. **Mitigation Strategies**: If there are potential negative societal impacts, authors should discuss possible mitigation strategies. This could involve suggesting guidelines for the responsible use of the technology, potential regulatory frameworks, or technical safeguards.
5. **Ethical Considerations**: The paper should include a section on ethical considerations, especially if the work involves generative models that could be used to create misleading or harmful content.
6. **Transparency**: Authors should be transparent about any potential conflicts of interest, funding sources, or affiliations that might influence the research or its interpretation.
7. **Openness to Feedback**: Authors should demonstrate a willingness to engage with the community for feedback on the societal impacts of their work and be open to adjusting their approach based on this feedback.
8. **Long-Term Vision**: While discussing limitations, authors could also provide a long-term vision for how they anticipate overcoming these limitations in future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >***Q1: Method Explanation***
- **Assumption Validity:** We follow the assumptions proposed in DPM-solver $^{[1]}$, which are commonly adopted for the high-order approximation in ODE solvers.
- **Algorithm Pseudocode:** We have provided the pseudocode of AdaptiveDiffusion and the greedy search algorithm in Appendix A.2.3.
>***Q2: Experimental Discussion***
- **Diversity of Models:** We further explore the application of our method to unconditional image generation tasks. Specifically, following Deepcache, we perform unconditional image generation on CIFAR10 and LSUN-Bedrooms. As shown in the table below, our method still achieves a larger speedup ratio and higher image quality than Deepcache on both benchmarks.
|Dataset|Method|FID|Speedup ratio|
|-|-|-|-|
|CIFAR10|Deepcache|10.17|2.07x|
||Ours|**7.97**|**2.09x**|
|LSUN|Deepcache|9.13|1.48x|
||Ours|**7.96**|**2.35x**|
- **Hyperparameter Sensitivity:** We have provided the sensitivity analysis of hyperparameters in Table 4.
- **Memory Usage:** We have listed the memory usage of different models in Table 1, 2, and 3.
>***Q3: Limitation and Future Work***
Currently, our work mainly focuses on the acceleration of diffusion with ODE solvers. In future work, we should explore the effectiveness of our method on more kinds of solvers and models. For example, the acceleration of SDE solvers should consider the interference of randomly-generated noise in the high-order estimation of the skipping strategy. For consistency models, the acceleration should consider the impact of distillation on the trajectory of image generation, which would change the continuity of noise prediction.
***Reference:***
[1] DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps. NeurIPS 2022.
---
Rebuttal Comment 1.1:
Comment: A reviewer has advised that we consider lowering the rating of the manuscript due to its lack of self-containment. A self-contained paper should include all necessary details to ensure that readers can fully understand its content. Specifically, the current manuscript fails to provide an explanation for the derivation of Equation 1, which leaves readers without a clear understanding of its origin and the methodology used to derive it.
---
Rebuttal 2:
Title: Clarification and Explanation of Eq. (1).
Comment: Dear Reviewer,
We respectfully disagree with the comment that our work lacks self-containment. We would like to clarify that Eq. (1) is the general formulation of the ODE solver, which can be found in Eq. (3.7) of the DPM-solver$^{[1]}$ and Algorithm 2 of the Euler Sampler$^{[2]}$. Since we would like to formulate a unified and general derivation of the ODE solvers, the coefficients of $x_i$ and $\epsilon_\theta$ are symbolized as $f(i)$ and $g(i)$, respectively. According to the formulations in DPM-solver and Euler Sampler, we can easily get the properties of $f(i)$ and $g(i)$ as mentioned in the rebuttal.
We would like to emphasize that Eq. (1) is not a novel contribution of our work but rather a **summary** of existing formulations of ODE solvers. **We have provided the necessary citations in the original manuscript to support this (See Line 105 of the manuscript)**. If there are any remaining misunderstandings, we are open to further discussion and would greatly appreciate the opportunity to clarify them.
*Reference:*
[1] DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps. NeurIPS 2022.
[2] Elucidating the Design Space of Diffusion-Based Generative Models. NeurIPS 2022. | Summary: ## Summary
* This paper proposes a greedy approach that accelerates Probability Flow ODE solvers for text-to-image diffusion models. Empirical results on SD 1.5 and SDXL with multiple solvers (DDIM, DPM, Euler) demonstrate the advantage of their approach over previous acceleration techniques.
Strengths: ## Strength
* The idea of prompt-adaptive acceleration is interesting and promising. Since previous solvers designed for the more general PF-ODE are not adaptive to different prompts, it is quite natural for a prompt-adaptive approach to obtain better results.
* The empirical advantage over previous accelerations is obvious, especially in terms of image quality such as PSNR and LPIPS.
Weaknesses: ## Weakness
* A new trend in image generation diffusion models is the adoption of Flow Matching with Optimal Transport (FM-OT) / Rectified Flow (RF) (see "Learning to Generate and Transfer Data with Rectified Flow"). These lines of work adopt a forward SDE that is neither VP nor VE; their special SDE yields a constant velocity in the corresponding PF-ODE. Rectified Flow can even achieve single-step sampling with this PF-ODE, whose path is very close to a straight line and can be solved with fewer steps than the VP/VE SDE. The latest Stable Diffusion 3 already adopts this type of diffusion. These efforts in the diffusion community also speed up sampling, not through solvers but through different models. These lines of work should be discussed, as they share the same goal as this paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: ## Questions
* For now, the results are reported on 3 different ODE solvers. Can the proposed approach be applied to SDE solvers too? The SDE sometimes has an advantage over the ODE in terms of sample quality (see "Closing the ODE-SDE gap in score-based diffusion models through the Fokker-Planck equation").
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >***Q1: Discussion of Single-step Sampling Works***
Thank the reviewer for the valuable suggestion. We will provide a detailed discussion in the revised manuscript. Here is a brief discussion of single-step sampling works.
In addition to the acceleration paradigms mentioned in Sec. 2.2, a recent training-based acceleration paradigm is increasingly receiving attention from the community. Different from previous acceleration works that reduce sampling steps or model size at inference time, this paradigm adopts the idea of flow-matching optimal transport to directly achieve single-step sampling during training, whose trajectory is approximately a straight line $^{[1,2]}$. Compared with this new trend, our method highlights the training-free acceleration of multi-step sampling models, with no need to train a new efficient diffusion model.
>***Q2: Effectiveness on SDE solvers***
Compared with the ODE solver, the SDE solver includes an additional noise term in the latent update, which cannot be predicted from previously generated random noises. When the magnitude of the random noise is not negligible, the third-order derivative of the neighboring latents cannot accurately evaluate the difference between neighboring noise predictions. Therefore, to apply our method to SDE solvers, we need an additional indicator that decides whether the randomly generated noise is small enough, or changes stably enough, to trigger the third-order estimator. In this case, we design an additional third-order estimator for the scaled randomly generated noise. When the third-order derivatives of both the latent and the scaled noise are under their respective thresholds, the noise prediction can be skipped by reusing the cached model output.
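The dual-criterion decision described above can be sketched as follows (an illustrative Python sketch; `delta_x` and `delta_n` are hypothetical thresholds, not the paper's tuned values):

```python
import numpy as np

def can_skip_sde(latents, noises, delta_x, delta_n):
    """Dual third-order skip criterion for SDE solvers (illustrative sketch).

    `latents` / `noises` hold the four most recent latents and scaled random
    noises, newest last. Skipping is allowed only when the third-order
    difference of BOTH sequences is below its respective threshold.
    """
    def d3_norm(seq):
        # third-order finite difference: s_t - 3 s_{t-1} + 3 s_{t-2} - s_{t-3}
        a, b, c, d = seq[-4], seq[-3], seq[-2], seq[-1]
        return float(np.linalg.norm(d - 3 * c + 3 * b - a))

    return d3_norm(latents) < delta_x and d3_norm(noises) < delta_n
```

For example, a linearly changing latent sequence has a zero third-order difference, so the latent criterion passes; a sudden jump in the injected noise blocks the skip.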
To validate the effectiveness of our improved method, we conduct experiments for SDXL with the SDE-DPM solver on COCO2017. The results are shown in the following table. Compared with Deepcache, our method can achieve higher image quality with a comparable speedup ratio, indicating the effectiveness of AdaptiveDiffusion on SDE solvers.
|Method|PSNR $\uparrow$|LPIPS $\downarrow$|FID $\downarrow$|Latency (s)|Speedup Ratio|
|-|-|-|-|-|-|
|Deepcache|16.44|0.346|8.15|**9.2**|**1.63x**|
|Ours|**18.80**|**0.232**|**6.03**|9.8|1.53x|
***Reference:***
[1] Flow Straight and Fast: Learning to Generate and Transfer Data with Rectified Flow. ICLR 2023.
[2] Flow matching for generative modeling. ICLR 2023.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal, I still recommend to accept this paper.
---
Reply to Comment 1.1.1:
Comment: Thank the reviewer for your dedicated time and effort in reviewing our submission. Your valuable and positive feedback is greatly appreciated. | Summary: This paper proposes a strategy to speed up image and video diffusion generative models. The speed-up is achieved by skipping denoising steps. The authors suggest implementing an adaptive skipping schedule, where the decision of which steps to skip depends on the processed image or video. Specifically, the proposed algorithm calculates the norm of the third-order derivative in the latent space. It then skips denoising steps when the norm of the third-order derivative is below a predefined threshold while limiting the number of consecutive skips to a set maximum.
The authors evaluate the proposed skipping strategy through various experiments with image and video diffusion models, comparing the speed-up and quality of the generated content with DeepCache [1] and Adaptive DPM-Solver [2]. In most experiments, the proposed skipping strategy achieves higher speed-up and better image and video quality than competitors.
[1] Xinyin Ma, Gongfan Fang, and Xinchao Wang. Deepcache: Accelerating diffusion models for free. In Conference on Computer Vision and Pattern Recognition (CVPR), 2024
[2] Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. Dpm-solver: A fast ode solver for diffusion probabilistic model sampling in around 10 steps. arXiv preprint arXiv:2206.00927, 2022.
Strengths: The proposed skipping strategy leverages the generated content to speed up image and video diffusion generative models through an adaptive skipping scheme. In experiments with image and video generation, the authors demonstrate that this adaptive skipping scheme can achieve higher speed and better quality than competitors. In my opinion, the concept of using an adaptive skipping scheme to speed up diffusion generative models is potentially valuable and could be of interest to the research community.
Weaknesses: The choice of employing the norm of the third-order derivative as a criterion for skipping denoising steps was made empirically. Theorem 1 (Equation 3) explains why it makes sense to consider the first-order derivative as a criterion for skipping. However, the authors empirically demonstrate that the first-order and second-order derivatives barely correlate with an optimal skipping schedule found by a greedy search. There is a lack of mathematical justification for choosing the third-order derivative.
Technical Quality: 3
Clarity: 3
Questions for Authors: It would be helpful if the authors could provide any mathematical justification or intuition (in addition to the empirical experiments in the paper) for their choice to employ third-order derivatives as the criterion for skipping denoising steps.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >***Q1: Theoretical Analysis of the Relationship between the Third-order Estimator and the Skipping Strategy.***
To explore the theoretical relationship between the third-order estimator and the skipping strategy, we need to formulate the difference between the neighboring noise predictions. According to Eq.(1), we can get the following first-order differential equations regarding the latent $x$.
$\quad \varDelta x_i=x_i-x_{i+1}=[1-f(i)]x_{i+1}-g(i)\cdot\epsilon_{\theta}(x_{i+1},t_{i+1})$;
$\varDelta x_{i-1}= x_{i-1}-x_i=[1-f(i-1)]x_i-g(i-1)\cdot\epsilon_{\theta}(x_i,t_i)$.
Now, let $u(i)\coloneqq 1-f(i-1)$, and we further derive the second-order differential equations based on the above equations.
$\varDelta x_{i-1}-\varDelta x_i = u( i ) x_i-u( i+1 ) x_{i+1}+g( i ) \cdot \epsilon_{\theta}( x_{i+1}, t_{i+1} ) -g( i-1 ) \cdot \epsilon_{\theta}( x_i, t_i )$
$\quad\quad\quad\quad\quad\ \ \ =u( i ) ( x_i-x_{i-1} ) +u( i ) x_{i-1}-u( i+1 ) ( x_{i+1}-x_i ) -u( i+1 ) x_i+g( i ) \cdot \epsilon_{\theta}( x_{i+1}, t_{i+1} ) -g( i-1 ) \cdot \epsilon_{\theta}( x_i, t_i )$
$\quad\quad\quad\quad\quad\ \ \ =u( i ) \varDelta x_{i-1}-u( i+1 ) \varDelta x_i+\varDelta [ u( i ) x_{i-1} ] +g( i ) \cdot \epsilon_{\theta}( x_{i+1}, t_{i+1} ) -g( i-1 ) \cdot \epsilon_{\theta}( x_i, t_i )$
$\quad\quad\quad\quad\quad\ \ \ =u( i ) \varDelta x_{i-1}-u( i+1 ) \varDelta x_i+\varDelta [ u( i ) x_{i-1} ] +g( i ) [ \epsilon_{\theta}( x_{i+1}, t_{i+1} ) -\epsilon_{\theta}( x_i, t_i ) ] +[ g( i ) -g( i-1 ) ] \epsilon_{\theta}( x_i, t_i )$
$\quad\quad\quad\quad\quad\ \ \ =u( i ) \varDelta x_{i-1}-u( i+1 ) \varDelta x_i+\varDelta [ u( i ) x_{i-1} ] -g( i ) \varDelta \epsilon_{\theta}^{i}-\varDelta g( i ) \cdot \epsilon_{\theta}( x_i, t_i )$.
After simplification of the above equation, we can get the following formulation:
$f( i-1 ) \varDelta x_{i-1}-f( i ) \varDelta x_i=\varDelta [ u( i ) x_{i-1} ] -g( i ) \varDelta \epsilon_{\theta}^{i}-\varDelta g( i ) \cdot \epsilon_{\theta}( x_i, t_i )$.
From the above equation, we can observe that the difference between noise predictions $\varDelta \epsilon_{\theta}^{i}$ is related to the first- and second-order derivatives of $x_i$, as well as the noise prediction $\epsilon_{\theta}( x_i, t_i )$. Therefore, it would be difficult to estimate the difference without $\epsilon_{\theta}( x_i, t_i )$. Now we consider the third-order differential equation. From the above equation, we further obtain the following formulation.
$f( i ) \varDelta x_i-f( i+1 ) \varDelta x_{i+1}=\varDelta [ u( i+1 ) x_i ] -g( i+1 ) \varDelta \epsilon_{\theta}^{i+1}-\varDelta g( i+1 ) \cdot \epsilon_{\theta}( x_{i+1}, t_{i+1} )$.
$\Rightarrow \varDelta [ f( i-1 ) \varDelta x_{i-1} ] -\varDelta [ f( i ) \varDelta x_i ] =\varDelta ^{( 2 )}[ u( i ) x_{i-1} ] -\varDelta [ g( i ) \varDelta \epsilon_{\theta}^{i} ] -\varDelta [ \varDelta g( i ) \cdot \epsilon_{\theta}( x_i, t_i ) ]$.
$\Rightarrow \varDelta [ \varDelta g( i ) \cdot \epsilon_{\theta}( x_i, t_i ) ] =-\varDelta ^{( 2 )}[ f( i-1 ) \varDelta x_{i-1} ] +\varDelta ^{( 2 )}[ u( i ) x_{i-1} ] -\varDelta [ g( i ) \varDelta \epsilon_{\theta}^{i} ]$.
From the above equation, it can be observed that the difference of the neighboring noise predictions is explicitly related to the third- and second-order derivatives of $x_i$, as well as the second-order derivative of $\epsilon_\theta^{i}$. Since $\lim_{i\rightarrow 0} f( i ) =1,\lim_{i\rightarrow 0} u( i ) =0,\lim_{i\rightarrow 0} g( i ) =0$, we can finally get the conclusion that $\varDelta \epsilon_{\theta}^{i} \big|_{i \rightarrow 0} \approx \mathcal{O} ( \varDelta ^{( 3 )}x_{i-1} )$. | Summary: To enhance the sampling speed in diffusion models, this paper introduces the AdaptiveDiffusion framework, which utilizes a skipping strategy. Specifically, this strategy is guided by the third-order latent difference, assessing the stability between timesteps throughout the denoising process. Experimental results on image and video diffusion models demonstrate the superiority of the proposed adaptive sampling framework.
Strengths: 1. The motivation is reasonable: to accelerate the sampling speed by reducing redundant time steps.
2. The contribution is helpful to the diffusion community.
3. The figures are well made and the presentation is readable.
4. The proposed AdaptiveDiffusion is effective on multiple tasks.
Weaknesses: 1. In my humble opinion, some improvements are marginal to me, especially on ImageNet 256×256.
2. I am not sure Theorem 1 is guaranteed with a larger sampling step size.
3. Since many methods investigate reducing sampling steps by employing higher-order solutions, this approach lacks novelty.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Can you derive Theorem 1 when the step size is large? A large step size will magnify the upper bound on the error, and fast sampling always amounts to large-step-size sampling.
2. Since the proposed method aims to accelerate the sampling speed, can it reduce the NFEs?
3. Is the proposed method effective on pure image generation?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Please see in Weaknesses and Questions. If all of my concerns are addressed, I will definitely improve my score.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Q1: Analysis of Improvements.**
We describe the advantages of our method in two aspects.
- **Novel Method Design:** Endorsed by three other reviewers, AdaptiveDiffusion is the pioneering framework that accelerates the diffusion process adaptively for diverse prompts. Unlike the SOTA method Deepcache, which caches features of multiple blocks uniformly across all stages and imposes a static caching rule, our approach introduces adaptive acceleration with theoretical underpinnings and memory efficiency via a single cache unit for predicted latent.
- **Comprehensive Performance Improvements:** As noted by the other reviewers, AdaptiveDiffusion exhibits strong generalization across various models and tasks. Versus SOTA methods like DPM-solver and Deepcache, our method achieves a comparable or higher speedup ratio while maintaining superior image quality in both static image and video generation tasks. As shown in Tab. 2, due to **the large sampling step number and limited prompt diversity**, the conditional image generation task on ImageNet 256x256 is relatively easy for acceleration, with both methods achieving roughly 6x speedup with negligible quality loss (~0.09 LPIPS). Notably, for specific categories, e.g., 607th and 854th, etc., AdaptiveDiffusion reduces the NFEs to approximately 35 from 250, yielding ~**7x** speedup. When the generation diversity and complexity increase, e.g., video generation, the superiority of AdaptiveDiffusion is clearly demonstrated in Tab. 3.
> **Q2: Theorem 1 with A Large Step Size.**
We would like to clarify that the step size is not mentioned as a condition for Theorem 1. If understood correctly, the large step size mentioned by the reviewer may refer to the large skip step of noise prediction. In this case, as the one- and two-step skipping schemes have been explored in Appendix A.2.1 and A.2.2 respectively, we further derive the error estimation of an arbitrary skipping scheme.
Taking $i$-th step to perform $k$-step ($k\geq 2$) skipping of noise prediction, we obtain the following update formulations.
$\quad x_i=f(i)\ x_{i+1}-g(i)\ \epsilon_\theta(x_{i+1},t_{i+1})$;
$x_{i-1}=f(i-1)\ x_i-g(i-1)\ \epsilon_\theta(x_{i+1},t_{i+1})$;
$x_{i-2}=f(i-2)\ x_{i-1}-g(i-2)\ \epsilon_\theta(x_{i+1},t_{i+1})$;
$\vdots$
$x_{i-k}=f(i-k)\ x_{i-k+1}-g(i-k)\ \epsilon_\theta(x_{i+1},t_{i+1})$.
$\Rightarrow\varepsilon_{i-k}=\|x_{i-k}-x_{i-k}^{ori}\|$
$\quad\quad\quad \ =\| f(i-k)(x_{i-k+1}-x_{i-k+1}^{ori})-g(i-k)[\epsilon_\theta(x_{i+1},t_{i+1})-\epsilon_\theta(x_{i-k+1},t_{i-k+1})]\|$
$\quad\quad\quad \ \leqslant f(i-k)\varepsilon_{i-k+1}+g(i-k)\|\epsilon_\theta(x_{i+1},t_{i+1}) -\epsilon_\theta(x_{i-k+1},t_{i-k+1})\|$
$\quad\quad\quad \ \leqslant\sum_{m=1}^{k-1}{\|h^{k-m+1}(i-m)\cdot\mathcal{O}(t_{i-m+1}-t_{i-m+2})\|}+\sum_{m=1}^{k-1}{\|h^{k-m+1}(i-m)\cdot\mathcal{O}(x_{i-m+1}-x_{i-m+2})\|}$
$\quad\quad\quad\quad\ +\|g(i-k)\cdot\mathcal{O}(x_{i-k+1}-x_{i-k+2})\|+\|g(i-k)\cdot\mathcal{O} (t_{i-k+1}-t_{i-k+2})\|$
$\quad\quad\quad \ =\sum_{m=1}^k{\mathcal{O}(t_{i-m+1}-t_{i-m+2})+\mathcal{O}(x_{i-m+1}-x_{i-m+2})}$.
The derivation utilizes the property that $\|\epsilon_\theta(x_i,t_{i+1})-\epsilon_\theta(x_{i+1},t_{i+1})\|$ and $\|\epsilon_\theta(x_i,t_i)-\epsilon_\theta(x_i,t_{i+1})\|$ are upper-bounded by $\mathcal{O}(x_{i}-x_{i+1})$ and $\mathcal{O}(t_i-t_{i+1})$ respectively according to the Lipschitz continuity. Here, $h^{k-m+1}(i-m)\coloneqq g(i-m)\prod\nolimits_{j=1}^{k-m}{f(i-m-j)}$.
It is observed that the error of $k$-step skipping is upper-bounded by the accumulation of previous latent differences. Thus, if the skipping step of noise prediction is large, the upper bound of the error will naturally increase, as also empirically demonstrated by Fig. 5(b).
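This growth is easy to reproduce numerically. The toy rollout below (our own illustrative stand-in with a linear noise predictor and fixed coefficients, not the paper's model) reuses a stale prediction for `skip_k` consecutive steps and shows the deviation from the exact rollout growing strictly with the skip length:

```python
import numpy as np

def rollout(x0, steps, skip_k=0):
    """Toy denoising rollout: after the first step, reuse the cached noise
    prediction for `skip_k` consecutive steps (0 = no skipping)."""
    f, g = 0.95, 0.05
    eps = lambda x: 0.1 * x  # stand-in linear noise predictor
    x = x0
    cached = None
    for i in range(steps):
        if cached is not None and 1 <= i <= skip_k:
            e = cached            # reused (stale) prediction
        else:
            e = eps(x)            # fresh prediction
            cached = e
        x = f * x - g * e         # latent update always runs
    return x

x0 = np.ones(4)
exact = rollout(x0, 10, skip_k=0)
# deviation from the exact trajectory grows with the number of skipped steps
errs = [np.linalg.norm(rollout(x0, 10, skip_k=k) - exact) for k in (1, 2, 3)]
assert errs[0] < errs[1] < errs[2]
```

This matches the bound above: each additional skipped step adds another accumulated latent-difference term to the error.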
> **Q3: Novelty of AdaptiveDiffusion.**
We first clarify our acceleration mechanism. Then, we will clarify our method's novelty from two aspects.
**1. Mechanism:**
Generally, the diffusion process comprises two stages at each step: noise prediction and latent update. Given the preset number of inference steps, the main purpose of AdaptiveDiffusion is to **reduce the number of function evaluations (NFEs)** while keeping the latent update number (sampling steps) **unchanged**. By reducing NFEs, we can significantly accelerate the diffusion process.
**2. Novelty:**
- **Motivation-level Novelty:** Prior studies like EDM $^{[1]}$ and DPM-solver leveraged high-order approximations between sampling steps to enhance image quality. In contrast, AdaptiveDiffusion employs these approximations to adaptively reduce NFEs. That is, while earlier high-order methods aimed at refining generation quality given a fixed sampling step number, our work prioritizes adaptive efficiency without compromising image quality.
- **Algorithm-level Novelty:** Our 3rd-order estimator is distinct in its adaptive acceleration tailored to various prompts. Unlike other high-order methods that flexibly choose solver orders for subsequent high-quality generation, our estimator is both empirically and theoretically constrained to 3rd-order approximations to decide whether to skip noise prediction, as shown in Sec. 3.3 and the global response. To our knowledge, this insight is the first in the field of diffusion model acceleration.
Briefly, AdaptiveDiffusion targets a different motivation and is a novel approach to the acceleration community.
> **Q4: Experiments on Pure Image Generation**
Following Deepcache, we perform image generation using DDPMs on CIFAR10 and LSUN-Bedrooms and build our method upon 100-step DDIM for DDPM. As shown in the table below, our method still achieves a larger speedup ratio and higher image quality than Deepcache on both benchmarks.
|Dataset|Method|FID|Speedup ratio|
|-|-|-|-|
|CIFAR10|Deepcache|10.17|2.07x|
||Ours|**7.97**|**2.09x**|
|LSUN|Deepcache|9.13|1.48x|
||Ours|**7.96**|**2.35x**|
***Reference:***
[1] Elucidating the Design Space of Diffusion-Based Generative Models. NeurIPS 2022.
---
Rebuttal Comment 1.1:
Comment: Thanks for your great efforts!
The rebuttal addresses most of my concerns. I will increase my score.
---
Rebuttal 2:
Title: Look forward to further discussion
Comment: Dear Reviewer XP4x:
Thank you for your precious time on the review.
As the deadline for the discussion period is approaching (Author-reviewer discussion will be closed on Aug 13 11:59pm AoE), we sincerely hope that our response can address your concerns and we are looking forward to further discussion on any other issues regarding our manuscript.
Best regards,
Authors of Paper 230
---
Rebuttal 3:
Title: Sincere Appreciation and Humble Reminder
Comment: Dear Reviewer,
Thank you for taking the time to review our work and for your recognition! Your positive feedback is greatly appreciated. We noticed that the score has not been updated. There seems to be no final rating box this time, so the score may have to be adjusted in the original rating box. We would greatly appreciate it if you could make the adjustment.
Best regards,
The Authors
Rebuttal: Dear AC and reviewers,
We are deeply appreciative of the reviewers for their valuable time and thoughtful comments. Their feedback has reinforced our confidence in the paper's **clear presentation and organization** (Reviewer XP4x, kqHE, rWvo, hdh3), **the innovative approach** of AdaptiveDiffusion in addressing computational inefficiencies (Reviewer kqHE, rWvo, hdh3), and **the comprehensive experimental validation** across various tasks and models (Reviewer XP4x, kqHE, rWvo, hdh3) with **notable improvements** in speed and quality (Reviewer kqHE, hdh3).
We have diligently addressed each reviewer's critical feedback. Our goal is to resolve all issues and improve our work through this collaborative process. We will integrate these valuable comments into our revision, confident they will significantly elevate our work's quality and benefit the field.
Here is a summary of what we have done in the rebuttal phase.
- We conduct more experiments to cover the concerns of reviewers.
- More experiments on pure image generation using DDPMs on CIFAR10 and LSUN.
- Experiments using SDE solvers.
- We provide the derivation of theorem 1 with a large step size.
- We provide the theoretical analysis of the relationship between the third-order estimator and the skipping strategy.
- We provide a further discussion on the novelty of our work.
- We provide a discussion on single-step sampling works.
Below is the global response that might be commonly mentioned in the responses to several reviewers' comments.
> ***Theoretical Analysis of the Relationship between the Third-order Estimator and the Skipping Strategy.***
We supplement the theoretical relationship between the third-order estimator and the skipping strategy. Specifically, we formulate the difference between neighboring noise predictions. According to Eq. (1), we obtain the following first-order differential equations regarding the latent $x$.
$\quad \varDelta x_i=x_i-x_{i+1}=[1-f(i)]x_{i+1}-g(i)\cdot\epsilon_{\theta}(x_{i+1},t_{i+1})$;
$\varDelta x_{i-1}= x_{i-1}-x_i=[1-f(i-1)]x_i-g(i-1)\cdot\epsilon_{\theta}(x_i,t_i)$.
Now, let $u(i)\coloneqq 1-f(i-1)$, and we further derive the second-order differential equations based on the above equations.
$\varDelta x_{i-1}-\varDelta x_i = u( i ) x_i-u( i+1 ) x_{i+1}+g( i ) \cdot \epsilon_{\theta}( x_{i+1}, t_{i+1} ) -g( i-1 ) \cdot \epsilon_{\theta}( x_i, t_i )$
$\quad\quad\quad\quad\quad\ \ \ =u( i ) ( x_i-x_{i-1} ) +u( i ) x_{i-1}-u( i+1 ) ( x_{i+1}-x_i ) -u( i+1 ) x_i+g( i ) \cdot \epsilon_{\theta}( x_{i+1}, t_{i+1} ) -g( i-1 ) \cdot \epsilon_{\theta}( x_i, t_i )$
$\quad\quad\quad\quad\quad\ \ \ =u( i ) \varDelta x_{i-1}-u( i+1 ) \varDelta x_i+\varDelta [ u( i ) x_{i-1} ] +g( i ) \cdot \epsilon_{\theta}( x_{i+1}, t_{i+1} ) -g( i-1 ) \cdot \epsilon_{\theta}( x_i, t_i )$
$\quad\quad\quad\quad\quad\ \ \ =u( i ) \varDelta x_{i-1}-u( i+1 ) \varDelta x_i+\varDelta [ u( i ) x_{i-1} ] +g( i ) [ \epsilon_{\theta}( x_{i+1}, t_{i+1} ) -\epsilon_{\theta}( x_i, t_i ) ] +[ g( i ) -g( i-1 ) ] \epsilon_{\theta}( x_i, t_i )$
$\quad\quad\quad\quad\quad\ \ \ =u( i ) \varDelta x_{i-1}-u( i+1 ) \varDelta x_i+\varDelta [ u( i ) x_{i-1} ] -g( i ) \varDelta \epsilon_{\theta}^{i}-\varDelta g( i ) \cdot \epsilon_{\theta}( x_i, t_i )$.
After simplification of the above equation, we can get the following formulation:
$f( i-1 ) \varDelta x_{i-1}-f( i ) \varDelta x_i=\varDelta [ u( i ) x_{i-1} ] -g( i ) \varDelta \epsilon_{\theta}^{i}-\varDelta g( i ) \cdot \epsilon_{\theta}( x_i, t_i )$.
From the above equation, we can observe that the difference between noise predictions $\varDelta \epsilon_{\theta}^{i}$ is related to the first- and second-order derivatives of $x_i$, as well as the noise prediction $\epsilon_{\theta}( x_i, t_i )$. Therefore, it would be difficult to estimate the difference without $\epsilon_{\theta}( x_i, t_i )$. Now we consider the third-order differential equation. From the above equation, we further obtain the following formulation.
$f( i ) \varDelta x_i-f( i+1 ) \varDelta x_{i+1}=\varDelta [ u( i+1 ) x_i ] -g( i+1 ) \varDelta \epsilon_{\theta}^{i+1}-\varDelta g( i+1 ) \cdot \epsilon_{\theta}( x_{i+1}, t_{i+1} )$.
$\Rightarrow \varDelta [ f( i-1 ) \varDelta x_{i-1} ] -\varDelta [ f( i ) \varDelta x_i ] =\varDelta ^{( 2 )}[ u( i ) x_{i-1} ] -\varDelta [ g( i ) \varDelta \epsilon_{\theta}^{i} ] -\varDelta [ \varDelta g( i ) \cdot \epsilon_{\theta}( x_i, t_i ) ]$.
$\Rightarrow \varDelta [ \varDelta g( i ) \cdot \epsilon_{\theta}( x_i, t_i ) ] =-\varDelta ^{( 2 )}[ f( i-1 ) \varDelta x_{i-1} ] +\varDelta ^{( 2 )}[ u( i ) x_{i-1} ] -\varDelta [ g( i ) \varDelta \epsilon_{\theta}^{i} ]$.
From the above equation, it can be observed that the difference between neighboring noise predictions is explicitly related to the third- and second-order derivatives of $x_i$, as well as the second-order derivative of $\epsilon_\theta^{i}$. Since $\lim_{i\rightarrow 0} f( i ) =1,\lim_{i\rightarrow 0} u( i ) =0,\lim_{i\rightarrow 0} g( i ) =0$, we can finally conclude that $\varDelta \epsilon_{\theta}^{i}\big|_{i\rightarrow 0}\approx \mathcal{O} ( \varDelta ^{( 3 )}x_{i-1} )$.
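The gating idea implied by this bound can be illustrated with a toy numerical sketch: when the third-order difference of the latent trajectory is small, the change in the noise prediction should also be small, so a cached prediction can be reused. Everything below (the synthetic trajectory, the threshold, the helper name) is invented for illustration and is not the paper's implementation.

```python
import numpy as np

def third_order_diff(xs):
    """Third-order backward difference over the last four latents."""
    x0, x1, x2, x3 = xs[-4], xs[-3], xs[-2], xs[-1]
    return x3 - 3 * x2 + 3 * x1 - x0

# Toy latent trajectory: a smooth curve sampled at "timesteps".
ts = np.linspace(0.0, 1.0, 50)
latents = [np.sin(2 * np.pi * t) * np.ones(4) for t in ts]

# Skip the (expensive) noise prediction whenever the third-order
# difference is small -- the gating idea suggested by the bound above.
threshold = 1e-2  # illustrative value; would be tuned in practice
skipped = 0
for i in range(4, len(latents)):
    if np.abs(third_order_diff(latents[:i])).max() < threshold:
        skipped += 1  # reuse the cached epsilon instead of a model call
print(f"skipped {skipped} of {len(latents) - 4} steps")
```

On a smooth trajectory like this one, almost every step passes the gate; in a real sampler the criterion would be evaluated on the actual latents produced by the solver.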
Best Regards,
Authors of Paper 230
Deep Discriminative to Kernel Density Graph for In- and Out-of-distribution Calibrated Inference | Reject | Summary: This paper proposes new methods, Kernel Density Forest (KDF) and Kernel Density Network (KDN), to address issues in confidence calibration for traditional deep learning models and random forests. The motivation stems from the existing literature that deep neural networks using ReLU tend to exhibit high confidence on out-of-distribution (OOD) data due to affine transformations. The proposed methods improve confidence calibration for both in-distribution (ID) and OOD data by partitioning the feature space into polytopes and replacing affine functions within each polytope with Gaussian kernels. Experimental results demonstrate that the proposed methods outperform existing techniques in terms of calibration performance.
Strengths: Originality:
The approach of replacing affine functions within polytopes with Gaussian kernels is novel. The proposed methods address the confidence calibration problem for both ID and OOD data simultaneously, providing an integrated solution to these calibration issues.
Quality:
The theoretical proofs are robust, and the effectiveness of the proposed methods is validated through both simulations and real-world datasets.
Clarity:
The paper is written clearly and concisely.
Weaknesses: Validity of Metrics:
The paper evaluates calibration using Maximum Calibration Error (MCE) for ID data, but does not justify the use of MCE over Expected Calibration Error (ECE) or Adaptive Calibration Error (ACE)[1]. A more detailed explanation and comparison of these metrics would enhance the paper's credibility. Additionally, the definition and justification for OCE (Out-of-distribution Calibration Error) would benefit from a similar comparison with ACE.
[1] https://arxiv.org/abs/1904.01685
Experiments:
To emphasize the effectiveness of the proposed methods, a comparison of execution times would be beneficial, especially since practical applications like web Click-Through Rate (CTR) estimation place significant importance on runtime. The paper should clarify what the noise in Table 1 represents. It would also be advantageous to include experiments on larger and more varied datasets, as well as an evaluation of the methods' performance when combined with in-training calibration methods, which are commonly used alongside post-hoc calibration methods.
Technical Quality: 2
Clarity: 3
Questions for Authors: Does the formulation of OCE assume a lack of usable features from the in-distribution domain? What practical and theoretical conditions are required for this assumption? For instance, it is known that large parameter models can improve OOD ECE even when trained with ERM on in-distribution features [2].
[2] https://arxiv.org/abs/2307.08187
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: This paper mentions computational complexity and limitations in practical applications, but lacks detailed experimental results to support these claims. Including such data would provide valuable insights for future research and implementation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful comments. We are pleased that you recognize the efficacy of our proposed approach in providing an integrated solution for both ID and OOD calibration problems in traditional deep learning models and random forests. We believe we have addressed all your concerns in our responses below. If you find these responses satisfactory, we would greatly appreciate it if you could consider updating your score.
- The paper evaluates calibration using Maximum Calibration Error (MCE) for ID data, but does not justify the use of MCE over Expected Calibration Error (ECE) or Adaptive Calibration Error (ACE)...
> Thanks for the reference and feedback. We noticed that we actually evaluated ECE but called it MCE (please see kdg_code/kdg/utils.py line 36 in the provided code and the definition of ECE in Section 2.1 of [1]). We apologize for the mistake and have corrected it throughout the draft. Additionally, we have reported ACE in the attached pdf. ECE and ACE yield nearly identical results when the number of classes is low, whereas ACE gives a better estimate of calibration when the number of classes is large (for example, the new cifar100 results) [1]. This is because ECE considers the calibration error only for the predicted class, whereas ACE considers all classes. Our 45 datasets from the OpenML CC-18 suite have at most 10 classes [4], so we will add ACE curves for the OpenML datasets in the appendix; given the small number of classes, they are similar to the current ECE curves. We will add the above discussion to the paper and cite the provided reference.
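For reference, the two metrics discussed above can be sketched in a few lines. The binning choices (10 equal-width bins for ECE, 10 equal-count ranges per class for ACE) are illustrative defaults, not the exact evaluation code from kdg_code.

```python
import numpy as np

def ece(probs, labels, n_bins=10):
    """Expected Calibration Error over the predicted class, equal-width bins."""
    conf = probs.max(axis=1)
    pred = probs.argmax(axis=1)
    correct = (pred == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    err = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            err += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return err

def ace(probs, labels, n_ranges=10):
    """Adaptive Calibration Error: equal-count ranges, averaged over all classes."""
    n, k = probs.shape
    total = 0.0
    for c in range(k):
        order = np.argsort(probs[:, c])
        for chunk in np.array_split(order, n_ranges):
            if len(chunk) == 0:
                continue
            acc = (labels[chunk] == c).mean()
            conf = probs[chunk, c].mean()
            total += abs(acc - conf)
    return total / (k * n_ranges)

# Synthetic predictions: a mostly-correct 3-class softmax model.
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=500)
logits = rng.normal(size=(500, 3))
logits[np.arange(500), labels] += 2.0
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print(f"ECE={ece(probs, labels):.3f}  ACE={ace(probs, labels):.3f}")
```

With few classes the two numbers track each other closely, which matches the observation above that the discrepancy only becomes visible at cifar100 scale.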
- Additionally, the definition and justification for OCE (Out-of-distribution Calibration Error) would benefit from a similar comparison with ACE.
> OCE measures OOD calibration and assumes that true class conditional priors for the datasets are known. On the contrary, ACE is used for measuring ID calibration and is a surrogate measure used when true posteriors are not known. For example, in Figure 1 where we know the true distribution, we have used Hellinger distance from the true posteriors instead of ACE. Please see the global response for the corrected definition of OCE. We will add the above clarification in the camera ready version.
- To emphasize the effectiveness of the proposed methods, a comparison of execution times would be beneficial, especially since practical applications like web Click-Through Rate (CTR) estimation place significant importance on runtime.
> Please see the global response.
- The paper should clarify what the noise in Table 1 represents.
> We sample noise samples of size $32 \times 32 \times 3$ according to a Uniform distribution with pixel values within range [0,1]. We will add this text in the draft.
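As a concrete sketch of that noise set (the sample count of 1000 is illustrative; the text specifies only the shape and value range):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
# Uniform-noise "images" of shape 32x32x3 with pixel values in [0, 1].
noise_ood = rng.uniform(low=0.0, high=1.0, size=(1000, 32, 32, 3))
print(noise_ood.shape, float(noise_ood.min()), float(noise_ood.max()))
```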
- It would also be advantageous to include experiments on larger and more varied datasets…
> We have added additional vision experiments using cifar100 (100 classes) and SVHN (10 classes and a larger training set) as ID datasets, as suggested by reviewer fbQU. We emphasize that cifar10, cifar100, and SVHN form some of the hardest ID/OOD pairs according to various papers [2, 3], and hence they are adopted as benchmark datasets by many papers in the literature. Note that experiments with extremely large datasets like ImageNet (14 million images) are computationally and storage-wise expensive with our current implementation within the rebuttal deadline; we will pursue extremely large datasets in future work. Many relevant papers on OOD calibration use only small- and mid-sized datasets [5, 6, 7]. We acknowledge this limitation in the paper.
- An evaluation of the methods' performance when combined with in-training calibration methods, which are commonly used alongside post-hoc calibration methods.
> We have used ACET as an in-training approach, followed by our approach, to run the proposed experiment with CIFAR100 as ID data. Note that ACET improves OOD calibration, leaving less room for improvement for KDN; however, KDN improves the ID calibration of ACET. ACET+KDN has the same ECE and ACE as KDN, with OCE on CIFAR-10, SVHN, and Noise of 0.12, 0.04, and 0.04, respectively. In the end, KDN performs nearly identically with or without in-training approaches. Moreover, ACET adds a significant computational burden to the whole process. The authors in [8] observed a similar phenomenon. We think this experiment provides important insights about our approach, and we will add it to the appendix.
- Does the formulation of OCE assume a lack of usable features from the in-distribution domain? What practical and theoretical conditions are required for this assumption?
> The formulation of OCE does not assume a lack of usable features from the ID domain. Our goal is defined in Eq. 1 and OCE measures the OOD calibration error, i.e., difference from the maximum of the class conditional priors in the OOD region according to Eq. 1. For calculating OCE, we need to know the true priors.
- This paper mentions computational complexity and limitations in practical applications, but lacks detailed experimental results to support these claims. Including such data would provide valuable insights for future research and implementation.
> Please see the global response.
[1] https://arxiv.org/abs/1904.01685
[2] Nalisnick, Eric, et al. "Do deep generative models know what they don't know?."
[3] Fort, Stanislav. "Exploring the limits of out-of-distribution detection."
[4] Bischl, Bernd, et al. "Openml benchmarking suites." arXiv preprint arXiv:1708.03731 (2017).
[5] Gardner, Josh. "Benchmarking distribution shift in tabular data with tableshift."
[6] Borisov, Vadim, et al. "Deep neural networks and tabular data: A survey."
[7] Ulmer, Dennis. "Trust issues: Uncertainty estimation does not enable reliable ood detection on medical tabular data."
[8] Wang, Deng-Bao. "Rethinking calibration of deep neural networks: Do not be afraid of overconfidence."
---
Rebuttal Comment 1.1:
Title: Official Comment by Reviewer fbQU
Comment: I thank the authors for their detailed response. I am satisfied with the answers provided, and I would like to raise my score.
---
Reply to Comment 1.1.1:
Title: Thanks!
Comment: Thank you so much for your effort and valuable feedback! They improved our work by a huge margin. | Summary: The paper proposes a novel approach for OOD detection by learning a posterior distribution that is calibrated for both ID and OOD individuals. It models the class-wise conditional distribution of features by a gaussian kernel respectively for a set of polytopes that cover the feature space. The tail property of gaussian kernels contribute to both ID and OOD calibration. Empirical evidence shows the power of the proposed algorithm across tabular and vision dataset under both ID and OOD settings.
Strengths: The paper is well motivated by the tradeoff between ID calibration and OOD calibration in current OOD detection approaches. The Gaussian-kernel technique has a clear geometric intuition. Compared to affine functions, the tail property ensures that the posterior distribution converges to the prior over labels when an OOD sample deviates far enough from the training support, as proved in Proposition 2. On the other hand, the interpolation by Gaussian kernels between neighboring polytopes contributes to ID calibration.
Weaknesses: The major concern is insufficient discussion over the research context of the paper, which renders it hard to precisely evaluate the contribution. The related work section is short. Section 2 shows that "OOD detection" is the closest area to this paper, but this keyword is totally absent from the introduction, where the research area is named "OOD confidence calibration". What is the relation between OOD detection and OOD confidence calibration?
The introduction also reveals two potential approaches for this area: discriminative and generative methods. There are also two settings: ID and OOD confidence calibration. The readers might expect to review current progress for all those categories in the related work section.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In Eq 3, how to ensure the non-negativity of the class-conditional density? The model of affine function might take negative values.
2. The model of class-conditional density with gaussian kernels, referring to Eq 5, resembles Kernel Density Estimation. Could the author please discuss the relation between KDE and the proposed method?
3. In Fig 1, KDF shows a weaker advantage for low dimensional settings. Could the author please explain why?
4. In Fig 2, why are OOD approaches absent? Fig 2 demonstrates the effectiveness of KDF compared to ID approaches in terms of OOD calibration. The readers could be more interested in the performance of OOD approaches.
5. In Table 1, it seems OOD approaches including ACET and ODIN can't even beat the parent model under OOD settings. Could the author please explain why?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The author has addressed limitations of their work in terms of sample complexity.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's thoughtful comments. We are glad to see that you recognize the effectiveness of our approach in balancing ID and OOD calibration. We believe we have addressed all your concerns in our responses below. If you find these satisfactory, we would be very grateful if you could consider updating your score.
- The major concern is insufficient discussion over the research context of the paper, which renders it hard to precisely evaluate the contribution. The related work section is short. Section 2 shows that "OOD detection" is the closest area to this paper, but this keyword is totally absent from the introduction, where the research area is named "OOD confidence calibration". What is the relation between OOD detection and OOD confidence calibration?
> We apologize for the confusion. Our work addresses both in- and out-of-distribution (ID and OOD) calibration. While traditional ID calibration methods like Isotonic and Sigmoid regression focus on achieving calibrated inference within the ID region, they do not address OOD calibration. Conversely, another line of literature is primarily concerned with detecting OOD points; approaches such as ACET, OE, and ODIN focus mainly on OOD detection rather than OOD calibration. Calibration is harder than detection, akin to how regression is harder than classification. To see this, consider the fact that a well-calibrated model can perform detection, but a model capable of detecting OOD points may not be calibrated. OOD detection works as long as there are two distinguishable score sets for ID and OOD points, whereas calibration aims at estimating the true predictive uncertainty of these points. To our knowledge, only Meinke et al. [1] explicitly addressed OOD calibration (Section 3, Theorem 1 in their paper), but they do not consider ID calibration. Our work treats calibration as a continuum between the ID and OOD regions rather than addressing the two problems separately. We will revise Section 2 to reflect this discussion.
[1] Meinke, Alexander, Julian Bitterwolf, and Matthias Hein. "Provably Robust Detection of Out-of-distribution Data (almost) for free." arXiv preprint arXiv:2106.04260 (2021).
- In Eq 3, how to ensure the non-negativity of the class-conditional density? The model of affine function might take negative values.
> The class-conditional density is non-negative because of ReLU activation (please see Section 2 in [1], Figure 1,2,3,4 in [2] for the details). We will clarify in the camera ready version.
[1] Hein, Matthias, Maksym Andriushchenko, and Julian Bitterwolf. "Why relu networks yield high-confidence predictions far away from the training data and how to mitigate the problem." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2019.
[2] Xu, Haoyin, et al. "When are deep networks really better than decision forests at small sample sizes, and how?." arXiv preprint arXiv:2108.13637 (2021).
- The model of class-conditional density with gaussian kernels, referring to Eq 5, resembles Kernel Density Estimation. Could the author please discuss the relation between KDE and the proposed method?
> This is an excellent point! We agree there are similarities between KDE and the proposed method. However, there is an indicator function in Eq. 5 which is implemented using the geodesic distance proposed in the draft (which is absent in KDE), which makes KDN, KDF scale better with higher dimensions. Moreover, the center and bandwidth of the Gaussians are estimated in a data-driven manner using the representation learned by the parent discriminative model. We will add the above clarification after Eq 5.
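To make the contrast with plain KDE concrete, here is a heavily simplified sketch of the kernel-per-polytope idea: the nearest-center rule below stands in for the paper's geodesic-distance indicator, and the center/bandwidth estimates are crude stand-ins for the data-driven estimation described above. All class and helper names are invented.

```python
import numpy as np

class PolytopeKDE:
    """Toy class-conditional model: one Gaussian kernel per 'polytope'.

    A stand-in for Eq. 5: centers and bandwidths are estimated from the
    training points falling in each polytope, and the indicator picks the
    nearest polytope instead of the paper's geodesic-distance rule.
    """

    def fit(self, X, y, polytope_ids):
        centers, self.bandwidths, self.class_counts = [], [], []
        self.classes = np.unique(y)
        for p in np.unique(polytope_ids):
            pts = X[polytope_ids == p]
            centers.append(pts.mean(axis=0))
            self.bandwidths.append(pts.std(axis=0).mean() + 1e-3)
            self.class_counts.append(
                np.array([(y[polytope_ids == p] == c).sum() for c in self.classes])
            )
        self.centers = np.array(centers)
        self.priors = np.bincount(y) / len(y)
        return self

    def predict_proba(self, X):
        out = np.empty((len(X), len(self.classes)))
        for i, x in enumerate(X):
            d = np.linalg.norm(self.centers - x, axis=1)
            r = int(d.argmin())                 # indicator: nearest polytope
            k = np.exp(-d[r] ** 2 / (2 * self.bandwidths[r] ** 2))
            # Far from every polytope the kernel vanishes, so the posterior
            # falls back to the class priors (the OOD-calibration property).
            lik = k * self.class_counts[r] / max(self.class_counts[r].sum(), 1)
            post = lik + (1 - k) * self.priors
            out[i] = post / post.sum()
        return out

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
model = PolytopeKDE().fit(X, y, polytope_ids=np.array([0] * 50 + [1] * 50))
print(model.predict_proba(np.array([[-2.0, -2.0], [50.0, 50.0]])))
```

The query near a training cluster gets a confident posterior, while the far-away query collapses to the priors; plain KDE would need to evaluate every kernel, whereas the indicator restricts the computation to one polytope.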
- In Fig 1, KDF shows a weaker advantage for low dimensional settings. Could the author please explain why?
> In Fig 1, each Random Forest is an ensemble of 500 decision trees. An ensemble of uncalibrated learners has improved calibration over the individual uncalibrated learner [1]. This phenomenon leaves less room for improvement for KDF in low dimensional settings. On the contrary, the deep-net models are standalone learners with poor calibration which can be improved a lot by KDN.
[1] Stickland, Asa Cooper, and Iain Murray. "Diverse ensembles improve calibration." arXiv preprint arXiv:2007.04206 (2020).
- In Fig 2, why are OOD approaches absent? Fig 2 demonstrates the effectiveness of KDF compared to ID approaches in terms of OOD calibration. The readers could be more interested in the performance of OOD approaches.
> ACET, ODIN, and OE are tailor-made for vision problems; therefore, we cannot run them on tabular data using the author-provided code. To the best of our knowledge, the tabular OOD method [1] that we found performs OOD detection, not calibration. As it does not yield any posterior, we could not benchmark against it. We will add the above explanations to Section 5.1.2.
[1] Ren, Jie, et al. "Likelihood ratios for out-of-distribution detection." Advances in neural information processing systems 32 (2019).
- In Table 1, it seems OOD approaches including ACET and ODIN can't even beat the parent model under OOD settings. Could the author please explain why?
> ACET and ODIN highly depend on the model architecture and the nature of the ID and OOD testsets. Moreover, they also depend on the OOD set used to train them. We used the OOD set used by the authors of the above algorithms. All these factors contribute to their inconsistent performance across datasets and model architecture. The authors of [1] found a qualitatively similar result.
[1] “Tajwar, Fahim, et al. "No true state-of-the-art? ood detection methods are inconsistent across datasets." arXiv preprint arXiv:2109.05554 (2021).”
---
Rebuttal Comment 1.1:
Comment: I acknowledge and thank the author for their response. In the rebuttal, the author has addressed the paper’s relation to OOD detection in detail. My remaining concern is about the relation between OOD calibration and IID calibration. It is an important and relevant problem to learn a posterior distribution that is calibrated simultaneously for both ID and OOD samples. However, as noted by other reviewers, it is confusing why a metric like ECE or ACE is not simultaneously adopted for both settings, which could have made the results more convincing.
The author has claimed ECE as a metric only for IID calibration, but I do not see a specific distribution assumption for ECE, and I believe ECE is still valid out-of-distribution. To illustrate this, consider the extreme setting where the outcome is independent of the feature. In this case, ECE is 0 if and only if the predictor outputs the prior label distribution, which behaves similarly to OCE. Additionally, ECE has the advantage of measuring calibration error when the distribution shift is moderate, such that the feature is still predictive, albeit less so, for the label.
---
Reply to Comment 1.1.1:
Comment: Thanks for elaborating the above concern. Sorry that we did not understand it fully previously. The idea seems intuitive for measuring calibration in transfer learning settings with distribution shift in the feature space for the target task. We are not sure we understand how to measure ECE for OOD points in our setting. According to the definition of ECE, the calibration error is the difference between the fraction of predictions in the bin that are correct (accuracy) and the mean of the probabilities in the bin (confidence). To calculate ECE, we need to measure accuracy. OOD points are unsupervised in our setting, i.e., there is no label associated with an OOD point. Our goal (Eq. 1 in our paper) is to calibrate the model so that it knows whenever it faces an OOD point (same as existing OOD detection approaches). There is not a target task in this setting where we want to do transfer learning. If a model is well-calibrated in the OOD region, it will always predict the majority (max prior) class in the ID training data for the OOD points. How do we calculate accuracy in this case? To clarify it further, consider we train a model on CIFAR10 and test it on OOD points from CIFAR100. The model will predict class labels within 1 to 10, but CIFAR100 has 100 classes. To the model (if it is calibrated) CIFAR100 should look like unknown points and hence it will have confidence at the prior level.
---
Rebuttal 2:
Comment: - “This experimental setting essentially reduces the task to a binary one: whether to output normal confidence for an ID sample or just a prior for an OOD sample. ”
> The reviewer is correct: in this experimental setting, as one moves further away from the ID region, an OOD-calibrated classifier outputs the prior. Close to the ID region, however, it outputs a probability that the sample belongs to any given class. This is in contrast to an OOD detector, which effectively outputs the prior for any sample flagged as OOD, regardless of how close or far the data are from the ID data.
- “A binary OOD detector can also immediately output a confidence based on the prior if a sample is identified as OOD.”
> Yes, a binary OOD detector can output a confidence based on the prior. However, without calibration, there is no reason to expect that confidence to be….calibrated. This is precisely why the OOD calibration is more difficult, and more informative, than pure OOD detection.
- “Therefore, the proposed method is essentially an OOD detector, which implies the need to reconsider both the theoretical and empirical results within the broader literature of OOD detection.”
> We would say that our method subsumes OOD detection, because it also includes ID calibration, and OOD calibration. To our knowledge, there are no other papers demonstrating any algorithm with all these properties. | Summary: The paper introduces a way to calibrate ReLU networks or random forests by breaking them down into piecewise linear functions on polytopes and replacing the linear parts with Gaussian kernels. This approximation allows to naturally calibrate the models for the ID domain, where confidence will be high due to the density of ID samples that translates into high kernel values, and for the OOD domain, where confidence will be low due to the large distance to ID samples.
Strengths: - The method is novel and mathematically grounded
- The presentation is clear
- The benchmarks are OK
Weaknesses: The main weakness I find is about the computational time of the method. The number of polytopes scales exponentially with the number of neurons, so I am concerned with the applicability of the method to large (or even medium-scale) neural networks. What is the computational cost of the method for the considered benchmarks, in terms of runtime?
The toy simulations are unnecessarily tedious to grasp and take up a lot of space. I do not say that they are complex, but they hinder the reading flow and do not bring much to the presentation. I would advise putting some of them in the appendix to leave more space for other explanations. Indeed, Section 5 is difficult to read (many "chunk" paragraphs with mathematical notations) and would benefit from more structured writing and more flow.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Section 4.4 How is $\omega_{rs}$ estimated in that case?
2. Table 1: what is the Parent approach exactly compared to KDN and KDF? The definition of Section 3.3 does not seem to describe a whole model but only the parent idea that is then declined with KDN and KDF.
3. What would be the AUROC, FPR, i.e. metrics commonly used in OOD detection?
4. Figure 1. What is "dimensions" in x-axis?
5. l. 192 I do not get how OCE measures an error since it only includes estimated quantities, and I do not see the estimated confidence score $\hat{g}_y(x)$. In addition, are $x_i$ OOD samples in that case? If yes it should be made explicit.
6. l. 213 I disagree that normalizing ensures that distances up to 1 are ID and above are OOD. There can be "holes" in the distribution (as the circle dataset), or modes. What is the authors' opinion about that?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have adequately addressed the limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful comments. We are glad that the reviewer recognized the intuition behind our approach. We believe we have addressed all your concerns in the responses below. If you find them satisfactory, we would be very grateful if you could update your score.
- The main weakness I find is about the computational time of the method. The number of polytopes scales exponentially with the number of neurons, so I am concerned with the applicability of the method to large (or even medium-scale) neural networks. What is the computational cost of the method for the considered benchmarks, in terms of runtime?
> Please see the global response.
- The toy simulations are unnecessarily tedious to grasp and take up a lot of space. I do not say that they are complex, but they hinder the reading flow and do not bring much to the presentation. I would advise putting some of them in the appendix to leave more space for other explanations. Indeed, Section 5 is difficult to read (many "chunk" paragraphs with mathematical notations) and would benefit from more structured writing and more flow.
> This is an excellent suggestion! We will move four of the rows of Figure 1 to the appendix. We will reorganize the chunk paragraphs, make Section 5 more concise in the camera-ready version, and move the formal equations to the appendix as necessary.
- Section 4.4: How is $w_{rs}$ estimated in that case?
> We apologize for missing it. We have rewritten lines 159-160 as: "We estimate $w_{rs}$ by exponentiating the above kernel using Equation 15."
- Table 1: what is the Parent approach exactly compared to KDN and KDF? The definition of Section 3.3 does not seem to describe a whole model but only the parent idea that is then declined with KDN and KDF.
> Sorry for the confusion. In Table 1, by “Parent approach” we mean the original vision transformer [1] that was trained on CIFAR10. We will update the table accordingly.
[1] https://pytorch.org/vision/main/models/generated/torchvision.models.vit_b_16.html
- What would be the AUROC, FPR, i.e. metrics commonly used in OOD detection?
> We have added AUROC and FPR to the updated table in the attached pdf. KDN attains AUROC and FPR comparable to those of the OOD detection approaches. Note that these scores are used for OOD detection, whereas we address both ID and OOD calibration (Eq. 1 in our paper). OOD detection works as long as there are two distinguishable score sets for ID and OOD points, whereas calibration aims at estimating the true predictive uncertainty of these points. That being said, a well-calibrated model can perform detection, but a model capable of detecting OOD points may not be calibrated. We will add the above discussion in Section 2.
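For readers unfamiliar with the two detection metrics, here is a plain-NumPy sketch (not the paper's evaluation code; the score distributions below are synthetic):

```python
import numpy as np

def auroc_fpr95(id_scores, ood_scores):
    """AUROC and FPR@95%TPR from confidence scores (higher = more ID-like)."""
    # AUROC = P(random ID score > random OOD score), ties counted half.
    diff = id_scores[:, None] - ood_scores[None, :]
    auroc = (diff > 0).mean() + 0.5 * (diff == 0).mean()
    # Threshold that keeps 95% of ID samples, then measure the OOD pass rate.
    thresh = np.quantile(id_scores, 0.05)
    fpr95 = (ood_scores >= thresh).mean()
    return auroc, fpr95

rng = np.random.default_rng(0)
id_scores = rng.normal(0.9, 0.05, 1000)   # confident on ID
ood_scores = rng.normal(0.5, 0.1, 1000)   # near-prior confidence on OOD
auroc, fpr95 = auroc_fpr95(id_scores, ood_scores)
print(f"AUROC={auroc:.3f}  FPR@95={fpr95:.3f}")
```

As the rebuttal notes, good AUROC/FPR only require the two score sets to be separable; they say nothing about whether the confidences themselves are calibrated.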
- Figure 1. What is "dimensions" in x-axis?
> We will replace it with “Number of dimension” which indicates the number of increasing dimensions from the Trunk simulation.
- l. 192 I do not get how OCE measures an error since it only includes estimated quantities, and I do not see the estimated confidence score
> Thanks for this intuitive observation and catching the mistake. Please see the global response.
- In addition, are x_i OOD samples in that case? If yes it should be made explicit.
> We have rewritten line 192 as: “Given n OOD samples {x_i}_{i=1}^n, we define OOD calibration error (OCE) to measure OOD performance for the benchmark datasets as:”
- l. 213 I disagree that normalizing ensures that distances up to 1 are ID and above are OOD. There can be "holes" in the distribution (as the circle dataset), or modes. What is the authors' opinion about that?
> This is an excellent catch and we agree! We have rewritten line 213 as: “Therefore, ID samples are confined within distance 1. “
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their clarification and raised my rating. I strongly encourage them to polish the presentation for the camera ready version.
---
Reply to Comment 1.1.1:
Title: Thanks!
Comment: Thank you so much for your intuitive and valuable feedback! It improved our work significantly. We will make sure our presentation is significantly improved taking in consideration all the points raised by you for the camera ready version. | null | null | Rebuttal 1:
Rebuttal: We thank all the reviewers for their strenuous effort and time to go through our paper and provide valuable feedback. Below, we address the common concerns:
- Reviewers were concerned about the runtime of our approach, possibly it could be an exponential function of the number of nodes. However, we did additional experiments (see Fig. 10 in the attached pdf) where we show training and testing time both are linear in the number of the nodes. Moreover, training time is only 200 seconds even when there are 40,000 nodes running on a MacBook Pro with an Apple M1 Max chip and 64 GB of RAM. The number of total polytopes in KDN is upper bounded by the training sample size as we only consider the polytopes populated by training data (see the first paragraph of Section 3.3 and Eq. 5). We will add this figure in the camera ready version.
- Reviewers asked about the training time complexity of other baseline approaches. OOD calibration approaches such as ACET, OE and ODIN take about 2 days, an hour, 6 hours, respectively on GPUs. In-distribution calibration methods such as isotonic regression and sigmoid regression take a few minutes and use CPUs. Our approach addresses both ID and OOD calibration while taking a few minutes to train on CPUs, rather than GPUs. All the computations were performed for producing the results in Table 1 using a MacBook Pro with an Apple M1 Max chip and 64 GB of RAM. We will add these numerical results to the camera ready version.
- Reviewers were concerned about the definition of OCE. We have corrected our OCE definition in Eq 18. Previously we erroneously used estimated priors and now we have replaced it with the true priors:
$$\text{OCE} = \frac{1}{n} \sum_{i=1}^n \left|\max_{y \in \mathcal{Y}}(\hat{P}_{Y|X}(y|\mathbf{x}_i)) - \max\_{y\in \mathcal{Y}}(P_Y(y)) \right|.$$
We can only calculate OCE when we know the true priors. For all the experiments, we fixed the priors and sampled our training data accordingly. We have fixed the equation in the paper, and will comment about the requirement to assume the priors are known.
Pdf: /pdf/1d047aa02d6df2a68d6e7512719a1c95705ad02a.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Generative Forests | Accept (poster) | Summary: This paper proposes a generative model for tabular data based on a forest of trees. It is evaluated on the quality of the generated samples (using an optimal transport based distance to a held out test set), on data imputation and density estimation.
Strengths: Generative modeling for tabular data is an important research topic, and exploiting the strengths of trees (ubiquitous in this modality) for this generative task is a promising direction.
The framework inherits the flexibility of boosting, and the theoretical results (under weak learning assumptions) provide guarantees that the underlying distribution can be approximated well enough with enough trees.
Weaknesses: Further empirical comparison could be conducted. For example, recent works have applied diffusion for tabular data (Kotelnikov et al.) as well as flow matching (Jolicoeur-Martineau et al.). It might be worth comparing to these more recent approaches.
Furthermore, more metrics could be included for the LifeLike experiment to provide more nuanced results. See Jolicoeur-Martineau et al. for examples of such metrics.
References:
Kotelnikov, A., Baranchuk, D., Rubachev, I., & Babenko, A. (2023, July). Tabddpm: Modelling tabular data with diffusion models. In International Conference on Machine Learning (pp. 17564-17579). PMLR.
Jolicoeur-Martineau, A., Fatras, K., & Kachman, T. (2024, April). Generating and imputing tabular data via diffusion and flow-based gradient-boosted trees. In International Conference on Artificial Intelligence and Statistics (pp. 1288-1296). PMLR.
Technical Quality: 3
Clarity: 3
Questions for Authors: What is the computational cost for training and inference compared to Generative Trees? (It would be interesting to provide wall-clock times)
How are the parameters chosen (number of trees, number of splits, ...)?
Confidence: 1
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See questions
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to especially thank the reviewer for singling out the importance of boosting and our theoretical framework and results
## weaknesses
> Further empirical comparison could be conducted. [...]
We could be happy to do so, but kindly note that at this stage we have received criticisms that we cram a lot of experimental stuff in a small allotted space [kTDz-C]. Furthermore, our paper investigates three (3) different experimental settings on which we already have five (5) contenders (in fact 6 if we count the generative trees of [29], and even more if we consider "copy" to be one in our GEN-DISCRIM experiment, Section V.V.3.5); finally, the reviewers altogether suggest to add *five* (5) more contenders, in different parts of our three settings. As we write above [ALL], we would like to keep a good balance theory / experiments and so it would be important for the reviewers to get to a consensus on an additional experiment that we could detail as part of the +1 page. We make a suggestion in [ALL] that would be easy to carry out during camera-ready preparation and which rejoins the reviewer's suggestion.
> Furthermore, more metrics could be included for the LifeLike [...]
Good suggestion but kindly keep in mind that this would induce more space to allocate in our paper (see above). We suggest adding Grower and coverage in the appendix for our method and all our LifeLike experiments. Kindly note also that out of the many metrics in Jolicoeur-Martineau et al., an optimal transport distance is the only one that provides a statistically meaningful comparison at the distribution level (see [kTDz-F]). Jolicoeur-Martineau et al. use Wasserstein's metric, which is computationally expensive and prevents them from computing it on medium+ sized datasets (Cf their section B.2.2); we use a regularized version, now very popular since Cuturi [8], that still bears OT properties and is sufficiently fast that it could be computed on all our datasets even on our "Low-end" laptop.
## questions
> What is the computational cost for training and inference compared to Generative Trees? (It would be interesting to provide wall-clock times)
It is more computationally demanding than generative trees (GT's) for a single reason: when testing a split at a leaf, we have only one support to split in a GT (that corresponding to the leaf) while in a generative forest (GF), we need to test the splitting of all supports in ${\mathcal P}({\mathcal T})$ that are included in the leaf's. Note that this has a positive side: we can train much smaller GFs compared to GTs. So somehow we are slower with GFs but end up training models with less splits to get to the same quality. We could put times but such a question entails a broader one on our contenders and this could be sensitive [kTDz-F]. Note also that we have used a "low end" laptop and a "medium+ end desktop" for our experiments, *but* we have systematically trained our models on the laptop. It was deliberate as we wanted to make sure we could learn such models on a "simple" device. In this context, we are not sure we would not end up comparing peers and apples when it comes to times. At least we can propose to put the training times of our method for all our experiments, with a clear explanation of how / why we did so in a "low-end" laptop (and in the appendix) ?
> How are the parameters chosen (number of trees, number of splits, ...)?
We deliberately did not optimize / hypertune these parameters, in order not to factor the "human tuning" in the explanation for the quality of our models (keep in mind we have three experimental settings), so we just picked informed guesses that would also represent fast enough training, see kTDz-E] [cvp8-F]. The other reason we have not done this is because we are convinced there is an automatic method to find the "right" parameters, which comes down to pruning our models, see [cvp8-E]. This of course would require a work / paper of its own.
---
Rebuttal Comment 1.1:
Title: Thank you for your response
Comment: Dear Authors,
Thank you for your detailed response. I'd be happy to see the results against Forest-Flow if the other reviewers also agree.
---
Reply to Comment 1.1.1:
Title: Experiments vs Forest-Flow provided
Comment: Kindly see at the top of this page, or Search for tags [Exp-0], [Exp-1] and [Exp-2]. | Summary: This paper introduces generative forests (GF), a new class of generative models designed for tabular data and based on sets of trees (forests), and a training algorithm called GF.BOOST. The algorithm is designed to be simple, making it easy to implement with minor modifications to existing CART-style decision tree induction schemes.
The GF models and GF.BOOST algorithm are evaluated across various domains, demonstrating their capability in tasks such as data generation, missing data imputation, and density estimation. The results show that even a small number of trees within a generator can be effective, highlighting the potential of the proposed approach in handling tabular data.
Strengths: - I believe the paper is a nice addition to the ML community. Prevailing methods in data imputations for DTs are based on heuristics and tricks, whereas this paper approaches this problem with sounding probabilistic point of view, which makes a lot of sense to me. This naturally leads to the ability of generating more data, which may have serious implications in synthetic data generation literature.
- The method is relatively easy to implement and can be done by straightforward extension of CART-style tree induction methods.
Weaknesses: - Experiments section may need some improvements:
- Most datasets in Table 2 (which seem to be the main results table) are standard ML benchmarks. It would be really nice to have real-world kaggle-style datasets with missing values and compare with strong baselines therein.
- Adding traditional forests is also necessary to compare against established methods (xgboost, lightgbm, etc.)
- I find most of the math notations hard to follow and can be significantly simplified.
Technical Quality: 3
Clarity: 3
Questions for Authors: - How computationally efficient to compute Eq. 4? From my understanding it is used for splitting a tree
- Is there any mechanism to prune some trees in a forest (reduce T)? Traditional tree boosting builds trees sequentially, so there is opportunity for early stopping. I was wondering how critical here to have predefined T...
- Is there any separate ablation how well this method works for categorical features?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: - there seems to be some limitations in convergence analysis (e.g. symmetry of l).
- otherwise, having separate (sub)section on limitations would be nice (e.g. runtime compared to traditional boosting methods).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for noting the importance of our formal contribution !
## weaknesses
> It would be really nice to have real-world kaggle-style datasets with missing values
Actually, sigma-cabs is a Kaggle dataset with missing values. Stanford Open Policing is not Kaggle but it also has missing values and is real world (see our Table A1). We understand the reviewer would like to have a few more ? Perhaps we can use [uJTo-B] ?
> Adding traditional forests is also necessary to compare against [...]
We understand the reviewer would like us to add tests on supervised learning. We kindly refer to [cvp8-A] for a discussion on this matter.
> I find most of the math notations hard to follow and can be significantly simplified
We attribute it to the fact that our approach is purely grounded in a supervised learning framework, which is indeed seldom for generative models. We refer to [cvp8-B][uJTo-A] for discussions on these and their importance; we would use part of the +1 camera ready to hopefully bring improved explanations.
## questions
> How computationally efficient to compute Eq. 4?
[6MHb-A] Great question. We only need to keep those elements in ${\mathcal P}({\mathcal T})$ **with $>0$ empirical measure** because for all others their contribution to (4) is zero, so the complexity is linear in the size of the training sample (we cannot have more elements in the set). We also refer to [kTDz-I] for side remarks on the code we provide.
> Is there any mechanism to prune some trees in a forest (reduce T)?
Great question: as we say in [cvp8-E], there surely is, but it would deserve a paper of its own (we are talking about a pruning mechanism with provable properties).
> I was wondering how critical here to have predefined T... [...]
We refer to [cvp8-F] for a similar discussion and the reason why we did not "hypertune" our parameters $T,J$.
> Is there any separate ablation how well this method works for categorical features?
We would be happy to get additional context on this question -- does the reviewer think about testing "all features" vs "only numerical features" to test how much categorical features contribute to the model quality ? (note that we would have to push it to the appendix, see [ALL])
## limitations
> there seems to be some limitations in convergence analysis (e.g. symmetry of l).
This is in fact a limitation of a previous paper [29], limitation that we *remove* in our analysis, which is thus more general (see L497, that we can put in the main file using the +1 camera ready page)
---
Rebuttal Comment 1.1:
Comment: Hi, thanks for your rebuttal! However, I feel like my concerns on additional experiments were not addressed adequately.
Most datasets used seem to not have missing values, the ones that do have them appear to be small sixe, e.g. sigma-cabs has 5000 data points and 13 features. I don't understand why it was a problem for not running requested evaluations during rebuttal? One could just get some dataset with reasonable size where tree-based models perform good (e.g. XGBoost) and apply some standard method (e.g. https://www.kaggle.com/code/alexisbcook/missing-values) and evaluate against the proposed method. I feel like this is a major blocker for me to accept the paper without those results. The current evaluations does not convincingly show the advantageous of applying this method specifically over others.
---
Reply to Comment 1.1.1:
Title: We are running more experiments
Comment: We are confused. The reviewer would like us to use XGBoost, *but we do not do supervised learning*. We do (i) data generation, (ii) missing data imputation, (iii) density estimation. XGBoost, lightGBM address *supervised learning*: the missing values need to be in one column (=class). Our missing values can be everywhere, that is why we use MICE, which is state of the art (note that we have also optimized MICE, in particular augmenting the number of trees used in random forests-based imputation to 100 (the default value, 10, is arguably too small), see also [kTDz-E]. We also test CART trees in MICE. Thus, note that the contender method we use is always tree-based.
Since the reviewer mentions many times datasets with missing values, we assume the task we have to tackle is missing data imputation. We have started more experiments with MICE against our method using real world domains bigger+ than sigma-cabs. The results will be communicated before the discussion deadline. Note that our results in missing data imputation are plots (Table 5) and we will not be able to communicate plots, so we will communicate summary metrics that we already use.
---
Reply to Comment 1.1.2:
Title: One more point re: using datasets with missing values
Comment: We forgot to mention an important point: in missing data imputation, the task is to predict missing values and then compare to a known ground truth to compute the metrics. Thus, the missing values must have been removed from the domains and so those domains *must have the values that will be removed before testing missing data imputation*.
Note that this does not prevent the domain to have missing values "as is" (our methods can be trained if the domain has missing values) but the prediction of those cannot be assessed against a ground truth because they are not known -- read: those predictions cannot be included in a metric. This is why we simulate missing data imputation by removing a fixed proportion of values in the domain (5%, Missing Completely At Random, MCAR in the jargon of missing data imputation), and then compute metrics only based on those features "known in the full domain" then predicted by models.
---
Rebuttal 2:
Title: Last reply
Comment: We thank the reviewer for all interactions. As we are getting a few minutes before the deadline, we would like to mention that we have started comparisons on much bigger domains than the ones we have in our current benchmark. As a comparison, our largest dataset (real-world) in the vs-mice benchmark has $m \times d \approx$ 363K. We are now testing on bigger domains, but we are facing scalability setbacks on some contenders [uJTo-L].
Nevertheless, for imputation experiments, we can already report, as an example, an experiment vs mice (same parameters as in our paper) on domain medical_charges (a real world domain from US hospitals with $m \times d >$ 815K [govWD]) with RMSE = 2.86E8 (us) vs RMSE = 2.50E8 (mice), p-val = 0.38 so no statistically significant difference.
To make a uniform treatment among our experiments, we have to test those domains on our two other settings (data generation, density estimation) as well and using all our other contenders, so we propose to select 1-2 of such domains approaching $m \times d = $1M and add those in our benchmark. Our program runs smoothly on such domains [uJTo-L], but have to fairly solve scalability issues for some of our contenders [uJTo-L].
The author(s).
[govWD] L. Grinsztajn , E. Oyallon and G. Varoquaux . Why do tree-based models still outperform deep learning on tabular data? NeurIPS 2020 | Summary: The paper proposes a generative model based on an ensemble of trees (forest) for tabular data. The proposed model enjoys the following peculiar properties:
1. Compared to generative models based on a single tree, it offers improved **expressiveness** in terms of partitioning the input space (linear for generative trees vs. polynomial for generative forest in terms of decision nodes in a tree).
2. The model offers **unbiased** density estimation for sufficient large capacity, thanks to the fact that each node in each tree leverage decisions in proportion to the empirical distribution.
3. Learning the structure of the trees is performed by minimizing an objective function based on the likelihood ratio risk.
4. Computationallly speaking, data generation and density estimation can be performed in **parallel** with respect to the different trees. Indeed, results can be aggregated a posteriori by taking the intersection of selected regions.
5. Apart from **data generation** and **density estimation**, the model can be extended to deal missing data and perform **data imputation**.
Experiments are conducted on a series of tabular datasets from the UCI repository. The proposed solution is shown to outperform/being on par with existing tabular data generation (the most relevant being adversarial random forests), data imputation (e.g. MICE) and density estimation baselines (e.g. kernel density estimation).
Strengths: 1. The problem of learning a good generative model for tabular data is relevant **Relevance**
2. The solution based on a tree ensemble represent a significant advance over solutions based on a single decision tree **Significance**.
3. The proposed model offers the possibility to tackle three different tasks, including density estimation, data generation and data imputation. Moreover, experiments demonstrate the efficacy of the solution when compared with corresponding baselines for each task **Significance**.
4. The theory of the paper seem correct, however I haven’t gone through the proofs **Correctness**
5. Code is available at the time of submission. No additional check has been conducted to verify the reproducibility of the results **Reproducibility**
Weaknesses: 1. One of the major concerns is with the novelty of the proposed solution **Novelty**. What is the relation with the work in [1]? The work in [1] is also providing a model able to tackle the three above-mentioned tasks. Would it be possible to provide a comparison?
2. The paper is sometimes overloaded by unnecessary formalism, rendering the text hard to follow **Clarity**. For instance,
1. Line 92-109, why is the whole theory about risk minimisation for binary classification introduced, when the goal is to perform density estimation? Would simple likelihood learning suffice?
2. Definition of tree. “…labeled with subsets of their tail node’s variable domain”. Do you mean “”labeled with subset of attribute value for the corresponding variable?
3. Line 160-163. Can you make an example or in case leave out the sentence? Am I correct to say that the permutation invariance property is due to the final intersection operation or am I missing something?
4. Algorithm 3 in Step 2.1 How is node selection performed, uniformly at random? Moreover, can you provide some more details about the resulting trees, are they balanced?
5. Can you elaborate on the motivation of proving theorem 5.2 and discuss the significance of its results or link with the results in the experiments?
3. All experiments are conducted on datasets with less 34 variables **Soundness**. Would it be possible to provide some comparisons with [1] on larger dimensional datasets (see the reference for example) and/or discuss the computational requirements?
**Reference**
[1] Generating and Imputing Tabular Data via Diffusion and Flow-based Gradient-Boosted Trees. AISTATS 2024
Technical Quality: 2
Clarity: 2
Questions for Authors: Overall, I like the extension of generative trees to ensembles. I’m willing to raise my score if all weaknesses are addressed during the rebuttal phase.
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: There are no additional suggestions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for organising the review in a very clean way that allows for easy referencing, tagging two **significant** key strengths on which we elaborate further below, and being clearly open for discussion. We use W.X.Y to refer to the weakness section.
> W.1 "One of the major concerns [...]"
There are two parts to this question, a technical one (relationship to [1]) and experimental (comparison with [1]).
- technical part: [1] is based on diffusion models while we rely on stochastic generation, so the approaches are radically different (they solve ODEs for data generation, we essentially "just" sample Bernoulli random variables, see our algorithm starUpdate). We thank the reviewer for this reference as it in fact provides us with a contender type that is radically different from all our contenders (not just our method) [ALL].
At this point, we would like to highlight a key difference with [1]: [1] has no formal analysis of the algorithm: no convergence / consistency result of any kind. Experimental results are important to tell the story but a good theory gives the backbone. Among all references pointed by all reviewers, none provides rates of convergence. We do. We also emphasize that it is not complicated to show a consistency result in our case [kTDz-A2].
- experimental comparison: the reviewers provided overall a number of potential additional contenders. Assuming a consensus emerges around our proposition in [ALL], we would be happy to provide an additional comparison with [1] on data generation, hoping our experimental section will remain readable using the +1 page of camera ready (we would not compare on missing data imputation as [1] make it explicit that they do worse than MICE and we are competitive).
> W.2.1 "The paper is sometimes overloaded by unnecessary formalism [...] Line 92-109, why is the whole theory [...]"
[uJTo-A] The formalism we introduce is necessary because we manage to train a generative model using the framework of supervised learning and our algorithm heavily relies on the loss definitions mentioned. Furthermore, the loss *properties* are also important because, it turns out, our convergence results are more general than those of [29]. Those definitions make the "slack" explicit. We are confident we can make the section "breathe" using part of the +1 page camera ready.
> W.2.2 "Definition of tree [...]"
Correct. We can paraphrase.
> W.2.3 "Line 160-163. Can you make an example or in case leave out the sentence"
It is correct *but* the sentence should remain because it grounds the proof (not given) of Lemma 4. We can add the reviewer's remark and merge it.
> W.2.4 "Algorithm 3 in Step 2.1 How is node selection performed, [...]"
We choose the heaviest leaf among all trees (the one with the largest empirical mass in its support). In the code, it is method choose_leaf of class Algorithm.
> W.2.5 "Can you elaborate on the motivation of proving theorem 5.2 [...]"
This is an empirical convergence rate under the weak learning assumption in the boosting framework. To summarize its importance, this has been the cornerstone of boosting algorithms since Valiant's weak/strong model and since Schapire's first proof that boosting is achievable in the model (1990). Such a result is thus fondamental: it shows an explicit rate under assumptions that are so "weak" that not satisfying them would just preclude learning (in the supervised setting, which is easier to state, failing the weak learning assumption would mean getting updates no better than a fair coin, thus indeed totally useless to learn). Kearns and Mansour in [18] summarize very well the importance of boosting in the context of decision tree induction, which is close to our setting. The first sentence of the abstract of the popular paper "Additive logistic regression: a statistical view of boosting", Friedman, Hastie and Tibshirani summarizes the importance of boosting as we just explained:
**"Boosting is one of the most important recent developments in classification methodology."**
> W.3 All experiments conducted on datasets with less 34 variables [...] comparisons with [1] on larger dimensional datasets [...]
[uJTo-B] (we correct: it is less than 35) We would be happy to do so, see [ALL], but note that among all datasets of [1], only 3 out of 27 have more than 35 features (we can use those ?).
---
Rebuttal Comment 1.1:
Title: Additional Questions
Comment: Dear authors,
Thank you for the answers, that have addressed some of my concerns about clarity. There are still a number of questions that I would like to see addressed and that refrain me to raise the score, also in light of the other reviews.
**Novelty**
While it is clear the difference between the proposed solution and Forest-Flow from a model perspective, I don’t understand the difference in terms of properties, especially regarding generalization (as also pointed out by reviewer kTDz). Firstly, Theorem 5.2 provides a result about the weak monotonic reduction on the density distance. Since the reduction is not strict, how do you ensure that the learnt model is unbiased? Secondly, does the mentioned consistency property imply that Forest-Flow is biased, whereas Generative Forests is not? As far as I understand, training Forest-Flow reduces the distance between the model and the empirical data distribution leveraging standard KL divergences through an ELBO-like formulation. Consistency results clearly hold in the infinite sample regime and for sufficiently large network capacity. Can you please clarify this aspect?
**Clarity**
It is mentioned that during learning, leaves with largest empirical mass are selected among all trees. How does this ensure that learning is unbiased as uncertainty is not taken into account? Moreover, how does this criterion influences the topology of the trees, are they balanced?
Can you provide some synthetic visualisations on the convergence results of Theorem 5.2 and the benefits of the strict properness property against other common generative losses, such as KL divergence? This could help to add value my making the concepts clearer.
From an implementation perspective, how are the different trees initialised and how is diversity across trees ensured throughout learning?
**Experiments**
Comparison results against Forest-Flow are promised, but it is not clear whether they will be released during the discussion phase or afterwards. Providing these during the discussion phase could definitely help in the decision.
---
Reply to Comment 1.1.1:
Title: Reply on "Experiments" -- Experiments vs Forest-Flow provided
Comment: Kindly see at the top of this page, or Search for tags [Exp-0], [Exp-1] and [Exp-2].
---
Reply to Comment 1.1.2:
Title: Reply to the additional questions
Comment: We thank the reviewer for these additional comments. Here is our reply, following the comment's order.
On **Novelty**. We have added here the proof that our method is consistent [Th-0] (see above) in the framework used by A.H.C. Correia, R. Peharz and C. P. de Campos. Joints in Random Forests. NeurIPS 2020. We do not want the reviewer to think that we claim any biasedness / inconsistency property for Forest-Flow. We do not know. Our experiments we just added here [Exp-0][Exp-1] at least demonstrate its qualities as a contender. Note that our result in [Th-0] follows from a simple relationship between our generative forests and binary trees (Lemma A above), which allows us to branch immediately to the state of the art. Maybe a similar path exists for Forest-Flow ?
On **Clarity**. We selection the leaf with the largest empirical mass because it is a simple (the simplest ?) way to meet conditions (a) and (c) in our weak learning assumption, thus to get fast empirical convergence. We also realized whlie writing [Th-0] that this strategy has a positive impact in terms of consistency. Note that this strategy of picking the heaviest leaf is also one used in DT induction [kmOT]. Depending in the domain, this can create a diversity of model "shapes" which we believe is a good thing because the model "shape" is also adaptive to the domain at hand. Strict properness is crucial because otherwise the minimization of the loss does not guarantee convergence to a good model (L108-L109). We alleviate the symmetry assumption of [29], and it turns out to be important in the light of the reviewer's question: the KL divergence can be formulated in the proper loss framework with asymmetric partial losses [rwID, Table 2]. Note also that the GAN's Jensen-Shannon divergence can also be represented.
On **Implementation**, trees are all initialized to roots. Diversity is ensured by the adaptivity of the model ``shape'' during training (cf. above).
On **Experiments**, we have released the set of Experiments vs Forest-Flow (see above).
References:
[kmOT] M. Kearns and Y. Mansour. On the Boosting Ability of Top-Down Decision Tree Learning Algorithms. STOC 1996.
[rwID] M. Reid and R.C. Williamson. Information, Divergence and Risk for Binary Experiments. JMLR 2011.
---
Reply to Comment 1.1.3:
Title: Feedback
Comment: Dear reviewer,
Since you lodged your additional questions, we have reported a stream of experimental results [Exp-0][Exp-1][Exp-2][Exp-3], a consistency result [Th-0], and replied to your questions. You made it explicit in your questions that some information "``[...] could definitely help in the decision [...]``", information we have since lodged (we refer to the experiments on Forest-Flow, [Exp-0] and [Exp-1]), so we would like to hear back, before the end of the discussion phase, on updates relative to this decision (reviews will become invisible again afterwards).
Thank you.
---
Rebuttal 2:
Title: Last reply
Comment: [uJTo-L] We thank the reviewer for this comment and their decision to raise their score.
We apologize for not replying sooner, but as soon as we received the last comments we decided to run scalability experiments comparing our software with Forest-Flow's.
We are not sure why the reviewer says we mentioned using "[...] libras, connectionist bench sonar and qsar biodegradation [...]". We did not mention those. Looking for the datasets, it seems to us that they are also quite small (we searched OpenML and the UCI repository; these are the hits we got for the largest matching datasets):
-libras (https://www.openml.org/search?type=data&sort=runs&id=299&status=active ?), 24 * 15 = 360 examples, 90 features
-connectionist bench sonar (https://archive.ics.uci.edu/dataset/151/connectionist+bench+sonar+mines+vs+rocks), 208 examples, 60 features
-qsar biodegradation (https://www.openml.org/search?type=data&sort=runs&id=1494&status=active), 1055 examples, 42 features
We do not know if the suggestion from the reviewer was a typo. In the spirit of saving time, we looked ourselves for substantially larger domains to accommodate scalability studies. We have started making scalability comparisons on the biggest domains of the benchmark of [govWD], considering only domains with $m \times d >$ 1M.
As far as our experiments go (on a laptop), we are facing an issue with Forest-Flow: running the same code as we had for all other experiments results in R crashing with a "vector memory limit" error. Raising the available memory to 50 GB gives the same error; raising it to 150 GB results in the session aborting with a fatal error.
After several more tries with the same outcomes, we decided to follow the recommendations from the webpage [jkkGA] on how to optimize parameters for memory management. Given the size of the dataset, we reduced the parameter duplicate_K, initially 100 (from the website, this amounts to the number of duplications of the training sample, so targeting this parameter looked like a no-brainer). We first reduced it to 50, but this washed the system out of application memory, so we had to run it with a smaller value. So far, duplicate_K = 10 seems to work on the domains tried, *but* the webpage [jkkGA] makes it clear that decreasing duplicate_K can lead to worse performance.
So we will probably have to fine-tune our runs against Forest-Flow for fairness.
We do not have issues with our code as far as we can tell. The biggest domains we have tried so far have $m \times d \approx$ 1.5M. For example, with T=200 trees and J=500 iterations, the max memory we use at run time is < 1 GB; for T=500 trees and J=2000 iterations, it is < 1.5 GB. As far as we can tell, the worst training times we get, with T=500 trees and J=2000 iterations, are < 7000 s.
(This is not meant to criticise Forest-Flow's R implementation, rather to point out the difficulties we faced while wrapping up the scalability analysis, and to justify the choices we made in this short time.)
Regarding the last two **Unclear aspects**, here are our answers:
-*On the loss we use (vs KL)*. The loss we choose, derived from Matusita's loss (class Statistics.java), is the one guaranteeing the best possible convergence rate in our theory (refs [24][18] in our paper). The KL is theoretically suboptimal from the boosting-rate standpoint.
-*On learning a single tree / diversity*. We initialize all trees to roots (roots do not affect generation because their support is the whole domain) and then pick the heaviest leaf among *all current trees*. Hence, during the first $T$ (number of trees) iterations, all roots are split first, since each root has a larger mass (1) than any leaf in a tree with depth > 0. Then we keep on choosing the heaviest leaf among all trees: the pair (tree, leaf) chosen depends on the current set of trees and on the data at hand.
Maybe an example will make clear that we indeed learn sets of trees and that their structure depends on the domain. Suppose we have two trees, $\Upsilon_1$ and $\Upsilon_2$. At initialisation, both are root leaves, each of weight (the sum of the weights of all examples reaching it) 1.
[A] Step 1, we split $\Upsilon_1$. Suppose the two (new) leaves have weight 0.8 and 0.2. At the next step, we have to split $\Upsilon_2$ because its root (leaf) has weight 1.
(two scenarios follow)
[B1] Scenario 1: we split $\Upsilon_2$ and its two new leaves have weight 0.6 and 0.4.
[C1] At step 3, the next leaf to be split is then in $\Upsilon_1$ (weight 0.8)
[B2] Scenario 2: we split $\Upsilon_2$ and its two new leaves have weight 0.1 and 0.9.
[C2] At step 3, the next leaf to be split is now in $\Upsilon_2$ (weight 0.9)
[and so on]
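For concreteness, the schedule above can be sketched in a few lines. This is an illustrative sketch only, not our actual implementation: the `split_fractions` argument stands in for the data-dependent split weights, which here are supplied by hand to reproduce the two scenarios.

```python
# Illustrative sketch of the "split the heaviest leaf among all trees"
# schedule described above. Leaf weights after a split are hypothetical
# inputs (split_fractions); this is NOT the paper's actual implementation.
import heapq

def heaviest_leaf_schedule(num_trees, num_iters, split_fractions):
    """Return the sequence of (tree, leaf) pairs chosen at each iteration."""
    # Max-heap over (-weight, tree, leaf); all trees start as roots of mass 1.
    heap = [(-1.0, t, 0) for t in range(num_trees)]
    heapq.heapify(heap)
    picks, next_leaf = [], num_trees
    for j in range(num_iters):
        neg_w, tree, leaf = heapq.heappop(heap)  # heaviest leaf over all trees
        picks.append((tree, leaf))
        w, f = -neg_w, split_fractions[j]
        # Replace the split leaf by its two children, carrying the split mass.
        heapq.heappush(heap, (-(w * f), tree, next_leaf)); next_leaf += 1
        heapq.heappush(heap, (-(w * (1 - f)), tree, next_leaf)); next_leaf += 1
    return picks

# Scenario 2 above (trees 0-indexed): split tree 0 into 0.8/0.2, then
# tree 1 into 0.1/0.9; the third split goes back to tree 1 (leaf of mass 0.9).
print([t for t, _ in heaviest_leaf_schedule(2, 3, [0.8, 0.1, 0.5])])  # [0, 1, 1]
# Scenario 1: tree 1 splits into 0.6/0.4, so the third split is in tree 0.
print([t for t, _ in heaviest_leaf_schedule(2, 3, [0.8, 0.6, 0.5])])  # [0, 1, 0]
```

The heap makes the data-dependence explicit: which (tree, leaf) pair is split next is entirely determined by the current leaf masses, so the model "shapes" adapt to the domain.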
We are conscious that we are lodging these comments just before the deadline. We thank the reviewer (all reviewers in fact) for a strong engagement around our paper.
[govWD] L. Grinsztajn, E. Oyallon and G. Varoquaux. Why do tree-based models still outperform deep learning on tabular data? NeurIPS 2022 | Summary: The paper presents a novel boosting algorithm based on ensembles of generative trees, addressing tasks such as binary classification, missing data imputation, and density estimation. The proposed algorithm optimizes specific loss functions and leverages the structure of the models to efficiently estimate densities given a generative model and partially specified observations. The authors demonstrate the practicality of their approach through comprehensive experiments, highlighting its effectiveness in binary classification compared to diverse state-of-the-art methods, including competitors like Adversarial Random Forests (ARF), CT-GAN, Vine copula auto-encoders (VCAE) and Kernel Density Estimation (KDE).
The authors introduce a unique training methodology for these generative models, which is based on class probability estimation (CPE) in binary classification tasks. The loss functions are decomposed into partial losses for positive and negative classes, with the Bayes risk function defined as the optimal achievable loss given a certain positive base-rate. Properness and strict properness of the loss functions are critical properties, ensuring that the loss is minimized when the predicted probability matches the true class probability. The paper discusses various proper losses, such as square loss, logarithmic loss, and Matusita's loss, which are symmetric and differentiable. Furthermore, the paper explores loss functions in the context of density ratio estimation and introduces the likelihood ratio risk for assessing the performance of generative models. Using Bregman divergence and the generalized perspective transform, the authors propose a framework for evaluating the density ratios of different distributions.
This theoretical foundation supports the practical applications of their models, which show competitive performance over the proposed datasets. The authors carried out experiments on a total of 21 datasets from sources such as UCI, Kaggle, OpenML, and the Stanford Open Policing Project, as well as simulated datasets. These include datasets like iris, ringGauss, gridGauss, forestfires, tictactoe, ionosphere, student performance, winered, abalone, kc1, sigma-cabs, compas, artificial characters, jungle chess, open-policing-hartford, and electricity, with varying attributes and missing data conditions. The paper concludes that their models, trained with the novel generative tree algorithm, are strong contenders against existing state-of-the-art techniques.
Strengths: Regarding originality, the paper introduces a novel approach to generative modeling by creating a generative forest based on Class Probability Estimation (CPE). This method represents a significant advancement over current state-of-the-art algorithms. By leveraging ensemble methods and decision tree induction for generative tasks, the paper presents a fresh perspective that has not been extensively explored in the literature. The combination of well-known techniques in a new context adds substantial value to the field and showcases the authors' innovative approach to generative modeling.
The quality of the submission is commendable. The paper is well-organized and meticulously indexed, with a comprehensive appendix that supports the main content. The methodology is robust, encompassing a detailed mathematical explanation, pseudocode for the proposed algorithm, and extensive experimental validation. The authors have tested their model on 21 diverse datasets from reputable sources such as UCI, Kaggle, OpenML, and the Stanford Open Policing Project. This thorough evaluation across various tasks, including data generation, missing data imputation, and density estimation, provides strong evidence of the model's performance and versatility. Additionally, the paper continues the line of research in generative models, building on previous work and demonstrating significant progress in this domain.
The paper is clearly written and well-organized, ensuring that readers can easily follow the authors' reasoning and methodology. The detailed descriptions of the mathematical foundation, the pseudocode, and the experimental setup are particularly noteworthy. These elements allow an expert to reproduce the results. Furthermore, the inclusion of a comprehensive appendix adds to the clarity by providing additional details without cluttering the main text.
The results presented in the paper are of high value. In that line, the authors have made a commendable effort by comparing their method with the closest approaches (ARF, copycat) cited in [42] and [29]. Furthermore, they have clearly outlined the distinctions and enhancements of their algorithm in comparison to these approaches.
The authors suggest that generative forest can effectively eliminate "the slack" when utilizing decision trees as discriminators. This implies that decision trees exhibit a more regulated and foreseeable behavior in contrast to neural networks. Decision trees are inherently more calibrated and provide more deterministic outcomes. As a result, the feedback they provide to the generator during training is more accurate and reliable. The proposed method seems to enhance binary classification performance compared to existing approaches and also addresses challenging tasks such as missing data imputation and density estimation. The model's ability to produce competitive results across a diverse array of datasets and tasks underscores its potential impact on both the research community and practical applications.
Weaknesses: In section 1, the authors delve into the use of generative AI in the ML community and highlight two significant features of their generative forest. They emphasize its simplicity and the strong convergence guarantees it offers in a weak/strong learning model, akin to Valiant’s PAC learning boosting model. However, it remains unclear whether this generative forest, based on Class Probability Estimation (CPE), can effectively handle multiclass scenarios.
Section 3 introduces the supervised loss for the generative tree. Nevertheless, from a synthesized perspective, the drafting in line 75 lacks clarity. The authors should consider enhancing the clarity of these lines.
Moving on to Section 4, in line 125, the authors refer to 'c', but it is not clearly connected to the context provided in the text.
Regarding the supplementary material, Table A3 is challenging to read as the footnotes overlap with the image.
Technical Quality: 3
Clarity: 2
Questions for Authors: In Table 2, the final row displays the wins and losses for us. I am not closely following the numbers as you have already chosen the green one, which includes p-values indicating significance. Another query pertains to comparing the number of trees. It seems unfair to compare T=500 with competitors having fewer trees (10, 50, 100, 200). In simpler terms, I couldn't comprehend why you mentioned that all these approaches rely on models that are vastly distinct from each other. I know that in the article the authors mentioned that ARFs are trained with a varying number of trees T ∈ {10, 50, 100, 200}. Additionally, ARFs incorporate an algorithm for tree size selection, eliminating the need for manual selection. On the other hand, CT-GANs undergo training with a different number of epochs E ∈ {10, 100, 300, 1000}, but I don't follow why these models are then compared with generative forests comprising T = 500 trees and trained over a total of J = 2000 iterations. That approach was used for Tables 2 and 4.
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 4
Limitations: The work presented by the authors is highly promising since they have introduced a novel methodology for constructing generative tree models based on class probability estimation (CPE) as a loss function in binary classification tasks. However, the paper would benefit from a dedicated section discussing the current limitations of this method. Specifically, it would be useful to understand whether this approach can be applied with the same efficacy to multiclass classification or, with some modifications, to regression tasks.
Additionally, it would be valuable to identify the types of tabular data that the proposed method might struggle with or the conditions under which its performance may be suboptimal. For instance, are there specific data types or distributions where this model does not perform as well? A thorough examination of these aspects could enhance the article.
Moreover, while the authors have laid a strong foundation for binary classification problems in their work, constructive suggestions for improvement include providing an analysis or discussion on how the current method could be extended or adapted to handle multiclass classification tasks or regression problems, which would be helpful.
In that line, another interesting thing could be identifying the specific types of tabular data that might present challenges for the proposed method, which would be beneficial. Under what conditions does the model's performance decline, and what are the characteristics of datasets where this approach is less effective?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for noting that our approach "[...] presents a fresh perspective that has not been extensively explored in the literature [...]" and noting that the quality of our submission is commendable.
## weaknesses
> In section 1, [...]
[cvp8-A] we assume the reviewer wants to know if our models can tackle supervised learning with multiclass problems in the context of class probability estimation. The answer is yes, and it just resorts to the fact that our generative models can easily provide the full distribution of missing values given the observed ones and the generative model. Indeed, supervised learning is "just" missing data imputation where the class label is missing and the full distribution we obtain, in this case, is the complete Bayes rule estimates for $P[Y=c|X]$ (where $c$ ranges in any set, thus allowing multiclass). Note that we can do something "slicker" than supervised learning when additional variables are missing because in this case, we get the complete estimates for $P[\mbox{all missing variables} = \mbox{set of values}|\mbox{observed variables}]$. The pointwise max for the set of values necessarily contains a prediction for the class.
> Section 3 introduces the supervised loss for the generative tree. [...]
[cvp8-B] we would definitely be happy to use part of the +1 camera-ready page to expand on this part as it grounds our approach in the supervised learning framework.
> Moving on to Section 4, in line 125, [...]
[cvp8-C] we agree. In fact part of the hardness probably comes from the fact that we did some "acrobatic" implicit referencing to save space: (C) is used before in lines L119+. Same as for [cvp8-B], we believe we can use part of the +1 page to polish this.
> Regarding the supplementary material, Table A3 [...]
Indeed !! We apologize (this will be fixed), the obfuscated line is:
"Table A3: 2D density plots ( Generative forests x = Life Style Index and y = Customer Rating) for data generated on sigma_cabs, from models learned with 5% missing features [...]"
It is important because sigma_cabs is a real-world domain and some features appear to be "trickier" to generate than others.
## questions
We directly reply here to the list of concerns.
[cvp8-D] First, ARF have fewer trees *but* they are much bigger than ours (the technical reason being that to get a good model overall, each tree needs to already be a good model on its own, because the distribution learned is a convex combination of the trees' -- get one wrong, and the overall model can be heavily penalized). In the end, their total number of splits is equivalent to or bigger than ours. We propose to also indicate the average total number of splits for ARF.
[cvp8-E] "[...] Additionally, ARFs incorporate an algorithm for tree size selection, eliminating the need for manual selection [...]" Correct ! As we say in L311-L312, we do not have one in this paper. Such a matter would deserve a work of its own. However, the blueprint of such an algorithm seems intuitive: get inspiration from pruning algorithms in supervised decision tree induction.
"On the other hand, CT-GANs [...]" We understand that the reviewer points out that comparing some parameters is like comparing apples and oranges. It is true and we accept the criticism: we deliberately wanted to compare techniques coming from many different model types [ALL]. It then came as a constraint to figure out good model parameters to train for the contenders: in the case of KDE, we kept the kernel that gave the best results overall on our datasets; we tried to use a number of epochs that looked sound for the datasets we had for CT-GAN, etc. This explains why we give several different results for our contenders, in order not to arbitrarily fix values.
## limitations
> The work presented [...] multiclass classification [...] regression.
Our method can indeed cope with supervised learning, and this comes from the abilities of our models [cvp8-A]. **However**, this would represent a fourth task, and we are not sure this represents a limitation of our work, which already contains experiments on 3 different tasks (no paper cited by the reviewers addresses this many tasks). Probably worth addressing in a work of its own, along with [cvp8-E]?
> Additionally, it would be valuable to identify [...] proposed method might struggle with [...]
[cvp8-F] We agree. We did not do it because we strongly suspect from our experiments that, more than "hard" datasets, there is actually a possibly suboptimal choice of our parameters $J, T$. By deliberately choosing those from a very small set in our experiments, we accepted the possible suboptimality of some of our results. We did not want to hypertune our method, and we believe there is a simple path forward, which would then be useful for guessing potentially hard datasets: first solve the pruning problem [cvp8-E]. We are happy to elaborate more on this in the +1 page.
> Moreover, while the authors have laid a strong foundation [...]
We refer to [cvp8-A] on this. | Rebuttal 1:
Rebuttal: ## To all reviewers
[ALL] We warmly thank all reviewers for their work and for all rating our paper's contribution as generally "good or excellent". Many questions have been asked, and we take it as a sign of interest that so many contenders were proposed, in addition to the five (5) we already have (in fact, 6 if we include the generative trees of [29] -- ref in our paper, and 7 if we add COPY in gen-discrim) for our three (3) distinct experimental settings (TVAE, TABDDPM, ARF-FORDE, RFDE, Forest-Flow). We hope it is obvious that it would be hard to cram substantially more content into our paper, as it has been remarked that its current state can already be hard to parse at points in the experiments [kTDz-C]. However, since there is a +1 page in the camera-ready, we propose to add Forest-Flow, because it is another kind of model that does not belong to the categories of the models we have compared against (mixing elements of trees, neural nets, kernels or graphical models). However, the paper on Forest-Flow clearly mentions that their results are worse than MICE on missing data imputation. Since we are competitive with MICE, we would be happy to instead compare on data generation with Forest-Flow.
We would like to stress the need to *keep* the current formal part in its current state and not reduce it: for example, none of the papers cited by the reviewers has convergence rates on training. We would also like to add a few comments on the statistical consistency of our method [kTDz-A2].
Tags like [ALL], [kTDz-C] have been put for easy search in the page's content. | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The authors introduce a novel generative model (Generative Forests - GF) for sampling and density estimation, leveraging an ensemble of trees. Their method partitions the input domain by considering all possible intersections of the supports of the leaves across the ensemble, resulting in a much finer partitioning than a single tree could achieve. For each subset of the input domain thus obtained, the model estimates a uniform density based on an empirical training measure. Efficient algorithms for sampling from such a model and evaluating the densities are derived.
To train the model, the authors propose an algorithm that iteratively splits a selected leaf node in one of the ensemble's trees. This splitting is guided by minimizing a strictly proper loss for a binary class probability estimation task.
The authors evaluate their method's performance against competing approaches across three main dimensions: the quality of synthetic data generation, missing data imputation, and density estimation.
Strengths: - I find that the paper reads reasonably clearly. The language and notation used throughout the paper are very precise, although they occasionally appear overly complex.
- The proposed method is interesting and I believe enjoys reasonable novelty.
- The method and theoretical results all appear to be sound.
- The proposed method's ability to estimate densities makes it more versatile than other generative methods commonly used for tabular synthetic data generation. Few methods can match its generality in handling various types of tabular data, except for other tree-based methods like Adversarial Random Forests (ARF). In this context, the positive results presented in comparison to ARF are promising.
- The authors have made their implementation code available.
Weaknesses: While the authors adequately explain their ensemble tree-based model and its sampling and density computation methods, the explanation of the training algorithm is less thorough. There is insufficient discussion on how the method scales computationally with the number of trees and splitting iterations (see my first question). Additionally, more discussion on the motivation behind the formulation of the CPE task would be welcome.
According to the paper's Lemma 4.4, GF estimates a different density value at each non-empty intersection of the supports of the leaves, derived directly from the empirical training measure. This is similar to ARF or other works such as [1] or [2] when uniform densities are used at the leaves, with the exception that those methods output an average of densities estimated independently from the leaves of each tree in the ensemble.
It is unclear why the proposed approach should be better. From a generalization perspective, I would intuitively assume that averaging densities from multiple trees has a regularizing effect compared to using a single tree (see [2]).
On the other hand, GF's density estimation is based on a finer partitioning, whose cardinality grows with the product of the depths of all trees in the ensemble. While this could provide greater representational capacity, it would seem to make the method more susceptible to overfitting.
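To make the contrast concrete, here is a toy 1-D sketch of the two aggregation schemes as I understand them: a GF-style density defined on the intersection partition of two stumps, vs an average of per-tree uniform leaf densities (the cuts, data, and function names are all made up for illustration; ARF's actual leaf densities are more elaborate than the uniform ones used here):

```python
# Toy 1-D illustration (hypothetical cuts and data) of the two schemes:
# a density on the intersection partition of two stumps vs an average
# of per-tree uniform leaf densities. Not any paper's implementation.
import numpy as np

rng = np.random.default_rng(0)
data = rng.uniform(0.0, 1.0, 1000)

cuts_a = [0.0, 0.3, 1.0]  # tree A splits [0, 1] at 0.3
cuts_b = [0.0, 0.7, 1.0]  # tree B splits [0, 1] at 0.7

def tree_density(x, cuts):
    # Uniform density on each leaf: empirical leaf mass / leaf length.
    dens = np.zeros_like(x)
    for lo, hi in zip(cuts[:-1], cuts[1:]):
        mass = np.mean((data >= lo) & (data < hi))
        dens[(x >= lo) & (x < hi)] = mass / (hi - lo)
    return dens

def gf_style_density(x):
    # One value per non-empty intersection cell: {[0,.3), [.3,.7), [.7,1)}.
    return tree_density(x, sorted(set(cuts_a) | set(cuts_b)))

def averaged_density(x):
    # Average of the independently estimated per-tree densities.
    return 0.5 * (tree_density(x, cuts_a) + tree_density(x, cuts_b))

# Both are proper densities: their Riemann sums on a fine grid are ~1.
grid = np.linspace(0.0, 1.0, 100001)[:-1]
dx = grid[1] - grid[0]
print((gf_style_density(grid) * dx).sum(), (averaged_density(grid) * dx).sum())
```

The sketch shows the structural point: the intersection scheme can assign a distinct value to every cell of the refined partition, while the averaging scheme is constrained to sums of per-tree piecewise-constant densities, which is where the regularization-vs-capacity trade-off comes from.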
The theoretical analysis provided also focuses only on approximating this empirical training measure, without addressing overfitting and the approximation of the data-generating measure.
For a model such as described above, it seems obvious that the less coarse the partitioning becomes, the more faithfully an empirical distribution can be approximated. I am therefore unsure of the practical significance of these results and I think the paper would benefit more from a discussion of generalization ability.
My main concern with the paper, however, lies with the evaluation and comparison to other methods:
- For the many analyses presented in the main paper and the appendices, different methods are compared against and for each method/scenario combination different datasets are used. This lack of consistency in comparisons and datasets in the presented results, leaves an impression that results are missing.
- The choice of datasets focuses on relatively small numbers of data points and mixes both synthetic toy data and real datasets together. This is compounded by some of the main results being presented only as summary data (#wins/#ties/#losses) over this heterogeneous group of datasets.
- There is no serious attempt to compare the proposed method with state-of-the-art approaches for synthetic data generation, particularly neural network-based ones:
- The comparison is limited to CT-GAN, which has been shown to underperform almost every other popular neural network method, including the TVAE method put forward as a baseline in the same paper.
- Additionally, the authors do not tune any hyperparameters for CT-GAN beyond the number of epochs. As a GAN, CT-GAN can have convergence issues, and without proper tuning, it may fail to converge. This lack of tuning undermines the fairness of the comparison, especially since it is unclear how much the authors optimized the hyperparameters of their own method.
- The authors also report unusually long training times on a very small dataset, presumably because no GPU is used to train these models and possibly training for too long to no avail. While I understand that the proposed method can be trained on a CPU alone, and maybe that is the point being made, I don't think this remark is made in good faith. In my experience, methods like CT-GAN, TVAE, and TabDDPM can achieve reasonable performance quickly (within minutes to tens of minutes) when trained on a consumer GPU (e.g., 2080) even on larger datasets than those used in the reported experiments.
- Results for CT-GAN on the two larger datasets are not reported, so the comparison primarily relies on very small datasets where neural networks might perform worse.
A comparison with TabDDPM [3], a strong neural network-based generative method, would be valuable, as it is currently a standard benchmark in papers evaluating synthetic data and, in my opinion, a consistent contender for a state-of-the-art method.
- The evaluation of generated samples relies solely on an Optimal Transport distance metric. Although I am not well-versed in the practical computation of this metric, the authors' comment about selecting an entropic regularization parameter that avoids numerical instabilities across all domains is not particularly reassuring. It raises the question of why other metrics were not used in conjunction.
- Additionally, the setup described by the authors (GEN-DISCRIM), which uses Random Forests to distinguish real from synthetic data, appears both reasonable and well-founded. I am curious why this wasn't the primary evaluation method for comparing with other approaches. Instead, it is only used to compare GF against optimistic and pessimistic baselines.
- On this topic, the authors might also want to look at e.g., [4] for utility metrics in machine learning tasks that are common in comparing synthetic data generators, although I wouldn't personally fault them if they choose to disregard such metrics.
- Finally, given that the proposed model is able to estimate densities and that evaluating the log-likelihood of held-out data is the gold standard for objectively measuring the quality of such models I don't understand why this evaluation doesn't have a more prominent place in the paper:
- The authors only compare their method to KDE, which GF seems to underperform on several datasets, particularly those with a moderate number of dimensions. This is concerning, and I think it is important to investigate how the proposed method scales to higher-dimensional data compared to KDE.
- ARF is missing as a baseline in these evaluations which is strange since it is another model that is capable of estimating densities. Additionally, other methods, such as [2], report consistently outperforming KDE, making it worthwhile to include as an additional comparison in my view.
- Finally, the evaluation setup is not thoroughly explained in the main paper and only summary results against KDE are presented. It seems that the authors don't always use log-likelihood to compare these models due to their proposed model predicting 0 density for test samples (see question 3 below). This issue is only mentioned in the appendix but should, in my view, be disclosed in the main paper.
[1] Alvaro H. C. Correia, Robert Peharz, Cassio de Campos. Joints in Random Forests, 2020 (https://arxiv.org/abs/2006.14937)
[2] Hongwei Wen, Hanyuan Hang. Random Forest Density Estimation, 2022 (https://proceedings.mlr.press/v162/wen22c.html)
[3] Akim Kotelnikov, Dmitry Baranchuk, Ivan Rubachev, Artem Babenko. TabDDPM: Modelling Tabular Data with Diffusion Models, 2022 (https://arxiv.org/abs/2209.15421)
[4] Lasse Hansen, Nabeel Seedat, Mihaela van der Schaar, Andrija Petrovic. Reimagining Synthetic Tabular Data Generation through Data-Centric AI: A Comprehensive Benchmark, 2023 (https://arxiv.org/abs/2310.16981)
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. In line 218 the authors mention regarding finding the optimal split in $\text{splitPred}$ that "the optimisation is no more computationally demanding" than for decision tree induction. I'm not sure if this is a typo but it seems to me that, as the trees in the ensemble grow deeper, finding the best split could quickly become intractable.
As an example, in the case where only single stumps are used, the potential cardinality of $\mathcal{P}(\mathcal{T})$ at iteration $j$ is $2^j$ which means that intersections with that many subsets have to be considered when finding the best split point. This seems like an untenable scaling of the computation with the iteration number but maybe I am misunderstanding something about how the best split is selected as the authors' explanation is not clear.
2. Regarding density modelling the authors only compare with KDE. Since ARF also provides density estimates why not compare against this method as well? Additionally [2] could also be worthwhile comparing against.
3. Could the authors clarify what they mean by "expected densities" that are used "to cope with the eventuality of zero (pointwise) density predictions" in appendix section V.V.3.7? If their proposed model assigns 0 probability to a subset of the domain that has non-zero mass under the data generating distribution I would say that this is a defect of the model and a sign of poor regularization. Changing from the commonly used log-likelihood metric to one that doesn't penalize this so harshly should be surfaced and discussed in the main text in my opinion. Furthermore, it seems that this choice is made on a dataset-by-dataset basis, I can only assume depending on whether the proposed model yields $-\infty$ log likelihoods which seems to happen for most non-toy datasets. As it stands, these details are hidden in the appendix while the main paper presents a potentially misleading picture based only on #wins.
4. Still regarding 0 density predictions, the authors propose a way to avoid this issue for GFs in line 173. It would be interesting if this was explored further and the resulting (more regularized) density model compared to other approaches for density estimation based on log-likelihood.
[2] Hongwei Wen, Hanyuan Hang. Random Forest Density Estimation, 2022 (https://proceedings.mlr.press/v162/wen22c.html)
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: I find that the authors hardly discuss the limitations of their method. Most importantly there is little discussion of training times and how they scale with the number of iterations/data dimensions/number of samples. There is also no discussion of how performance and generalization are expected to scale with these factors.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for pointing out the soundness, novelty and versatility of our approach. We must confess we found the review a tad offensive at times [kTDz-D] [kTDz-H], accusing us of a *complete lack of seriousness* in some of our experiments, acting *not in good faith* elsewhere, finding *strange* missing parts elsewhere. It is hard to swallow not just because of the language that implies we did not meet ethical standards of scientific papers, but also because it sounds like an *a priori* guilty verdict, that comes even before any explanation of ours. We apologize if our tone sounds "combative" at times.
We proceed in order of the review, with tags to help cross-referencing and navigation. Due to the length of our rebuttal, we put answers to the "questions" field here and postpone the rest of our rebuttal to additional comments.
## Questions
> In line 218 [...] I'm not sure if this is a typo but it seems to me that, as the trees in the ensemble grow deeper, finding the best split could quickly become intractable. [...]
[kTDz-I] Wrong: to find the best split, we only need to keep those elements in ${\mathcal P}({\mathcal T})$ **with $>0$ empirical measure** because the rest has zero measure (and the related calculations to find the best split are trivial for all those), so we only have to store those elements with $>0$ empirical measure, and their number is limited by the size of the training sample. It has also been implemented this way (please check the commented code; this is the purpose of the class MeasuredSupportAtTupleOfNodes). In addition, as we get deeper, the computation can in fact be much quicker because it also depends on the size of the training sample in the elements to compute the effect of the split, and this size obviously decreases through splitting.
It is no more computationally demanding than DT induction because then we only need to parse the whole set of training points at the leaf and check whether they go left or right, which is exactly the same procedure as for DT induction. At the end, we need to eventually split the corresponding elements of ${\mathcal P}({\mathcal T})$ whose support is included in the leaf, instead of one for a DT, but this cannot represent more than the size of the training sample at the leaf and thus costs in general less than the splitting test, for a global complexity that is still O(DT induction) (we sketch the argument) -- we would be happy to make this more explicit.
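To make this bookkeeping concrete, here is a toy sketch (not our actual implementation; the feature indices and thresholds are made up for illustration) showing that the number of stored elements of ${\mathcal P}({\mathcal T})$ with positive empirical measure is bounded by the training sample size, even though the number of candidate tuples grows as $2^j$ with the iteration number:

```python
import random

def nonzero_support_elements(points, stumps):
    """An element of P(T) is a tuple of left/right decisions, one per stump
    (feature_index, threshold). Only elements containing at least one
    training point have positive empirical measure, so only those are stored."""
    elems = set()
    for p in points:
        elems.add(tuple(p[f] <= t for f, t in stumps))
    return elems

random.seed(0)
points = [tuple(random.random() for _ in range(3)) for _ in range(200)]
# 5 stumps -> up to 2**5 = 32 candidate tuples, but storage stays <= |sample|
stumps = [(0, 0.5), (1, 0.3), (2, 0.7), (0, 0.2), (1, 0.8)]
elems = nonzero_support_elements(points, stumps)
assert len(elems) <= min(len(points), 2 ** len(stumps))
```

As the splits get deeper, each stored element also holds fewer training points, which is why the per-split work shrinks rather than grows.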
See also [6MHb-A].
> Regarding density modelling the authors only compare with KDE. Since ARF also provides density estimates why not compare against this method as well?
We hope we answered in [kTDz-H].
> Could the authors clarify what they mean by "expected densities" that are used "to cope with the eventuality of zero (pointwise) density predictions" in appendix section V.V.3.7? [...] this is a defect of the model and a sign of poor regularization.
Indeed, but this happened with our contender, not our technique: for generative forests, a mechanism that we have implemented prevents this (see L173-L174), and for ensembles of generative trees (Section IV) this situation is impossible (L672-L673). We can make this clearer using part of the +1 page space.
> [...] As it stands, these details are hidden in the appendix while the main paper presents a potentially misleading picture based only on #wins.
We agree but this comes as a consequence of (i) us adapting to our contender and (ii) us having to summarize all of this in a short format. See also [kTDz-C]. Since there is +1 page in the camera ready, we would be happy to oblige in moving more details to main file.
> Still regarding 0 density predictions, the authors propose a way to avoid this issue for GFs in line 173. It would be interesting if this was explored further and the resulting (more regularized) density model compared to other approaches for density estimation based on log-likelihood.
Apart from indeed implementing this mechanism, we have not, for a simple reason: we did not optimize our models for density estimation because it was a side task (our "motto" could be "*train a generative model, get missing data imputation and density estimation for free*"). We are however *convinced* that further optimization for the density estimation metrics is possible and would most surely get better results (we can elaborate more in the camera ready using the +1 page).
---
Rebuttal 2:
Title: comments on "weaknesses" (1/3), first two bullet points
Comment: > [...] This is similar to ARF or other works such as [1] or [2] when using uniform densities at the leaves [...]
[kTDz-A] Even with this simplification of the models, this would not hold *unless* the trees used in ARF, [1] and [2] are very big. Consider for example Table 1 in our paper and the 50 stumps model we obtain. As far as we can tell, each of the other methods would have to learn very big models to get the contrast we obtain between high and low densities (otherwise, the convex combination of densities, as e.g. in ARF, would lead to a very "blurry" solution -- extreme case for such methods: datasets with functional, non-stochastic dependences among variables, see [kTDz-B]). Hence, each tree would have to *separately* be a good model (and thus big), which is not our case. Our models could be very small.
This leads us to the formal standpoint: *none* of the methods cited have convergence results: [1] and ARF stick to behaviour *in the limit* (consistency). For [2], approximation is guaranteed with respect to the sample size and not the model size (hence, to get a better model, one needs a strictly diverging sample size). Note also the restrictive assumptions made in all these papers to get the results (on the target density for [2] and ARF, on the model size and structure for [1]). We can argue that our Weak Learning Assumption is much weaker *and* it yields rates (on training). We can only stress the need for training rates (convergence), not just for their importance "per se" but also because they are not known for so many existing techniques.
[kTDz-A2] Regarding generalization, we disagree, both from the intuition and formal standpoints: from the intuition standpoint, the claimed regularization effect can be *sufficient* to get good results but it is in no way *necessary*. From the formal standpoint, it is in fact not hard to show that our algorithm is consistent. A straightforward way to show it in the setting of [1] would be to use our Lemma 4 and then use the regularity assumptions about the model in [1] to show the (almost sure) convergence of our model to the true one, using e.g. the $l_1$ norm as in [1]. Note that more sophisticated proofs relying on different / weaker assumptions are possible. Our purpose is to give one example of such a proof and show that we can get the same asymptotic results as in the review's cited papers. Those being the references in the review's content, we hope this is sufficient to address the "generalization" issue the review mentions there. We would be happy to extend this using the +1 page in the camera ready.
> For the many analyses presented in the main paper and the appendices, different methods are compared against and for each method/scenario combination different datasets are used. This lack of consistency in comparisons and datasets in the presented results, leaves an impression that results are missing.
[kTDz-B] We do agree with the reviewer, but this essentially comes as the consequence of having experiments on **three** different settings -- data generation, missing data imputation, density estimation -- with **five+** different contenders (as an example, none of the papers cited by the reviewer have this level of complexity in experiments). Each of the contenders has constraints in the form of data used, which can depend on the setting. We refer to the Appendix (V.2) and the related papers for discussion on settings and restrictions / bugs that we eventually experienced. The datasets that we kept were definitely worth keeping at least for **one** setting: for example, we could not run kc1 on ARF (thus, no "lifelike" results for kc1), yet the 2D heatmaps we got in Figure 3 definitely show that our models can model **deterministic** (functional, non-stochastic) dependences as well.
> The choice of datasets focuses on relatively small numbers of data points and mixes both synthetic toy data and real datasets together. This is compounded by some of the main results being presented only as summary data (#wins/#ties/#losses) over this heterogeneous group of datasets.
[kTDz-C] We are not sure this is a criticism: the small number of data points (see e.g. Table 2) was picked only to show that a simple guess of our algorithm's parameters can beat a number of parameterisations of other algorithms. Summary data is just a consequence of the numerous experiments (settings x contenders) that we have run, which need to fit within the paper's constraints (we can only point to the suggestion of other reviewers to **add** even more experiments [ALL]). Indeed, we have considered heterogeneous datasets, but the main reason was only to show results on a wide range of datasets instead of focusing on a single category (e.g. real-valued) of datasets. We wanted to provide a fair "panoramic" picture of the results of our algorithm. See [ALL] for a proposal to all reviews on additional experiments to take into account the +1 page.
---
Rebuttal 3:
Title: comments on "weaknesses" (2/3), third bullet point without last sentence
Comment: > There is no **serious** attempt [...] I don't think this remark **is made in good faith** [...] might perform worse [emphases ours].
[kTDz-D] We must confess we found these lines of comment a *tad* offensive. Note that the request thus addresses 1/3 of our settings and 1/5+ of our algorithms.
- First, we are surprised by the comment: "The comparison is limited to CT-GAN, which has been shown to underperform almost every other popular neural network method, including the TVAE method put forward as a baseline in the same paper.", when in fact the CT-GAN paper, which indeed introduces CT-GAN and TVAE, says quite the *opposite*: "CTGAN achieves competitive performance across many datasets and outperforms TVAE on 3 datasets." (page 2). This is why we used CT-GANs.
- Second, we are surprised by the next comment: "Additionally, the authors do not tune any hyperparameters for CT-GAN beyond the number of epochs. As a GAN, CT-GAN can have convergence issues, and without proper tuning, it may fail to converge. This lack of tuning undermines the fairness of the comparison, especially since it is unclear how much the authors optimized the hyperparameters of their own method."
[kTDz-E] The reason why a reader might have the impression that we do not comment on our hyperparameter tuning is because there was essentially **none**: we have just two parameters to tune for our algorithm ($J, T$). Each of our experiments is done using essentially a single choice of parameters, eventually *shared* among experiments. To make sure we would not disadvantage other contenders, we made sure to test them on a range of their parameters (number of trees for ARF, of epochs for CT-GAN, kernels for KDE, etc). The reviewer may conclude that this is not enough for a fair comparison and we should have tuned CT-GAN more to get better results, but then one may ask: given the results that we have, isn't a method with almost no necessary hyperparameter tuning better than one that would require a heavy (dataset-dependent) machinery to tune theirs?
- Third, we were also surprised by the comment on times: "The authors also report unusually long training times on a very small dataset, presumably because no GPU is used to train these models and possibly training for too long to no avail. While I understand that the proposed method can be trained on a CPU alone, and maybe that is the point being made, I don't think this remark is made in good faith. In my experience, methods like CT-GAN, TVAE, and TabDDPM can achieve reasonable performance quickly (within minutes to tens of minutes) when trained on a consumer GPU (e.g., 2080) even on larger datasets than those used in the reported experiments."
[kTDz-F] We understand the reviewer is a strong proponent of CT-GAN and we respect that. Our only remark on times is in L850-L851 (we assume it is the one that led to the comments). Our point was not to discredit any (CT-GAN or other) NN algorithm based on times. Our paper is not a paper competing on computers "running on steroids" (for whatever specs that would represent); we never write so, and a short glimpse at the computers used (L758-L760) shows that it could not have been the case. We know that different techniques eventually require different material to run optimally -- this is why we eventually did not provide times, to avoid such discussions. And note that such discussions would be useless in our case: on all our datasets, **our algorithm was deliberately run on the "low-end" laptop** (not even on our desktop!) because we had the constraint to be able to train our algorithms on very simple material. **CT-GANs were trained on our (more powerful) desktop**. We do not want to suggest to add this before L850, but we are confident that with this knowledge, the reviewer will take back their comment on us putting comments **not in good faith**. If computation times are sensitive to discuss, then we can just remove them -- this will leave space to treat questions from all reviewers -- or make a general statement around the fact that running times depend on the material used.
- Fourth and last, we reply to the comment: "Results for CT-GAN on the two larger datasets are not reported, so the comparison primarily relies on very small datasets where neural networks might perform worse.". Indeed... but Table A6 explains that we had issues on getting results on some datasets. This is not a criticism of CT-GAN's implementation. The reviewer may notice that VCAEs and ARFs also did not provide results sometimes (Table A7, Section V.2).
---
Rebuttal 4:
Title: comments on "weaknesses" (3/3), from last sentence of third bullet point to last two bullet points
Comment: > A comparison with TabDDPM [3] [...]
[kTDz-G] We value this opinion but we would like the reviewer to consider that we have, at this stage, *five (5)* contenders in our experiments (in fact, 7 according to other metrics, see Sk9u). This suggestion adds one more and in fact, aggregating the suggestion of other reviews [ALL] we would get a total of **12 contenders**. Consider that the TabDDPM reference [3] contains comparisons with *three* contenders (CTABGAN+ is an extension of CTABGAN). At this stage, the reviewer already criticizes the sheer amount of results that forces us to cram a lot in a small space [kTDz-C]. Considering also the fact that CT-GAN is arguably among the most cited Tabular + NN generative method, we would prefer to stick to our choices of contenders. **This being said**, we are happy to extend our list of contenders to a family of techniques not yet included in our benchmark, see [ALL] -- should all reviewers concur, of course.
> The evaluation of generated samples relies solely on an Optimal Transport distance metric. [...] entropic regularization parameter [...]
[kTDz-F] This is a difficult question, not in technical terms (we regularize the OT metric with an entropic term because it considerably speeds up computations -- it is in fact standard now, as can be seen from the many papers citing Cuturi's paper [8], and it is largely necessary because of the computational burden of acting without, see our rebuttal to Sk9u). It is difficult because, as the reviewer points out, there comes the question "why not another metric"? To compare densities, one has essentially three generic categories of "distortion measures" (some are not metrics): (i) based on the density (e.g. f-divergences), (ii) based on the parameters (e.g. Bregman divergences) and (iii) based on the support (e.g. OT). Of course, one can "break down" the computation of the metric into several ones, e.g. depending on features, but we wanted to avoid "hybrid" ones that are eventually harder to explain. The interest in an OT metric is that it is computed using the "full" support of the distribution (a Bregman divergence depends on parameters = "aggregates"), it does not require absolute continuity assumptions (unlike f-divergences), and it is computed using a cost metric over this *whole* support (no part of the space "left behind"). Since we learn generative models, it looked like a maximally fine-grained way to assess the distribution learned.
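For concreteness, the entropy-regularized OT cost between two empirical samples can be computed via Cuturi-style Sinkhorn iterations, as in the following minimal sketch (uniform weights, Euclidean cost; this is an illustration, not the exact evaluation code we used, and the regularization strength `eps` here is arbitrary):

```python
import numpy as np

def sinkhorn_ot(X, Y, eps=0.1, iters=500):
    """Entropic-regularized OT cost <P, C> between two empirical
    distributions with uniform weights (Sinkhorn fixed-point iterations)."""
    C = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)  # pairwise costs
    K = np.exp(-C / eps)                                        # Gibbs kernel
    a = np.full(len(X), 1.0 / len(X))                           # source weights
    b = np.full(len(Y), 1.0 / len(Y))                           # target weights
    u = np.ones_like(a)
    for _ in range(iters):                                      # scale to match marginals
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]                             # transport plan
    return float((P * C).sum())

rng = np.random.default_rng(1)
A = rng.normal(0.0, 1.0, (50, 2))
close = sinkhorn_ot(A, A + 0.01)   # nearly identical samples
far = sinkhorn_ot(A, A + 5.0)      # strongly shifted samples
assert close < far
```

The entropic term turns the linear program into these cheap matrix-scaling iterations, which is the speed-up motivating its now-standard use.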
> Additionally, the setup described by the authors (GEN-DISCRIM), [...] I am curious why this wasn't the primary evaluation method for comparing with other approaches.
[kTDz-G] Because all domains used for GEN-DISCRIM had to be supervised domains, i.e. we could not use it in general.
> The authors only compare their method to KDE, which GF seems to underperform on several datasets [...]
[kTDz-H] We quote our introduction (emphases ours): "A GF can thus be used to generate data, but can also be used for **side tasks** like missing data imputation or density estimation.". Density estimation is a side task among a total of *three* that we had. All papers cited by the reviewer in [1-4] focus on exactly *one* task, with the exception of Kotelnikov et al', which focuses on *two*, but the "side task" involves one contender only. We did investigate those side tasks because our generative models allow us to treat them "for free" (no additional training) and it would have been a pity not to try how they fare against techniques *specifically designed for such tasks*.
Our objective was **not** to be better than techniques that were designed for the side tasks (think "no free lunch") but rather to fairly locate our technique with respect to well known contenders. Again, think that we can treat those side tasks "for free". We were very pleased that we would very efficiently compete against KDE *at least sometimes*. We also invite the reviewer to check from L750-L752 that we did some tuning of KDE to select the *best kernel on our data*.
> ARF is missing as a baseline in these evaluations [...] which is **strange** [...] Additionally, other methods, such as [2] [...] worthwhile to include (emphasis ours).
[kTDz-H] So far in the review, these are the fourth and the fifth baseline that we are being asked to add. There are two reasons why we did not use FORDE (such is the name). First, FORDE is just a wrapper around FORGE, which we use to compare for synthetic data generation; since we substantially beat FORGE, we were concerned that beating FORDE would be, or would be seen as, *unfair*. Second, we make it clear that our objective was to get a panorama of many *different* techniques. This is, we believe, a strong selling point of our paper, exposed from the abstract: we do not just focus on comparing against neural nets or tree-based approaches. KDE is a very well used family of techniques that brought kernels to our comparison. We believe this explains why we did not consider [2] either.
---
Rebuttal 5:
Title: Response to Authors
Comment: I thank the authors for their detailed response.
I also apologize for my language but I want to reassure the authors that they are extending their interpretation of what I wrote far past what I actually wrote or ever meant to imply.
In order to be mindful of the authors' time I will focus the discussion on the **main** concerns that are stopping me from increasing my score. Most of these relate to the evaluation setup.
I will also note that I had no context on what other reviewers would ask and I do sympathize with the authors but I need to evaluate the paper based on what is currently there and my opinion remains that the experimental setup is inadequate.
### (1) Comparisons on Synthetic data generation
The authors claim that their main task is **synthetic data generation** and use that to justify not comparing with ARF (FORDE) on **density estimation**. I would expect any paper to compare to the best available methods especially on their main task but that is not the case here.
Even if not TabDDPM, I believe TVAE or perhaps CTAB-GAN would provide a bigger challenge than CTGAN, particularly given that only the number of epochs was tuned.
### (2) Single metric for evaluating synthetic data generation
1. I remain somewhat skeptical of the OT metric used for synthetic data evaluation because it requires defining a distance in the domain of the data. It is not obvious to me how to meaningfully define one on a domain with differently-scaled numerical features and categoricals. I would therefore be much more comfortable if there was at least another metric used for comparison.
2. I still don't understand why GEN-DISCR can only be used in supervised tasks.
I assumed that L901 referred to the pipeline in the ARF paper and that the authors' pipeline only involved training discriminators to distinguish between real and fake data (L895-L896). If not GEN-DISCR, at least some ML utility metric.
### (3) Density comparisons
1. I would not frame ARF (FORDE) as an additional evaluation since the authors already compare to ARF for sample generation (FORGE).
This sets my expectation that this comparison should have been done in the original paper despite the authors' claim that it is a *side quest*.
Even looking at the documentation for the ARF package, it is necessary to estimate a density model (FORDE) **first** in order to **then** run sample generation (FORGE).
2. I also want to remark that the authors beat FORGE **on the OT metric** used for synthetic evaluation. In my opinion, a density evaluation based on log-likelihoods would be that much more valuable as it offers a different type of metric, and one that is much less open to criticism for this type of data.
3. Synthetic data for ARF and GF is just sampling from a density model. Therefore comparing the density models directly makes a lot more sense in my opinion.
### (4) Scalability
My concerns with scalability were mostly addressed by the authors.
However, I would very much welcome if the authors were able to share some actual training times for their algorithm as other reviewers also requested (Sk9u). I am particularly interested in seeing how these times vary with the size of the datasets.
### (5) Results Reporting
I agree with the authors' own statement that they cram a lot of results in a small allocated space but, in my view, the problem is not the too many results but the too small space that was allocated for reporting them.
I particularly do not agree with the pooling of toy datasets and real datasets into the same aggregated results.
### (6) Minor point
I appreciate the author's clarification on the OT metric. I wonder however if regularization is really necessary with such small datasets. Couldn't the assignment problem be solved exactly and avoid introducing a regularization parameter that will only raise further questions?
---
Rebuttal 6:
Title: Response to new results.
Comment: I really appreciate the authors' efforts to resolve some of my main concerns, particularly those regarding the use of a single evaluation metric and the need for stronger comparisons.
However, the authors only provide these new metrics for CTGAN and the new Forest-flow method. It would be valuable to see the same comparison against ARF, given that the authors themselves identify it as the primary competing method.
---
Rebuttal 7:
Comment: Thank you for the ARF results.
While I still have some reservations about the paper, I am increasing my score to a 5 to reflect the reduced set of concerns. | null | null | null | null | null | null |
Credit Attribution and Stable Compression | Accept (poster) | Summary: This paper studies credit attribution and stable compression. Credit attribution aims to assign recognition to the original creators of content generated by machine learning models. The authors propose formal definitions for credit attribution and connect it to differential privacy (DP). The proposed framework extends notions of stability, such as stable sample compression. The paper uses the PAC learning framework to study the learnability of machine-learning problems under the constraints of counterfactual credit attribution (CCA) and stable sample compression.
Strengths: 1. The paper proposes formal definitions for credit attribution.
2. It shows that every PAC learnable class can also be learned using a CCA learning rule.
3. It connects CCA and sample compression to differential privacy.
4. It characterizes the expressive power of different learning rules, such as CCA, sample compression, and differentially private learning, using the PAC learning framework.
Weaknesses: 1. The paper motivates the contributions using generative models but presents results only for PAC learnable classes. Showing how credit attribution could be applied to generative models would enhance the paper.
2. Even for classification problems, the paper does not provide examples of credit attribution on common tasks such as image and text classification. Some examples of applications of CCA would significantly improve the impact of this work.
3. The paper could benefit from connecting its contributions to existing literature. Adding a section on related work and positioning the contributions within the broader literature would provide valuable context.
4. The intuition behind semi-differentially private learning is not very clear. An explanation using a concrete example would be helpful.
Minor Correction: Sec 2, Line 71: X* -> Z*
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How could this framework be extended to generative models, such as text and image generators?
2. How can the quality of the credit attributions be evaluated? Is there a risk that a credit attribution mechanism could mistakenly assign credit to creators who are unrelated to the generated content (along with the actual creators on which the generated content was based)?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes, the authors have addressed the limitations of their work at different points in the paper, but not in a separate "Limitations" section, as encouraged in the guidelines.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you so much for reading our paper, and for your comments.
With regards to the weaknesses you bring up:
> 1. The paper motivates the contributions using generative models but presents results only for PAC learnable classes. Showing how credit attribution could be applied to generative models would enhance the paper.
2. Even for classification problems, the paper does not provide examples of credit attribution on common tasks such as image and text classification. Some examples of applications of CCA would significantly improve the impact of this work.
Our definitions for credit attribution (CCA and DP sample compression schemes) apply verbatim to any mechanisms in general, including generative models producing text/images. As such, they merely formalize the requirement that a mechanism satisfy the principle of counterfactual attribution.
> 3. The paper could benefit from connecting its contributions to existing literature. Adding a section on related work and positioning the contributions within the broader literature would provide valuable context.
We have elaborated on the many connections of our work (along with ample references) to the existing literature on differential privacy, stable compression schemes, private learning with public data, as well as copyright management throughout the Introduction and Definitions sections (Sections 1 and 2). If there are concrete examples of references that we missed, please let us know and we will be glad to incorporate those in the final manuscript.
> 4. The intuition behind semi-differentially private learning is not very clear. An explanation using a concrete example would be helpful.
Semi-differentially private learning is applicable in settings where it is conceivable to have access to a corpus of public data, in addition to some sensitive, private data. For example, many public surveys collecting data of some sort routinely have an option for users to “opt-in” to voluntarily allow the organization to use their data in a non-confidential manner. Users who do not care too much about the privacy of their data could choose to opt-in, while others that are more worried about privacy could choose not to. We can also imagine settings where a large corpus of unlabeled, public data is freely available (e.g., unlabeled images available online), but obtaining labeled data is expensive and necessitates preserving privacy. In such cases, where we have access to public data on which privacy constraints can be relaxed, it is reasonable to ask if we can design mechanisms that have better performance guarantees compared to the setting requiring full privacy. And as it turns out, this is indeed true—while simple function classes like thresholds on the unit interval cannot be efficiently learned under complete differential privacy, this task becomes much more efficient under semi-differential privacy. Further background about semi-differential privacy is provided in the works [1,2,3].
[1] Amos Beimel, Kobbi Nissim, and Uri Stemmer. Private learning and sanitization: Pure vs. approximate differential privacy.
[2] Amos Beimel, Kobbi Nissim, and Uri Stemmer. Learning privately with labeled and unlabeled examples.
[3] Noga Alon, Raef Bassily, and Shay Moran. Limits of private learning with access to public data.
---
With regards to your questions:
> 1. How could this framework be extended to generative models, such as text and image generators?
Thanks for this question, we should indeed add a discussion on the challenges in establishing for which classes generative learning is possible within our setup. In DP, there is a known close relationship between classes that are PAC learnable and classes for which one can produce private synthetic data. There are certain challenges in proving a similar connection between PAC learning and synthetic data generation in our framework, and we believe this is an important direction for future work. But overall, the first key step in understanding which classes one can generate synthetic data for is understanding what is PAC learnable.
> 2. How can the quality of the credit attributions be evaluated? Is there a risk that a credit attribution mechanism could mistakenly assign credit to creators who are unrelated to the generated content (along with the actual creators on which the generated content was based)?
Mechanisms satisfying our definitions are technically allowed to cite superfluous and possibly unrelated works; however, this is similar in spirit to the “Precision vs Recall” tradeoff. In the context of credit attribution, our definitions focus primarily on the “Recall” aspect of the problem—namely, an algorithm should not miss out on citing any work if its output derives heavily from it, even if it cites some extraneous works. To us, this seems like the more pressing objective of the two, owing to legality concerns: the owner of the work that the algorithm failed to acknowledge could sue in court. On the other hand, while the “Precision” problem of citing needless other works does seem like an issue, it appears to have less drastic implications. It is an interesting direction to further add the precision constraint in our definitions.
---
Please let us know if we can answer any more questions!
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarification. I would change my rating and suggest acceptance of the paper.
---
Reply to Comment 1.1.1:
Title: Thanks
Comment: Thank you! Just a quick reminder to raise the score when you can.
---
Rebuttal Comment 1.2:
Comment: I would like to thank the authors for their responses to my comments and questions. While some of my concerns have been addressed, my main concerns still remain.
1. The proposed framework for credit attribution does not seem directly applicable to the generative models discussed in the abstract and introduction as a motivation for this work. If the framework is indeed applicable to such models, it would be beneficial to include a clear example demonstrating this. If not, it might be advisable to reduce the emphasis on generative models in the early sections of the paper. Otherwise, this emphasis could potentially mislead readers into believing that the framework is effective for credit attribution in the context of generative models.
2. As I understand it, the paper examines the proposed framework for support vector machines in the context of classification. While this contribution is valuable, it would be even more impactful to demonstrate credit attribution for modern machine learning models, such as convolutional neural networks used for image classification. If the framework does not apply to such models, it would be helpful to include a comparison between the performance of state-of-the-art classification models and those that support credit attribution. This would provide valuable insights into the performance trade-offs associated with incorporating credit attribution into classification tasks.
3. While it is understandable to prioritize recall in the context of credit attribution, it is equally important to ensure that precision remains at a reasonable level. If precision is allowed to be too low, it would be trivial to achieve perfect recall simply by attributing credit to all training samples. An empirical analysis of the trade-off between recall and precision within the proposed framework would be highly beneficial.
---
Reply to Comment 1.2.1:
Title: Response to comment by Reviewer ariQ
Comment: Thank you again for your response. We really appreciate your time in engaging in this discussion!
1) We would like to clarify again that our definitions are valid for any mechanisms in general, **including generative models**. More explicitly, we can think of a generative model as a CCA mechanism $M: Z^*\to C \times Z^*$ satisfying Definition 1, where the input $Z^*$ is the training data that the model sees (e.g., existing artworks), the output $C$ is the new content it generates (e.g., a new artwork), and the output $Z^*$ is the set of works that it cites. In this sense, the definition is general and *is* effective for credit attribution in the context of generative models (it is just a criterion that the model must satisfy). Perhaps it also helps to keep in mind how Differential Privacy is a definition applicable to mechanisms in general, has a well-established theory in the PAC learning setting, and at the same time is also applied in practice for many other types of algorithms. Please let us know of an example if there is something more specific that you are looking for!
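To make the shape of the mechanism described above concrete, here is a purely illustrative Python sketch of the signature $M: Z^*\to C \times Z^*$; all names (including the toy generator) are hypothetical and this is only a shape, not an implementation from the paper:

```python
from typing import Callable, List, Tuple, TypeVar

Z = TypeVar("Z")  # a training example, e.g., an existing artwork
C = TypeVar("C")  # generated content, e.g., a new artwork

# A CCA mechanism maps a training dataset to (generated content, cited examples).
CCAMechanism = Callable[[List[Z]], Tuple[C, List[Z]]]


def trivial_generator(dataset: List[str]) -> Tuple[str, List[str]]:
    """A toy mechanism conforming to the shape: it 'generates' a
    concatenation of the first two works and cites exactly those works."""
    cited = dataset[:2]
    content = " + ".join(cited)
    return content, cited
```

Definition 1 is then a criterion on the distribution of such a mechanism's outputs, not a constraint on its internal architecture.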
2) SVM is just an example of a learning algorithm that (conveniently) already satisfies the definition that we propose for credit attribution. In general, modern ML algorithms like CNNs and Transformers do not, by themselves, satisfy our definitions. It is also well-agreed at this point that these models significantly outperform SVMs on benchmark tasks (e.g., image classification). One of our motivations behind proposing the definitions that we do (which, again, are general criteria) is that the community can start to modify and suitably adapt modern ML algorithms like CNNs so that they adhere to the credit attribution criterion.
3) We agree that it is always trivially possible to get full recall, by just citing the entire training dataset. However, the objective is to cite only a small-sized, relevant portion of the training data. In this sense, our PAC learning algorithm that satisfies CCA (Theorem 1) obtains meaningful bounds---even in the **worst case**, it cites only $k=O(d\log{n})$ examples, upon seeing a training dataset of $n$ examples labeled by a hypothesis class of VC dimension $d$. As we state, it would be an interesting future direction to obtain optimal, distribution dependent bounds that possibly also factor in precision. | Summary: The authors propose a new notion of credit attribution, which is a relaxation of differential privacy (DP). They discuss the relationship between the proposed notion, semi-differential privacy where part of the data records are public, and a DP sample compression scheme. PAC learning theory under these notions is investigated.
Strengths: The proposed credit attribution notion is interesting, and the established relationship between this notion, semi-differential privacy, and stable sample compression is intriguing. The PAC learning part seems correct, and the results intuitively make sense, even though I haven't read the detailed proof.
Weaknesses: Even though the proposed new definition of credit attribution may provide a new perspective on copyright and privacy, the current content may not support this strongly due to the following weaknesses:
1. In the introduction, the authors discussed the applications of credit attribution, particularly in copyright analysis. However, in the rest of the paper, they did not mention any applications of the proposed credit attribution notion (Definition 1). In fact, I am quite curious about the connection between Definition 1 and copyright analysis (such as in Section 2 of "On Provable Copyright Protection for Generative Models").
To me, it is necessary to establish a solid mathematical connection between Definition 1 (or Definition 3) and existing widely used copyright notions to convince readers that the proposed notion is truly useful. If not, an empirical study of the proposed Definition 1 or 3 in real-world algorithms, such as how to achieve the proposed definition by modifying SGD (just like DP-SGD, which is a DP counterpart of SGD), is necessary.
2. The authors claimed that Definition 1 extends the semi-DP notion in Definition 2. However, this extension is not clearly stated in the current version. Specifically, I was confused about how to convert a special case of Definition 1 to Definition 2 (which part is public in Definition 1). Moreover, why is Definition 2 a stronger notion than Definition 1, as claimed by the authors?
It would be a significant enhancement if the authors could establish a solid mathematical relationship between these three definitions.
Technical Quality: 3
Clarity: 2
Questions for Authors: Please clarify my concerns in the weaknesses section. Additionally, please point out if I made any mistakes in the review.
I believe this paper has the potential to be accepted as it might provide a new perspective on copyright and privacy analysis. However, for the current version, I would not suggest accepting it due to the weaknesses I listed.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for reviewing our manuscript. We appreciate your feedback and would like to clarify the scope and objectives of our paper to facilitate a more accurate assessment. The paper aims to explore and propose notions of credit attribution. It does not cover topics such as copyright analysis or the study of SGD.
We now address your concerns:
> In the introduction, the authors discussed the applications of credit attribution, particularly in copyright analysis. However, in the rest of the paper, they did not mention any applications of the proposed credit attribution notion (Definition 1). In fact, I am quite curious about the connection between Definition 1 and copyright analysis (such as in Section 2 of "On Provable Copyright Protection for Generative Models").
Please notice that, while we see the problem of credit attribution as **part** of the larger problem of copyright, we explicitly state that the paper **does not deal with the copyright question as a whole but only with a particular aspect.** Therefore, providing any copyright analysis is not within the scope of this paper, and we ask that the paper be judged by its merits and not by objectives that the paper doesn’t presume to tackle.
> To me, it is necessary to establish a solid mathematical connection between Definition 1 (or Definition 3) and existing widely used copyright notions to convince readers that the proposed notion is truly useful. If not, an empirical study of the proposed Definition 1 or 3 in real-world algorithms, such as how to achieve the proposed definition by modifying SGD (just like DP-SGD, which is a DP counterpart of SGD), is necessary.
We are also explicit that the notion of credit attribution that we consider is essentially **orthogonal** to previous work and existing copyright notions --- while their focus is on the question of *substantial similarity*, we focus on algorithms that are allowed to be influenced by previous work (and may even be substantially similar), but must attribute credit. In summary, these are two different tasks.
As to the analysis of SGD, in this paper we suggest a general framework for credit attribution, and as such, we intentionally do not focus on a specific algorithm or a specific use-case of the notion. While designing an SGD version for our algorithm may be an interesting future work, it is not within the scope of the current paper.
> The authors claimed that Definition 1 extends the semi-DP notion in Definition 2. However, this extension is not clearly stated in the current version. Specifically, I was confused about how to convert a special case of Definition 1 to Definition 2 (which part is public in Definition 1). Moreover, why is Definition 2 a stronger notion than Definition 1, as claimed by the authors?
Thanks, we will add a detailed proof of how a semi-DP mechanism can be turned into a CCA mechanism in the final version.
Roughly, every semi-DP mechanism $M$ with $k$ public datapoints defines a CCA mechanism $A$ as follows:
For a dataset $S$, the CCA mechanism $A$ outputs $(h, R)$, where $R$ is the first $k$ datapoints in $S$.
The function $h = M(S_{\leq k}, S_{>k})$ is obtained by applying $M$ using the first $k$ examples in $S$ as public data and the remaining examples as private datapoints. Please let us know if further details are needed.
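As a purely illustrative sketch of this reduction (the function names and the toy learner below are hypothetical, not from the paper), the wrapping can be written as:

```python
def make_cca_mechanism(semi_dp_mechanism, k):
    """Wrap a semi-DP mechanism M, which treats its first argument as the
    k public datapoints, into a CCA-style mechanism A that outputs a
    hypothesis h together with the cited set R (the first k datapoints)."""
    def cca_mechanism(dataset):
        public, private = dataset[:k], dataset[k:]
        h = semi_dp_mechanism(public, private)  # h = M(S_{<=k}, S_{>k})
        return h, public                        # output (h, R)
    return cca_mechanism
```

Note that the cited set $R$ here is chosen non-adaptively (always the first $k$ points), matching the construction described above.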
---
We hope that our response addresses your concerns. Please let us know if we can answer any more questions! | Summary: This paper addresses the challenge of credit attribution within the context of machine learning algorithms. It proposes new definitions that relax the stability of a subset of data points. The framework extends established notions of stability, such as Differential Privacy, differentially private learning with public data, and stable sample compression, within the PAC learning framework. The authors provide a characterization of learnability for algorithms adhering to these stability principles and suggest future research directions.
Strengths: 1. The paper introduces novel definitions of stability that allow weaker stability of the designed subset of the data points.
2. This framework extends notions of stability, including Differential Privacy, differentially private learning with public data, and stable sample compression.
Weaknesses: 1. Examples that satisfy Definition 1 could be further discussed. Could the authors clarify examples of (ε>0,δ)-counterfactual credit attributor (Definition 1) which are not (ε=0,δ=0)-CAA?
2. The conditional distribution in Definition 1 might make the definition challenging to verify. It would enhance the paper if the authors could provide case studies of (ε, δ)-CAA applied to classical problems with varying parameters ε,δ controlling the privacy-utility tradeoff.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. See weaknesses
2. Could the authors explain the difference or connection between Definition 3 and adaptive composition + post-processing in DP?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you so much for reading our paper, and for your comments.
With regards to addressing the weaknesses/questions:
> Examples that satisfy Definition 1 could be further discussed. Could the authors clarify examples of $(\varepsilon>0,\delta)$-counterfactual credit attributor (Definition 1) which are not ($\varepsilon=0,\delta=0$)-CAA?
Example 2.1 shows that any stable sample compression scheme satisfies Definition 1 with $\varepsilon=0, \delta=0$. In particular, Figure 1 shows that SVM satisfies $\varepsilon=0, \delta=0$. Furthermore, as we mention in the proof of Theorem 1, a slight variant of AdaBoost also satisfies this condition. This shows that even simple and well-known algorithms satisfy (a more restrictive version of) Definition 1.
In terms of algorithms satisfying Definition 1 with $\varepsilon > 0$: any DP algorithm satisfies this condition, with $R=\emptyset$. Any semi-DP algorithm satisfies this condition, with $R \neq \emptyset$, but chosen non-adaptively (i.e., $R$ does not depend on the input). It would be interesting to construct examples of mechanisms that choose $R \neq \emptyset$ adaptively, and also satisfy Definition 1 with $\varepsilon > 0$. Note that we leave open (line 153) the question of constructing a PAC learner satisfying Definition 1 with $|R|=k=O(1)$, and conceivably, the first thing to try here might be to consider both $\varepsilon > 0$ and $R$ chosen adaptively.
> The conditional distribution in Definition 1 might make the definition challenging to verify. It would enhance the paper if the authors could provide case studies of $(\varepsilon, \delta)$-CAA applied to classical problems with varying parameters $\varepsilon,\delta$ controlling the privacy-utility tradeoff.
This is true; however, this caveat also applies to DP and cryptographic security. It is not clear to us if there exists a verifiable definition in this context, but, as with DP, verifying Definition 1 will require an a priori proof by the algorithm designer.
> Could the authors explain the difference or connection between Definition 3 and adaptive composition + post-processing in DP?
Recall that the reconstruction function in Definition 3 is a semi-DP mechanism, and post-processing a semi-DP mechanism is also semi-DP. But if we compose a mechanism satisfying Definition 3, since the reconstruction function is a function of the $k$ compressed examples, privacy is foregone for these examples. For the points that do not get compressed, privacy decays as in advanced composition in DP.
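(For reference, the standard advanced composition bound alluded to here states that $t$ adaptive compositions of an $(\varepsilon, \delta)$-DP mechanism satisfy $(\varepsilon', t\delta + \delta')$-DP for any $\delta' > 0$, where

$$\varepsilon' = \varepsilon\sqrt{2t\ln(1/\delta')} + t\varepsilon(e^{\varepsilon} - 1).$$

)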
---
We hope our response addresses your concerns. Please let us know if we can answer any further questions!
---
Rebuttal Comment 1.1:
Comment: Thank you for your efforts in the rebuttal! I will maintain my original score. | Summary: This paper studies the problem of credit attribution in machine learning tasks. Motivated by the moral and legal need to appropriately credit input data points when they significantly influence the output of a learning or generative model, the authors develop a characterization of reasonable credit attribution algorithms. Similar to how in differential privacy an individual is considered protected if keeping or omitting their data in the input data set leads to at most a limited $(\varepsilon,\delta)$ bounded shift in max-divergence of the output distribution, this work considers an individual to be safely not credited if omitting their data would have lead to a bounded shift in this sense.
The authors link the notion of generating a not-too-long list of samples to be credited (since a trivial solution is to credit everyone) to the notion of stable sample compression schemes of size $k$, which allow identification of a subsequence of size $k$ for every input data set such that restricting the input to any input subset containing that subsequence leads to the same output. This is stronger than the notion of credit attribution they are considering, so they consider a relaxation using semi-differentially private algorithms, which only need to have a bounded shift in the output distribution when a sample is added or dropped from a `private' subset of the input data. This relaxation is called a DP Sample Compression Scheme.
The authors give an example showing that their definition is not vacuous and go on to state three theorems:
1. PAC learnable classes with VC dimension $d$ admit solutions with valid credit attribution and only $O(d\log n)$ many samples.
2. If a concept class is not DP learnable (in which case it would have been automatically a valid credit attribution algorithm without having to credit any point), then it must credit at least $k=\Omega(1/\alpha)$ many points.
3. The additional number of points one is allowed to pick by relaxing from random subsampling of $k$ points (and crediting the whole sample) to a valid credit attribution algorithm allows one to pick at most an $\exp(\epsilon)$ factor many more samples.
Strengths: 1. This paper considers an important problem and comes up with a very reasonable definition to address it. Basing their notion of indistinguishability on that which is used in DP has found some mainstream and legal acceptance as well as far as I know, so in principle this might turn out to be quite viable.
2. The directions of inquiry (comparison with random subsampling, sample complexity, checking for potential vacuity) are all good.
Weaknesses: 1. The notion of the privacy parameter $\varepsilon$ in standard DP has some interpretability in simple contexts like that of linear queries via lower bounds on the number of $\varepsilon$-private queries needed before reconstruction attacks become viable. The definition of credit attribution via DP necessitates some more thought into what a reasonable parameter choice looks like.
2. I feel strongly that the label 'DP Sample Compression Scheme' needs to be changed. To recall, in this paper's terms a DP Sample Compression Scheme is a mechanism $M: \mathcal{Z}^n \to \mathcal{C}$ such that $M(S) = \rho(S_{\kappa (S)},S_{\neg \kappa(S)})$, where $\kappa$ (the Compression function) is a $\varepsilon$-DP sampler from $S$, and $\rho$ (the Reconstruction function) is semi-DP (i.e. only DP with respect to its second component, the subset of points which were not sampled by $\kappa$). This is a good definition and makes sense, but I feel any label along the lines of DP <*> where <*> is a mechanism taking as input data sets suggests that it is in fact differentially private. This mechanism is clearly not DP (nor does it need to be for the purposes of this paper), because in particular any point sampled by $\kappa$ can be released in the clear (which is fine for the purposes of credit attribution). I understand that the subsampler is what is DP here, so maybe something along the lines of 'Sample Compression Scheme with DP compression or 'semi-DP sample compression scheme' might be a better descriptor, or whatever else the authors prefer.
One of the reasons I feel this is important is that we rely all the time on the post-processing properties of DP algorithms, and it is not unlikely that someone who did not read this definition properly mistakes this for a DP subroutine and uses it incorrectly as a DP blackbox in a larger DP algorithm. Any such instance would be that person's fault, but if we can reduce its likelihood/any sources of confusion that would be a good thing.
Technical Quality: 3
Clarity: 3
Questions for Authors: If you can address the points I have raised in the Weaknesses section that would suffice for most of the questions I would like to have answered. In particular the second point is a bit concerning to me, although maybe I have misunderstood the definition. If my understanding is correct and if you feel strongly that the label/name ought to stay as it is and you can point to other important instances which violate the principle I described I would be happy to be corrected.
Additional questions/notes:
1. In the introduction you describe DP using swap DP but then later on use add-drop DP when defining credit attribution - perhaps you could just stick to add-drop DP throughout?
2. I think there might be an interesting link between user-level DP and appropriately attributing the works of an artist/data source, as opposed to any particular artwork/data point. I'm just curious if you have given this much thought, I might have missed it in the paper if so.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: Yes, limitations have been adequately discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you so much for reading our paper, and for your comments. We are glad you liked our paper!
With regards to the points you bring up:
> The notion of the privacy parameter $\varepsilon$ in standard DP has some interpretability in simple contexts like that of linear queries via lower bounds on the number of $\varepsilon$-private queries needed before reconstruction attacks become viable. The definition of credit attribution via DP necessitates some more thought into what a reasonable parameter choice looks like.
This is a good point. We did not think much about the problem of credit attribution in terms of an indistinguishability test that the algorithm should pass under private queries; however, semantically mapping intuitions and interpretations from the now well-established theory on differential privacy to the setting of credit attribution seems like an excellent avenue for future research.
> I feel strongly that the label 'DP Sample Compression Scheme' needs to be changed. To recall, in this paper's terms a DP Sample Compression Scheme is a mechanism $M:\mathcal{Z}^n \to \mathcal{C}$ such that $\mathcal{M}(S)=\rho(S_{\kappa(S)},S_{\neg \kappa(S)})$ where $\kappa$ (the Compression function) is a $\varepsilon$-DP sampler from $S$, and $\rho$ (the Reconstruction function) is semi-DP (i.e. only DP with respect to its second component, the subset of points which were not sampled by $\kappa$). This is a good definition and makes sense, but I feel any label along the lines of DP <*> where <*> is a mechanism taking as input data sets suggests that it is in fact differentially private. This mechanism is clearly not DP (nor does it need to be for the purposes of this paper), because in particular any point sampled by $\kappa$ can be released in the clear (which is fine for the purposes of credit attribution). I understand that the subsampler is what is DP here, so maybe something along the lines of 'Sample Compression Scheme with DP compression or 'semi-DP sample compression scheme' might be a better descriptor, or whatever else the authors prefer. One of the reasons I feel this is important is that we rely all the time on the post-processing properties of DP algorithms, and it is not unlikely that someone who did not read this definition properly mistakes this for a DP subroutine and uses it incorrectly as a DP blackbox in a larger DP algorithm. Any such instance would be that person's fault, but if we can reduce its likelihood/any sources of confusion that would be a good thing.
This is a good point. We indeed mean that the compression function is DP but not further. We will consider the names Sample DP-Compression Scheme or Sample Compression Scheme with DP compression as suggested.
> In the introduction you describe DP using swap DP but then later on use add-drop DP when defining credit attribution - perhaps you could just stick to add-drop DP throughout?
Thanks, we will make this change.
> I think there might be an interesting link between user-level DP and appropriately attributing the works of an artist/data source, as opposed to any particular artwork/data point. I'm just curious if you have given this much thought, I might have missed it in the paper if so.
This is a great idea! If we understand your suggestion correctly, the principle of counterfactual attribution could also be stated at a per-user level, instead of a per-artwork level i.e., if in creating a work $W$, a mechanism does not cite/acknowledge any of the works $W_A$ by an author $A$, then the mechanism should be able to create $W$ as if it had not seen $W_A$ at all. This is a totally reasonable granularity to instantiate the definition, and constitutes an interesting formulation for future study.
---
Please let us know if we can help answer any more questions!
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you for the rebuttal, I think all of my questions and concerns have been addressed for now. My only main concern was about the term 'DP Sample Compression Scheme', and I'm sure that any term that you would like to use (not necessarily the ones I mentioned) should work well as long as it doesn't imply that the algorithm as a whole is DP. | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for taking the time to read the manuscript. We would like to reiterate that in this work, we present a first candidate notion of learning with credit attribution as well as provide a first characterization of PAC learnability under credit attribution. **This should be regarded as the scope of the current work.**
The reviewers have suggested several interesting and important open problems as well as future directions of research, all of which we find to be of great interest. We do want to emphasize, though, that while these suggestions highlight the potential of further study of credit attribution, our work is focused on presenting a model for credit attribution and initiating basic research on this model. We ask the reviewers to take this into account, and to focus their assessment of our work as much as possible on the scope of this work and not on future study.
Robust Reinforcement Learning from Corrupted Human Feedback | Accept (poster) | Summary: The paper presents a robust reward learning approach by formulating it as an $\ell_1$-regularized maximum likelihood estimation problem. And it also introduces an alternating optimization algorithm, which introduces minimal computational overhead when compared to the standard RLHF approach.
Strengths: * The paper is well-written and contains two types of experiments: robotic control and text generation.
* The method is straightforward, and the motivation is reasonable.
Weaknesses: * The paper lacks an experiment for formal RLHF, such as Proximal Policy Optimization (PPO), in the context of text generation. Including such an experiment could provide a more comprehensive evaluation of the proposed method.
* The use of $\delta$ as a regularization method may appear somewhat trivial for this approach. Conducting additional experiments on HH-RLHF or Ultrafeedback to provide further evidence and support for the effectiveness of this method would be beneficial.
Technical Quality: 3
Clarity: 3
Questions for Authors: * It is unclear how far the Perturb Percentages can be extended before the model completely collapses. Further investigation and experimentation are needed to determine the limits and thresholds that lead to model failure.
* The difficulty level of optimizing $\delta$ is not specified. It would be helpful to elaborate on the challenges associated with optimizing this parameter and any potential limitations encountered during the optimization process.
* In order to gain a comprehensive understanding of the effects and limitations of the proposed method, I hope the author can conduct experiments specifically focused on PPO.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The context lacks information regarding the limitations of the paper or the proposed method. Besides, the analysis on DPO does not fully reflect the true impact of RM's signal on PPO, such as overrated scores. Hence, to adequately assess the effect of the real RLHF's RM on PPO, it is essential to conduct experiments specifically targeting PPO. The current approach lacks experimentation on PPO, which limits our understanding of how the RM's signal truly impacts this particular method
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank you for your constructive comments! In the following, your comments are first started and then followed by our point-by-point responses.
**W1, Q3, L1: The paper lacks an experiment for formal RLHF, such as Proximal Policy Optimization (PPO), in the context of text generation. Including such an experiment could provide a more comprehensive evaluation of the proposed method.**
We agree that including experiments with PPO on NLP tasks can help fully comprehending the impact of our method. Please refer to point 4 in the global rebuttal.
**W2: The use of $\delta$ as a regularization method may appear somewhat trivial for this approach. Conducting additional experiments on HH-RLHF or Ultrafeedback to provide further evidence and support for the effectiveness of this method would be beneficial.**
Thanks for pointing this out! We conduct experiment on the Ultrafeedback dataset to further demonstrate the effectiveness of the proposed method. Results and details are included in point 1 in the global rebuttal.
**Q1: It is unclear how far the Perturb Percentages can be extended before the model completely collapses. Further investigation and experimentation are needed to determine the limits and thresholds that lead to model failure.**
Response is included in point 3 in the global rebuttal.
**Q2: The difficulty level of optimizing $\delta$ is not specified. It would be helpful to elaborate on the challenges associated with optimizing this parameter and any potential limitations encountered during the optimization process.**
We found that optimizing $\delta_i$ is not difficult, as $\delta_i$ has a closed-form solution in Eq. (3.4) for each iteration. Therefore, we can efficiently find the optimal $\delta_i$ without needing any optimization algorithms such as the proximal gradient descent for solving Eq. (3.3).
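As background on why such a closed form is typical, $\ell_1$-regularized scalar subproblems of the form $\min_{\delta} \frac{1}{2}(\delta - z)^2 + \lambda|\delta|$ are solved exactly by the soft-thresholding operator. The sketch below illustrates this operator only; it is not necessarily the paper's exact Eq. (3.4):

```python
def soft_threshold(z, lam):
    """Closed-form minimizer of 0.5 * (d - z)**2 + lam * abs(d) over d.

    Shrinks z toward zero by lam, and sets it exactly to zero when
    |z| <= lam -- this is what makes the l1 penalty produce sparse deltas.
    """
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0
```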
---
Rebuttal Comment 1.1:
Comment: Thank you for the authors' comprehensive response. I would like to maintain the current score and acknowledge the positive aspects. Thanks!
---
Reply to Comment 1.1.1:
Title: Thank you for your review
Comment: Dear Reviewer ugMr,
We are grateful for your review and your positive evaluation of our work. Your suggestions are invaluable, and we will try to include the recommended experiments in the paper. | Summary: The paper proposes a robust RLHF method which models the potentially corrupted preference label as sparse outliers. They prove that their method can consistently identify outliers in addition to learning the underlying reward functions, under proper conditions. The results on both robotic control and natural language generation tasks show that the proposed approach improves robustness of reward learning phase within RLHF framework.
Strengths: This article raises an important issue in RLHF, that human annotators may give incorrect or inconsistent preference labels, and then introduces a simple but useful extension to the existing RLHF framework — including a perturbation factor. The authors theoretically prove the statistical rate of convergence in fitting the reward functions as well as the outliers, under particular conditions. They conduct thorough experiments over different noise modes of human preferences in robotic control, and further extend their approach to DPO in real-world language tasks to support their claims.
Weaknesses: - Error bars are expected in Table 1.
- The Anthropic Helpful and Harmless dialogue preferences dataset has a high disagreement rate, so that flipping a random portion of the training labels may not increase noise as expected, which could probably cause the unexpected results. The authors should probably filter the dataset or try a cleaner dataset to show the results on perturbation.
Technical Quality: 3
Clarity: 3
Questions for Authors: - What’s the relationship and difference between Win Rate and Winning Score in your Table 1?
- Could you try to draw a fair comparison between Data Filtering and $R^3M$-DPO? Although your Data Filtering baseline doubles the training cost, it still indicates that filtering the dataset could be as effective as modeling the outliers. One thing you could provide is the performance of $R^3M$-DPO over the filtered dataset.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper does not discuss the limitation of the work. Please discuss the potential issue of applying your approach to real-world RLHF tasks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1: Error bars are expected in Table 1.**
Please refer to point 2 in the global rebuttal.
**W2: The Anthropic Helpful and Harmless dialogue preferences dataset has a high disagreement rate, so that flipping a random portion of the training labels may not increase noise as expected, which could probably cause the unexpected results. The authors should probably filter the dataset or try a cleaner dataset to show the results on perturbation.**
Since accurately filtering the dataset is challenging and there is no consensus on the best approach, we conducted the experiment on a cleaner, more modern dataset, Ultrafeedback. More details and analysis are included in Point 1 of the global rebuttal.
**Q1: What’s the relationship and difference between Win Rate and Winning Score in your Table 1?**
Win Rate is defined as $\dfrac{\text{\\# Win}}{\text{\\# Total comparisons}}$ and Winning Score is defined as $\dfrac{\text{\\# Win - \\# Lose}}{\text{\\# Total comparisons}} + 1$. This means Winning Score additionally considers the "tie" cases compared to Win Rate. For example, two models with the same Win Rate can have different Winning Scores: the model with more tie cases would have a higher Winning Score. Therefore, we can view Win Rate as the primary criterion for evaluating models, and Winning Score as a secondary criterion that also accounts for tie cases.
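As an illustrative sketch (not from the paper), the two metrics can be computed as follows; the comparison counts are hypothetical:

```python
def win_rate(wins, losses, ties):
    # Win Rate: fraction of comparisons won; ties and losses count equally against it.
    total = wins + losses + ties
    return wins / total

def winning_score(wins, losses, ties):
    # Winning Score: (wins - losses) / total + 1, so ties raise it relative to losses.
    total = wins + losses + ties
    return (wins - losses) / total + 1

# Same Win Rate, but the model with more ties gets the higher Winning Score:
print(win_rate(50, 30, 20), winning_score(50, 30, 20))  # 0.5 1.2
print(win_rate(50, 50, 0), winning_score(50, 50, 0))    # 0.5 1.0
```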
**Q2: Could you try to draw a fair comparison between Data Filtering and $R^3M$-DPO? Although your Data Filtering baseline doubles the training cost, it still indicates that filtering the dataset could be as effective as modeling the outliers. One thing you could provide is the performance of $R^3M$-DPO over the filtered dataset.**
We believe that directly comparing Data Filtering with $R^3M$-DPO is fair and valid for noisy preference datasets. This is because our method improves the performance of standard DPO on noisy datasets, rather than on clean ones. With a noisy preference dataset, data filtering involves filtering the data first and then applying standard DPO to the filtered data. In contrast, our method does not require data filtering, as optimizing $\delta$ inherently performs a similar function.
**L1: The paper does not discuss the limitation of the work. Please discuss the potential issue of applying your approach to real-world RLHF tasks.**
Thank you for the suggestion! We provide our discussion as follows and will include it in our next version: Our approach does not introduce significant limitations compared to previous work. However, it does introduce an additional parameter, $\lambda$, in the regularization term, which may increase the tuning efforts in practical use. Developing more adaptive methods or conducting analysis on selecting this hyperparameter could be considered as future work.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response, and I appreciate the newly added results in global rebuttal. I would maintain the current score. Thanks!
---
Reply to Comment 1.1.1:
Title: Thank you for your review
Comment: Dear Reviewer XAMr,
Thank you for your thorough review and for recommending our work for acceptance! Your insightful comments are invaluable in helping us further improve our paper. | Summary: The paper studies the problem of robust reinforcement learning when a small fraction of the human feedback preference data can be corrupted by adversary.
Strengths: The paper formulates the robust RLHF problem and provides a straightforward and easy-to-use $\ell_1$ regularization algorithm for learning the reward signal. It provides experimental justification for the proposed algorithms.
Weaknesses: I have a few questions regarding the formulation and experimental results. Please see next section.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. In practical dataset, how do we justify the correctness of sparse perturbation assumption rather than stochastic noise in the original Bradley-Terry model?
2. What is the criterion for tuning the parameter $\lambda$ in the regularized loss?
3. The TL;DR and HH dataset might not lead to significant improvement over preferences due to both responses being too much worse than current llama2 generation. Is it possible to run on modern preference dataset like HelpSteer2, Nectar or UltraFeedback, and report the Arena-Hard score as the evaluation?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback! We provide a detailed response to your questions as follows:
**Q1: In practical dataset, how do we justify the correctness of sparse perturbation assumption rather than stochastic noise in the original Bradley-Terry model?**
As discussed in Section 6 and Remark 3.1, the sparse perturbation assumption in the preference data is a more challenging setting compared to the stochastic noise assumption. Therefore, our method can also improve robustness against other types of noise. Our experimental results illustrate that our method performs better across a wide range of perturbation types, including stochastic noise. While it is challenging to accurately verify assumptions in practical datasets, our method demonstrates better results due to its robustness.
**Q2: What is the criterion for tuning the parameter $\lambda$ in the regularized loss?**
We generally tuned $\lambda$ by performing a grid search within the range (0, 1). For the tuning metric, we used different criteria depending on the task: for the robotic control task, we optimized $\lambda$ based on the true episode return, while for the natural language generation task, we optimized $\lambda$ based on the win rate against a supervised fine-tuned model.
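A minimal sketch of the grid search described above, under stated assumptions: `train_and_evaluate` is a hypothetical stand-in for training with a given $\lambda$ and returning the tuning metric (episode return for control, win rate against the SFT model for language tasks), and the grid values are purely illustrative:

```python
def grid_search_lambda(train_and_evaluate, grid):
    # Return the lambda in `grid` that maximizes the tuning metric.
    best_lam, best_score = None, float("-inf")
    for lam in grid:
        score = train_and_evaluate(lam)
        if score > best_score:
            best_lam, best_score = lam, score
    return best_lam

# Illustrative candidate values within (0, 1); the actual grid is not specified here.
candidates = [0.01, 0.05, 0.1, 0.3, 0.5, 0.9]
```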
**Q3: The TL;DR and HH dataset might not lead to significant improvement over preferences due to both responses being too much worse than current llama2 generation. Is it possible to run on modern preference dataset like HelpSteer2, Nectar or UltraFeedback, and report the Arena-Hard score as the evaluation?**
Response included in point 1 in the global rebuttal.
---
Rebuttal Comment 1.1:
Title: Thanks
Comment: Thank you for your response. I appreciate the author’s efforts in running new experiments and most of my concerns are resolved. I have increased my score accordingly.
One tiny suggestion is that llama2 might be too weak for arena hard. It might be better to start with at least llama 3 / 3.1 / Gemma 2. But I understand that during the short time window of rebuttal it can be very hard to finish the experiments. I’m excited to see the results in the future version of the paper!
---
Reply to Comment 1.1.1:
Title: Thank you for the discussion
Comment: Dear Reviewer m2Ct,
Thank you for the engaging discussion and the willingness to raise your score! We will be sure to include the contents of the rebuttal in the paper, and we will try using our method with llama 3.1 and arena hard as well. | Summary: This paper proposes a framework for robustifing learning from human preferences. It models noise and bias in human annotation of the dataset as an offset added to the true margin between the preferred and disprefered examples. It further utilizes L1 regularization to induce sparsity in the offset. Empirically, the method is benchmarked on robotics and natural language through its extension to Direct Preference Optimization (DPO).
Strengths: - The paper is generally well-written
- The method is well-motivated
- Empirical results are generally comprehensive, across two domains, and a handful of baselines are considered.
Weaknesses: - The value of the $\delta$ offset could also be interpreted as by how much the positive completion is preferred over the rejected one. So in a way, the method is modeling an explicit difference in quality of the preference pairs. However, this offset may be orthogonal to label noise, e.g., examples that have a small offset (positive and negative are close to one another) may be more difficult to annotate as there is no striking difference between the two completions. However, the statement on line 113
> ... the annotator is very likely to give an incorrect preference

only takes into account the opposite extreme.
Prior to this work, "Direct Preference Optimization with an Offset" explores DPO with an offset to model the difference in degree of preference (ODPO).
- Given that the canonical framework for RLHF in LLMs uses REINFORCE or PPO with a learned reward model (through logistic regression), results in that setting are necessary given the framing / positioning of the paper, where the application to DPO is a pleasant bonus. In that setting, the test-set classification accuracy of the reward model would be of interest in the different noise settings.
Technical Quality: 3
Clarity: 3
Questions for Authors: - I am a bit concerned about the results in Figure 3: from reading off the graph, the increase / decrease in Winning Score when changing the perturb % is about the same for $R^3M$ and DPO for dialogue? It would be clearer if the authors simply plot the delta in the score, as that's the metric underlying raw performance that we care about in this context.
- More ablations along the extremes would help ground the experimental results further and increase the quality of the work. e.g. for $\lambda$, higher perturb % (100% perturbation would give a sense of how big the drops in performance are relative to their practical worst-case)
- Reporting the ranking accuracy when training DPO (i.e. \% of time $r_w - r_l > 0$) with and without the robustification scheme would further strengthen the results.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No limitations have been provided.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comprehensive review and valuable feedback. We provide a detailed response to your comments as follows:
**W1: The value of the $\delta$ offset could also be interpreted as by how much is the positive completion preferred over the rejected. This offset may be orthogonal to label-noise. Prior to this work, "Direct Preference Optimization with an Offset" explores DPO with an offset to model the difference in degree of preference (ODPO).**
Thank you for your discussion and suggestions regarding related literature. We would like to clarify that our parameter, $\delta$, is fundamentally different from the margin parameter, $\Delta$, used in ODPO [1]. Specifically:
In our method, $\delta$ is jointly optimized with the reward model. By doing so, larger $\delta$ values are learned for corrupted preference samples to achieve smaller reward differences, compensating for the perturbations.
In contrast, $\Delta$ in ODPO is not learnable but is a prefixed value proportional to the score difference between winning and losing responses. Optimizing the ODPO loss with larger $\Delta$ (i.e., pairs with strong preference strength) results in a larger learned difference, which is opposite to the effect of our $\delta$.
When corrupted preferences (where the ranking of scores is flipped) are present, ODPO would exacerbate the label noise. Additionally, our $\delta$ parameter is sparse due to the use of an $\ell_1$ regularizer.
We further illustrate the relationship between $\delta$ and the learned reward difference in the Ant task by Figure 1 in the attached file in global rebuttal. We categorized the learned reward differences into five percentile bins and then binned $\delta$ accordingly to compute the average for each bin. From the figure, it is evident that larger $\delta$ values correspond to smaller learned reward differences.
**W2: Given that the canonical framework for RLHF in LLMs uses REINFORCE or PPO with a learned reward model (through logistic regression), results in that setting are necessary given the framing / positioning of the paper.**
Response included in point 4 in the global rebuttal.
**Q1: I am a bit concerned about the results in Figure 3, from reading off the graph, the increase / decrease in Winning score when changing the perturb \% is about the same for $R^3M$ and DPO for dialogue? It would be clearer if the authors simply plot the delta in the score as that's the metric underlying raw performance, that we care about in this context.**
Thank you for your suggestion. Since both the HH dialogue and TL;DR summarization datasets contain over 20\% noisy labels [2] and the response quality in these datasets is considered low, evaluating our methods solely on these datasets could lead to unexpected results, as you pointed out.
To address this, we trained our method on the high-quality UltraFeedback dataset [3] with different levels of random perturbation. Due to resource limitations, we considered only 10\% and 20\% perturbations. The results are shown in Table 6 in the attached file. We evaluated the standard DPO and $R^3M$-DPO by their win rates against HH SFT. We define $\Delta$ as the win rate difference between two methods and present the value in the table. We will include the plot of delta in our next version.
**Q2: More ablations along the extremes would help ground the experimental results further and increase the quality of the work. e.g. for $\lambda$, higher perturb \% (100\% perturbation would give a sense of how big the drops in performance are relative to their practical worst-case).**
Response included in point 3 in the global rebuttal.
**Q3: Reporting the ranking accuracy when training DPO (i.e. \% of time $r_w-r_l>0$) with and without the robustification scheme would further strengthen the results.**
Thank you for your suggestion. Including the ranking accuracy on the test set will provide additional support for our results. Table 5 in the attached file shows that training DPO with the robustification scheme slightly improves the ranking accuracy compared to training without it. We would like to note that the accuracy cannot completely reflect the reward's quality for policy optimization, since the accuracy only evaluates the sign of the reward difference, not its scale, which could be important during policy optimization. Additionally, the test set may also be contaminated, and hence the accuracy may not be reliable.
### Reference
[1] Amini, Afra, Tim Vieira, and Ryan Cotterell. "Direct preference optimization with an offset." arXiv preprint arXiv:2402.10571 (2024).
[2] Gao, Yang, Dana Alon, and Donald Metzler. "Impact of preference noise on the alignment performance of generative language models." arXiv preprint arXiv:2404.09824 (2024).
[3] Cui, Ganqu, Lifan Yuan, Ning Ding, Guanming Yao, Wei Zhu, Yuan Ni, Guotong Xie, Zhiyuan Liu, and Maosong Sun. "Ultrafeedback: Boosting language models with high-quality feedback." arXiv preprint arXiv:2310.01377 (2023).
---
Rebuttal Comment 1.1:
Title: Any Further Questions
Comment: Dear Reviewer hEuH,
Thank you again for the detailed and insightful review. As the discussion period is ending, we are wondering if you have any further questions or comments we can address. If our rebuttal has addressed your concerns, we would sincerely appreciate it if you would consider raising your score.
---
Rebuttal 2:
Title: Please respond to author's rebuttal
Comment: Please respond to the authors' rebuttal to acknowledge that you have read it, and let them know if they've successfully answered your questions and you are willing to change your score, or if you need additional clarification. The discussion period is ending tomorrow, so to give authors sufficient time to respond please try to respond to their rebuttal today.
Currently, while reviewers unanimously agree on accepting this paper (scores are 5,5,6,6), the scores are also low, which means it may not reach the bar for acceptance unless some reviewers increase their score. | Rebuttal 1:
Rebuttal: We would like to thank all the reviewers for the valuable feedback! Before we answer to each of the reviewers individually, we address common concerns and present our newly added results below:
>**1. Is it possible to run on modern preference dataset like HelpSteer2, Nectar or UltraFeedback, and report the Arena-Hard score as the evaluation?**
We agree that using modern preference datasets, such as UltraFeedback, would enhance the credibility and validity of our paper. We have trained the model on UltraFeedback and compared its responses against HH SFT on HH dataset and against GPT-4-0314 on Arena-Hard dataset. Due to the limited time during the rebuttal period, we used HH SFT (with Llama2 7b as the backbone model) for our instruction-tuned model. The results are shown in Table 6 in the attached file.
As shown, UltraFeedback improves the performance of both standard DPO and $R^3M$-DPO, with $R^3M$-DPO maintaining its lead over standard DPO on HH dataset. This finding still holds true on Arena-Hard dataset as well. Here, we note that Llama2 7b-chat model has an Arena-Hard score of 4.6. The lower performance compared to Llama2 7b-chat model would be due to using a weaker instruction-tuned model (HH SFT) and the fact that Llama2 7b-chat was trained on a larger and more diverse set of preference data than UltraFeedback. We will revise the manuscript to include these new experimental results.
>**2. Error bars are expected in Table 1.**
Due to computational restrictions, we used a single run for the experiments in Table 1. We agree with your suggestion and have now performed the same experiments with 3 different seeds. The updated results, including the error, are shown in Table 7 in the attached file.
>**3. More ablations along the extremes would help ground the experimental results further and increase the quality of the work.**
We add additional experimental results with extreme noise parameters for the three different noise models on the HalfCheetah task. For the stochastic and myopic noise models, we control a noise-specific parameter that indirectly affects the perturbation rate, so controlling the perturbation rate itself to the extreme would be challenging for these cases. The results are shown in Tables 2, 3, and 4 of the attached file, corresponding to the three different noise models.
For stochastic and irrational noise settings, we observe that model training is completely disrupted for $\tau =100.0$ and $p = 2/3$. However, in the myopic setting, the performance drop is significantly less severe.
>**4. The paper lacks an experiment for formal RLHF, such as Proximal Policy Optimization (PPO), in the context of text generation. Including such an experiment could provide a more comprehensive evaluation of the proposed method.**
Thanks for pointing out the absence of an RLHF baseline in the natural language generation task. We understand the importance of experimenting with PPO instead of DPO to fully comprehend the impact of our method on the reward model's signal. Due to resource constraints, we prioritized experiments requiring PPO for robotic control tasks. Conducting PPO experiments for natural language generation tasks presents significant computational challenges, including the need to store and manage both reward and critic models (which are LLMs). PPO also involves more training steps and extensive hyperparameter tuning, making it difficult to implement within a resource-limited environment. Therefore, we opted for DPO as an alternative in our submission.
Due to the limited resources and the need to run other experiments, we are currently unable to provide PPO results for natural language generation tasks. Nonetheless, we will try our best to update the results before the discussion deadline.
Pdf: /pdf/67cbac560435725ee9a90fc0a86567cbdf74a92e.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Improving Deep Reinforcement Learning by Reducing the Chain Effect of Value and Policy Churn | Accept (poster) | Summary: This work studies a phenomenon called churn, which refers to that the outputs of a network after updates could change to unexpected values for input data not included in the training batch.
Specifically, it studies the value churn, the policy churn, and the interplay between them in deep reinforcement learning (DRL) setting.
Then a simple regularization method is proposed to reduce the value and policy churn. Empirical results demonstrate the effectiveness of the proposed solution across many DRL settings, such as value-based methods, policy gradient methods, and offline RL.
Strengths: - Except some minor issues, the paper is generally well-written and easy to follow.
- The proposed method is simple and effective, verified in various RL settings.
- Besides improving the final performance, direct evidence is also provided to show that the proposed method can indeed reduce churn.
Weaknesses: ## Originality
Although this work studies a relatively new concept (churn), the investigations of the root causes of this phenomenon, the proposed method, and the idea behind are not novel.
- The churn phenomenon is strongly related to interference and generalization of neural networks. And using NTK to measure the interference and generalization has already been proposed by previous works for DRL, such as [1-4]. These works should be included in the related works and discussed. Outside of DRL, similar phenomena are also studied in the supervised learning setting, such as [5-7].
- I like the simplicity and effectiveness of the proposed algorithm (CHAIN), but it is really similar to MeDQN(R) [8], although the latter work focuses on reducing forgetting instead. Unfortunately, this greatly damages the novelty of this work.
## Quality
The empirical and theoretical results of this work can be further improved.
- Equation 1 is inappropriate as the definition of churn. An absolute operation or square operation should be applied; otherwise, positive and negative values may cancel out. The issue exists for Equation 4 as well. This is a serious issue since changing Equation 1 would also result in changing the latter derivations in the paper. For example, Equation 5 may no longer hold.
- In Equation 3, although using $B_{train} = \\{s,a\\}$ and $B_{ref} = \\{\bar{s}, \bar{a}\\}$ helps with notation clarity, it may lead to misleading derivations. For example, the cancel-out effect is no longer revealed (see the previous point about Equation 1). Given this, I would suggest considering the general case of using more samples in $B_{train}$ and $B_{ref}$.
- All experiments use only 6 random seeds which are not enough. I would suggest at least 10 seeds.
- The theoretical results are interesting and useful to help understand and support the intuitions behind the problem. However, I don't see how they could help with the proposed algorithm. In other words, the connection between theoretical derivations and the proposed method is too weak.
## Clarity
There are some minor issues.
- What is the difference between $Q\_{\theta}$ and $\tilde{Q}\_{\theta}$ ($\pi\_{\phi}$ and $\tilde{\pi}\_{\phi}$)? I understand that $Q_{\theta}$ is an approximation of $q^{\pi}$ (Section 2.2) where $\theta$ are the parameters of the Q-network. But what is $\tilde{Q}\_{\theta}$ in Section 3.1 and Figure 1 exactly? Isn't it also an approximation of $q^{\pi}$? In fact, I would suggest to remove $\tilde{Q}\_{\theta}$ and $\tilde{\pi}\_{\phi}$ to simplify notations, as they seem to be redundant (but I might be wrong).
- Missing one right parenthesis in the definition of $\delta(s,a)$ in Line 110.
- Typo in Line 322: policy churn reduction (VCR) --> value churn reduction (VCR).
1. Liu, Vincent, et al. "Towards a practical measure of interference for reinforcement learning." arXiv preprint arXiv:2007.03807 (2020).
2. Liu, Vincent, et al. "Measuring and mitigating interference in reinforcement learning." Conference on Lifelong Learning Agents. PMLR, 2023.
3. Joshua Achiam, Ethan Knight, and Pieter Abbeel. Towards characterizing divergence in deep Q-learning. arXiv:1903.08894, 2019.
4. Emmanuel Bengio, Joelle Pineau, and Doina Precup. Interference and generalization in temporal difference learning. arXiv preprint arXiv:2003.06350, 2020.
5. Fort, Stanislav, et al. "Stiffness: A new perspective on generalization in neural networks." arXiv preprint arXiv:1901.09491 (2019).
6. He, Hangfeng, and Weijie Su. "The Local Elasticity of Neural Networks." International Conference on Learning Representations.
7. Lan, Qingfeng, and A. Rupam Mahmood. "Elephant neural networks: Born to be a continual learner." arXiv preprint arXiv:2310.01365 (2023).
8. Lan, Qingfeng, et al. "Memory-efficient Reinforcement Learning with Value-based Knowledge Consolidation." Transactions on Machine Learning Research.
Technical Quality: 2
Clarity: 3
Questions for Authors: How does policy/value churn relate to catastrophic forgetting? For example, Figure 2 in the MeDQN paper [8] shows that a change of greedy action could result in forgetting. Will reducing forgetting reduce churn? Or vice versa?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Yes, the limitations are discussed in the work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s careful review and the recognition of our experimental work. Our additional results in the one-page pdf further demonstrate the effectiveness of CHAIN in **ten more DMC tasks** and **improving the learning when larger networks are used**.
The main concerns focus on the relationship between our work and the previous works mentioned by the reviewer on catastrophic forgetting and interference. Our response aims to systematically clarify these aspects.
> Q1: CHAIN v.s. MeDQN(R) [8]
CHAIN and MeDQN(R) are **different on two key points**.
MeDQN(R) was created to improve the DQN algorithm only when there is a target network available to use as a fixed point to reduce forgetting. **CHAIN does not need this fixed point** and CHAIN applies to both value-based and policy-based RL settings in a plug-in manner.
The second point is that **CHAIN does not aim to reduce forgetting or stop all types of churn**. Some churn is beneficial and we don’t want to over-constrain the algorithm otherwise we may lose the plasticity of the network. We show the importance of this point in our additional experiments in Table 9 of the one-page pdf, where the results show how **CHAIN improves the training of larger networks**.
Moreover, in the next question, we discuss the distinction between CHAIN and MeDQN(R) regarding their effects in reducing churn and forgetting.
> Q2: “How does policy/value churn relate to catastrophic forgetting? For example, Figure 2 in the MeDQN paper [8] shows that a change of greedy action could result in forgetting. Will reducing forgetting reduce churn? Or vice versa?”
Churn is a natural behavior of neural networks. It accompanies each training and occurs instantly. To some extent, we consider that **catastrophic forgetting can be viewed as a consequence of the accumulation of churn**.
Therefore, **reducing churn also helps to reduce forgetting** as it suppresses the accumulation. However, **reducing forgetting does not necessarily reduce churn** and **could even increase churn**. It depends on the method considered.
Concretely, for example, MeDQN [8] reduces forgetting by preventing DQN’s value prediction from deviating from a previous copy (in practice, the target network). Note that there is a **time gap** between the current Q-network and the target network. Therefore, MeDQN does not reduce the churn (which occurs instantly) and could **incur extra churn when rolling back the network outputs to the target network**.
> Q3: The relationship between churn and interference
For the earliest NTK-related paper in DRL [3] mentioned by the reviewer, we **cited it in Line 159 of our submission** (i.e., the first time we mentioned NTK in our paper), and our analysis with NTK expressions is inspired by it. Also following [3], we found [1,2,4] studied the interference as the change in value approximation error with different forms (e.g., squared TD/TD($\lambda$) error), and they mainly focused on value-based RL (e.g., DQN).
To establish a possible view of the relationship between churn and interference: churn is the change in the network output, while interference is the change in the objective function of the network. Thus, we agree that the studies on churn and interference are essentially closely related.
**We emphasize** that we also **study the churn in the policy network** (thus going beyond value-based RL). Moreover, we present **the interplay** between the value churn and the policy churn and the dynamics under **the chain effect**. This is **not studied in previous work** to the best of our knowledge.
We appreciate the reviewer also noted [5-7]. We included them in our revision to strengthen our related work section.
> Q4: The definition of churn in Equation 1 (an absolute operation or square operation should be applied) and amendment of the derivations
We appreciate the reviewer for pointing out this discrepancy in our derivations. After carefully re-checking the derivation, we found this issue is amendable with several modifications (as shown below), and **the insights given by our derivations do not change**.
**The amendment has been done as follows** (please **refer to the official comment attached for a complete description with math**):
- First, we add the (signed) definitions of churn on a single data point, denoted by: $c_{Q}(\theta, \theta^{\prime}, \bar s, \bar a), c_{\pi}(\phi, \phi^{\prime}, \bar s)$.
- Next, we re-write our Eq.1 as the mean of the (element-wise) absolute churn of each data point, i.e., with $| c_{Q}|, |c_{\pi}|$.
- We then apply the same logic and amend the definition in Equation 4 by defining $d_{\nabla_a}^Q, d_{Q}^{\pi}$ and then using them to re-write $\mathcal{D}\_{\nabla_a}^Q, \mathcal{D}\_{Q}^{\pi}$ with absolute operations.
- With the absolute operation, the cancel-out issue no longer arises. This also makes it rational to use $B_{\text{train}}=\\{s, a\\}, B_{\text{ref}} = \\{\bar s, \bar a\\}$ for simplification in our NTK discussions. We replace $\mathcal{C}\_{Q}, \mathcal{C}\_{\pi}, \mathcal{D}\_{\nabla_a}^Q, \mathcal{D}\_{Q}^{\pi}$ in Equations 3 and 5 with $c_{Q}, c_{\pi}, d_{\nabla_a}^Q, d_{Q}^{\pi}$.
- This logic also applies to the analysis in Equation 6 and, further, to the analysis of the upper bound.
> Q5: The connection between theoretical derivations and the proposed method
The main message of our formal analysis is the chain effect of churn in Figure 2. It directly motivates us to reduce the value or policy churn to suppress the chain effect. Then the concrete consequences presented in Section 4.1 guide us to the specific choices in different problem settings.
> Q6: The difference between $Q_{\theta}$ and $\tilde{Q}\_{\theta}$ ( $\pi_{\phi}$ and $\tilde{\pi}\_{\phi}$) in Section 3.1 and Figure 1
Please kindly refer to **Additional Response** in the official comment attached.
> Q7: Number of random seeds
Please kindly refer to **Additional Response** in the official comment attached.
---
Rebuttal 2:
Title: (Continued for Q4) A complete description of the amendment on the churn definition and analysis
Comment: After carefully re-checking the derivation, we found this issue is amendable with several modifications (as shown below), and the **insights given by our derivations do not change**.
As we use a general distance metric $d$ (which should be non-negative) at the beginning of Section 3.2 for demonstration, we meant to use non-negative distance metrics in our specific cases as well, but we missed this.
**The amendment has been done as follows**:
- First, we add the (signed) definitions of churn on a single data point as: $c_{Q}(\theta, \theta^{\prime}, \bar s, \bar a) = Q_{\theta^{\prime}}(\bar s, \bar a) - Q_{\theta}(\bar s, \bar a), \ c_{\pi}(\phi, \phi^{\prime}, \bar s) = \pi_{\phi^{\prime}}(\bar s) - \pi_{\phi}(\bar s)$.
- Next, we re-write our Eq.1 as the mean of the (element-wise) absolute churn of each data point: $\mathcal{C}\_{Q}(\theta,\theta^{\prime}, B_{\text{ref}}) = \frac{1}{|B_{\text{ref}}|}\sum_{\bar s, \bar a \in B_{\text{ref}}} |c_Q(\theta,\theta^{\prime}, \bar s, \bar a)|, \mathcal{C}\_{\pi}(\phi,\phi^{\prime}, B_{\text{ref}}) = \frac{1}{|B_{\text{ref}}|}\sum_{\bar s \in B_{\text{ref}}} |c_{\pi}(\phi,\phi^{\prime}, \bar s)|$.
- We then apply the same logic and amend the definition in Equation 4 by defining the (signed) deviations on a single data point $d_{\nabla_a}^Q, d_{Q}^{\pi}$ and then using them to re-write $\mathcal{D}\_{\nabla_a}^Q, \mathcal{D}\_{Q}^{\pi}$ with absolute operations.
- With the absolute operation, the cancel-out issue no longer arises. This also makes it rational to use $B_{\text{train}}=\\{s, a\\}, B_{\text{ref}} = \\{\bar s, \bar a\\}$ for simplification in our NTK discussions. We replace $\mathcal{C}\_{Q}, \mathcal{C}\_{\pi}, \mathcal{D}\_{\nabla_a}^Q, \mathcal{D}\_{Q}^{\pi}$ in Equations 3 and 5 with $c_{Q}, c_{\pi}, d_{\nabla_a}^Q, d_{Q}^{\pi}$.
- This logic also applies to our analysis of parameter update bias in Equation 6 and Appendix A.2 from the view of a single data point. With the absolute operation, this further applies to the analysis of the upper bound of the parameter update bias when we consider a batch of data points.
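For concreteness, the amended absolute-churn metrics could be computed as in this minimal NumPy sketch (the function names and the flat array shapes are illustrative, not from the paper):

```python
import numpy as np

def value_churn(q_before, q_after):
    """Amended Eq. 1 for the value network: the mean of the element-wise
    absolute churn |c_Q| over a reference batch B_ref.

    q_before, q_after: arrays of shape (|B_ref|,) holding Q_theta(s_bar, a_bar)
    and Q_theta'(s_bar, a_bar) for each reference (state, action) pair.
    """
    c_q = q_after - q_before         # signed per-point churn c_Q
    return np.mean(np.abs(c_q))      # C_Q: no cancel-out across the batch

def policy_churn(pi_before, pi_after):
    """Same amended definition for the policy network outputs on B_ref."""
    c_pi = pi_after - pi_before
    return np.mean(np.abs(c_pi))
```

Note how, under the absolute operation, a positive and a negative per-point change (e.g., $+1$ and $-1$) no longer cancel to zero over the batch.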
---
Rebuttal 3:
Title: (For Q6 and Q7) Additional Response
Comment: > Q6: The difference between $Q_{\theta}$ and $\tilde{Q}\_{\theta}$ ($\pi_{\phi}$ and $\tilde{\pi}\_{\phi}$) in Section 3.1 and Figure 1
Each step of function approximation training is accompanied by churn, i.e., the network is changed by both the (active) explicit training and the (passive) churn. Conventionally, the two components are viewed as a whole. In Section 3.1, we use **a two-step view** of the two components, e.g., for value learning, step 1 is that the value network is updated to $Q_{\theta}$ through explicit training on a batch of data for policy evaluation, and step 2 is that the value churn occurs and leads to $\tilde{Q}_{\theta}$.
In practice, the two steps happen at the same time. Thus, $Q_{\theta}$ and $\pi_{\phi}$ in Figure 1 are **the virtual intermediate states** in the learning process, introduced for the purpose of a disentangled illustration.
We have clarified it and amended the notations to eliminate the confusion in the revision.
> Q7: Number of random seeds
In the one-page pdf, we provide the results across 12 seeds for CHAIN PPO in Figure 17 and additional results for ten DeepMind Control Suite tasks with 12 seeds in Figure 19. The results demonstrate the effectiveness of CHAIN in improving the learning performance of PPO.
We found that 6 seeds are sufficient to present a clear evaluation for MinAtar. We will run more seeds for TD3/SAC as suggested.
---
Rebuttal 4:
Comment: Dear Reviewer,
We hope that you've had a chance to read our responses and clarification. As the end of the discussion period is approaching, we would greatly appreciate it if you could confirm that our updates have addressed your concerns.
---
Rebuttal Comment 4.1:
Comment: As the end of the discussion period is approaching, we would greatly appreciate it if the reviewer could confirm that our response has addressed your concerns.
Concretely, we provide detailed discussions and clarifications in our response to address the main concerns raised by the reviewer on the difference between CHAIN and MeDQN(R) (please refer to **Q1**), the relationship between churn, catastrophic forgetting (**Q2**) and interference (**Q3**) and the amendment of the formal study (**Q4**). We also provided additional experimental results in the one-page pdf material with ten new tasks, more seeds, and extra learning settings.
We believe that these additions and modifications help to address all of the concerns raised in the review, but please let us know if there are any further issues to address.
---
Rebuttal 5:
Comment: Sorry for the late reply, and thank you for your detailed response.
It is good to see that the theory issue is resolved. I've updated my score accordingly.
A few more comments.
> Churn is the change in the network output, while interference is the change in the objective function of the network.
This claim is wrong. Interference refers to the change of the network output as well. After all, how could it be possible to change the objective function of the network without changing the network output?
Specifically, I believe churn is just an outcome of interference.
> policy/value churn v.s. catastrophic forgetting
Thank you for the discussion about these two topics. It would be beneficial to include the discussion (policy/value churn v.s. catastrophic forgetting, MeDQN v.s. CHAIN) in the updated draft.
> catastrophic forgetting can be viewed as a consequence of the accumulation of churn
I don't agree. Forgetting can also happen after one single update, as demonstrated in [8]. In fact, I tend to think that both catastrophic forgetting and churn are outcomes of interference. That is, interference is the root cause of both phenomena.
> 6 seeds for MinAtar is enough.
That is not certain. The shaded areas of the two algorithms in Space Invaders still overlap with each other. More seeds are needed.
Overall, the biggest value of this work is neither pointing out a new phenomenon (i.e., churn) nor proposing a new algorithm, since churn is just an outcome of interference and the key idea behind CHAIN is largely similar to MeDQN. The most valuable insight in this work is identifying the interplay between the value churn and the policy churn and the dynamics under the chain effect.
---
Rebuttal Comment 5.1:
Title: Thanks for the reviewer's time and effort, and some more discussions
Comment: We greatly appreciate the time and effort the reviewer devoted to reviewing our work and actively participating in the discussion. The reviewer's valuable comments helped us to amend our formal analysis, and established insightful connections to related research topics like catastrophic forgetting and interference.
Here are some more clarifications and discussions regarding our claims on churn and interference mentioned by the reviewer.
We agree with the reviewer that interference is usually a more general word in the RL community, which, as claimed in [Liu et al., 2023], “classically refers to an update **negatively** impacting the agent’s previous learning—eroding the agent’s knowledge stored in the value function”. Conversely, the word "generalization" often (but not always) refers to an update **positively** impacting the agent’s previous learning. Churn is a more **neutral** word that describes the phenomenon or the behavior of the NN. However, they have very similar meanings in essence, and their specific meanings often depend on the context.
**In the context of our work and the relevant works** mentioned by the reviewer, i.e., [Liu et al., 2023] and [Liu et al., 2020], interference is formally defined as several functions regarding TD error $\delta(\theta)$ in Section 3 of [Liu et al., 2023] or as the difference in Mean-Squared Value Error in Section 4.1 of [Liu et al., 2020] (they are what we meant by “the objective function of the network” in our claim); while churn is defined in Equation 1 in our paper to be the change of network output.
Therefore, we made the claim “Churn is the change in the network output, while interference is the change in the objective function of the network” **according to their specific formal definitions in these works**. After all, this is a slight difference and specific to the formal definition used in different works.
As in the works mentioned by the reviewer, interference has a specific meaning to denote the change (or deviation) in concrete objectives. That is why interference is often considered to be a “negative” word while generalization is often positive (e.g., the change of network leads to a better prediction). In this sense, churn is neutral since it only describes the change of network output, without taking into consideration an optimal objective. In this context, this is also why we provided the possible understanding: interference is the negative outcome of churn and generalization is the positive outcome of churn. However, we agree that the meanings of churn, interference, and generalization are arguable in the RL community, and we would not make these claims in our paper.
We have added the relevant works mentioned by the reviewer along with these insightful discussions in our paper. We are also running 6 more seeds for MinAtar and will make sure to update our results in our revision. | Summary: The authors focus on improving the current state of deep reinforcement learning by addressing the churn effect in deep neural network training. The churn effect in deep reinforcement learning is the phenomenon where output predictions for data outside the training dataset can change during training updates. This can lead to instability, suboptimality, and even collapse. Previous works addressing churn are limited in scope and do not provide a method to counter the churn effect. In this work, the authors perform a detailed exploration of the churn effect in RL and propose an algorithm to counter this issue. They characterize churn in a view of Generalized Policy Iteration with function approximation and discover a chain effect of churn. This chain effect leads to a cycle that compounds and biases the learning dynamics throughout the iteration. The authors study the effect of churn across different setups and propose the CHAIN algorithm, which can be easily plugged into most existing DRL algorithms. Experiments show that their method is effective in both reducing churn and improving learning performance across online and offline, value-based and policy-based RL settings.
Strengths: The problem is well motivated, the authors provide detailed explanations of what the problem is, why its important, what has been done previously, and a detailed explanation of their methodology.
Weaknesses: 1. In the introduction, please provide a short intuition behind the CHAIN algorithm and how it achieves the explicit control of churn. Perhaps mention that CHAIN achieves this by modifying the original loss function.
2. Expand on the figure captions. This applies to all figures 1-6.
3. For figure 1, what is meant by the policy churn and value churn arrows? Are there any updates going on in between? In the description, the authors mentioned “explicit training”, please elaborate what is meant by this.
4. For figure 2, expand the caption for 2b. In figure 2a there is an arrow without a direction; please double-check whether this is accurate, and if so, mention in the caption why that is the case.
5. For figure 3-6, mention the key takeaways from the graphs in figure caption
6. In table 1, CHAIN seems to perform worse for AM-large-diverse-v2 dataset, any explanation/intuition as to why that may be the case?
7. Figure 3 appears after figure 4; please correct this.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see weakness section
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes, the authors mention limitations in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s careful review and constructive feedback, and the reviewer’s recognition of the motivation of this work and the importance of studying the churn problem in deep RL. Our additional results in the one-page pdf **further demonstrate the effectiveness of CHAIN** equipped with our method for **auto-adjustment of the regularization coefficient**, with **ten new DMC tasks** and **an additional investigation on learning with larger networks**.
The main concerns focus on several expression details and the reviewer also noted valuable writing suggestions. Our response aims to clarify these aspects in detail.
> Q1: “For figure 1, what is meant by the policy churn and value churn arrows? Are there any updates going on in between? In the description, the authors mentioned “explicit training”, please elaborate what is meant by this.”
There are no updates between $Q_{\theta}$ and $\tilde{Q}\_{\theta}$ (and $\pi_{\phi}$ and $\tilde{\pi}\_{\phi}$), which are denoted by the two red wave arrows. The only “explicit training” is the policy evaluation and the policy improvement, which are denoted by the black arrows.
Each step of function approximation training is accompanied by churn, i.e., the network is changed by both the (active) explicit training and the (passive) churn. Different from the conventional view that takes the two components as a whole, in Figure 1, we use **a two-step view** for both the value and policy learning, e.g., for the value learning case, step 1 is that the value network is updated to $Q_{\theta}$ with explicit training on a batch of data for the policy evaluation, and step 2 is that the value churn occurs and leads to $\tilde{Q}_{\theta}$.
In practice, the two steps happen at the same time. Thus, $Q_{\theta}$ and $\pi_{\phi}$ in Figure 1 are **the virtual intermediate states** in the learning process, introduced for the purpose of a disentangled illustration.
We appreciate the reviewer for pointing out this. We have clarified it and amended the notations to eliminate the confusion in the revision.
> Q2: “In table 1, CHAIN seems to perform worse for AM-large-diverse-v2 dataset, any explanation/intuition as to why that may be the case?”
After increasing the number of seeds to 12 and trying smaller regularization coefficients $\lambda_{\pi}$, the scores of CHAIN IQL for AM-large-diverse-v2 are:
26.67 $\pm$ 3.96 for $\lambda_{\pi}=1000$ (the one reported in Table 1), 28.33 $\pm$ 4.05 for $\lambda_{\pi}=500$, 35.0 $\pm$ 4.48 for $\lambda_{\pi}=200$, and 32.5 $\pm$ 2.08 for $\lambda_{\pi}=100$.
Thus, we hypothesize that $\lambda_{\pi}=1000$ slightly over-regularizes IQL while using $\lambda_{\pi}=200$ or $100$ improves IQL. This indicates that there is still room to achieve a better score with a smarter strategy for coefficient choice.
> Q3: On the writing suggestions
We appreciate the reviewer’s valuable suggestions on improving the expressions in the introduction, the figure captions, etc.
- [Introduction, “please provide a short intuition behind the CHAIN algorithm, how is it achieving the explicit control of churn. Perhaps mention that by modifying the original loss function, CHAIN can achieve this”]
We added the details “The main idea of CHAIN is to reduce the undesirable changes to the policy and value networks for states (and actions) that are outside of the current batch of data. The motivation is similar to the monotonic improvement principle of PPO and TRPO, which in their cases is achieved by improving just the action distribution for the samples in the current batch. Concretely, CHAIN minimizes one additional churn reduction regularization term computed with a separate data batch along with the optimization of the original policy or value learning objectives” to describe the intuition, right after the first sentence in Line 50.
- [Figure 2a, “there is an arrow without direction”] It should be an arrow towards the right (i.e., from $Q_{\theta_t}$ to $\pi_{\phi_{t+1}}$) and we have amended it.
- [Figure 3-6] We have added takeaways in the captions as suggested.
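To illustrate the intuition of the added text (a churn-reduction regularizer on a separate reference batch, minimized alongside the original objective), here is a hedged NumPy sketch with a toy linear Q-function. Names such as `q_net`, `lam`, and the exact loss forms are illustrative, not the paper's implementation:

```python
import numpy as np

def chain_value_loss(q_net, params, batch_train, batch_ref, q_ref_before, lam=1.0):
    """Sketch of the CHAIN idea for value learning: the original TD-style loss
    on the training batch plus a churn-reduction regularizer that penalizes
    changes of the network's outputs on a separate reference batch."""
    s, a, td_target = batch_train
    td_loss = np.mean((q_net(params, s, a) - td_target) ** 2)  # original objective
    s_ref, a_ref = batch_ref
    # Churn-reduction term: keep outputs on B_ref close to the pre-update outputs.
    churn_reg = np.mean(np.abs(q_net(params, s_ref, a_ref) - q_ref_before))
    return td_loss + lam * churn_reg

# Toy linear Q-function Q(s, a) = w * s + a, purely for demonstration.
q_net = lambda w, s, a: w * s + a
```

The same pattern would apply to the policy loss, with the regularizer computed on policy outputs instead of Q-values.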
---
Rebuttal 2:
Comment: Dear Reviewer,
We hope that you've had a chance to read our responses and clarification. As the end of the discussion period is approaching, we would greatly appreciate it if you could confirm that our updates have addressed your concerns.
---
Rebuttal Comment 2.1:
Title: Sincere Request for Further Feedback from Reviewer c7UJ
Comment: As the end of the discussion period is approaching, we would greatly appreciate it if the reviewer could provide further feedback to our rebuttal, and let us know whether our response has addressed your concerns.
In the response, we provided a detailed clarification for Figure 1 (**Q1**) and an additional discussion on the results in AM-large-diverse-v2 with more results (**Q2**). In **Q3**, we detailed how we took the writing suggestions and modified the text in the revision.
Moreover, to further strengthen our experimental evaluation and demonstrate the significance of CHAIN, we provided additional experimental results in the one-page pdf material with ten new tasks from DeepMind Control Suite, more seeds, and an extra learning setting on DRL scaling.
We believe that these additions and modifications help to address all of the concerns raised in the review, but please let us know if there are any further issues to address. | Summary: Deep RL optimisation exhibits many instabilities and performance collapses. Schaul et al. [2022] discuss a pathology termed the "policy churn", where even a single update to the value network frequently changes the optimal action for a huge fraction of all states (most of which were not present in the training batch). Following in these steps, the current paper characterises churn through the lens of generalised policy iteration, and proposes a simple regularisation loss that can be intuitively described as restricting the actor and critic predictions from changing "too much" on other states.
Strengths: The authors argue convincingly that churn effects in actor and critic optimisation can interact and amplify each other, in what they call the "chain effect", and propose a surprisingly simple regularisation loss to weaken this effect. As a result, the churn measurably goes down, which appears to correlate with better learning performance.
As there are few papers on mitigating churn, I think these findings are useful and can help inform further research.
Weaknesses: # Writing
I have some concerns about the writing. The topic is important, and relatively new, so good communication would be valuable.
The paper is quite well written until page 4, but later, it gets hard to follow.
For example, the language starts to get more vague, e.g. line `176` -
> One thing to note is the two types of deviation are derived in an **exchange manner** between the value and policy churn,
I found Section `3.3` particularly hard to follow and had to reread it many times. It introduces a lot of new symbols in one go, uses them in 1/3rd of the page, and then moves on to the next topic. The section's conclusion is that churn and parameter update bias amplify each other as the optimization goes on (something which the reader is already informally aware of since early on in the paper).
Afterwards, the main method (CHAIN) is introduced as late as *page 7*, following which the paper appears quite rushed.
I think the paper could benefit from:
- moving the NTK discussion (which is never used afterwards) and everything theoretical after that to the appendix. The math may be saying the right things, but I did not find it *useful* in this context.
- focusing on describing CHAIN and its experiments in more detail.
`326`: You mention Fig 5, but I think you mean Fig 6
`328`: I'm not sure what the following bolded part means. Does it mean that the target critic network slows down credit assignment / weight updates, and therefore reduces churn? It would be helpful to be more verbose in the text right here.
> We hypothesize that this is because policy interacts with the environment directly **and the target critic network also helps to cap the value churn**.
# Experiments
The churn reduction looks quite noticeable and the MinAtar evaluations also show a notable performance boost. How much of this observed churn reduction can be explained by simply "slowing down" the parameters' changes?
I think this is quite important to test, one could also simply train both actor and critic with:
1. slightly reduced learning rates
2. different target network momentums
... and observe if the churn also reduces due to those things (and whether the agent performance also correspondingly improves). I believe such an experiment would give clearer insights, and possibly strengthen the paper.
Line `624`:
> Between TD3 and SAC, CHAIN-PCR works better for TD3 rather than SAC. We hypothesize that this is because KL-based PCR used for SAC poses more challenges in code implementation.
That is concerning, because it sounds like a bug in your code. Not affecting the entire paper, but it does make it hard to believe the SAC results.
Technical Quality: 4
Clarity: 3
Questions for Authors: **IQL**: Line `338` says you apply CHAIN to IQL --- could you explain how? as far as I can tell, IQL does not need to train an actor network in tandem with a critic. It's likely that I did not fully understand this part.
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The authors do mention the limitations of the work, but the paper could be greatly improved by reducing the role of the mathematical exposition that takes up the majority of the paper, and giving more space to the experiments.
This is an important area of work that can benefit from new empirical findings, and I am quite willing to improve my rating if my concerns are addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s valuable review and the recognition of the value of the chain effect and the method proposed in our work. Our additional results in the one-page pdf further demonstrate **the effectiveness of CHAIN in ten DMC tasks and improving the learning when larger networks are used**. We also believe that our study on the chain effect of churn can inspire more future work on understanding and addressing the learning issues of DRL.
The main concerns focus on our writing and several experimental details. Our response aims to systematically clarify these aspects.
> Q1: Churn reduction v.s., Slowing down the parameters’ changes (with reduced learning rates and different target network momentums)
We appreciate the reviewer for pointing out this insightful investigation. We ran additional results for using different learning rates and target network replacement rates. The results are summarized **in Table 10 of the one-page pdf**.
In terms of learning performance (i.e., final episode return), we can observe that **either reducing the learning rate or the target network replacement rate often leads to worse performance**, especially for TD3. To some degree, this matches the common knowledge in the RL community.
As to churn reduction, in principle, using smaller learning rates or target network replacement rates should lead to less churn. This is because churn is positively related to the amount of the parameter update (as shown by the NTK in Equation 3), and a slower target network further damps the churn (which occurs instantly in each training step) when computing the target value for the critic network to fit. We also observed empirical evidence for this (omitted due to the one-page space limitation).
This indicates that **the issue of churn cannot be addressed by reducing the learning rate or the target network replacement rate** (which just slows down the learning process). Churn is a “by-product” of the training of DRL agents and should be addressed separately. Actually, we observed that applying CHAIN when using smaller learning rates also improves the learning performance in some tasks, which is omitted due to the one-page space limitation.
> Q2: “Line 338 says you apply CHAIN to IQL --- could you explain how? as far as I can tell, IQL does not need to train an actor network in tandem with a critic. It's likely that I did not fully understand this part”
According to Section 4.3, Equation 7 ($L_{\pi}(\phi)$) and Algorithm 1 in the original paper of IQL (“Offline Reinforcement Learning with Implicit Q-Learning”), **a policy network is explicitly trained** with advantage-weighted regression based on the Q and V networks. Concretely, a Gaussian policy is generated by the policy network of IQL for action selection as the D4RL tasks in our experiments have continuous action spaces.
Therefore, we apply CHAIN to IQL by optimizing $L_{\pi}(\phi)$ together with $L_{PC}(\phi, B_{ref})$ as Equation 10 in our paper. Please let us know if there are any further questions about this point.
> Q3: On the writing suggestions
We appreciate the reviewer for pointing out these issues in our writing and providing suggestions, which are very valuable to the further polish of our paper.
- [Line 176, “exchange manner”] We amended this improper expression by re-rewriting the sentence to “the two types of deviation show the interplay between the value and policy churn as the value churn causes the deviation in policy (i.e., the action gradient) and the policy churn causes the deviation in value (i.e., the policy value)”.
- [Section 3.3 and the organization of NTK discussions and experiments] In Section 3, we aim to present (1) how the parameter update causes churn with the NTK in Equation 3, (2) the interplay between the policy and value churn with the NTK in Equation 5, and (3) how churn introduces bias in the parameter update with Equation 6. The three components form the cycle that causes instability: parameter update —> churn —> parameter update —> …. Therefore, we are afraid that completely moving the NTK discussions to the appendix could make some readers lose track of these connections.
However, we agree with the reviewer that we can simplify and shorten the content of Section 3 to make the thread clearer and more prominent, and then make more room to strengthen our experiment section with more analysis and our additional results (as provided in the one-page pdf). We have **re-organized the content in our revision and included additional experimental analysis** on scaling networks, as well as an updated version of CHAIN with the method for adaptive regularization coefficients to reduce hyperparameter tuning.
- [Line 326] We amended it to be Figure 6 as pointed out.
- [Line 328, “the target critic network also helps to cap the value churn”] It means that the value churn that accompanies each training step of the critic network is not immediately (or fully) reflected in the target critic due to the delayed synchronization of the target network. Thus, the target critic should reduce the value churn by a factor associated with the exponential moving average coefficient $\tau$ (i.e., the momentum hyperparameter). We added these explanations to make it clearer.
- [Line 624, “KL-based PCR for SAC”] We meant to express that the variation (regarding the scale and range) is higher when optimizing the MaxEnt objective and the KL-based PCR term together for SAC than when optimizing the Q objective and the L2-based PCR term for TD3. Another hypothesis is that the MaxEnt nature of SAC favors the encouragement of more stochasticity in the policy. We re-wrote the sentence to eliminate the expression issue.
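The target-network damping mentioned in the Line 328 point is the standard soft (Polyak) update; a minimal sketch (illustrative, not the paper's code):

```python
import numpy as np

def soft_update(target_params, online_params, tau=0.005):
    """Polyak averaging: the target network absorbs only a tau-fraction of each
    change to the online network, so the value churn of a single training step
    is reflected in the TD targets damped by a factor of roughly tau."""
    return (1.0 - tau) * target_params + tau * online_params
```

With the common $\tau = 0.005$, an instantaneous change of the online critic's output moves the target critic by only about 0.5% per update.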
---
Rebuttal 2:
Comment: Thanks for the rebuttal. That addresses some of my concerns.
> Therefore, we apply CHAIN to IQL by optimizing $L_{\pi}(\phi)$ together with $L_{PC}(\phi, B_{ref})$ as Equation 10 in our paper. Please let us know if there are any further questions about this point.
If my understanding is correct, IQL trains only the critic with in-sample learning; it does not rely on an actor. In the end, once the Q network is trained, an actor can be trained to exploit this frozen Q network with AWR. Sure, in practical implementations, one might be constantly training an actor against an also-being-trained Q network, but the actor has no effect on the Q network's training dynamics.
**Therefore, IQL cannot, by design, exhibit the "chain effect", because training the actor network is optional.** You should acknowledge this in the relevant section.
IQL is a very nice guinea pig for your theory because it permits training without the funky actor-critic two-network dynamics. So in principle, you only have critic-churn to deal with.
The fact that you still get better performance in IQL despite that is quite interesting, and merits a discussion (and you need more space in the paper for that...). I am very curious to hear your thoughts on why this happens. You are still optimising L_QC with the modified IQL, correct? An ablation would be interesting; due to the lack of the "chain effect", I'd hypothesise that only L_QC should be contributing to the improved performance, not L_PC. You could try ablating L_QC and L_PC by setting their coefficients to zero in IQL training: i.e., do the VCR / PCR / DCR experiments. I would be curious to hear what happens, in your response (fine as a markdown table in an openreview comment, if you can't update the PDF).
Because you're using CORL, these ablations should be doable quite quickly.
I would be willing to raise my score upon receiving a through update!
---
Rebuttal 3:
Title: More discussion about IQL
Comment: We appreciate the reviewer for making this point clearer and for the valuable suggestions.
Yes. The training of the actor network does not influence the dynamics of the value network training, although the actor and critic are trained iteratively in practice. Therefore, to make a claim like “IQL suffers from the chain effect” would be improper. We will clarify this point in our draft.
**[“You are still optimising L_QC with the modified IQL, correct?”] No**, we only applied PCR to IQL and AWAC without touching the training of the critic network (as mentioned in Line 343-348).
Therefore, we can now **credit the performance improvement achieved by CHAIN to its effect in reducing the policy churn only** and the ablation does not exist. One thing to note is that, despite the lack of the chain effect, as long as an explicit actor network is trained, policy churn exists (like in the PPO case), and the actor network directly interacts with the environment for the final evaluation.
For the extra experiment suggestion, **we will try to present the results for only applying VCR to IQL** to investigate whether "only L_QC should be contributing to the improved performance", before the discussion stage ends with our best effort.
---
Rebuttal Comment 3.1:
Title: more on IQL
Comment: Thanks for the swift response!
> Yes. The training of the actor network does not influence the dynamics of the value network training, although the actor and critic are trained iteratively in practice. Therefore, to make a claim like “IQL suffers from the chain effect” would be improper. We will clarify this point in our draft.
Great!
> No, we only applied PCR to IQL and AWAC without touching the training of the critic network (as mentioned in Line 343-348).
Hmm, I checked the mentioned lines and here are some issues
- You mention "[...] we use $\lambda_\pi=1000$ for both CHAIN IQL and CHAIN AWAC [...]". But **you do not mention that $\lambda_Q=0$ !** So you need to be clearer and more verbose there.
- Further in `346-348`: "[...] CHAIN suppresses the dual bias of policy value and helps address the extrapolation error of offline learning." I think "the dual bias of policy value" is vague and needs to be phrased better. It also gives the impression that the "chain effect" is treated in these methods, but it's not (as you said, you're not addressing value churn here at all!)
> Therefore, we can now credit the performance improvement achieved by CHAIN to its effect in reducing the policy churn only
Okay, but this is really very surprising and needs to be probed further. As we discussed, training an actor is optional in IQL. What happens if you train IQL to the very end (without training the actor), then do a fixed budget of actor-only training against the final critic (without L_PC), and compare the results?
- If those **actor-trained-against-final-frozen-critic** results are better than the IQL baseline (which has interleaved actor and critic updates), it could suggest there is something harmful about training an actor against a time-varying critic, and can support your theory further.
- It would be also interesting to see how it compares against your current IQL + CHAIN runs, and I'd be curious to see your interpretation of the results.
> For the extra experiment suggestion, we will try to present the results for only applying VCR to IQL to investigate whether "only L_QC should be contributing to the improved performance", before the discussion stage ends with our best effort.
Thank you, looking forward to it!
---
Rebuttal 4:
Comment: Sorry for the wrong line number in the last response. We mentioned that we apply CHAIN to IQL by implementing the policy churn reduction in Line 337-340: “To apply CHAIN on IQL and AWAC, we implement the regularization for the policy churn reduction (Eq. 10) by adding a couple of lines of code without any other modification”. We appreciate the reviewer for pointing out the confusion and we have clarified this in our draft.
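As a rough illustration of what those "couple of lines" might look like (a hypothetical sketch, not the actual CHAIN code; the function and argument names are invented), the PCR term of Eq. 10 adds a penalty on how much the policy outputs on a reference batch shift across an update:

```python
def actor_loss_with_pcr(base_loss, pi_ref_new, pi_ref_old, lambda_pi):
    """Policy loss plus a policy churn reduction (PCR) term.

    pi_ref_old: policy outputs on a reference batch before the update
    (a frozen snapshot); pi_ref_new: the current policy's outputs on the
    same batch. The extra term penalizes their mean squared difference.
    """
    churn = sum((a - b) ** 2 for a, b in zip(pi_ref_new, pi_ref_old)) / len(pi_ref_new)
    return base_loss + lambda_pi * churn
```

With the $\lambda_{\pi}=1000$ used for CHAIN IQL/AWAC, a shift of 0.5 on one reference action would add 250 to the actor loss, which is why keeping the regularizer at a small relative scale matters.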
For Line 346-348, we will rephrase the discussion by focusing on the effect of policy churn reduction and also with the potential additional results as suggested by the reviewer.
---
Rebuttal 5:
Title: Additional Results for "actor-trained-against-final-frozen-critic" and CHAIN IQL with VCR only
Comment: We provide the additional results for **IQL (sequential)** over 12 seeds, i.e., **actor-trained-against-final-frozen-critic**, and **CHAIN IQL (VCR, $\lambda_{\pi}=0, \lambda_Q=0.01$)** over 6 seeds below, along with the results for IQL and IQL (PCR, $\lambda_{\pi}=1000, \lambda_Q=0$) which were reported in our submission.
In the following, we provide the implementation details and discuss the results. We are willing to hear more valuable comments from the reviewer and address any further questions.
| Task | IQL | **IQL (sequential)** | CHAIN IQL (PCR) | **CHAIN IQL (VCR)** |
| --- | --- | --- | --- | --- |
| AM-umaze-v2 | 77.00 $\pm$ 5.52 | 60.00 $\pm$ 3.91 | 86.66 $\pm$ 4.11 | **88.33 $\pm$ 3.66** |
| AM-umaze-diverse-v2 | 54.25 $\pm$ 5.54 | 55.00 $\pm$ 5.46 | 63.33 $\pm$ 9.42 | **71.67 $\pm$ 7.23** |
| AM-medium-play-v2 | 65.75 $\pm$ 11.71 | 52.50 $\pm$ 3.36 | **83.33 $\pm$ 9.33** | 70.00 $\pm$ 5.27 |
| AM-medium-diverse-v2 | 73.75 $\pm$ 5.45 | 53.33 $\pm$ 5.93 | **80.00 $\pm$ 12.90** | 70.00 $\pm$ 6.67 |
| AM-large-play-v2 | 42.00 $\pm$ 4.53 | 17.50 $\pm$ 4.10 | **50.00 $\pm$ 5.77** | 41.67 $\pm$ 7.61 |
| AM-large-diverse-v2 | 30.25 $\pm$ 3.63 | 5.83 $\pm$ 2.19 | 26.67 $\pm$ 12.50 | **38.33 $\pm$ 5.97** |
> “What happens if you train IQL to the very end (without training the actor), then do a fixed budget of actor-only training against the final critic (without L_PC), and compare the results?” (the results for actor-trained-against-final-frozen-critic)
As suggested by the reviewer, we slightly modified the training process of the CORL implementation of IQL to realize **“actor-trained-against-final-frozen-critic”**:
1. First train the value network and Q network for 1M steps.
2. Then train the policy network for 1M steps with the value network and Q network frozen.
3. We do not modify any other implementation detail and use the same hyperparameters.
4. We check the learning curves of the policy network and the final scores.
We call this variation **IQL (sequential)**. The total number of gradient steps is the same as the default IQL implementation where the critic and actor are trained iteratively.
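The schedule difference between IQL (sequential) and the default can be sketched as follows (hypothetical stand-in functions, not the CORL implementation; `critic_step` and `actor_step` each represent one gradient update):

```python
def run_iql(total_steps, critic_step, actor_step, sequential=False):
    """Run IQL training under two schedules with the same gradient budget."""
    if sequential:
        # IQL (sequential): critic-only first, then actor-only
        # against the final frozen critic.
        for _ in range(total_steps):
            critic_step()
        for _ in range(total_steps):
            actor_step()
    else:
        # Default IQL: critic and actor updated at every step, interleaved.
        for _ in range(total_steps):
            critic_step()
            actor_step()
```

Both branches perform the same number of critic and actor updates; only the interleaving changes.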
We report the **means and standard errors of the final scores** achieved by **IQL (sequential)** over 12 random seeds.
The results in terms of final score show that **IQL (sequential) performs worse than IQL in 5 of 6 tasks**. For the learning curves (which we are not able to present at this stage), we found that the policy performance rises quickly and then changes little after 3e4 - 1e5 steps (depending on the task) of policy training, without large variance or collapse during training.
The difference between IQL (sequential) and IQL can be fully attributed to the difference in the training dynamics, mainly on the policy network (as the training of the policy network does not influence the value network training in IQL).
We think it is somewhat tricky to explain the difference between IQL (sequential) and IQL. The difference in the training dynamics here is beyond the scope of the chain effect of churn studied in our work.
To provide a possible explanation, we conjecture that the difference in the training dynamics of the policy network in IQL leads to a further difference in the policy outputs on out-of-sample states.
The policy outputs on these out-of-sample states are fully determined by generalization. Compared with IQL (sequential), the distribution of advantages encountered during standard (interleaved) IQL training should cover a wider range of values, thus providing more diverse gradient directions for the training of the policy network. This could lead to better robustness on out-of-sample states.
Finally, we believe that a more systematic study for offline RL is needed by focusing on how the training on in-sample data affects the network output on out-of-sample data under different training schemes, which is beyond the scope of our work.
> Discussion on “(only) applying VCR to IQL”
Technically, the value training in IQL includes the training of a Q network and the training of a V network. To apply VCR to IQL, we modify IQL by optimizing one additional Q-value churn reduction regularization term jointly with the original Q loss (without changing the training of the V network). We call this variation **CHAIN IQL (VCR)**.
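A minimal sketch of such a joint Q objective (assumed structure and names, not the authors' implementation): the churn term penalizes the shift of Q-values on a held-out reference batch relative to a frozen pre-update snapshot.

```python
def q_loss_with_vcr(q_pred, q_target, q_ref_new, q_ref_old, lambda_q):
    """Original TD-style Q loss plus a Q-value churn reduction term.

    q_ref_old: Q-values on a reference batch before the update (frozen copy);
    q_ref_new: the current network's Q-values on the same batch.
    """
    mse = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    td_loss = mse(q_pred, q_target)
    churn_term = mse(q_ref_new, q_ref_old)  # penalize output shift
    return td_loss + lambda_q * churn_term
```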
The results for **CHAIN IQL (VCR)** with $\lambda_{Q}=0.01$ are reported in the table above.
We can observe that **CHAIN IQL (VCR) outperforms IQL in 4 of 6 tasks, and outperforms CHAIN IQL (PCR) in 3 of 6 tasks**. This demonstrates the effectiveness of reducing the value churn in IQL. This also indicates that even though the chain effect does not exist in IQL, the value churn and the policy churn in the value training or the policy training can still negatively affect the learning performance.
---
Rebuttal Comment 5.1:
Title: Thank you
Comment: Thanks for running these experiments.
I certainly did not expect IQL (sequential) to perform worse. The new results are quite surprising and will make people think.
I would urge you to add these findings to the paper, they are quite valuable. Appendix or main text, up to you!
I’m raising my score. Great work!
---
Reply to Comment 5.1.1:
Title: We appreciate the reviewer's time and effort
Comment: We greatly appreciate the time and effort the reviewer devoted to reviewing our work and actively participating in the discussion.
The reviewer's valuable comments led us to improve our study with more careful discussions and better interpretations of our method. The discussion with the reviewer also inspired us to think about new ideas for future research beyond this work.
We will add these new findings to our paper with a better organization as suggested.
Thank you! | Summary: This paper focuses on the training instability caused by the non-stationary nature of DRL, whose phenomena are unexpected shifts in policy and value network outputs (policy churn and value churn). To mitigate policy and value churn, the proposed CHAIN algorithm adds penalty terms to the policy and value updates.
Strengths: CHAIN is a versatile solution that can be integrated into different backbone algorithms.
Weaknesses: * Extra introduced hyperparameters and non-trivial hyperparameter settings.
* No statistically significant improvements in many tasks, even when equipped with good hyperparameters.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. CHAIN introduces two extra coefficients, $\lambda_Q$ and $\lambda_\pi$, to control the degree of regularization. I find that they are set differently for different benchmark suites, tasks, and backbone algorithms. So I have the following questions:
What is the insight behind the chosen hyperparameters for different settings? If potential readers want to apply CHAIN to other settings, how can they set the two hyperparameters? Actually, I find some explanations on this in Appendix C.2, but it requires keeping [policy loss]/[regularization loss] within 0.1 to 0.01, so one needs to first run, then observe the ratio, and then tune, possibly over several runs, whose cost cannot be ignored. Introducing extra hyperparameters when no single setting suffices to get satisfying results across varying settings is not encouraged, as CHAIN may actually yield worse performance than the backbone algorithm if the hyperparameters are set wrongly.
2. How many seeds do you use for the experiments in your main paper? For example, Figure 6. The CHAIN SAC in Figure 6 does not show a statistically significant improvement over SAC in Hopper-v4, Walker2d-v4, and HalfCheetah-v4, which may raise some doubts about the effectiveness of CHAIN, especially considering that an unsuitable hyperparameter set may give even worse results for CHAIN SAC.
3. As the paper focuses on mitigating the training instability of DRL and its non-stationary properties, some other methods may need to be compared against or discussed, such as AlgaeDICE [1], etc.
4. Further, in online RL, over-conservatism may hinder performance, so the authors may need to discuss whether CHAIN would negatively influence exploration in online RL and thereby hinder the asymptotic performance.
5. And, another style of work, for example [2], actually encourages network neurons to awaken in online RL, thus preventing local optima. Could you provide some explanation on whether CHAIN would worsen the local optimality issue?
[1] Nachum, O., Dai, B., Kostrikov, I., Chow, Y., Li, L., & Schuurmans, D. (2019). Algaedice: Policy gradient from arbitrary experience. *arXiv preprint arXiv:1912.02074*.
[2] Xu, Guowei, et al. "Drm: Mastering visual reinforcement learning through dormant ratio minimization." *arXiv preprint arXiv:2310.19668* (2023).
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The authors have acknowledged the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s valuable review and the recognition of the versatility of the proposed method. Our method is simple yet versatile in improving the performance of many learning algorithms. We show that the method works across continuous and discrete control tasks, and in both online and offline settings. Moreover, in our additional results (Table 9 in the one-page pdf) we show that **the method can greatly improve the scaling of RL algorithms**. This scaling is also improved by an adaptive version of our method that reduces hyperparameter tuning.
The main concerns focus on the hyperparameter choice for different settings and CHAIN’s effect on exploration and optimality. Our response aims to clarify these aspects in detail.
> Q1: The insight behind the chosen hyperparameters for different settings
How to pick or adjust the regularization coefficient is an unavoidable, tricky problem for most regularization-based methods. In the context of our work, the difference in the scale of quantities like the Q value and the policy objective across different settings leads to the use of different coefficients. This is also a long-standing problem in DRL.
To relieve the pain of manually picking coefficients in different tasks and settings, we additionally propose **a simple but effective method for automatic adjustment of** $\lambda_{\pi}, \lambda_{Q}$ with a target relative loss scale denoted by $\alpha$. It is realized by maintaining the running means of the absolute policy ($| \bar L_{\pi}|$) or Q loss and the PCR ($| \bar L_{PC} | $) or VCR term, and dynamically computing the regularization coefficient, e.g., $\lambda_{\pi} = \frac{\alpha | \bar L_{\pi}| }{| \bar L_{PC} | }$. This method adjusts the coefficient dynamically to keep the consistent relative loss scale $\alpha$.
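As an illustrative sketch of this adjustment (the class and function names are ours, not the paper's code), the coefficient can be computed from running means of the two loss terms:

```python
class RunningMean:
    """Exponential running mean of the absolute value of a scalar loss."""
    def __init__(self, beta: float = 0.99):
        self.beta = beta
        self.value = None

    def update(self, x: float) -> float:
        x = abs(x)
        # Initialize on the first sample, then decay toward new values.
        self.value = x if self.value is None else self.beta * self.value + (1 - self.beta) * x
        return self.value


def adaptive_coef(alpha: float, mean_loss: float, mean_reg: float, eps: float = 1e-8) -> float:
    """lambda = alpha * |L_bar| / |L_reg_bar|, guarded against division by zero."""
    return alpha * mean_loss / max(mean_reg, eps)
```

For example, with a target relative loss scale alpha = 0.01, a mean policy loss of 2.0 and a mean PCR term of 0.5 give lambda_pi = 0.04, keeping the regularizer at 1% of the policy loss.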
Our additional experiments in the one-page PDF show that **this method matches or surpasses the results achieved by the manual choices of** $\lambda_{\pi}, \lambda_{Q}$ for DDQN in five MinAtar tasks (Figure 16), for PPO in five MuJoCo tasks (Figure 17) and ten additional DMC tasks (Figure 19), and for TD3 in four MuJoCo tasks (Figure 17).
> Q2: Whether the CHAIN would negatively influence the exploration in online RL and then hinder the asymptotic performance
In principle, reducing churn suppresses the correlation of network outputs across different data. This encourages the agent to keep its actions on different states independent. Thus, we hypothesize that it could help to **keep the diversity of actions** (in other words, prevent the collapse of action modes often seen in cases with severe learning instability, for both RL [2] and LLMs [3]) and **positively influence exploration**.
In our experiments, we can see that CHAIN **improves the asymptotic performance in the tasks that require effective exploration** even though the value or policy churn is significantly reduced. For MinAtar, the original MinAtar paper mentioned “Seaquest and Freeway present a greater exploration challenge than the other games” on page 5, and our results in Figure 4 show CHAIN improves the final performance of DDQN by clear margins in these two environments.
Moreover, we additionally provide the comparison between CHAIN PPO and PPO in two sparse-reward tasks of DeepMind Control Suite (DMC): ball_in_cup-catch-v0 and cartpole-swingup_sparse-v0. The results are shown in the first two subplots in Figure 19. We can see that CHAIN PPO achieves a higher asymptotic performance in ball_in_cup-catch-v0 and achieves almost the same performance in cartpole-swingup_sparse-v0.
> Q3: DrM [2] encourages the network neurons to awaken, thus preventing local optima. Whether CHAIN would worsen the local optimality issue
DrM prevents the neurons from being dormant with an adaptive perturbation reset method. This addresses the loss of plasticity/learnability and thus alleviates the local optimality issue.
In principle, CHAIN has **a positive effect in preventing the loss of plasticity**. As in Equation 3, reducing churn encourages $k_{\theta}, k_{\phi}$ to 0. This prevents the empirical NTK matrix from being low-rank, which is shown to be a consistent indicator of plasticity loss by [1] (they also claimed empirical NTK is a more reliable indicator than dormant neuron ratio).
In our experiments, we can see that CHAIN improves the final scores in most cases across MinAtar, MuJoCo (especially with CHAIN PPO) and additional ten DMC tasks (in Figure 18 of the one-page pdf).
Additionally, we provide more results in **the one-page pdf** to show that CHAIN improves the final performance both when **running longer** (Figure 17 Ant and Humanoid for 10e6 steps) or with **larger networks** (Table 9).
> Q4: The discussion on AlgaeDice
We appreciate the reviewer for reminding us of this. AlgaeDICE aims to address the limitation of on-policy samples by re-expressing the on-policy max-return objective by an expectation over any arbitrary off-policy data distribution, which is followed by a solution in the form of dual optimization.
AlgaeDICE does not have a direct relation to churn or generalization of DRL agents, however, we believe it is orthogonal to CHAIN in mitigating training instability. We have integrated AlgaeDICE and more in our related work discussion.
> Q5: The number of random seed
In the one-page pdf, we provide the results across 12 seeds for CHAIN PPO in Figure 17 and additional results for ten DeepMind Control Suite tasks with 12 seeds in Figure 19. The results demonstrate the effectiveness of CHAIN in improving the learning performance of PPO.
We found that 6 seeds are sufficient to present a clear evaluation for DDQN in MinAtar. We will run more seeds for TD3/SAC as suggested.
---
Reference:
[1] Disentangling the causes of plasticity loss in neural networks. 2024.
[2] Overcoming Policy Collapse in Deep Reinforcement Learning. 2023.
[3] Controlling Large Language Model Agents with Entropic Activation Steering. 2024.
---
Rebuttal Comment 1.1:
Title: More discussions on the task-specific option and hyperparameter related topic.
Comment: Thank you for your rebuttal.
I have further questions regarding the factors influencing performance. I am particularly interested in understanding which factors contribute most significantly to achieving high performance.
It is evident that hyperparameters play a crucial role; manual selection often yields suboptimal performance across various tasks. On the other hand, employing a smart mechanism typically enhances performance significantly. In light of this, I am curious why the smart mechanism is not included in the primary implementation. Would such an approach impact the theoretical analysis differently?
Additionally, Figure 17 highlights different churn reduction options, where the efficacy of DCR and PCR varies across tasks. How should one decide between these options, and what strategies can mitigate the need for task-specific tuning?
---
Rebuttal Comment 1.2:
Title: Discussion on "the CHAIN could enhance exploration"
Comment: In your response, I am puzzled by the statement: "In our experiments, we can see that CHAIN improves the asymptotic performance in the tasks that require effective exploration even though the value or policy churn is significantly reduced." Could you elaborate on why CHAIN enhances exploration capabilities without becoming overly conservative? Specifically, what attributes of CHAIN contribute to its improved performance on tasks with sparse rewards? CHAIN applies regularization to policies and values but does not explicitly propose exploration-related terms. How, then, does it effectively address challenging exploration problems?
---
Rebuttal 2:
Title: Response to "More discussions on the task-specific option and hyperparameter related topic"
Comment: > It is evident that hyperparameters play a crucial role; manual selection often yields suboptimal performance across various tasks. On the other hand, employing a smart mechanism typically enhances performance significantly. In light of this, I am curious why the smart mechanism is not included in the primary implementation. Would such an approach impact the theoretical analysis differently?
Thanks to the reviewer’s valuable review, the newly proposed method for automatic adjustment of the regularization coefficient was **developed during the rebuttal stage**. We will add this method/mechanism to the main body of the revised paper.
Across all our experiments, we found that manually selecting a regularization coefficient that keeps the target relative loss scale below 0.01 rarely harms the learning performance of the baseline algorithms. Suboptimal performance due to over-regularization is almost only observed when the scale of the regularization term matches or surpasses that of the original learning objectives.
This method adjusts the regularization coefficient dynamically through learning to keep a consistent relative loss scale. It follows the same insight into churn reduction presented by our analysis of the chain effect of churn. We consider that this method does not impact our formal analysis.
> Figure 17 highlights different churn reduction options, where the efficacy of DCR and PCR varies across tasks. How should one decide between these options, and what strategies can mitigate the need for task-specific tuning?
For applying CHAIN to deep AC methods, we recommend that practitioners use PCR. This is because the policy interacts with the environment directly, and in practice the target critic network also helps to alleviate the value churn in the learning of the critic network. Empirically, we also found that PCR often contributes more than VCR.
As shown by our additional results in the one-page pdf, the need for task-specific selection of the regularization coefficient is addressed by our auto-adjustment method. Effective adjustment is achieved **with a common target relative loss scale** for all the tasks within the same domain.
More broadly, we have **three pieces of advice** for the community audiences to adapt CHAIN to different problems in practice:
- Use our “target relative loss scale” insight to do automatic adjustment of the regularization coefficient.
- Start from a small target relative loss scale (e.g., 1e-5). It should be safe in the sense that the performance will not be harmed.
- Use techniques to normalize the scale of quantities (e.g., reward, advantage, Q). This is also one recipe of the DreamerV3 [Hafner et al., 2023].
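As a simple illustration of the third point (our own sketch, not a specific library's API), per-batch normalization of a quantity such as advantages:

```python
def normalize(xs, eps=1e-8):
    """Scale a batch of quantities (e.g., advantages) to zero mean, unit std.

    eps guards against a zero standard deviation on constant batches.
    """
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    std = max(var ** 0.5, eps)
    return [(x - mean) / std for x in xs]
```

Normalizing such quantities keeps the scale of the losses comparable across tasks, which in turn makes a single target relative loss scale easier to reuse.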
---
Rebuttal Comment 2.1:
Title: Thanks for providing details on HP and mechanism choices. I suggest clearly stating it as a limitation.
Comment: Thank you for providing these details, which might be a guide for practical use.
Yet, I suggest the authors clearly state that the need for HP tuning / DCR and PCR selection is a limitation. Because I think it is not good to propose several mechanisms and solve different tasks using different HPs and mechanisms, with the only exception being the case where all of the proposed mechanisms and HPs outperform.
Further, one of my concerns has not been answered well, "I am particularly interested in understanding which factors contribute most significantly to achieving high performance."
I appreciate your efforts in detailed response. I will adjust my score accordingly.
---
Rebuttal 3:
Title: Response to the discussion on "the CHAIN could enhance exploration"
Comment: We appreciate the reviewer's interest in discussing and investigating the potential effect of CHAIN on exploration, which also inspires further research on this point in the future.
The exploration ability/behavior is determined by two factors: (1) the **extrinsic factor**, i.e., the exploration mechanism used, (2) the **intrinsic factor**, i.e., the independence or diversity of the policy (or Q-value) network output.
For the **extrinsic factor**, CHAIN does not suppress the extrinsic exploration mechanisms used in DRL algorithms, e.g., epsilon-greedy in DQN, Gaussian noise in TD3, state-independent variance parameter vector in PPO, and the state-dependent variance parameter vector in SAC.
For the **intrinsic factor**, in our rebuttal text, we provided a discussion about how CHAIN helps to prevent action correlation (more severely, collapse) and could encourage action independence. Indeed, in the extreme case where the churn reduction regularization term dominates the joint learning objective, the agent will be prevented from learning any effective behavior. Our newly proposed method for keeping a consistent relative loss scale should help to avoid this kind of extreme case.
However, the best that CHAIN can do is to prevent the over-correlation of action and the loss of independence. It **does not introduce any extra heuristics or principles to encourage exploration** (e.g., novelty, curiosity, uncertainty), which we think is necessary for learning effective exploration behaviors in challenging exploration problems.
Therefore, we do not think CHAIN has the ability to address challenging exploration problems by itself. It could be interesting to study the effect of CHAIN when applied together with existing exploration methods to DRL agents in the future.
Finally, **we would like to emphasize that we do not propose CHAIN as an exploration method**. It is a method to reduce churn and address the learning issues (as presented in Section 4.1) for better performance of DRL algorithms.
---
Rebuttal Comment 3.1:
Comment: Dear Reviewer,
We hope that you've had a chance to read our responses and clarification. As the end of the discussion period is approaching, we would greatly appreciate it if you could confirm that our updates have addressed your concerns.
---
Rebuttal 4:
Comment: We sincerely appreciate the reviewer's valuable comments and further feedback during the discussion stage. As the end of the discussion period is approaching, we would greatly appreciate it if the reviewer could confirm that our response has addressed your concerns.
For a brief summary, we provided detailed discussions about how we leverage the insight of relative loss scale to further develop a simple but effective method for adapting the regularization coefficient throughout training. With our additional results in the one-page pdf, we showcased that this method greatly helps to relieve the need for manual selection of the regularization coefficient in different tasks and domains, while achieving and even surpassing the performance of manual coefficients. To apply our churn reduction method to a different problem, we provided our thoughts and several pieces of practical advice.
We also discussed the effect of CHAIN on exploration and plasticity in detail. To provide more empirical evidence for this, we provided two more sparse-reward DMC tasks (in addition to the two exploration tasks in MinAtar) to show the effectiveness of CHAIN in these tasks, although our work focuses on how churn occurs and affects DRL and does not aim to address challenging exploration problems.
Moreover, to further strengthen our experimental evaluation and demonstrate the significance of CHAIN, we provided additional experimental results in the one-page pdf material with ten new tasks, more seeds, and an extra learning setting on DRL scaling.
We believe that these additions and modifications help to address all of the concerns raised in the review, but please let us know if there are any further issues to address.
---
Rebuttal 5:
Title: We appreciate the reviewer's time and effort, and some more discussion on "which factors contribute the most"
Comment: We greatly appreciate the time and effort the reviewer devoted to reviewing our work and actively participating in the discussion. The reviewer's valuable comments drove us to propose a method for dynamic adjustment of the regularization coefficient, and inspired more thinking on the effect of churn reduction in exploration, plasticity, etc.
> "I am particularly interested in understanding which factors contribute most significantly to achieving high performance."
As we considered different learning settings in Section 4.1, we would like to note that Value Churn Reduction (VCR) and Policy Churn Reduction (PCR) are not always used together in the different learning settings considered in our experiments.
To be specific, **only VCR is used for CHAIN DDQN** (mentioned in Line 281) because there is no policy/actor network other than the Q-network (and the policy is implicitly derived by performing the greedy actions based on the Q-network). For the PPO case, **only PCR is used for CHAIN PPO** (mentioned in Line 302) because the main role played by the V-network is a baseline to subtract (rather than a real critic, according to the opinion in Chapter 13.5 of [Sutton and Barto, 2018]). Therefore, **in these cases, only one factor is added to the baseline algorithms and the difference in the learning performance can be fully credited to the corresponding factor**.
For deep AC methods, we agree that a better way to select DCR/VCR/PCR is needed. In this work, we provided the results of DCR/VCR/PCR for TD3 and SAC. We have clearly stated this point and updated the limitation section to include the main messages of our discussions as suggested.
Finally, please let us know if we misunderstood the meaning of the “factors” in your comments. | Rebuttal 1:
Rebuttal: We appreciate all the reviewers’ careful review and valuable comments. Here **we summarize the main points of our response to each review** and **the content of our additional results** enclosed in the one-page pdf.
The summary of the responses to individual reviews:
- **[Reviewer DNMP]** We appreciate the reviewer’s recognition of the versatility of the method proposed in our work. The main concerns focus on the insight behind hyperparameter choice for different settings, and CHAIN’s effect on exploration and optimality. To this end, we propose **a simple but effective method for automatic adjustment of the regularization coefficients** by developing the insight of keeping a consistent relative scale between the regular policy or value loss and the churn reduction term. Our additional results (Figure 16-19 in the one-page pdf) show that **this method can match or surpass the results achieved by manually picked coefficients** across different tasks in MinAtar, MuJoCo and additional tasks in DeepMind Control (DMC) suite (Figure 19). We believe that this provides a useful reference for practitioners to use CHAIN in a different problem. Besides, we discuss **how CHAIN can have positive effects in encouraging exploration and preventing plasticity loss in principle**, and show comparable or better asymptotic performance in four exploration tasks (including 2 in MinAtar and 2 in DMC).
- **[Reviewer Qjzk]** We appreciate the reviewer’s recognition of the value of the chain effect and the method proposed in our work. The main concerns focus on an insightful discussion on “the effect of reducing learning rate or target network replacement rate in reducing churn” and several expression details. In the individual response, we show **either reducing learning rate or target network replacement rate slows down learning and leads to worse performance** (with the additional results in Table 10 of the one-page pdf). We also note that **using smaller learning rates or target network replacement rates does not address the issue of churn**, although it has less churn in principle (which is explained by our NTK expressions in Equation 3).
- **[Reviewer c7UJ]** We appreciate the reviewer’s recognition of the motivation of this work and the importance of our study. The main concerns focus on several expression details and the reviewer also noted valuable writing suggestions. In the individual response, we provide detailed explanations and describe how we took the writing suggestions to polish our paper concretely.
- **[Reviewer oLah]** We appreciate the reviewer’s recognition of our experimental work. The main concerns focus on the relationship between our work and the previous works mentioned by the reviewer on catastrophic forgetting (especially MeDQN) and interference. Our response explains how **reducing churn also helps to reduce forgetting** but **reducing forgetting does not necessarily reduce churn and could even increase churn**. We also point out the essential connection between churn and interference, and the clear distinction of our work on the study of policy churn, the interplay, and the dynamics under the chain effect.
In the one-page pdf, we provide additional results:
- **[A simple method for automatic adjustment of the regularization coefficient (refer to Q1 of Reviewer DNMp for the details)]** Figure 16-18 show the results of our auto-adjustment method for CHAIN DDQN/PPO/TD3. The results show that this method can match or surpass the learning performance achieved by manually picked regularization coefficients across different tasks and different settings. We believe that this method can help to relieve the pain of coefficient choice in practice, and inspire the development of smarter methods.
- **[Additional evaluation in ten DMC tasks]** Figure 19 shows additional results of CHAIN PPO for 10 tasks in DeepMind Control Suite, with two sparse-reward tasks (the first two).
- **[CHAIN helps the scaling of PPO]** Table 9 shows our additional investigation on the effect of CHAIN when scaling DRL agents up. We take PPO as the exemplary setting and widen both the policy and value networks to 4 and 8 times their original width. The results demonstrate that CHAIN helps to scale up PPO and achieves clear improvement in terms of episode return. Moreover, we found that **larger networks exhibit more churn**. We hypothesize that this is a possible reason for the notorious scaling issue of DRL agents, and that **CHAIN helps the scaling by reducing churn effectively**.
- **[(To Reviewer Qjzk) Churn phenomenon and learning performance when using smaller learning rates and target network replacement rates]** Table 10 shows smaller learning rates and target network replacement rates often slow down learning and lead to worse performance.
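The relative-scale rule behind the coefficient auto-adjustment described above can be sketched in a few lines. This is a hypothetical reconstruction for illustration, not the authors' code; the function name `auto_coefficient` and the `target_ratio` knob are our assumptions:

```python
def auto_coefficient(main_loss, churn_loss, target_ratio=0.1, eps=1e-8):
    # Choose the regularization coefficient so that the churn-reduction term
    # contributes a fixed fraction (target_ratio) of the policy/value loss scale.
    return target_ratio * abs(main_loss) / (abs(churn_loss) + eps)

# A large raw churn loss receives a small coefficient, and vice versa,
# keeping the two terms at a consistent relative scale across tasks.
lam_small = auto_coefficient(main_loss=2.0, churn_loss=20.0)
lam_large = auto_coefficient(main_loss=2.0, churn_loss=0.2)
```

In practice such a coefficient would presumably be recomputed per update from running averages of the two losses rather than from single values.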
With these additional discussions and experimental results, **we emphasize** that **our formal study on the churn in both value and policy** (including the interplay, the chain effect, and the concrete issues in popular DRL settings), **the simple and general churn reduction method** (including CHAIN and the additional auto-adjustment method), and **the empirical discoveries** (including what the churn is like in popular DRL agents, and how they can benefit from churn reduction) **are novel** to the best of our knowledge.
We believe that our paper can inspire more future work on understanding and addressing the learning issues of DRL from angles like churn, interference, and generalization, as mentioned by the reviewers. Moreover, the method proposed in this paper can be implemented very easily and adopted in broader scenarios, e.g., PPO-based alignment of LLMs.
Finally, we sincerely hope that our response can address the questions and concerns. We also hope that the reviewers can re-evaluate the value of our work based on the responses and the additional results.
We are also more than willing to address any further questions or concerns during the discussion stage.
Pdf: /pdf/54611a4c2ee61b96660822a9f9808773a8ed9f03.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Private Geometric Median | Accept (poster) | Summary: The paper proposes algorithms that compute the geometric median with differential privacy. Composed of two parts, the algorithm firstly finds the quantile radius that covers enough points. Secondly, it fine-tunes the geometric median. The authors suggest two methods for the second part: LocDPGD and LocDP cutting plane methods.
Additionally, the paper shows a pure DP algorithm for computing the geometric median and a lower bound on the sample complexity required to compute the geometric median with differential privacy.
Strengths: This paper studies a well-motivated problem. As mentioned in the paper, a line of works has studied computing private versions of the medians, but little is known about the world when the dimension is larger than one.
This problem is difficult because the runtime of applying vanilla DP-GD is highly affected by outliers. This paper uses two steps: first, it estimates the quantile radius and an initial point that is not greatly affected by outliers, and then it fine-tunes the computation of the result. This idea is intuitive but requires some good techniques. For example, the way they implement the DP cutting plane method by adding Gaussian noise to the gradient that "cuts" the set seems novel and interesting to me.
This paper is also very complete in the sense that it shows results in pure DP setting and lower bound in sample complexity.
From the writing perspective, this paper is well-organized and straightforward to read.
Weaknesses: It is a bit difficult to understand the dependency between parameters in Thm 3.1 and Thm 3.4. It would be very helpful if the authors could make Thm 2.7, 3.1, and 3.4 more readable and easier to compare.
Technical Quality: 3
Clarity: 3
Questions for Authors: I am wondering if the authors can explain a bit more about the comparison between LocDPGD and LocDP cutting-plane. For example, you could compare the time complexity and explain under what circumstances one is better than the other.
I am also wondering what part is the bottleneck if someone wants to improve this to a linear run-time.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive and thorough review. Based on your suggestions, we revised the statements in Thm. 2.7, Thm. 3.1, and Thm. 3.4 to improve readability.
> **W1: dependency between parameters in Thm 3.1 and Thm 3.4.**
Note that there is **no** dependency between the parameters in Thm. 3.1 and Thm. 3.4, in the sense that the statements of the theorems are self-contained. Theorems 3.1 and 3.4 concern the utility of the algorithm using DP-(S)GD and the DP cutting plane method, respectively. Both algorithms have the same utility guarantee. There is an additional log factor (only in $n,d,\varepsilon$) in the utility guarantee of the cutting plane method.
Due to space limitations, in the submitted version of the paper we needed to introduce $\kappa$ and $\alpha$ so that the equations fit on the line. In the revised version of the paper, we will improve the readability of the mentioned theorems.
> **Q1: On comparison of LocDPGD and LocDP.**
Both of the algorithms have the same utility guarantee (up to a log factor only in $n,d,\varepsilon$). However, the runtime is different. Consider the case that $\varepsilon=\Theta(1)$. Then, if we approximate the matrix-multiplication exponent by $2.5$, we have the following:
Given $n \leq \tilde{O}(d^{1.75})$, LocDPSGD achieves better runtime, while for $n > \tilde{\Omega}(d^{1.75})$, localized DP cutting plane methods achieve a better runtime. This comparison is based on the runtimes presented in Table 1. We added this discussion into the paper.
> **Q2: Bottlenecks for improving runtime.**
There are two bottlenecks in our algorithms:
1- The first roadblock is that the estimation of the quantile radius requires $n^2$ computations, as discussed in Remark 2.1. One possible solution here is to consider subsampling for estimating $N_i(v)$, i.e., the number of data points within distance $v$ of $x_i$. In particular, one can sample a constant number of points and use only this subset to estimate $N_i(v)$. While this approach provides an accurate estimate of $N_i(v)$ with high probability in a non-private setting, it introduces technical challenges for privacy analysis.
2- The second roadblock is that for private convex optimization of non-smooth loss functions there is **no** general first-order method with an optimal excess error that requires only $n$ gradient evaluations (see [FKT20] and [CJJLLDT23] for recent developments). To address this, we need to develop a specific first-order algorithm for the geometric median problem that achieves the optimal error with $n$ gradient evaluations.
By addressing the aforementioned challenges, we can develop a linear-time algorithm that achieves optimal excess error for the geometric median problem. We will incorporate this discussion into the revised version of the paper.
[FKT20] Feldman V, Koren T, Talwar K. Private stochastic convex optimization: optimal rates in linear time. In Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing 2020 Jun 22 (pp. 439-449).
[CJJLLDT23] Carmon Y, Jambulapati A, Jin Y, Lee YT, Liu D, Sidford A, Tian K. Resqueing parallel and private stochastic convex optimization. In 2023 IEEE 64th Annual Symposium on Foundations of Computer Science (FOCS) 2023 Nov 6 (pp. 2031-2058). IEEE.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response! Nice work!
---
Reply to Comment 1.1.1:
Comment: Thank you for your reply! We will incorporate all the suggested revisions into the final version of the paper. | Summary: This paper introduces a pair of polynomial-time DP algorithms for computing the geometric median of a dataset. The excess error guarantees of the algorithm scale with the effective radius.
The algorithm includes two parts. The first part shrinks the feasible set to a ball whose diameter is proportional to the quantile radius of the optimum, which is defined as the radius of the smallest ball containing sufficiently many points. First, it gives a private estimation of the quantile radius. Then, it locates a good initializer based on the estimated quantile radius. AboveThreshold and DP-GD are used in the procedure to ensure approximate DP.
The second part fine-tunes the output of the first part. The paper provides two fine-tuning methods. The first one utilizes DP-GD. The second one modifies the cutting plane method. It adds Gaussian noise to achieve DP. Using a novel sensitivity analysis of the loss function, the authors show that the noise can be scaled down.
This paper also provides the lower bound to demonstrate the optimality of the sample complexity of their algorithm.
Strengths: 1. Quality of result:
This paper studies the DP convex optimization task. The DP-GD method is well-established in this field. The authors point out that its excess error depends linearly on the radius R of the feasible set. Previous work shows that a strong convexity assumption can remove this deficiency, but that assumption is too strong and unrealistic.
This paper improves the dependence on the radius for the geometric median task under weak and natural assumptions. The main contribution of this work is a pair of polynomial-time DP algorithms (with different run-times) for geometric median estimation with an excess error depending linearly on the effective radius r of the dataset, instead of the worst-case radius R. This significantly improves the error bound when r << R. This condition is common since the dataset may have an enormous R due to a small number of outliers.
Other contributions include a pure DP algorithm based on the inverse smooth sensitivity mechanism (though it is computationally inefficient), and a lower bound on the sample complexity, which verifies the optimality of the algorithm.
2. Novel techniques: The algorithm's good error bound is based on a structural result stated by the paper: given a point $\theta$, if its distance to the optimum $\theta^\star$ is larger than the quantile radius $\Delta_{3n/4}(\theta^\star)$, its loss grows linearly: $F(\theta; \mathbf{X}^{(n)}) - F(\theta^{\star}; \mathbf{X}^{(n)}) \gtrsim \lVert\theta-\theta^{\star}\rVert$. This simulates the effect of strong convexity, which gives quadratic growth. This implies that one can take a larger step size when performing DP-GD, moving quickly toward a sub-optimal point. To utilize this property, the authors build a private estimation algorithm to find this quantile radius. Then, the algorithm fine-tunes the sub-optimal point to find a good estimate of the geometric median. The paper presents a modified DP-SGD algorithm for this task. To reach a better run-time under certain conditions, the paper also provides a private cutting plane method by adding local noise.
3. Technical soundness: This paper is technically sound with solid proofs and analysis. It has an informative technical overview. It also includes a numerical experiment to strengthen the result, comparing their algorithm with DP-GD on the geometric median task. Under large R, their algorithm shows significant improvement over DP-GD.
4. Reproducibility: The quantile radius technique in the warm-up algorithm is of independent interest. It may be utilized to solve other DP convex optimization tasks. The authors also state the open problem of developing a linear time algorithm with the optimal excess error.
Weaknesses: More words can be added in Section 1.1 to introduce the intuition of their solution (e.g., how to solve the challenge in line 122?). A comparison of run-time and excess risk with methods from previous work may also be helpful.
Technical Quality: 4
Clarity: 3
Questions for Authors: N/A
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive and thorough review! We respond to the questions below.
> **W1: Presentation of the ideas behind the algorithms**
Thanks for the suggestion. In the revised version of the paper, we expanded on these challenges and also presented the high-level ideas of the proposed solutions. For instance, we added this sentence regarding the challenges discussed in Line 122:
The general analysis of the exponential mechanism involves analyzing the sensitivity of the numerator and normalizing factor separately. We provide a novel analysis of the exponential mechanism with the geometric median loss function as the quality function by **coupling** the sensitivity analysis of the numerator and denominator. In particular, we show that given a set of candidate points that lie in a ball with radius $\Delta$, the noise due to the private selection is only $O(\Delta/\epsilon)$ instead of the $O(R/\epsilon)$ one would obtain by a direct application of the exponential mechanism.
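As a generic illustration of private selection among candidate points, the standard exponential mechanism with quality equal to the negative loss looks as follows. This is a textbook sketch, not the coupled sensitivity analysis described above, and the values are toy choices:

```python
import math

def exp_mechanism_probs(losses, eps, sensitivity):
    # Sampling probabilities of the exponential mechanism with quality -loss;
    # lower loss => exponentially higher probability of being selected.
    scores = [math.exp(-eps * l / (2.0 * sensitivity)) for l in losses]
    z = sum(scores)
    return [s / z for s in scores]

losses = [3.0, 1.0, 2.5]   # toy geometric-median losses of three candidates
probs = exp_mechanism_probs(losses, eps=1.0, sensitivity=0.5)
```

The point of the coupled analysis quoted above is precisely that, for candidates inside a ball of radius $\Delta$, the effective `sensitivity` scales with $\Delta$ rather than with the worst-case radius $R$.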
> **W2: Comparison with prior work**
Since the geometric median loss function is convex and 1-Lipschitz, one might use an off-the-shelf method for private convex optimization for this task. However, a key shortcoming of this approach is that the excess error scales linearly with $R$. In particular, one can use DP-SGD from [BST14]. The runtime of this method is $n^2 d$ and the excess error is
$$
F(\text{DP-SGD};\mathbf{X}^{(n)}) - \min_{\theta \in \mathbb{R}^d} F(\theta;\mathbf{X}^{(n)}) = O\left(\frac{R \sqrt{d}}{\epsilon}\right)
$$
Based on your suggestion, we have added a row to Table 1 summarizing the performance of the off-the-shelf method for private convex optimization for this task.
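Since each per-example loss $\lVert\theta - x_i\rVert$ is 1-Lipschitz, the per-example gradients are unit vectors, so gradient clipping in DP-SGD is essentially free. A minimal numpy check of this property with one noisy gradient step; this is our toy sketch, and the noise scale `sigma` is arbitrary, not calibrated to any privacy budget:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))            # toy dataset near the origin
theta = np.array([5.0, 5.0, 5.0])       # current iterate, far from the data

# Gradient of ||theta - x|| w.r.t. theta is the unit vector (theta - x)/||theta - x||.
diffs = theta - X
grads = diffs / np.linalg.norm(diffs, axis=1, keepdims=True)
grad_norms = np.linalg.norm(grads, axis=1)

# One Gaussian-mechanism-style noisy gradient step (sigma chosen arbitrarily).
sigma = 0.01
theta_next = theta - (grads.mean(axis=0) + sigma * rng.normal(size=3))

loss_before = np.linalg.norm(theta - X, axis=1).sum()
loss_after = np.linalg.norm(theta_next - X, axis=1).sum()
```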
---
Rebuttal Comment 1.1:
Comment: Thank you for your response! My questions have been addressed.
---
Reply to Comment 1.1.1:
Comment: Thank you for the reply! We will update the paper based on your suggestions. | Summary: This paper considers the differentially private geometric median problem. The goal of the geometric median problem is given n data points, find a point to minimize the sum of Euclidean distances from this point to all data points.
The previous DP-GD algorithm requires prior knowledge of the radius R of the dataset and also has excess risk depending linearly on this radius R.
This paper provides three algorithms for the differentially private geometric median. The LocDPSGD and LocCuttingPlane algorithms achieve approximate DP, with an excess risk bound that depends on the optimal cost instead of the radius R. Finally, the paper provides an exponential-time algorithm that achieves pure DP.
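For background: in the non-private setting the geometric median is classically computed with Weiszfeld's fixed-point iteration. The sketch below is that standard baseline, not one of the paper's private algorithms:

```python
import numpy as np

def weiszfeld(X, iters=200, tol=1e-9):
    # Weiszfeld's iteration: repeatedly re-weight points by inverse distance.
    y = X.mean(axis=0)
    for _ in range(iters):
        d = np.maximum(np.linalg.norm(X - y, axis=1), tol)  # avoid divide-by-zero
        w = 1.0 / d
        y_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < tol:
            return y_new
        y = y_new
    return y

square = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
gm = weiszfeld(square)   # by symmetry, the center of the square
```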
Strengths: 1 This paper considers the differentially private geometric median problem, which is an important problem since the geometric median is widely used in many algorithms and requires privacy for many real-world applications.
2 This work provides interesting techniques and algorithms to achieve the excess risk bound depending on the optimal cost instead of the radius of data points R. They developed a private estimation of quantile radius and private localization for the warm-up stage such that the excess risk does not depend on the radius R.
Weaknesses: -
Technical Quality: 3
Clarity: 3
Questions for Authors: -
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for all the positive comments! | Summary: This paper studies DP algorithms for geometric median. The paper starts with DP-SGD on geometric median as the baseline. The error scales linearly in R, the radius of a ball containing all data points. This can be a very loose bound since the geometric mean is known to be robust to outliers and one single outlier can make this radius R to be arbitrarily large.
The algorithms proposed in this paper have both multiplicative and additive error. The idea is to replace the estimate of R by the quantile radius -- the value \Delta such that a ball of radius \Delta contains at least a certain fraction of the points. This comes at the cost of a multiplicative error.
The algorithm has two phases: the first phase estimates the quantile radius in a private manner and the second phase finds the geometric median. The first phase uses techniques from [NSV16]. For the second phase the paper reports several ideas: 1) just applying DP-SGD using the quantile radius; 2) using a cutting plane method (a non-gradient-descent method); and 3) for pure DP, using the inverse smooth sensitivity of [AD20] -- this one is not efficient in terms of running time.
In summary I think this paper did an OK job but the ideas are in general incremental. I would give a positive rating but I am not extremely enthusiastic (or lacks excitement).
Strengths: The paper has the merit of developing DP algorithms for geometric medians. The algorithms have both multiplicative and additive errors. Adding these algorithms to the repository of DP algorithms for this problem is nice.
Weaknesses: The idea of using "robust" notion of radius, essentially the quantile radius, is not new and has been a standard in many prior work on clustering algorithms -- search clustering with outliers. It will be good to include citation and discussion on that.
The writing can be improved. There are many places where clarification and definition will be helpful to readers. Some examples are mentioned below.
On DP-SGD, may you explain what is the definition of "excess error" (line 23)? Is this the same as excess risk (line 45)?
Equation (2), what is A_n? the DP-SGD algorithm? It is never defined.
Line 63, effective diameter of the data points -- it will be good to explain what effective diameter means.
Table 1, define lower case r? Again r does not appear until much later in the paper.
Technical Quality: 3
Clarity: 2
Questions for Authors: See above.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their thorough review and valuable feedback. Below, we address each of the points raised.
> **Q1: Robust Notion of Radius and Our Contribution**
We agree with the reviewer that proposing a new robust notion of radius is not a contribution of this work. Our primary contribution lies in connecting the notion of quantile radius to the convergence of gradient-based methods. We establish several fundamental properties of the quantile radius and leverage these properties to design private algorithms:
For instance, 1) the geometric median function satisfies a **growth condition** for a point $\theta$ such that $\lVert \theta - \theta^\star \rVert \gtrsim \Delta_{\gamma n}(\theta^\star)$ (see Lemma 2.6 for a formal statement). This is a key result for developing the warm-up algorithm: it lets us show that we can design gradient-based methods that use large step sizes with the goal of consuming less privacy budget. 2) The quantile radius is a data-dependent quantity, but one can privately estimate it using pairwise distances between the data points. In Lemma 2.3, we show that computing these pairwise distances serves as an effective proxy for determining the quantile radius, which again relies on specific properties of the quantile radius.
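As a non-private illustration of the quantile radius, the smallest radius v such that at least k points lie within distance v of a center can be read off from sorted pairwise distances (the paper's private estimator works with noisy versions of such counts; the helper name here is ours):

```python
import numpy as np

def quantile_radius(X, center, k):
    # Smallest v such that at least k points lie within distance v of `center`.
    d = np.sort(np.linalg.norm(X - center, axis=1))
    return d[k - 1]   # the k-th smallest distance (center itself counts)

X = np.arange(10.0)[:, None]           # points 0, 1, ..., 9 on a line
r5 = quantile_radius(X, X[0], k=5)     # radius covering 5 points around x_0 = 0
n_covered = int((np.linalg.norm(X - X[0], axis=1) <= r5).sum())
```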
In summary, we emphasize that the main contribution of our paper is not introducing a new notion of radius. Instead, we demonstrate that the quantile radius has several interesting connections to various properties of the geometric median function, and we exploit these connections to develop a pair of polynomial-time and sample-efficient private algorithms.
Based on your suggestion, we have added a discussion on related work in clustering in the non-private case. In particular, we will mention in the revised version of the paper that the robust notion of radius appears in clustering with outliers algorithms as well (for instance, [BHI02]). We have already discussed the work on the private version of clustering, such as [NSV16; NS18; CKMST21; TCKMS22]. We appreciate the reviewer’s suggestion to include further work on robust clustering.
[BHI02] Bādoiu M, Har-Peled S, Indyk P. Approximate clustering via core-sets. In Proceedings of the thirty-fourth annual ACM symposium on Theory of computing 2002 May 19 (pp. 250-257).
> **Q2: Excess Error and Excess Risk**
Thank you for catching this typo. We use the notion of excess error of Algorithm $\mathcal{A}$ to refer to the achievable cost compared to the global optimum, i.e., $F(\mathcal{A}_n;\mathbf{X}^{(n)}) - F(\theta^\star;\mathbf{X}^{(n)})$. We fixed it.
> **Q3: $\mathcal{A}_n$ in Equation 2**
Thank you for pointing this out. As you mentioned, it is DP-SGD. We fixed this in the paper.
> **Q4: Effective Diameter**
In Line 63, effective diameter of the data points is an informal way to refer to the quantile radius. Later in Line 77, we mentioned that quantile radius formalizes the idea of effective diameter. Based on your comment, we added a sentence to help a reader understand better the effective diameter.
> **Q5: $r$ in Table 1**
$r$ can be thought of as a very small constant that shows the minimum separation of the data points. More precisely, assume that no data point has $3n/4$ of the other data points within a ball of radius $r$ centered on it. Then, as we show in the second part of Theorem 3.1, our algorithm has a “purely” multiplicative error. However, in order to get a completely **general result** without placing any assumption on the dataset, we need to incur an additive error proportional to $r$. Note that as the sample complexity and runtime depend only on $\log(1/r)$, we can assume it is very small. Introducing $r$ is unavoidable due to the impossibility results in [NSV16].
Based on your suggestion to improve readability, we have removed $r$ from the summary of the results in Table 1 in the introduction and only present the multiplicative error guarantee. We defer the general results, including the additive error, to the later sections.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. Please include these revision items in the paper.
---
Reply to Comment 1.1.1:
Comment: Thank you for your helpful feedback! We will include a discussion on the robust notions of radius and update the introduction based on your suggestions. | null | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper studies differentially private algorithms for computing geometric median (GM) of a dataset. Previous methods such as DP-SGD requires knowing in advance that all data points live in a ball of radius R (contribution bounding), and the resulting utility guarantee also has a dependency on that R, which can get very big even though the majority of the points are concentrated around the GM. The paper proposes three algorithms, two for approximate DP and one for pure DP. For approximate DP, we run a pre-step for finding a quantile radius which covers the majority of the points and then find a good initialization point using this radius. From there we can either run DP-SGD or using an DP-adapted cutting plane algorithm. For pure-DP the authors describe a discretized inverse-sensitivity mechanism.
Strengths: The problem solved in this paper is a very interesting problem, that is important in practice, and the solution provided in this paper provides the first performance guarantee without the dependency on the spread of dataset that can be achieved with an implementable, polynomial-time algorithm, which is impressive.
While developing the algorithms the paper made many observations that are insightful and could be worth knowing for the community. I think the way it relates the quantile radius around the GM to the average of top m quantile radii of all points in the dataset is clever, and so is the relationship between distance from GM and median cost expressed in terms of quantile radius.
Although there are certain complicated aspects of algorithms, the overall quality of writing is good.
Weaknesses: I found the pure-DP algorithm not as interesting as the approximate one for a couple of reasons: (1) it requires an oracle finding exact GM, which is not known, so I'm wondering what happens if we are given an approximation but then the definition of $len_r(X, y)$ is messed up; (2) the paper does not talk about how to compute $len_r(X, y)$; which I think is non-trivial? (3) is the purpose of introducing $r$ mostly to discretize the original inverse sensitivity mechanism? what is the benefit of using $r$ compared to $r \to 0$?
Numerical setting is relatively simple but I don't really mind that since the greatest merit of this paper is in the theory.
Technical Quality: 3
Clarity: 3
Questions for Authors: It seems that the authors have used $\rho-zCDP$ and $(\epsilon, \delta)$-DP interchangeably but without providing preliminaries on the connection between the two? It's better to unify.
For others, see the weakness part.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive and thorough review. We respond to the main points below.
> **On Our Pure-DP Algorithm**
Our pure DP algorithm is not the main focus of our paper. Our main result is a pair of polynomial-time and sample-efficient algorithms for this problem using gradient methods under the constraint of approximate DP. However, it is important to understand the achievable utility-privacy tradeoff under the stringent notion of pure DP. Our pure-DP algorithm is not computationally efficient, but it shows that under pure DP we can achieve a similar data-dependent utility guarantee with $\sqrt{d}$ more samples. The analysis of this algorithm relies on several robustness properties of the geometric median, which might be of independent interest.
> **W1: Oracle access to exact GM**
Thank you for raising this point! We can relax the assumption of having access to an exact geometric median oracle. Instead, consider an oracle that, for every dataset $\mathbf{X}^{(n)}$, outputs $\theta$ such that $\lVert \theta - \mathrm{GM}(\mathbf{X}^{(n)}) \rVert \leq \alpha$, i.e., the output is within distance $\alpha$ of the exact geometric median. We can modify the definition of the length function to use such an oracle. Naturally, an inexact oracle changes the utility guarantees, and we incur an additional additive error. We did not develop this relaxation as it would add complexity without providing additional insight.
> **W2: Computing $\text{len}_r$ function**
Computing this function is not efficient in general. This is a well-known limitation of the inverse sensitivity approach. Indeed, most of the DP algorithms that try to adapt to local sensitivity, such as smooth sensitivity [NRS07] and propose-test-release [DR09], have a similar step that one needs to compute such a length function.
[NRS07] Nissim K, Raskhodnikova S, Smith A. Smooth sensitivity and sampling in private data analysis. In Proceedings of the thirty-ninth annual ACM symposium on Theory of computing 2007 Jun 11 (pp. 75-84).
[DR09] Dwork C, Lei J. Differential privacy and robust statistics. In Proceedings of the forty-first annual ACM symposium on Theory of computing 2009 May 31 (pp. 371-380).
> **W3: Benefit of r>0**
Introducing $r>0$ is necessary for analyzing the algorithm: in the analysis of the sampling-based algorithm, one can construct pathological datasets for which the normalization factor of the sampling distribution becomes infinite. A standard technique to address this is to smooth out the cost function using $r>0$. Notice that the sample complexity of this algorithm scales with $\log(1/r)$, as shown in Theorem 4.1. Therefore, we need to assume $r$ is bounded away from zero.
> **Q1: Privacy definitions**
Thanks for the suggestion! We already included in Appendix 3 (A.3) definitions of approximate-DP and zCDP as well as the connection between these two notions. Based on your suggestion, in the final version of this paper, we will add this appendix to the main body.
---
Rebuttal Comment 1.1:
Title: Thank you for your response
Comment: I've read the authors' response and my questions have been addressed adequately. I will keep my current score at 7. Thank you!
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your reply. We will revise our paper according to the constructive comments in your reviews. | null | null | null | null | null | null |
Text-Guided Attention is All You Need for Zero-Shot Robustness in Vision-Language Models | Accept (poster) | Summary: The paper focuses on enhancing the robustness of CLIP models against adversarial perturbations. The approach utilizes saliency maps generated by the inner products between image and text embeddings. Two regularization terms are introduced: the first aims to align the saliency maps of the original and its adversarial examples, while the second seeks to match the saliency maps of the original example between the pre-trained CLIP model and the target model. Experimental results indicate that fine-tuning CLIP with these two regularization terms improves its robustness to attacks.
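The "Text-Guided Attention" described in this summary, i.e. saliency from image-patch/text inner products plus an alignment regularizer, can be schematized as below. This is an illustrative sketch, not the authors' implementation; the shapes, names, and softmax normalization are assumptions:

```python
import numpy as np

def text_guided_map(patch_emb, text_emb):
    # Saliency over image patches: softmax of patch-text inner products.
    logits = patch_emb @ text_emb
    e = np.exp(logits - logits.max())   # shift for numerical stability
    return e / e.sum()

def alignment_loss(map_a, map_b):
    # Squared L2 distance between two attention maps (one regularization term).
    return float(np.sum((map_a - map_b) ** 2))

rng = np.random.default_rng(0)
patches_clean = rng.normal(size=(49, 8))                       # 7x7 patches, toy dim 8
patches_adv = patches_clean + 0.01 * rng.normal(size=(49, 8))  # perturbed view
text = rng.normal(size=8)

m_clean = text_guided_map(patches_clean, text)
m_adv = text_guided_map(patches_adv, text)
reg = alignment_loss(m_clean, m_adv)   # small when the two maps agree
```

The second regularizer described in the summary would be the same `alignment_loss`, applied between the target model's map and the frozen pre-trained model's map on the clean input.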
Strengths: 1. The paper is well-written and organized, with clear descriptions of the motivation and methods.
2. To the best of my knowledge, enhancing CLIP robustness by aligning the saliency maps produced through inner products between image and text embeddings, termed "Text-Guided Attention" in the paper, is a novel approach.
3. It is good to see the detailed ablation analysis.
Weaknesses: 1. The abstract claims that the goal is to enhance both the generalization and robustness of the CLIP model; however, the experimental results do not fully support this assertion. Specifically, regarding generalization, Table 2 shows a significant decrease in clean accuracy compared to the original pre-trained CLIP model. I kindly ask the authors to use the terms "generalizability" and "adversarial robustness" more carefully within the paper. Typically, the generalizability of CLIP models refers to their zero-shot classification performance across different datasets. It may be argued that the proposed method improves the adversarial robustness of CLIP models while maintaining relatively good generalizability compared to previous adversarial training methods. However, the authors must acknowledge the decrease in generalizability compared to original CLIP model as clearly evidenced by the results in Table 2.
2. Since the training uses adversarial examples produced by PGD, it is not surprising that the proposed method results in a model with good robustness to PGD attacks, as shown in Table 1. However, when the evaluation comes to AutoAttack, the advantage of TGA-ZSR over the previous best method is marginal, as shown in Table 3. Could the authors provide explanations for this point, and could they demonstrate the robustness of the resulting models to more attacks?
Technical Quality: 2
Clarity: 4
Questions for Authors: See weaknesses.
Confidence: 4
Soundness: 2
Presentation: 4
Contribution: 2
Limitations: The primary limitation of the paper is that the experimental results presented do not sufficiently support its claims. I recommend that the authors either revise the claims to align more closely with the data presented or extend the experimental section to provide additional evidence supporting their assertions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Use the terms "generalizability" and "adversarial robustness" more carefully.**
**A1**: Thank you for pointing that out! The reviewer is correct. To balance the performance between clean and adversarial samples, our goal is to maintain the generalization and enhance the adversarial robustness of the original CLIP model. We will modify it in the revised version of the paper.
**Q2: The advantage of the proposed method is less on AutoAttack than PGD.**
**A2**: In the paper, we train our model using the PGD attack and validate our approach on both PGD and AutoAttack. Thus, testing on AutoAttack serves as a cross-validation experiment. Additionally, AutoAttack combines multiple attack strategies, effectively covering a wide range of defense techniques and model architectures. It can adapt to the characteristics and defense strategies of the target model, making its attacks more powerful. For these reasons, AutoAttack generally results in lower adversarial accuracy compared to methods like PGD, and improving model performance against AutoAttack can be challenging. Consequently, our method shows less improvement on AutoAttack, but we still achieved state-of-the-art performance.
**Q3: Results on more attacks.**
**A3**: To further evaluate the effectiveness of the proposed method, as suggested by the reviewer, we conducted experiments using another type of attack, specifically the CW attack [c]. The results, shown below, demonstrate that our method significantly outperforms the state-of-the-art method PMG-AFT on both adversarial and clean samples. This substantial margin indicates the adversarial robustness of our proposed method.
Table 7: Zero-shot adversarial robust accuracy and clean accuracy across 16 datasets with **CW attack**.
| Test |Methods | Tiny- | C.10 | C.100 | STL | SUN | Food | Pets | Flowers | DTD | EuroS. | Airc.| ImageN. | Ca.101 | Ca.256 | Cars | PCAM | Avg. |
|:------:|:-------:|:-------------:|:--------:|:---------:|:------:|:------:|:-------:|:----------:|:----------:|:-----:|:-------:|:-------------:|:--------:|:-----------:|:-----------:|:------------:|:-----:|:-------:|
| | CLIP | 0.21 | 0.36 | 0.10 | 10.59 | 1.16 | 0.82 | 1.23 | 1.09 | 2.18 | 0.01 | 0.00 | 1.14 | 13.50 | 7.36 | 2.36 | 0.07 | 3.64 |
| Robust | PMG-AFT | 44.59 | 44.86 | 24.15 | 74.11 | 19.99 | 17.33 | 39.88 | 20.95 | 13.51 | 12.09 | 1.47 | 19.51 | 60.99 | 44.46 | 10.57 | 48.59 | 31.07 |
| | Ours | 63.85 | 60.50 | 34.62 | 84.11 | 22.03 | 33.28 | 58.33 | 32.95 | 21.22 | 13.89 | 4.56 | 20.42 | 70.34 | 59.73 | 20.20 | 48.02 | 40.50 |
| | CLIP | 57.26 | 88.06 | 60.45 | 97.04 | 57.26 | 83.89 | 87.41 | 65.47 | 40.69 | 42.59 | 20.25 | 59.15 | 85.34 | 81.73 | 52.02 | 52.09 | 64.42 |
| Clean | PMG-AFT | 66.98 | 74.50 | 44.66 | 88.95 | 37.31 | 37.42 | 66.12 | 35.65 | 21.49 | 18.02 | 4.53 | 35.92 | 76.82 | 61.95 | 25.00 | 49.97 | 46.58 |
| | Ours | 76.99 | 86.24 | 56.11 | 93.66 | 47.12 | 56.80 | 77.65 | 47.10 | 28.94 | 24.64 | 11.52 | 43.89 | 80.65 | 74.76 | 35.67 | 49.78 | 55.72 |
[c] Carlini, Nicholas and David A. Wagner. “Towards Evaluating the Robustness of Neural Networks.” 2017 IEEE Symposium on Security and Privacy (SP) (2017): 39-57.
**We will revise and add the corresponding context in the final version.**
---
Rebuttal Comment 1.1:
Comment: The authors have addressed my concern, and I will change my score from 4 to 5.
---
Reply to Comment 1.1.1:
Comment: Thank you for taking the time to read our response and increasing your score! We are glad to hear that the response addressed your concerns. | Summary: This work studies the robustness to adversarial samples for CLIP. Inspired by the observation that adversarial perturbations induce shifts in text-guided attention, the work proposes a simple yet effective approach to improve zero-shot robustness, i.e., aligning the text-guided attention of clean samples and adversarial samples. The state-of-the-art performance across 16 datasets demonstrates the effectiveness of the proposed method.
Strengths: This work focuses on an interesting topic, which involves the robustness of VLMs. To improve the zero-shot robustness of CLIP, the work proposes a text-guided attention-based method, which is simple but intuitive and effective. Extensive experiments on 16 datasets demonstrate the effectiveness of the proposed method. Additionally, the paper is well written and the experiments are solid.
Weaknesses: I would like to see more comparison with other types of attention, e.g., vision-only self-attention. Does the proposed method work because of the attention mechanism or the text guidance?
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. How do you utilize the class token in the ViT backbone of CLIP?
2. In Tables 13 and 14, why do the L1 and L2 losses have very different performance?
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The paper has discussed the limitations and potential impacts of the work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: More comparison with other types of attention.**
**A1**: We also considered this point when exploring the comprehensive effect of attention. Thus, we conducted an experiment by replacing the text-guided attention with vision-only attention, as detailed in the first part of Section 4.4 ("Different types of attention"). We used Grad-CAM to generate the vision-only attention map. The results in Table 4 demonstrate that vision-based attention already achieves results comparable to the state-of-the-art method PMG-AFT in terms of both zero-shot adversarial robustness accuracy and clean accuracy, although lower than those achieved with text-guided attention. This indicates that the zero-shot adversarial robustness of vision-language models benefits from the constraints of attention mechanisms, with text-guided attention further enhancing performance. We will explore more types of attention in future analysis.
**Q2: How to use class token?**
**A2**: In Figure 2, the class token used as $f(x_a)$ is obtained from $f_g^{tar}(x_a)$ after pooling. We will clarify this in more detail in future versions.
**Q3: In Tables 13 and 14, why the losses L1 and L2 have very different performance.**
**A3**: We investigated the use of L1 and L2 norms to constrain the text-guided attention map. The results indicated that the L2 norm significantly outperforms the L1 norm. The primary reason is that the L2 norm tends to distribute the penalty more evenly across all elements of the attention map. This leads to smoother and more stable attention distributions, which can enhance the model's adversarial robustness.
In contrast, the L1 norm tends to produce sparser solutions by penalizing non-zero elements more heavily. While this can be beneficial in certain contexts by promoting sparsity, it may lead to less stable and less coherent attention maps in the case of text-guided attention, ultimately degrading performance.
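The different gradient behavior described above can be illustrated with a small sketch (this example is ours, not from the paper; the variable names and values are illustrative). The gradient of an L2 penalty on an attention-map difference scales with the error, spreading correction evenly, while the L1 gradient has the same magnitude everywhere, pushing small deviations toward zero and producing sparser maps:

```python
import numpy as np

# Hypothetical element-wise differences between two attention maps.
diff = np.array([0.01, 0.1, 1.0])

# L2 penalty 0.5 * d^2: gradient is d itself, so large deviations are
# corrected strongly and small ones gently -> smooth, stable attention.
grad_l2 = diff

# L1 penalty |d|: gradient is sign(d), equally large for every element,
# so tiny deviations are driven to exactly zero -> sparse attention.
grad_l1 = np.sign(diff)

print(grad_l2.tolist())  # gradient magnitudes proportional to the error
print(grad_l1.tolist())  # constant-magnitude gradients
```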
**We will revise and add the corresponding context in the final version.**
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response. My concerns have been addressed.
Additionally, I have one more question about the proposed model: why can introducing text guidance significantly improve robustness? I think this work is insightful and the proposed model is interesting, and thus I would like to see more analysis or more insights.
If the authors can address this, I am willing to increase my rating.
---
Rebuttal 2:
Comment: Thank you for taking the time to review our response and for recognizing the insights and interest in our work. We are pleased to hear that our previous response addressed your concerns.
Regarding your new question, we believe that the significant improvement in robustness with text guidance can be attributed to two key aspects:
(1) **Selective Focus on Relevant Features**: Text-guided attention allows the model to selectively focus on the most relevant parts of the image based on the text input. For instance, if the text describes "a photo of an apple," the attention mechanism guides the model to focus on the apple in the image, while ignoring irrelevant details. This selective focus enables the model to extract the most pertinent information, thus enhancing robustness by minimizing distractions from non-essential features.
To illustrate the advantage of text-guided attention over vision-only attention, we calculated the mean Intersection over Union (mIoU) of the attention maps generated by both methods on the validation set of the ImageNet-S-50 dataset [d], which includes ground truth masks for semantic segmentation tasks. The results below demonstrate that the mIoU of text-guided attention is significantly higher than that of vision-only attention, indicating the superior effectiveness of our approach.
Table 8: mIoU Comparison of Text-Guided and Vision-Only Attention on ImageNet-S-50 Dataset.
| Method | mIoU |
|----------| -------|
| Text-guided | **0.485** |
| Vision-only | 0.400 |
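A rough sketch of how an IoU between an attention map and a ground-truth segmentation mask could be computed (this is our illustration, not the authors' evaluation code; the `attention_iou` helper and the 0.5 binarization threshold are assumptions):

```python
import numpy as np

def attention_iou(attention, gt_mask, thresh=0.5):
    # Binarize the attention map, then compute intersection-over-union
    # against the binary ground-truth mask.
    pred = attention >= thresh
    gt = gt_mask.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

# Toy 2x2 example: attention fires on three cells, mask covers two.
att = np.array([[0.9, 0.8], [0.1, 0.7]])
gt = np.array([[1, 1], [0, 0]])
print(attention_iou(att, gt))  # intersection 2, union 3 -> 2/3
```

Averaging this score over all validation images and classes would give an mIoU of the kind reported in Table 8.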
(2) **Consistency Across Modalities**: By integrating text-guided attention, the model maintains consistency between the visual and textual modalities. The attention mechanism aligns visual features with corresponding textual descriptions, reinforcing the semantic connections between the two. This alignment reduces the risk of the model making inconsistent or erroneous predictions based on a single modality, thereby enhancing overall robustness.
To further support this point, we conducted an experiment measuring the difference (L2 distance) in attention maps before and after an adversarial attack using both the text-guided attention-based model and the vision-only model on the TinyImageNet validation set. The results below show that the text-guided attention model maintains better consistency after the attack compared to the vision-only model.
Table 9: L2 Distance Comparison Before and After Attack on TinyImageNet.
| Method | L2 distance|
|----------| -------|
| Text-guided | **29.886** |
| Vision-only | 52.918 |
Based on these findings, we believe that the introduction of text guidance significantly enhances the robustness of the model.
We hope this response addresses your concerns.
[d] Shanghua Gao, Zhong-Yu Li, Ming-Hsuan Yang, Ming-Ming Cheng, Junwei Han, and Philip Torr. Large-scale unsupervised semantic segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022.
---
Rebuttal Comment 2.1:
Comment: Thanks for the authors' response. I would like to confirm the details of your experiment settings with you. Specifically, I wonder how you calculate the attention map when using text-guided attention and vision-only attention in this experiment (and in Fig. 1).
---
Reply to Comment 2.1.1:
Comment: Thanks for your question. We will incorporate these discussions into the final version.
We compute the **text-guided attention** $A(x)$ by simply multiplying the text embedding $g(t)$ with the vision embedding $f_{g}(x)$. Specifically, this is expressed as $A(x)=f_{g}(x) \cdot g(t)^ \mathsf{T} $, where $f_{g}(x)$ represents the global image features before the pooling operation of the class token $f(x)$ (as described in Equation 5 and in Lines 162-166 of the paper). Text-guided attention incorporates textual context, potentially offering more precise and context-aware localization of relevant image features.
For comparison, the **vision-only attention** is obtained using the widely adopted visualization technique in deep learning, Grad-CAM [49]. The Grad-CAM heatmap $L_{\text{Grad-CAM}}^c $ for class $c$ is defined as $L_{\text{Grad-CAM}}^c = \text{ReLU} \left( \sum_k \alpha_k^c A^k \right)$, where $\alpha_k^c = \frac{1}{Z} \sum_i \sum_j \frac{\partial y^c}{\partial A_{ij}^k}$. Here, $y^c$ denotes the class score, $i,j$ are the spatial dimensions within the activation map of the $k$-th convolutional layer $A^k$, and $Z$ represents the total number of spatial locations $i \times j$ in the feature map. The Grad-CAM method highlights regions of the image that are most relevant to the class prediction by calculating the gradients of the class score with respect to the activation map.
Figure 1 demonstrates the results of our text-guided attention. We first convert a grayscale attention map into a color map using the ‘cv2.applyColorMap’ function. This colorized attention map is then overlaid onto the original image through a weighted sum: $image \times 0.4 + attention \times 0.6$ (these weights can be adjusted to achieve the desired visual emphasis), as similarly demonstrated in references [e] and [f]. The resulting image, shown in Figure 1, effectively highlights the areas of focus for the model, providing a clear visualization of its attention.
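The weighted-sum overlay can be sketched as follows (our illustration; to stay self-contained it broadcasts the grayscale attention to three channels as a stand-in for `cv2.applyColorMap`, which would produce a proper color map):

```python
import numpy as np

def overlay(image, attention, w_img=0.4, w_att=0.6):
    # image: (H, W, 3) uint8; attention: (H, W) values in [0, 1]
    # Stand-in for a color map: replicate the attention across RGB channels.
    att_rgb = (attention[..., None] * 255).repeat(3, axis=-1)
    # Weighted blend: image * 0.4 + attention * 0.6, then clamp to uint8 range.
    out = w_img * image.astype(float) + w_att * att_rgb
    return np.clip(out, 0, 255).astype(np.uint8)

img = np.full((7, 7, 3), 100, dtype=np.uint8)  # flat gray toy image
att = np.zeros((7, 7)); att[3, 3] = 1.0        # single highlighted patch
out = overlay(img, att)
print(out[3, 3, 0], out[0, 0, 0])  # 193 40
```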
[49] Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, pages 618-626, 2017.
[e] Song, Y., Jang, S., Katabi, D., & Son, J. (2023). Unsupervised Object Localization with Representer Point Selection. 2023 IEEE/CVF International Conference on Computer Vision (ICCV), 6511-6521.
[f] Li, Y., Wang, H., Duan, Y., & Li, X. (2023). CLIP Surgery for Better Explainability with Enhancement in Open-Vocabulary Tasks. ArXiv, abs/2304.05653. | Summary: This paper proposes an approach to improve the adversarial robustness of vision-language models while maintaining performance on clean images. The key idea is to align the attention maps of adversarial examples with those of clean examples. Extensive experiments demonstrate the effectiveness of the proposed method in zero-shot adversarial robustness across multiple datasets while maintaining high clean accuracy.
Strengths: 1. The proposed method is simple, straightforward and effective. It leverages text-guided attention to significantly enhance zero-shot adversarial robustness while maintaining clean accuracy across diverse datasets.
2. The experiments are comprehensive. The results on 16 datasets demonstrate consistent improvements over baseline methods.
Weaknesses: 1. The method is only demonstrated on CLIP, raising questions about its applicability to other vision-language models or architectures.
2. The method introduces additional hyperparameters without a clear strategy for tuning them across different datasets or tasks.
3. No error bars or statistical significance tests reported for the experimental results.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. How sensitive is the method to the choice of hyperparameters? Is there a principled way to select these across different datasets or tasks?
2. Are there any scenarios or types of adversarial attacks where this method performs poorly? What are the limitations of this approach?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors did not explicitly address the limitations such as the types of adversarial attacks where this method performs poorly.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Applicability to other vision-language models.**
**A1**: We follow TeCoA and PMG-AFT, focusing on improving the zero-shot adversarial robustness of the CLIP model for classification tasks. To further validate the effectiveness of our method as suggested by the reviewer, we replaced the CLIP model with another vision-language model, **OpenFlamingo-3B**[a]. In this setup, ViT-L/14 serves as the vision encoder and MPT-1B as the language encoder. Additionally, we evaluated our method on two other tasks: image captioning and visual question answering (VQA). We report the CIDEr score for image captioning and VQA accuracy for visual question answering tasks. We employ the APGD attack[b] with a strength of epsilon 8/255 for 10 iterations. The results are shown below. Our method outperforms FARE in most scenarios for both image captioning and VQA tasks across a range of datasets. We believe that with task-specific design enhancements, our results can be further improved.
Table 2: CIDEr Scores for Image Captioning Task with OpenFlamingo-3B.
| | | COCO | Flickr30k |
|---------|-------|------------|-----------|
| Robust | FARE | 3.68 | 2.71 |
| | Ours | **4.13** | **2.90** |
| Clean | FARE | 3.09 | 3.02 |
| | Ours | **3.56** | **3.13** |
Table 3: Accuracy for Visual Question Answering (VQA) with OpenFlamingo-3B.
| | | TextVQA | VQAv2 |
|---------|-------|------------|-----------|
| Robust | FARE | 3.58 | 31.88 |
| | Ours | **4.44** | **32.14** |
| Clean | FARE | 3.40 | **35.38** |
| | Ours | **4.38** | 34.72 |
[a] Awadalla, A., et al. OpenFlamingo: an opensource framework for training large autoregressive vision-language models. arXiv:2308.01390, 2023.
[b] Croce, F. and Hein, M. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. In ICML, 2020.
**Q2: Strategy for tuning hyper-parameters.**
**A2:** Following the protocol of previous works (TeCoA, PMG-AFT, FARE), we fine-tuned the CLIP model on adversarial samples from a single dataset (Tiny-ImageNet in our case) for adversarial fine-tuning and subsequently evaluated its performance across 15 datasets, including Tiny-ImageNet itself. Thus, we only need to tune hyperparameters on a single training dataset. We randomly selected 80% of the training set for training and the remaining 20% for validation to choose the hyperparameters. The validation set results are shown below. The final results on the test set were obtained by training on the entire training set using the optimal hyperparameters (alpha=0.08, beta=0.05) identified from the validation set.
Table 5 : Results on validation set of Tiny-ImageNet dataset.
| Hyper-parameters | Robust | Clean | Average |
|-------|-------|-------|-------|
| alpha=0.07 beta=0.05 | 64.32 | 75.92 | 70.12 |
| alpha=0.08 beta=0.04 | 47.25 | 76.20 | 61.72 |
| alpha=0.08 beta=0.05 | 64.01 | 77.79 | **70.90** |
| alpha=0.08 beta=0.06 | 58.28 | 76.08 | 67.18 |
| alpha=0.09 beta=0.05 | 46.20 | 76.10 | 61.15 |
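The selection procedure amounts to picking the grid point with the best robust/clean average on the held-out split; a minimal sketch (our illustration, using the hypothetical hard-coded values from Table 5 rather than an actual training loop):

```python
# Validation-split results: (alpha, beta) -> (robust acc, clean acc).
results = {
    (0.07, 0.05): (64.32, 75.92),
    (0.08, 0.04): (47.25, 76.20),
    (0.08, 0.05): (64.01, 77.79),
    (0.08, 0.06): (58.28, 76.08),
    (0.09, 0.05): (46.20, 76.10),
}
# Choose the hyperparameters maximizing the average of robust and clean accuracy.
best = max(results, key=lambda k: sum(results[k]) / 2)
print(best)  # (0.08, 0.05)
```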
**Q3: Reporting error bars or statistical significance tests.**
**A3**: Thank you for your suggestions. We acknowledge the absence of reporting error bars and statistical significance tests in the initial submission. To address this, we have included the standard deviation from three runs of the main experiments to demonstrate the stability of our method. We will include comprehensive statistical significance tests in the subsequent versions of the paper.
Table 6: Results with standard deviation from three runs across 16 datasets.
| Test | Tiny- | C.10 | C.100 | STL | SUN | Food | Pets | Flowers | |
|------|-------------|----------|----------|----------|----------|----------|----------|----------|----------|
| Robust | 63.95±0.11 | 61.45±0.67 | 35.27±0.07 | 84.22±0.21 | 33.22±0.39 | 33.97±0.20 | 57.75±0.76 | 34.55±0.35 | |
| Clean | 75.72±0.12 | 86.46±0.26 | 56.52±0.35 | 93.48±0.19 | 51.99±0.25 | 57.59±0.34 | 77.32±0.30 | 48.08±0.37 | |

| | DTD | EuroS. | Airc.| ImageN. | Ca.101 | Ca.256 | Cars | PCAM | Avg. |
|------|-------------|----------|----------|----------|----------|----------|----------|----------|----------|
| Robust |22.08±0.16 | 14.27±0.26 | 4.75±0.27 | 28.74±0.11 | 70.97±0.42 | 60.06±0.46 | 20.40±0.68 | 47.76±0.35 | 42.09±0.12 |
| Clean | 29.06±0.35 | 24.24±0.49 | 11.93±0.27 | 48.04±0.06 | 80.70±0.29 | 74.74±0.18 | 36.62±1.03 | 49.58±0.17 | 56.44±0.08 |
**Q4: Adversarial attack scenarios where this method performs poorly and its limitations.**
**A4**: In this paper, we primarily train using the PGD attack and validate our approach on both PGD and AutoAttack. Our method significantly outperforms other methods on the PGD attack and achieves comparable results with state-of-the-art methods on AutoAttack, which integrates multiple attack strategies (APGD_CE, APDG_DLR, FAB, Square Attack). These results suggest that our method is robust but could be further enhanced to withstand stronger and more complex attacks. One potential improvement is to design a more robust text-guided attention mechanism.
**We will revise and add the corresponding context in the final version.**
---
Rebuttal 2:
Comment: I thank the authors for their responses. All my concerns have been addressed. I will increase my rating from 4 to 5. | Summary: The paper proposes a framework, Text-Guided Attention for Zero-Shot Robustness (TGA-ZSR), to enhance the robustness of vision-language models (VLMs) against adversarial attacks. The proposed method incorporates two modules: the Attention Refinement module and the Attention-based Model Constraint module. The Attention Refinement module aligns the text-guided attention of adversarial examples with clean examples, while the Attention-based Model Constraint module maintains the model’s performance on clean samples.
Strengths: 1. The use of text-guided attention allows to enhance zero-shot robustness.
2. Extensive evaluation of the approach was conducted across diverse datasets.
3. The proposed method outperforms current state-of-the-art techniques in both zero-shot robust accuracy and clean accuracy.
4. The paper provides a thorough analysis of the impact of adversarial attacks on text-guided attention, offering insights into the model's decision-making process.
5. The method improves robustness without significantly sacrificing performance on clean data, achieving a favorable trade-off.
Weaknesses: 1. The incorporation of text-guided attention mechanisms and multiple modules may increase the complexity of implementation and computational overhead.
2. Although the authors claim zero-shot robustness in Vision-Language Models, the experiments and methods only target the CLIP model.
3. Missing limitation in Section 5.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What is the additional computational overhead introduced by the Attention Refinement and Attention-based Model Constraint modules? How does this impact training and inference times compared to baseline methods?
2. How well does the proposed method generalize to other types of vision-language tasks beyond image classification, such as image captioning or visual question answering?
3. Why is the proposed method fine-tuned on Tiny-ImageNet rather than ImageNet? Both TeCoA [1] and FARE [2] are trained on ImageNet.
[1] Mao, C., Geng, S., Yang, J., Wang, X., & Vondrick, C. (2022). Understanding zero-shot adversarial robustness for large-scale models.
[2] Schlarmann, C., Singh, N. D., Croce, F., & Hein, M. (2024). Robust clip: Unsupervised adversarial fine-tuning of vision embeddings for robust large vision-language models.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Discussion of additional computational overhead and training/inference time.**
**A1**:Thank you for your suggestion. We have evaluated our method against others in terms of memory usage, training time, and test time, and the findings are summarized below:
**Memory Usage**: Our method increases memory consumption by approximately 15% compared to the state-of-the-art method PMG-AFT. This is due to the additional computation required for the text-guided attention map.
**Training Time**: The training time for our method is comparable to that of PMG-AFT, which utilizes a KL divergence constraint on logits.
**Test Time**: The test time remains consistent across all methods.
Table 1: Comparison of memory usage, training time, and test time.
| Method | Train memory usage |Train time (per epoch/batch) | Test time (per batch) |
|---------|-----------------------------------|-----------|-----------|
| CLIP | 0Mb | 0s / 0s | 21s |
| TeCoA | 12873Mb | 512s / 0.65s | 21s |
| PMG-AFT | 18449Mb | 828s / 1.06s | 21s |
| Ours | 21227Mb | 885s / 1.13s | 21s |
**Q2: Applying to other types of vision-language tasks on other tasks.**
**A2**: We follow TeCoA and PMG-AFT, focusing on improving the zero-shot adversarial robustness of the CLIP model for classification tasks. To further validate the effectiveness of our method as suggested by the reviewer, we replaced the CLIP model with another vision-language model, **OpenFlamingo-3B**[a]. In this setup, ViT-L/14 serves as the vision encoder and MPT-1B as the language encoder. Additionally, we evaluated our method on two other tasks: image captioning and visual question answering (VQA). We report the CIDEr score for image captioning and VQA accuracy for visual question answering tasks. We employ the APGD attack[b] with a strength of epsilon 8/255 for 10 iterations. The results are shown below. Our method outperforms FARE in most scenarios for both image captioning and VQA tasks across a range of datasets. We believe that with task-specific design enhancements, our results can be further improved.
Table 2: CIDEr Scores for Image Captioning Task with OpenFlamingo-3B.
| | | COCO | Flickr30k |
|---------|-------|------------|-----------|
| Robust | FARE | 3.68 | 2.71 |
| | Ours | **4.13** | **2.90** |
| Clean | FARE | 3.09 | 3.02 |
| | Ours | **3.56** | **3.13** |
Table 3: Accuracy for Visual Question Answering (VQA) with OpenFlamingo-3B.
| | | TextVQA | VQAv2 |
|---------|-------|------------|-----------|
| Robust | FARE | 3.58 | 31.88 |
| | Ours | **4.44** | **32.14** |
| Clean | FARE | 3.40 | **35.38** |
| | Ours | **4.38** | 34.72 |
[a] Awadalla, A., et al. OpenFlamingo: an opensource framework for training large autoregressive vision-language models. arXiv:2308.01390, 2023.
[b] Croce, F. and Hein, M. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. In ICML, 2020.
**Q3: Missing limitation.**
**A3**: We mentioned the limitation in the final sentence of Section 5. We will emphasize and expand upon it in future versions.
**Q4: Train on ImageNet dataset.**
**A4**: We follow the state-of-the-art method PMG-AFT, which was fine-tuned on Tiny-ImageNet. Thank you for your suggestion. Due to time constraints, we further evaluate our method on the ImageNet_subset (a random selection of 100 classes from the full ImageNet dataset). The results are shown below:
Table 4: Zero-shot adversarial robust accuracy and clean accuracy across 16 datasets by training on ImageNet_subset.
| Test | Robust | Clean | Average |
|-------|-------|-------|-------|
| CLIP | 4.90 | 64.42 | 34.66 |
| TeCoA | 20.42 | 40.68 | 30.55 |
| FARE | 11.41 | 60.00 | 35.70 |
| PMG-AFT | 23.93 | 43.10 | 33.51 |
| Ours | 24.74 | 46.90 | **35.82** |
**We will revise and add the corresponding context in the final version.**
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response. I will be maintaining my initial score.
---
Reply to Comment 1.1.1:
Comment: Thank you for taking the time to review our response and for your support. We are glad to hear that the response addressed your concerns. | Rebuttal 1:
Rebuttal: **1. Summary**: We thank the reviewers for their positive and constructive comments. The reviewers agree that the topic is interesting and that the proposed method is novel, simple, and effective. All reviewers appreciate the comprehensive, solid, thorough, and detailed experiments. They also acknowledge that the paper is well-written and organized. The detailed feedback is as follows:
| Reviewers | Aspects | Comments |
|---------------------------|----------------|----------------------------------------------------------------------------------------|
| BFe6 | Topic | Interesting |
| mEBo & qndz & BFe6 | Method | Novel; simple, straightforward and effective; simple but intuitive and effective |
| w4iU & qndz & BFe6 & mEBo | Experiments | Thorough analysis, offering insights; comprehensive; solid; detailed ablation analysis |
| BFe6 & mEBo | Representation | Well written; well written and organized, clear descriptions of motivation and method |
The reviewers' major comments suggest that additional analysis could provide a deeper understanding of the proposed method. Specifically, they recommended including: Analysis of additional computational overhead / Application to other types of vision-language models / Results on more attacks / Discussion of scenarios where our method performs poorly. **Note that the primary analyses widely used in existing zero-shot adversarial robustness methods for VLMs have already been provided in the paper and supplementary materials**. The additional analyses suggested by the reviewers are complementary and would enhance the understanding of our proposed method.
**2. More analysis of the proposed method**:
- *Additional computational overhead and training/inference time of the models are shown in Table 1.*
Table 1: Comparison of memory usage, training time, and test time.
| Method | Train memory usage |Train time (per epoch/batch) | Test time (per batch) |
|---------|-----------------------------------|-----------|-----------|
| CLIP | 0Mb | 0s / 0s | 21s |
| TeCoA | 12873Mb | 512s / 0.65s | 21s |
| PMG-AFT | 18449Mb | 828s / 1.06s | 21s |
| Ours | 21227Mb | 885s / 1.13s | 21s |
- *The results with OpenFlamingo-3B on image captioning and VQA are shown in Table 2 and 3.*
Table 2: CIDEr Scores for Image Captioning Task with OpenFlamingo-3B.
| | | COCO | Flickr30k |
|---------|-------|------------|-----------|
| Robust | FARE | 3.68 | 2.71 |
| | Ours | **4.13** | **2.90** |
| Clean | FARE | 3.09 | 3.02 |
| | Ours | **3.56** | **3.13** |
Table 3: Accuracy for Visual Question Answering (VQA) with OpenFlamingo-3B.
| | | TextVQA | VQAv2 |
|---------|-------|------------|-----------|
| Robust | FARE | 3.58 | 31.88 |
| | Ours | **4.44** | **32.14** |
| Clean | FARE | 3.40 | **35.38** |
| | Ours | **4.38** | 34.72 |
- *The results training on ImageNet_subset are shown in Table 4.*
Table 4: Zero-shot adversarial robust accuracy and clean accuracy across 16 datasets by training on ImageNet_subset.
| Test | Robust | Clean | Average |
|-------|-------|-------|-------|
| CLIP | 4.90 | 64.42 | 34.66 |
| TeCoA | 20.42 | 40.68 | 30.55 |
| FARE | 11.41 | 60.00 | 35.70 |
| PMG-AFT | 23.93 | 43.10 | 33.51 |
| Ours | 24.74 | 46.90 | **35.82** |
- *The results on the validation set of Tiny-ImageNet are shown in Table 5.*
Table 5 : Results on validation set of Tiny-ImageNet dataset.
| Hyper-parameters | Robust | Clean | Average |
|-------|-------|-------|-------|
| alpha=0.07 beta=0.05 | 64.32 | 75.92 | 70.12 |
| alpha=0.08 beta=0.04 | 47.25 | 76.20 | 61.72 |
| alpha=0.08 beta=0.05 | 64.01 | 77.79 | **70.90** |
| alpha=0.08 beta=0.06 | 58.28 | 76.08 | 67.18 |
| alpha=0.09 beta=0.05 | 46.20 | 76.10 | 61.15 |
- *The results included the standard deviation from three runs of the main experiments are shown in Table 6.*
(Table 6 is available in the response to Reviewer qndz due to character limit constraints and is not shown here.)
- *Results with CW attack are shown in Table 7.*
Table 7: Zero-shot adversarial robust accuracy and clean accuracy across 16 datasets with **CW attack**.
| |Methods | Avg. | |Methods | Avg. |
|:------:|:-------:|:-------------:|:-----:|:-------:|:-------------:|
| | CLIP | 3.64 | | CLIP | 64.42 |
| **Robust** | PMG-AFT | 31.07 | **Clean** | PMG-AFT | 46.58 |
| | Ours | 40.50 | | Ours | 55.72 |
**3. More explanation of the method.**
- *Results on AutoAttack*: In this paper, we primarily train using the PGD attack and validate our approach on both PGD and AutoAttack. Our method significantly outperforms other methods on the PGD attack and achieves comparable results with state-of-the-art methods on AutoAttack, which integrates multiple attack strategies. These results suggest that our method is robust but could be further enhanced to withstand stronger and more complex attacks.
- *How to use class token*: In Figure 2, the class token used as $f(x_a)$ is obtained from $f_g^{tar}(x_a)$ after pooling. We will clarify this in more detail in future versions.
- *"Generalizability" and "adversarial robustness"*: Our goal is to maintain the generalization and enhance the adversarial robustness of the original CLIP model. We will modify it in the revised version of the paper. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
PACE: Marrying generalization in PArameter-efficient fine-tuning with Consistency rEgularization | Accept (spotlight) | Summary: The paper proposes a consistency regularization for improving the generalization performance of PEFT methods. Specifically, the regularization constructs two predictions perturbed by different noises and penalizes the L2 distance between them. The paper theoretically studies the benefits and shows that the regularization reduces gradient norms. Empirically, the paper benchmarks the method on multiple tasks and datasets and achieves state-of-the-art performance.
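To make the regularizer described in this summary concrete, here is a minimal sketch using a toy linear model with multiplicative $\mathcal{N}(1,\sigma^2)$ noise on the adapter weights; the model, names, and noise placement are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, theta0, delta, z):
    """Toy linear 'network': weights = pretrained + noise-modulated adapter."""
    return (theta0 + z * delta) @ x

def pace_loss(x, theta0, delta, sigma=0.1):
    """Consistency term: squared L2 distance between two predictions whose
    adapter weights use independent N(1, sigma^2) multiplicative noise."""
    z1 = rng.normal(1.0, sigma, size=delta.shape)
    z2 = rng.normal(1.0, sigma, size=delta.shape)
    diff = forward(x, theta0, delta, z1) - forward(x, theta0, delta, z2)
    return float(diff ** 2)

theta0 = np.zeros(3)       # "pretrained" weights (toy)
delta = 0.2 * np.ones(3)   # adapter update (toy)
loss_val = pace_loss(np.ones(3), theta0, delta)
```

With `sigma=0` both perturbations collapse to the identity and the penalty vanishes, which matches the intuition that the term only bites when the adapter output is sensitive to perturbation.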
Strengths: * Theoretical results are clearly presented and easy to follow.
* The paper presents an extensive evaluation of multiple datasets and tasks.
Weaknesses: * **Missing related works and comparisons.** The major weakness of the paper is the lack of discussion and comparisons with related works. For example, L2-SP [1] directly penalizes deviation in the weight space and ensures alignment. DELTA [2] penalizes feature differences similar to this paper. FTP [3] shows that feature deviation can be reduced to weight deviation in linear layers. While the experiment settings are diverse, the paper only compares to vanilla PEFT methods. The method should be compared to other regularization methods for a fair evaluation.
* **Increased computation.** To construct the two predictions, the method requires two forward passes through the model. This can significantly increase the computation and memory requirements of the algorithm.
* **How does $D^{PACE}$ ensure alignment?** It is clear that $D^{fp}$ (eq.5) encourages alignment. However, the proposed PACE regularization does not seem to encourage alignment explicitly.
[1] Xuhong, L. I., Yves Grandvalet, and Franck Davoine. "Explicit inductive bias for transfer learning with convolutional networks." International Conference on Machine Learning. PMLR, 2018.
[2] Li, Xingjian, et al. "Delta: Deep learning transfer using feature map with attention for convolutional networks." arXiv preprint arXiv:1901.09229 (2019).
[3] Tian, Junjiao, et al. "Fast trainable projection for robust fine-tuning." Advances in Neural Information Processing Systems 36 (2024).
Technical Quality: 3
Clarity: 3
Questions for Authors: * Could the authors provide discussion and comparisons to other robust regularization techniques?
* Could the authors comment on how $D^{PACE}$ ensures alignment?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The paper does not have a potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## We thank the reviewer for insightful questions that help refine our work further.
# 1. Comparison with L2SP, DELTA & FTP.
L2SP, DELTA, and FTP aim to retain pre-trained knowledge by aligning finetuned models with pre-trained ones, reducing the distance in weight space, in feature space, and via projected gradient descent, respectively. They provide **no guarantees for generalization error, nor do they study it.** But **we will of course cite/compare with these interesting works in the paper.**
**Our PACE is very different. We leverage the fact that small gradient norms contribute to flat loss landscapes, which enhances generalization** (as per our Th. 2 & 3). PACE introduces a novel consistency regularization on output space over different adapter perturbations, implicitly reducing gradient norms (which improves generalization as per Th. 1) and aligning models.
Thus, **Our alignment is very different** than the above methods.
PACE offers key advantages over other alignment methods:
* guaranteed gradient reduction, which is crucial for model convergence,
* robustness and generalization (Th. 1-2).
To validate PACE's effectiveness, we compared it with L2SP, DELTA, and FTP on CIFAR-100 (VTAB) and ImageNet (Domain Adaptation) datasets.
The results, presented in the table below, clearly demonstrate PACE's superior performance.
Method|CIFAR-100 (VTAB)|ImageNet (Domain Adaptation)||||||
:-|:-:|:-:|:-:|:-:|:-:|:-:|:-:
||Source|-Sketch|-V2|-A|-R|Avg
$LoRA_{mul}+VPT_{add}$|74.9|78.3|30.6|68.5|14.1|32.5|44.8
$\ $+L2SP|75.9|78.5|30.4|68.7|14.9|33.5|45.2
$\ $+DELTA|76.4|78.4|30.8|68.7|14.6|33.7|45.2
$\ $+FTP|76.2|78.6|30.8|68.6|15.8|33.6|45.4
$\ $+PACE|**79.0**|**79.0**|**31.8**|**69.4**|**16.3**|**35.2**|**46.3**
# 2. Increased computation.
To achieve computational and memory efficiency on par with the compared baselines, we explored two variants:
* **1) PACE$\_{fast}$:** We store model outputs for each sample from the previous epoch (${\bf o}\_{e-1}=f_{e-1}({\bf x})$) in CPU memory (or on disk if preferred). During training, we feed these into the consistency regularization w.r.t. the current outputs ($||f\_{e}({\bf x})-{\bf o}_{e-1}||\_2^2$), as the two versions used different i.i.d. noise (meeting our theoretical requirement).
* **2) PACE$_{lazy}^{half}$:** At every $N^{th}$ iteration during training, we apply consistency regularization and halve the batch size.
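The PACE$_{fast}$ caching idea can be schematized as follows; the toy model and the in-memory dict standing in for the CPU/disk cache are illustrative assumptions, not the actual implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
theta0, delta = np.ones(3), 0.1 * np.ones(3)  # toy pretrained + adapter weights

def noisy_forward(x):
    """One forward pass with a fresh N(1, sigma^2) modulation of the adapter."""
    z = rng.normal(1.0, 0.1, size=delta.shape)
    return (theta0 + z * delta) @ x

data = [rng.normal(size=3) for _ in range(4)]
cache = {}  # sample index -> output stored from the previous epoch

for epoch in range(2):
    for i, x in enumerate(data):
        out = noisy_forward(x)
        if i in cache:
            # consistency w.r.t. last epoch's output: the two outputs used
            # different i.i.d. noise, so only one forward pass per step is needed
            reg = float((out - cache[i]) ** 2)
        cache[i] = out  # refresh the cache for the next epoch
```

The design point is that the second "branch" is free: it is simply the stored output from the previous epoch, which already used an independent noise draw.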
The table below compares the maximum GPU memory usage, total training time, and accuracy for each task, demonstrating that **PACE$\_{fast}$ and PACE$\_{lazy}^{half}$ significantly improve over the baseline while maintaining similar computational demands as baselines**.
|Method|CIFAR-100 VTAB (ViT/16-B)|||Camelyon VTAB (Swin-B)|||ImageNet DomAda (ViT/16-B)|||
|:-|-:|:-:|:-|-:|:-:|:-|-:|:-:|:-|
||Mem|Time|Acc|Mem|Time|Acc|Mem|Time|MeanAcc|
|Baseline|8.9g|29m|74.6|15.7g|33m|86.7|8.9g|161m|44.8|
|PACE|17.7g|53m|79.0| 29.4g|60m|89.3|17.7g|278m|46.3|
|PACE$_{fast}$|**9.0g**|**29m**|**78.3**|15.7g|34m|88.8|**9.0g**|**162m**|**46.1**|
|PACE$_{lazy}^{half}$ (N=2)|9.3g|29m|78.7|**15.7g**|**36m**|**89.2**|9.0g|165m|46.0|
|PACE$_{lazy}^{half}$ (N=4)|9.3g|29m|78.4|15.7g|35m|88.9|9.0g|163m|45.6|
|PACE$_{lazy}^{half}$ (N=6)|9.3g|29m|78.4|15.7g|35m|89.0|9.0g|163m|45.7|
|PACE$_{lazy}^{half}$ (N=10)|9.3g|29m|78.2|15.7g|35m|88.9|9.0g|162m|45.6|
# 3. How $D^{pace}$ aligns models.
Recall that $D^{pace}$ aims to reduce $||f({\bf \theta}_0+{\bf z}_1\odot \Delta{\bf \theta}) - f({\bf \theta}_0+{\bf z}_2\odot \Delta{\bf \theta})||_2^2$ where ${\bf \theta}_0, \Delta{\bf \theta}$ are pretrained and adapter weights and $f$ is the network, and ensure it is small for all noise ${\bf z}_1, {\bf z}_2\sim\mathcal{N}({\bf 1}, \sigma^2)$.
* Intuitively, imagine we drew $({\bf z}_1={\bf 0}, {\bf z}_2={\bf 1})$ or $({\bf z}_1={\bf 1}, {\bf z}_2={\bf 0})$. Then it becomes clear
we indeed reduce the distance between finetuned and pretrained model $D^{fp}=||f({\bf \theta}_0+\Delta{\bf \theta})-f({\bf \theta}_0)||_2^2$.
\
To understand the general case with any $({\bf z}_1,{\bf z}_2)$, follow these steps:
* Theoretically, with Prop. 1 and Theorem 3, the distance between finetuned and pretrained model can be approximated and upper bounded:
$$
D^{fp}=[f({\bf \theta}_0+\Delta{\bf \theta}) - f({\bf \theta}_0)]^2\approx[\Delta{\bf \theta}^T{\bf \nabla}+\frac{1}{2}\Delta{\bf \theta}^T{\bf H}\Delta{\bf \theta}]^2 \leq 2d{\color{red}{||\Delta{\bf \theta}\odot{\bf \nabla}||_2^2}}+d^2{\color{red}{||(\Delta{\bf \theta}\Delta{\bf \theta}^T)\odot{\bf H}||_F^2}}
$$
where ${\bf \nabla}$, ${\bf H}$, and $d$ denote the gradient, the Hessian matrix, and the dimension of the weights.
* Through Theorem 2, $D^{pace}$ can be approximated as:
$$
D^{pace}\approx2\sigma^2{\color{red}||\Delta{\bf \theta}\odot{\bf \nabla}||_2^2}+\sigma^4{\color{red}||(\Delta{\bf \theta}\Delta{\bf \theta}^T)\odot{\bf H}||_F^2}
$$
where $\sigma^2$ is the noise variance.
* Since $\sigma$ and $d$ are constants during training, **minimizing** $D^{pace}$ **leads to small** ${\color{red}||\Delta{\bf \theta}\odot{\bf \nabla}||_2^2}$ and ${\color{red}||(\Delta{\bf \theta}\Delta{\bf \theta}^T)\odot{\bf H}||_F^2}$. **This results in a small upper bound for** $D^{fp}$, **effectively reducing** $D^{fp}$ and **ensuring alignment of the finetuned model with the pretrained model.**
\
\
**Thus, PACE effectively aligns the finetuned model with the pretrained model even though PACE does not explicitly do that**.
\
\
Figures 2(b) & 3(b) illustrate that the distance from the finetuned model to the pretrained model becomes small after applying PACE, **verifying that PACE ensures alignment.**
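As a numerical sanity check of the $D^{pace}$ approximation above, consider the simplifying assumption of a linear $f(\theta)=\theta^T x$ (zero Hessian), for which $\mathbb{E}[D^{pace}]=2\sigma^2||\Delta\theta\odot\nabla||_2^2$ holds exactly; a small Monte Carlo estimate (illustrative, not the paper's experiment) agrees with this closed form:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, d = 0.1, 5
theta0 = rng.normal(size=d)
delta = rng.normal(size=d)
x = rng.normal(size=d)  # for f(theta) = theta @ x, the gradient w.r.t. theta is x

def d_pace_sample():
    z1 = rng.normal(1.0, sigma, size=d)  # two independent noise modulations
    z2 = rng.normal(1.0, sigma, size=d)
    f1 = (theta0 + z1 * delta) @ x
    f2 = (theta0 + z2 * delta) @ x
    return (f1 - f2) ** 2

mc = np.mean([d_pace_sample() for _ in range(100_000)])
analytic = 2 * sigma**2 * np.sum((delta * x) ** 2)  # 2*sigma^2*||delta ⊙ grad||^2
```

Under this assumption the Monte Carlo mean and the analytic value match to within Monte Carlo error, mirroring the first term of the approximation; the Hessian term only appears once $f$ is nonlinear.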
# **Importantly**:
* Besides alignment, $D^{pace}$ ensures gradient reduction, as proven in Theorem 2 and shown empirically in Figs. 2(a) and 3(a). As per our motivation, **this reduction is crucial for improving generalization, as established in Theorem 1.**
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' response. The rebuttal addressed most of my questions, so I will raise my score.
However, I am still concerned about the method's computation costs. In the rebuttal, the authors proposed two more efficient variants, which either increase memory requirements or introduce additional hyper-parameters. These will hinder the practicality of the proposed method.
---
Rebuttal 2:
Comment: We sincerely appreciate your quick reply and are delighted that our rebuttal has addressed most of your questions. We are grateful for the time and effort you have invested in reviewing our work and providing valuable feedback that has strengthened the background and clarity of our work.
1. Regarding your concern about the additional memory requirements of the introduced variants, we would like to clarify that PACE$\_{lazy}^{half}$ ideally introduces no additional memory requirement. Although it includes a hyperparameter $N$, simply setting $N=2$ without any additional hyperparameter search yields much better results than the baseline.
\
\
For PACE$\_{fast}$, the memory introduced to store the outputs of the model's classification head is marginal compared to the baseline GPU memory requirement. The table below compares the memory required by PACE$\_{fast}$ with the GPU memory required to train the baseline:
|Dataset|Mem. PACE$\_{fast}$|Baseline GPU Mem|ratio|
|:-|:-:|:-:|:-:|
|CIFAR-100 VTAB (ViT/16-B)|390KB|8.9GB|0.0042%|
|Camelyon VTAB (Swin-B)|7.81KB|15.7GB|0.000047%|
|ImageNet DomAda (ViT/16-B)|61MB|8.9GB|0.67%|
As shown, **memory required by PACE$_{fast}$ is trivial compared to the Baseline GPU memory requirement**.
\
\
Despite the variants potentially increasing memory or training time in practice, we demonstrate that **by simply reducing the batch size and training epochs, PACE$\_{fast}$ still outperforms the baseline** while requiring much less GPU memory and training time.
|Method|CIFAR-100 VTAB (ViT/16-B)|||Camelyon VTAB (Swin-B)|||ImageNet DomAda (ViT/16-B)|||Average|||
|:-|-:|:-:|:-|-:|:-:|:-|-:|:-:|:-|-:|:-:|:-|
||Mem|Time|Acc|Mem|Time|Acc|Mem|Time|MeanAcc|Mem|Time|Acc|
|Baseline|8.9g|29m|74.6|15.7g|33m|86.7|8.9g|161m|44.8|11.1g|74m|68.7|
|PACE$_{fast}$ ($\frac{1}{2}$ batch size, $\frac{1}{2}$ epochs)|5.4g|17m|78.1|8.6g|21m|88.9|5.4g|85m|45.8|6.5g|41m|70.9|
|PACE$_{fast}$ ($\frac{1}{4}$ batch size, $\frac{1}{4}$ epochs)|3.5g|10m|77.8|6.0g|14m|88.7|3.5g|50m|45.6|4.3g|25m|70.7|
|PACE$_{fast}$ ($\frac{1}{8}$ batch size, $\frac{1}{8}$ epochs)|2.9g|6m|77.2|5.2g|10m|88.6|2.9g|32m|45.5|3.7g|16m|70.4|
The table shows that with 1/8 batch size and epochs, PACE$\_{fast}$ still outperforms the baseline by 1.7% while only using ~1/3 GPU memory and ~1/4 training time. This demonstrates the robustness and generalization benefits that PACE brings to models, allowing them to excel under constrained training configurations.
\
\
However, we want to emphasize that the most valuable part of our work lies in the theoretical contributions and analysis, **especially the link between small gradient norms and better generalization**, and the implicit effect of gradient reduction and model alignment in PACE.
\
\
These insights provide a deeper understanding of neural networks and their behavior, which we believe will benefit the research community and inspire more effective and robust methods in the future.
\
\
We appreciate all the concerns raised by the reviewer and the opportunity for discussion. Kindly do let us know if there is anything else we can further clarify or if you have further questions. | Summary: This paper introduces PACE, an extension to PEFT methods for ViTs that includes consistency regularization. The paper shows that consistency regularization encourages smaller gradient norms and better alignment between pre-trained and fine-tuned models, resulting in better fine-tuning performance than existing PEFT techniques.
Strengths: **(S1)**: The technique introduced is well-presented and easy to implement, borrowing ideas from parameter-efficient fine-tuning (PEFT) and consistency regularization (CR) to present a combined technique that is empirically effective.
**(S2)**: The paper rigorously grounds their approach via theoretical analysis, demonstrating how reduced gradient norms lead to better generalization. This is well presented and the analysis itself will be relevant to future work and related research in this field, as other techniques that emulate the same properties (eg: reduced gradient norms) can benefit from PACE.
**(S3)**: The ablation study is well conducted and systematically demonstrates the usefulness of the authors’ design. Throughout the experimental section, the authors also include experiments that probe properties of the model relevant to their method (eg: Fig 2, 3, 4), demonstrating that their theoretical hypotheses are well justified in empirical results.
Overall, this paper is well-presented, technically sound, and is a pleasure to read. I think it will have a good impact on future research in this area.
Weaknesses: **(W1)**: The experimental baseline is not really clarified. Only a ViT-B/16 pre-trained on ImageNet is used. Was the pre-training supervised or unsupervised? Do the analysis and results differ for supervised vs self-supervised backbones? Most common and performant ViTs nowadays (eg: MAE [1], Dino [2]) are self-supervised, so it would be good to clarify if there are differences in the analysis that depend on the method of pre-training.
**(W2)**: It would be interesting to see this technique extended to larger domain shifts (eg: WILDS [3]). Also, it is not clear how the amount of fine-tuning data affects performance (beyond few-shot learning), i.e., can PACE make use of large datasets from new domains more effectively than prior methods?
**(W3)**: Minor quibble. It would be useful to include comparisons with orthogonal fine-tuning techniques (eg: BOFT [4], OFT [5]).
---
References:
[1] He, Kaiming, et al. "Masked autoencoders are scalable vision learners." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.
[2] Oquab, Maxime, et al. "Dinov2: Learning robust visual features without supervision." arXiv preprint arXiv:2304.07193 (2023).
[3] Koh, Pang Wei, et al. "Wilds: A benchmark of in-the-wild distribution shifts." International conference on machine learning. PMLR, 2021.
[4] Liu, Weiyang, et al. "Parameter-efficient orthogonal finetuning via butterfly factorization." arXiv preprint arXiv:2311.06243 (2023).
[5] Qiu, Zeju, et al. "Controlling text-to-image diffusion by orthogonal finetuning." Advances in Neural Information Processing Systems 36 (2023): 79320-79362.
Technical Quality: 4
Clarity: 4
Questions for Authors: L220-221: Which pre-training method was used for these backbones? Was it supervised or self-supervised?
I don’t see PACE_h in table 6.
It would be great to see this extended to language applications!
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors adequately discuss this section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## We thank the reviewer for insightful questions that help refine our work further.
# 1. Experiment settings.
ViT/16-B and Swin-B were pre-trained on ImageNet-21K using supervised learning. Our analysis extends to self-supervised pre-trained models as well, since our goal is to retain knowledge from large pre-training datasets, regardless of the specific supervision/pre-training method used.
To demonstrate the effectiveness of our approach across different pre-training methods, we conducted experiments on three VTAB datasets (CIFAR-100, Camelyon and Clevr-Count) using ViT/16-B models pre-trained by Masked Autoencoder (MAE) and DINO, both self-supervised methods trained on ImageNet-1K. The results are shown in the table below.
|Method|MAE|||Dino|||
|:-|:-:|:-:|:-:|:-:|:-:|:-:|
||CIFAR-100|Camelyon|Clevr-Count|CIFAR-100|Camelyon|Clevr-Count|
|Full|10.3|74.6|52.5|37.2|73.1|34.5|
|Linear|18.7|79.9|57.1|49.1|82.5|44.2|
|$LoRA_{mul}+VPT_{add}$|25.1|82.7|82.1|58.1|85.4|55.7|
|$\quad$+PACE|**44.8**|**85.8**|**86.4**|**60.8**|**88.1**|**61.0**|
# 2. Large domain shift.
Indeed, our experiments on the VTAB benchmark already demonstrate PACE's effectiveness in handling large domain shifts. VTAB consists of 19 diverse tasks drawn from various domains, including several that exhibit significant domain shifts from the ImageNet pre-training data, especially the specialized and structured datasets.
Specifically, **we have tested PACE on Camelyon (which is also part of WILDS) and Diabetic Retinopathy**. These **medical imaging tasks represent a substantial domain shift** from natural images in ImageNet.
Our results show consistent improvements over the baseline across these diverse tasks with large domain shifts. This performance demonstrates PACE's ability to effectively adapt to new domains while leveraging pre-trained knowledge.
# 3. Vary number of training samples on FGVC.
As requested, we varied the percentage of training samples (10% to 100%) on FGVC. The table below shows that PACE gains more when the data size is small, consistent with our generalization error analysis of PACE (Theorems 1-3).
|Method|CUB||||NAB||||Flowers||||StanDogs||||StanCars||||
|:-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|-:|:-:|:-:|:-:|-:|:-:|:-:|:-:|
||100%|50%|20%|10%|100%|50%|20%|10%|100%|50%|20%|10%|100%|50%|20%|10%|100%|50%|20%|10%|
|$LoRA_{mul}+VPT_{add}$|88.9|87.1|83.9|79.1|87.5|80.7|75.0|70.2|99.4|98.5|96.5|93.1|91.2|90.6|88.7|86.9|87.5|78.7|54.9|30.1|
|$\ $+PACE|**89.8**|**88.4**|**85.5**|**81.4**|**88.8**|**82.9**|**77.5**|**73.8**|**99.5**|**99.2**|**97.9**|**96.1**|**92.2**|**91.8**|**90.9**|**89.8**|**88.8**|**80.5**|**57.3**|**33.2**|
# 4. Experiments on large domain shift and large dataset.
In line with current trends, our focus is on finetuning pre-trained models for downstream tasks, where small/medium size datasets are common. In these scenarios, PACE shows significant improvements. For larger datasets, our improvements are smaller, **which aligns with Lemma 1 and Theorem 1** - large datasets already allow models to achieve good generalization, diminishing the need for additional generalization improvement techniques.
However, the theoretical findings from PACE, particularly the implicit gradient reduction and alignment, could motivate future work in areas dealing with large domain shifts in big datasets. While we have not extensively tested PACE in such scenarios, **these insights could inspire new approaches for handling significant domain shifts, even when data is abundant.**
# 5. Combine with OFT and BOFT.
We re-implemented OFT and BOFT following the code and experimental settings provided by the authors. For OFT, we found that the constrained version (COFT) yields better results than the unconstrained version.
The table below compares results with/without PACE on CIFAR-100 (VTAB) and ImageNet (Domain Adaptation), demonstrating that incorporating PACE leads to improved performance.
|Method|CIFAR-100 (VTAB)|ImageNet (Domain Adaptation)||||||
|:-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|||Source|-Sketch|-V2|-A|-R|Avg|
|COFT|71.8|76.9|26.4|66.7|13.1|30.7|42.7|
|$\ $+PACE|**75.3**|**77.8**|**27.9**|**68.2**|**14.9**|**32.9**|**44.3**|
|BOFT|72.3|77.1|27.0|66.8|12.8|31.1|42.9|
|$\ $+PACE|**75.7**|**77.9**|**28.3**|**68.2**|**14.7**|**33.4**|**44.5**|
# 6. PACE$_h$.
PACE$_\text{merge}$ refers to PACE$_h$, where perturbation is applied after merging the adapter feature $\Delta h(\cdot)$ with the pretrained layer feature $h_0(\cdot)$, namely, PACE$_h$ perturbs $h(\cdot)=\Delta h(\cdot)+h_0(\cdot)$. This typo has been corrected.
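The distinction between perturbing only the adapter branch and perturbing the merged feature (PACE$_h$) can be sketched on a toy feature; the arrays below are illustrative values, not the paper's features:

```python
import numpy as np

rng = np.random.default_rng(0)
h0 = np.array([1.0, 2.0])         # pretrained layer feature (toy)
dh = np.array([0.1, -0.2])        # adapter feature (toy)
z = rng.normal(1.0, 0.1, size=2)  # multiplicative N(1, sigma^2) noise

h_default = h0 + z * dh           # perturb only the adapter branch
h_merge = z * (h0 + dh)           # PACE_h: perturb after merging the branches
```

The two placements differ by $(1-z)\odot h_0$, so PACE$_h$ also perturbs the pretrained contribution rather than the adapter alone.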
# 7. Experiments on language tasks.
Following VeRA (Kopiczko et al., ICLR 2024), we conducted GLUE benchmark experiments using RoBERTa-base. We report Matthew's correlation for CoLA, Pearson correlation for STS-B, and accuracy for other tasks. The table below compares PACE+LoRA with other methods, demonstrating PACE's effectiveness in language tasks.
|Method|COLA|STSB|MRPC|RTE|QNLI|SST2|Avg.|
|:-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Full|63.6|91.2|90.2|78.7|92.8|94.8|85.2|
|BitFit|62.0|90.8|**92.7**|81.5|91.8|93.7|85.4|
|Adapt$^D$|62.6|90.3|88.4|75.9|93.0|94.7|84.2|
|VeRA|**65.6**|90.7|89.5|78.7|91.8|94.6|85.2|
|LoRA|63.4|91.5|89.7|86.6|93.3|95.1|86.6|
|+PACE|65.0|**92.3**|92.0|**86.9**|**93.6**|**95.8**|**87.6**|
---
Rebuttal Comment 1.1:
Comment: Thanks for your great response!
I think the paper is good and so I will keep my score.
I'm a little surprised by the CIFAR-100 column in the first table though-- it looks much lower than in the pdf. Am I misreading something?
---
Rebuttal 2:
Comment: Esteem Reviewer,
\
\
Thank you for your prompt reply and valuable feedback, which has helped strengthen and clarify our paper.
\
\
Regarding the lower CIFAR-100 results on self-supervised MAE/DINO in comparison to our paper, there are two key differences:
* 1) **Pretraining stage**: Here, the ViT was pretrained using self-supervised learning on ImageNet-1K, whereas in our paper it was pretrained with supervised learning on ImageNet-21K.
* 2) **Fine-tuning stage**: For the VTAB benchmark, we fine-tuned the network using only 1,000 images without data augmentation. **This means CIFAR-100 on VTAB has only 10 images per class**, and the downstream task was supervised learning.
These differences result in less knowledge overlap between pretraining and fine-tuning compared to our paper's setup.
However, **applying data augmentation** techniques in the finetuning stage (in agreement with pretraining augmentation types) to increase this overlap **improved PACE results:**
* MAE increased from 44.8 to 47.2
* DINO improved from 60.8 to 62.9
Despite the improvement, these results remain lower than those in our paper due to the smaller number of images per class in CIFAR-100 VTAB and reduced knowledge overlap with MAE/DINO.
\
\
Thank you again for your insightful review and for recognizing the potential impact of our work. We sincerely appreciate your time and expertise. We hope this explanation clarifies the discrepancy in results. If you have any further concerns or questions, kindly do let us know and we will do our best to clarify further details. | Summary: This paper proposes to regularize the model consistency by optimizing the fine-tuned model to remain consistent for the same sample under different perturbations.
Strengths: 1. The paper is well-written.
2. Several experiments are conducted on four visual adaptation tasks: VTAB-1k, FGVC, few-shot learning, and domain adaptation.
Weaknesses: 1. **Novelty Concerns**: Incorporating consistency regularization with PEFT approaches is already commonplace in multimodal parameter-efficient fine-tuning. For example, VioLET [1], PromptSRC [2], and CoPrompt [3] propose to use additional encoders with shared weights to keep consistency between the updated model and the original one. These papers focus on consistency constraints within each modality rather than cross-modal constraints. Therefore, the approach proposed in this paper, which copies the original model parameters to enforce consistency, lacks innovation and is simply a repetition of existing ideas.
2. **Efficiency Concerns**: The proposed method, as referenced in Eq. 12, necessitates multiple forward passes during fine-tuning, whereas most compared methods require only a single pass. This discrepancy significantly impacts efficiency. The authors should provide detailed efficiency metrics alongside performance results to enable fair comparisons.
3. **Experimental Settings**: Several experimental settings appear unconventional. The proposed method is based on a new baseline, LoRAmul+VPTadd. However, PACE does not seem to depend on specific designs of the base PET methods. Therefore, it would be more compelling to incorporate PACE into more general baseline methods such as AdapterFormer or existing SOTA methods like GLoRA. Demonstrating PACE's ability to enhance various PET methods would strengthen its contribution. Additionally, PACE requires 300 epochs for fine-tuning on the VTAB-1K dataset, whereas other methods typically require only 100 epochs. The necessity of such prolonged fine-tuning should be justified. Furthermore, in some benchmarks (e.g., Tables 2 and 3), PACE shows only limited improvement, which undermines its robustness.
4. **Typos and Notational Errors**: For instance, L505 misses a square in the second row. Additionally, the equal sign should be replaced with an approximate equal sign.
[1] Wang Y, Liu Y, Zhang X, et al. VioLET: Vision-Language Efficient Tuning with Collaborative Multi-modal Gradients[C]//Proceedings of the 31st ACM International Conference on Multimedia. 2023: 4595-4605.
[2] Khattak M U, Wasim S T, Naseer M, et al. Self-regulating prompts: Foundational model adaptation without forgetting[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023: 15190-15200.
[3] Roy, Shuvendu, and Ali Etemad. "Consistency-guided prompt learning for vision-language models." arXiv preprint arXiv:2306.01195 (2023).
Technical Quality: 2
Clarity: 3
Questions for Authors: Please see the weaknesses.
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## We thank the reviewer for helpful comments.
# 1. Copy the original model for consistency regularization not novel.
We believe there is a misunderstanding.
\
\
Existing "consistency" methods (we will cite/compare these interesting works in the paper) align features of the fine-tuned model with the pretrained model. They provide no guarantees for generalization error. Our PACE guarantees control of generalization through the interplay of noise modulators across network branches. **Our "consistency" is thus completely different.**
\
\
We introduce the following key innovations and results:
**1) Novel consistency mechanism:** Unlike previous methods that apply consistency regularization between finetuned and pretrained models, PACE applies consistency between differently perturbed versions of the same finetuned model while learning adapter parameters. This crucial difference addresses gradient limitations of traditional alignment approaches.
**2) Gradient reduction:** We prove that **naive finetuned-pretrained alignment cannot guarantee gradient reduction** and can even cause gradient explosion (**Prop. 1, Figs. 4, 7, 8**). **PACE overcomes this issue**, ensuring gradient reduction and better generalization.
**3) Theoretical guarantees:** We provide rigorous analysis (Thm. 2 & 3) showing how **PACE implicitly achieves gradient regularization** and model alignment even though PACE does not explicitly do so, a key advancement over previous works. **Thm. 1 explains how controlling gradient helps generalization.**
**4) Superior performance:** Our experiments show **PACE outperforms both FPA** (which explicitly aligns the finetuned and pretrained models in output space) **and DELTA** (Li et al., ICLR 2019) (which explicitly aligns them in feature space with supervised attention) across various datasets (table below).
Method|CIFAR-100 (VTAB)|ImageNet (Domain Adaptation)||||||
:-|:-:|:-:|:-:|:-:|:-:|:-:|:-:
||Source|-Sketch|-V2|-A|-R|Avg
LoRA$\_{mul}$+VPT$\_{add}$|74.9|78.3|30.6|68.5|14.1|32.5|44.8
$\ $+DELTA|76.4|78.4|30.8|68.7|14.6|33.7|45.2
$\ $+FPA|76.6|78.8|31.2|68.6|14.7|33.5|45.3
$\ $+PACE|**79.0**|**79.0**|**31.8**|**69.4**|**16.3**|**35.2**|**46.3**
PACE is a significant step forward in consistency regularization, with a theoretically grounded/empirically superior approach.
# 2. Efficiency concern.
We have explored two variants with computational/memory needs similar to the baseline:
**1) PACE$\_{fast}$:** We store model outputs for each sample from the previous epoch (${\bf o}\_{e-1}=f_{e-1}({\bf x})$) in CPU memory (or on disk). During training, we feed these into the consistency regularization w.r.t. the current outputs ($||f\_{e}({\bf x})-{\bf o}_{e-1}||\_2^2$), as the two versions used different i.i.d. noise (our theoretical requirement).
**2) PACE$_{lazy}^{half}$:** At every $N^{th}$ iteration during training, we apply consistency regularization and halve the batch size.
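The PACE$_{lazy}^{half}$ schedule can be skeletonized as follows; the training step is a placeholder, and only the every-$N$-th-iteration batch-halving logic is shown:

```python
def train_step(batch, apply_pace):
    """Placeholder training step: when the consistency term is applied, the
    batch is halved so the extra forward pass costs roughly nothing extra."""
    if apply_pace:
        batch = batch[: len(batch) // 2]
    return len(batch), apply_pace

N = 2  # apply the consistency term every N-th iteration
batch = list(range(8))
log = [train_step(batch, apply_pace=(it % N == 0)) for it in range(4)]
```

With N=2 the regularized iterations process half-size batches while the remaining iterations run the baseline step at full batch size, keeping the average compute close to the baseline.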
The table below compares the maximum GPU memory usage, total training time, and accuracy for each task, showing that PACE$\_{fast}$ and PACE$\_{lazy}^{half}$ significantly improve over the baseline under similar computational demands.
Method|CIFAR-100 VTAB (ViT/16-B)|||Camelyon VTAB (Swin-B)|||ImageNet DomAda (ViT/16-B)|||
:-|-:|:-:|:-|-:|:-:|:-|-:|:-:|:-
||Mem|Time|Acc|Mem|Time|Acc|Mem|Time|MeanAcc
Baseline|8.9g|29m|74.6|15.7g|33m|86.7|8.9g|161m|44.8
PACE|17.7g|53m|79.0|29.4g|60m|89.3|17.7g|278m|46.3
PACE$_{fast}$|9.0g|29m|78.3|15.7g|34m|88.8|**9.0g**|**162m**|**46.1**
PACE$_{lazy}^{half}$ (N=2)|**9.3g**|**29m**|**78.7**|**15.7g**|**36m**|**89.2**|9.0g|165m|46.0
PACE$_{lazy}^{half}$ (N=4)|9.3g|29m|78.4|15.7g|35m|88.9|9.0g|163m|45.6
PACE$_{lazy}^{half}$ (N=6)|9.3g|29m|78.4|15.7g|35m|89.0|9.0g|163m|45.7
PACE$_{lazy}^{half}$ (N=10)|9.3g|29m|78.2|15.7g|35m|88.9|9.0g|162m|45.6
# 3. PACE+AdaptFormer & GLoRA.
Method|CIFAR-100 (VTAB)|ImageNet (Domain Adaptation)||||||
:-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|||Source|-Sketch|-V2|-A|-R|Avg
AdaptFormer|70.6|77.4|26.5|67.4|12.4|28.7|42.4
+PACE|**74.8**|**78.2**|**27.4**|**67.9**|**13.9**|**31.7**|**43.8**
GLoRA|75.9|78.2|30.3|68.1|13.5|31.6|44.3
+PACE|**78.6**|**78.8**|**31.7**|**69.0**|**15.9**|**34.4**|**45.9**
# 4. PACE with 100 epochs on VTAB.
We use 300 epochs for VTAB tasks (a small gain over 100 epochs), but PACE does not need more time than other methods to converge. **As the optimizer uses cosine learning rate decay, reducing the epochs to 100 has minimal impact.**
To ensure fair memory/compute budgets, **we tested PACE with 1/2 batch size and 50 epochs**, where **PACE still improves baseline accuracy by 2.1% and outperforms the previous SOTA GLoRA (500 training epochs plus 30 for parameter search).** Thus, PACE is efficient and effective.
#Epoch|Method|Natural|Specialized|Structured|Avg
:-:|:-:|:-:|:-:|:-:|:-:
530|GLoRA|83.61|87.02|63.27|77.97
100|Baseline|81.94|85.40|61.40|76.24
100|+PACE|83.94|87.44|64.62|78.67
50|+PACE (1/2 batch size)|83.77|87.32|63.92|78.34
200|Baseline|82.28|85.30|61.64|76.40
200|+PACE|84.13|**87.57**|64.85|78.85
300|Baseline|82.41|85.00|61.80|76.40
300|+PACE|**84.32**|87.55|**65.13**|**79.00**
# 5. Tables 2/3: results not that good.
**1) In Table 2, PACE shows gains on all shot settings, especially in low-data scenarios** crucial for fine-tuning large models on downstream tasks with limited data, consistent with our motivation to improve the generalization error (Thm. 1-3).
The table below shows average results across datasets and shots. **PACE consistently outperforms the baseline by 0.9 to 2.9.**
Method|AvgAcc|Improvement
:-|-:|:-
LoRA|61.1
$\ $+PACE|63.0|+2.9
VPT|61.0
$\ $+PACE|61.9|+0.9
LoRA$\_{mul}$+VPT$\_{add}$|62.7
$\ $+PACE|64.2|+1.5
**2) In Table 3, we observe a 0.7 gain (expected for large data sizes).** When reducing the data size, we see larger gains.
Below is averaged accuracy over different data percentages on FGVC tasks (average gain: 1.8 points).
|Method|100% data|50% data|20% data|10% data|Avg|
:-|:-:|:-:|:-:|:-:|:-:
LoRA$\_{mul}$+VPT$\_{add}$|90.8|87.1|79.8|71.8|82.3
$\ $+PACE|91.5|88.5|81.8|74.8|84.1
# 6. Typos.
Duly noted. We will fix.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. Most of my concerns have been resolved, and I have decided to raise my rating to 5. By the way, I would like the authors to discuss in detail the similarities and differences with the multimodal fine-tuning approach in maintaining modal consistency in the revised manuscript. Additionally, while the authors propose two new variants, these designs are relatively minor tricks rather than novel technical contributions. I hope the authors will further investigate ways to avoid the higher computational complexity associated with multiple forward passes.
---
Rebuttal 2:
Comment: Esteemed Reviewer,
\
\
We thank you for your prompt response. We greatly appreciate your time, effort, and the concerns you have shared with us, which significantly improve the completeness, strength, and clarity of our paper.
## 1. We will be sure to discuss in detail the similarities and differences with the multi-modal fine-tuning approaches maintaining consistency.
Previous works such as VioLET, PromptSRC, and CoPrompt identified the forgetting problem during multi-modal fine-tuning. They propose aligning the fine-tuned model with the pre-trained model, maintaining consistency to prevent catastrophic forgetting.
Specifically:
* VioLET prevents forgetting through collaborative multi-modal gradients
* PromptSRC uses L1 distances between pre-trained and fine-tuned outputs
* CoPrompt employs cosine similarity between pre-trained and fine-tuned outputs.
PACE establishes the link between smaller gradient norm and better generalization, identifying the gradient issues in typical alignment, and proposes a new consistency regularization between fine-tuned models with different noise perturbations, rather than between fine-tuned and pre-trained models.
PACE aligns models and regularizes gradients, thus preventing catastrophic forgetting and improving generalization.
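The regularizer described above can be sketched in a few lines. Below is an illustrative toy example (NumPy, with an assumed simple multiplicative-noise model on activations), not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, W, noise):
    # A single linear layer whose activations are modulated by
    # multiplicative noise, standing in for a fine-tuned network.
    return (x @ W) * noise

def pace_consistency(x, W, sigma=0.1):
    # Two forward passes of the SAME parameters under two independent
    # i.i.d. noise samples; the penalty is their squared L2 distance.
    n1 = 1.0 + sigma * rng.standard_normal(W.shape[1])
    n2 = 1.0 + sigma * rng.standard_normal(W.shape[1])
    return float(np.sum((forward(x, W, n1) - forward(x, W, n2)) ** 2))

x = rng.standard_normal((4, 8))
W = rng.standard_normal((8, 3))
print(pace_consistency(x, W) >= 0.0)      # penalty is always non-negative
print(pace_consistency(x, W, sigma=0.0))  # exactly 0.0 without noise
```

Note that the penalty compares two noisy branches of the same fine-tuned model, not a fine-tuned model against a frozen pre-trained one.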
## 2. Regarding novel ways to avoid higher computational costs.
We value the reviewer's suggestions and will explore this further.
\
\
While seemingly simple, these variants are firmly grounded in our theoretical analysis of gradient behavior during fine-tuning. They serve as practical demonstrations of our core insights while avoiding higher computational costs.
The simplicity of these variants allows for easy extension and adaptation to various scenarios and model architectures. We want to emphasize that the most valuable contributions of our work are establishing an important link between small gradient norms and better generalization and introducing a new type of consistency regularization.
\
\
We hope our ideas will benefit a deeper understanding of fine-tuning neural networks and their behavior, and will motivate further works.
\
\
We truly thank the reviewer again for the constructive suggestions that help us enhance the completeness and clarity of our work. If there is anything we can clarify/improve further, kindly do let us know.
Title: Thank you | Summary: The paper proposes a consistency regularization that minimizes the squared L2 distance between two outputs of a model obtained using the same parameters, but multiplying the activations by 2 different noise samples. It proves that the population loss is bounded by the gradient norm, indicating that smaller gradients are better for generalization. It proposes the consistency regularization for aligning the fine-tuned model with the pretrained model for better generalization, and proves that the consistency regularization penalizes the first and second order gradients of the parameters of the model. This results in the training process preserving the knowledge learned by the pretrained model. It also proposes an efficient method for implementing the consistency regularization where the regularization is applied every N iterations.
Strengths: - The paper provides a theoretical analysis and intuition for the consistency regularization and links generalization to small gradient norm.
- The empirical results show that using the consistency regularization along with existing fine-tuning methods improves their performance.
- The consistency regularization can be made somewhat efficient by adding the noise to the features instead of parameters, and by using the consistency regularization in intervals rather than at every iteration.
Weaknesses: **Gap between theoretical analysis and practical implementation**
- The theoretical analysis is done for functions that map from $R^d$ to $R$. However, in practice, the outputs of the heads of the classification models are multi-dimensional. It's not clear from the proofs that the theoretical results hold for multi-dimensional outputs. While the experimental results do suggest this, a theoretical proof would strengthen the paper.
2. **Increased compute and memory requirements**
- Since the consistency regularization needs to run the model twice (with shared parameters but with two different noise samples), and also backpropagate through both, the compute and the memory required is doubled.
- The authors use H100 GPUs (96GB) for their experiments, hence the experiments are feasible. However, with lower end GPUs, this might not be feasible. Moreover, the requirements for large models (in the scale of billion parameters) might be prohibitive.
- With this increased compute and memory requirements, the method's use as a PEFT method is limited.
- While the paper proposes a method for efficient implementation by applying the regularization every N iterations, it is only tested on one dataset from VTAB-1k (CIFAR100). This should be tested on more datasets for the claim to hold.
3. **Lack of evaluation beyond vision tasks**
- The paper does not study natural language tasks (language understanding and generation). This would be necessary to show the generality of the method to other domains besides vision tasks.
4. **Large training time**
- The method is trained on VTAB for a large number of epochs (300). This means that the method can take a long time to converge. Are the comparisons with the other methods (other than the chosen baselines) done with a similar compute budget?
- The method is trained for 100 epochs for the FGVC tasks. Why does the method require fewer epochs for FGVC tasks compared to VTAB?
Technical Quality: 3
Clarity: 3
Questions for Authors: - Line 155-156 "as even smaller weight changes can lead to significant divergence in the output space": Can the authors provide references/experimental results to backup this statement?
- In Figure 2 (a) and Figure 3 (b), how is the plot shown for the gradient norm of parameters in multiple layers?
- See Point 4 in Weaknesses.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Authors have addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## We thank the reviewer for insightful questions that help refine our work further.
# 1. Theoretical analysis done for functions from $R^d\rightarrow R$.
Thank you. In practice, we use the squared L2 distance for multi-dimensional outputs for $D^{fp}$ and $D^{pace}$, which allows our one-dimensional analysis to naturally generalize to multiple dimensions. For example, for a vector-valued function in the naive alignment, $f({\bf \theta}) = [f_1({\bf \theta}), ..., f_m({\bf \theta})]$, where $m$ is the output dimension, we have:
\
\
$
||f({\bf \theta}_0) - f({\bf \theta}_0 + \Delta {\bf \theta})||\_2^2 = \sum\_{i=1}^m [f_i({\bf \theta}_0) - f_i({\bf \theta}_0 + \Delta {\bf \theta})]^2.
$
\
\
This equality shows that the squared L2 distance in multiple dimensions is simply the sum of non-negative squared differences in each dimension. Consequently, this additive nature enables our one-dimensional analysis to extend seamlessly to multiple dimensions in practice, aligning with our empirical observations.
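As a quick numerical sanity check of this identity (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(5)  # f(theta_0): a 5-dimensional output
b = rng.standard_normal(5)  # f(theta_0 + delta_theta)

lhs = float(np.sum((a - b) ** 2))                # squared L2 distance
rhs = sum((a[i] - b[i]) ** 2 for i in range(5))  # per-dimension sum
assert np.isclose(lhs, rhs)
```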
# 2. Increased computation and memory requirements.
Thank you. Motivated by your questions, we have explored two variants to maintain similar computational and memory requirements as the baseline:
* **1), PACE$\_{fast}$:** We store model outputs for each sample from the previous epoch (${\bf o}\_{e-1}=f_{e-1}({\bf x})$) in the CPU memory (or disk if preferred). During training, we feed these into the consistency regularization w.r.t. the current outputs ($||f\_{e}({\bf x})-{\bf o}_{e-1}||\_2^2$) as two versions using different i.i.d. noises (meeting our theoretical need).
* **2), PACE$_{lazy}^{half}$:** At every $N^{th}$ iteration during training, we apply consistency regularization and halve the batch size.
The table below compares the maximum GPU memory usage, total training time, and accuracy for each task, demonstrating that **PACE$\_{fast}$ and PACE$\_{lazy}^{half}$ significantly improve over the baseline while maintaining similar computational demands as baselines**. Moreover, techniques such as gradient accumulation can be applied for smaller GPU memory if needed.
|Method|CIFAR-100|VTAB|(ViT/16-B)|Camelyon|VTAB|(Swin-B)|ImageNet|DomAda|(ViT/16-B)|
|:-|-:|:-:|:-|-:|:-:|:-|-:|:-:|:-|
||Mem|Time|Acc|Mem|Time|Acc|Mem|Time|MeanAcc|
|Baseline|8.9g|29m|74.6|15.7g|33m|86.7|8.9g|161m|44.8|
|PACE|17.7g|53m|79.0| 29.4g|60m|89.3|17.7g|278m|46.3|
|PACE$_{fast}$|**9.0g**|**29m**|**78.3**|15.7g|34m|88.8|**9.0g**|**162m**|**46.1**|
|PACE$_{lazy}^{half}$ (N=2)|9.3g|29m|78.7|**15.7g**|**36m**|**89.2**|9.0g|165m|46.0|
|PACE$_{lazy}^{half}$ (N=4)|9.3g|29m|78.4|15.7g|35m|88.9|9.0g|163m|45.6|
|PACE$_{lazy}^{half}$ (N=6)|9.3g|29m|78.4|15.7g|35m|89.0|9.0g|163m|45.7|
|PACE$_{lazy}^{half}$ (N=10)|9.3g|29m|78.2|15.7g|35m|88.9|9.0g|162m|45.6|
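A minimal sketch of the PACE$\_{fast}$ caching scheme (hypothetical toy model, NumPy): outputs from the previous epoch are cached per sample and reused as the second "branch" of the consistency term, so each epoch needs only one forward pass per sample.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x, W):
    return x @ W  # stand-in for the network's classification-head output

X = rng.standard_normal((6, 4))  # toy training set
W = rng.standard_normal((4, 3))
cache = {}                       # o_{e-1}: per-sample previous-epoch outputs

reg = 0.0
for epoch in range(2):
    reg = 0.0
    for i, x in enumerate(X):
        out = model(x, W)
        if i in cache:
            # ||f_e(x) - o_{e-1}||_2^2 against the cached output
            reg += float(np.sum((out - cache[i]) ** 2))
        cache[i] = out           # refresh the cache for the next epoch
    # (a real loop would also apply the task loss and a gradient step)

# With W frozen in this toy loop, epoch-2 outputs match the cache exactly.
print(reg)
```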
# 3. Experiments on language tasks.
Following VeRA (Kopiczko et al., ICLR 2024), we conducted GLUE benchmark experiments using RoBERTa-base. We report Matthew's correlation for COLA, Pearson correlation for STSB, and accuracy for other tasks. The table below compares PACE+LoRA with other methods, demonstrating PACE's effectiveness in language tasks.
|Method|COLA|STSB|MRPC|RTE|QNLI|SST2|Avg.|
|:-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Full|63.6|91.2|90.2|78.7|92.8|94.8|85.2|
|BitFit|62.0|90.8|**92.7**|81.5|91.8|93.7|85.4|
|Adapt$^D$|62.6|90.3|88.4|75.9|93.0|94.7|84.2|
|VeRA|**65.6**|90.7|89.5|78.7|91.8|94.6|85.2|
|LoRA|63.4|91.5|89.7|86.6|93.3|95.1|86.6|
|+PACE|65.0|**92.4**|92.0|**86.9**|**93.8**|**95.8**|**87.6**|
# 4. Results of 100 epochs on VTAB.
We use 300 epochs for VTAB tasks as we observed slight extra improvements over 100 epochs. However, it does not mean PACE needs more training time to converge. **Since the optimizer uses cosine learning rate decay, reducing the number of training epochs to 100 has minimal impact on performance.** For FGVC tasks, we maintained 100 epochs as their larger datasets (1k-20k samples vs. VTAB's 1k) ensure convergence with fewer epochs.
The table below shows results for different training epochs on VTAB.
To ensure fair memory and computational budgets, **we tested PACE with half batch size and 50 epochs.** Under these conditions, **PACE still improves baseline accuracy by 2.10% and outperforms the previous SOTA GLoRA, which uses 500 epochs for training and 30 for parameter search.** This demonstrates PACE's efficiency and effectiveness across various training configurations.
|#Epoch|Method|Natural|Specialized|Structured|Avg|
|:-:|:-:|:-:|:-:|:-:|:-:|
|530|GLoRA|83.61|87.02|63.27|77.97|
|100|Baseline|81.94|85.40|61.40|76.24|
|100|+PACE|83.94|87.44|64.62|78.67|
|50|+PACE (half batch size)|83.77|87.32|63.92|78.34|
|200|Baseline|82.28|85.30|61.64|76.40|
|200|+PACE|84.13|**87.57**|64.85|78.85|
|300|Baseline|82.41|85.00|61.80|76.40|
|300|+PACE|**84.32**|87.55|**65.13**|**79.00**|
# 5. Why smaller weight changes lead to significant divergence in the output space.
Smaller weight changes can lead to significant divergence in output space, particularly when a network is sensitive to weight changes or has entered a sharp local minimum. In such cases, even minor weight adjustments can cause substantial output changes or even incorrect predictions. This behavior contrasts with robust networks or those in flatter minima. Consequently, constraining changes in weight space alone is suboptimal, as it disregards the loss landscape, which is also influenced by gradients. For further details on this phenomenon, kindly refer to the papers listed below.
> 1. Wu et al, Adversarial Weight Perturbation Helps Robust Generalization, NeurIPS 2020.
> 2. Foret et al, Sharpness-aware minimization for efficiently improving generalization, ICLR 2021.
> 3. He et al, Defending and Harnessing the Bit-Flip based Adversarial Weight Attack, CVPR 2020.
# 6. How is the plot shown for the gradient norm of parameters in multiple layers?
We plotted the sum of gradient norms from all layers.
---
Rebuttal 2:
Comment: Thank you for the detailed response and for clarifying my questions. I have read the authors' response as well as other reviews. Overall, the method is promising as an *add on* to standard finetuning methods, since it has demonstrated applicability to LoRA, OFT and BOFT. Furthermore, it shows theoretical links between gradient norm and generalization, which is valuable to the community. I would encourage the authors to add the case of $R^d \rightarrow R^d$ to the manuscript for completeness. I also thank the authors for providing references in point 5 of the rebuttal. However, I will maintain my original score. Following is a summary of my reasoning:
- The newly proposed efficient variants do seem effective, but they come at a cost of increased CPU memory/disk space. In cases where an offload to the disk is required, it increases the read/write time (which can be very slow). Thus, the method will have additional overheads no matter the variant.
- Due to the additional overhead, I would hesitate to call it a PEFT method, although it leads to a small improvement in accuracy.
- While the authors provided experiments on GLUE, the method is still not tested on generative tasks, which is an important application area.
---
Rebuttal 3:
Title: Minor clarifications.
Comment: Esteemed Reviewer,
\
\
Thank you for your prompt and detailed reply.
\
\
We are humbled that our rebuttal has addressed most of your concerns and that you recognize our method is promising and has theoretical value, which we believe will motivate more effective PEFT methods in the future.
We want to thank again for your constructive suggestions, which have truly strengthened our work.
1. Regarding the memory cost.
We appreciate the concern about efficiency. Our method is designed for fine-tuning models on downstream tasks, which typically involve limited data. We want to clarify that PACE$_{fast}$ only needs to store the output of the model's classification head for the training samples. This means that the additional memory required by PACE$\_{fast}$ is trivial (and can be stored even on the GPU).
\
\
Below we compare the memory required by PACE$\_{fast}$ and the GPU memory required to train the baseline.
|Dataset|Mem. PACE$\_{fast}$|Baseline GPU Mem|ratio|
|:-|:-:|:-:|:-:|
|CIFAR-100 VTAB (ViT/16-B)|390KB|8.9GB|0.0042%|
|Camelyon VTAB (Swin-B)|7.81KB|15.7GB|0.000047%|
|ImageNet DomAda (ViT/16-B)|61MB|8.9GB|0.67%|
As shown, the **memory required by PACE$_{fast}$ is trivial compared to the baseline GPU memory requirement** and can even be stored on the GPU.
\
\
Even in the rare scenario of fine-tuning on the full ImageNet-1K dataset (1.2 million samples), PACE$_{fast}$ requires only 4.8G of additional memory for temporary storage of the model's classification-head outputs. This can easily be done on the GPU alone. It is significantly smaller than the dataset itself (>100G) and can be easily accommodated in CPU/GPU memory without needing disk storage.
\
\
We should not have mentioned disk storage, as it is not needed. Even for datasets with 10 million images or more, with typical multi-node processing, we do not see any need for disk usage. We only meant extreme situations, e.g., a user with a single 3090 GPU and 4GB of RAM trying to run on the full ImageNet. Even then, half-precision floats would fit the outputs into 2.4GB.
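The arithmetic behind these figures can be checked directly (assuming a 1000-way classification head stored at 4 bytes per float, 2 bytes at half precision):

```python
samples = 1_200_000               # ImageNet-1K training images
classes = 1_000                   # classification-head output dimension
fp32_bytes = samples * classes * 4
fp16_bytes = samples * classes * 2
print(fp32_bytes / 1e9)           # 4.8 (GB) at single precision
print(fp16_bytes / 1e9)           # 2.4 (GB) at half precision
```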
\
\
**While PACE$\_{fast}$ requires a trivially small amount of memory, PACE$\_{lazy}^{half}$ requires no additional memory and also enjoys competitive performance.**
2. We appreciate your question regarding parameter efficiency. Our understanding of parameter efficiency is that when adapting to downstream tasks, especially during inference, the required additional parameters should be minimal. We can confidently state that PACE adheres to this principle:
* PACE introduces no additional learnable parameters for model adaptation. The small set of model classification head outputs we keep for PACE$_{fast}$ are not learnable and will be discarded after training.
* During inference, all PACE variants require no extra parameters beyond the baseline PEFT method.
\
\
Therefore, PACE maintains the parameter efficiency of the baseline PEFT method while enhancing performance.
\
\
We will add all detailed clarifications into the paper and we are keen to see reviewer's further suggestions that we will accommodate accordingly.
3. Regarding generative tasks, we kindly ask for the reviewer's understanding that **PACE is built upon the generalization theory for discriminative tasks.**
\
\
Currently, **the generalization theory for generative models remains underdeveloped** and requires collective effort from the entire research community.
\
\
Nonetheless, we are looking at this problem and trying to work out the potential implementation details. We will see if we are able to get something implemented and evaluated within the remaining discussion time, but we feel this is perhaps outside the scope of the paper given the theoretical framework.
\
\
\
Meantime, kindly do let us know if you have any other questions that we can answer.
---
Rebuttal Comment 3.1:
Comment: Thank you for providing further clarifications. I will revise some of my previous comments:
- I acknowledge the point made about the additional memory requirements. I would ask the authors to include these statistics in the revised manuscript to give the readers a clear idea about the overheads associated with the method.
- For point 3, I see that Section 3.2 mentions the structure of the models the theoretical analysis is done for (network with a classification head), but later sections do not mention that the method applies only to discriminative tasks. I would also ask the authors to make this clear in the manuscript, since generative models are an important class of models which have shown a significant impact and interest, and the baseline methods like LoRA work well for those. This is required to give the readers the correct context.
Given that these comments are incorporated, I do not mind raising my score from 6 to 7.
---
Rebuttal 4:
Comment: Thank you for your thoughtful feedback.
\
\
We are very glad that our response has resolved your concerns.
We appreciate your active involvement and advice, which has provided valuable opportunities to improve our work and make it clearer for the readers.
We will incorporate all your suggestions, along with those from other reviewers, into the final version.
Below are the changes (we will double-check that we did not miss any other details/requests) we will implement in the final version of our paper based on your suggestions:
> 1) Include $\mathbb{R}^d \rightarrow \mathbb{R}^m$ before Prop. 1.
> 2) Introduce two efficient variants in Sec 3.5. Add Sec. 4.4 to report experiments on these variants and include overhead statistics in the supplementary material.
> 3) Add experiments on language tasks and 100-epoch results on VTAB in Sec. 4.1.
> 4) Include citations and explain why smaller weight changes lead to divergence in Sec. 3.3.
> 5) Clarify how to calculate the gradient norm in Sec. 4.2.
> 6) Clarify that PACE requires no additional parameters beyond those required by baseline PEFT methods in Sec. 3.5.
> 7) Clarify our context focuses on discriminative tasks in Sec 3.3 and 3.5.
Additionally, we will implement other changes based on suggestions from other reviewers.
> 1) Compare PACE with VioLET, PromptSRC, CoPrompt, L2-SP, DELTA, and FTP in Sec 2, with experimental comparisons in Sec. 4.3.
> 2) Add experimental comparisons with AdaptFormer, GLoRA, OFT, and BOFT in Sec. 4.3.
> 3) Include MAE and Dino experiments in Sec 4.1
> 4) Refine the explanation for Theorem 3 to clarify why $D^{pace}$ aligns models despite not using explicit alignment like existing methods.
> 5) Present experiments with varying training sample sizes on FGVC and provide average improvements in Tables 2 and 3.
> 6) Address reviewer-identified typos and conduct a comprehensive proofreading of the entire paper.
We truly appreciate your time and effort.
If there is anything important we missed/overlooked, kindly do let us know.
---
Rebuttal Comment 4.1:
Title: Generative Experiment
Comment: As per reviewer's request, we have finalized conducting **experiments on language generation tasks by fine-tuning Phi-3-mini-4k-instruct** on the GSM8K dataset (Cobbe et al., "Training Verifiers to Solve Math Word Problems," arXiv 2021) using causal language modeling.
\
\
We used a learning rate of 2e-6, batch size of 4, LoRA rank of 16, and the prompt "Answer below question. First think step-by-step and then answer the final number:\n\n<Question>" as the instruction; we fine-tuned models on the training set and evaluated performance on the test set.
\
\
The results are as follows:
|Method|Accuracy|
|:-|:-:|
|Pre-trained|62.01|
|Full|73.16|
|LoRA|75.66|
|$\ $+PACE|**78.77**|
As shown, PACE enhances LoRA's performance on this generative language task, despite its roots in discriminative theory.
\
\
Although generalization theory for generative modeling lags behind, the principles of aligning fine-tuned and pre-trained models to retain knowledge and achieving smaller gradient norms for better generalization remain effective in this setting too.
\
\
We thank the reviewer again for challenging us to go deeper and evaluate our model further. | Rebuttal 1:
Rebuttal: # We thank all the reviewers for constructive feedback and questions shaping our revised paper.
\
\
We have addressed all comments in individual responses to each reviewer.
\
\
\
Below we just provide few highlights:
* 1. Increased computation and memory requirements: we have provided now **PACE$\_{fast}$ and PACE$\_{lazy}^{half}$** variants which **enjoy almost identical memory and compute time footprint as baselines** while still bringing **2-3% gains.**
* 2. Experiments on language tasks. Following VeRA (Kopiczko et al., ICLR 2024), **we conducted GLUE benchmark** experiments using RoBERTa-base, and showed gains.
* 3. We provided results on 50, 100 epochs on VTAB.
* 4. Consistency novelty. We have explained that **our "consistency" model is very different to existing works.**
* We do not explicitly align per se with a frozen pretrained model.
* We use noise modulators to align two branches. These noise modulators implicitly **regularize gradient (Theorems 2 & 3)** and **improve generalization error (Theorem 1)**. Standard works on "consistency" with frozen pretrained model for alignment do not enjoy these properties.
* 5. We provided results on PACE+AdaptFormer & GLoRA.
* 6. We have provided experiments on finetuning MAE and DINO (self-supervised models).
* 7. We provided experiments on OFT+PACE and BOFT+PACE.
* 8. We provided comparisons with L2SP, DELTA and FTP. We will of course cite these works and discuss accordingly.
* 9. We detailed how $D^{pace}$ aligns models while improving generalization error.
\
\
**We truly hope this rebuttal, theoretical arguments and empirical arguments will convince reviewers about novelty of our work:**
* Current alignment methods do not study or lower generalization error (Theorems 1-3)
* Our "consistency" is not what other works do. We use noise modulation to achieve regularized gradients and hence improved generalization error.
* Our experiments, as predicted by theory, consistently outperform other "consistency" approaches.
* Our compute and memory cost remains in line with baselines thanks to **PACE$\_{fast}$ and PACE$\_{lazy}^{half}$** while enjoying 2-3% gains.
\
Kind regards,
\
Authors | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Initializing and Retrofitting Key-Value Adaptors for Traceable Model Editing | Reject | Summary: The paper addresses the traceable sequential model editing challenge by plugging additional model components into transformer MLP blocks. The proposed approach adds additional model components for each edit, allowing for traceability of each edit.
Strengths: • The results indicate that it is a strong approach compared to relevant literature and its performance is relatively stable when scaling it to thousands of edits.
• The approach allows for "separability of each edit", which in turn allows for additional operations such as edit update or deletion, as showcased in the edit withdrawal experiment.
• The edit withdrawal experiment is both unique and intriguing, as the concept of removing edits appears to be a novel area of exploration.
Weaknesses: • The overall approach does not appear to be novel, as it closely resembles T-Patcher. Both iReVa and T-Patcher rely on inserting neurons for editing and using neuron activations to control when to run the patch/adopter. Furthermore, analysis of the editing approach across different layers reveals the same pattern as discussed in the T-Patcher paper which involves adding additional model components in the final layer for optimal results.
• Experiments with T-Patcher are missing from the comparisons to the existing methods section. Given its similarity, T-Patcher should be included for comparison.
• Although T-Patcher performs editing in batches, it still uses a single patch of neurons for each edit, making its editing similarly traceable. Thus, the paper's claim of a "stronger capacity for carrying traceable edits" seems unfounded.
• The Edit Withdrawal Test section is hard to understand. How exactly was the experiment conducted? Were all edits removed or only a limited set? Detailed experimentations for this section are needed as it is the only use case of traceability explored in the paper.
• Editing techniques that rely on code books with playback vectors e.g. GRACE would allow for edits to be removed. The authors should make it clear that the withdrawal test is not possible for the editing techniques that they have chosen for comparison.
Technical Quality: 3
Clarity: 2
Questions for Authors: N/A
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Reply to reviewer T9Qb
Thank you, Reviewer T9Qb, for your valuable feedback! We appreciate your recognition of our work. This response primarily addresses the questions you raised in your review.
- Treat T-Patcher [1] as baseline (Weakness 2). The reason T-Patcher is missing as a baseline is specified in our global rebuttal; we hope our explanation will address your concerns.
- Stronger capacity for carrying traceable edits (Weakness 3). Regarding traceable edits, similar to T-Patcher, we construct an independent neuron for each piece of knowledge. These neurons explicitly map the question to the answer in the new knowledge. Additionally, these neurons are independent of each other (with batch_size=1 during training), as you noted in the Strengths section. Concerning capacity, if you meant "volume", iReVa's space overhead for new-knowledge insertion is 2nd, where n is the number of new knowledge items (q & a pairs) and d is the model's embedding size (hidden size). As the value of n increases, this overhead is slightly less than 2nd because some knowledge can be added through updates rather than inserts. Section 6.3 mentions that for 10K edits, the additional parameter size is 0.08B (1.5B for the base model), increasing the parameters by only 5% for 10K edits. The comparison with T-Patcher in the global rebuttal may also be relevant to this weakness; please refer to that part for details.
- Withdrawal analysis (Weakness 4): We apologize for the unclear description of the withdrawal analysis. In iReVa, one piece of knowledge corresponds to one neuron. Therefore, in withdrawal analysis, we withdraw one neuron at a time (setting its corresponding Vi to a zero vector) and test whether the model's response to the knowledge associated with this neuron changes, potentially reverting to the unedited state. Batch rollback of knowledge neurons is also feasible. Due to the independence of the neurons, the effect of batch rollback is consistent with single rollback.
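The per-neuron withdrawal described above can be illustrated with a tiny key-value adaptor (a toy construction with assumed orthogonal keys, not iReVa's actual code): zeroing an edit's value vector $V_i$ silences that edit while leaving the others untouched.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
K = np.eye(3, d)                 # one key per edit (orthogonal for clarity)
V = rng.standard_normal((3, d))  # one value vector per edit
bias = 2.0                       # activation bias gating each edit neuron

def adaptor(h, K, V, bias):
    # An edit neuron fires only when its key matches the input strongly.
    act = np.maximum(K @ h - bias, 0.0)
    return act @ V

h = 3.0 * K[1]                   # an input that triggers edit #1
assert np.allclose(adaptor(h, K, V, bias), V[1])

V[1] = 0.0                       # withdraw edit #1: set V_1 to a zero vector
assert np.allclose(adaptor(h, K, V, bias), 0.0)  # the edit no longer fires
```

Because the neurons are independent, zeroing one value vector cannot affect the responses of the remaining edits.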
- Code book-based method's withdrawal analysis (Weakness 5): GRACE [2] and MELO [3] use code books to manage batches of knowledge. However, in GRACE and MELO, each cluster center corresponds to a batch of knowledge. Therefore, to withdraw a single piece of inserted knowledge, one must withdraw all knowledge inserted in the same batch, affecting other knowledge that does not need to be withdrawn. If we reduce the batch size, the number of clusters will significantly increase, which will markedly affect the performance of these methods. In our revised version, we will also adjust the description of this part.
- Reference:
[1] Transformer-patcher: One mistake worth one neuron. In The Eleventh International Conference on Learning Representations, 2023.
[2] Aging with grace: Lifelong model editing with discrete key-value adaptors. ArXiv, abs/2211.11031, 2022.
[3] Melo: Enhancing model editing with neuron indexed dynamic lora, 2023.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response. While my concerns about T-Patcher have been addressed, I remain somewhat skeptical about its traceability compared to GRACE.
The GRACE algorithm processes edits individually in a streaming fashion (batch size 1). Each edit typically initiates its own cluster, with the cluster radius being dynamically adjusted based on the labels of subsequent edits that fall within its scope. The default cluster radius is maintained at a relatively small value to minimize the probability of grouping unrelated edits. However, the possibility of unrelated edits being clustered together, while very low, cannot be entirely eliminated. The paper should have a detailed discussion on the traceability of edits in comparison to other works and a comparison between iReVa and T-Patcher as in the global rebuttal.
The default GRACE approach may impact performance. However, since iReVa does not share neurons across various edits, it could have performance trade-offs in comparison with T-Patcher.
Given the current presentation of the work, I do not feel comfortable raising the score; I will be maintaining my current positive evaluation of your manuscript.
---
Reply to Comment 1.1.1:
Comment: Thank you for your insightful response! Your concern indeed requires further clarification, as there are some similarities between iReVa and GRACE when establishing adaptors for multiple edits. Therefore, we further compare the traceability of iReVa and GRACE below, and we hope this will help address your concern.
- Similarity. In general, iReVa can be seen as a variant of GRACE in which each cluster is limited to only one edit, which is the cluster center itself. Additionally, the activation bias in iReVa can be considered similar to the cluster radius in GRACE. If, in a benchmark, it is rare for multiple edits to share the same edit target, then iReVa and GRACE are essentially similar, because GRACE's codebook rarely invokes the expand operation.
- Difference. If, in a benchmark, multiple edits share the same edit target (e.g., a country name), GRACE's codebook faces a trade-off based on the size of the initial radius. A radius that is too large can lead to errors and forgetting, while a radius that is too small can result in too many clusters. In contrast, iReVa always maintains one edit per neuron, avoiding the issue of forgetting edits. Forgetting often occurs during the split stage of the codebook: when the current edit has an input representation x similar to a certain cluster center C0 but a different edit target y, GRACE's codebook performs a split operation, creating a new cluster C1 from the current edit and splitting the radius between the new cluster and C0 based on their distance. In this process, the reduction in C0's radius can cause some edits that originally belonged to C0 to fall outside of C0; if these edits are fundamentally different from C0, the codebook loses them. You mentioned in your response that GRACE has a low probability of merging two unrelated edits into the same cluster. However, considering that different clusters in GRACE's codebook do not overlap with each other and the representation space for cluster centers is limited, when the number of edits is large (GRACE performs 1K edits whereas iReVa performs 10K edits), the increased number of clusters filling the space may involve more expand or split operations, potentially introducing more errors.
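The split-and-shrink behavior described above can be illustrated with a minimal sketch. The halving rule and variable names here are an approximation of GRACE's codebook maintenance for illustration, not its exact implementation:

```python
import numpy as np

def grace_split(center0, radius0, x_new):
    """Rough sketch of a codebook split: a new edit whose key falls inside
    cluster C0's radius but carries a different target spawns a new cluster
    C1, and both radii shrink to (roughly) half the key distance so the two
    clusters no longer overlap."""
    dist = np.linalg.norm(x_new - center0)
    new_radius = dist / 2.0
    return (center0, new_radius), (x_new, new_radius)

c0 = np.zeros(4)                       # old cluster center, radius 2.0
x = np.array([1.0, 0.0, 0.0, 0.0])     # conflicting edit inside that radius
(_, r0), (c1, r1) = grace_split(c0, 2.0, x)

# After the split, edits at distance in (0.5, 2.0] from C0 fall outside
# both clusters and are effectively forgotten.
assert r0 == 0.5 and r1 == 0.5
```

This shrinkage is exactly the mechanism by which previously covered edits can drop out of C0's scope, as discussed above.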
Additionally, regarding the performance trade-off you mentioned between iReVa and T-Patcher, we discussed this in Section 4.3 of our paper. During training, although the neurons corresponding to previous edits are frozen during backpropagation, they are included in the forward propagation calculations. This mechanism introduces noise into forward propagation, which increases the robustness of the editing.
These comparisons will be included in the revised paper after simplification. Thank you again for your valuable feedback. | Summary: This paper introduces iReVa, a novel method for model editing that explicitly initializes and retrofits key-value pairs into MLP blocks of transformer models to perform CRUD (Create, Read, Update, Delete) operations on LMs. iReVa aims to update knowledge in LMs without damaging irrelevant knowledge, offering better interpretability and traceability compared to existing methods. The method is validated through experiments on GPT series models, showing significant improvements in edit success and generalization without affecting specificity.
Strengths: Provision of the first attempt at conducting knowledge withdrawal tests for model editing methods.
The paper includes a comprehensive analysis of iReVa's performance, including knowledge withdrawal tests and generalization tests.
iReVa's approach to model editing is innovative, focusing on retrofitting key-value adaptors into MLP blocks for traceable model editing.
Weaknesses: This paper could benefit from a more detailed comparison with other model editing methods, especially those focusing on lifelong learning and continual editing [1][2].
It does not discuss the computational efficiency of iReVa in terms of inference time or memory, which is crucial for real-world applications.
The reliance on the hypothesis that factual knowledge is stored in MLP blocks may be limiting [3], and the authors could explore the broader implications of this assumption.
The method's applicability to other types of tasks, such as erasing hallucinations, is not validated.
There is a noticeable absence of experimental validation on other recent and updated models such as GPT-J (used by ROME etc.), LLaMA.
The technical novelty of iReVa is somewhat limited, as it builds upon existing concepts like MEMIT [4] and key-value memory structures in MLPs [2].
The absence of a strategy for selecting the adaptor layer may hinder the method's rapid migration and application to various language models.
Equation 3 requires clarification: why are 'i' and 'o' in Equation 3 both passed through SELF_ATTEN again?
References
[1] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors, Hartvigsen et al., NeurIPS 2023.
[2] Transformer-Patcher: One Mistake Worth One Neuron, Huang et al., ICLR 2023.
[3] What does the Knowledge Neuron Thesis Have to do with Knowledge?, Niu et al., ICLR 2024.
[4] Mass-Editing Memory in a Transformer, Meng et al., ICLR 2023.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weaknesses.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Reply to reviewer QEHi
Thank you, Reviewer QEHi, for your valuable feedback! We appreciate your recognition of our work. This response primarily addresses the questions you raised in your review.
- More baselines (Weakness line 1): Indeed, the baselines you mentioned are worth comparing. We explain the reason for this in our global rebuttal and hope our specification will address your concern.
- Inference computational efficiency (Weakness line 2): In Section 6.3, we mentioned that iReVa's theoretical inference time overhead compared to the base model is 2nd, where n is the number of new knowledge (q & a) pairs and d is the model's embedding (hidden) size. This overhead is independent of the prompt sequence length L and constitutes only a small portion of the entire model's forward propagation. The inference space overhead is also 2nd, consistent with the parameter size introduced by iReVa.
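The 2nd figure can be recovered with a back-of-the-envelope count of multiply-accumulates in a generic key-value adaptor (a sketch, not iReVa's code):

```python
def adaptor_flops(n, d):
    """Extra mult-adds per forward step for n inserted neurons of size d:
    n key matches (d mults each) + n value contributions (d mults each)."""
    return n * d + n * d  # = 2 n d, independent of prompt length L

# e.g. 10K edits at hidden size 1600 (gpt2-xl scale): 32M extra mult-adds
assert adaptor_flops(10_000, 1600) == 2 * 10_000 * 1600
```

The storage cost has the same form: one key vector and one value vector of size d per edit, i.e. 2nd parameters.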
- Knowledge storage (Weakness line 3): Indeed, recent works have proposed storing knowledge in the self-attention module, such as PMET [1]. We proposed iReVa to offer an interpretable and manageable model editing approach. However, the self-attention module is inherently complex, and deconstructing it into an interpretable forward propagation method requires more effort. In future work, we look forward to proposing interpretable forward propagation in self-attention modules.
- Other tasks (Weakness line 4): Regarding the hallucination task, we conducted the experiment following GRACE [2] and MELO [3]. The experimental results are presented in the table below, with the baselines' results taken from MELO [3]. The results show that iReVa does not perform well on the hallucination task because the target prompts in this dataset are too long. iReVa creates a key-value pair (knowledge neuron) for each token in the target prompt and its prefix; due to the excessive number of introduced neurons and the high similarity of the prefixes, the probability of errors significantly increases. Additionally, due to the mechanism of Equation 7 (Section 4.2), iReVa only affects the last token of the prompt, which is used for next-token prediction. However, the hallucination task requires calculating the perplexity (PPL) of the entire sentence, and iReVa only modifies the hidden states of the token at the last position. Since GPT series models are unidirectional autoregressive models, the tokens before this position cannot access the additional information introduced by iReVa, resulting in poor performance on the hallucination task. An intuitive solution is to apply this mechanism to every token in the sentence instead of just the last one, which is the strategy we applied in the supplementary experiment below, but this also introduces more noise, leading to suboptimal performance.
| Backbone | Method | ES $\downarrow$ | ARR $\downarrow$ | Locality $\downarrow$ |
| :------: | :----: | :----: | :-----: | :------: |
| | ROME | 103.82 | 14.02 | 30.28 |
| gpt2-xl | GRACE | 7.14 | 10.00 | 15.84 |
| | Melo | 1.04 | 2.66 | 17.45 |
| | iReVa | 376.69 | 2312.39 | 13.76 |
- More backbones (Weakness line 5): We additionally conducted the experiments of Section 6.5 (Table 4) on gpt-j-6b [4] as a supplement, and the results are shown in the table below. iReVa also outperforms the baselines on gpt-j-6b. Furthermore, the hyperparameters of the baselines were kept consistent with their source code. We did not include LLaMA [5] family models for the following reason: LLaMA models have a different FFN structure from common causal decoder models, containing three linear layers (up_proj, down_proj, and gate_proj) rather than the K, V structure in GPT models, so some baselines cannot be reproduced on LLaMA family models.
| Backbone | Method | S $\uparrow$ | ES $\uparrow$ | PS $\uparrow$ | NS $\uparrow$ |
| :------: | :----: | :---: | :---: | :---: | :---: |
| | ROME | 40.86 | 53.81 | 49.89 | 18.87 |
| gpt-j-6b | MEMIT | 66.41 | 94.04 | 72.48 | 32.70 |
| | iReVa | 69.70 | 99.71 | 77.10 | 32.27 |
- Strategy for selecting the adaptor layer (Weakness line 7): The selection of the layer is certainly a direction worth exploring. In our early experiments, we examined the patterns of words output by each layer after providing a prompt to the given model. For the GPT-2 XL model, we found that most prompts resulted in the output of a meaningless token, such as a space or newline, at the penultimate layer. Therefore, we hypothesized that there might be unused knowledge storage capacity in this layer, which led us to select the penultimate layer as the adaptor layer. In current editing works, the editing layer is usually a given hyperparameter, and the optimal value is found by varying the layer. Thus, we followed these works and conducted layer selection experiments (Section 6.5). The penultimate layer performed slightly better than other layers, which to some extent supports this hypothesis.
- Problems of Equation 3 (Weakness line 8): In Equation 3, both i and o appear because of the "add and norm" (residual connection) architecture in standard transformer models.
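As a generic illustration of why both the sub-layer input i and its output o appear together, a standard post-norm residual sub-layer can be sketched as follows (simplified per-vector layer norm without learnable scale/shift; the identity "attention" is a stand-in, not the paper's Equation 3):

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def self_attn_block(i, self_attn):
    """'Add & norm' around self-attention: the sub-layer input i is added
    back onto the sub-layer output o before normalization, so both i and o
    appear in the block's equation."""
    o = self_attn(i)
    return layer_norm(i + o)

# identity attention as a stand-in: the block normalizes i + i
x = np.array([1.0, 2.0, 3.0, 4.0])
y = self_attn_block(x, lambda v: v)
assert np.isclose(y.mean(), 0.0, atol=1e-6)
```

The residual path is why removing either i or o from the equation would misdescribe the transformer block.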
- Reference:
[1] Transformer-patcher: One mistake worth one neuron. In The Eleventh International Conference on Learning Representations, 2023.
[2] Aging with grace: Lifelong model editing with discrete key-value adaptors. ArXiv, abs/2211.11031, 2022.
[3] Melo: Enhancing model editing with neuron indexed dynamic lora, 2023.
[4] GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model, https://github.com/kingoflolz/mesh-transformer-jax, 2021.
[5] LLaMA: open and efficient foundation language models, 2023.
---
Rebuttal Comment 1.1:
Comment: Thank you for your reply, my concern has been addressed. I have raised my score. | Summary: This paper introduces a novel method called iReVa for knowledge editing. iReVa initializes and retrofits key-value pairs into MLP blocks to create a new mapping of knowledge without affecting related information. Compared to existing methods, iReVa offers better interpretability and a stronger ability to make traceable edits.
Strengths: 1. The proposed methods demonstrate great performance compared to other baselines under the batch editing scenarios.
Weaknesses: 1. The color in the figure is not obvious to discriminate between the original knowledge neurons and new knowledge neurons.
2. The computation of the proposed method is similar to T-Patcher's; I'm curious about the difference between them. The proposed method is designed to tackle batch editing, but it seems it still needs to add one neuron for each example.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Lines 57-59 focus on self-attention, while the authors select the MLP to investigate further; the logic here is a bit confusing. Is there something I'm missing?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Reply to reviewer vMmP
Thank you, Reviewer vMmP, for your valuable feedback! We appreciate your recognition of our work. This response primarily addresses the questions you raised in your review.
- Figure issue (Weakness 1): Thanks for your reminder. We will revise the figure and adjust the image color to make it clearer in the revised version.
- T-Patcher [1] issue (Weakness 2): Indeed, the forward propagation process of iReVa is similar to T-Patcher. The difference between iReVa and T-Patcher is clarified in our global rebuttal, we wish its content will address your issue.
- Lines 57-59 (Question 1): We apologize for the writing error. The sentence in lines 57-58 should be rewritten as: "In contrast, PMET [2], through a cosine similarity analysis on hidden states, posited that the self-attention module can extract various types of knowledge."
- Reference:
[1] Transformer-patcher: One mistake worth one neuron. In The Eleventh International Conference on Learning Representations, 2023.
[2] Pmet: Precise model editing in a transformer. In AAAI, 2024.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response, and I will be maintaining my positive evaluation of your manuscript. | Summary: This paper focuses on model editing at a low cost. Evidence suggests that modules carrying knowledge in a Transformer module are primarily the MLP blocks. Therefore, the authors propose a method, namely iReVa, to initialize and retrofit key-value pairs into MLP blocks in a Transformer for explicitly inserting new knowledge. Specifically, they insert new neurons in the MLP blocks for each piece of knowledge. Each neuron is initialized with the embedded key and value derived from the input-output pair, respectively. To prevent dramatic change to the irrelevant knowledge, iReVa further retrofits the key and value by fine-tuning with multiple objectives. Compared to the existing methods such as MEND, ROME, MEMIT, and MELO, iReVa reveals better interpretability and stronger capacity for carrying traceable edits. The experiments on zsRE-10K and PARAREL-10K datasets reveal that iReVa has superior performance regarding edit success, generalization, and specificity. Further edit withdrawal test indicates that iReVa can explicitly manipulate the activation of neurons and easily withdraw the edits.
Strengths: 1. This paper focuses on modeling editing, which has significant applications in the era of LLMs. It can be applied to alleviate the hallucination issue of LMs and resolve the out-of-date as well as missing knowledge in an LM.
2. This paper introduces a novel editing method with key-value adaptors for traceable model editing. The proposed method makes sense to me. The initialization with embedded key and value derived from the input-output pair can easily make precise edits to the model. Further retrofitting refines the adaptors to satisfy the task.
3. For experiments, the author has comprehensively shown the superiority of their method in the perspectives of edit success, generalization, and specificity. And more analyses reveal the generalization of iReVa. Particularly, the edit withdrawal test in Section 6.2 is well-designed, which shows the effect of traceable edits and could provide a potential solution for dynamic knowledge maintenance for LMs.
4. Overall, this paper is well-written and easy to follow.
Weaknesses: 1. The discussions on the limitations and broader societal impacts of iReVa are not included in the paper. I have some questions about the application scope of the proposed method. Please see the questions below.
Questions
1. Could iReVa lead to a dramatically increasing number of parameters? Let’s see if there are millions of knowledge for editing, how can you potentially insert all the knowledge into LMs with iReVa?
2. After you change a piece of knowledge, can the reasoning still be conducted for the edited knowledge? For example, if we have edited the president of America, could some reasoning questions like ``Who is the wife of the president of America” also be resolved with the new knowledge?
3. Typo: ``evident’’ in line 6 should be ``evidence’’. Please check.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No, the author should discuss the limitations of the proposed method, such as the application scope, the potential risks, and future improvements, to indicate how robust the results are to violations of the assumptions. I would like the author to add such information during the rebuttal.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Reply to reviewer QtC8
Thank you, Reviewer QtC8, for your valuable feedback! We appreciate your recognition of our work. This response primarily addresses the questions you raised in your review.
- Increasing the number of parameters (Question 1): In Section 6.3 of our paper, we analyzed the space overhead of iReVa, which is 2nd, where n is the number of new knowledge (q & a) pairs and d is the model's embedding (hidden) size. This means that iReVa's space consumption grows linearly with the amount of new knowledge. Theoretically, iReVa's space overhead could be smaller, but this would require the following technique. In practice, a significant portion of the new knowledge added to the model is frequently updated, meaning the answer changes while the question remains the same. For such knowledge, we only need to locate the corresponding neurons Ki, Vi. For example, when inserting new knowledge, we can directly feed the new question and supplementary similar questions into the model with iReVa to see whether any neurons are simultaneously activated by all questions (essentially a threshold-based similarity match). By modifying Vi to adapt to the new answer, no additional space is needed. For new facts, where the question has not been recorded in the model's memory via iReVa, additional space is required.
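The threshold-based similarity match described above might look roughly like this; the function name, threshold value, and activation rule are hypothetical, for illustration only:

```python
import numpy as np

def locate_neuron(questions, K, threshold=0.5):
    """Hypothetical neuron-location routine: encode the question plus
    paraphrases and return indices of inserted neurons activated by ALL of
    them (a threshold-based similarity match on the key matrix K)."""
    active_sets = []
    for q in questions:
        acts = K @ q
        active_sets.append(set(np.flatnonzero(acts > threshold)))
    return set.intersection(*active_sets)

K = np.array([[1.0, 0.0],       # key of knowledge neuron 0
              [0.0, 1.0]])      # key of knowledge neuron 1
paraphrases = [np.array([1.0, 0.1]), np.array([0.9, 0.0])]
assert locate_neuron(paraphrases, K) == {0}
```

Once the shared neuron is located, overwriting its value vector Vi realizes an in-place update without allocating a new neuron, matching the update path sketched above.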
- Reasoning for new knowledge (Question 2): Reasoning has always been a challenging issue in the field of model editing. Against the background of large language models, reasoning ability lacks interpretability, making it difficult for most model editing methods, including iReVa, to apply newly learned knowledge in reasoning tasks. One existing attempt is IKE [1], which is inspired by in-context learning. However, this approach hurts interpretability and locality metrics. Overall, the reasoning ability of current model editing methods often trades off against interpretability, which many researchers are striving to resolve.
- Limitations: Thanks for raising this issue. The limitations of iReVa can be summarized as follows: 1. iReVa performs poorly when the target prompt is a long sentence, because it constructs a knowledge neuron for each token in the target prompt, thereby increasing the training time cost; additionally, during inference, the high number of neurons increases the probability of errors. 2. To maintain iReVa's interpretability, its application scope is limited: iReVa can only be applied to GPT-like models and generation tasks. 3. The performance of iReVa (ES and PS) does not improve noticeably as the scale of the base model grows. These limitations will also be included in our revised version.
- Reference:
[1] Can we edit factual knowledge by in-context learning? Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing.
[2] Editing factual knowledge in language models, 2021.
[3] Fast model editing at scale, 2022.
---
Rebuttal Comment 1.1:
Title: Reply for Authors' Response
Comment: Thanks for your reply. I have read your response carefully. I will keep my initial rating for your manuscript. | Rebuttal 1:
Rebuttal: # Global rebuttal
We appreciate all reviewers' invaluable feedback! This section is our global rebuttal, which addresses common questions raised by multiple reviewers. We hope all reviewers will see it.
## Difference between iReVa and T-Patcher
- Mistaken knowledge: T-Patcher only establishes new neurons for knowledge that the model previously answered incorrectly, while iReVa establishes new neurons for all knowledge, which benefits subsequent neuron management. In real scenarios, the knowledge to be edited may need frequent updates, meaning it might be modified or rolled back in the future. iReVa's neuron management allows for deletion, updating, and other operations on identifiable neurons (by feeding the new question and supplementary similar questions into the model with iReVa to see whether any neurons are simultaneously activated by all questions).
- Backbone difference: According to the experiment section of T-Patcher, they only apply T-Patcher to transformers with an encoder-decoder architecture. T-Patcher's editing position is at the encoder's final layer, unlike iReVa, which operates at the penultimate layer. This will be explained in the following responses.
- Forward propagation differences: During inference, iReVa uses max pooling over the newly inserted neurons to avoid unnecessary noise from activating too many neurons. Furthermore, we do not apply a learnable bias to the newly inserted vectors Ki and Vi.
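A minimal sketch of the max-pooling inference rule described in this bullet, with illustrative shapes and a plain dot-product activation (not the paper's implementation):

```python
import numpy as np

def max_pool_adaptor(q, K, V):
    """Keep only the single most strongly activated inserted neuron
    instead of summing over all of them; this limits noise when many
    neurons fire weakly. No learnable bias on K_i or V_i."""
    acts = K @ q                    # activation of each inserted key
    i = int(np.argmax(acts))        # winning knowledge neuron
    return acts[i] * V[i], i

K = np.array([[1.0, 0.0],           # neuron 0: matches q strongly
              [0.2, 0.2]])          # neuron 1: weak distractor
V = np.array([[5.0, 5.0],
              [7.0, 7.0]])
q = np.array([1.0, 0.0])
out, winner = max_pool_adaptor(q, K, V)
assert winner == 0
```

The winner index also makes each edit traceable at inference time: the neuron that fires identifies which piece of inserted knowledge answered the query.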
- Initialization techniques: In iReVa, we initialize two parameters, Ki and Vi, for new neurons so that the model can answer questions related to new knowledge even without training, demonstrating iReVa's stronger interpretability.
- Different optimization goals: When training iReVa, we do not directly optimize the activation value qKi because iReVa operates at the penultimate layer. Optimizing qKi would only ensure correct output at this layer, not at the final layer. Operating iReVa at the final layer, as T-Patcher does, might result in a loss of locality, whereas operating at the penultimate layer allows for correction by the final layer's frozen parameters. Additionally, with initialized weights, we added constraints on the learnable parameters to prevent them from deviating too far from the initial weights, as our experiments in Section 6.1 showed the effectiveness of our initialization.
- Independent neurons: iReVa establishes a dedicated neuron for each piece of new knowledge. In Algorithm 1 (Section 4.2, Lines 167-170), we only optimize the key-value pair (knowledge neuron) of the current edit question; the key-value pairs of previously edited questions are frozen, so neurons for different pieces of knowledge are independent and do not interfere with each other. By re-feeding the question into the model, the corresponding neuron can be located. In T-Patcher, the batch size during training is set to 32, which means T-Patcher cannot distinguish which of these 32 neurons corresponds to which piece of knowledge.
## Missing of baselines
T-Patcher [1] and GRACE [2] both use encoder-decoder models as their backbone for generation tasks such as zsRE [3], whereas iReVa uses GPT-like models. Specifically, these methods operate on the model's encoder. In T-Patcher's experiments, only encoder-only and encoder-decoder models are used; if we forced T-Patcher into our GPT-like backbone, the implementation would differ remarkably from their source code. GRACE does not apply decoder-only models to zsRE, but it does use GPT models on the hallucination task, which means GRACE might be compatible with GPT series models. Among the methods we compared, MELO [4] has a similar setup to GRACE and shows effectiveness, so we chose it as a representative for comparison.
- Reference:
[1] Transformer-patcher: One mistake worth one neuron. In The Eleventh International Conference on Learning Representations, 2023.
[2] Aging with grace: Lifelong model editing with discrete key-value adaptors. ArXiv, abs/2211.11031, 2022.
[3] Fast model editing at scale, 2022.
[4] Melo: Enhancing model editing with neuron indexed dynamic lora, 2023. | NeurIPS_2024_submissions_huggingface | 2024
One-to-Multiple: A Progressive Style Transfer Unsupervised Domain-Adaptive Framework for Kidney Tumor Segmentation | Accept (poster) | Summary: The paper proposes the One-to-Multiple Progressive Style Transfer Unsupervised Domain-Adaptive (PSTUDA) framework for kidney and tumor segmentation in multi-sequence MRI, addressing inefficiencies in existing one-to-one UDA methods. PSTUDA features a multi-level style dictionary and multiple cascading style fusion modules using point-wise instance normalization to recombine content and style features progressively. Tested on private (MSKT) and public (KiTS19) datasets, PSTUDA showed 1.8% and 3.9% improvements in Dice Similarity Coefficient for kidney and tumor segmentation, respectively, and achieved significant reductions in model parameters and computational costs. The framework outperformed state-of-the-art UDA methods, enhancing both segmentation performance and training efficiency.
Strengths: This paper demonstrates several strengths across various dimensions.
In terms of originality, PSTUDA introduces a relatively interesting approach by employing cascaded style fusion modules and Point-wise Instance Normalization (PIN) to achieve excellent cross-modal alignment and structural consistency. This combination allows for more precise and progressive recombination of content and style features, addressing limitations in existing one-to-one UDA methods and extending the applicability of domain adaptation techniques in medical imaging.
The strong review of related works situates PSTUDA within the broader context of domain adaptation and medical image segmentation.
The paper is well-written and clear, providing detailed explanations of the architecture and methodologies. It emphasizes structural consistency, which is critical for medical image analysis. The comprehensive writing extends to the appendices, offering an appropriate level of detail on datasets, which aids in reproducibility and understanding
PSTUDA’s experimental design is robust, featuring both comparative and ablation studies as well as appropriate metrics and analysis.
This paper is original, detailed, clear, and demonstrates reasonable improvements in segmentation with significant improvements in floating point operations and model size.
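For readers unfamiliar with normalization-based style transfer, the general content/style recombination that PIN refines can be illustrated with classic adaptive instance normalization (AdaIN, Huang & Belongie); PSTUDA's point-wise variant operates at a finer granularity and is described in the paper, so this is only a generic sketch:

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Classic adaptive instance normalization: normalize the content
    feature map per channel, then re-scale and re-shift it with the
    style feature's per-channel statistics."""
    c_mu = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True)
    s_mu = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    return s_std * (content - c_mu) / (c_std + eps) + s_mu

rng = np.random.default_rng(0)
c = rng.normal(0.0, 1.0, size=(2, 8, 8))   # (channels, H, W) content features
s = rng.normal(3.0, 2.0, size=(2, 8, 8))   # style features from target domain
out = adain(c, s)
# the output carries the style's per-channel statistics
assert np.allclose(out.mean(axis=(1, 2)), s.mean(axis=(1, 2)), atol=1e-4)
```

Because only first- and second-order statistics are swapped, spatial structure from the content feature is preserved, which is the structural-consistency property the review highlights.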
Weaknesses: - The reported improvements in Dice Similarity Coefficient (DSC) and 95% Hausdorff Distance (HD95), while notable, are relatively modest (1.8% and 3.9%, respectively).
- The framework's scalability to handle a larger number of target domains in real-world scenarios remains untested.
- The generalizability of this approach beyond simple kidney/tumour segmentation into multi-class intra-organ segmentation or even multi-organ segmentation remains to be seen.
Technical Quality: 3
Clarity: 3
Questions for Authors: - I would agree with the concept of the multiple cascading modules and Point-wise Instance Normalization but am surprised it doesn't benefit the overall segmentation results further. Can the authors provide their insights into why there isn't a more dramatic boost? What future work do they envision to achieve more substantial performance boosts?
- The paper demonstrates the effectiveness of PSTUDA on kidney and tumor segmentation in multi-sequence MRI. Have the authors tested the framework on other medical imaging tasks or modalities, such as CT or ultrasound? What would the expected results and challenges be in other modalities?
- Despite the reduction in model parameters and FLOPs, PSTUDA still appears resource-intensive. How feasible is the deployment of PSTUDA in real-world clinical settings with limited computational infrastructure?
- Are there any plans to release code, pre-trained models, or detailed implementation guides? How can the research community best leverage PSTUDA in their own work?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations and Broader Impacts are appropriately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank you for the positive consideration and the useful feedback on our work. We address all your concerns below.
**W1. About Performance Enhancement.**
The one-to-multiple UDA task presents many additional challenges compared to the one-to-one task. For example, the differences between various target domains are distinct, and a single generator must coordinate the complex mapping relationships among multiple target domains, thereby making the learning process more difficult. However, a single generator can translate one source domain into multiple target domains, significantly reducing the training time and resource costs compared to one-to-one methods. Overall, our findings have shown significant advantages and we believe our approach holds substantial benefits for multi-sequence UDA tasks.
**W2. Scalability to more target domains.**
As you suggested, we have acquired a new set of T1 MRI sequences from our partner hospital to validate the performance of our method in a one-to-five domain adaptive segmentation task. As shown in the table below, our method maintains high performance even with an increased number of target domains, which are derived from real-world clinical scenarios.
| **Methods** | | **CT→T1c** | | | **T1c→FS T2W** | | | **CT→T2W** | | | **CT→DWI** | | | **CT→T1** | |
| :-----------------: | :-------: | :--------: | :-------: | :-------: | :------------: | :-------: | :-------: | :--------: | :-------: | :-------: | :--------: | :-------: | :-------: | :-------: | :-------: |
| | Kidney | Tumor | Avg. | Kidney | Tumor | Avg. | Kidney | Tumor | Avg. | Kidney | Tumor | Avg. | Kidney | Tumor | Avg. |
| Supervised training | 90.74 | 85.69 | 88.22 | 90.14 | 88.73 | 89.44 | 87.53 | 80.68 | 84.11 | 90.20 | 84.76 | 87.48 | 84.40 | 67.42 | 75.91 |
| W/o adaptation | 71.20 | 13.27 | 42.24 | 43.75 | 6.27 | 25.01 | 9.13 | 22.31 | 15.72 | 49.25 | 4.13 | 26.69 | 14.72 | 5.33 | 10.03 |
| StarGAN v2 | 51.87 | 25.06 | 38.47 | 54.18 | 21.94 | 38.06 | 42.10 | 9.08 | 25.59 | 56.17 | 15.14 | 35.66 | 59.38 | 20.32 | 39.85 |
| PSTUDA | **83.88** | **73.88** | **78.88** | **82.89** | **77.08** | **79.99** | **75.84** | **59.73** | **67.79** | **84.37** | **72.15** | **78.26** | **82.28** | **68.58** | **75.43** |
**W3 and Q2. Abdominal multi-organ segmentation.**
Due to the limited availability of medical data, we apologize that we were unable to collect a multi-target-domain abdominal multi-organ dataset within a short timeframe. However, we collected a multi-organ dataset [1-2] consisting of CT and MR images from the internet to validate the proposed method. The experimental results are presented in the table below.
| **Methods** | | | **CT→MR** | | | | | **MR→CT** | | |
| :-----------------: | :-------: | :-------: | :-------: | :-------: | :-------: | :-------: | :-------: | :-------: | :-------: | :-------: |
| | Liver | R. kidney | L. kidney | Spleen | Avg. | Liver | R. kidney | L. kidney | Spleen | Avg. |
| Supervised training | 94.39 | 90.86 | 73.38 | 78.00 | 84.16 | 87.45 | 69.33 | 77.76 | 75.61 | 77.54 |
| W/o adaptation | 23.44 | 1.99 | 12.77 | 20.28 | 14.62 | 37.33 | 0.00 | 0.26 | 1.29 | 9.72 |
| StarGAN v2 | 75.04 | 68.16 | 62.83 | 68.13 | 68.54 | 44.32 | 28.05 | 26.51 | 24.85 | 30.93 |
| PSTUDA | **88.15** | **74.20** | **70.51** | **71.99** | **76.21** | **53.54** | **53.42** | **60.28** | **38.48** | **51.43** |
We observe that although the MR-to-CT results improved by approximately 42% compared to no domain adaptation, there is still significant room for improvement relative to fully supervised training. Furthermore, the CT-to-MR results were better. We attribute this difference to the higher image quality of CT compared to MR, which makes domain adaptation from MR to CT more challenging.
**Q1. Insights and future work.**
As shown in Table 8 of Appendix E, the ablation study indicates that PIN brings significant performance improvements. However, the results for the multi-scale discriminator in Table 7 show that the improvement is not as pronounced. In our future work, we will conduct further investigation into different versions of discriminators. Furthermore, PSTUDA is essentially a multi-domain to multi-domain translation framework. We will also focus on developing a dedicated one-to-multiple UDA framework to achieve more substantial performance improvements.
**Q3. Feasibility of deployment.**
By utilizing the generator and multi-level style dictionary from PSTUDA in a clinical environment, we can achieve rapid one-to-multiple domain adaptation. We conducted simulations of PSTUDA’s performance using minimal computational resources with clinical data from our partner hospitals. Our PSTUDA requires approximately 13M of storage and 15G of computational resources. Thus, we believe that PSTUDA can be easily deployed even in clinical settings with limited computational infrastructure.
**Q4. Code open source.**
We will release the code, pre-trained models, and detailed implementation guidelines upon acceptance.
[1] Kavur, A. E., et al. CHAOS challenge-combined (CT-MR) healthy abdominal organ segmentation. MedIA 2021.
[2] Landman, B., et al. Multi-Atlas Labeling Beyond the Cranial Vault. 2017.
---
Rebuttal Comment 1.1:
Comment: I appreciate the clarification on the complexities of the one-to-multiple UDA task. The trade-offs in coordinating complex mappings among multiple target domains and the resultant benefits in training time and resource costs are well-explained.
The additional validation using the new T1 MRI sequences is adequate.
Consider moving the ablation studies into the main paper.
The simulations indicating the feasibility of deploying PSTUDA with minimal computational resources are reassuring. The specific resource requirements you provided are helpful for understanding the deployment context.
I am pleased to hear about the plans to release the code, pre-trained models, and implementation guidelines upon acceptance.
Overall, your rebuttal has adequately met my concerns. I maintain my score as it is.
---
Rebuttal 2:
Title: Thanks for the feedback!
Comment: Thank you for your positive feedback on our manuscript. We're glad our efforts to address your questions were satisfactory. We will carefully consider your suggestion to move the ablation studies into the main paper. | Summary: The authors propose a novel and efficient One-to-Multiple Progressive Style Transfer Unsupervised Domain-Adaptive (PSTUDA) framework to address the UDA task for MRI sequences. Specifically, they developed a multi-level style dictionary that explicitly stores style information for each target domain at different stages, reducing the burden on a single generator in multi-target transfer tasks and effectively decoupling content and style. Additionally, multiple cascaded style fusion modules are employed, utilizing point-wise instance normalization to progressively recombine content and style features, thereby enhancing cross-modal alignment and structural consistency. Experiments conducted on both the private MSKT and public KiTS19 datasets demonstrate the effectiveness of this approach.
Strengths: 1. A multi-level style dictionary was developed, which reduces the burden on a single generator in multi-target transfer tasks and effectively decouples content and style.
2. Multiple cascaded style fusion modules are employed to progressively recombine content and style features, thereby enhancing cross-modal alignment and structural consistency.
3. The organization of the paper's structure is excellent.
Weaknesses: 1. The open-source code is not explicitly provided.
2. The explanation of the One-to-Multiple Framework is somewhat rough and needs more details.
3. PSTUDA needs to be validated bidirectionally on cross-modal data, such as MRI and CT.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Does reading content from the multi-level style dictionary incur additional overhead? If a new, unknown domain is encountered, learning the multi-level style dictionary would require a warm-up (i.e., there is additional overhead the first time a new domain is encountered).
2. What is puzzling is that Table 6 shows the multi-level style dictionary only takes up 0.1M; does this include tensors?
3. Is there any difference between the Generator and Discriminator Architecture in Section 3.4 and previous work?
4. The theory of achieving independent style transfer for each pixel using Point-wise Instance Normalization (PIN) requires further validation. While it may be effective, the explanation of its application in style transfer is not sufficiently thorough. Additionally, although the targets in medical image segmentation are more precise, this does not necessarily imply that style adjustments need to be more precise. Further explanation is needed.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have addressed the limitations of the current work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate your constructive and insightful comments. We address all weaknesses and questions below.
**W1. Open-source code is not explicitly provided.**
We will release the code, pre-trained models, and detailed implementation guidelines upon acceptance. To ensure reproducibility, we build on the open-source implementations of StarGAN v2 and CycleGAN, and provide all relevant hyperparameters in Appendix B.
**W2. One-to-Multiple framework needs more details.**
We apologize for this and will provide a more detailed description of our One-to-Multiple PSTUDA framework in subsequent versions.
**W3. Bidirectional validation on cross-modal MRI and CT data.**
According to your suggestion, we conducted bidirectional cross-modal validation on the abdominal multi-organ dataset [1-2] and performed reverse validation experiments from MR to CT on the MSKT and KiTS19 datasets. The results are shown in the following two tables. Both sets of experiments demonstrate that our method significantly outperforms StarGAN v2 (the baseline).
| **Methods** | | | **CT→MR** | | | | | **MR→CT** | | |
| :-----------------: | :-------: | :-------: | :-------: | :-------: | :-------: | :-------: | :-------: | :-------: | :-------: | :-------: |
| | Liver | R. kidney | L. kidney | Spleen | Avg. | Liver | R. kidney | L. kidney | Spleen | Avg. |
| Supervised training | 94.39 | 90.86 | 73.38 | 78.00 | 84.16 | 87.45 | 69.33 | 77.76 | 75.61 | 77.54 |
| W/o adaptation | 23.44 | 1.99 | 12.77 | 20.28 | 14.62 | 37.33 | 0.00 | 0.26 | 1.29 | 9.72 |
| StarGAN v2 | 75.04 | 68.16 | 62.83 | 68.13 | 68.54 | 44.32 | 28.05 | 26.51 | 24.85 | 30.93 |
| PSTUDA | **88.15** | **74.20** | **70.51** | **71.99** | **76.21** | **53.54** | **53.42** | **60.28** | **38.48** | **51.43** |
| **Methods** | | **MR(T1c)→CT** | | | **MR(FS T2W)→CT** | |
| :-----------------: | :-------: | :------------: | :-------: | :-------: | :---------------: | :-------: |
| | Kidney | Tumor | Avg. | Kidney | Tumor | Avg. |
| Supervised training | 91.98 | 66.61 | 79.30 | 91.98 | 66.61 | 79.30 |
| W/o adaptation | 41.87 | 16.26 | 29.07 | 8.07 | 11.36 | 9.72 |
| StarGAN v2 | 34.44 | 24.02 | 29.23 | 46.17 | 22.96 | 34.57 |
| PSTUDA | **73.13** | **56.23** | **64.68** | **78.13** | **50.85** | **64.49** |
**Q4. Theory of PIN.**
We believe that PIN has significant advantages in certain complex scenarios. For example, in kidney tumor medical imaging data, the contrast, texture, and morphology of lesion areas differ significantly from those of normal tissues. These differences ultimately stem from variations in the data distribution. In such cases, PIN customizes a set of scaling parameters for each spatial location of every channel in the content feature map, allowing more precise alignment of local differences and thereby making the translated images closer to the real target domain's data distribution. The calculation formula for PIN is:
$$
y_{nchw} = \gamma_{nchw}(v_{ss}) \cdot \hat x_{nchw} + \beta_{nchw}(v_{ss})
$$

$$
\hat x_{nchw} = \frac{x_{nchw} - \mu_{nc}}{\sqrt{\sigma_{nc}^2 + \epsilon}}
$$

$$
\gamma_{nchw}(v_{ss}),\ \beta_{nchw}(v_{ss}) = \mathrm{chunk}(h_{n(2c)hw})
$$

$$
h_{n(2c)hw} = \mathrm{ConvBlock}(v_{ss})
$$
where $\gamma$ and $\beta$ are obtained through convolutional transformations via $v_{ss}$, and their dimensions are consistent with those of $\hat x_{nchw}$.
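To make the normalization concrete, here is a minimal NumPy sketch of PIN following the equations above. The `proj` callable stands in for the `ConvBlock` that maps $v_{ss}$ to per-pixel modulation maps; all names and shapes here are ours for illustration, not the authors' implementation.

```python
import numpy as np

def pin(x, v_ss, proj, eps=1e-5):
    """Point-wise Instance Normalization (PIN), a minimal sketch.

    x:    content features, shape (N, C, H, W)
    v_ss: style vector for the target domain
    proj: stand-in for the ConvBlock mapping v_ss to modulation
          maps with 2C channels at every spatial location
    """
    n, c, h, w = x.shape
    # Instance normalization: per-sample, per-channel statistics.
    mu = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    x_hat = (x - mu) / np.sqrt(var + eps)
    # Per-pixel gamma/beta from the style vector ("chunk" splits
    # the 2C channels into two halves).
    mod = proj(v_ss).reshape(n, 2 * c, h, w)
    gamma, beta = mod[:, :c], mod[:, c:]
    return gamma * x_hat + beta
```

Unlike AdaIN, where a single $(\gamma, \beta)$ pair modulates an entire channel, here every spatial position receives its own pair.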
**Q4 (continued). Effectiveness and Interpretation of PIN.**
We conducted a comparison of PIN with the widely used AdaIN [3] and BIN [4], as shown in the table below. The results show that our PIN achieves the best performance. Theoretically, fully supervised segmentation results should be the upper limit for UDA, as it is conducted within the same domain, with both the training and test sets belonging to the same distribution. The key challenge in UDA tasks lies in addressing the differences in data distribution between different domains. Hence, the closer the distribution of the translated images is to the real target domain’s data distribution, the better the segmentation performance. Thus, we believe that the precise style adjustments in PIN will further reduce data distribution differences, leading to more accurate target segmentation.
| **Normalization** | | **T1c→FS T2W** | | | **T1c→T2W** | | | **T1c→DWI** | |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| | Kidney | Tumor | Avg. | Kidney | Tumor | Avg. | Kidney | Tumor | Avg. |
| AdaIN | 85.05 | 62.11 | 73.58 | 75.40 | 43.32 | 59.36 | 83.69 | 64.44 | 74.07 |
| BIN | 82.32 | 67.91 | 75.12 | 74.14 | 49.02 | 61.58 | 85.85 | 65.07 | 75.46 |
| PIN | **86.30** | **76.36** | **81.33** | **77.26** | **53.77** | **65.52** | **86.99** | **74.23** | **80.61** |
**Additional comment:** We would have liked to include the rest of the answers to the questions raised by Reviewer **tgyp** (marked as **Q1**, **Q2**, and **Q3**). Unfortunately, we did not have enough space in this rebuttal box. As soon as the discussion phase begins, we will include these answers in an additional comment for the reviewer.
---
Rebuttal 2:
Title: Response to additional questions
Comment: As we mentioned in the main rebuttal, we include the answers to the additional questions of Reviewer **tgyp**. We hope that this helps to address all remaining concerns and we thank again for taking the time to review our work.
**Q1. Overhead and warm-up for multi-level style dictionary.**
In our experiments, the multi-level style dictionary essentially consists of a set of learnable tensors. Content is accessed through indexing, and as the style vectors are independent of each other, this does not result in any additional overhead.
Our method primarily focuses on adapting a source domain to multiple target domains simultaneously. All domains are fixed and visible during training, so there are no encounters with new unknown domains. Additionally, the multi-level style dictionary is randomly initialized at the beginning of training and does not require a warm-up phase.
**Q2. Size of Multi-level Style Dictionary.**
In our experiments, the multi-level style dictionary is directly created as a set of learnable tensors. Specifically, its dimensions are (4, 6, 4096), where 4 represents the number of domains, 6 represents the number of style fusion layers (four layers in the style fusion module and two layers in the decoder), and 4,096 represents the depth of the style dictionary. Therefore, this tensor contains a total of 4×6×4096=98,304 parameters, which converts to approximately 0.10M.
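As a sanity check on the parameter count described above, a small NumPy illustration (zeros are placeholders for the learned values):

```python
import numpy as np

# Multi-level style dictionary as one tensor of shape
# (num_domains, num_fusion_layers, style_depth).
style_dict = np.zeros((4, 6, 4096))

n_params = style_dict.size   # 4 * 6 * 4096 = 98,304, i.e. ~0.10M
v_ss = style_dict[2, 0]      # lookup: domain 2, first fusion layer
```

Reading a style vector is plain tensor indexing, consistent with the claim that dictionary lookups add no meaningful overhead.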
**Q3. Generator and Discriminator Architecture.**
* **Generator:** PSTUDA uses the CycleGAN generator. The difference lies in the replacement of normalization operations in the style fusion module and decoder from AdaIN to our proposed PIN. Additionally, we employ a progressive style injection method layer by layer to achieve image translation.
* **Discriminator**: PSTUDA’s discriminator architecture is based on the multi-scale discriminator from MUNIT. The difference is that we replace the Conv2dBlock in MUNIT with ResBlk, which has a residual structure. Additionally, the multi-scale output sizes are set to 1/16, 1/32, 1/64, and 1/128 of the original image size. Our motivation for using a multi-scale discriminator is that the generator, integrated with PIN, will have enhanced generative performance. Therefore, the discriminator also needs to be more powerful to match the generator, enabling better adversarial training and effectively leveraging the generator’s capabilities.
[1] Kavur, A. E., et al. CHAOS challenge-combined (CT-MR) healthy abdominal organ segmentation. MedIA 2021.
[2] Landman, B., et al. Multi-Atlas Labeling Beyond the Cranial Vault. 2017.
[3] Huang, X., et al. Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization. ICCV 2017.
[4] Nam, H., et al. Batch-Instance Normalization for Adaptively Style-Invariant Neural Networks. NeurIPS 2018.
---
Rebuttal Comment 2.1:
Comment: The authors provided bidirectional cross-modal experimental results, further demonstrating the generalization capability of the method. Additionally, they addressed each of the raised questions individually and promised to make revisions in the revised version. Therefore, I suggest changing the score to 5: Borderline Accept.
Furthermore, there is a minor issue: I hope the authors can specify the precision used in the model, such as whether it is FP16, in the revised version.
---
Reply to Comment 2.1.1:
Title: Thanks for the feedback!
Comment: Thank you for recognizing our efforts in the rebuttal. We greatly appreciate your consideration in raising the rating of our paper. Your feedback has been invaluable to our work. Based on your suggestions, we will specifically clarify the precision used in the model in the revised version. | Summary: The paper presents a one-to-multiple progressive style transfer unsupervised domain adaptation framework designed for kidney and tumor segmentation.
It aims to mitigate the challenges of annotation burden and domain differences by employing a multi-level style dictionary and cascading style fusion modules.
It demonstrates significant improvements in segmentation performance and efficiency on the MSKT and KiTS19 datasets.
Strengths: x. The introduction of a multi-level style dictionary and point-wise instance normalization (PIN) for progressive style transfer that effectively decouples content and style features.
x. It significantly reduces the floating-point computation by approximately 72% and the number of model parameters by about 50%, highlighting its efficiency and feasibility for practical clinical applications.
Weaknesses: x. **Concern about novelty.**
First, the authors extend the UDA setting to 'one-to-multiple'. I am not sure about its practical or clinical relevance here compared to continual UDA or UDA on evolving domains.
Second, there seems to be limited novelty compared to the architecture of OMUDA derived from StarGAN v2.
x. **Concern about the effectiveness of PIN**:
Point-wise Instance Normalization is a central component of the framework, but its effectiveness compared to other normalization techniques is not thoroughly evaluated. There might be scenarios where the PIN is less effective, and alternative normalization methods like Adaptive Instance Normalization or Batch-Instance Normalization might perform better.
x. **Concern on analysis.**
The contributions of the multi-level style dictionary and the cascading style fusion modules are not analyzed separately; understanding the contribution of each part would help in refining the model. The sensitivity of the model’s performance to various hyperparameters, such as the temperature parameter ($\tau$), the number of style fusion modules, and the depth of the style dictionary, is not extensively studied.
Technical Quality: 2
Clarity: 3
Questions for Authors: Please see my comments above.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate your constructive and insightful comments. We address all weaknesses below.
**W1. Practical and clinical relevance of PSTUDA.**
PSTUDA is designed as a one-to-multiple UDA framework based on multi-sequence MRI segmentation tasks and thus has strong clinical relevance. Compared to methods that train on target domains sequentially (including continual UDA and UDA on evolving domains), PSTUDA offers the following advantages:
* **Flexibility:** Sequential methods treat all target domains as a single domain during training, which prevents them from generating images of the specified target domain during inference, potentially leading to inaccurate results. In contrast, PSTUDA can generate images of the specified target domain as needed during inference. This flexibility meets the diverse requirements of clinical applications.
* **Training efficiency:** Sequential methods train in a serial manner, whereas PSTUDA trains all source and target domains in parallel, significantly reducing training time. Additionally, PSTUDA’s generator and multi-level style dictionary enable quick deployment and fast inference, meeting clinical demands.
**W1. Novelty of PSTUDA.**
PSTUDA is inspired by OMUDA and StarGAN v2. However, we believe there are some fundamental differences between our method and these models:
* **Style feature acquisition:** OMUDA extracts style features from a style encoder or mapping network, which lacks representativeness and stability. In contrast, PSTUDA addresses this issue by using a multi-level style dictionary to directly learn the global style features of each domain.
* **Generator architecture:** Both OMUDA and PSTUDA use the CycleGAN generator. However, OMUDA uses the same style features in each layer of AdaIN, while PSTUDA replaces AdaIN with PIN and progressively injects different style features layer by layer to achieve image translation.
* **Discriminator architecture:** OMUDA uses the StarGAN v2 discriminator, whereas PSTUDA employs a multi-scale discriminator to better leverage the capabilities of its powerful generator integrated with PIN.
The ablation experiments in Appendix E demonstrate the effectiveness of our method.
**W2. Evaluation of PIN effectiveness.**
AdaIN [1] and BIN [2] provide global scaling parameters for each channel of the content feature map. In contrast, our PIN offers unique scaling parameters (mean and standard deviation) for each local spatial point in each channel.
We believe that PIN is particularly valuable for fine-grained segmentation tasks due to its ability to consider local style differences, enabling PSTUDA to generate synthetic images that better match the target domain data distribution.
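The distinction can be illustrated by the shapes of the modulation parameters; the dimensions below are made up for the example:

```python
import numpy as np

# AdaIN applies one (gamma, beta) pair per channel, while PIN
# produces a separate pair at every spatial location.
N, C, H, W = 2, 8, 16, 16
x_hat = np.random.randn(N, C, H, W)        # instance-normalized content

gamma_adain = np.random.randn(N, C, 1, 1)  # global per-channel scale
gamma_pin = np.random.randn(N, C, H, W)    # per-pixel scale

y_adain = gamma_adain * x_hat              # broadcasts over H, W
y_pin = gamma_pin * x_hat                  # element-wise, locally adaptive
```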
To demonstrate our perspective, we conducted ablation experiments on the three methods, as presented in the table below. The results indicate that PIN outperforms AdaIN and BIN in terms of performance, especially in kidney tumor segmentation. This superiority can be attributed to the fact that kidney tumors, being abnormal pathological tissues, exhibit significant style differences which can be finely processed by PIN.
| **Normalization** | | **T1c→FS T2W** | | | **T1c→T2W** | | | **T1c→DWI** | |
| :---------------: | :-------: | :------------: | :-------: | :-------: | :---------: | :-------: | :-------: | :---------: | :-------: |
| | Kidney | Tumor | Avg. | Kidney | Tumor | Avg. | Kidney | Tumor | Avg. |
| AdaIN | 85.05 | 62.11 | 73.58 | 75.40 | 43.32 | 59.36 | 83.69 | 64.44 | 74.07 |
| BIN | 82.32 | 67.91 | 75.12 | 74.14 | 49.02 | 61.58 | 85.85 | 65.07 | 75.46 |
| PIN | **86.30** | **76.36** | **81.33** | **77.26** | **53.77** | **65.52** | **86.99** | **74.23** | **80.61** |
**Additional comment:** We would have liked to include the rest of the answers to the questions raised by Reviewer **yJwR** (marked as **W3**). Unfortunately, we did not have enough space in this rebuttal box. As soon as the discussion phase begins, we will include these answers in an additional comment for the reviewer.
---
Rebuttal Comment 1.1:
Title: thank you.
Comment: Most of my concerns are addressed. Hence I raised the score to 5.
Please include the new results in the manuscript if it is accepted.
---
Rebuttal 2:
Title: Response to additional questions
Comment: As we mentioned in the main rebuttal, we include the answers to the additional questions of Reviewer **yJwR**. We hope that this helps to address all remaining concerns and we thank again for taking the time to review our work.
**W3. Ablation Study on key hyperparameters affecting model performance.**
Our main contributions lie in the multi-level style dictionary and the style fusion module. These two modules have the following hyperparameters: the depth of the style dictionary, the number of levels in the style dictionary, and the number of style fusion modules. Note that the number of levels in the style dictionary is equal to the number of style fusion modules.
To investigate the sensitivity of model performance to these hyperparameters, following your suggestions we conducted extensive ablation studies on the depth of the multi-level style dictionary (MSD) and the number of style fusion modules (SFM). The results, presented in the tables below, indicate that under the original settings (dictionary depth of 4,096 and 4 style fusion modules), PSTUDA performs optimally in most experiments. Notably, when we extended the dictionary depth to 16,384, segmentation performance on the T2W sequence improved.
| **MSD Depth** | | **T1c→FS T2W** | | | **T1c→T2W** | | | **T1c→DWI** | |
| :-----------: | :-------: | :------------: | :-------: | :-------: | :---------: | :-------: | :-------: | :---------: | :-------: |
| | Kidney | Tumor | Avg. | Kidney | Tumor | Avg. | Kidney | Tumor | Avg. |
| 256 | 84.85 | 66.34 | 75.60 | 70.55 | 43.85 | 57.20 | 83.15 | 63.52 | 73.34 |
| 1024 | 85.84 | 69.68 | 77.76 | 70.06 | 42.76 | 56.41 | 83.99 | 64.41 | 74.20 |
| 4096 | **86.30** | **76.36** | **81.33** | 77.26 | 53.77 | 65.52 | **86.99** | **74.23** | **80.61** |
| 16384 | 84.63 | 66.74 | 75.69 | **77.63** | **57.49** | **67.56** | 83.31 | 68.78 | 76.05 |
| **SFM Number** | | **T1c→FS T2W** | | | **T1c→T2W** | | | **T1c→DWI** | |
| :------------: | :-------: | :------------: | :-------: | :-------: | :---------: | :-------: | :-------: | :---------: | :-------: |
| | Kidney | Tumor | Avg. | Kidney | Tumor | Avg. | Kidney | Tumor | Avg. |
| 2 | 85.65 | 71.77 | 78.71 | 73.94 | 44.51 | 59.23 | 83.89 | 61.31 | 72.60 |
| 4 | **86.30** | **76.36** | **81.33** | **77.26** | **53.77** | **65.52** | **86.99** | **74.23** | **80.61** |
| 6 | 84.49 | 67.94 | 76.22 | 74.97 | 50.57 | 62.77 | 81.20 | 60.44 | 70.82 |
| 8 | 84.09 | 63.86 | 73.98 | 60.06 | 30.43 | 45.25 | 83.75 | 60.60 | 72.18 |
[1] Huang, X., et al. Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization. ICCV 2017.
[2] Nam, H., et al. Batch-Instance Normalization for Adaptively Style-Invariant Neural Networks. NeurIPS 2018.
---
Rebuttal 3:
Title: Thanks for the feedback!
Comment: We are pleased to have addressed most of your concerns and we sincerely appreciate you raising our paper's score to 5: Borderline Accept. Your feedback has been invaluable to our work, and based on your suggestions, we will include the new supplementary experimental results in the revised version. Thank you for your support! | null | null | Rebuttal 1:
Rebuttal: Dear reviewers and AC,
We sincerely appreciate your valuable time and effort spent reviewing our manuscript. We would like to thank all the reviewers for providing insightful comments and valuable suggestions. The valuable feedback from the reviewers has significantly contributed to enhancing the quality of our manuscript.
Based on the comments from the reviewers, we have summarized the strengths of our paper as follows:
* **Method: [Reviewer yJwR, tgyp, pquR]** PSTUDA introduces a relatively interesting approach by employing cascaded style fusion modules and Point-wise Instance Normalization (PIN) to achieve excellent cross-modal alignment and structural consistency. The introduction of a multi-level style dictionary and point-wise instance normalization (PIN) for progressive style transfer that effectively decouples content and style features. A multi-level style dictionary was developed, which reduces the burden on a single generator in multi-target transfer tasks and effectively decouples content and style. Multiple cascaded style fusion modules are employed to progressively recombine content and style features, thereby enhancing cross-modal alignment and structural consistency.
* **Experiment: [Reviewer yJwR, pquR]** It significantly reduces the floating-point computation by approximately 72% and the number of model parameters by about 50%, highlighting its efficiency and feasibility for practical clinical applications. PSTUDA’s experimental design is robust, featuring both comparative and ablation studies as well as appropriate metrics and analysis. This paper is original, detailed, clear, and demonstrates reasonable improvements in segmentation with significant improvements in floating point operations and model size.
* **Expression: [Reviewer tgyp, pquR]** The organization of the paper's structure is excellent. The paper is well-written and clear, providing detailed explanations of the architecture and methodologies. The comprehensive writing extends to the appendices, offering an appropriate level of detail on datasets, which aids in reproducibility and understanding.
* **Impact: [Reviewer pquR]** It emphasizes structural consistency, which is critical for medical image analysis. The strong review of related works situates PSTUDA within the broader context of domain adaptation and medical image segmentation. This combo allows for more precise and progressive recombination of content and style features, addressing limitations in existing one-to-one UDA methods and extending the applicability of domain adaptation techniques in medical imaging.
We summarized our novelty as follows:
* Unsupervised domain adaptation is a mainstream method for addressing domain distribution discrepancies. However, existing UDA methods are mostly limited to one-to-one domain adaptation, which is often inefficient and resource-intensive when dealing with multi-sequence medical image domain adaptation tasks. To address this challenge, we propose a novel and efficient one-to-multiple progressive style transfer UDA framework. It uses a single generator to simultaneously translate one source domain to multiple specified target domains. Compared to one-to-one methods, our approach not only achieves optimal performance but also significantly reduces model parameters and floating-point computation, highlighting its efficiency and feasibility in practical clinical applications.
* We developed a multi-level style dictionary to explicitly store the style information of each target domain at different stages, which alleviates the burden on a single generator in multi-target domain adaptation tasks and achieves effective decoupling of content and style.
* We employed multiple cascaded style fusion modules that progressively utilize point-wise instance normalization to inject local styles into content features. This fine-grained style transfer further reduces domain discrepancies and enhances cross-modal alignment and structural consistency.
Additionally, we have completed the theoretical proof and experimental validation of PIN, bidirectional cross-modal experiments for abdominal multi-organ segmentation, generalization experiments for more target domains, and all supplementary ablation experiments as requested by the reviewers.
We addressed each comment and question in detail below, and we kindly request the Reviewers **yJwR** and **tgyp** to reconsider our work in light of these aspects and support our efforts. If there are any further questions or concerns, we would be happy to discuss them with you.
We strongly believe that PSTUDA can be a useful addition to the NeurIPS community and above innovative contributions elevate the value and significance of our research in the realm of multi-target medical image domain adaptation and provide useful references and insights for future research.
Thank you very much!
Best regards,
Authors. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Stopping Bayesian Optimization with Probabilistic Regret Bounds | Accept (poster) | Summary: The paper introduces a Monte-Carlo-based stopping criterion for Bayesian optimization tasks, which has been neglected in the Bayesian optimization literatures. The authors provide a generic algorithm that can be tailored to Bayesian optimization setting and establish theoretical results and demonstrate numerical performances.
Strengths: The paper provides many interesting theoretical tools with relevant references that could be potentially useful in establishing stopping rules and other understandings in Bayesian optimization tasks.
Weaknesses: The paper's writing can be improved significantly. Although the paper introduces the most generic version of the algorithm in Algorithm 1, to actually use it in the BO setting one needs to calibrate the parameters $(\delta^t_{est})_{t\in \mathbb{N}}$, which is highly nontrivial. Instead of presenting Algorithm 1, readers would benefit more from seeing an algorithm directly targeted to the Bayesian optimization setting. Alternatively, it might be better if the authors titled the paper more broadly and considered applications beyond Bayesian optimization. Section 3.2, in particular, needs to be modified to clarify its connection to the Bayesian optimization setting. A proper description of the parameter scheduling and an end-to-end algorithmic description would make the paper more solid and appreciated by a wider audience.
Technical Quality: 2
Clarity: 1
Questions for Authors: 1. The regret defined in (2) is not a simple regret, and it does not accurately measure the performance of the optimization, as the objective function $f$ is not used; instead, a time-varying sequence of objective functions $(f_t)_{t \in \mathbb{N}}$ is considered. Can you clarify what the regret means in this case? It seems to make more sense to stop the BO algorithm when $f(x^*) - f(x_t) \le \epsilon$ with probability at least $1-\delta$. In the later parts of the manuscript, $f_t$ seems to be a sample path, and if this is the case, algorithms like GP-TS would easily satisfy the condition, which the authors refer to as a probabilistic regret bound, in the early stage of the algorithm. Such a condition would be virtually non-informative.
2. In line 110, the authors claim that for each draw of $f_t$, the indicator can be evaluated using gradients. Can you elaborate more on this?
Confidence: 3
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: Not applicable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for reviewing our submission. We are sorry to hear that you found our presentation of the material underwhelming. Some of your feedback was similar to that of another reviewer, so please see our global response for additional details.
**What does $f_t$ represent?**
Per Section 2, $f_t$ denotes the posterior at time $t$, and this remains the case throughout the paper. In Section 3.1, we do talk about generating sample paths, but we usually say things like "draws of $f_t$". The only exception we have found is on line 102, where it says "drawing functions $f_t$" instead of "generating function draws of $f_t$". We will amend this line (and any others) to avoid confusion.
This also addresses your concern that algorithms like GP-TS would easily satisfy the proposed criterion. This is not the case because we use Monte Carlo methods to integrate out $f_t$ when evaluating (3).
**$r_t(x)$ is not a simple regret**
Equation (2) defines the model-based (simple) regret as $r_t(x) = f_t^* - f_t(x)$, where $f_t^* = \sup_{x \in \mathcal{X}} f_t(x)$. Like $f_t$, $r_t(x)$ is a random variable. It denotes our regret for taking $x$ as our solution _according to the model_. Said differently, if we believe that the true function was drawn from the prior and that observations are indeed corrupted by independent Gaussian noise with variance $\gamma^2$, then $P(r_t(x) < \epsilon)$ is the subjective probability that $x$ satisfies the criterion given what is known at time $t$.
Hence, the proposed stopping rule does exactly what you suggested: it stops when the regret is less than $\epsilon$ with probability at least $1 - \delta$. The only difference is that the regret in question is defined under the model because the true supremum (hence the true regret) is unknown.
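Concretely, the resulting stopping test can be sketched as follows — a minimal Monte Carlo version assuming posterior draws of $f_t$ are available on a finite grid (the helper name and the toy posterior are illustrative, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def prb_estimate(paths, x_idx, eps):
    """Fraction of sampled paths whose model-based regret at grid index
    x_idx is at most eps; each row of `paths` is one draw of f_t, and
    its row-wise max stands in for f_t^*."""
    regrets = paths.max(axis=1) - paths[:, x_idx]
    return float((regrets <= eps).mean())

# Toy posterior on a 5-point grid with very low remaining uncertainty,
# so the incumbent (index 2) should trigger stopping.
mean = np.array([0.0, 0.9, 1.0, 0.4, -0.2])
cov = 1e-4 * np.eye(5)
paths = rng.multivariate_normal(mean, cov, size=4000)

p_hat = prb_estimate(paths, x_idx=2, eps=0.1)
stop = p_hat >= 1 - 0.05  # stop once P(regret <= eps) >= 1 - delta
```

The row-wise maximum is only an approximation of the model's supremum, which is why the full method optimizes each sampled path rather than evaluating it on a fixed grid.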
**Scheduling $\delta_{\text{est}}^t$**
We agree that additional details are needed here and will include them in future versions. In our experiments, we set $\delta_{\text{est}}^t = T^{-1}$ where $T$ was an upper bound on the number of queries (line 245). As seen in Figure 3, the number of samples requested by Algorithm 1 increased very slowly in $\delta_{\text{est}}$, so this upper bound can be loose (e.g. hundreds of times larger than the anticipated number of queries). As will be shown in future versions, this trend holds for different approaches to bounding estimation errors, such as Clopper-Pearson intervals and equal-tailed Bayesian credible intervals with uninformative priors.
In cases where no such upper bound is available, we recommend using an "outer schedule" $(\delta_{\text{est}}^t)$ that is similar to the "inner schedule" for $(n_j)$ used by Algorithm 1, which is discussed in Appendix B.1. Under this approach, the stopping criterion is only evaluated at steps $\beta^i t_0$ for $i \in \mathbb{N}$, where $t_0 \in \mathbb{N}$ is an initial time and $\beta > 1$ is a scheduling parameter. The choice of $\beta$ will depend on the ratio of the costs of making a query versus the cost of evaluating the stopping rule (approaching one as the ratio goes to infinity). Relative to the convergence time $T_0$ in Proposition 1 (and $S$ in the proof of Proposition 2), this approach results in at most $\beta$ times more queries.
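The geometric "outer schedule" described above can be sketched as follows (the helper name is ours; the rebuttal only specifies the evaluation steps $\beta^i t_0$):

```python
def check_times(t0, beta, horizon):
    """Steps beta**i * t0 (rounded down) at which the stopping rule is
    evaluated, up to a query horizon. Relative to checking at every
    step, this makes at most a factor of beta more queries before
    stopping is detected."""
    times, i = [], 0
    while True:
        t = int(beta ** i * t0)
        if t > horizon:
            break
        if not times or t > times[-1]:  # skip duplicates from rounding
            times.append(t)
        i += 1
    return times

ts = check_times(t0=10, beta=1.5, horizon=100)
```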
---
Rebuttal Comment 1.1:
Title: Response to the authors
Comment: This usage does not really convince me of the model-based regret. Although Bayesian optimization algorithms endow the objective function with a probabilistic structure, it often just serves as an intermediate assumption to derive an algorithm. The algorithm itself has a strong performance guarantee under model misspecification (e.g., objective function in the RKHS). If one must use such a model-based regret, this paper needs to emphasize the importance of correct model specification and explicitly state the limitations in the misspecified model setup. In practice, there are a substantial number of successful BO examples with model misspecification. I therefore do not find the framework provided in this paper attractive or practical, and I will retain my score.
---
Reply to Comment 1.1.1:
Title: Regarding model mismatch
Comment: We are surprised to see that your review has pivoted to focusing on model misspecification and request you discuss this matter with your fellow reviewers. Our (potentially biased) view is that we have been very upfront about this issue. Here is some evidence to support this claim:
- We state in the abstract that our goal is to find an $\epsilon$-optimal point with probability at least $1 - \delta$ _under the model_.
- We reinforce this point in the introduction by discussing how model misspecification is one of two key issues that has likely impeded the development of model-based stopping rules; and, we explicitly tell the reader on line 31 that our work only provides "mild commentary" on this topic.
- We qualify statements by saying "under the model" or "according to the model" more than 10 times in the main text.
- We refer to (2) as the "model-based regret" on line 81.
- The explanation of our main convergence result, Proposition 2, says: "If the model is correct, we can therefore promise to return a satisfactory solution with high probability."
- Model mismatch is the focus of Section 5.3.
- In our conclusion, we say: "If data is generated according to the model, we can therefore guarantee that BO is likely to return a satisfactory solution."
To be clear, our intention is not to say that your dissatisfaction is unwarranted. We fully agree that model misspecification is a major issue. At the same time, we argue that we (as a community) can only make progress on model-based stopping if we do so incrementally. To our knowledge, our contributions are more than incremental:
- We are the first to provide actual performance guarantees (even if they are defined under a model).
- Our convergence results are much more general than any preceding work.
- We demonstrate significantly superior empirical performance.
If you disagree, then please explain why or point to related works that have done a better job.
---
Rebuttal 2:
Title: Re: Re: on model misspecification
Comment: Thanks for the comment as well.
I think this paper has many interesting technical contributions that could be potentially useful in developing stopping rules for Bayesian optimization. However, to make my stance clearer, what I find more appropriate is to make sure the stopping criterion reflects how close the output of the algorithm is to the optimum of the objective function. As the authors clarified in the response, the stopping criterion would be appropriate without model misspecification; however, since a substantial number of BO applications fall into the misspecified model setting, if one develops a model-based stopping criterion, one must also provide diagnostics/heuristics to guide practitioners to safely use the tools. Currently, I find the descriptions in Section 5.3 insufficient. If the authors are willing to provide more down-to-earth guidance on the choice of the number of iterations to make sure the posterior Gaussian process can provide a reasonable stopping criterion for deterministic black-box optimization in the manuscript, I am willing to raise the score a bit.
---
Rebuttal Comment 2.1:
Title: Re^3: model misspecification
Comment: As you've said, we view our work as more of a step in the right direction than a final solution. It is vital that readers understand both the strengths and weaknesses of methods such as ours in order to make informed decisions. Thank you for stressing this point. We will provide an extended discussion of this topic regardless of whether or not you choose to adjust your score. | Summary: This paper develops an $(\varepsilon,\delta)$ stopping criterion for Bayesian optimization algorithms. The authors propose a probabilistic regret bound estimator, which is constructed by sampling the function and finding the maximum points of these samples, to decide when to stop the algorithm. They also give a theoretical analysis of this estimator and show their results in numerical experiments.
Strengths: * The paper is well written and well organized.
* The problem formulation is clear.
* The algorithms are supported by some theoretical results (though I think the assumptions have some problems).
Weaknesses: My concern is about Assumption A.3, which is not a common assumption in the BO field. This assumption, as you mentioned, is true for a generalized EI in [1]. However, it is still unknown whether this assumption holds in the noisy case, and I also do not agree that the knowledge gradient is in the form of the generalized EI defined in [1]. I think the UCB algorithm with a fixed coefficient will never satisfy this assumption. As a result, the theoretical analysis is restrictive and only suitable for several specific acquisition functions.
[1] Convergence properties of the expected improvement algorithm with fixed mean and covariance functions.
Technical Quality: 3
Clarity: 3
Questions for Authors: * I didn't understand the right figure in Fig 2. Do you mean that you changed the posterior distribution for the orange, green, and red lines by conditioning on partial information about the real optima?
* How to decide the oracle budget in the experiment section? As the budget is chosen by an oracle, why sometimes the budget may be worse than other methods?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: They adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback. We will respond to each of your comments below.
**Assumption A3**
In order to make high-probability statements about $f$'s global properties, we must eventually learn enough about the function. We chose to enforce this condition by assuming that $(x_t)$ is dense in $\mathcal{X}$ because it is simple to understand and popular strategies exhibit this behavior (in the noise-free setting). This condition can also be guaranteed by using an $\epsilon$-greedy strategy. We argue this practice is desirable because it helps protect against model mismatch and ensures asymptotic performance no worse than that of random search.
This being a paper on Bayesian optimization however, that conclusion is somewhat underwhelming. As you’ve said, the set of acquisition functions (AFs) that satisfy A3 is an open question (even when $f$ is noise-free). Consider the following analysis.
Let $s_t \in \arg\max_{x \in \mathcal{X}} \mu_t(x)$ and $x_t \in \arg\max_{x \in \mathcal{X}} \alpha_t(x)$, where $\alpha_t$ denotes the chosen AF. The proof strategy from [1] is then:
1. Define $\alpha_t$ as a non-negative, continuous function such that $\alpha_t(x) = 0 \iff \sigma_t(x) = 0 \land \mu_t(x) - \mu_t(s_t) \le 0$.
2. Show that $\sigma_t(x_t)$ and $\mu_t(x_t) - \mu_t(s_t)$ are both non-positive as $t \to \infty$.
3. Show that $\alpha_t(x_t)$ therefore vanishes as $t \to \infty$.
4. Show that (3), together with the no-empty-ball property [1] and the definition of $x_t$, implies that $(x_t)$ is dense in $\mathcal{X}$.
Notice that the proof is actually simpler if $\alpha_t(x) = 0 \iff \sigma_t(x) = 0$, i.e. if a query has no value if and only if it provides no information. This family of AFs not only contains KG but also other common non-myopic AFs (such as variants of entropy search). We will double-check this result and update the paper accordingly. Note that the statement in the paper is not quite correct anyway, since Assumption A2 does not guarantee the NEB property.
If you agree with this analysis, then we hope you will reconsider your position on A3. Requiring the noise-free assumption is undesirable, but relaxing it is beyond the scope of our submission. To our knowledge, previous works have made far stronger assumptions (such as use of GP-UCB with the theoretically correct $\beta_t$) or omitted proofs entirely.
p.s. We also think that UCB with fixed $\beta_t$ will not be space filling. See Section 5 of [2].
**Figure 2**
We will improve the related text, but your interpretation is correct. For instance, $f_t(x) | f_t(x) \le f_t^*(x)$ in green means that we truncated prior to approximating (3), i.e. we sampled $f_t^*$ and averaged over $P(f_t^* - f_t(x) \le \epsilon \mid f_t(x) \le f_t^*)$.
Properly conditioning on $f_t^*$ would require us to truncate everywhere and marginalize over random sets $\arg\max f$, which is intractable. To illustrate this, let $f_1 \sim N(0, 1)$ and $f_2 \sim N(0, 10^{-9})$ be independent and suppose you are told that $\max(f_1, f_2) = 3$. How confident can you be that $f_1 = 3$?
We ultimately ended up sampling functions $f_t$ instead, which avoids this issue. We thought it was interesting to show the impact of different ways of communicating information about the supremum. In hindsight, it may be better to move this to appendices.
**Oracle budgets**
Oracle budgets were defined retroactively by looking at the results and choosing, for each problem, the smallest time $T$ so that $95\%$ of runs met the regret goal (where possible). Table 1 reports medians. If 94 out of 100 runs on a hypothetical problem took 10 trials to satisfy the regret bound and the remaining 6 took 50, then the oracle budget (and hence the median) would be 50. The median stopping times of other methods might be closer to 10 however. We will include problem-specific result tables in extended results that give a more detailed breakdown of our results.
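The retroactive computation described above can be sketched as follows, using the hypothetical 94/6 example from this paragraph (the helper name is ours):

```python
import math

def oracle_budget(stop_times, q=0.95):
    """Smallest horizon T such that at least a fraction q of runs met the
    regret goal within T steps; stop_times[i] is the first step at which
    run i satisfied the goal."""
    times = sorted(stop_times)
    k = math.ceil(q * len(times) - 1e-9)  # need at least k finished runs
    return times[k - 1]

times = [10] * 94 + [50] * 6  # 94 hypothetical runs need 10 steps, 6 need 50
budget = oracle_budget(times)
```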
#### References
[1] Vazquez and Bect, 2010. "Convergence properties of the expected improvement algorithm with fixed mean and covariance functions".
[2] Jones, 2001. "A Taxonomy of Global Optimization Methods Based on Response Surfaces".
---
Rebuttal Comment 1.1:
Comment: I'd like to thank the authors for their response and for clarifying several issues.
I think the analysis process you provided is correct, but it still cannot reduce my doubts about the theoretical part. There are no assumptions about the acquisition function in the algorithm description, but there are many limitations in the theoretical section, which makes the algorithm and the theory in the article inconsistent. This paper still has potential for improvement, such as providing some experiments on the stopping criterion with different acquisition functions, so I will not change the score.
---
Rebuttal 2:
Comment: Thank you for your input! Please humor a quick reply.
In the noise-free setting, we believe we have shown (either in the text or in the follow-up discussion here) that PRB converges for most acquisition functions (AFs). The only well-known exception we are aware of is UCB with a fixed $\beta$ parameter. At the risk of being overly pedantic: this is technically a bandit method and therefore falls outside the scope of our work.
Regarding inconsistencies, assumptions A1-A3 could be communicated earlier in the text. We have not done so because they are only required for our technical analysis. Our algorithm can be used with any search strategy, but convergence can only be guaranteed if additional assumptions are made. This is unavoidable for stopping rules like PRB, since we need to protect against degenerate cases where, e.g., only a single point is ever queried.
Finally, we agree with your comment that we should show results for more than one AF. This is a good suggestion. We actually did run some experiments early on with different AFs for standard problems like Branin, and these were consistent with the results reported in the text. We did/do not have the compute resources to run all of our experiments with multiple AFs, but are happy to add experiments that illustrate how our method interacts with different strategies.
We ask you to reconsider your decision not to revise your score, but respect it either way. | Summary: This paper addresses the challenge of determining when to stop a Bayesian optimization process. Traditional methods rely on exhausting a predefined budget, but this work proposes an alternative approach based on a probabilistic stopping criterion.
Key Contributions:
New Stopping Criterion: The paper introduces a stopping criterion based on the probability that a solution is within a certain threshold ($\epsilon$) of the optimal solution with high confidence (1 - $\delta$). This criterion is more adaptive and user-friendly compared to traditional budget-based stopping rules.
Theoretical Guarantees: The authors prove that Bayesian optimization satisfies this new stopping criterion under mild technical assumptions for Gaussian process priors.
Practical Algorithm: This paper presents a practical algorithm for evaluating the stopping rules using Monte Carlo estimators. This algorithm is designed to be robust against estimation errors.
The paper provides empirical results demonstrating the effectiveness and limitations of the proposed approach.
Strengths: This paper introduces a new stopping criterion for Bayesian optimization based on probabilistic regret bounds, addressing the challenge of indeterminate budget allocation in black-box optimization scenarios. The criterion is formulated as $P(r(x^*) \le \epsilon) \ge 1 - \delta$, where $r(x^*)$ represents the regret of the best solution found, $\epsilon$ is the acceptable regret threshold, and $\delta$ is the confidence parameter. This probabilistic approach provides a more adaptive and theoretically grounded alternative to traditional fixed-budget stopping rules, allowing the optimization process to terminate when a solution is found that is likely to be within $\epsilon$ of the global optimum with high confidence $1-\delta$.
Weaknesses: N/A
Technical Quality: 3
Clarity: 3
Questions for Authors: Assumption 3 (A3) requires the sequence of query locations $(\vec{x}_t)$ to be almost surely dense in the search space $\mathcal{X}$. Is this assumption practical for high-dimensional data?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are pleased to hear that you enjoyed our submission.
**Is A3 practical for high-dimensional problems?**
We give a detailed reply below. The short answer is that what matters in practice is how smooth $f$ is relative to the size of $\mathcal{X}$.
For simplicity, assume $f \sim \mathcal{GP}(0, k)$ where $k$ is a stationary kernel with unit variance. In discussing dimensionality, we are really talking about distances. We used the infinity norm to measure distance in the main text for convenience. From a theoretical perspective, the particular choice of norm isn't so important here: $\mathcal{X}$ is assumed finite-dimensional, so all norms are equivalent. What ultimately matters is the distance induced by $k$. This is easy to see in the sense that even if $||x - x'||$ is very large, the distance passed to the kernel may be very small (i.e., $f$ may be very smooth). Even this distance can be misleading, however; consider a periodic kernel.
It turns out that the right distance for characterizing the supremum is the canonical pseudo-metric (29)
$
d_k(x, x') = \sqrt{k(x, x) - 2 k(x, x') + k(x' , x')}.
$
To build intuition for why this is the appropriate metric, notice that $d_k(x, x')$ goes from zero to two as the correlation between $f(x)$ and $f(x')$ goes from positive to negative one. Denoting this correlation by $\rho$, one can show that
$
\mathbb{E}\left[\max(f(x), f(x'))\right] = \sqrt{\frac{1 - \rho}{\pi}},
$
which exhibits the same trend. Further details can be found in Appendix A.
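This identity is easy to check numerically; a quick Monte Carlo sketch (our own illustration, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_max_mc(rho, n=200_000):
    """Monte Carlo estimate of E[max(f(x), f(x'))] for two standard
    normal values with correlation rho."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    return float(z.max(axis=1).mean())

rho = 0.5
closed_form = float(np.sqrt((1 - rho) / np.pi))  # the formula above
estimate = expected_max_mc(rho)
```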
---
Rebuttal Comment 1.1:
Comment: Thank you for providing a detailed response to my question. I appreciate the clarification regarding the practicality of Assumption 3 (A3) for high-dimensional problems. Your explanation about the role of smoothness of the function $f$ relative to the size of $\mathcal{X}$ is insightful. The discussion on the choice of norm and its equivalence in finite-dimensional spaces, as well as the importance of the kernel-induced distance, is particularly helpful. | Summary: This paper proposes estimators for Bayesian optimisation stopping rules backed by theoretical guarantees. The stopping criterion estimators are probably approximately correct (PAC) and derived from sample-based estimates of algorithm’s simple regret according to a Gaussian process model. Experiments complement the theoretical findings on benchmark problems involving synthetic-data benchmarks and hyper-parameter tuning problems.
Strengths: * The text is mostly well written.
* The theoretical results seem rigorous and the involved analysis techniques might benefit other areas of research in BO and bandits.
* A reasonable number of baselines and benchmarks is included in the experiments and significant improvements are shown for the proposed approach.
Weaknesses: * Some important details of the methodology are not clear, as the description tends to be quite verbose with a lack of technical details. It is not very clear how Algorithm 1 applied within the BO loop, for example.
* Scalability limitations are not discussed.
Technical Quality: 3
Clarity: 3
Questions for Authors: * In Algorithm 1, what exactly is $Z$, given the dependence of $\Psi$ on $\mathbf{x}$?
* It’s not clear how gradients are applied to evaluate the regret-bound indicators, as suggested in line 110. The sampled functions are likely to be non-convex. So, even if running gradient descent from the given $\mathbf{x}$ does not lead to a point $\mathbf{x’}$ satisfying $f_t(\mathbf{x’}) - f_t(\mathbf{x}) > \epsilon$, there’s no guarantee that elsewhere on the search space this would not be achieved.
* What was the runtime for the algorithms in the experiments reported in Table 1? Since the PRB strategy requires multiple samples from the GP at each iteration, it seems to me that the cost of running the algorithm ramps up quite fast. So it’d be good to compare the PRB runtimes against the baselines’ to have an idea of how much more computational resources are required by attempting an early stopping strategy.
* What was the feature map $\boldsymbol{\phi}$ used for the experiments?
* What is $D$ in the definition of $\beta_t$ in line 270?
* Hartmann, Rosenbrock and Branin are classic synthetic test functions for optimisation algorithms. So why are they included under a section labeled “Results on real problems”? That can be somewhat misleading, as we’d usually refer to “real problems” as cases involving real data or challenging application scenarios attached to a real-world application (e.g., complex physics simulators for scientific problems).
* Scalability limitations are not discussed. For instance, the proposed PRB estimator is based on optimisation over samples from a GP, and finding their optima should become harder and harder as the dimensionality of the space grows. Are there useful approximations that could be applied without hurting the theoretical guarantees by too much? Also, the algorithm could be adapted to work with sparse GP models, especially sparse spectrum GPs, given the feature-based approximations for sampling in Eq. 4. However, I’ve missed a discussion on how to adapt the algorithm to scenarios involving large amounts of data (e.g., batch BO).
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Some limitations are discussed in the main text, while other issues, such as scalability, might not have been addressed, as mentioned above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed feedback. We will address most of your points below.
**Missing details**
We hear your comment that important details about how the presented material is used in the context of BO are missing. We will rework the text to ensure this content is clearly communicated. Please see our global response for further details on this topic.
**Scalability**
This is an interesting direction for future work. We agree that works about stopping criteria would ideally address the scaling issues they may invite. At the same time, we believe we have provided first-of-their-kind results for model-based stopping of BO. Further, we have taken the time to study how many theoretical model-based stopping algorithms can be converted to robust, practical ones. You have said that these results not only seem rigorous but are (potentially) beneficial to related fields. We therefore think it fair to say this topic is beyond the scope of our work.
**Questions**
1. The random variable $Z$ is introduced on line 128, and its definition in the specific context of PRB is given on line 134. In writing Section 3.2, we wanted to emphasize the fact that these methods are generic. Based on the feedback we received, however, it seems that this ended up being confusing.
1. What you have said is correct. Please see our global response for discussion of how we optimized $f_t$.
1. Runtimes are shown on the right in Figure 3. We expressed these as a function of the distance between the final estimate $\Psi_t^n(x)$ and the target level $\lambda = 1 - \delta_{\text{mod}}$ because this relationship dominated the observed trends. Re your comments about scalability: the number of observations in this plot is always less than or equal to 512, and this number is anti-correlated with $\lambda$. These details aside, we should better communicate that PRB is *much* more expensive to evaluate than the competing methods. We argue this price tag is acceptable since queries are usually far more expensive than the compute cost; however, it is important that we clearly communicate this information.
1. Feature maps $\boldsymbol{\phi}$ were constructed using a thousand random Fourier features in the manner suggested by [1].
1. We will clarify that $D$, defined in Assumption A1, denotes the dimensionality of the search space.
1. We categorized problems as "real" or "synthetic" depending on whether model hyperparameters were fit online to the data. We will improve this terminology in future versions.
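One common construction of such a feature map can be sketched as follows (a standard cosine-feature variant approximating a squared-exponential kernel; the paper's exact construction from [1] may differ in details):

```python
import numpy as np

rng = np.random.default_rng(0)

def rff_features(X, n_features=1000, lengthscale=1.0):
    """Random Fourier feature map phi such that phi(x) @ phi(x')
    approximates exp(-||x - x'||^2 / (2 * lengthscale**2))."""
    d = X.shape[1]
    W = rng.normal(scale=1.0 / lengthscale, size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = rng.normal(size=(5, 2))
Phi = rff_features(X)
K_approx = Phi @ Phi.T  # approximate kernel matrix
K_exact = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
```

Such a finite feature map also makes posterior function draws cheap to sample and differentiate, which is what the pathwise sampling in Eq. 4 relies on.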
#### References
[1] Sutherland and Schneider, 2015. "On the Error of Random Fourier Features".
---
Rebuttal Comment 1.1:
Comment: Thanks for the response and the clarifications. Here are a few follow-up questions, as I'm still confused about some details.
> 1. The random variable $Z$ is introduced on line 128 and its definition in the specific context of PRB is given on line (134).
Yes, but $Z$'s definition in line 134 is still dependent on some $\mathbf{x} \in \mathcal{X}$, which is not clear. Would the $\mathbf{x}$ at which $Z$ is evaluated correspond to $\mathbf{s}_t$ in Proposition 1?
> 3. Runtimes are shown on the right in Figure 3.
As far as I understand from reading the caption in Figure 3, those runtimes are calculated from sampling Bernoulli random variables mimicking the $Z$'s which would be calculated for PRB estimates. However, within a BO loop, those $Z$'s would require samples from a GP (followed by multi-start gradient descent) to be evaluated, which apparently is not the case in Figure 3. So, to me, it seems that those runtime estimates are not indicative of the actual runtime of estimating PRB within a BO loop.
---
Reply to Comment 1.1.1:
Title: Reply to follow-up questions
Comment: Thanks again for your input. In future versions, the main text will clearly discuss the following details.
**How is $Z$ defined?**
There is one $Z$ for each point at which we evaluated the proposed stopping rule. We evaluated this rule on all previously queried points $\mathbf{x} \in \mathbf{X}_t$ that satisfied a pruning condition
$P(f_t(\mathbf{s}_t) | Rebuttal 1:
Rebuttal: This post discusses feedback that was common to multiple reviewers. Comments raised by individual reviewers will be addressed in subsequent posts.
As a preliminary remark, we note that reviewers seem to think that our submission makes solid contributions to theory and/or practice. The primary criticism seems to have been that important practical details are either unclear or absent. We are committed to improving our work and hope that you will consider revising your scores if you feel that we have adequately addressed your concerns.
**Presentation**
In writing this paper, we attempted to provide generic explanations for the core components of our algorithm. Based on the feedback we received, it is clear that this ended up being confusing at times and that greater emphasis should have been placed on practical details and how the pieces fit together. We lost the forest for the trees.
We will tighten up the text to ensure that the broader picture stays in focus. Section 3.2 will be reworked to revolve around BO and its notation will be brought in line with the rest of the paper. Practical details will either be clearly presented in the paper or in a suitable appendix. If possible, pseudo-code for the entire BO loop will be added.
**Using gradients to obtain regret indicators**
Reviewers 5ozp and cuEF both pointed out that the text is overly vague about how gradients are used to obtain $\mathbb{1}(r_t(x) \le \epsilon)$. Using function draws gives us access to pathwise derivatives. Throughout, we used multi-start gradient ascent to maximize each path. In detail: we evaluated each path at previously queried locations and (up to) 64 batches of 256 random points, then ran gradient ascent starting from the best 16 points for each path. If desired, this process can be expedited by using early stopping (line 113).
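The multi-start procedure can be sketched as follows, with a fixed smooth 1-D function standing in for one pathwise draw (in the actual method the paths and their exact gradients come from the sampled $f_t$; all names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def path(x):
    # Stand-in for one posterior function draw f_t; non-convex on purpose.
    return np.sin(3 * x) + 0.5 * np.cos(7 * x)

def grad(x, h=1e-6):
    # Central-difference gradient; real pathwise draws admit exact gradients.
    return (path(x + h) - path(x - h)) / (2 * h)

def multistart_ascent(starts, lr=0.01, steps=500, lo=0.0, hi=2 * np.pi):
    """Gradient ascent from several start points; returns the best local
    maximum found. There is no guarantee this recovers the global max."""
    best = -np.inf
    for x in starts:
        for _ in range(steps):
            x = float(np.clip(x + lr * grad(x), lo, hi))
        best = max(best, float(path(x)))
    return best

starts = rng.uniform(0.0, 2 * np.pi, size=16)  # 16 starts, as in the text
f_star_hat = multistart_ascent(starts)
```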
Per Reviewer 5ozp, the sampled functions will typically be non-convex; so, there is no guarantee that this process succeeds. This is mentioned on line 124, but further discussion should be added (esp. for high-dimensional problems). | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
No-Regret Bandit Exploration based on Soft Tree Ensemble Model | Accept (poster) | Summary: This paper presents a stochastic bandit algorithm, ST-UCB, based on the soft tree ensemble model for reward estimation and regret minimization. The algorithm exploits the properties of the soft tree model and extends neural bandit theory to a tree-based structure, proving that under appropriate assumptions, ST-UCB achieves lower cumulative regret, demonstrating superiority over traditional bandit algorithms based on ReLU neural networks. In addition, the paper explores the theoretical connection between soft and hard trees and discusses the prospects of tree-based modeling in complex data analysis.
Strengths: 1. The research applies neural bandit (NB) theory to non-neural network models, which extends the applicability of stochastic bandit algorithms and also provides new regret minimization algorithms for tree-based models.
2. The paper not only provides theoretical proofs and an in-depth analysis of the properties of the soft tree ensemble model, but also conducts experiments to verify the performance of the algorithm.
Weaknesses: Although the paper proves that the ST-UCB algorithm achieves no-regret performance under specific regularity conditions, these conditions may be difficult to satisfy or validate in practical applications.
Technical Quality: 3
Clarity: 2
Questions for Authors: It is recommended to explore the applicability of the algorithm under a wider range of conditions and to focus on the efficiency of the soft-tree model, taking into account the complex structure and high computational demands that may be involved.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Authors have discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and suggestions.
> Although the paper proves that the ST-UCB algorithm achieves no-regret performance under specific regularity conditions, these conditions may be difficult to satisfy or validate in practical applications.
> It is recommended to explore the applicability of the algorithm under a wider range of conditions and to focus on the efficiency of the soft-tree model, taking into account the complex structure and high computational demands that may be involved.
We will make an effort to consider a wider range of applicability in future work.
However, since the theoretical assumptions (Assumption 3.1) of our analysis rely only on the standard set of assumptions in the existing neural bandits literature (as described in Remark 1),
the applicability of our theory is at the same level as that of existing neural bandit algorithms (e.g., NN-UCB [1]).
Therefore, we believe that the applicability of our algorithm does not diminish the significance of our contributions.
- [1] Zhou, Dongruo, Lihong Li, and Quanquan Gu. "Neural contextual bandits with ucb-based exploration." International Conference on Machine Learning. PMLR, 2020. | Summary: The paper proposes an algorithm for multi-armed bandits, where the reward function is estimated with a soft tree ensemble method. The paper presents its model as an extension of NN-UCB, but with sublinear regret guarantees at the cost of a reduced hypothesis space. The generalized theory of neural tangent kernels, i.e., the tree neural tangent kernel, is utilized in a similar fashion as neural tangent kernels were used in previous works for the NN-UCB algorithm. Numerical results are provided at the end of the paper.
Strengths: The paper proposes a new approach, to the best of my knowledge.
It is shown that their model is an extension of the existing neural bandit setting.
It improves the regret bound to be sublinear, a clear improvement over neural bandits.
A new confidence bound for their model is provided.
Weaknesses: In line 218 within Theorem 3.2: a concrete dependency on T for the lower bound on M would have been nice, in order to insert it into Theorem 3.2 and see how the confidence bound depends on time. Without that, a reader could think that the confidence bound is superlinear in T.
Minor errors: in lines 112 and 113, I think the gradient operators are missing within the inner products. Line 124 should have a bold g in the inner product.
Technical Quality: 4
Clarity: 4
Questions for Authors: The theory suggests that the regret bound improves with increasing M. Is there a point where M is too large and the tree ensemble becomes too complex?
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The paper clearly indicates that the improvement they make over existing work comes at the cost of a reduced hypothesis space.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the overall positive assessment of our paper.
We answer the reviewer's question below.
> Is there a point where M is too large and the tree ensemble becomes too complex?
From a theoretical perspective, a huge $M$ improves the regret even if the complexity of the soft tree model becomes overly large.
The theoretical superiority observed in such overly complex models is similar to recent theoretical findings on the NTK in regression problems (e.g., [1], [2]).
On the other hand, from a practical perspective, our regret analysis (Theorem 3.3) requires the learner to choose a sufficiently small learning rate (step size) for the gradient descent;
such a theoretical demand makes it difficult to apply the ST-UCB algorithm to overly large soft tree models in practice, and we leave the development of a more computationally efficient algorithm as future work.
Finally, we would like to note that such high computational demands also exist in current neural bandit algorithms (e.g., NN-UCB [3]). Therefore, we believe that the high computational requirements described above do not diminish the significance of our contributions.
- [1] Jacot, Arthur, Franck Gabriel, and Clément Hongler. "Neural tangent kernel: Convergence and generalization in neural networks." Advances in neural information processing systems 31 (2018).
- [2] Arora, Sanjeev, et al. "On exact computation with an infinitely wide neural net." Advances in neural information processing systems 32 (2019).
- [3] Zhou, Dongruo, Lihong Li, and Quanquan Gu. "Neural contextual bandits with ucb-based exploration." International Conference on Machine Learning. PMLR, 2020.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the detailed answer to my question. I decided to leave my score as it is and wish the authors good luck. | Summary: This paper presents the Tree Neural Tangent Kernel (TNTK) for soft tree models and introduces the ST-UCB algorithm based on this kernel. The authors also provide theoretical guarantees for the kernel and algorithm, including a regret bound. The contribution of this study is substantial; however, many sections require further refinement to enhance the clarity and overall presentation.
Strengths: This paper provides solid theories for the Tree Neural Tangent Kernel (TNTK) and the suggested algorithm. The contribution of these theories seems significant. Also, the experiments demonstrate that the ST-UCB algorithm performs well.
Weaknesses: A main piece of the contribution is the introduction of TNTK and its theoretical properties. However, these are not presented well. A more detailed and clear description of the introduction of TNTK seems necessary; even the definition is missing. The authors should show the derivation steps of TNTK in a more structured way. There are some other problems with the presentation:
- The first paragraph in the first section is unclear. How various combinations of users and items can be connected to unobserved actions?
- Lines 42-53 are not clear. The authors should explain the effective dimension further or describe the bounds in terms of the ambient dimension d.
- Citation formats should be corrected.
- For Section 2, lines 107-120 should be clearer. In particular, lines 111-113 seem to have some errors.
- The proofs in the Appendices need much work for an improved presentation.
- The statement of Lemma 4.1 seems wrong.
Technical Quality: 3
Clarity: 1
Questions for Authors: - For the sub-Gaussian assumption in Assumption 3.1, do we need the conditional expectation given H_{t-1}? Due to the independence of the noise, it does not seem necessary. If needed, could you please explain?
- The exponential eigenvalue decay is surprising. Could you please provide some intuition on how the improvement can be achieved as compared to NTK?
- What is the key difference between NN-UCB and ST-UCB in their mechanism?
Confidence: 4
Soundness: 3
Presentation: 1
Contribution: 3
Limitations: The derivation of TNTK does not seem markedly different from that of the Neural Tangent Kernel (NTK), which may slightly weaken the perceived novelty of the study. Nonetheless, the application of mathematical techniques used for NTK to develop TNTK is not straightforward and still represents a significant contribution.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their useful feedback and suggestions.
To address the readability issues that the reviewer kindly pointed out, we will carefully revise our paper as follows:
- The description of TNTK:
- In Section 2, we will add the detailed introduction of TNTK, including its definition.
- Furthermore, in the Appendix, we will add an independent section that describes the detailed derivation of TNTK.
- The proofs in Appendices:
- We will delete redundant expressions as much as possible to make the proofs more concise and coherent.
- Furthermore, we will add brief proof sketches in Appendices A, B.1, and B.2.
- The description about effective dimension and MIG:
- In Section 3, we will add a description of the effective dimension by relating it to MIG.
- Furthermore, in Appendix, we will add an independent section that summarizes the effective dimension and MIG.
With these modifications, we believe that the reviewer's concerns can be resolved.
Below, we provide answers to the reviewer's questions.
> For the sub-Gaussian assumption in Assumption 3.1, do we need the conditional expectation given H_{t-1}? Due to the independence of the noise, it does not seem necessary. If needed, could you please explain?
Our analysis **does not** assume the independence of noise sequence $(\epsilon_t)$; therefore, our conditional sub-Gaussian assumption is milder than the sub-Gaussian noise assumption with independence.
Please note that the conditional sub-Gaussian assumption is already utilized in existing works on linear or kernelized bandits (e.g., see [1, 2, 3]).
- [1]Abbasi-Yadkori, Yasin, Dávid Pál, and Csaba Szepesvári. "Improved algorithms for linear stochastic bandits." Advances in neural information processing systems 24 (2011).
- [2]Abbasi-Yadkori, Yasin. "Online learning for linearly parametrized control problems." (2013).
- [3]Chowdhury, Sayak Ray, and Aditya Gopalan. "On kernelized multi-armed bandits." International Conference on Machine Learning. PMLR, 2017.
In addition, these existing works also do not assume the independence of the noise sequence.
> The exponential eigenvalue decay is surprising. Could you please provide some intuition on how the improvement can be achieved as compared to NTK?
The difference in the eigenvalue decay behavior between NTK and TNTK mainly arises from the difference in smoothness between NTK and TNTK.
Specifically, TNTK is infinitely differentiable over $\mathbb{S}^{d-1} \times \mathbb{S}^{d-1}$, whereas NTK is not (see, e.g., [4]).
Our analysis shows that TNTK achieves exponential eigendecay by carefully leveraging an existing result on dot product kernels (Definition A.1 and Lemma A.1),
which fundamentally relies on the infinite differentiability of the underlying kernel.
- [4] Bietti, Alberto, and Francis Bach. "Deep Equals Shallow for ReLU Networks in Kernel Regimes." ICLR 2021-International Conference on Learning Representations. 2021.
> What is the key difference between NN-UCB and ST-UCB in their mechanism?
The main difference between NN-UCB and ST-UCB lies in the rate at which the width of the confidence bound converges to zero.
Specifically, in ST-UCB, the width of the confidence bound given by Theorem 3.2 converges to zero at a rate of $\tilde{O}(1/\sqrt{T})$.
In contrast, the width of the confidence bound used in NN-UCB generally does not converge to zero unless additional assumptions on $\mathcal{X}_t$ are introduced (Section 4, Lines 253-255).
This difference in the behavior of the confidence bounds results in the different rates of cumulative regret for ST-UCB and NN-UCB, as discussed in Section 4.
Furthermore, we would like to address the comments provided by the reviewer as follows:
> The first paragraph in the first section is unclear. How various combinations of users and items can be connected to unobserved actions?
We will revise this paragraph in the revision to improve clarity. The content we intended to convey in this paragraph is as follows:
In situations where the size of the action (arm) candidate set $\mathcal{X}$ can be extremely large compared to the total number of steps $T$ (i.e., $|\mathcal{X}| \gg T$),
it is difficult to apply conventional finite-armed bandit algorithms that assume the independence of arm rewards.
Specifically, when assuming the independence of arm rewards,
it is necessary to observe each arm at least once to estimate its reward; however, this is infeasible when $\mathcal{X}$ is enormous.
Therefore, in such scenarios, it is essential to estimate the rewards of all actions (arms),
including unobserved ones, from limited observational data and to balance the trade-off between exploration and exploitation.
As an example of such a situation, this paper considers a recommendation system where the action candidate set $\mathcal{X}$ is given by the combinations of users and items.
As mentioned in [5], in large-scale recommendation systems with a large number of user or item candidates, the action candidate set $\mathcal{X}$ can become extremely large.
Lastly, it should be noted that examples of large-scale arm candidate sets are not limited to recommendation systems.
Many such examples of applications have been reported in existing works (e.g., [6], [7]), and the large-$\mathcal{X}$ setting is a common motivation for introducing model-based bandit algorithms such as kernelized bandits (see, e.g., the introduction of [6]).
- [5]Vanchinathan, Hastagiri P., et al. "Explore-exploit in top-n recommender systems via gaussian processes." Proceedings of the 8th ACM Conference on Recommender systems. 2014.
- [6]Chowdhury, Sayak Ray, and Aditya Gopalan. "On kernelized multi-armed bandits." International Conference on Machine Learning. PMLR, 2017.
- [7]Kassraie, Parnian, Andreas Krause, and Ilija Bogunovic. "Graph neural network bandits." Advances in Neural Information Processing Systems 35 (2022): 34519-34531.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed response. | Summary: The paper investigates soft tree ensemble model for reward modeling in bandit algorithms.
Strengths: The work provides comprehensive formal evidence for the proposed methods.
The empirical study is also comprehensive.
Weaknesses: Not sure if there are new insights beyond other similar works that utilize NTK theory, such as Neural Thompson Sampling.
Technical Quality: 3
Clarity: 3
Questions for Authors: (1) Could you share more insights on why the soft tree ensemble model is of particular interest? In which cases should we prefer a tree-based reward model over a ReLU-based reward model?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The paper did discuss the theoretical limitation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments.
> Could you share more insights on why the soft tree ensemble model is of particular interest? In which cases should we prefer a tree-based reward model over a ReLU-based reward model?
From a practical perspective, the performance of a bandit algorithm depends on how accurately the reward model can represent the underlying data structure.
Therefore, our soft tree-based algorithm is expected to perform better than the ReLU-based algorithm on tabular data, where existing soft-tree regression works have reported high empirical performance (e.g., [1,2]).
- [1]Popov, Sergei, Stanislav Morozov, and Artem Babenko. "Neural oblivious decision ensembles for deep learning on tabular data." arXiv preprint arXiv:1909.06312 (2019).
- [2]Hazimeh, Hussein, et al. "The tree ensemble layer: Differentiability meets conditional computation." International Conference on Machine Learning. PMLR, 2020.
From a theoretical perspective, our analysis shows that the soft tree-based ST-UCB algorithm has superior performance compared to the ReLU-based NN-UCB algorithm,
albeit at the cost of a smaller hypothesis space for the soft tree model (see Section 4).
We believe that this result will motivate practitioners to use our soft tree-based algorithm instead of the ReLU-based algorithm,
as the superior performance of ST-UCB is guaranteed if the underlying reward function lies within the hypothesis space of the soft tree model. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
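For readers unfamiliar with the model class discussed throughout this thread, a minimal illustrative sketch of a depth-1 soft decision tree and an ensemble thereof may help. The function names and the scaling convention are our own for illustration, not the paper's implementation:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def soft_tree_predict(x, w, b, leaf_values):
    """Prediction of a depth-1 soft decision tree (one internal node).

    Instead of a hard split 1[w.x + b > 0], the input is routed to the
    left/right leaf with probability sigmoid(w.x + b), so the prediction
    is differentiable in (w, b, leaf_values)."""
    p_left = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    return p_left * leaf_values[0] + (1.0 - p_left) * leaf_values[1]

def soft_ensemble_predict(x, trees, scale):
    # An ensemble of M soft trees, combined with a scaling factor,
    # in the spirit of NTK-regime analyses of wide ensembles.
    return scale * sum(soft_tree_predict(x, *t) for t in trees)
```

With a zero split weight the node routes half-and-half, so the prediction is the average of the two leaves; this differentiability is what makes gradient-based reward estimation possible.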
Training Binary Neural Networks via Gaussian Variational Inference and Low-Rank Semidefinite Programming | Accept (poster) | Summary: The paper is concerned with variational inference in binary neural networks, i.e., neural networks whose weights are quantized to be either -1 or +1, with the aim of improving the overall performance of binary neural networks by performing Bayesian model averaging. For this, the authors review prior work on the topic and discuss the relaxations typically made to solve the optimization problem of finding a variational approximation to the posterior over the binary weights. To approach the problem, the authors show that under the assumption of Gaussian variational inference, i.e., when the continuous relaxation of the discrete weights is assumed Gaussian distributed, the objective is a non-linear semidefinite program and is, hence, non-convex. Due to the non-convexity, the program is solved iteratively using SGD until a local optimum is reached. The proposed approach elegantly connects the setting to hyperplane rounding and uses a low-rank approximation of the covariance of the variational family. The authors performed an extensive comparison against existing works on VGG and ResNet architectures for common benchmark tasks (CIFAR-10/100 and (Tiny-)ImageNet) based on accuracy and top-1/5 accuracy. The results look generally promising.
Strengths: I will provide a general statement of the papers strengths below.
Uncertainty quantification in binary neural networks seems like a relevant and interesting direction, and I appreciate the connection of non-convex optimization with the field. The technical details are well explained in most parts, and the method seems interesting and promising. Moreover, I appreciate the thorough empirical evaluation, even though it is limited to a single metric, and the additional ablation on the rank of the deviation matrix.
Weaknesses: I will give a general review on the papers weaknesses below and provide detailed questions in the questions section.
In parts, the paper is difficult to follow, as notation is sometimes not introduced or details are not explained sufficiently. For example, on page 4 above line 151, capital L should be a curly capital L from my understanding, and it was not immediately clear to me why the joint constraint in line 162 is important. Moreover, the contributions of the paper seem rather limited in the context of related work, as the resulting method seems to resemble variational inference with a low-rank Gaussian approximate family using the reparameterization trick and natural gradient updates. It is possible that I misunderstood the updates in Alg. 1, as I could not find derivations of them in the appendix or main text. The results look promising but are limited to a single metric, accuracy, which might not accurately reflect the performance of each method. Additional common metrics, such as the negative log predictive density and the calibration error, are missing at the current stage.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. Could the authors elaborate on connections of their methods to natural gradient-based optimization in variational inference, as done in classical and recent literature such as [Honkela] or [Shen]?
2. The authors model correlations between all units in the network (line 182) through a low-rank approximation and need to resort to a very low rank approximation of rank smaller or equal to ten. Often a block-diagonal matrix assumption is made (assuming layers are independent). Did the authors experiment with a block-diagonal structure in which blocks are low-rank? If so, how does such an approach compare? It seem that a high-rank for each block would be feasible in this setting.
3. After reading up about hyperplane rounding, I can see the connection to using a sign function. However, the authors only mention this without elaborating on the connection. I would recommend making those connections more explicit and elaborating on them.
4. A critical aspect of the method seems to be that it relies on a low-rank approximation; however, choosing the rank might be non-trivial. Moreover, as the ablation in Fig. 1 (note that this should be a table) indicates, it is not clear that increasing the rank necessarily improves the results. How is the rank chosen in the experiments, and how do the authors propose to choose the appropriate rank?
References:
[Honkela] Honkela et al. (2008), Natural Conjugate Gradient in Variational Inference.
[Shen] Shen et al. (2024), Variational Learning is Effective for Large Deep Networks.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 2
Limitations: The authors highlight some open problems and future research directions. However, the limitations section could be further improved to detail the specific technical challenges associated with the approach.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their assessment and questions. We respond to each point inline below. We encourage the reviewer to ask any follow-up question and to comment on our responses. We hope that they will consider raising their score at the end of our exchange.
> The results look promising but are limited to a single metric, accuracy, which might not accurately reflect the performance of each method. Additional common metrics, such as the negative log predictive density and the calibration error, are missing at the current stage.
We did not provide these metrics because __they are not available for our competitors__. Here we provide a selection of the results of our method using the negative log predictive density (NLPD) and the calibration error (ECE) metrics.
__For the 1W1A setting on the CIFAR-10 dataset:__
| Architectures | NLPD | ECE |
|----------------|--------|-----|
|ResNet18 | 0.60 | 0.074|
|VGG-Small |0.71 |0.086|
__For the 1W32A setting on the CIFAR-10, CIFAR-100 and TinyImageNet dataset:__
| |CIFAR-10 VGG16 | CIFAR-10 ResNET 18| CIFAR-100 VGG16 |CIFAR-100 ResNet18 | Tiny-ImageNet ResNet18|
|-----------|------------|---------|--------------|------------------|-------------------|
|NLPD | 0.49 | 0.25 | 1.59 | 1.12 | 2.13 |
|ECE | 0.19 | 0.035 | 0.13 | 0.078 | 0.13 |
__For the ImageNet dataset:__
| |AlexNet 1W1A | AlexNet 1W32A| ResNet18 1W1A| ResNet18 1W32A |
|------------|----------|---------|----------|----------|
| NLPD | 22.8 | 23.00 | 6.91 | 6.91 |
| ECE | 0.88 | 1.00 | 4.50 | 4.44 |
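For reference, here is a minimal sketch of how an ECE metric like the one tabulated above is typically computed, using equal-width confidence bins. This is our own illustrative helper, not the paper's evaluation code:

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Expected Calibration Error: weighted average, over confidence bins,
    of |accuracy(bin) - mean confidence(bin)|."""
    n = len(confidences)
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        # Clamp conf = 1.0 into the last bin.
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, ok))
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        acc = sum(1.0 for _, ok in b if ok) / len(b)
        ece += (len(b) / n) * abs(acc - avg_conf)
    return ece
```

A perfectly calibrated model (accuracy matching confidence in every bin) scores 0; overconfident models score higher.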
> Could the authors elaborate on connections of their methods to natural gradient-based optimization in variational inference, as done in classical and recent literature such as [Honkela] or [Shen]?
We thank the reviewer for pointing out these references. __Our approach is comparable to that of Shen__ in the setup of the variational learning problem and the usage of Gaussian variational distributions. However, in contrast with Shen, our method works over low-rank covariances, while Shen adopts diagonal covariance matrices. Finally, to the best of our understanding, the natural-gradient methods of Honkela and Shen are based on considering the Riemannian gradient with respect to the Fisher metric, while __our method implicitly uses the Riemannian gradient over the space of low-rank Gaussian distributions defined via the embedding $\Sigma = Z Z^T.$__ This has the advantage of enforcing the low-rank constraint and is the basis for the Burer-Monteiro approach to semidefinite programming. We believe that __this key ingredient, together with the connection with Grothendieck’s inequality and hyperplane rounding, is completely absent from the previous work__ cited by the reviewer. This is to be expected as that work does not have to deal with the training of binary neural networks and is not concerned with rounding from real to binary weights. In conclusion, __we cannot agree with the reviewer’s assessment regarding the limited novelty of our method__ and reassert its significant departure from previous work.
> The authors model correlations between all units in the network (line 182) through a low-rank approximation and need to resort to a very low rank approximation of rank smaller or equal to ten. Often a block-diagonal matrix assumption is made (assuming layers are independent). Did the authors experiment with a block-diagonal structure in which blocks are low-rank? If so, how does such an approach compare? It seem that a high-rank for each block would be feasible in this setting.
__We concur with the reviewer__ on the promise of using a block low-rank structure, a direction we have already been exploring. In our first presentation of the method we opted to model all correlations as we were interested in studying the maximum possible advantage we can obtain via a Gaussian approximation. Now that we have established the existence of this advantage, our future work will focus on how to reap it without incurring the computational costs, particularly in terms of memory, that VISPA suffers. We will include this discussion in Section 5.
> After reading up about hyperplane rounding, I can see the connection to using a sign function. However, the authors only mention this without elaborating on the connection. I would recommend making those connections more explicit and elaborating on them.
While this connection is well-known in the theoretical computer science community, we agree that it deserves more elaboration for a more general audience. Unfortunately, we were strongly constrained by space limitations in the detail we could provide in Section 3.1, as __the geometric view of hyperplane rounding requires introducing the vector embedding view of the underlying SDP__. If the reviewer deems it satisfactory, we would be happy to include a separate section detailing the geometric derivation of hyperplane rounding in the Supplementary Material.
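Since space did not allow a geometric derivation in the main text, the following minimal sketch of classic Goemans-Williamson hyperplane rounding may help readers; the helper name is ours and this is not the paper's code. Given unit vectors $v_i$ from an SDP solution ($\Sigma = V V^T$), one samples a random Gaussian direction $g$ and rounds each $v_i$ to the sign of $\langle v_i, g \rangle$:

```python
import random

def hyperplane_rounding(vectors, seed=None):
    """Goemans-Williamson hyperplane rounding: draw a random Gaussian
    direction g and round each unit vector v_i to sign(<v_i, g>)."""
    rng = random.Random(seed)
    dim = len(vectors[0])
    g = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    return [1 if sum(vi * gi for vi, gi in zip(v, g)) >= 0 else -1
            for v in vectors]
```

Vectors pointing the same way always round to the same sign, and antipodal vectors to opposite signs, which is the geometric origin of the sign function mentioned above.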
> A critical aspect of the method seem that it relies on a low-rank approximation, however, choosing the rank might be non-trivial. Moreover, as the ablation in Fig.1 (note that this should be a table) indicates it is not clear that increasing the rank necessarily improves the results. How is the rank chosen in the experiments and how do the authors propose to chose the appropriate rank?
__See FAQ 2 in the author rebuttal.__ Most results presented use $K=4$, as this was experimentally shown to be a good compromise between performance and computational cost. More generally, for a robust result, we would recommend __averaging solutions from different choices of rank.__
---
Rebuttal Comment 1.1:
Title: Response
Comment: I want to thank the authors for their rebuttal and addressing my questions and concerns. I have followed the discussion with other reviewers. In consideration of the other reviews and assessments I will keep my positive score and slightly increase to a weak accept.
---
Reply to Comment 1.1.1:
Title: Thank you for the consideration
Comment: Thank you for your thoughtful consideration of our rebuttal. We appreciate your decision to maintain a positive score and are grateful for the slight increase to a weak accept. | Summary: The paper proposes a Gaussian variational inference approach to training binary neural networks. Unlike traditional VI methods, the proposed VISPA algorithm is motivated by a SDP relaxation of the binary neural network objective. In experiments, VISPA gives state-of-the-art results for training binary neural networks on CIFAR-10, CIFAR-100 and TinyImageNet benchmarks.
Strengths: - The paper is well-written and the overall problem formulation was clearly motivated.
- The proposal to use Gaussian variational inference for directly training binary neural networks appears to be novel.
- The experimental results are convincing, though larger-scale experiments, for example a ResNet-50 instead of AlexNet on ImageNet, or transformers, would make the paper stronger.
Weaknesses: - The non-diagonal low-rank representation of the covariance adds significant overhead. It is unclear whether the algorithm can remain scalable. From Figure 1 (impact of K on model accuracy), it remains unclear whether setting K > 1 is really worth the additional computational overhead.
- The VISPA algorithm is in some sense "heuristically" derived, and it is unclear whether for example stationary points of the method are critical points of the original training objective.
- I appreciate the connection to variational inference, but in the objective function there is no entropy or prior present, so it is unclear whether this algorithm actually has any connections to Bayesian inference.
Technical Quality: 2
Clarity: 3
Questions for Authors: - Can lines 11-13 in VISPA algorithm be viewed as an orthogonal projection onto the constraint? It would be interesting to view VISPA as a projected stochastic gradient method on the problem P_corr, which would help in proving convergence and making the algorithm more theoretically sound.
- Even for K=1, the method seems to perform better than other variational baselines like BayesBiNN. Is there any intuitive explanation as of why the algorithm works so much better in practice?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: All limitations are addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their assessment and questions. We respond to each point inline below. We encourage the reviewer to ask any follow-up question and to comment on our responses.
> The non-diagonal low-rank representation of the covariance adds a significant overhead. It is unclear whether the algorithm can remain scalable. From Figure 1 (Impact of K on model accuracy) it remains unclear whether setting K > 1 is really worth the additional computational overhead.
The computational overhead is mostly restricted to the usage of memory. If $n$ is the number of weights, our memory for storing relevant variables is at most $(K+1) n$, compared to $n$ for other methods. The total significance of this increase depends on the batch size and the size of the examples. For instance, in the case of ResNet18, there are roughly $9 \cdot 10^6$ weights while, for batch size 100, the total size of a data batch is $15 \cdot 10^6$, for a total memory usage of $24 \cdot 10^6$. Hence, choosing K=1 will lead to a usage of $33 \cdot 10^6$, a 37.5% increase. __This increase will typically get smaller as we consider models trained on larger images.__ In practice, we often observe even smaller increases, as our estimate does not include memory usage due to backpropagation, which can be very large, e.g., in the case of residual connections.
The effect on the runtime is negligible because __most of the runtime is spent in forward and back-propagation__ anyways.
In conclusion, our method is most suitable in situations where memory is not strictly constrained or can be augmented at low cost. We plan to include this discussion in the final version of the paper.
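To make the arithmetic above easy to check, here is a small helper reproducing the rebuttal's estimate. It is a sketch under the stated accounting assumptions (variational parameters plus one data batch, ignoring backpropagation buffers), and the function name is ours:

```python
def memory_overhead(n_weights, batch_elems, k):
    """Relative memory increase of storing (K+1)*n variational
    parameters (mu and the rank-K factor Z) versus n weights for a
    baseline method, alongside a data batch of batch_elems values."""
    baseline = n_weights + batch_elems
    ours = (k + 1) * n_weights + batch_elems
    return (ours - baseline) / baseline
```

For the ResNet18 figures quoted above (9e6 weights, 15e6 batch values, K=1) this yields the 37.5% increase stated in the rebuttal.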
> The VISPA algorithm is in some sense "heuristically" derived, and it is unclear whether for example stationary points of the method are critical points of the original training objective.
We agree, but notice that the focus of this paper is to provide __a mathematically motivated algorithm that performs competitively in practice__. We believe it eminently possible to obtain the suggested result for sufficiently small step size under the assumption that the loss is smooth by adapting existing results in manifold optimization. Indeed, we can think of our algorithm in the Burer-Monteiro formulation as minimizing the loss function over the manifold given by the constraints $\mu_i^2 + (ZZ^T)\_{ii} = \mu_i^2 + \sum_j Z_{ij}^2=1$ for all $i$, which is a product of spheres, a well-studied case [1]. As suggested by the reviewer, we can view step 11 of our algorithm as __a projection step onto the manifold__, which effectively implements a retraction from the tangent space back to the manifold. To make the convergence argument complete, it only remains to analyze the effect of minibatch sampling of the loss on the gradient estimates used by the algorithm. Again, by taking the step size sufficiently small, it should be possible to make the contribution of the variance negligible and guarantee that the algorithm obeys a descent lemma and finds a near-stationary point.
[1] Boumal, N. (2023). An introduction to optimization on smooth manifolds.
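As a concrete illustration of the retraction discussed above, here is a sketch assuming the projection step amounts to row-wise normalization onto the product-of-spheres constraint $\mu_i^2 + \sum_j Z_{ij}^2 = 1$; the helper name is ours, not the paper's code:

```python
import math

def project_to_spheres(mu, Z):
    """Retraction onto the product of spheres: rescale each row
    (mu_i, Z_{i,1..K}) to unit Euclidean norm so that
    mu_i^2 + sum_j Z_{ij}^2 = 1 holds after a gradient step."""
    mu_out, Z_out = [], []
    for mi, zi in zip(mu, Z):
        norm = math.sqrt(mi * mi + sum(z * z for z in zi))
        mu_out.append(mi / norm)
        Z_out.append([z / norm for z in zi])
    return mu_out, Z_out
```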
> I appreciate the connection to variational inference, but in the objective function there is no entropy or prior present, so it is unclear whether this algorithm actually has any connections to Bayesian inference.
__Please see FAQ 3 in the author rebuttal.__ We will add the entropy term in the final version and justify why we let it vanish for our final algorithm.
> Can lines 11-13 in VISPA algorithm be viewed as an orthogonal projection onto the constraint? It would be interesting to view VISPA as a projected stochastic gradient method on the problem P_corr, which would help in proving convergence and making the algorithm more theoretically sound.
Yes. As explained above, Step 11 of our algorithm indeed performs a projection step over the manifold of interest. We would be happy to add this explanation in the final version.
> Even for K=1, the method seems to perform better than other variational baselines like BayesBiNN. Is there any intuitive explanation as of why the algorithm works so much better in practice?
The improved behavior for $K=1$ is most likely __a benefit of the randomization built into our rounding argument__, even for $K=1$. While other methods deterministically round based on the current $\mu$-value, our algorithm for $K=1$ effectively adds noise to $\mu$ via the $Zr$ term before rounding, allowing us to better explore the space of possible assignments. | Summary: This paper aims to propose a new method for training binary neural networks.
The method, according to the authors claims, is based on variational inference and semi-definite programming.
During training, the method maintains a low-rank Gaussian distribution from which the neural network parameters are drawn.
The training objective is the expected loss over the Gaussian distribution.
During test, the method obtains a binary neural network by rounding a random sample drawn from the Gaussian distribution.
Experiments on CIFAR-10, CIFAR-100, Tiny-ImageNet and ImageNet demonstrate the competitiveness of the proposed method.
Strengths: - The accuracy reported by the authors are competitive against the baselines.
- The experiments are quite extensive including a wide range of datasets.
Weaknesses: - Respectfully, this paper has little to do with variational inference nor semidefinite programming, contrary to what the title, the abstract, the introduction, and the related work suggest. Thus, a large fraction of the paper should be revised.
1. Just because you minimize the objective in Eq (2) over a Gaussian family does not imply this is variational inference.
If the authors disagree, please explain the posterior that the objective aims to approximate.
In fact, minimizing Eq (2) over a Gaussian family will collapse to a degenerate Gaussian with zero covariance.
That is exactly why the authors have to add a moment constraint on the mean vector and the covariance matrix to prevent the collapse.
1. Just because you have a positive semi-definite constraint on the covariance matrix does not imply that this is a semi-definite program.
This formulation is not even remotely close to semi-definite programs.
Moreover, the algorithm proposed by the authors is not even remotely close to any prominent techniques in semi-definite programming.
The algorithm is basically stochastic gradient descent with Polyak's momentum and a projection step onto a Euclidean sphere.
- The memory consumption of this method is not reported, and I suspect that this method is way more memory-intensive than baselines.
Note that storing a rank-$K$ covariance matrix increases the memory by a factor of $K + 1$ (storing the mean $\mu$ and a low-rank matrix $Z$ with $K$ columns).
The main results are reported using $K = 8, 4, 2$ depending on the datasets.
Technical Quality: 2
Clarity: 3
Questions for Authors: NA.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: NA.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer and regret to find that the connections to variational inference and semidefinite programming were not clear to them. We believe that, in the attempt to streamline the presentation of the mathematical component of our method, we may have omitted some steps that clarify these connections. __A summary of our clarifications is given in FAQ 3 and FAQ 4 of the author rebuttal__. Below, we present more details of our clarifications, together with their proposed locations in the final version of the paper. Lastly, we note that the closely related works by __Ajanthan et al. [1] and Meng et al. [2] (see Section 3.1) also rely on the same connection to variational inference__. More recently, the paper by Shen et al. [3] (pointed out by Reviewer 6N2m) describes a similar framework to ours for the (non-binarized) training of LLMs under the moniker of variational learning, a variant of variational inference. We welcome further questions and suggestions that may improve the clarity of our work.
## CONNECTION TO VARIATIONAL INFERENCE
### ENTROPY REGULARIZATION OF PROBLEM 2 (Top of Section 3)
Motivated by stability and generalization concerns, we may consider an entropy regularized version of Problem 2 given by
$OPT(\theta) := \min_{p \in \mathcal{P}} \theta \cdot \mathbb{E}_{w \sim p, x}[L(f(x, w), y_x)] - H(p)$
where $H(p)$ is the entropy of the distribution $p.$
This can be seen to be the log-partition function of a probabilistic graphical model (PGM), albeit one with a complicated $n$-way sufficient statistic given by the loss function $L$, and the corresponding optimizer is the Gibbs distribution $p_\theta.$
Following Wainwright and Jordan [4 -Chapter 3.7 and Chapter 5.1], the task of approximating the log-partition function of a model is exactly where we may deploy the toolkit of variational inference.
### MEAN FIELD APPROXIMATION (Line 145 in Section 3)
The mean field technique [4 - Chapter 5.2-5.3] is based on approximating $p_\theta$ by finding its closest approximation, in KL divergence, from a family of tractable models. Meng [2] and Ajanthan [1] consider the naive mean field approximation, which corresponds to selecting the class of graphical models with no edges, i.e., product distributions. The resulting optimization problem is:
$$\min\_{p \in \mathcal{P}\_{Bernoulli}} D(p || p\_{\theta}) = \min\_{p \in \mathcal{P}\_{Bernoulli}} \theta \cdot \mathbb{E}\_{w \sim p, x}[L(f(x, w), y\_x)] - H(p) - OPT(\theta)$$
where $\mathcal{P}_{Bernoulli}$ is defined in the submission.
### BNN Training via Gaussian Variational Inference (Section 3.1)
Gaussian variational inference ([5] and references therein) suggests to use as tractable models the family of multivariate Gaussian distributions. We then obtain:
$$\min\_{p \in \mathcal{P}\_{\textrm{corr}}} D\_{KL}(p || p\_{\theta}) = \min\_{p \in \mathcal{P}\_{\textrm{corr}}} \theta \cdot \mathbb{E}\_{w \sim p, x}[L(f(x, w), y\_x)] - H(p) - OPT(\theta)$$
At first, this formulation may seem problematic as we are fundamentally changing the reference measure from the discrete measure over the binary hypercube $\\{-1,+1\\}^n$ to the Lebesgue measure over $\mathbb{R}^n.$ However, this change is justified, when $\theta$ goes to infinity, by the approximation result of Grothendieck's inequality, which formally shows that the loss minimization problem over $\mathcal{P}\_\textrm{corr}$ approximates that over $\mathcal{P}\_{Bernoulli}$ for certain classes of objective functions.
See discussion in 3.1 lines 165-178.
This reasoning leads us to consider the regime in which the entropy term vanishes ($\theta \to \infty$) to obtain the problem:
$$
\min\_{p \in \mathcal{P}\_\textrm{corr}} \mathbb{E}\_{w \sim p, x}[L(f(x, w), y\_x)]
$$
This is exactly the problem studied in our paper. For the sake of completeness, at a preliminary stage of our investigation, we also ran a version of our gradient-based SDP algorithm that includes the entropy term for several large values of $\theta$. In all cases, our final algorithm ($\theta \to \infty$) proved more stable, more accurate, and faster to run, so we did not pursue other settings of $\theta$ further.
[1] Thalaiyasingam Ajanthan, Puneet Dokania, Richard Hartley, and Philip Torr. Proximal Mean-Field for Neural Network Quantization. ICCV, 2019.
[2] Xiangming Meng, Roman Bachmann, and Mohammad Emtiyaz Khan. Training Binary Neural Networks using the Bayesian Learning Rule. ICML, 2020.
[3] Shen et al., Variational Learning is Effective for Large Deep Networks. ICML, 2024.
[4] Wainwright, M.J. and Jordan M.I., Graphical Models, Exponential Families, and Variational Inference.
[5] Diao, M.Z., Proximal Gradient Algorithms for Gaussian Variational Inference: Optimization in the Bures–Wasserstein Space, MIT Thesis.
## CONNECTION TO SEMIDEFINITE PROGRAMMING
__See FAQ 4 in the author's rebuttal.__
## QUESTION ABOUT MEMORY USAGE
> The memory consumption of this method is not reported, and I suspect that this method is way more memory-intensive than baselines.
If $n$ is the number of weights, our memory for storing the relevant variables is at most $(K+1) n$, compared to $n$ for other methods. The practical significance of this increase depends on the batch size and the size of the examples. For instance, ResNet18 has roughly $9 \cdot 10^6$ weights while, for batch size 100, a data batch has total size $15 \cdot 10^6$, for a total memory usage of $24 \cdot 10^6$. Hence, choosing $K=1$ leads to a usage of $33 \cdot 10^6$, a $37.5\%$ increase. __This increase will typically get smaller as we consider models trained on larger images.__
---
Rebuttal 2:
Comment: Hi Authors,
Thanks for the response.
**Connection to Variational Inference**
Could you clarify what the parameter $\theta$ is? You have used $\theta$ as a scalar to multiply the expectation term $\mathbb{E} [L(f(x, w), y)]$ and subsequently let $\theta$ go to infinity. On the other hand, you also use $\theta$ to index the weight distribution $\mathbf{w} \sim p_{\theta}$, which in general has to be represented by a high-dimensional vector in the context of neural networks.
Also, can you clarify how you arrive at the first equation $\theta \cdot \mathbb{E}[L(f(x, w), y)] - H(p)$? I think you want to define a conditional distribution $p((x, y) \mid w) = \exp(- L(f(x, w), y))$ with a prior $p_0(w)$. Doing variational inference in $w$ gives the (negative) ELBO
$$-\mathbb{E}[\log p((x, y) \mid w) + \log p(w)] - H(p) = \mathbb{E} _ {p} [L(f(x, w), y)] - \mathbb{E}_p [\log p_0(w)] - H(p),$$ where $p$ is the variational distribution.
It's quite close, but not the same as the equation you wrote.
**Connection to SDP**
I have read FAQ 4. But I have to say that I still find this connection is very superficial.
**Memory Usage**
I am partially convinced. However, I also want to note that ResNet18 is not a huge model by today's standards. For instance, a ResNet50 model will have a higher (relative) memory overhead compared to ResNet18.
---
Rebuttal Comment 2.1:
Comment: We thank the reviewer for their comments. We respond inline below.
>> Could you clarify what the parameter $\theta$ is? You have used $\theta$ as a scalar to multiply the expectation term and subsequently let $\theta$ go to infinity. On the other hand, you also use $\theta$ to index the weight distribution $w \sim p_\theta$, which in general has to be represented by a high-dimensional vector in the context of neural networks.
The parameter $\theta$ is a scalar inverse temperature that determines the extent of the entropy regularization, i.e., lower $\theta$ -> higher temperature -> more entropy regularization. The distribution $p_{\theta}$ is the optimal distribution for the problem
$$
OPT(\theta) := \min_{p \in \mathcal{P}} \theta \cdot \mathbb{E}_{w \sim p, x}[L(f(x, w), y_x)] - H(p),
$$
i.e., the Gibbs distribution associated with parameter $\theta$. It is not a distribution over $\theta$, but rather a distribution over the high-dimensional weight vector, denoted by $w$ throughout the paper. If $w \sim p_\theta$, then $w$ follows the optimal distribution for the problem above. Alternatively, one can think of $\theta$ as a scaling of the loss function that balances its importance with respect to the entropy term.
>> Also, can you clarify how you arrive at the first equation?
Taking the prior $p_0$ to be the maximum entropy distribution over the weights, the term $\mathbb{E}_p[\log p_0(w)]$ sums up to a constant and does not affect the optimization. The loss function $L$ can be taken to be scaled by $\theta$. This allows us to balance the trade-off between loss and entropy.
>> I have read FAQ 4. But I have to say that I still find this connection is very superficial.
It is possible that we might be biased because our design of the method was heavily influenced by the SDP viewpoint and in particular by the application of SDPs to relaxations for combinatorial optimization problems. At the same time, we feel that the hyperplane rounding in our method may be difficult to justify without knowledge of SDP methods in that area.
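To make the hyperplane-rounding viewpoint mentioned above concrete, a minimal sketch follows (our own illustration, assuming the learned parameters are a mean vector `mu` and a rank-$K$ deviation matrix `Z`; names are not from the paper). A single shared Gaussian vector perturbs all means before taking signs, so weights with similar rows of `Z` are rounded in a correlated way:

```python
import numpy as np

def round_to_binary(mu, Z, rng):
    """Randomized (hyperplane-style) rounding: draw one Gaussian vector r
    and round each weight i as sign(mu_i + <Z_i, r>)."""
    r = rng.standard_normal(Z.shape[1])
    return np.sign(mu + Z @ r)
```

This mirrors the classical hyperplane rounding of SDP relaxations, where a random direction turns a correlated Gaussian solution into a binary assignment.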
>> I am partially convinced. However, I also want to note that ResNet18 is not a huge model by today's standard. For instant, a ResNet50 model will have a higher (relative) memory overhead compared to ResNet18.
The reviewer is correct that the relative memory consumption will increase for larger models, keeping image size constant. However, we note that in practice, we often observe even smaller increases, as our estimate does not include memory usage due to backpropagation.
More importantly, models deployed on edge devices, where binarization is most needed, tend to naturally be smaller than ResNet50, such as MobileNet or SqueezeNet. For instance, on MobileNet v2, the number of weights is $3.4 \cdot 10^6$, with a typical data batch still having size $15 \cdot 10^6$ (assuming batch size = $100$), for a total memory consumption of $18.4 \cdot 10^6$. In this case, running our method with $K=1$ will lead to a memory usage of $21.8 \cdot 10^6,$ just an $18\\%$ increase. Even choosing $K=6$ will still maintain memory consumption within a factor of $2$ of the original.
Strengths: - [Clarity, Quality] The authors contextualize their work well with respect to the past work on the topic, and the writing is clear and succinct.
- [Originality] The paper presents a novel modelling approach to training BNNs by combining and extending several well established methods (use of gaussian variation inference as a proxy for bernoulli distributions, SDP relaxation of a combinatorial problem etc) allowing them to leverage well studied and optimized methods and numerical routines for the intractable BNN training problem (Problem (1) in the paper).
- Their relaxation of the sample space to real numbers and usage of gaussian weights facilitates the usage of gradient descent based approaches for optimization (a challenge in earlier works where gradient based approaches use various heuristics since the objective function doesn't admit a tractable/sensible gradient)
- The authors perform ablation to elucidate the impact of the main hyperparameters ($Z$ and $K$), as well as empirical evaluation on notable configurations of datasets and architectures
- [Significance] The paper ties together several methods BNN optimization methods in a unifying framework, and presents an interesting relaxation which yields concrete improvement over state of the art results
Weaknesses: - The paper can benefit from empirical evaluation over a broader class of architectures since BNNs can have applicability beyond vision as a domain. I do acknowledge that the domain is typically restricted to the class of vision based transforms, but leveraging these techniques in other major model classes of interest can have significant impact which is left unexplored.
- The authors claim that the problem involves an SDP but the algorithm presented has very loose connections to most formalizations of semi definite programming (and variational inference). The SDP claim seems to derive only from a semi definite constraint and the problem itself fails to be convex let alone semidefinite. (See Line 163-164).
Technical Quality: 3
Clarity: 3
Questions for Authors: - In L242-243, the authors mention taking 40 samples and average results for predictions. How much does this impact performance (aka inference time, storage etc), since one of the primary motivations of BNNs is to minimize computational storage and runtime requirements
- It is well established in quantization literature that activations (and weights) both admit low precision representations without significant performance loss. Did you try activation precision below 32 bits since low precision architectures are quite common (and perhaps even the norm)?
- Do you have a hypothesis for why lower K perform better for 1W1A while higher K is better for 1W32A architectures?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: There are no major limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their assessment and questions. We respond to each point inline below. We encourage the reviewer to ask any follow-up question and to comment on our responses. We hope that they will consider raising their score at the end of our exchange.
> The paper can benefit from empirical evaluation over a broader class of architectures since BNNs can have applicability beyond vision as a domain. I do acknowledge that the domain is typically restricted to the class of vision based transforms, but leveraging these techniques in other major model classes of interest can have significant impact which is left unexplored.
__See FAQ 1 in the author rebuttal.__
> The authors claim that the problem involves an SDP but the algorithm presented has very loose connections to most formalizations of semi definite programming (and variational inference). The SDP claim seems to derive only from a semi definite constraint and the problem itself fails to be convex let alone semidefinite. (See Line 163-164).
__In the author rebuttal, please see FAQ 3__ for the connection to variational inference and __FAQ 4__ for the connection to semidefinite programming.
> In L242-243, the authors mention taking 40 samples and average results for predictions. How much does this impact performance (aka inference time, storage etc), since one of the primary motivations of BNNs is to minimize computational storage and runtime requirements?
This has limited impact on storage, as the algorithm can maintain the average dynamically. The impact on inference time may be as large as 40x, but can be significantly reduced by parallelizing the processing of the 40 samples. __It is an active area of focus to further reduce this impact__. In preliminary results, we find that the process may be sped up by using a smaller number of correlated samples via Gaussian quadrature. In this case, $2K+1$ samples would suffice, which is as small as 3 for the $K=1$ version of our method. In conclusion, we note that __the prime runtime concern is to reduce the cost of training__, which we achieve by binarizing both weights and activations while outperforming competitors.
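The dynamic averaging of sampled predictions mentioned above can be sketched as follows (illustrative only; `forward` stands in for the network's forward pass, and all names are ours). A running mean avoids storing all sampled outputs:

```python
import numpy as np

def averaged_prediction(forward, mu, Z, n_samples, rng):
    """Average the outputs of several binary networks sampled from the
    learned distribution, using a running mean (Welford-style update)
    so memory does not grow with the number of samples."""
    avg = None
    for t in range(1, n_samples + 1):
        # Sample one binary weight vector via randomized rounding.
        w = np.sign(mu + Z @ rng.standard_normal(Z.shape[1]))
        out = forward(w)
        avg = out if avg is None else avg + (out - avg) / t
    return avg
```

With parallel hardware, the `n_samples` forward passes are independent and can be batched, which is the parallelization referred to in the response.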
> It is well established in quantization literature that activations (and weights) both admit low precision representations without significant performance loss. Did you try activation precision below 32 bits since low precision architectures are quite common (and perhaps even the norm)?
__We conducted experiments using binary activation precision__. The results of these experiments are detailed in Tables 1, 3, and 4, which show the performance under the settings of binary activation and binary weights across different datasets and network architectures. For future work, we plan to explore other low precision configurations, including 2-bit, 3-bit, and 4-bit precision.
> Do you have a hypothesis for why lower K perform better for 1W1A while higher K is better for 1W32A architectures?
__See FAQ 2 in the author rebuttal.__
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. Based on the FAQ and the responses to other reviewers, I have more confidence in the proposed method. I still respectfully disagree with the emphasis on the SDP part of the algorithm, which, as the authors explain in Part 4, stems from the semi-definite constraint and **sharing the optimization approach for solving large SDPs using a non-convex relaxation**. Different classes of problems can admit similar relaxations/approximations, and I do not believe that is enough to fully justify the heavy usage of the term. However, this is a difference of semantics, and I would like to focus on the efficacy and functionality of the method itself.
Based on the increase in memory usage (as pointed out in the limitations on memory-constrained devices by Reviewer Bkmh), I believe it is vital to use mitigation strategies to reduce the impact of $K$ on memory. Similar concerns are also raised by Reviewer M9aW. Given that one of the primary motivations is deployment/training on edge devices, the increase in memory usage needs to be compensated by other factors so as not to contradict the primary objective. The authors have mentioned that they plan to explore this domain using low precision (and rely on the fact that larger images ameliorate the issue; ref. rebuttals to Reviewers Bkmh and M9aW). However, the performance of the final model also tends to increase with $K$, as noted in the paper and FAQ 2. I would like the authors to explicitly mention a brief set of guidelines and best practices for understanding the trade-offs in choosing an appropriate value of $K$ with respect to factors such as training time/memory and expected model performance, as in FAQ 2, as part of the paper/appendix.
Finally, based on the responses so far I will increase my score to 6.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their response. We will add a paragraph in the main body of the paper covering the content of FAQ 2 and our response to reviewer Bkmh, which contains estimates of how the memory consumption varies with $K$.
We will also note that models deployed on edge devices, where binarization is most needed, tend to naturally be small, such as MobileNet or SqueezeNet. For instance, on MobileNet v2, the number of weights is $3.4 \cdot 10^6$, with a typical data batch still having size $15 \cdot 10^6$ (assuming batch size = 100), for a total memory consumption of $18.4 \cdot 10^6$. In this case, running our method with $K=1$ will lead to a memory usage of $21.8 \cdot 10^6,$ just an $18\\%$ increase. Even choosing $K=6$ will still maintain memory consumption within a factor of $2$ of the original.
Rebuttal: We thank the reviewers for their time. We are grateful that most reviewers agree on the __novelty of our framework__ based on Gaussian variational approximation, the __clarity of the submission__ and the __significance of the performance improvements__ revealed by our experiments. In this rebuttal, we address frequently brought up questions.
# FAQS
## 1. Generalization of our method to other neural network architectures
We chose to focus our first presentation of VISPA on vision-based applications because of the __availability of well-studied binarized architectures and well-established baselines__, which more cleanly isolates the performance of our method.
Indeed, for transformers, there is not yet agreement on the best binarized architecture, due to the difficulty of binarizing activations in softmax layers. Only recently, researchers have made progress in bypassing this obstacle [1]. This also contributes to the scarcity of baselines and to the __absence of a standard benchmark__.
To demonstrate the applicability of VISPA to transformers, __we include in the PDF a table of preliminary results__. See the rebuttal to Reviewer Bkmh for a more detailed discussion.
[1] He, Y., et al. "BiViT: Extremely Compressed Binary Vision Transformers." Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). 2023.
## 2. Setting of rank parameter K
The impact of $K$ on test accuracy is discussed in Section 4.1, Figure 1. Our expectation was that higher values of $K$ should yield better accuracy, as the algorithm can explore a larger space of distributions. As pointed out by a number of reviewers, the results do not fully support this conclusion. __Results do tend to marginally improve as K gets larger__ with the exception of VGG-Small + CIFAR-10 (1W1A). The accuracy of ResNet18+CIFAR10 (1W32A) is essentially constant over $K$. We have two hypotheses for this lack of conclusive evidence:
- Higher values of $K$ yield a __larger feasible space and require longer time to converge__. In the case of VGG-Small + CIFAR-10 (1W1A), it is possible that the fixed number of epochs chosen for the experiments was sufficient for convergence on smaller $K$ values, but insufficient on higher values.
- Higher values of $K$ yield a __more complex distribution__ from which it is __more expensive to sample good weights__ at inference stage.
Based on preliminary experiments, we believe that a combination of the two may be at play for VGG-Small + CIFAR-10 (1W1A).
## 3. Connection to Gaussian Variational Inference
Following Wainwright and Jordan [1 - Chapter 3.7], the variational inference technique can be applied to the variational formulation of the problem of computing the __log-partition function__ of a probabilistic graphical model. By adding an entropy regularization term $\frac{1}{\theta} \cdot H(p)$, the optimization problem Problem 2 (Line 134) can be put in this form for a graphical model whose sufficient statistics are described by the loss function. The fundamental idea of variational inference is then to __approximate the log-partition function__ and the corresponding Gibbs distribution $p_{\theta}$ by restricting the entropy-regularized optimization problem to a __tractable class of probability distributions__ (see [1], Chapter 5.2). The Gaussian variational inference approach (see [2,3] and references therein) suggests using the class of __multivariate Gaussian distributions__ as the tractable family of choice. Grothendieck's inequality, which shows that such a Gaussian distribution can be used to approximate the required function in the regime $\theta \to \infty$, leads us to __let the entropy term vanish__. Finally, this yields the same objective function as in our submission.
[1] Wainwright, M.J. and Jordan M.I., Graphical Models, Exponential Families, and Variational Inference
[2] Diao, M.Z., Proximal Gradient Algorithms for Gaussian Variational Inference: Optimization in the Bures–Wasserstein Space
[3] Lambert, M. et al, Variational inference via Wasserstein gradient flows
## 4. Connection To Semidefinite Programming
The reviewers correctly note that we use the term SDP simply to indicate the presence of a semidefinite constraint; however, this usage has become frequent in recent literature as researchers investigate the potential of semidefinite programs with non-convex objectives and constraints, in particular low-rank constraints [1]. For clarity, throughout the paper, we repeatedly reassert the non-convexity of our formulation. Despite this non-convexity, we argue that __our techniques are typical of SDP methods__. The Burer-Monteiro approach is one of the most promising practical techniques for reducing the computational cost of SDPs [2], with a __significant amount of research__ devoted to understanding which reductions in rank preserve the global optima of the original solution [3,4,5]. Similarly, our use of __moment matching constraints__ in the definition of $\mathcal{P}_{\textrm{corr}}$ is typical of the __relaxation of combinatorial optimization problems to semidefinite programs__, which is a pillar of the theory of approximation algorithms. Indeed, our work is originally motivated by __Grothendieck’s inequality and hyperplane rounding.__
[1] Mei, S., Misiakiewicz, T., Montanari, A., & Oliveira, R. I. (2017, June). Solving SDPs for synchronization and MaxCut problems via the Grothendieck inequality.
[2] Yurtsever, A., Tropp, J. A., Fercoq, O., Udell, M., & Cevher, V. (2021). Scalable semidefinite programming.
[3] Boumal, N., Voroninski, V., & Bandeira, A. (2016). The non-convex Burer-Monteiro approach works on smooth semidefinite programs.
[4] Cifuentes, D., & Moitra, A. (2022). Polynomial time guarantees for the Burer-Monteiro method.
[5] Erdogdu, M. A., Ozdaglar, A., Parrilo, P. A., & Vanli, N. D. (2022). Convergence rate of block-coordinate maximization Burer–Monteiro method for solving large SDPs.
Pdf: /pdf/224864d9eb6b7d3f7af17f3bc0733906b4686986.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper presents an optimization framework for Binarized Neural Network (BNN) training using Gaussian variational inference, resulting in a low-rank semidefinite programming (SDP) formulation. The authors propose the Variational Inference Semidefinite Programming Algorithm (VISPA) to improve accuracy by modeling and learning pairwise correlations between weights. The empirical evaluation demonstrates that VISPA outperforms state-of-the-art algorithms on CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet datasets.
Strengths: The paper introduces a novel optimization framework based on Gaussian variational inference for BNN training, which goes beyond latent weights and leverages low-rank semidefinite programming relaxations.
The paper provides a theoretical motivation for the use of latent weights, STE, and weight clipping. It also presents a new interpretation of latent weights methods within the proposed optimization framework.
The empirical evaluation on multiple benchmark datasets shows consistent and significant performance improvements over existing state-of-the-art BNN training methods.
Weaknesses: While the method shows excellent results on the tested datasets, the paper could benefit from a discussion on the generalization capabilities of the proposed approach to other types of neural networks or applications.
The performance of VISPA may be sensitive to the choice of hyperparameters, such as the covariance rank
𝐾, which is not extensively explored in the paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: Can you provide more details on the computational complexity and runtime of VISPA compared to other state-of-the-art BNN training methods?
How does VISPA perform on other types of neural networks, such as transformer architectures?
Can you elaborate on the initialization strategy for the weight deviation matrix 𝑍 and its impact on the training stability and final performance?
What are the potential limitations of the proposed method when applied to resource-constrained devices, and how can these be mitigated?
Could you provide more insights into the impact of different hyperparameter settings, especially the covariance rank 𝐾, on the performance of VISPA?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N.A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their assessment and questions. We respond to each point inline below. We encourage the reviewer to ask any follow-up question and to comment on our responses. We hope that they will consider raising their score at the end of our exchange.
> How does VISPA perform on other types of neural networks, such as transformer architectures?
__See FAQ 1 in the author rebuttal.__ We will include this discussion in Section 5 (Limitations and Open Problems) in the final version.
We have performed __preliminary experiments of VISPA on the transformer architecture Swin-T [3] using ImageNet__. We binarized the weights of the linear layers, except for the QKV layer before the softmax. The experiment was performed with a learning rate of 0.1, a weight decay of 1e-5, and 150 training epochs, without fine-grained hyper-parameter tuning. Results are shown in the __table within the PDF attached to the author rebuttal__.
We have identified three methods against which to compare VISPA's performance: __BiViT [1], BiT [2], and BiBERT [4]__. As summarized in the table, these methods differ in fundamental ways from our application of VISPA, making an immediate comparison somewhat tenuous. All of them use knowledge distillation to enhance the performance of binarized transformer architectures, putting our method at a disadvantage. Furthermore, our strongest competitor, BiViT, uses special handling of the softmax operation, facilitating its task. On the other hand, our implementation of VISPA does not binarize QKV layers, gaining an advantage there. With these caveats, we find that __our method outperforms all competitors in terms of top-1 test accuracy, sometimes decisively__. We believe that this result strongly suggests that __the performance improvements shown in this paper will translate successfully to transformer architectures__. A more detailed study would involve different architectural choices for the binarized transformer, which are beyond the scope of our submission.
[1] He, Y., et al. "BiViT: Extremely Compressed Binary Vision Transformers." ICCV, 2023.
[2] Liu, Z., et al. "BiT: Robustly Binarized Multi-distilled Transformer." NeurIPS, 2022.
[3] Liu, Z., et al. "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows." ICCV, 2021.
[4] Qin, H., et al. "BiBERT: Accurate Fully Binarized BERT." ICLR, 2022.
> Could you provide more insights into the impact of different hyperparameter settings, especially the covariance rank 𝐾, on the performance of VISPA?
__See FAQ 2 in the author rebuttal.__
> Can you provide more details on the computational complexity and runtime of VISPA compared to other state-of-the-art BNN training methods?
As shown in Algorithm 1, let $M$ represent the batch size, $n$ the number of weight parameters, and $K$ the embedding dimension. The computational complexity of the VISPA approach is $O(Mn+nK)$. In comparison, other state-of-the-art BNN training approaches typically require $O(Mn)$. In most cases $M \gg K$, so the increased cost due to the $nK$ term is a negligible fraction of the total running time, as __most time is spent performing forward- and back-propagation through the neural network.__
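To illustrate this with hypothetical sizes (a batch size of 100 and a ResNet18-scale weight count, chosen only for illustration), the extra $nK$ term accounts for under 1% of the total cost when $K=1$:

```python
# Illustrative cost model for the O(Mn + nK) complexity discussed above.
# The sizes are hypothetical and serve only to show that the extra nK term
# is a negligible fraction of the total when M >> K.

def vispa_overhead_fraction(M: int, n: int, K: int) -> float:
    """Fraction of the O(Mn + nK) cost attributable to the extra nK term."""
    return (n * K) / (M * n + n * K)

frac = vispa_overhead_fraction(M=100, n=9_000_000, K=1)
print(f"{frac:.2%}")  # 0.99%
```

Note that the fraction simplifies to $K/(M+K)$, so it is independent of the model size $n$ and shrinks as the batch size grows.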
> Can you elaborate on the initialization strategy for the weight deviation matrix 𝑍 and its impact on the training stability and final performance?
In Section 4 (Experiments), we discussed that the weight deviation matrix $Z$ is initialized using the Xavier normal distribution, with a mean of 0 and a standard deviation of $s * \sqrt{\frac{2}{\text{fan\\_in} + \text{fan\\_out}}}$. Here, fan_in represents the number of input units, fan_out the number of output units, and s is a scaling factor parameter. Xavier initialization has been empirically shown to work well in practice across a variety of neural network architectures (e.g., CNNs, ResNets, GANs, and Transformers). It is based on a theoretical analysis that ensures the variance of the input to each neuron is similar to the variance of the output. This choice balances the scale of weights and helps maintain stable gradient magnitudes [1].
For details on the impact of $Z$'s initialization on training stability and performance, see Appendix A.2. There we show that VISPA achieves __consistent performance across a range of choices of scaling factor s.__
[1] Xavier Glorot and Yoshua Bengio, Understanding the difficulty of training deep feedforward neural networks, AISTATS, 2010.
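As a concrete illustration of the scaled Xavier-normal initialization described above, here is a hedged sketch with hypothetical shapes; the paper's exact parameterization of $Z$ may differ:

```python
import numpy as np

def init_deviation_matrix(fan_in, fan_out, K, s=1.0, rng=None):
    """Xavier-normal initialization with scaling factor s:
    std = s * sqrt(2 / (fan_in + fan_out))."""
    if rng is None:
        rng = np.random.default_rng(0)
    std = s * np.sqrt(2.0 / (fan_in + fan_out))
    # One K-dimensional deviation vector per weight (shapes are illustrative).
    return rng.normal(loc=0.0, scale=std, size=(fan_in * fan_out, K))

Z = init_deviation_matrix(fan_in=64, fan_out=128, K=2, s=0.5)
print(Z.shape)  # (8192, 2)
print(Z.std())  # empirically close to 0.5 * sqrt(2/192), i.e. roughly 0.051
```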
> What are the potential limitations of the proposed method when applied to resource-constrained devices, and how can these be mitigated?
The main potential limitation of VISPA on resource-constrained devices is the increase in memory usage due to having to maintain the covariance variable $Z$. If $n$ is the number of weights, our memory for storing relevant variables is at most $(K+1)n$, compared to $n$ for other methods. The total significance of this increase depends on the batch size and the size of the examples. For instance, in the case of ResNet18, there are roughly $9 \cdot 10^6$ weights while, for a batch size of 100, the total size of a data batch is $15 \cdot 10^6$, for a total memory usage of $24 \cdot 10^6$. Hence, choosing $K=1$ will lead to a usage of $33 \cdot 10^6$, a $37.5\\%$ increase. __This increase will typically get smaller as we consider models trained on larger images.__ In practice, we often observe even smaller increases, as our estimate does not include memory usage due to backpropagation, which can be very large, e.g., in the case of residual connections.
A possible mitigation strategy is to store our variable $Z$ at lower precision. Given that this variable is only accessed via multiplication with Gaussian noise, we believe that this will not change the behavior of our algorithm. | null | null | null | null | null | null |
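The back-of-the-envelope memory estimate above can be reproduced directly (same figures as stated in this response, counting stored scalars):

```python
# Memory estimate from the response above, counting stored scalars
# for ResNet18 with batch size 100 and covariance rank K = 1.

n = 9_000_000        # number of weight parameters
batch = 15_000_000   # total size of one data batch
K = 1                # covariance rank

baseline = n + batch           # other methods: weights + batch
vispa = (K + 1) * n + batch    # VISPA additionally stores Z: (K+1)n weight-related scalars
increase = (vispa - baseline) / baseline
print(f"{baseline:,} -> {vispa:,} ({increase:.1%} increase)")
# 24,000,000 -> 33,000,000 (37.5% increase)
```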
OW-VISCapTor: Abstractors for Open-World Video Instance Segmentation and Captioning | Accept (poster) | Summary: The authors propose a new task called Open-World Video Instance Segmentation and Captioning (OW-VISCap), which involves detecting, segmenting, tracking, and describing both seen and unseen objects. To tackle this problem, they introduce two networks: an object abstractor for encoding images at the object level and an object-to-text abstractor for generating captions for objects. In the object abstractor, they use evenly distributed points as input to obtain open-world object queries, in addition to closed-world object queries, to handle unseen objects. The object-to-text abstractor is composed of standard transformer blocks, but it employs masked attention in the cross-attention layer to focus on objects. The proposed method demonstrates superior performance in Open-World Video Instance Segmentation (OW-VIS) and Dense Video Object Captioning (DVOC) compared to existing approaches.
Strengths: - The authors propose a new task and corresponding solution for video understanding.
- They provide a thorough analysis of existing methods in the fields of OW-VIS and DVOC.
- Overall, the paper is well-organized and easy to read.
Weaknesses: Despite being the first proposed method for this task, the technical contribution of the proposed approach seems weak. Moreover, the evaluation and analysis appear lacking in several areas:
1. **Evaluation on Limited Benchmarks**: The proposed method has been evaluated on only one benchmark each for OW-VIS and DVOC. To verify its open-world capability, additional benchmarks are needed. Other works like OV2Seg [1] and DVIS++ [2] use a variety of datasets, including LV-VIS, YouTube-VIS, and OVIS, in addition to BURST.
2. **Experimental Evidence for Free-Form Captions**: In lines 33-37, the use of discrete class labels in current OW-VIS methods is cited as an issue. However, there is no experimental evidence showing that using free-form captions improves OW-VIS performance.
3. **Performance in Table 5**: The basic VIS performance appears to be lacking. Recent VIS methods [2-5] achieve over 35 AP on the OVIS benchmark with a ResNet-50 backbone. Even considering the open-world setting, the proposed method's performance gap compared to the latest works in a closed-world setting is too large.
References
[1] Wang, Haochen, et al. "Towards open-vocabulary video instance segmentation." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.
[2] Zhang, Tao, et al. "DVIS++: Improved decoupled framework for universal video segmentation." arXiv preprint arXiv:2312.13305 (2023).
[3] Heo, Miran, et al. "A generalized framework for video instance segmentation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
[4] Li, Junlong, et al. "TCOVIS: Temporally consistent online video instance segmentation." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.
[5] Ying, Kaining, et al. "CTVIS: Consistent training for online video instance segmentation." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the Weaknesses section
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: They discuss limitations of the proposed method in the Appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Despite being the first proposed method for this task, the technical contribution seems weak.**
Please see the “Technical Novelty” section in the comment addressed to all reviewers.
**1. Evaluation on Limited Benchmarks ... Other works like OV2Seg [1] and DVIS++ [2] use a variety of datasets, including LV-VIS, YouTube-VIS, and OVIS, in addition to BURST.**
We would like to point out that, different from our “open-world” method, OV2Seg and DVIS++ are “open-vocabulary” methods. These methods assume a finite set of novel categories during evaluation. Each of these object categories is encoded to obtain text embeddings which are then used to obtain probability scores for open vocabulary classification. In our open-world setting, a finite set of novel categories that we can individually encode doesn’t exist. Our goal is not to classify novel objects, but to generate free-form captions. Testing our approach in the open-vocabulary setting is hence not meaningful.
We have evaluated our model on the OVIS dataset for the closed-world task (see Appendix, Tab. 5).
**2. ... There is no experimental evidence showing that using free-form captions improves OW-VIS performance.**
Our goal is not to use the generated captions to improve OW-VIS performance. The goal is to describe discovered objects more holistically via free-form captions, instead of just one-word labels for closed-world objects or no labels for open-world objects.
**3. The basic VIS performance appears to be lacking. ... performance gap compared to the latest works in a closed-world setting is too large.**
Please see point (c) in the “Experimental Performance” section of the comment addressed to all reviewers.
---
Rebuttal 2:
Comment: Dear Reviewer W8BT, as the NeurIPS AC, I would like to remind you to check and read the rebuttal provided by the authors. After that, please respond to or acknowledge the authors' rebuttal and update your rating if necessary. Many thanks.
---
Rebuttal Comment 2.1:
Comment: Thank you for the responses and for pointing out my misunderstanding regarding the terms "open-world" and "open-vocabulary." Based on the feedback, I think this work is valuable as an initial study on the proposed problem. I will raise the score to a BA.
---
Reply to Comment 2.1.1:
Comment: Thank you for increasing the score! Your feedback is encouraging! We appreciate your support as we think this novel task is valuable and important for our community. We hope this first step in addressing this task will spur more research in this direction by others too. | Summary: This paper proposes a new task and the corresponding model: detecting, tracking, segmenting, and captioning open-vocabulary objects in a video. The authors propose a novel online framework that contains an object detector and feature detector (object abstractor), a Mask2Former-style segmentation head, and a frozen LLM. The resulting model is able to detect objects both seen and unseen in the training data, and produce captions. Experiments on open-vocabulary video instance segmentation and dense video object captioning show that the proposed model outperforms the corresponding SOTA or baselines.
Strengths: - This paper proposed an important and missing new task: segmenting and captioning all objects in videos. I believe this is one of the ultimate understanding tasks for video, and this paper defined the task and composed SOTA components into a novel model for it.
- I appreciate the workload the authors put into composing the model for such a complex task. The design choices of using a Mask2Former architecture and a frozen LLM make sense to me.
- Experimental results are reasonable. It shows the proposed method performs better than other models on unseen objects, and produce better captions than alternative stage-wise models or concurrent specialist model. The components are ablated in Table 2 and Table 3.
Weaknesses: - From Figure 3, it is unclear how "video" is handled. Do you feed object queries from the previous frame to a new frame, to retain their identity? If so, would the object caption in different frames be different for the same identity?
- While this paper proposes a new task, the task is evaluated separately on two sub-tasks on different datasets: OW-VIS and Dense VOC, and thus the overall task is never evaluated. I understand curating evaluation data for this (annotation-heavy) task is expensive, but having some data, even if small, will help follow-up works in this direction. Note this is NOT required in the rebuttal.
- The layout of the paper can be improved. E.g., Table 1 is on page 6 but is first referred to in the text on page 8. The paper mentions that a frozen LLM is used, but only at the very end of the supplementary do the authors reveal which one: OPT-2.7B. I think the choice of LLM is important model information and should be mentioned in the main text. It is also hard to find the exact training data used to train the model.
Technical Quality: 3
Clarity: 2
Questions for Authors: Overall this paper proposes a valid and novel model for a new important task, with good comparisons to existing methods. My concerns are mostly about clarification and presentation, and I hope the authors can clarify/improve them in the rebuttal. My current rating is a weak accept.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 4
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **From Figure 3, it is unclear how "video" is handled. ..., would the object caption in different frames be different for the same identity?**
Yes, we feed object queries in the previous frame to new frames as proposed in CAROQ. We explain this in Appendix C.
Object captions may differ in different frames. We think that this setting provides the flexibility to handle multiple action segments of the same object within a video. However, we noticed that if the video frames don’t change much, captions remain mostly consistent.
**The task is evaluated separately on OW-VIS and Dense VOC, the overall task is never evaluated. ... NOT required in the rebuttal.**
Thanks for the suggestion, we completely agree. However, overall task evaluation requires a lot of effort as you rightly pointed out. Our future efforts include the development of such a dataset.
**The layout of the paper can be improved. ... hard to find the exact training data to train the model.**
Thanks for the suggestion! We will revise the paper accordingly.
We obtain our training data following prior work. We fine-tune our models on the BURST and the Dense VOC datasets. The procedure to obtain the BURST data is straightforward, following their official website. For the Dense VOC data, we follow the steps mentioned in DVOC-DS.
**A valid and novel model for a new important task, with good comparisons to existing methods...**
We are thrilled that you found our work novel and important. Thanks again for the valuable suggestions! We will revise our paper based on these comments.
---
Rebuttal Comment 1.1:
Title: Thank you for your rebuttal
Comment: Thank the authors for the rebuttal. My confusion on object identities is resolved. I thus keep my positive rating.
I respectfully disagree with other reviewers' concern on novelty. To me proposing a working system for video object captioning (with segmentation) is a good enough contribution, as this is important and is missing from the market.
---
Reply to Comment 1.1.1:
Comment: Thank you very much! We are thrilled that you found our contributions novel and important. We hope this first step in addressing this task will spur more research in this direction by others as well.
---
Rebuttal 2:
Comment: Dear Reviewer 9nUt, as the NeurIPS AC, I would like to remind you to check and read the rebuttal provided by the authors. After that, please respond to or acknowledge the authors' rebuttal and update your rating if necessary. Many thanks. | Summary: This paper introduces a task called “open-world video instance segmentation and captioning”, which combines open-world video instance segmentation (OW-VIS) and video object captioning tasks. To achieve better performance, the authors propose two key components: an object abstractor to identify new objects using a prompt encoding, and an object-to-text abstractor that bridges object queries with a frozen large language model (LLM) to generate captions.
Strengths: 1. The object-to-text abstractor represents a strategic integration of visual data processing with LLM, which enhances the richness and accuracy of the generated captions.
2. The paper is well-written and easy to follow.
Weaknesses: **Method**
1. The introduced task is a combination of existing ones (i.e., OW-VIS and video object captioning), which is neither fundamental nor novel enough.
2. For the first challenge (Lines 33-37), some existing vision-language models (VLMs) also focus on free-form captions for instance segmentation (i.e., open-vocabulary segmentation) and object-oriented captioning (e.g., [1]).
[1] Dense Video Object Captioning from Disjoint Supervision, 2023.
3. The technical novelty of the proposed method seems limited. Specifically,
a) the proposed object abstractor is based on the framework of the video instance segmentation model only with a slight modification (i.e., inter-query contrastive loss).
b) the proposed object-to-text abstractor is a captioning model but focuses on describing the objects in the input videos.
**Experiment**
1. The experiments can be divided into two parts: video instance segmentation and object-oriented captioning. For segmentation, it would be advantageous to utilize more widely used validation sets such as YouTube-VIS. For captioning, besides METEOR, incorporating additional standard captioning metrics like CIDEr and ROUGE would enhance the evaluation.
2. The improvements are slight.
a) In Table 1, the proposed method does not achieve SoTA results on more than half of the evaluation metrics compared to baselines, which does not convincingly demonstrate its performance.
b) In Table 3, why do the DetA (56.1) and AssA (54.0) metric values remain unchanged, regardless of the addition or removal of components in the ablation study?
Technical Quality: 2
Clarity: 3
Questions for Authors: Please refer to the Weaknesses.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: This paper discusses the limitations in Section G of the supplementary.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **1. The introduced task ... not fundamental and novel enough.**
We think that OW-VISCap is a novel task that identifies an important gap in existing literature. We agree with reviewer 9nUt: OW-VISCap is “one of the ultimate understanding tasks for video”. Fine-grained object captioning in videos is crucial, especially in an open-world setting. However, existing works like DVOC-DS only focus on closed-world objects and on bounding box-based captioning of only a single action segment per object in a video.
**2. ... (Lines 33-37), some existing VLMs also focus on free-form captions (i.e., open-vocabulary segmentation) and object-oriented captioning (e.g., [1]).**
Thanks for pointing out the ambiguity in lines 35-37. We will reword these lines in the revised version. However, please note that open-vocabulary segmentation methods need to encode all object categories (seen or unseen) to obtain text embeddings first, which are used to obtain probability scores for the final predicted masks. In our open-world setting, we don’t need to use predefined object categories. Instead, we directly generate free-form captions.
Also, note that [1] operates in a closed-world setting. We have carefully pointed out the differences between [1] and our method in Sec 2.3 of the main paper and elaborated on the differences further in Appendix A.1.
**3. The technical novelty ... seems limited.**
Please see the “Technical Novelty” section in our comment addressed to all reviewers.
**Experiment**
**1. Utilize widely used validation sets like YouTube-VIS. For captioning, besides METEOR, ... CIDEr and ROUGE would enhance the evaluation.**
For the closed-world, we have reported results on the OVIS dataset (Tab. 5 of the Appendix), since it is more diverse and challenging than the YouTube-VIS dataset, and since it is also widely used these days. Please also see the “Experimental Performance” section in our comment which addresses all reviewers.
Our CIDEr and ROUGE-1 scores for OWVISCapTor+CAROQ (second-to-last row in Tab. 1) are 1.03 and 0.54, respectively. However, please note that the other method we compare to (DVOC-DS) only reports METEOR. Hence, we can't compare with this method using these additional captioning metrics.
**2. The improvements are slight.
a) In Table 1, ... does not achieve SoTA on more than half of the evaluation metrics.
b) In Table 3, why do the DetA and AssA remain unchanged?**
a) Please see the “Experimental Performance” section in the comment addressed to all reviewers.
b) Tab. 3 shows how different configurations affect the captioning performance. Detection and association (DetA and AssA) are performed using exactly the same object abstractor.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their response. After reading the comments from other reviewers and the responses, I am still concerned that 1) the task is more like a combination of segmentation and captioning in the video; 2) the technical novelty of each proposed part is limited, which composes existing components as a model for the task; 3) the results (Table 1) do not convincingly demonstrate the effectiveness of the proposed method. If the unified framework does not surpass the performance of individual models on their respective tasks, it raises the question: why not develop a specialized model for each task instead? Thus, I will maintain my initial score.
---
Reply to Comment 1.1.1:
Comment: 1) We kindly disagree with the reviewer. Our task requires **a holistic object understanding**, where the object queries are expressive enough to be used not only for segmentation (closed and open-world), but also to generate meaningful object-centric captions. This is an important step towards generalized scene understanding. There are existing generalized tasks in the literature that combine two or more specific tasks (e.g., Multi-Object Tracking and Segmentation and Video Instance Segmentation combine image segmentation and Multi-Object Tracking). But they play a critical role in holistic video understanding. In this work, we aim towards more generalizability.
2) Our individual technical contributions (open-world object queries, masked attention for object-centric captioning, and inter-query contrastive loss) together are effective in forming a generalized method for our novel and important OW-VISCap task. We think that sharing these contributions with the community will accelerate research in object understanding in Vision Language Models.
3) Our unified framework surpasses the performance of individual models on **the tasks we care about (open-world performance and captioning ability)** as shown in Table 1. While specialized models are great for closed-world segmentation, in this work, we are not interested in developing specialized models because that doesn’t encourage a holistic understanding of objects in videos. In this work, we care about generalizability.
---
Rebuttal 2:
Comment: Dear Reviewer zWxZ, as the NeurIPS AC, I would like to remind you to check and read the rebuttal provided by the authors. After that, please respond to or acknowledge the authors' rebuttal and update your rating if necessary. Many thanks. | null | null | Rebuttal 1:
Rebuttal: We thank all reviewers for their helpful feedback. We are thrilled that they find our work novel (Reviewer 9nUt), thoroughly analyzed (Reviewer W8BT), and well-written (Reviewer zWxZ). Reviewers zWxZ and W8BT have raised questions about the technical novelty and experimental performance of our method, which we address below.
**Technical Novelty:**
We think that we identified an important problem that is of interest to our community, and we developed a simple yet effective baseline solution. Our main technical contributions are twofold:
a) Our object abstractor is designed to handle **open-world objects**. As Reviewer zWxZ rightly pointed out, our object abstractor builds upon existing closed-world object abstractors, albeit we also introduce an inter-query contrastive loss. This being said, our main contribution is the development of open-world object queries via a prompt encoder. The developed technique improves upon the prior SOTA EntitySeg+DEVA by 5.6 points on open-world tracking accuracy (OWTA) for unseen objects on the BURST dataset, as summarized in Tab. 1.
b) We introduce an important component, **masked cross attention**, in the object-to-text abstractor. It effectively allows us to perform fine-grained object-level captioning. The design allows the model to focus on individual objects without losing the overall image context, and significantly outperforms naive baselines, as demonstrated in Tab. 3. As far as we know, masked attention has not been used for object-level captioning before. We think this is an important contribution.
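To make the mechanism concrete, below is a minimal NumPy sketch of masked cross-attention in the Mask2Former style: a query attends only to image features inside a predicted object mask. This is an illustrative sketch under simplified assumptions, not the authors' implementation.

```python
import numpy as np

def masked_cross_attention(q, feats, mask):
    """q: (d,) object query; feats: (N, d) image features; mask: (N,) bool,
    True where the feature lies inside the object's predicted mask."""
    scores = feats @ q / np.sqrt(q.shape[0])
    scores = np.where(mask, scores, -np.inf)   # block features outside the mask
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ feats, weights

rng = np.random.default_rng(0)
feats = rng.normal(size=(6, 4))
q = rng.normal(size=4)
mask = np.array([True, True, False, False, True, False])
out, w = masked_cross_attention(q, feats, mask)
print(w.round(3))  # attention weights outside the mask are exactly zero
```

In a full model, overall image context would be preserved by other (unmasked) attention layers; the sketch only shows the masking step itself.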
**Experimental Performance:**
We would like to highlight three points:
a) Our work specifically addresses two major problems: **open-world object discovery** and **object-level captioning**. Hence we are mainly interested in the metrics that specifically cater to these aspects (highlighted with blue in Table 1: OWTA (unseen) and CapA). VIS is not the focus of this work. We obtain the best results on these metrics: 5.6 points higher OWTA (unseen) on the BURST dataset and 7.1 points higher CapA on the Dense VOC dataset. We are also the overall SOTA on the Dense VOC dataset, improving prior SOTA DVOC-DS by 1.5 points. Our VIS performance is similar to the closed-world baselines we build upon, i.e., Mask2Former and CAROQ, as seen from the results on the OVIS data (Tab. 5 in the Appendix).
b) **Generalizability**: We want to highlight the generalizability of our work. Our approach simultaneously segments, tracks, and captions both never-before-seen and previously seen object categories in videos; previous specialized methods for dense video object captioning (Dense VOC) and open-world VIS (OW-VIS) cannot achieve this. DVOC-DS, the previous SOTA for Dense VOC, cannot handle never-before-seen objects (open-world) or multiple action segments for the same object. Although DVOC-DS achieves better closed-world detection and association accuracies (DetA and AssA), we significantly improve the captioning accuracy (CapA) of the Dense VOC task by 7.1 points. We are the SOTA on the overall metric: captioning higher order tracking accuracy (CHOTA), while also being able to operate in an open-world setting. While we perform slightly worse than the OW-VIS SOTA on seen objects on the BURST dataset, we obtain a 5.6 points higher open-world tracking accuracy (OWTA) on unseen objects. Additionally, we can generate free-form captions for predicted objects, which OW-VIS SOTA can’t.
c) **Stronger baselines for closed-world**: Our contributions (open-world object queries, object-to-text abstractor augmented with masked attention, and inter-query contrastive loss) can be integrated with any VIS pipeline. Integrating our method with a strong VIS pipeline would lead to a stronger VIS performance. We currently build on top of Mask2Former and CAROQ. Hence our VIS performance is similar to Mask2Former and CAROQ, as seen from the results on the OVIS dataset (Tab. 5 in the Appendix), using the Resnet-50 backbone. As mentioned in point (a), VIS is not the focus of our work, we focus on orthogonal components: open-world object discovery and object-captioning. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Boosting Transferability and Discriminability for Time Series Domain Adaptation | Accept (poster) | Summary: The paper introduces Adversarial CO-learning Networks (ACON) to improve unsupervised domain adaptation (UDA) for time series classification by enhancing the transferability and discriminability of temporal and frequency features. The proposed approach incorporates multi-period frequency feature learning, temporal-frequency domain mutual learning, and domain adversarial learning in temporal-frequency correlation subspaces. Extensive experiments demonstrate ACON's superior performance over existing methods on various time series datasets.
Strengths: - The introduction of Adversarial CO-learning Networks (ACON) that integrates multi-period frequency feature learning, temporal-frequency domain mutual learning, and domain adversarial learning is a novel and promising direction for enhancing unsupervised domain adaptation (UDA) in time series classification.
- The paper demonstrates extensive experiments across a wide range of time series datasets and five common applications, providing strong empirical evidence of the proposed method's effectiveness.
- The paper effectively highlights the distinct properties of temporal and frequency features, proposing mechanisms to leverage their respective strengths for improved transfer learning.
Weaknesses: - Methodological Complexity: The proposed ACON framework is highly complex, involving multiple components and phases (multi-period frequency feature learning, temporal-frequency domain mutual learning, and domain adversarial learning). This complexity may hinder reproducibility and practical implementation, especially in real-world applications.
- Insufficient Scalability Analysis: The paper lacks a detailed discussion on the scalability and computational efficiency of ACON. It remains unclear how the model performs with larger datasets and whether it introduces significant computational challenges.
- Presentation and Clarity Issues: Some sections of the paper are dense and difficult to follow. Simplifying the language and providing clearer explanations would improve readability and accessibility.
- Lack of Detailed Comparative Analysis: While the paper includes experiments against various baselines, it falls short in providing a detailed comparative analysis. More in-depth discussion on why ACON outperforms other methods and the specific contributions of its individual components would strengthen the paper.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. The preliminary study assumes that low-frequency components are the most discriminative. However, the choice between phase and amplitude for frequency feature extraction can significantly impact results. Why were only the human activity recognition datasets used for this preliminary study, and how might this assumption affect other types of time series data?
2. The ablation study indicates that the domain loss contributes more significantly to the model's performance than the two KL losses. Could you explain why this is the case, and provide more insights into the individual contributions of each loss component?
3. The paper proposes domain adversarial learning to align features in both source and target domains. However, this seems to conflict with the novel aspect of different alignment strategies based on domain-specific characteristics. Could you clarify how these two approaches are reconciled in your framework?
~~4. How are classification predictions made during inference? The paper mentions a voting mechanism, but additional details on how the final prediction is derived from the individual components would be helpful.~~
5. In Section 4.1, how is the value of k (the number of top amplitudes) determined for multi-period frequency feature learning? Is it dataset-specific, or is there a general guideline for selecting this parameter?
~~6. Can you provide more details on what the learned latent variables represent? Do they capture the essential features that define each class, and how can this be validated?~~
(The issues in Questions 4 and 6 were errors that occurred while I was switching between different pages during the review process of multiple articles. I sincerely apologize to the authors for this.)
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors acknowledge that while ACON enhances transferability and discriminability for time series domain adaptation, it remains unstable when dealing with time series data that exhibit relatively large variances. Addressing this instability is a critical challenge that needs to be tackled in future research.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your helpful feedback! Due to limited space, additional results are in the **PDF attached to the Author Rebuttal**, and we summarize the weaknesses and questions below.
### W1: Reproducibility and implementation in real
- Compared with existing works, ACON has the widest evaluation scope (8 datasets and 5 common applications) and state-of-the-art performance, exhibiting greater potential for solving real-world problems.
- Existing UDA methods for time series usually need careful hyperparameter tuning, while ACON fixes all hyperparameters in total loss, making the implementation easier.
- Code was provided to ensure reproducibility.
### W2: Scalability and computational challenges
- Larger datasets
- Comparisons of evaluation scope are in **Table 4 of PDF**. ACON expands the evaluation scope to **more common tasks and more datasets** (8 datasets and 5 common applications) with **longer lengths and more channels**. Performance of ACON on larger and more complex datasets demonstrates its scalability.
- Computational challenges:
- Comparisons of training time are in **Table 5 of PDF**. ACON performs the best, with the second fastest speed, slightly slower than the fastest method. Thus ACON does not introduce additional computational challenges.
### W3: Presentation
We will refine the writing. If the reviewer could point out the specific sections, we would be grateful and will make more targeted improvements.
### W4: More in-depth discussion on ACON and other methods
Existing works are in two categories: general UDA and UDA for time series.
- General UDA methods ignore the special properties of time series, while ACON fully leverages the respective advantages of temporal and frequency features.
- Most UDA methods for time series ignore frequency domain. RAINCOAT introduces frequency domain into UDA and assumes that temporal domain and frequency domain are independent, treating them equally. As a result, the improvement brought by frequency domain is incremental. Guided by empirical insight, we propose ACON to fully utilize the advantages of temporal domain and frequency domain:
- multi-period feature learning to enhance the discriminability of frequency features
- temporal-frequency domain mutual learning to leverage the respective advantages
- domain adversarial learning in temporal-frequency correlation subspace to align the source and target distribution
### Q1: Only HAR datasets are used in preliminary study
We apologize for any confusion that may have led the reviewer to think that only HAR datasets are used. We conduct a comprehensive preliminary study on **5** datasets. The results are presented in **Table 1-2 of the PDF**. Among them, (1) results of the CAP dataset (SSC task) are in Section 3 of the paper; (2) results of the UCIHAR, HHAR-P and WISDM datasets (HAR task) are in Appendix C.1; (3) we conduct additional experiments on the MFD dataset (FD task) in the rebuttal to further justify the motivation.
To analyze the impact of phase and amplitude, results of frequency classification experiments using three strategies are in **Table 6 of PDF**.
From Table 1, 2 and 6, **general** phenomena are observed:
- Frequency features have better discriminability.
- Temporal model is more skilled at learning invariant features with domain alignment.
- Phase cannot provide strong discriminative information.
Phenomena of diverse scenarios consistently support our assumption.
### Q2: The KL losses contribute less than the domain loss
The domain loss is the key to improving transferability. Without the domain loss, the transferability of temporal features is weak, so the KL losses cannot significantly enhance the transferability of frequency features.
### Q3: Adversarial learning seems to conflict with mutual learning
Different mutual learning strategies in source and target domain do not conflict with adversarial learning between source and target. The reasons are:
- Mutual learning is an alignment (1) between temporal and frequency space (2) at the **sample** level.
Adversarial learning is an alignment (1) between source and target domain (2) at the **distribution** level.
- The alignment direction of mutual learning (temporal <-> frequency) is orthogonal to that of adversarial learning (source <-> target). Therefore, mutual learning is not contradictory to adversarial learning in reducing the divergence between source and target.
- Mutual learning enhances the transferability of frequency features, assisting the alignment of adversarial learning in frequency space.
- Mutual learning enhances the discriminability of temporal features, enhancing the discriminability of the invariant features learned through adversarial training.
In all, mutual learning and adversarial training share the same goal: to learn domain-invariant and discriminative features. To avoid potential confusion caused by the repetition of "alignment" in both mutual learning and adversarial learning, we will change "alignment" in mutual learning to "match".
### Q4: A voting mechanism
The "voting mechanism" did not appear in our paper. We would be grateful if the reviewer could provide more information. To make classification predictions, we apply softmax to the output of the classifier and take the class with the highest score as the final prediction.
### Q5: How is the value of k determined?
In ACON, the original time series length is set as a default period to capture global information. On top of this, after min-max normalizing the amplitude distribution, we select the frequencies with a normalized amplitude ratio > 0.9 as the dominant frequencies. We will add these explanations in the next version.
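This selection rule can be sketched in a few lines; the function name, the DC handling, and the small epsilon are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def dominant_frequencies(x, ratio_threshold=0.9):
    """Pick frequencies whose min-max normalized amplitude exceeds a
    threshold (hypothetical sketch of the described selection rule)."""
    amp = np.abs(np.fft.rfft(x))   # amplitude spectrum of the series
    amp[0] = 0.0                   # ignore the DC component
    norm = (amp - amp.min()) / (amp.max() - amp.min() + 1e-8)
    return np.flatnonzero(norm > ratio_threshold)

# A toy series with two periodicities; the stronger one dominates.
t = np.arange(512)
x = 3.0 * np.sin(2 * np.pi * 8 * t / 512) + 1.0 * np.sin(2 * np.pi * 32 * t / 512)
print(dominant_frequencies(x))  # [8]
```

With the threshold at 0.9, only the strongest tone survives; lowering the threshold would also admit the weaker frequency at bin 32.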
### Q6: Latent variables
The "latent variables" did not appear in our paper. To our knowledge, latent variables refer to the learned representation in Bayesian Generative Models, which is irrelevant to our paper. We would be grateful if the reviewer could provide more information.
We hope our responses were able to address any remaining concerns.
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my previous concerns. I appreciate the clarifications and would like to delve deeper into a few points:
1. The observation of asynchronous temporal/frequency features between the source and target domains is quite intriguing. Your experiments confirm this phenomenon, which I find to be a valuable finding.
2. Regarding the KL loss and domain loss, I realize my previous query may not have been clear. From the ablation study, it appears that the domain loss contributes significantly more to the overall performance improvement than the KL loss. The KL loss seems to be an adjustment to align with the authors' previous observations of different domain characteristics. My question is: can domain adaptation tasks be effectively completed with domain loss alone, without relying on the KL loss?
3. Following up on the previous point, I noticed in your rebuttal that you described the relationship between mutual learning and adversarial learning as "orthogonal." While I agree that these two tasks have different objectives, the term "orthogonal" suggests a strong distinction. Considering that these tasks are closely related, is there any theoretical support for describing them as orthogonal?
Lastly, I sincerely apologize for the confusion regarding Q4/6. This error was due to my handling multiple paper reviews simultaneously and inadvertently switching pages. I deeply regret any inconvenience this may have caused.
---
Rebuttal 2:
Title: Response for a deeper discussion (Part 1)
Comment: Thank you very much for your quick feedback. We would be delighted to engage in a deeper discussion with you.
### **Feedback1: The observation of asynchronous temporal/frequency features between the source and target domains is a valuable finding**.
Thank you very much for recognizing our empirical insight as a valuable finding. As a new finding, we believe it can provide insights to guide the design of more algorithms for future works.
### **Feedback2: Can domain adaptation tasks be effectively completed with domain loss alone, without relying on the KL loss?**
We will answer this question in 3 subquestions.
### Subquestion 1. "How to complete domain adaptation tasks effectively?"
To ensure that the reviewer fully understands our discussion, we give some detailed background knowledge in UDA here.
In domain adaptation theory [1], the generalization error $\epsilon_t$ of the target domain can be bounded by:
$\epsilon_t \leq \epsilon_s + d_{\mathcal H\Delta\mathcal H}(P,Q)+\lambda \quad (1)$
$\epsilon_s$ is the source error, $d_{\mathcal H\Delta\mathcal H}(P,Q)$ measures the divergence between the source and target feature distributions, and $\lambda$ is the error of an ideal joint hypothesis $h^*$ defined as $h^* = \arg\min_{h\in \mathcal H}\left[\epsilon_s(h) + \epsilon_t(h)\right]$, such that:
$\lambda = \epsilon_s(h^*) + \epsilon_t(h^*) \quad (2)$
From Equation (1), we find that to complete domain adaptation tasks effectively, we need three small error terms: $\epsilon_s$, $d_{\mathcal H\Delta\mathcal H}(P,Q)$ and $\lambda$. Among them, $\epsilon_s$ is the source error; since the model is supervised with labeled source data, $\epsilon_s$ is very small. Thus the key to achieving domain adaptation is to minimize $d_{\mathcal H\Delta\mathcal H}(P,Q)$ and $\lambda$.
$d_{\mathcal H\Delta\mathcal H}(P,Q)$ measures the divergence between the source and target feature distributions; if the features have strong transferability, the divergence between the source and target features should be small. Thus, to minimize $d_{\mathcal H\Delta\mathcal H}(P,Q)$, boosting the transferability of features is a promising solution.
In Equation (2), $\lambda$ is the error of an ideal joint hypothesis. It means that, given source features and target features, $\lambda$ is the error of the optimal classifier (e.g. a linear classifier) in classifying the features of both domains into each class. Thus $\lambda$ can measure the discriminability of features. To minimize $\lambda$, boosting the discriminability of features is a good choice.
With the above background knowledge, the answer to this subquestion is: to complete domain adaptation tasks effectively, we need to minimize $d_{\mathcal H\Delta\mathcal H}(P,Q)$ and $\lambda$, i.e. **boost transferability and discriminability of features**.
### Subquestion 2. "How does domain loss help domain adaptation?"
In our original rebuttal to the reviewer ggDU, we answer "Domain loss is the key to improving transferability" but we do not explain it in detail due to the limited space. Here we explain this answer **intuitively and theoretically**.
- Intuitively, adversarial learning is a two-player game between the domain discriminator and the feature extractor. The domain discriminator is trained to distinguish source features from target features and the feature extractor is trained simultaneously to confuse the discriminator. When the game reaches equilibrium, the domain discriminator can no longer distinguish the source features from the target features. At this point, the extracted features are invariant to the change of domains, and the transferability of features is enhanced.
- Theoretically, according to [1], the $\mathcal{H}\Delta \mathcal{H}$-Divergence between the source feature distribution $P$ and target feature distribution $Q$ can be estimated by training the domain discriminator $D$:
$L_D= \max\limits_{D}\mathbb E_{f\sim P}[D(f)=0]+\mathbb E_{f\sim Q}[D(f)=1] \quad (3)$
The objective of the feature extractor is to minimize the source error as well as the $\mathcal{H}\Delta \mathcal{H}$-Divergence bounded by Equation (3), boosting the transferability of features.
Overall, the answer to subquestion 2 is: **domain loss is the key to improving transferability of features**.
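As a hedged illustration of how the divergence in Equation (3) can be estimated by training a domain discriminator, the sketch below trains a simple logistic-regression discriminator and reads off a proxy A-distance; the estimator, hyperparameters, and data are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def domain_divergence_proxy(src, tgt, lr=0.1, steps=500):
    """Proxy A-distance 2*(1 - 2*err): train a logistic-regression domain
    discriminator and use its training error. Higher values indicate a
    larger source/target feature divergence. (Illustrative sketch only.)"""
    X = np.vstack([src, tgt])
    y = np.concatenate([np.zeros(len(src)), np.ones(len(tgt))])
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):                        # gradient descent on BCE
        p = 1 / (1 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    p = 1 / (1 + np.exp(-(X @ w + b)))
    err = np.mean((p > 0.5) != y)
    return 2 * (1 - 2 * err)

rng = np.random.default_rng(0)
src = rng.normal(0, 1, (200, 2))
print(domain_divergence_proxy(src, rng.normal(0, 1, (200, 2))))  # near 0: aligned
print(domain_divergence_proxy(src, rng.normal(3, 1, (200, 2))))  # near 2: shifted
```

When the feature extractor is trained to fool such a discriminator, it is implicitly pushing this divergence estimate down, which is the transferability argument above.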
---
Rebuttal 3:
Title: Response for a deeper discussion (Part 2)
Comment: ### **Feedback2: Can domain adaptation tasks be effectively completed with domain loss alone, without relying on the KL loss?**
### Subquestion 3. "Can domain loss solve domain adaptation tasks effectively?"
In subquestion 1, we know that to complete domain adaptation tasks effectively, boosting the transferability and discriminability of features is a must.
In subquestion 2, we know that domain loss is the key to improving the transferability of features.
If the domain loss alone could solve domain adaptation tasks effectively, it would have to enhance the discriminability of features. However, the training objective of the domain loss is not related to the discriminability of features, so there is no guarantee that discriminability can be boosted using the domain loss alone.
Ref[2] reveals that adversarial learning can potentially deteriorate the discriminability since it distorts the original feature distributions when learning domain invariant features.
Thus the answer to subquestion 3 is: **using only the domain loss cannot solve domain adaptation tasks effectively, because the domain loss may deteriorate the discriminability of features; techniques that boost the discriminability of features should be introduced.**
### **Feedback3: Is there any theoretical support for describing mutual learning and adversarial learning as orthogonal?**
In our rebuttal, we state that **the alignment directions** (these words cannot be ignored) of two modules are orthogonal. Intuitively, the alignment direction of mutual learning (temporal <-> frequency) and the alignment direction of adversarial learning (source <-> target) are totally different and easy to understand.
It is quite difficult to prove the orthogonality of two learning modules theoretically, but we can prove it empirically:
Mutual learning and adversarial learning can be treated as different tasks. Following Ref [3], we analyze the individual gradients produced by adversarial learning and mutual learning in the training process. The angles between gradient vectors in mutual learning and gradient vectors in adversarial learning are calculated as in Ref [3], results are in the Table below:
| | UCIHAR | HHAR | WISDM | Average |
| ----- | -------------- | -------------- | -------------- | ------- |
| Angle | $86.16^\circ$ | $84.37^\circ$ | $88.22^\circ$ | $86.25^\circ$ |
If the angle between two gradients is close to $90^\circ$, the two tasks can be treated as orthogonal. From the table above, we find that across the 3 datasets, the average angle between the gradient vectors of mutual learning and those of adversarial learning is very close to $90^\circ$.
[1] Ben-David et al. "A theory of learning from different domains." *Machine learning*, 2010.
[2] Liu et al. "Transferable Adversarial Training: A General Approach to Adapting Deep Classifiers." *ICML*, 2019.
[3] Yu et al. "Gradient Surgery for Multi-Task Learning." *NeurIPS*, 2020.
Thanks for your valuable comments. We hope our responses were able to address any remaining concerns. Please do let us know if you have any further questions as well as what would be expected for score improvement.
---
Rebuttal 4:
Comment: Thank you for your response. I have reviewed the content, and these replies effectively address my current concerns. I believe the authors have a solid understanding and insight into their research work. Additionally, they offer a fresh perspective in the field of UDA, making this a commendable contribution.
I've edited the comments and the score.
---
Rebuttal Comment 4.1:
Title: Response to Reviewer ggDU
Comment: Thank you for the timely response and raising the score to 6. We are pleased to know that we have successfully addressed your concerns. We sincerely appreciate your recognition of our work as a commendable contribution that offers a fresh perspective in the field of UDA.
We will revise our paper according to your feedback and the results in the rebuttal, including adding more details about how to determine top-k periods, providing more comparisons about the evaluation scope and training speed, and discussing the enhancement between different modules.
Thank you again for your invaluable feedback and dedicated time to review our paper. If you have any questions, please send them to us, we look forward to discussing with you to further improve our work. | Summary: This paper studies the Unsupervised Domain Adaptation (UDA) problem in time series classification. The authors first proposed an insight that temporal features enhance transferability while frequency features enhance discriminability. Based on the insight, the authors designed a model that leverages temporal and frequency features. Other techniques include but not limited to Knowledge Distillation and Domain Adversarial Learning. The evaluations show SOTA performance, and the ablation study indicates that the designed components are helpful.
Strengths: 1. Empirical insights into temporal and frequency features of time series data in terms of transferability and discriminability.
2. It is interesting to see how the authors managed to achieve better transferability and discriminability through knowledge distillation.
3. The designed temporal-frequency subspace for domain adversarial learning showed notable performance increase.
Weaknesses: 1. This paper is derived from the empirical insight of temporal and frequency features. It would be better if the authors can explain a little bit about why temporal or frequency features excel in different properties.
2. The preliminary experiment on transferability and discriminability could use more datasets and more analysis to justify the motivation of the paper.
3. Since there are other papers utilizing temporal and frequency features as mentioned in the related work, it would be better if the author can illustrate the differences and the advantages of the current design compared to other works.
4. It would be helpful to improve clarity if the author can explain the “index” in figure 1.
5. During the illustration of the model proposed, the author did not cover the design of the domain discriminator.
Technical Quality: 3
Clarity: 3
Questions for Authors: It’s a great idea to exploit the advantages of both temporal and frequency features using distillation. But I also wonder why a simple fusion of temporal and frequency features is not used and compared. Will the fused features maintain both good transferability and discriminability?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your valuable feedback! The additional results are presented in **the PDF attached to the Author Rebuttal**, and we summarize our responses to the original weaknesses and questions below.
### W1: The intuition behind empirical insight
Here we provide an intuition from the perspective of energy.
According to the Parseval–Plancherel identity [1], the energy of a time series in the temporal domain is equal to the energy of its representation in the frequency domain. Via the FFT, the energy of raw temporal signals in Euclidean space is carried over to their frequency representation in Hilbert space. In Euclidean space, the energy is allocated to every time step, while in Hilbert space, the energy is mainly allocated to several dominant frequencies.
This means that in the frequency domain, the model only needs to focus its attention on a local region to capture the dominant frequencies, while in the temporal domain, the model needs to distribute its attention globally to extract discriminative patterns, which keeps temporal modeling challenging. In contrast, in the frequency domain the discriminative patterns have been "**pre-extracted**" via the FFT, which allows the frequency model to extract discriminative features more easily.
When a shift happens, the frequency model with local attention may encounter **a dramatic local shift** (e.g. a shift in the dominant frequencies of a certain class), leading to worse transferability. In contrast, the temporal features, with a more evenly distributed energy, are more resistant to the shift and exhibit better transferability.
[1] Plancherel, et al. "Contribution à ľétude de la représentation d’une fonction arbitraire par des intégrales définies." 1910.
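The energy argument can be checked numerically. The sketch below verifies the Parseval identity for a toy two-tone signal and shows the frequency-domain energy concentrating in a handful of bins; the signal parameters are illustrative assumptions:

```python
import numpy as np

t = np.arange(256)
x = np.sin(2 * np.pi * 5 * t / 256) + 0.3 * np.sin(2 * np.pi * 40 * t / 256)

X = np.fft.fft(x)
e_time = np.sum(x ** 2)                  # energy spread over all time steps
e_freq = np.sum(np.abs(X) ** 2) / len(x)
print(np.isclose(e_time, e_freq))        # True: Parseval identity holds

power = np.abs(X) ** 2
top4 = np.sort(power)[-4:].sum() / power.sum()
print(top4 > 0.99)                       # True: energy concentrates in 4 bins
```

Here 4 bins (the two tones and their conjugate mirrors) carry essentially all the energy, while in the time domain the same energy is spread across all 256 steps.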
### W2: More datasets for preliminary study
We conduct a comprehensive preliminary study on a wide range of datasets (**5 datasets from 3 tasks**). The results are comprehensively presented in **Table 1-2 of PDF**. Among them, (1) Results of SSC task (CAP dataset) are included in Section 3 of the paper. (2) Results of HAR task (UCIHAR, HHAR-P and WISDM datasets) are included in Appendix C.1. (3) We conduct additional experiments on FD task (MFD dataset) in rebuttal to further justify the motivation.
From Table 1 and 2, we can observe **general** experimental phenomena:
- Frequency features have better discriminability.
- The temporal model is better at learning invariant features during alignment.
The phenomena across different types of data consistently support our empirical insights, demonstrating their generalizability.
### W3: Comparison with other works
To our knowledge, RAINCOAT is currently the only work that introduces the frequency domain into time series domain adaptation.
For the extraction of frequency features, RAINCOAT treats amplitudes and phases equally, while ACON discards the phase and proposes multi-period frequency feature learning to enhance discriminability.
For the utilization of temporal and frequency features, RAINCOAT assumes they are independent and treats them equally, so the improvement brought by the frequency domain is incremental. In contrast, based on their respective advantages, ACON proposes temporal-frequency domain mutual learning and co-alignment in the temporal-frequency correlation subspace, achieving a performance boost.
### W4 & W5: Index & Design of domain discriminator
Thank you very much for pointing out the confusing and missing details.
For Figure 1(a), the index refers to time steps or frequencies. For Figure 1(b), the index refers to different domains contained in the CAP dataset. For Figure 1(c), the index refers to different source-target pairs selected from the CAP dataset.
We adopt a 3-layer MLP with ReLU activations as the domain discriminator, which is consistent with existing UDA works. We will add these explanations in a later version.
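A minimal forward-pass sketch of such a 3-layer ReLU discriminator; all dimensions, initializations, and names are assumptions for illustration, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_discriminator(f, params):
    """Hypothetical 3-layer MLP domain discriminator with ReLU activations,
    mapping a batch of features to the probability of being 'target'."""
    W1, b1, W2, b2, W3, b3 = params
    h1 = np.maximum(f @ W1 + b1, 0.0)    # layer 1 + ReLU
    h2 = np.maximum(h1 @ W2 + b2, 0.0)   # layer 2 + ReLU
    logits = h2 @ W3 + b3                # layer 3: single domain logit
    return 1 / (1 + np.exp(-logits))     # sigmoid probability

d_feat, d_hid = 64, 128                  # assumed feature/hidden sizes
params = (rng.normal(0, 0.02, (d_feat, d_hid)), np.zeros(d_hid),
          rng.normal(0, 0.02, (d_hid, d_hid)),  np.zeros(d_hid),
          rng.normal(0, 0.02, (d_hid, 1)),      np.zeros(1))

p = mlp_discriminator(rng.normal(size=(8, d_feat)), params)
print(p.shape)  # (8, 1)
```

In adversarial training this discriminator would be optimized to separate source from target features while the feature extractor is optimized to fool it.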
### Q1: Comparison with a simple fusion
We conduct exploratory experiments to verify whether a simple fusion in classification or alignment module can produce better performance. The results are presented in Table 3 of the PDF. Due to limited space, a more detailed explanation is included in **the caption of Table 3**.
In the classification module, by comparing the 4th, 5th and 6th rows in Table 3, we observe that concatenated features do not achieve better performance. Intuitively, when we use the concatenated feature for classification, the final prediction is **a simple average** of the temporal prediction and the frequency prediction. At best, the extracted features maintain only suboptimal discriminability and suboptimal transferability. In contrast, our mutual learning allows the frequency domain and temporal domain to become teachers with different advantages and transfer knowledge to each other. In this way, each other's secondary quantities (e.g. the estimates of the probabilities of the next most likely classes) are transferred [1], enhancing the discriminability and transferability.
In the alignment module, by comparing the 3rd and 6th rows, we observe that DANN with the concatenated feature significantly underperforms our method. Intuitively, when we use the concatenated feature for alignment, $\mathbf f_F$, with its worse transferability, provides $g_D$ with rich domain-label-relevant information. In this case, $g_D$ only needs to focus on the game with $\psi_{F}$ in the frequency domain, ignoring domain adversarial learning in the temporal domain. Based on the experimental performance and this intuition, we discard the simple fusion and opt for co-alignment in the temporal-frequency correlation subspace.
[1] Hinton, et al. "Distilling the knowledge in a neural network." *Deep Learning Workshop in NeurIPS*, 2014.
We hope our responses were able to address any remaining concerns. Please do let us know if you have any further questions as well as what would be expected for score improvement.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I am leaning to accept this paper and would like to maintain my positive score.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer U97c
Comment: Thank you for the timely response and efforts in reviewing our paper. We sincerely appreciate your recommendation to accept our paper.
We will revise our paper according to your feedback and the results in the rebuttal, including adding:
- The intuition behind the empirical insight.
- More datasets for preliminary study.
- More details: a more detailed comparison with other works, the design of the domain discriminator, and the explanation of the "index".
- A comprehensive comparison of simple fusion.
Thank you again for your invaluable feedback and dedicated time to review our paper. If you have any questions, please send them to us, we look forward to discussing with you to further improve our work. | Summary: The paper shows that temporal features and frequency features should not be equally treated in model training, the expression of those features are different. Frequency features show strong discriminability and temporal features show strong transferability. Then the author propose the Multi-period frequency feature learning; Temporal-frequency domain mutual learning and Domain adversarial learning in temporal-frequency correlation subspace to leverage the advantages. The proposed results are impressively good.
Strengths: The proposed distinction between frequency features and temporal features is interesting.
The proposed methods are novel.
The results are impressively good.
Weaknesses: The motivation of the proposed methods should be clarified.
1. Why can the alignment between the temporal predictions and the frequency predictions leverage the respective advantages of the temporal domain and frequency domain? There is no guarantee that both advantages are kept.
2. Why can the multi-period frequency feature learning enhance the discriminability of the frequency domain?
3. Why can domain adversarial learning let the model learn transferable representations?
Technical Quality: 3
Clarity: 3
Questions for Authors: See weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback! We are grateful that the reviewer highlights our method's novelty and solid experiments. We would be happy to explain the motivations behind the different components in more detail.
### W1: Why the alignment between the temporal predictions and the frequency predictions can leverage the respective advantages of the temporal domain and frequency domain?
The alignment between the temporal predictions and the frequency predictions is inspired by knowledge distillation (KD). KD aligns the student model's predictions to the teacher model's predictions to transfer knowledge from teacher to student. Existing studies have proven that learning to mimic the teacher is easier than learning the target function directly, and the student can match or even outperform the teacher [1].
Given the respective advantages of the frequency domain and temporal domain, ACON allows the frequency domain and temporal domain to each become a teacher with different advantages. By minimizing the KL divergence with different directions, the respective advantages of the frequency domain and temporal domain are transferred to each other. In this way, the temporal domain and the frequency domain can not only learn the true label but also learn each other's secondary quantities (e.g. their estimates of the probabilities of the next most likely classes) [2], enhancing the discriminability and transferability.
[1] Romero, Adriana, et al. "Fitnets: Hints for thin deep nets." *ICLR*, 2015.
[2] Hinton, et al. "Distilling the knowledge in a neural network." *Deep Learning Workshop in NeurIPS*, 2014.
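A minimal sketch of the bidirectional KL-based mutual learning described above, with hypothetical logits and a temperature of 2; the exact loss form and temperature are assumptions for illustration, not taken from the paper:

```python
import numpy as np

def softmax(z, T=2.0):
    """Temperature-softened softmax, as in knowledge distillation."""
    e = np.exp(z / T - np.max(z / T))
    return e / e.sum()

def kl(p, q):
    """KL divergence KL(p || q) between two discrete distributions."""
    return np.sum(p * np.log((p + 1e-12) / (q + 1e-12)))

# hypothetical logits from the temporal and frequency branches
z_temporal  = np.array([2.0, 1.0, 0.1])
z_frequency = np.array([3.0, 0.5, 0.2])

p_t, p_f = softmax(z_temporal), softmax(z_frequency)
# bidirectional distillation: each branch mimics the other's softened
# predictions, so 'secondary quantities' flow in both directions
loss_t_from_f = kl(p_f, p_t)   # frequency teaches temporal
loss_f_from_t = kl(p_t, p_f)   # temporal teaches frequency
print(loss_t_from_f > 0 and loss_f_from_t > 0)  # True
```

Minimizing each KL term alongside the usual supervised loss lets each branch absorb the other's advantage while still fitting the true labels.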
### W2: Why the Multi-period frequency feature learning can enhance the discriminability of the frequency domain?
The real-world time series usually presents multi-periodicity, which is reflected in the frequency domain as the presence of a few dominant frequencies with significantly larger amplitudes. Data from different periods can have different discriminative patterns. By multi-period frequency feature learning, ACON extracts these discriminative patterns derived from different periods.
### W3: Why Domain adversarial learning can let the model learn transferable representations?
We provide explanations from two perspectives: intuition and theory.
- Intuitively, adversarial learning is a two-player game between the domain discriminator and the feature extractor. The domain discriminator is trained to distinguish source features from target features and the feature extractor is trained simultaneously to confuse the discriminator. When the game reaches equilibrium, the domain discriminator can no longer distinguish the source features from the target features. At this point, the extracted features are invariant to the change of domains.
- Theoretically, according to [1], the $\mathcal{H}\Delta \mathcal{H}$-Divergence between the source feature distribution $P$ and target feature distribution $Q$ can be estimated by training the domain discriminator $D$:
$L_D= \max\limits_{D}\mathbb E_{f\sim P}[D(f)=0]+\mathbb E_{f\sim Q}[D(f)=1] \quad (1)$
The objective of the feature extractor is to minimize the source error as well as the $\mathcal{H}\Delta \mathcal{H}$-Divergence bounded by Equation (1), resulting in transferable representations.
[1] Ben-David, et al. "A theory of learning from different domains." *Machine learning*, 2010.
Thank you again for the valuable feedback! We hope our responses address any remaining concerns. Please do let us know if you have any further questions, as well as what would be expected for a score improvement.
---
Rebuttal 2:
Title: Looking forward to your feedback
Comment: Dear Reviewer 5Lht,
Thank you for your valuable comments.
We have made an extensive effort trying to address your concerns. In our response:
- From the perspective of Knowledge Distillation, we clarify why the alignment between the temporal predictions and the frequency predictions can leverage the respective advantages.
- Based on the inherent characteristic of time series, we explain why the Multi-period frequency feature learning can enhance the discriminability of the frequency domain.
- From the intuitive perspective and the theoretical perspective, we provide explanations of why Domain adversarial learning can let the model learn transferable representations.
We hope our response can address your concerns. If you have any further concerns or questions, please do not hesitate to let us know, and we will be more than happy to address them promptly.
All the best,
Authors
---
Rebuttal 3:
Comment: Thanks for the explanations, I will keep my rating at 6.
---
Rebuttal Comment 3.1:
Title: Response to Reviewer 5Lht
Comment: Thank you for your invaluable feedback and dedicated time to review our paper.
We will revise our paper according to your feedback, adding more explanation of why each component works. If you have any further questions, please send them to us; we look forward to discussing them with you to further improve our work. | Summary: This paper uncovers an empirical insight in time series domain adaptation - frequency features are more discriminative within a specific domain, while temporal features show better transferability across domains. Based on this insight, it develops ACON (Adversarial CO-learning Networks), which achieves clear improvements over baselines.
Strengths: - Empirical insight: it is great to see such empirical analysis to provide an interesting insight about the properties of frequency/temporal features.
- Technical solution, which leverages the insight to improve accuracy.
- Experiments: it is good to see solid experimental efforts in the paper and appendix.
Weaknesses: - It would be better to explain (or explore) the “why” behind the insight. Is it because the distinguished frequency patterns shift dramatically across domains?
- Usually, neural networks can capture some frequency patterns from time series data. Why does the model avoid the most distinguished frequency patterns but extract more transferable ones?
Technical Quality: 3
Clarity: 3
Questions for Authors: See weakness.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitation of stability has been discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your helpful feedback! We are grateful that the reviewer highlights our empirical insight, technical solution and solid experiments. We would be delighted to explore the underlying reasons behind the empirical insight with the reviewer.
### W1: Why do the frequency features have worse transferability? Is it because the distinguished frequency patterns shift dramatically across domains?
A1:
Here we provide an intuition from the perspective of energy.
According to the Parseval-Plancherel identity [1], the energy of a time series in the temporal domain is equal to the energy of its representation in the frequency domain. The FFT maps the raw temporal signal in Euclidean space to its frequency representation in Hilbert space. In Euclidean space, the energy is allocated to every time step, while in Hilbert space, the energy is mainly allocated to several dominant frequencies.
This means that in the frequency domain, the model needs to focus its attention on a local region to capture the dominant frequencies, while in the temporal domain, the model needs to distribute its attention globally to extract discriminative patterns. When a shift happens, the frequency model with local attention may encounter **a dramatic local shift** (e.g., a shift in the dominant frequencies of a certain class), leading to worse transferability. In contrast, the temporal features, with a more evenly distributed energy, are more resistant to the shift and exhibit better transferability.
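Both the energy identity and the concentration effect can be checked directly with NumPy (a self-contained sketch; `np.fft.fft` is unnormalized, hence the $1/N$ factor):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=256)

# Parseval-Plancherel: time-domain energy equals frequency-domain energy
time_energy = np.sum(x ** 2)
freq_energy = np.sum(np.abs(np.fft.fft(x)) ** 2) / len(x)
assert np.isclose(time_energy, freq_energy)

# Energy concentration: for a periodic signal, a couple of bins carry almost
# all the energy, whereas in the time domain it is spread over every step
t = np.arange(256)
s = np.sin(2 * np.pi * t / 16)          # 16 full cycles in 256 samples
spec = np.abs(np.fft.fft(s)) ** 2
top2_share = np.sort(spec)[-2:].sum() / spec.sum()
```

`top2_share` comes out essentially equal to 1: the two conjugate bins at the signal's frequency hold all of the energy, which is the "local region" the frequency model must attend to.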
### W2: Why does the temporal model avoid capturing the most distinguished frequency patterns?
A2:
As mentioned in A1, with a more evenly distributed energy, the temporal model needs to distribute its attention globally to extract discriminative patterns (e.g., multi-period or trend). This global extraction ability is highly correlated with the model architecture and depth, which keeps temporal modeling challenging. In contrast, in the frequency domain, the discriminative patterns have been "**pre-extracted**" via the FFT, which allows the frequency model to extract discriminative features more easily.
Our experimental results also support this point. We compared state-of-the-art temporal models (TimesNet [2] and DLinear [3]) with a 1D-CNN on source-domain classification (results are presented in the following table), and surprisingly found that these more complex models do not show a significant improvement in UDA settings. This indicates that existing temporal models still lack an ideal ability to extract the most discriminative patterns.
Based on intuition and empirical insight, we believe the mutual enhancement between the temporal domain and the frequency domain can be a promising solution.
| Temporal model | UCIHAR | HHAR-P | WISDM |
| -------------- | ------ | ------ | ----- |
| 1D-CNN | 86.18 | 97.01 | 95.63 |
| TimesNet | 89.85 | 95.07 | 88.42 |
| DLinear | 67.18 | 67.03 | 75.30 |
[1] Plancherel, Michel, and Mittag Leffler. "Contribution à l'étude de la représentation d'une fonction arbitraire par des intégrales définies." *Rendiconti del Circolo Matematico di Palermo*, 1910.
[2] Wu, Haixu, et al. "TimesNet: Temporal 2D-Variation Modeling for General Time Series Analysis." *ICLR*, 2023.
[3] Zeng, Ailing, et al. "Are transformers effective for time series forecasting?" *AAAI*, 2023.
Thank you again for providing insightful comments that helped us improve our paper, and we hope our responses address any remaining concerns. Please do let us know if you have any further questions, as well as what would be expected for a score improvement.
---
Rebuttal 2:
Title: Looking forward to your feedback
Comment: Dear Reviewer uayq,
Thank you for your valuable comments.
We have made an extensive effort trying to address your concerns. In our response:
- From the perspective of energy, we provide an intuition behind the empirical insight.
- We further demonstrate that the mutual enhancement between the temporal domain and the frequency domain is a promising solution by comparing state-of-the-art temporal models with a 1D-CNN.
We hope our response can address your concerns. If you have any further concerns or questions, please do not hesitate to let us know, and we will be more than happy to address them promptly.
All the best,
Authors | Rebuttal 1:
Rebuttal: Thank you to all reviewers for the thoughtful feedback.
We are pleased that all four reviewers recognized our empirical insight, methodological novelty, and solid experiments. We are also delighted that reviewers regard this study as a promising solution to UDA for time series.
In response to reviewers' comments, we perform 4 additional experiments and organize the results into 6 new tables. Due to the limited space, the new tables are included in **the PDF attached to this Author Rebuttal**. We hope these updates address all key concerns and clarify the significance of our work.
We respond to all your comments below in the individual replies, where we directly address comments raised by individual reviewers. Please note that in our response, all experimental results are presented in the PDF attached to this Author Rebuttal. In addition, we carefully proofread the paper and edit the paper for clarity. We will include further details in the next version (new revisions during the rebuttal are not allowed).
Thank you again for your thoughtful feedback. If you have any further concerns or questions, please do not hesitate to let us know.
**Table 1-2**: More datasets for preliminary study (Reviewer U97c and Reviewer ggDU)
**Table 3**: Comparison with a simple fusion (Reviewer U97c)
**Table 4**: Comparison of evaluation scope (Reviewer ggDU)
**Table 5**: Comparison of accuracy and speed (Reviewer ggDU)
**Table 6**: Three different strategies in the frequency domain (Reviewer ggDU)
Pdf: /pdf/3b75aba64805b30c5e28aac9b1e90ce5b00c6ab0.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Divide-and-Conquer Meets Consensus: Unleashing the Power of Functions in Code Generation | Accept (oral) | Summary: The paper proposes FunCoder, a code generation framework incorporating the divide-and-conquer strategy with functional consensus. FunCoder recursively branches off sub-functions as smaller goals during code generation, represented by a tree hierarchy. These sub-functions are then composed to attain more complex objectives. Additionally, they use a consensus formed by identifying similarities in program behavior, mitigating error propagation. FunCoder outperforms baselines by +9.8% on average on HumanEval, MBPP, xCodeEval and MATH with GPT-3.5 and GPT-4.
Strengths: + interesting method
+ impressive experiment and results
Weaknesses: - some important details are missing
- lack analysis for cost
The paper is commendably clear and the results of the experiments are striking. However, I find that the sections detailing the approach are somewhat brief, which leaves me wanting a deeper understanding of the full method, especially since the study focuses on employing a divide-and-conquer approach for code generation. It would be greatly beneficial if the authors could expand on this strategy. For instance, how does the model determine which functions require further division or are simple enough? Are there specific experiments that illustrate this process? Additionally, it would be insightful to know whether the model is designed to recognize dependencies among the subdivided functions and how it integrates these components effectively.
Regarding the implementation costs, the approach involves segmenting the program generation into multiple stages using a depth-first search for each sub-program. This intricate process prompts a question about the overall efficiency and resource utilization. I would appreciate if the authors could provide more detailed information on the costs associated with these procedures.
Technical Quality: 2
Clarity: 2
Questions for Authors: - How the model decide the functions to be further divided or too simple? Any experiment?
- How the conquer works? Does the model aware of the dependency among those sub functions?
- What is the cost of each part?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Some limitations have been discussed but it would be better to discuss more on the cost of the FUNCODER and the limitation of divide-and-conquer for hard problems.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and effort in reviewing our paper! These are very helpful suggestions and we address your questions here:
> **W1: The sections detailing the approach are somewhat brief, which leaves me wanting a deeper understanding of the full method, especially since the study focuses on employing a divide-and-conquer approach for code generation.**
Apologies for any confusion. Due to page constraints, we couldn't describe the complete divide-and-conquer process in detail in the main text. However, we supplemented the Implementation Details in Appendix A.1 and have respective prompts in Appendix C to help understand the model's operation. As described in Algorithm 1, FunCoder is a recursive process following a DFS pattern. We use square brackets [L1] below to denote the line number in Algorithm 1.
- FunCoder [L1], when solving each function $f$, first performs the *Divide* stage [L3-L9], where the LLM initially writes the function and identifies some potential sub-problems, represented as sub-function stubs (e.g., `def f(xs: list[int]) -> int`) [L3]. In this process, we identify the sub-problems of the current problem, thereby understanding the dependency relationship between functions.
- For each decomposed sub-problem $g_1, g_2, \ldots$, we recursively use FunCoder to obtain the final implementation $G_i$ for that sub-problem [L5-L8]. This $G_i$ shall replace the previously incomplete subfunction stub signature in the final program.
- FunCoder [L1], when solving each function $g_1$, …
- FunCoder [L1], when solving each function $g_2$, …
- Now that all sub-problems of $f$ are implemented, we move on to the *Conquer* stage [L11-13] to complete the larger problem. By combining the signature $f$ and the final implementations $G_i$ of sub-problems, we generate the complete implementation $F$ [L11] and return it [L13].
Let's describe how this algorithm works in detail by combining it with the example given in the lower half of Figure 1.
```py
[a.1] FunCoder(a)
│ [a.3] LLM(a) -> A, {b, c} # divide
├──[b.1] FunCoder(b)
│ │ [b.3] LLM(b) -> B, {d, e} # divide
│ ├──[d.1] FunCoder(d)
│ │ │ [d.3] LLM(d) -> D, {} # divide
│ │ └──[d.13] return D # nothing to conquer
│ ├──[e.1] FunCoder(e)
│ │ │ [e.3] LLM(e) -> E, {} # divide
│ │ └──[e.13] return E # nothing to conquer
│ │ [b.11] LLM(B, {D, E}) -> B* # conquer
│ └──[b.13] return B*
├──[c.1] FunCoder(c)
│ │ [c.3] LLM(c) -> C, {} # divide
│ └──[c.13] return C # nothing to conquer
│ [a.11] LLM(A, {B, C}) -> A* # conquer
└──[a.13] return A* # final result
```
We will add this content to the Appendix to provide a more detailed explanation of our method. Hope this addresses your concerns about our approach.
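For readers who prefer code to traces, the recursion can be condensed into a toy sketch, with the LLM mocked by a lookup table that reproduces the tree from Figure 1 (the mock and its string outputs are invented for illustration; they are not our prompts or real LLM calls):

```python
def funcoder(goal, llm_divide, llm_conquer):
    """Divide-and-conquer over function goals, following Algorithm 1.
    llm_divide(goal) -> (draft, sub_goals); llm_conquer(goal, subs) -> code."""
    draft, sub_goals = llm_divide(goal)            # Divide [L3]
    subs = [funcoder(g, llm_divide, llm_conquer)   # recurse, DFS [L5-L8]
            for g in sub_goals]
    if not subs:
        return draft                               # leaf: nothing to conquer
    return llm_conquer(goal, subs)                 # Conquer [L11-L13]

# Mock "LLM": a -> {b, c}, b -> {d, e}, as in Figure 1
TREE = {"a": ("A", ["b", "c"]), "b": ("B", ["d", "e"]),
        "c": ("C", []), "d": ("D", []), "e": ("E", [])}
divide = lambda g: TREE[g]
conquer = lambda g, subs: f"{TREE[g][0]}*({','.join(subs)})"

result = funcoder("a", divide, conquer)  # "A*(B*(D,E),C)"
```

The final string mirrors the trace above: leaves `D`, `E`, `C` are returned as-is, `b` is conquered into `B*(D,E)`, and `a` is conquered into `A*(B*(D,E),C)`.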
> **Q1: How the model decide the functions to be further divided or too simple? Any experiment?**
We rely on the knowledge of code and instruction following capabilities acquired by the LLM during its training. This enables the LLM to make approximate decisions on whether a function should be further decomposed. As stated in Appendix A.1, the LLM is prompted in the divide phase to implement the current function and simultaneously introduce new sub-functions if necessary.
As shown in Appendix C.2, we designed the *Divide* prompt with a 2-shot example. One example demonstrates that complex problems should be decomposed, while the other example shows that simple problems can be implemented without new sub-functions.
Furthermore, in Table 5, we observed that functions produced in the *Divide* stages are highly domain-specific. These sub-functions are likely to be generated based on knowledge from pre-training data, supporting how LLM can decompose tasks based on pretrained knowledge.
> **Q2: How the conquer works? Does the model aware of the dependency among those sub functions?**
Regarding the complete process of divide-and-conquer, why Conquer works, and whether the model is aware of function dependencies, we have briefly discussed this in response to Weakness 1. Here, we elaborate further:
During the *Divide* phase, we extract the hierarchical relationships between functions to construct a dependency tree. *Conquer* merges finished sub-problems, and this is entirely consistent with the principles of divide-and-conquer. In our task, *Conquer* provides the finalized sub-functions in the context and rewrites the parent function based on them. We will make this point clear in future versions.
Conquer is indispensable. In the *Divide* phase, the function is generated before the sub-problem definitions, so the details of the sub-problems are not yet clear at that point, and the parent function may invoke sub-functions incorrectly or from the wrong positions. It is therefore necessary to regenerate the function after all sub-functions are finished. In Table 3, we conducted an ablation study on the *Conquer* stage (Two-pass vs. One-pass) and empirically found that enabling Conquer significantly improves code generation performance.
> **Q3: What is the cost of each part? What about overall efficiency and resource utilization.**
In Appendix A.6, Table 11, we listed the token usage of gpt-3.5-turbo for both our method and the baselines. We have also provided further statistics on token usage in our General Response. Upon your request, we have further broken down the token usage for each stage of FunCoder in the following table. Hope this answers your questions.
| Part in FunCoder | mean tks | med tks |
| --- | --- | --- |
| TOTAL | 5402.0 | 5166.5 |
| Divide | 1241.0 | 1135.0 |
| Input generation | 930.9 | 861.5 |
| Conquer | 3230.2 | 3149.5 |
Our algorithm itself is very lightweight and does not consume much CPU or RAM resources. Resource consumption comes from calling the LLMs. For an overall analysis of token efficiency, please refer to the general response. We will include these in our next revision.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed rebuttal and the efforts made to clarify the cost analysis in your paper. However, I still have some concerns regarding the logic and assumptions underlying this analysis.
The analysis of token cost appears to assume an ideal scenario where no failures occur. Additionally, the new data presented in the rebuttal seems to mirror Table 3 of the paper(ablation study). We would like to see the analysis results of Table 1 or 2(compared to the SOTA).
In a naive program, the standard is a -> BC, and yours is A, [B, C] (follow your notations).
We assume the divide and conquer is perfectly gained, and the pass rate is just the success rate.
The best performance of FUNCODER is 94.5.
Then, your success rate to derive b->B and c->C should be increased to the square root of 0.945, which means 0.972.
When the program becomes much more complex, the requirement of success rate for each divided subprogram is further increased.
I think there should be some design and cost to increase the performance, even if the divided programs are simpler than the original ones.
However, from the rebuttal, it seems most parts depend on the LLM for program generation and divide-conquer, which means the error will increase when the program becomes more complex.
I am willing to consider increasing my score if we could address these questions.
---
Reply to Comment 1.1.1:
Comment: We thank you again for taking your time to read our responses and provide suggestions.
> **1. The analysis of token cost appears to assume an ideal scenario where no failures occur.**
Indeed, in the analysis in our general response, we simplified the situation and ignored the cases where the LLM makes a mistake. However, in the token cost results (Tables 3 and 11, and in our response) we have **always included** token usage for retries to ensure a fair comparison, and we argue that this situation may be trivial to consider.
Failures that cause retries rarely happen in FunCoder -- they occur only when no code can be extracted from the response's Markdown blocks, or when the extracted code is syntactically incorrect. We provide additional statistics in the table below, which shows a relatively low failure rate even on complex questions (MATH or xCodeEval).
| Dataset | Pass@1 | failed to parse / all LLM calls |
|---|---|---|
| HumanEval | 85.4 | 0.10% |
| MBPP | 78.5 | 0.31% |
| xCodeEval | 31.4 | 1.68% |
| MATH | 54.0 | 2.10% |
> **2. We would like to see the analysis results of Table 1 or 2 (compared to the SOTA).**
We report token consumption data for some of the fields in Table 1 and Table 4, regarding GPT-3.5 and SOTA methods, in the table below. Some of the cells are missing data since some methods were not open-source and did not report detailed token usage.
| Dataset | Method | Pass@1 | min tks | max tks | mean tks | med tks |
|---|---|---|---|---|---|---|
| HumanEval | Standard | 68.3 | 648 | 1477 | 886.7 | 861 |
| | CodeT | 81.1 | 2298 | 9645 | 4479.1 | 4166 |
| | Reflexion | 69.5 | 416 | 4906 | 1416.1 | 754 |
| | LDB (Reported prev SOTA) | 82.9 | - | - | ~23000 | - |
| | FUNCODER | 85.4 | 3015 | 13850 | 5402.0 | 5166 |
| xCodeEval | Standard | 20.2 | 1051 | 3343 | 1599.5 | 1530 |
| | CodeT (prev SOTA) | 23.2 | 2264 | 9245 | 3937.4 | 3733 |
| | Reflexion | 20.6 | 2977 | 1003222 | 401767.3 | 328591 |
| | FUNCODER | 31.4 | 4883 | 53225 | 10559.7 | 8927 |
| MATH | Standard | 62.2 | 551 | 5867 | 953.0 | 835 |
| | FUNCODER | 76.8 | 2622 | 30139 | 7075.5 | 5666.5 |
> **3. Then, your success rate to derive b->B and c->C should be increased to the square root of 0.945, which means 0.972. When the program becomes much more complex, the requirement of success rate for each divided subprogram is further increased.**
**Accumulated errors in FunCoder also occur in Standard**, where generating all functions (ABC) all at once would have a higher error rate than generating one at a time (A, B, C).
Note that this metaphor may not be exact, since the 94.5% accuracy is an average over all problems, and different problems are decomposed to different depths. However, we follow the metaphor to show why FunCoder works better than other methods. Here, the success rate of *Standard* on a single function is $\sqrt{82.9}=91.0$. On a harder dataset where problems are decomposed into 10 levels of functions on average, *Standard*'s overall accuracy would be $91.0^{10}=38.9$, while FunCoder would still keep $97.2^{10}=75.3$.
Results on the xCodeEval dataset show that FunCoder achieves greater improvements on harder problems than on simple ones. But of course, the analysis we just made was based on a simple assumption and does not fully reflect reality.
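The compounding arithmetic above can be checked numerically (a back-of-the-envelope sketch using the rounded per-function rates; the 10-level decomposition depth is hypothetical):

```python
# Rounded per-function success rates implied by the whole-program pass rates:
# sqrt(0.829) ~ 0.910 for Standard, sqrt(0.945) ~ 0.972 for FunCoder
per_fn = {"Standard": 0.910, "FunCoder": 0.972}

# Compound over a hypothetical 10-level decomposition
overall = {name: round(rate ** 10, 3) for name, rate in per_fn.items()}
print(overall)  # {'Standard': 0.389, 'FunCoder': 0.753}
```

The gap widens with depth: a small per-function advantage compounds into a large whole-program advantage on deeply decomposed problems.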
> **4. I think there should be some design and cost to increase the performance, even if the divided programs are simpler than the original ones.**
We designed FunCoder under the consideration of complex problem performance, and have made visible progress. Particularly, our method:
1. **Divide** dynamically decomposes problems into simpler subproblems, making them easier to solve. The Divide stage always considers just one layer of the problem at a time, thus keeping the model in context and reducing the ambiguity and complexity caused by extra-long code. For instance, the problem A-B-C-D requires all these functions to be in context in *Standard*, while ours only needs 2 for each of the 4 calls.
2. **Conquer** enables bottom-up generation through re-composing simple functions into a complex solution to a complex problem. This mitigates the issue where Divide cannot see the subfunctions while generating parent functions, making function dependencies more robust. (Note that LLMs are autoregressive models, so while *Standard* generates ABC, when A is being generated B,C aren't there yet. This could cause *Standard* to mis-invoke B or C from A in this implementation.)
3. **Functional consensus** verifies every function starting from the leaves. Through sampling and voting, incidental errors and cascading errors may be reduced and thus the accuracy could be improved. Before our method, self-test was widely used for verifying programs, and now FunCoder manages to achieve higher performance with the same level of token complexity.
We further note that Divide, Conquer and Consensus may be used together, and that using them in conjunction will further raise the accuracy on complex problems. | Summary: This paper presents Divide-and-Conquer Meets Consensus -- a prompting scheme for generating complex functional code. In contrast to planning ahead of time, the proposed technique performs planning in smaller steps by decomposing a coding task into smaller sub-tasks recursively and solving them when they are simple enough. The evaluation shows that the technique can significantly outperform prior arts and works for both proprietary models and smaller open models.
Strengths: 1. This paper targets the important problem of code generation for complex coding tasks which is right now the bottleneck of the code generation domain as simple coding benchmarks have been saturated.
2. The overall framework of the technique is novel and beautiful, borrowing insights from the classical concept of the divide-and-conquer principle.
3. This paper provides a number of interesting insights beyond the base experimental results, e.g., in self-testing it is challenging to obtain both valid code and valid tests.
Weaknesses: 1. There is a flaw in the design of functionality similarity. Authors claim in L104 that they sample inputs from the input domain of the function, namely D(f). First, it is non-trivial to infer D(f). While simple type analysis is applicable, oftentimes the pre-conditions of the function can go beyond type constraints. For example, the pre-condition asserts the input sequence is sorted when doing code generation for binary search. Second, it is challenging to accurately infer such pre-conditions, and inputs violating such pre-conditions often manifest undefined behaviors of function candidates, falsifying functionally-similar candidates. For example, in the EvalPlus paper, these conditions are manually specified to ensure the soundness of test input generation. In summary, the equivalence-modulo-inputs idea requires knowing the accurate input domain and designing corresponding input samplers, which are oversimplified in this paper.
2. Back to the high-level framework, it is unclear why functionality consensus would work in the first place. Functionality consensus selects the most common functor candidate, but commonality may not necessarily correlate with correctness in code generation. Correct samples can even be exceptional in sampling when the task is challenging to the LLM.
3. The diversity of studied models is a bit limited. It would be helpful to run Divide-Conquer on more open models such as StarCoder2.
Minor suggestions:
1. L97 "A program specifies its functionality (or behavior) through the control flow...": This is not accurate as control flow is just a partial representation of code semantics. For example, `a + b` and `a - b` share the same control flow (i.e., no control flow) but stand for completely different semantics.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Can you exemplify "cascading errors" in Section 2.2? Can you also better explain why sampling multiple functions helps mitigate such errors?
2. The technique is motivated to solve complicated coding tasks. I wonder if the selection of benchmarks can really exercise the core part of the technique, i.e., resolving complex code generation tasks.
3. When computing the token usage, did you include tokens of unused samples when computing "functionality similarity"? Can you also provide more statistics, such as the median token consumption, in case the average is biased by simple tasks? Overall, the presented token consumption in Table 3 is much smaller than I expected, given the large prompts shown in the Appendix (and these will be called multiple times).
4. How's the token usage compared to other baselines?
5. Is the technique single-turn or multi-turn when being implemented for evaluation?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Social impact wise this paper is fine. For technical limitations, they are reflected in the weaknesses section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your time and effort in reviewing our paper! We find your suggestions very helpful and we hereby address your questions:
> **W1: The equivalence-modulo-inputs idea requires knowing the accurate input domain and designing corresponding input samplers, which are oversimplified in this paper.**
It is indeed challenging to *ensure* that LLMs generate reliable and valid inputs. However, in order to verify code correctness using LLMs, there are typically only three major methods:
1. Predict bugs by just reading the code (e.g. self-refine).
2. Generate unit tests for self-testing (e.g. CodeT).
3. (Ours) Generate inputs and select the best program based on outputs of sampled programs.
In our comparisons, we focused more on the widely used unit-test method, which not only has to generate reliable inputs but also provide correct case outputs. If the generated output is incorrect, program verdicts will be deeply impacted. Experiment results in Table 3 empirically show a clear advantage of our method over unit tests.
> **W2: Functionality consensus selects the most common functor candidate, but commonality may not necessarily correlate with correctness in code generation. Correct samples can even be exceptional in sampling when the task is challenging to the LLM.**
Improving performance beyond pre-trained capabilities is hard. While it's quite challenging to prove it formally, our consensus still works better than clustering and other methods, as is shown in Table 3. We look forward to future work substantiating this finding.
Admittedly, commonality is less likely to boost performance on problems beyond the LLMs' knowledge, but it isn't necessarily outperformed by the *Standard* baseline. However, commonality empirically reduces incidental errors, as we exemplified in general response.
Refer to xCodeEval results in Table 1. Although our method yields little improvements on Expert-level problems where LLMs can almost never get right, it happened to perform quite well on Hard-level and easier problems. This can be similarly observed on MATH dataset.
> **W3: The diversity of studied models is a bit limited. It would be helpful to run on more open models such as StarCoder2.**
Thanks for your suggestion. We adapted our approach and found that FunCoder can also achieve good results with StarCoder2-15b, bringing its pass@1 of 59.8 on *Standard* to 78.7 with our method. More promising results can be found in our General Response.
> **W4: "A program specifies its functionality (or behavior) through the control flow..." is not accurate.**
Thank you for pointing out this mistake. Indeed, considering only the control flow of a program is incomplete when it comes to behavior. This was an oversight during our writing process. We'll correct this in our next revision as "the program's control flow and logic".
> **Q1: Can you exemplify "cascading errors" and explain why sampling multiple functions helps mitigate such errors?**
In this context, "cascading errors" refer to cases where an error in a sub-function causes the parent function to also fail. For example, a badly implemented 'cosine' function will cause everything depending on it to malfunction.
As discussed in W2, sampling implementations on a single function can reduce its incidental errors, so in decomposition it would also reduce overall (cascading) errors in the whole program.
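As a concrete illustration of why voting suppresses such incidental errors, here is a toy sketch of output-based consensus (exact-match voting as a simplification of the pairwise similarity scoring in the paper; the candidate functions are invented):

```python
from collections import Counter

def functional_consensus(candidates, inputs):
    """Select the candidate whose input->output behavior is most common."""
    def signature(fn):
        outs = []
        for args in inputs:
            try:
                outs.append(repr(fn(*args)))
            except Exception:
                outs.append("<error>")  # crashing candidates still get a signature
        return tuple(outs)
    sigs = [signature(fn) for fn in candidates]
    majority, _ = Counter(sigs).most_common(1)[0]
    return candidates[sigs.index(majority)]

# Three sampled "implementations" of absolute value; the last has a sign bug
cands = [lambda x: abs(x), lambda x: x if x >= 0 else -x, lambda x: x]
best = functional_consensus(cands, [(3,), (-2,), (0,)])
```

The two correct samples agree on all outputs and outvote the buggy one, so an incidental error in a single sample is filtered out; applied at every node of the function tree, this also curbs cascading errors.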
> **Q2: Can the selection of benchmarks really exercise the core part of solving complicated coding tasks?**
Following AlphaCode and CodeLLAMA's definition of 'complex' competition-level datasets, we used xCodeEval and MATH in our evaluations. Table 1, 4, 10 show that our method achieves significant improvements on these more difficult problems.
However, we fully agree with your concerns regarding complex problems. Due to the scope of our work, we haven't yet been able to test our performance on software development tasks, which often involve engineering details like changing requirements, code retrieval, and real-time debugging. Nevertheless, we believe that the idea of divide-and-conquer and consensus can be equally applied to such complex problems and represents a promising area for future research.
> **Q3a: When computing the token usage, did you include tokens of unused samples when computing "functionality similarity"?**
Yes, we included the token count for unused sampled functions, and we obtained token cost directly from OpenAI API's statistics. Hence the token count reported in our paper is the total token expenditure from all API calls throughout the process of solving a problem.
> **Q3b: Can you also provide more statistics such as the medium token consumption in case the average number is biased by simple tasks?**
Thanks for your suggestion. We've added substantially more statistics in our General Response and will include these data in our next revision.
> **Q3c: The token consumption is much smaller than I expected according to the large prompts (and these will be called multiple times).**
Thank you for pointing out the concern about token costs. This is because the OpenAI inference API charges prompt tokens only once per call, even when we use `n=k` sampling. We further discussed token costs in our general response and concluded that our method costs only $O(kN)$ tokens, the same as *Standard* sampling.
> **Q4: How's the token usage compared to other baselines?**
We list in Table 11 (Appendix A.6) the token usage for all other baselines. Note that some baselines are not open-sourced and the token usage details were not reported in the respective papers, so they are not included in the table.
> **Q5: Is the technique single-turn or multi-turn when being implemented for evaluation?**
We used single-turn chat completions. In each call we only include a common prefix (system prompt and few-shot examples) and one assistant question describing the current task. This helps reduce token costs, context lengths and prevents context contamination.
---
Rebuttal Comment 1.1:
Title: Thanks for the reply!
Comment: Overall I am satisfied with the authors' responses. Weaknesses 3 and 4 should be easily addressable and I look forward to seeing more models being compared. While weaknesses 1&2 are not resolved due to their challenges, I suggest at least discussing them as limitations of the technique in the revision and calling for further research. Specifically, I feel the concern of W1 is a bit outstanding. Probably the only way to address it right now is to add program contracts via human experts (for example, contracts are added manually in the EvalPlus paper for test-case reliability) or LLMs; some weak pre-condition is also better than nothing even if they cannot be strictly verified.
I'm happy to increase my rating to 6.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank you for your response and your rating update.
We fully agree with you that weaknesses 1 & 2 should at least be discussed in the limitations section, and we will add them in our next revision. We look forward to future research that investigates these weaknesses and, better yet, is accompanied by theoretical analysis. | Summary: The paper presents FunCoder, a novel coding framework designed to enhance code generation by incorporating a divide-and-conquer strategy with functional consensus. FunCoder addresses the limitations of existing methods that struggle with complex programming tasks by recursively breaking down problems into smaller sub-functions, which are then hierarchically organized and recomposed to achieve more complex goals. The functional consensus mechanism mitigates error propagation by selecting functions that exhibit similar behaviors. Experimental results show that FunCoder outperforms state-of-the-art methods, such as Parsel and Reflexion, on various benchmarks, including HumanEval, MBPP, and xCodeEval, and demonstrates its effectiveness across different language models like GPT-3.5 and GPT-4, as well as smaller models such as StableCode-3b. The framework's ability to dynamically decompose and compose functions enables superior performance in handling complex coding requirements and mathematical reasoning tasks.
Strengths: + The paper proposes FunCoder, a plan-and-solve coding LLM optimized with recursively divide-and-conquer. The design of FunCoder is generally reasonable and the methodologies are well explained and motivated.
+ Besides outperforming SOTA baselines such as Parsel and Reflexion on GPT-based models, it is impressive how significantly FunCoder improves small LLMs' performance.
+ The paper performs in-depth analysis and ablation study to illustrate FunCoder's effectiveness and justify different design choices.
Weaknesses: While FunCoder demonstrates impressive performance in handling well-defined programming challenges, there is a practical concern regarding the token length of the trajectory. The recursive decomposition of tasks into sub-functions inherently leads to the generation of numerous function headers, bodies, and documentation, which can significantly inflate the token count. As reported in Table-3, the average token length is ~5.5k, and I am curious about the median and the maximum tokens required to solve a HumanEval problem.
This becomes concerning when dealing with more intricate and interconnected coding requirements. As each layer of decomposition adds to the overall length, the token usage can escalate rapidly, leading to inefficiencies and increased computational costs. The empirical data shows that while FunCoder performs well on self-contained coding benchmarks, the token length could become a limiting factor for more complex problems, potentially offsetting the advantages gained through the divide-and-conquer approach. This exponential growth in token length necessitates careful consideration and optimization to ensure that FunCoder remains scalable and efficient for a broader range of coding tasks.
Technical Quality: 3
Clarity: 3
Questions for Authors: + Why do the authors report different methods for GPT-3.5 and GPT-4 in Table-1? Aren't these approaches generalizable to all GPT-based models through the API? I would suggest the authors add back the missing baselines for both models. A similar question applies to Table-4.
+ In Table-4, why do the authors choose CoT or self-refine as the main baselines while no longer comparing with Parsel?
+ Why do the authors choose not to disclose their code during the submission phase?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper discusses the limitation in Section 5.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your time and effort in reviewing our paper! These are very helpful suggestions and we address your concerns as following:
> **W1: The recursive decomposition of tasks into sub-functions inherently leads to the generation of numerous function headers, bodies, and documentations, which can significantly inflate the token count.**
Indeed, decomposing the original problem into multiple functions may increase token usage. However:
- Function headers and documentation don't occupy many tokens: they typically span only a few lines, far fewer than the function body, which may run to dozens of lines.
- Programs split into functions need not be longer than monolithic ones, since function re-use helps reduce redundant code.
- Our method does not force the LLM to decompose overly simple tasks into sub-functions. In fact, the standard method also generates sub-functions, and both methods produce code of similar length; our approach simply regenerates functions with fine-grained divide-and-conquer on top of that.
We would also add that Table 11 (L759) shows that although our token usage is not the lowest of all methods, it is modest given the state-of-the-art performance (e.g., LDB used 23K tokens to reach 82.9%, while ours reached 85.4% with 5.4K tokens).
> **W2: Missing the median and the maximum tokens required to solve the HumanEval problem.**
Thank you for pointing out the flaws in our initial statistical method. We reviewed Table 3, the ablation study of FunCoder on HumanEval with gpt-3.5-turbo, re-ran statistics on the original data and obtained the following results (you can find them in our General Response):
| Setting | Pass@1 | min tks | max tks | mean tks | med tks |
| --- | --- | --- | ---- | --- | --- |
| Standard | 68.3 | 648 | 1477 | 886.7 | 861.5 |
| One-pass | 72.6 (+4.6) | 826 | 3489 | 1233.7 | 1132.0 |
| Two-pass | 78.7 (+10.4) | 2197 | 8406 | 3343.2 | 3078.0 |
| Two-pass + ST@11 | 80.5 (+12.2) | 2967 | 13758 | 5408.3 | 5184.0 |
| FunCoder@5 | 83.5 (+15.2) | 2455 | 9432 | 4040.9 | 3800.0 |
| **FunCoder@11** | 85.4 (+17.1) | 3015 | 13850 | 5402.0 | 5166.5 |
> **W3: When tackling complex problems, the token consumption grows exponentially.**
We did not provide a token complexity analysis in the original paper, but our method actually costs only $O(N)$ tokens, scaling linearly with the size of the generated program. We discuss this proof in detail in the general response, and it will be added to the next revision of the paper.
> **Q1: Why baselines are different on GPT-3.5 and GPT-4?**
Some results are directly obtained (and cited) from the original papers, and we've marked them with an underline in Table 1. Some methods only reported results for GPT-4, and upon careful examination, results for other models were not mentioned in these papers. In Appendix A.2, we provided details for which results were derived from the original papers, and specified whichever methods were specifically tailored for code generation or mathematical reasoning tasks.
Some of the methods compared are not fully open-source or lack detailed instructions. Since we were unable to reproduce them consistently, we had to cite their reported numbers verbatim. We plan to add more experiments on this in the next revision.
> **Q2: In Table-4, why do the authors choose CoT or self-refine as the main baselines while no longer comparing with Parsel?**
Parsel did not provide prompts or code for tasks beyond code generation with HumanEval, and it is not designed specifically for mathematical reasoning. Therefore, we did not compare Parsel on the math tasks. Instead, we introduced baselines that were specifically designed for mathematical reasoning, which are more commonly used in that area, such as CoT, self-refine, and CR.
- Standard and CoT serve as text-based baselines for mathematical reasoning, compared with other methods that solve math problems through code generation.
- Self-Refine uses runtime feedback to fix code, serving as a baseline for comparison with our sampling-based approach.
- CR (Cumulative Reasoning) is another baseline for step-by-step problem solving and was the previous SOTA method for the MATH dataset. Unlike our approach, which employs a divide-and-conquer strategy, CR uses a bottom-up technique that progressively derives the final answer.
Details about these methods are provided in Appendix A.2 (L627).
> **Q3: Why do the authors choose not to disclose their code during the submission phase?**
We are still organizing our codebase during the paper submission phase. However, we are confident that we can prepare a minimal working code and accompanying bash scripts for replicating the experiments within the next few months. Meanwhile, as the principles of our method are relatively straightforward, to ensure reproducibility of our work, we have provided complete prompts verbatim so that just calling an API with these prompts would suffice. These prompts should work with most methods and models unless specifically noted.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response. It mostly makes sense, though I have concerns about why the minimal working code for replication requires **months** to release while the authors acknowledge the method is relatively straightforward. I would like to see a brief explanation of the most time-consuming parts of the code release and the main difficulty of implementing the current agentic system.
---
Reply to Comment 1.1.1:
Comment: We thank you for your response and again apologize for causing confusion regarding our code.
To put it honestly, we were quite busy with other things on our hands, so we didn't have enough time to polish our code. But we reassure you that the method **and the implementation** are still straightforward, and that replicating this work from scratch would need far less than two months.
As also discussed in the *Limitations and Social Impact* section, directly running code generated by LLMs can be risky. Although our current Code Interpreter sandbox works and protects the machine from many known attacks, we are actively looking for more robust means of protection, lest our code cause problems when it goes public.
Comments and suggestions on how the code should be open-sourced are also very welcome. | Summary: The author applies divide-and-conquer methods to code generation problems with Large Language Models (LLMs). A problem is divided into subproblems recursively; that is, the function that solves the problem is generated in a top-down manner. Given the parent function, the LLM is prompted to implement it using child functions, which are yet to be implemented. Then, it recursively implements the child functions. There are two key components that make this method work well:
1. A two-pass generation approach: When generating a function f, the first pass generates the function/plan without its child functions being implemented yet. The second pass occurs once the child functions are implemented, conditioning on them and regenerating the parent function. This overcomes potential problems that may arise when parent functions are generated before child functions are concretely implemented.
2. Consensus via multiple sampling: Each time a function is regenerated in the second pass, multiple programs are sampled, and a consensus of the samples is used to generate the final program. The consensus is determined by selecting the program that exhibits the most similar input-output behavior to other functions.
Experimental results show that it effectively improves performance on the HumanEval, MBPP, xCodeEval, and MATH benchmarks, with a remarkable 94.5% on HumanEval with GPT-4.
Strengths: * The method is conceptually very clean and effective empirically, as shown on various benchmarks including three code generation tasks and math-solving tasks.
* Extensive experiments and ablation studies are conducted to show the effectiveness of the method.
* The results also show that the method is effective not just on large closed-source models, but also on smaller open models such as LLaMA 3 8B and StableCode 3B. This contributes to the reproducibility of the method.
Weaknesses: * The idea of decomposing the problem into subproblems and solving them recursively with an LLM is not entirely new.
* The consensus method, which aims to select top programs from a set of samples using similarity, differs from but resembles the AlphaCode approach. It would be interesting to see a comparison with selecting the top programs by majority voting on the input/output, as in AlphaCode, instead of calculating similarity.
Technical Quality: 3
Clarity: 3
Questions for Authors: * On average, how many decomposition levels are used when solving the code generation tasks?
* Are the Standard and CodeT prompts used in the experiments the ones shown in Appendix C.1? Is this type of decomposition prompting also better for the Standard and CodeT methods?
* How does the consensus method compare to other self-consistency or AlphaCode-like majority clustering methods?
* In the abstract and line 167, it is said that GPT-4 performance on HumanEval is about 97%, but I can't find that number in Table 1. Is it mentioned elsewhere?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes, one limitation that it is not suitable for software engineering/ open coding problems is discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your time and effort in reviewing our paper! We find your suggestions very helpful and we address your questions as follows:
> **W1: The idea of decomposing the problem into subproblems and solving them recursively with an LLM is not entirely new.**
As discussed in *4. Related work*, there is indeed previous work that decomposes problems into subproblems and explores them through search. However, much of that work focuses on mathematical reasoning or multi-hop question answering and can be difficult to apply to code generation.
Specific to code generation, we innovatively leveraged the inherent advantages of functions, using function signatures to identify sub-problems. We further relate sub-problems via function declaration and invocation, thus decomposing complex tasks in a more natural way. We believe this form of task decomposition has a more direct advantage in code generation and better elicits the potential of code models.
> **W2: Comparison between consensus and clustering.**
We find your suggestion to be very constructive. We have made a comparison between these methods on `gpt-3.5-turbo` and have observed a considerable advantage in functional consensus:
| Method \ Pass@1 | HumanEval | MBPP | xCodeEval | MATH |
| --- | --- | --- | --- | --- |
| Standard | 68.3 | 72.0 | 20.2 | 34.6 |
| CodeT | 81.1 | 76.0 | 23.2 | n/a |
| Clustering | 75.0 | 74.0 | 22.0 | 35.0 |
| FunCoder (ours) | 85.4 | 78.5 | 31.4 | 54.0 |
*Clustering* from AlphaCode strictly classifies programs into mutually independent groups by exactly identical outputs. Our method instead measures output similarity over many input cases, which is a more permissive approach. Since inputs to a program may include edge cases, treating two programs as entirely distinct just because one output differs, without considering how similar they are overall, is sub-optimal. For example:
- Consider finding all square roots of a float. Generate 10 functions.
- 4 results know that a positive number has two square roots. 4 results know that a negative number has imaginary square roots. Only 2 results considered both points.
- Notice that the model gets 60% right for either point independently, but only 20% when the points are put together.
- Clustering would put the programs into three categories [4, 4, 2] based on output, and end up choosing a program that gets only one point right. With our functional similarity, the program that gets both points right, being similar in more aspects, will be favored.
Benefiting from *Clustering* often requires many more programs (up to 1M in the AlphaCode paper), whereas our method does well with just 11 sampled programs. Consensus is also exceptionally good at pass@1, while AlphaCode focused on pass@10 instead.
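To make the contrast concrete, here is a toy sketch (our own construction; the output strings are symbolic and only agreement patterns matter) of exact-output clustering versus similarity-based consensus on the square-root example above:

```python
from collections import Counter

# 10 sampled candidates evaluated on two inputs
# (a positive number and a negative number).
candidates = (
    [("two_roots", "no_roots")] * 4    # handles positives only
    + [("one_root", "imaginary")] * 4  # handles negatives only
    + [("two_roots", "imaginary")] * 2 # handles both cases correctly
)

def pick_by_clustering(cands):
    """AlphaCode-style: group by exactly identical outputs, take largest group."""
    return Counter(cands).most_common(1)[0][0]

def pick_by_consensus(cands):
    """Similarity-based: score each candidate by per-input agreement with all."""
    def score(c):
        return sum(sum(x == y for x, y in zip(c, other)) for other in cands)
    return max(cands, key=score)

print(pick_by_clustering(candidates))  # one of the size-4 groups wins
print(pick_by_consensus(candidates))   # the minority-but-correct candidate wins
```

Because agreement is counted per input rather than over whole output tuples, the candidate that is correct on both points accumulates the highest similarity score despite belonging to the smallest group.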
> **Q1: On average, how many decomposition levels are used when solving the code generation tasks?**
We analyzed the depth of functions from the results and show them in this table:
| | HumanEval | MBPP | xCodeEval | MATH |
| --- | --- | --- | --- | --- |
| (avg) Depth | 1.19 | 1.06 | 1.45 | 1.34 |
Note that since we rely on the intrinsic capability of the model to decide whether to decompose functions during generation, we cannot forcibly control the decomposition depth. The LLM tends to generate shallow code for simple problems and deeper code for harder ones, which is why the average depth is only slightly above 1.0. The method is nonetheless quite effective on complex problems, which exhibit deeper function depths.
> **Q2a: Are the Standard and CodeT prompts used in the experiments the ones shown in Appendix C.1?**
Yes, both Standard and CodeT used the prompts from Appendix C.1. We will clarify this point in a later revision and explicitly specify the prompts used for each method on HumanEval.
> **Q2b: Is this type of decomposition prompting also better for the Standard and CodeT methods?**
To ensure fairness between the decomposition prompt and the standard prompt, we used the same example in all prompts. The *Standard* method therefore also breaks the original problem into multiple functions, but it generates all the functions at once, whereas our *decomposition* prompting allows recursive sub-function generation. It is worth noting that the *decomposition* prompt always requires iterative decomposition, since it only produces sub-function stubs and would otherwise yield incomplete programs.
In Table 3, we conducted an ablation study on the effects of decomposition. "One-pass" refers to applying only the *Divide* stage (recursive decomposition) on top of *Standard*, and "Two-pass" refers to having both *Divide* and *Conquer* but without functional consensus applied. The results indicate that recursive decomposition alone still yields a considerable improvement.
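For clarity, the divide-then-conquer flow can be sketched as follows (a deliberate simplification of ours; `llm_draft`, `llm_refine`, and the majority-vote stand-in for functional consensus are hypothetical names, not the paper's actual API):

```python
def consensus(cands):
    # Stand-in for functional consensus: a plain majority vote.
    return max(set(cands), key=cands.count)

def generate(signature, llm_draft, llm_refine, k=5):
    # Pass 1 (Divide): draft a body that may call not-yet-written children.
    draft, child_sigs = llm_draft(signature)
    # Recursively finish each child function first.
    children = [generate(s, llm_draft, llm_refine, k) for s in child_sigs]
    # Pass 2 (Conquer): regenerate the parent conditioned on finished
    # children, sampling k candidates and keeping the most agreed-upon one.
    candidates = [llm_refine(signature, children) for _ in range(k)]
    return consensus(candidates), children
```

A usage sketch with dummy LLM calls: `generate("main", draft_fn, refine_fn)` returns the final implementation of `main` together with its recursively implemented children.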
> **Q3: How does the consensus method compare to other self-consistency or AlphaCode-like majority clustering methods?**
Thanks for your suggestion. Clustering in AlphaCode can be viewed as self-consistency over programs in code generation. So we re-ran the experiments with majority clustering mentioned in AlphaCode, and you can see the results in our response to Weakness 2.
> **Q4: In the abstract and line 167, it is said that GPT4 performance on HumanEval is about 97% but I can't find that number in Table 1. Is it mentioned elsewhere?**
We apologize for any confusion caused by our wording. What we intended to convey is that on the HumanEval dataset, StableCode-3b with FunCoder (81.0% pass@1) achieved 97.7% of the performance of GPT-4 with Standard (82.9% pass@1). This statement in the abstract aims to highlight that FunCoder can significantly enhance the code generation performance of small open-source models, bringing them close to advanced models, which we believe is a very interesting finding.
A similar point is also discussed in section 3.1.3 (L163-167). We apologize again for any misunderstanding caused by any ambiguity in our paper, and we shall clarify this in a future revision. | Rebuttal 1:
Rebuttal: We thank all the reviewers for taking the time and effort to review our paper, and we find these comments very constructive and inspiring. Here we address some of the most common concerns and questions, adding additional experiments and analyses as appropriate. We hope this information provides more insight into our methods, and we certainly welcome further discussion on these topics.
## 1. Results on More Models
We thank reviewer Ar6t for the suggestion regarding model diversity. We've been paying close attention to new cutting-edge models and have supplemented our experiments accordingly.
| Model | Method \ Pass@1 | HumanEval | MBPP | xCodeEval | MATH |
| --- | --- | --- | --- | --- | --- |
| GPT-4o mini | Standard | 87.2 | 76.0 | 35.4 | 51.4 |
| | FunCoder | 91.5 | 77.5 | 39.8 | 52.6 |
| Codestral 22B | Standard | 79.3 | 68.5 | 11.4 | 31.4 |
| | FunCoder | 89.0 | 74.5 | 22.0 | 36.8 |
| StarCoder2 15B | Standard | 59.8 | 64.5 | 7.2 | 21.0 |
| | FunCoder | 78.7 | 70.0 | 11.6 | 28.8 |
## 2. Analysis of token cost
### 2.1 Example
We use the example from Figure 1, where the final program consists of 5 functions A[B[D,E],C], and A serves as the entry to the program. We denote $N=A+B+C+D+E$ as the token complexity of this task. The order in which calls are executed is put in a pair of parentheses before each line in the following examples.
**Standard:** Call once only. Completes the given function.
```
(1) a -> ABCDE
input tokens = a
output tokens = A+B+C+D+E
overall = O(N)
```
**FunCoder/Divide:** In each step of the *Divide* stage the to-be-implemented function will serve as the context. The function will be implemented and sub-function stubs will be declared.
```
(1) a -> Abc
(2) b -> Bde
(3) d -> D
(4) e -> E
(6) c -> C
input tokens = a+b+c+d+e
output tokens = A+b+B+c+C+d+D+e+E < 2N
overall = O(N)
```
**FunCoder/Conquer:** Here the context includes the current function's definition and finalized implementations of sub-functions. The output is the re-implemented current function.
```
(5) bDE -> B
(7) aBC -> A
input tokens = a+b+B+C+D+E < 2N
output tokens = A+B
```
**FunCoder/Consensus:** When sampling (`n=k`) is enabled in the bottom-up process, the *Conquer* stage automatically includes 'consensus'. Here input tokens are still counted once, while output tokens are counted k times.
```
(3) d -> kD
(4) e -> kE
(5) bDE -> kB
(6) c -> kC
(7) aBC -> kA
input tokens = a+b+B+c+C+d+D+e+E < 2N
output tokens = kA+kB+kC+kD+kE = kN
overall = O(kN)
```
**Conclusion:** The token costs of FunCoder and Standard are both worst-case $O(N)$, scaling linearly with the number of final output tokens (i.e., the problem's inherent complexity). Even when sampling is applied, the complexity $O(kN)$ is still on par with other methods (e.g., self-consistency, CodeT) where sampling is also enabled.
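The accounting above can be double-checked numerically with made-up token lengths (the numbers below are ours, purely illustrative; lowercase names are stubs/signatures, uppercase names are full implementations):

```python
# Made-up token lengths for the Figure-1 example A[B[D,E],C].
stub = {"a": 10, "b": 8, "c": 6, "d": 5, "e": 5}
body = {"A": 60, "B": 50, "C": 30, "D": 20, "E": 20}
N = sum(body.values())  # the program's inherent token complexity

# Divide: (1) a->Abc, (2) b->Bde, (3) d->D, (4) e->E, (6) c->C
divide_in = sum(stub.values())  # a+b+c+d+e
divide_out = sum(body.values()) + stub["b"] + stub["c"] + stub["d"] + stub["e"]
assert divide_out < 2 * N  # A+b+B+c+C+d+D+e+E < 2N

# Conquer: (5) bDE->B, (7) aBC->A
conquer_in = stub["b"] + body["D"] + body["E"] + stub["a"] + body["B"] + body["C"]
conquer_out = body["A"] + body["B"]
assert conquer_in < 2 * N  # a+b+B+C+D+E < 2N

total = divide_in + divide_out + conquer_in + conquer_out
assert total <= 6 * N  # within the 6N worst-case bound; sampling multiplies
print(total)           # only the output terms by k, keeping the cost O(kN)
```

Any choice of stub/body lengths with stubs shorter than bodies satisfies the same bounds, which is what makes the worst-case argument go through.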
### 2.2 Complexity of token-cost
We explain that the worst-case token cost of our method is $O(kN)$. For the sake of simplicity, consider the case without output sampling:
- Suppose that the program will have $N$ tokens.
- We ignore the tokens involved with prompting since they are generally proportional to the number of LLM calls.
- The naive 'Standard' method should generate exactly $N$ tokens.
- FunCoder goes through the *Divide* stage and the *Conquer* stage for each of the functions. Without loss of generality,
- Based on the current function, *Divide* generates an implementation of itself and stubs for sub-functions. In this stage, each function would appear at most once in input and twice in output. All *Divide* stages consume no more than $3N$ tokens.
- Recursively generate all sub-functions, and
- *Conquer* regenerates the parent function based on its stub and all finalized sub-functions. Here each function will appear at most twice in input and exactly once in output. All *Conquer* stages shall consume at most $3N$ tokens.
So FunCoder requires no more than $6N$ tokens of input and output combined, making its token consumption $O(N)$ even in the worst case.
We further argue that even if output sampling is enabled (`n=k`), this complexity will still be a linear $O(kN)$. This shows that our token complexity is consistent with other sampling-enabled methods like Standard + CodeT.
### 2.3 Detailed result of token cost distribution
We thank reviewer LV7h and s7Dq for their suggestions on more comprehensive statistics regarding token consumption. We have added minimum-maximum values, median and distribution to the statistics, included in the table below. This further supports the point that our method merely applies a constant factor to the token cost.
| Setting | Pass@1 | min tks | max tks | mean tks | med tks |
| --- | --- | --- | ---- | --- | --- |
| Standard | 68.3 | 648 | 1477 | 886.7 | 861.5 |
| One-pass | 72.6 (+4.6) | 826 | 3489 | 1233.7 | 1132.0 |
| Two-pass | 78.7 (+10.4) | 2197 | 8406 | 3343.2 | 3078.0 |
| Two-pass + ST@11 | 80.5 (+12.2) | 2967 | 13758 | 5408.3 | 5184.0 |
| FunCoder@5 | 83.5 (+15.2) | 2455 | 9432 | 4040.9 | 3800.0 |
| **FunCoder@11** | 85.4 (+17.1) | 3015 | 13850 | 5402.0 | 5166.5 |
These results will be included in the next revision of our paper.
## 3. Exemplify why Consensus Works
We thank reviewers 8URc and Ar6t for pointing this out and would like to elaborate further:
- Consider finding all square roots of a float. Generate 10 functions.
- 4 results know that a positive number has two square roots. 4 results know that a negative number has imaginary square roots. Only 2 results considered both points.
- Notice that the model gets 60% right for either point independently, but only 20% when the points are put together.
- With our function similarity, the solution which is on-more-aspects similar (and happens to be correct) will be favored.
Through this example, we illustrate that Functional Consensus has the potential to identify correct samples even when they are in the minority, outperforming other methods such as self-consistency or clustering. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Depth Anything V2 | Accept (poster) | Summary: The paper tries to answer two important questions in current monocular depth estimation: 1. whether generative-based models can generate more fine-grained depth than discriminative models. 2. The effectiveness of synthetic data and large-scale pseudo data. The results show that discriminative models can also generate fine-grained depth estimation when trained with synthetic data and gradient matching loss. The paper provides several depth estimation models with different sizes, which may promote various downstream applications.
Strengths: 1. The paper proposes to enhance the performance of the existing discriminative depth estimation model, DepthAnything, with synthetic datasets and large-scale pseudo labels, which is simple yet effective.
2. The experiments are extensive and the writing is good.
Weaknesses: 1. How do you define the fine-grained depth quantitatively? There seem to be no concrete metrics to evaluate it, except qualitative visualization in the paper. So it is still hard to fairly compare the fine-grained depth with the generative-based models.
2. The pseudo labels can only be relative depth, which may not be directly usable for metric depth estimation.
3. There are no dataset ablation experiments to verify the necessity of using the total 62M pseudo labels in the paper. If one randomly samples a subset from the pseudo-labeled real images, e.g., 10% of the data from all 8 real datasets, what is the performance of the student model? Is the 62M data redundant or not?
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What does "+unlabeled real image" mean in Figure 6?
2. What is the backbone in Table 11?
3. In Section C.10, the authors find that synthetic data is important for fine-grained predictions, I am curious about the differences in quantitative results when training the teacher model with and without real labeled images.
4. Can 62M pseudo labels improve the performance of the teacher model (ViT-Giant backbone)?
5. For metric depth estimation, the authors fine-tuned the model from pre-trained relative depth estimation, what is the performance when fine-tuning the model from the original DINOv2 pre-trained backbone? (e.g. using the same ViT-large backbone with different weight initialization)
6. What is the performance of the proposed metric depth estimation model compared to recent SOTA models, e.g. Unidepth and Metric3Dv2?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: see weakness and questions
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Quantitative comparison on depth sharpness.**
Quantitatively comparing different depth models in terms of sharpness is not trivial, mainly for two reasons: 1) the lack of well-defined metrics for sharpness, and 2) the absence of *diverse, real, and fine-grained* benchmarks to evaluate it. To better address your concerns, we carefully selected 140 annotated pairs that include thin objects or lie near object boundaries from our diverse DA-2K benchmark. On these fine-grained, challenging depth pairs, we achieve a superior accuracy of 98.6%, compared to Marigold's 88.6%.
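For reference, pair-based accuracy over such annotated pairs can be computed as follows (a hypothetical sketch; the point format and the smaller-depth-means-closer convention are our assumptions, not the benchmark's specification):

```python
import numpy as np

def pair_accuracy(depth_map: np.ndarray, pairs) -> float:
    """pairs: iterable of ((y1, x1), (y2, x2)), where point 1 is annotated
    as closer. A prediction is correct when it orders the two pixels the
    same way as the annotation (here, smaller depth = closer)."""
    correct = sum(depth_map[p1] < depth_map[p2] for p1, p2 in pairs)
    return correct / len(pairs)
```

Because the score only depends on the relative ordering of two pixels, it is invariant to the scale and shift ambiguity of relative depth predictions.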
**Q2: Pseudo label of relative depth may not be effectively used for metric depth estimation.**
In this work, we primarily focus on relative depth estimation, where we can fully unlock the strengths of large-scale unlabeled data. In comparison, obtaining pseudo metric depth for in-the-wild images without additional information is almost infeasible. Relative depth is also beneficial to metric depth. As shown in Table 4, a sufficiently pre-trained relative depth model can be quickly adapted to metric depth estimation with minimal fine-tuning (~1 hour on NYU-D and KITTI). Besides, pseudo labels of relative depth can serve as an auxiliary optimization target when jointly trained with the main metric depth estimation task. Using an auxiliary head to learn pseudo relative depth from large-scale unlabeled data can enhance the shared encoder's capabilities, benefiting the primary metric depth estimation task.
**Q3: Whether the total 62M unlabeled images are redundant.**
Thank you for your insightful question. Honestly, during the development of V2, we have also explored reducing the scale of 62M unlabeled images due to the almost unaffordable computational cost. We performed the following attempts:
- Randomly selecting 50% unlabeled images from each unlabeled set (similar to the setting you mentioned).
- Measuring the hardness of each unlabeled sample based on the agreement between the Depth Anything V1 and V2 predictions, *i.e.*, the higher the agreement, the easier the image. Based on the sorted hardness, we select 1) top 50% images, 2) medium 50% images, or 3) bottom 50% images, from each unlabeled set.
Our comprehensive experiments reveal that all of these reduction strategies (the random one and the three hardness-based ones) result in performance drops of 0.5%-1% on all standard benchmarks and 1%-2% on our proposed DA-2K benchmark. Therefore, we conclude that the full set of 62M unlabeled images is crucial for producing the most capable model.
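As an illustration, the agreement-based hardness ranking described above could be sketched as follows (the scoring function and data layout are simplified assumptions for illustration, not the authors' actual implementation):

```python
def rank_by_hardness(preds_a, preds_b):
    # Score each unlabeled image by the disagreement between two models'
    # per-pixel depth predictions (mean absolute difference); higher
    # disagreement means a "harder" sample. Inputs are lists of 2D maps.
    def disagreement(a, b):
        diffs = [abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb)]
        return sum(diffs) / len(diffs)

    scores = [disagreement(a, b) for a, b in zip(preds_a, preds_b)]
    # Indices sorted from easiest (highest agreement) to hardest.
    return sorted(range(len(scores)), key=lambda i: scores[i])
```

With such a ranking, the top/medium/bottom 50% splits correspond simply to slicing the sorted index list.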
**Q4: Meaning of "+unlabeled real images" in Figure 6.**
The main figure in Figure 6 illustrates the failure cases if the teacher model is purely trained on synthetic images. The upper-right subfigures labeled "+unlabeled real images" show the improved predictions of the model after it is re-trained on large-scale pseudo-labeled real images. We will clarify it in our final version.
**Q5: The backbone used in Table 11.**
The backbone used is ViT-Small. We will add this detail in our final version.
**Q6: Training the teacher model with and without real labeled images.**
In our early attempts, we tried combining real labeled images (e.g., Cityscapes, DIML) to train our teacher model. This brings a 0.3% improvement on the NYU-D and KITTI datasets. However, we observed a significant degradation in depth sharpness and robustness to transparent/reflective surfaces due to flawed GT labels in the real datasets. Replacing these low-quality labels with our re-annotated pseudo labels provides a similar improvement (~0.5%) when the re-annotated images closely match the test domains (*e.g.*, DIML for NYU-D). However, to maintain model universality and avoid overfitting to specific test metrics, we opted to use only the eight most representative large-scale unlabeled datasets.
**Q7: Whether the 62M pseudo labels can improve ViT-Giant-based teacher model.**
For standard benchmarks' metrics (*e.g.*, NYU-D, KITTI), we find the 62M pseudo-labeled images are not necessary to the ViT-Giant-based teacher model. As we have repetitively mentioned in the paper, this is mainly due to the label noise in these existing test benchmarks (Figure 8). In comparison, on our proposed high-quality DA-2K benchmark, these additional images can further improve the teacher's performance by 1%. Moreover, these pseudo-labeled real images enhance the model's robustness in challenging cases, as demonstrated in Figure 6 (*i.e.*, sky and dark person).
**Q8: The performance of fine-tuning original DINOv2 weights for metric depth estimation.**
This experiment is ablated in Depth Anything V1. Fine-tuning DINOv2 on NYU-D and KITTI achieves $\delta_1$ scores of 0.973 and 0.971, respectively. In contrast, our fine-tuned Depth Anything V2 obtains 0.984 and 0.983 results on the two datasets, demonstrating significantly better performance than the original DINOv2.
**Q9: Comparison with metric depth models UniDepth and Metric3Dv2.**
Thank you for pointing out these two competitive methods. We did not compare with UniDepth and Metric3Dv2 mainly because they were released within two months before our submission. According to the [NeurIPS policy](https://neurips.cc/Conferences/2024/PaperInformation/NeurIPS-FAQ), we are considered concurrent works and are not expected to compare with them. Briefly compared here: on NYU-D, our fine-tuned metric depth model achieves the same $\delta_1$ result as UniDepth and is 0.5% lower than Metric3Dv2. It is worth noting that we require significantly fewer labeled images than them (ours: 0.6M *vs.* theirs: 16M and 3M). Their training sets already cover many similar scenes (*e.g.*, Taskonomy, ScanNet) as NYUv2, leading to better transfer results. More importantly, our focus is *relative* depth rather than *metric* depth. Our relative depth model performs much more robustly, as shown in the PDF. Quantitatively, on our proposed DA-2K benchmark containing mostly in-the-wild images, we achieve over 10% higher accuracy than them.
---
Rebuttal Comment 1.1:
Title: Thanks for your feedback
Comment: Thank you for the feedback. Similar to Reviewer RjZq, could you provide a more detailed evaluation comparing the relative depth metrics of existing metric depth models, such as Metric3D, Metric3D v2, and Uni-Depth, against challenging indoor and outdoor benchmarks? Specifically, NYU-D is known for having noisy labels, which raises a concern. It appears that metric depth models trained on large real-world datasets may already exhibit strong generalization performance across diverse scenes, suggesting that pseudo-labeling might not be essential. Consequently, the main benefit of the pseudo-labeling approach proposed in this paper seems to be in achieving more detailed relative depth maps and cannot be applied to improve metric depth estimation directly since pseudo-labels lack scale information.
---
Reply to Comment 1.1.1:
Title: Thank you for further feedback
Comment: To verify the relative-depth generalization ability of existing metric depth models, we have evaluated UniDepth and Metric3D (v1 & v2) on our proposed challenging and precise DA-2K benchmark, which encompasses eight diverse scenarios (*e.g.*, indoor, outdoor, adverse style, see Figure 9(b) for details). Results are summarized below.
| Method | Indoor | Outdoor | Average of 8 scenarios |
| :---- | :----: | :----: | :----: |
| UniDepth | 80.4 | 84.2 | 83.1 |
| Metric3Dv1 | 83.5 | 86.8 | 83.7 |
| Metric3Dv2 | 85.7 | 88.1 | 86.2 |
| **Depth Anything V2 (Ours)**| **89.8** | **95.1** | **94.8** |
Moreover, in Table D.1 of our appendix, we have demonstrated the indispensable role of large-scale unlabeled images. Some of the results are reproduced here:
| Using unlabeled data? | Indoor | Outdoor | Average of 8 scenarios |
| :---- | :----: | :----: | :----: |
| No | 85.3 | 92.9 | 91.4 |
| **Yes** | **89.8 (+4.5)** | **95.1 (+2.2)** | **94.8 (+3.4)** |
---
Rebuttal 2:
Title: Thanks for further response and the implementation of gradient matching loss
Comment: Thank you for acknowledging our better performance on the relative depth metrics.
Regarding the gradient matching loss, it was originally proposed by MiDaS. We have provided its formulation in the one-page PDF, and our implementation is adapted from MiDaS, which can be found here: https://gist.github.com/dvdhfnr/732c26b61a0e63a0abc8a5d769dbebd0#file-midas_loss-py-L93-L113. The ground-truth label in this gradient matching function is in the disparity space, obtained by inverting the depth label (*i.e.*, 1 / depth) and min-max normalizing it to [0, 1].
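For concreteness, the disparity-space target construction described here, together with a single-scale version of the gradient matching term, might be sketched as below (a simplified illustration only; the actual MiDaS loss is multi-scale and applies validity masks):

```python
def depth_to_normalized_disparity(depth, eps=1e-6):
    # Invert metric depth to disparity (1 / depth), then min-max
    # normalize to [0, 1], as described for the ground-truth target.
    disp = [[1.0 / max(d, eps) for d in row] for row in depth]
    flat = [v for row in disp for v in row]
    lo, hi = min(flat), max(flat)
    scale = max(hi - lo, eps)
    return [[(v - lo) / scale for v in row] for row in disp]

def gradient_matching_loss(pred, target):
    # Single-scale gradient matching: mean absolute spatial gradient of
    # the prediction residual (MiDaS applies this over multiple scales).
    h, w = len(pred), len(pred[0])
    diff = [[pred[i][j] - target[i][j] for j in range(w)] for i in range(h)]
    gx = [abs(diff[i][j + 1] - diff[i][j]) for i in range(h) for j in range(w - 1)]
    gy = [abs(diff[i + 1][j] - diff[i][j]) for i in range(h - 1) for j in range(w)]
    return (sum(gx) + sum(gy)) / max(len(gx) + len(gy), 1)
```

Note that a perfect prediction yields a zero gradient matching value, since the residual map is constant.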
---
Rebuttal 3:
Comment: Dear Reviewer Bw2h, as the author-reviewer rebuttal deadline approaches, we want to follow up on our recent response to your additional question regarding our gradient matching loss. We’ve provided the [source code](https://gist.github.com/dvdhfnr/732c26b61a0e63a0abc8a5d769dbebd0#file-midas_loss-py-L93-L113) and explanations above for the detailed implementation of this loss.
If you have any further comments or need additional clarification, we would be more than happy to assist. If our response has satisfactorily addressed your concerns, we kindly ask if you might consider revisiting your initial score.
Thank you for your precious time and consideration. | Summary: Depth Anything V2 is an advanced monocular depth estimation model that improves upon its predecessor by utilizing synthetic images, enhancing the teacher model's capacity, and leveraging large-scale pseudo-labeled real images for training. This approach results in significantly faster and more accurate depth predictions compared to models based on Stable Diffusion. This model also offers a range of model sizes to support various applications and demonstrates strong generalization capabilities, which are further fine-tuned with metric depth labels. Additionally, a versatile evaluation benchmark, DA-2K, has been developed to address the limited diversity and noise in current test sets, facilitating more accurate future research.
Strengths: The paper offers several noteworthy advantages. Firstly, it provides a comprehensive analysis of recent depth foundation models, detailing their features, strengths, and weaknesses. The comparison between generative and discriminative models in Table 1 is particularly insightful. Secondly, the approach is novel, utilizing pseudo-labeled real images to harness the advantages of both synthetic and real datasets. This method is impressive and appears beneficial for scaling up foundation models, similar to SAM. Thirdly, the paper demonstrates strong generalization performance, as evidenced by various tables and figures. The real-time performance showcased in the CVPR conference demo was particularly impressive. Lastly, the extensive additional experiments and analyses in the appendix offer valuable insights, supporting the validity of the proposed methods and experimental protocols.
Weaknesses: Data Volume and Performance Relationship: According to Table 10, there seems to be a linear relationship between the amount of data and performance. It would be interesting to see how the model performs with even more data added.
Metric Depth Estimation Comparison: Table 4 shows that the results of Metric Depth Estimation are similar to or worse than other Monocular Depth Estimation models (UniDepth[1], Metric 3D[2]). Additional analysis on this would be beneficial.
##### [1] UniDepth: Universal Monocular Metric Depth Estimation
##### [2] Metric3D v2: A Versatile Monocular Geometric Foundation Model for Zero-shot Metric Depth and Surface Normal Estimation
Technical Quality: 4
Clarity: 4
Questions for Authors: These points are included in the Weakness.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The limitations of Depth Anything V2 include a dependency on data volume for performance, as suggested by Table 10, which raises questions about scalability and diminishing returns with more data. Additionally, Table 4 shows that its Metric Depth Estimation results are comparable to or worse than other models like UniDepth and Metric 3D v2, indicating that while the model excels in some areas, it does not consistently outperform competitors across all metrics.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: The performance if even more data is added.**
Thank you for raising this insightful question. Currently, we have used 62M unlabeled images from eight *highly curated* public datasets, *e.g.*, SA-1B and Open Images. As you mentioned, based on the positive scaling curve of experiments in Table 10, we believe that if more *curated* data is introduced, the model's generalization ability will be further enhanced. Unfortunately, as far as we know, **there are no ***curated*** public datasets available that can further ***significantly*** scale up the existing 62M images (***e.g.***, scaling up to 100M)**. Some web-scale datasets, like LAION (no longer public due to privacy concerns) and DataComp, exhibit poorer image quality and imbalanced visual concepts, which have been shown to be even detrimental to model learning [1]. Therefore, at this stage, it is challenging to further scale up our unlabeled pool from existing public sources. We will leave this interesting topic for future work. We may first curate higher-quality raw images from web-scale billion-level datasets similar to [1], and then pseudo-label them with precise depth annotations for our models to learn.
On the other hand, although we cannot further *significantly* scale up the unlabeled data, **it is feasible to involve a certain amount of unlabeled images from ***specific domains*** to enhance model performance on these targeted domains**. For example, we have tried adding the NYU-D or KITTI *raw unlabeled* training images (~20K) to our large-scale unlabeled pool. We found that the test results on the two datasets are both boosted by nearly 0.5% (the baseline results are already very competitive). These results further demonstrate the importance of narrowing the domain shift between training and test data. This observation aligns with our motivation in the paper to incorporate unlabeled real images to mitigate the disadvantages of synthetic images (*i.e.*, distribution shift and poor diversity) while amplifying their strengths (*i.e.*, high precision).
[1] Automatic Data Curation for Self-Supervised Learning: A Clustering-Based Approach, arXiv 2024.
**Q2: Comparison with UniDepth and Metric3Dv2.**
Thank you for pointing out these two competitive methods. We did not compare with UniDepth and Metric3Dv2 mainly because they were released within two months before our submission. According to the [NeurIPS policy](https://neurips.cc/Conferences/2024/PaperInformation/NeurIPS-FAQ), we are considered concurrent works and are not expected to compare with them. Briefly compared here: on NYU-D, our fine-tuned metric depth model achieves the same $\delta_1$ result as UniDepth and is 0.5% lower than Metric3Dv2. It is worth noting that we require significantly fewer labeled images than them (ours: 0.6M *vs.* theirs: 16M and 3M). Their training sets already cover many similar scenes (*e.g.*, Taskonomy, ScanNet) as NYUv2, leading to better transfer results. More importantly, our focus is *relative* depth rather than *metric* depth. Our relative depth model performs much more robustly, as shown in the PDF. Quantitatively, on our proposed DA-2K benchmark containing mostly in-the-wild images, we achieve over 10% higher accuracy than them.
---
Rebuttal Comment 1.1:
Comment: My questions have been fully addressed, and I am satisfied with the response. I would like to express my gratitude to the authors for their thorough and thoughtful answers. I will maintain the decision to accept. | Summary: The authors introduce Depth Anything V2, a powerful monocular depth estimation model. This model relies entirely on synthetic data to train a teacher depth estimation model, which is then used to generate pseudo-labeled real images. These pseudo-labeled images are subsequently fed into the training pipeline. Depth Anything V2 achieves state-of-the-art results in both quantitative and qualitative evaluations. Additionally, the authors offer models in various scales, accommodating different scenarios. To address the limitations of existing monocular depth estimation benchmarks, the authors also provide a new benchmark with precise annotations and diverse scenes, facilitating future research. However, compared to Depth Anything V1 and other affine-invariant depth methods, the novelty is limited.
Strengths: 1. The authors dedicate considerable space to discussing the choice of datasets, highlighting the disadvantages of labeled data, the advantages and limitations of synthetic images, and the role of large-scale unlabeled real images. This provides a clear rationale for the proposed pipeline.
2. A new benchmark is introduced to address the limitations of existing benchmarks. This benchmark encompasses a wide variety of scenarios and includes comprehensive, high-resolution depth estimation results.
3. The results are impressive. The method demonstrates significant improvements in zero-shot relative depth estimation, particularly on the proposed DA-2K benchmark. The qualitative results also reveal the fine-grained detail of the predictions.
Weaknesses: 1. Lack of Novelty: The article primarily focuses on analyzing the choice between synthetic and real data, while the pipeline remains unchanged compared to the original Depth Anything V1. V2 uses more high-quality data to achieve better performance, but it offers no new methodological contributions or insights.
2. Lack of Detail: Although some methods may be well-known within the community, it would be beneficial for the authors to elaborate more on the description of the pipeline and the formulation of the loss function.
3. More Comparisons Required: In Table 2, the authors argue that the method predicts affine-invariant inverse depth and compare it with Depth Anything V1 and MiDaS V3.1. MiDaS is not an up-to-date SOTA method; the comparison should include more recent works, such as UniDepth, ZoeDepth, Marigold, Metric3D, and Metric3Dv2. Similarly, in Table 4, which compares metric depth methods, UniDepth and Metric3Dv2 achieve state-of-the-art performance on these two benchmarks, but they are not listed in the tables.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Why can the proposed method be compared with other methods on the proposed dataset, but not on other zero-shot relative depth estimation datasets?
2. The performance of the pipeline seems to be limited by the performance of the teacher model, as the training data for the student models is provided by the teacher model. Why, then, do some student models outperform the teacher model in Table 5?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The paper does not present limitations in detail.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Our contributions and novel insights.**
We do not position the pipeline as our contribution. There are many dimensions to measuring a paper's contributions. From the very start of our paper (L1-3, L41-45), we emphasize that, *instead of proposing a new module or pipeline, this work is centered on the **data perspective** and aims to reveal crucial insights:*
- **[Insight on flawed training data]** Efficient discriminative models can also produce fine-grained depth predictions. Heavy diffusion models are not necessary. *Replacing flawed real images with precise synthetic images* is the key to the prediction sharpness. There is NO prior work that achieves both fine-grained predictions and ultra-efficient inference (>10x faster than SD-based models). So we believe this insight is critical and worth presenting to our community.
- **[Insight on scaling up teacher model and the *new* role of unlabeled real images]** We reveal that, however, *it is non-trivial to leverage synthetic images* (Figure 5, 6, Table 13). To overcome the disadvantages (distribution shift and poor diversity) of synthetic images and amplify their strengths (preciseness), we present a simple solution: use the most capable DINOv2-Giant-based teacher model to learn from precise synthetic images first, and then pseudo-label diverse real images for students to learn from. *This pipeline is well-motivated by the carefully studied properties of synthetic images.*
- **[Insight on flawed evaluation benchmarks & our contribution: DA-2K]** We point out the drawbacks of current test benchmarks: noisy annotation, poor diversity, and low resolution. In response, we construct a versatile benchmark DA-2K, with precise annotations, rich diversity, and high-resolution images. DA-2K is a valuable supplement to existing benchmarks.
All these novel insights combined contribute to a more capable Depth Anything V2 for our community. *Our data-centric contribution is acknowledged by Reviewer rTF8 and fDqs.* We politely hope you can reconsider our work's contributions and insights from the data perspective. Thank you.
**Q2: More details on the pipeline and loss function.**
Thank you for your kind reminder. We will detail them in our final version. For now, we provide a preview in the PDF.
**Q3: Comparison with more methods on relative and metric depth estimation.**
Regarding relative depth, we only compared with MiDaS and Depth Anything V1 because we all predict the *inverse* depth (we prefer inverse depth due to its [numerical stability](https://github.com/isl-org/MiDaS/issues/21)), while Marigold produces the depth *without inversion*. It will cause some evaluation noise when converting the two depths to the same space, mainly arising from the [clipping practice](https://github.com/prs-eth/Marigold/blob/f74115261b67b59fb536994d0413f64d69af65c5/eval.py#L198-L200). However, to address your concerns, we further compare our method with Marigold. Our method achieves much higher $\delta_1$ scores than Marigold on KITTI (94.4% *vs.* 91.6%) and NYU-D (98.0% *vs.* 96.4%).
Regarding metric depth, we have initially compared with your mentioned ZoeDepth (Table 4). We did not compare with UniDepth and Metric3Dv2 mainly because they were released within two months before our submission. According to the [NeurIPS policy](https://neurips.cc/Conferences/2024/PaperInformation/NeurIPS-FAQ), we are considered concurrent works and are not expected to compare with them. Briefly compared here: on NYU-D, our fine-tuned metric depth model achieves the same $\delta_1$ result as UniDepth and is 0.5% lower than Metric3Dv2. It is worth noting that we require significantly fewer labeled images than them (ours: 0.6M *vs.* theirs: 16M and 3M). Their training sets already cover many similar scenes (*e.g.*, Taskonomy, ScanNet) as NYUv2, leading to better transfer results. More importantly, our focus is *relative* depth rather than *metric* depth. Our relative depth model performs much more robustly, as shown in the PDF. Quantitatively, on our proposed DA-2K benchmark containing mostly in-the-wild images, we achieve over 10% higher accuracy than them.
**Q4: Why our method is compared with other methods on our proposed DA-2K benchmark (Table 3), but not on other benchmarks (Table 2).**
We explained in Q3 that we do not compare with Marigold in Table 2 due to concerns about depth inversion noise during evaluation (see Q3 for results). However, we can compare with Marigold on our proposed DA-2K benchmark because DA-2K is a sparsely annotated test set. It only requires the model to discriminate which point in a pair is closer. Therefore, any depth model can be easily compared without inversion noise.
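The sparse pair-based protocol described here is straightforward to score; a minimal sketch could look like the following (the annotation format and function name are assumptions for illustration, not the actual DA-2K interface):

```python
def pair_accuracy(depth_map, pairs):
    # Each annotation is ((y1, x1), (y2, x2)), where the first point is
    # labeled as closer to the camera. A prediction is counted correct
    # if it assigns the first point a smaller depth value. (For models
    # that output inverse depth, the comparison would be flipped.)
    correct = sum(
        1
        for (y1, x1), (y2, x2) in pairs
        if depth_map[y1][x1] < depth_map[y2][x2]
    )
    return correct / len(pairs)
```

Because only ordinal relations between point pairs are checked, no scale, shift, or inversion alignment is needed, which is what makes any depth model directly comparable on such a benchmark.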
**Q5: Why some student models can outperform their teacher model.**
This is indeed a very common phenomenon in semi-supervised learning \[1, 2, 3\], where a typical strategy called "self-training" pseudo-labels unlabeled data to expand the training set. Due to space limits, we offer only an intuitive explanation: explicitly training on extra pseudo-labeled data better shapes the model's decision boundary for more robust generalization. A similar phenomenon is observed in Depth Anything V1 (its Table 9, row 1 *vs.* row 4). Additionally, Llama 3.1 produces synthetic data for itself to iteratively bootstrap its performance. An extreme but intuitive example from [Hinton's recent talk](https://www.youtube.com/watch?v=n4IQOBka8bc&t=843s) also illustrates this: *a classifier trained on 50% noisy labels can achieve much higher than 50% test accuracy*. As Hinton states, "It can 'see' the training data is wrong. *Students can be smarter than the advisor*".
[1] Pseudo-Label: The Simple and Efficient Semi-Supervised Learning Method for Deep Neural Networks, ICML Workshop 2013.
[2] Self-training with Noisy Student improves ImageNet classification, CVPR 2020.
[3] FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence, NeurIPS 2020.
---
Rebuttal 2:
Title: Comment
Comment: Thanks for the authors' detailed reply. Most of my concerns have been solved.
However, in Q3, I still believe the paper should include more methods for comparison.
Firstly, affine-invariant depth (or relative depth, as presented in your answer) is defined as scale-shift invariant with respect to metric depth. All downstream applications are in depth space, such as recovering the 3D point cloud. Although some methods train their model in the inverse space, the result should be in depth space instead of inverse depth space. Thus, in comparisons, the paper should not exclude methods such as Marigold, GeoWizard, LeReS, and others that output affine-invariant depth, rather than only choosing MiDaS. There are more advanced methods.
Furthermore, recent metric depth methods, such as ZoeDepth, ZeroDepth, and Metric3D, also present affine-invariant depth comparisons. I believe this comparison decouples the metric and can better present the geometric quality of the depth. The paper could consider including them for comparison. Affine-invariant (or relative) depth is only a compromise representation, as metric depth prediction is very ill-posed.
---
Rebuttal Comment 2.1:
Title: Thank you for acknowledging our feedback and raising the score
Comment: Thank you for acknowledging our further feedback and raising the score to 6. We will incorporate your constructive feedback into our final version. Thank you!
---
Rebuttal 3:
Title: Thank you for further feedback
Comment: Thank you for your kind suggestions regarding the methods we compared. We agree with your recommendations and will ensure that all mentioned methods, covering both affine-invariant and metric depth estimation, are included in our final version. The only exception is ZoeDepth, since it is trained exclusively on NYU-D and KITTI and thus cannot be considered *zero-shot* depth estimation (on NYU-D or KITTI), unlike ours and the other methods you mentioned. However, our fine-tuned metric depth models significantly outperform ZoeDepth in metric depth estimation.
Briefly comparing our method with other methods (some results are borrowed from Metric3D): on NYU-D: DiverseDepth achieves the $\delta_1$ score of 87.5%, LeReS is 91.6%, HDN is 94.8%, ZeroDepth is 92.6%, Metric3D is 96.6%, Marigold is 96.4%, Geowizard is 96.6%, while ours is **98.0%**. We acknowledge that there are differences in the training settings among these methods, particularly in terms of the coverage of training data and model architecture. In the final version, we will also make an effort to highlight these differences to better showcase the strengths of each method from various perspectives. Thank you again for your valuable advice. | Summary: A system is proposed for scaling up large monodepth estimation models to achieve very strong zero-shot performance, focusing on finely detailed depth prediction. Essentially, a teacher model is trained on synthetic depth datasets with perfect ground truth, and thereafter distilled on a large in-the-wild dataset.
Strengths: * The qualitative results are really stunning.
* The distillation-based framework is novel (as far as I know), and provides a valuable insight into the training of depth estimators more generally.
Weaknesses: * It would be ideal to show the results back-projected into 3D as a point cloud or mesh for some example cases. It is difficult to tell the depth map quality from colormaps alone.
* The comparison with MiDaS feels like it leaves out an important limitation of the proposed method. The proposed high-quality synthetic datasets appear to consist mostly of indoor and outdoor scenes. How will the technique stack up against MiDaS for unusual images that are not likely to be well-represented in such synthetic data, but may be available in 3D movies, such as of moving animals or people? Does the distillation on in-the-wild images just solve this problem?
* The paper appears to sort of conflate image-conditioned generation and regression, which are different problems even for image-conditional depth estimation. While "blurry" depth maps may be suboptimal for users looking for perceptual quality and seeking to back-project a 3D pointcloud or mesh, they may be in some sense metrically optimal since even scale and shift-invariant depth estimation is often ambiguous. Making predictions "fine-grained" could just be a problem of learning to sample from the distribution of image-conditioned depth, rather than heuristically applying losses such as gradient-based matching losses to a regression-based framework which is inherently a flawed way to look at the problem of perceptually high quality depth estimation.
Technical Quality: 4
Clarity: 4
Questions for Authors: The performance on scenes which are not among the original synthetic data (animals, people, etc.) is unusually impressive. I understand that distillation is the proposed explanation, but can the authors provide a clear intuition or explanation for why this works so well?
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The computational limitations are briefly mentioned.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Back-project the depth map into 3D as a point cloud.**
Thank you for your valuable advice. We have added some visualizations of the back-projected point clouds in the one-page PDF. Due to space constraints, we were only able to include a limited number of examples. However, we will try to include more comprehensive visualizations in the final version to provide a clearer demonstration of the effectiveness of our method.
**Q2: Why our method is impressive even on unusual images/objects (***e.g.***, moving animals) that are not well-represented in our synthetic training sets.**
Thank you for raising this insightful question. We provide more comparisons with MiDaS on moving animals and people in the one-page PDF. Our strong generalization ability can be attributed to three key factors working together:
- **[Object commonality]:** Even if some real objects are absent from our synthetic sets, there exist synthetic objects that share features with these real objects. For instance, synthetic moving cars in our Virtual KITTI dataset exhibit dynamics similar to real moving animals. Synthetic tables in our Hypersim dataset provide general object structure knowledge (*e.g.*, four legs and one body) for our model to generalize to animals (*e.g.*, elephants) with similar structures (the occlusion relationship among legs is similar). Such widespread commonality equips our model with a basic and general understanding of the physical world.
- **[Strong pre-trained encoder]:** While object commonality helps, most pre-trained models still struggle to generalize robustly to novel new objects. As shown in Figure 5, vision encoders BEiT-Large, SAM-Large, and SynCLR-Large all fail to produce satisfactory results on the simple cat image. In contrast, the DINOv2-Giant encoder succeeds in such synthetic-to-real transfer. DINOv2-Giant, with over 1B parameters and pre-trained on 142M curated data, possesses rich prior knowledge about the real world, allowing it to generalize well even only fine-tuned with our 595K synthetic images.
- **[Distillation from diverse pseudo-labeled real images]:** While the first two points ensure reliable generalization to real objects in many cases, they are not perfect (Figure 6). Distillation from diverse pseudo-labeled real images provides the final assurance. Our model harnesses the most general and accurate representations from the predominantly correct pseudo labels, overcoming the noise within them. An extreme but intuitive example from [Hinton's recent talk](https://www.youtube.com/watch?v=n4IQOBka8bc&t=843s) illustrates this: *a classifier trained on 50% noisy labels can achieve much higher than 50% test accuracy*. As Hinton states, "It can 'see' the training data is wrong. *Students can be smarter than the advisor*". Additionally, we discard the largest-loss (top 10%) regions during re-training (L162), ensuring our model learns from cleaner pseudo labels. Qualitatively, Figure 15 shows the tremendous advantages of using pseudo-labeled real data for training.
**Q3:** **Whether our "fine-grained" depth maps are merely *perceptually* high-quality, but are indeed *metrically* worse than "blurry" depth maps.**
**Our depth maps are both *perceptually* high-quality and *metrically* comparable or better than SOTAs.** In Table 2, we measure the quantitative results of depth models by fitting a scale and a shift scalar to align the predicted affine-invariant depth to GT metric depth. Our results are much better than MiDaS and comparable or better than Depth Anything V1. For the partially comparable results, we analyze (L171-174) that current test sets are too noisy and coarse to exhibit the true strengths of our models (*e.g.*, fine-grained predictions, robust to reflective and transparent surfaces). Motivated by the limitations of existing evaluation benchmarks, we build a new benchmark DA-2K, with diverse scenes, precise annotations, and high-resolution images (Section 6). Our V2 achieves more than 10% higher accuracy than V1 on our proposed high-quality benchmark, as shown in Table 3.
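The scale-and-shift fitting mentioned here has a simple closed form; a minimal sketch of the least-squares alignment over flattened valid pixels (illustrative only, not the authors' evaluation code) could be:

```python
def align_scale_shift(pred, gt):
    # Solve min over (s, t) of sum_i (s * pred[i] + t - gt[i])^2 via the
    # 2x2 normal equations. The aligned prediction s * pred + t can then
    # be scored against metric ground truth (e.g., with the delta_1 metric).
    n = len(pred)
    sp, sg = sum(pred), sum(gt)
    spp = sum(p * p for p in pred)
    spg = sum(p * g for p, g in zip(pred, gt))
    det = n * spp - sp * sp
    s = (n * spg - sp * sg) / det
    t = (sg - s * sp) / n
    return s, t
```

This alignment removes exactly the scale and shift ambiguity of affine-invariant predictions, so the remaining error reflects the geometric quality of the depth map itself.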
**Gradient matching loss $L_{gm}$ is indeed also beneficial to optimizing the scale-shift invariant loss.** Initially, sharing concerns similar to yours, we attempted to remove $L_{gm}$. Unfortunately, we observed that all metrics in Table 2 degraded (more than 1% drop in the $\delta_1$ metric). For this phenomenon, we argue that *$L_{gm}$ is also beneficial in optimizing the affine-invariant depth to be globally optimal*, because a perfectly predicted depth should achieve a zero $L_{gm}$ value compared with the GT depth map. For example, the GT depth of synthetic images is not only *perceptually* high-quality, but also *metrically* precise in the back-projected 3D space, demonstrating that the two properties are indeed not contradictory [1]. Lastly, we want to emphasize that it is MiDaS, not us, that proposed this loss term. We mainly demonstrate this loss is highly useful (both perceptually and metrically) when applied in the context of high-fidelity synthetic images (L165-167).
[1] Evaluation of CNN-based Single-Image Depth Estimation Methods, ECCV Workshop 2018.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. It would be nice to add some of the discussion in Q2, and regarding the gradient matching loss, to the main paper.
---
Reply to Comment 1.1.1:
Comment: Thanks for your valuable advice! We will ensure the discussion in Q2 and the discussion on the gradient matching loss are included in our final version. Thank you. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their constructive and insightful feedback. We have addressed each of the raised concerns individually. Additionally, we have provided a one-page PDF below with further visualizations and qualitative comparisons. We look forward to your further feedback. Thank you very much.
Pdf: /pdf/14a70b043e4b4f120677dd8f326a3e465b9c4f89.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
StepbaQ: Stepping backward as Correction for Quantized Diffusion Models | Accept (poster) | Summary: This paper introduces a new perspective on quantization in diffusion models. It views the quantization error as causing a "stepback" in time within the latent space during the denoising process. The paper proposes StepbaQ which adjusts the sampling steps to correct the sampling path and reduce the buildup of quantization errors.
Strengths: * The paper is well-organized, making it easy for readers to understand and follow the main ideas.
Weaknesses: * This paper does not provide solid theoretical guarantees or an in-depth analysis of how the temporal shift impacts the scheduled sampling trajectory.
* The study lacks a comparison of computational efficiency with other quantization approaches.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see weaknesses.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The method assumes quantization errors follow a Gaussian distribution; it actually requires the scaled latent variable to share the same mean as the quantized latent variable. Thus, it necessitates both a tailored diffusion process design and an accurate error model, which may restrict its applicability or effectiveness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive review and constructive comments. We provide our responses as follows.
> This paper does not provide solid theoretical guarantees or an in-depth analysis of how the temporal shift impacts the scheduled sampling trajectory.
We would like to clarify this concern. Our manuscript indeed presents a comprehensive analysis of the impact that temporal shifts have on the scheduled sampling trajectory of diffusion models. Under the assumption that quantization error is Gaussian, Section 4.1 of our paper lays the theoretical foundation by explaining how quantization error can lead to temporal shifts in the latent space.
Subsequently, in Section 4.2, we delve into the consequences of these temporal shifts on the scheduled sampling trajectory. We demonstrate that such shifts are primarily responsible for the accumulation of errors, as they introduce an unexpected "stepback" in the denoising process, as depicted in Figure 1(b). This divergence from the scheduled sampling trajectory can significantly compromise the quality of the generated results.
To address this issue, Section 4.3 discusses calibration techniques involving the correction of sampling steps. Our empirical findings strongly support our analytical discourse on sampling trajectory deviations; the proposed sampling step correction method substantially enhances the performance of diffusion models. This evidence underscores the validity of our analysis and the effectiveness of our proposed correction strategy.
> The study lacks a comparison of computational efficiency with other quantization approaches.
Please see "Author Rebuttal by Authors" above.
> The method assumes quantization errors follow a Gaussian distribution; it actually requires the scaled latent variable to share the same mean as the quantized latent variable. Thus, it necessitates both a tailored diffusion process design and an accurate error model, which may restrict its applicability or effectiveness.
Regarding the concern about the assumption that quantization error follows a Gaussian distribution, please refer to the "Author Rebuttal by Authors" section above for a detailed explanation.
As for the requirement that the scaled latent variable shares the same mean as the quantized latent variable, it can be readily achieved by removing the mean of quantization error. Consequently, the proposed StepbaQ can be effectively implemented across various diffusion process designs, demonstrating its broad applicability.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response. It gives me a clearer understanding. I maintain the score but increase the confidence.
---
Reply to Comment 1.1.1:
Comment: We're pleased to hear that your concerns have been addressed. | Summary: This paper proposes a novel perspective of quantization error in diffusion models being equivalent to a "step back" in the denoising process. The authors are show that one can effectively quantify this stepback using a small calibration dataset, and continue the diffusion process as usual after correcting for the new effective timestep. This technique doesn't make many assumptions, and hence is very general and can be integrated with many off-the-shelf quantized diffusion solvers. The authors also show convincing experimental evidence on effectiveness of their method.
Strengths: The main ideas introduced in the paper are novel and elegant. The perspective of thinking of quantization error as stepback enables the author to devise simple, clean, general techniques to help correct for step size and error accumulation. The empirical evaluation is convincing.
Weaknesses: While the experimental results (like the ablation study) are helpful and convincing, the paper could benefit from a more thorough analysis/interpretation of the method. For instance, how hard is it to estimate the quantization error variance for different models/denoising methods? What does this say about the Gaussian assumption in the first place? What does it mean (in terms of making progress) to take one denoising step if the calculated stepback is >=1 step?
Technical Quality: 3
Clarity: 3
Questions for Authors: 1) How was the calibration for estimating the stepback performed? Was there any analysis done on how many samples was sufficient to estimate it with high confidence? I also wonder if different time steps show varying difficulty of estimating the associated variance.
2) How does one explain the sudden jump in the stepback magnitude for t=1 (Figure 3)? Yes, it makes sense that the last few steps are important, but 10 step correction seems too drastic compared to 1-2 steps for previous time steps?
3) On the same note, does this high stepback number for t=1 mean that the final drawn sample from the model is heavily miscalibrated? Are there ways to remedy this?
4) Could you please elaborate on why it was okay to deactivate the timestep-specific quantization parameters in TFMQ-DM for comparison?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive review and constructive comments. We provide our responses as follows.
> How hard is it to estimate the quantization error variance for different models/denoising methods?
Estimating the variance of quantization error is a straightforward process. We begin by comparing the sampled latent variable from the floating-point model with those from the quantized model to calculate the quantization error, and then we assess its variance. This procedure is model-agnostic, meaning it does not introduce any additional complexity when estimating the variance of quantization error across different models or denoising methods.
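As a rough illustration of this model-agnostic procedure — compare floating-point and quantized latents, take the error's statistics, and remove its mean (as discussed later in this rebuttal) — a hedged sketch, with toy numbers standing in for flattened latent tensors:

```python
# Illustrative sketch (assumed procedure, not the authors' code): estimate
# quantization-error statistics from matched FP and quantized latents.
import statistics

def quant_error_stats(fp_latents, q_latents):
    """Quantization-error mean and variance over flattened latent values."""
    err = [q - f for f, q in zip(fp_latents, q_latents)]
    mu = statistics.fmean(err)
    var = statistics.pvariance(err, mu)
    # Mean removal leaves a zero-mean residual, matching the Gaussian-noise
    # assumption used by the correction.
    centered = [e - mu for e in err]
    return mu, var, centered

fp = [0.10, -0.25, 0.40, 0.05]   # toy floating-point latent values
q  = [0.12, -0.20, 0.38, 0.10]   # toy quantized latent values
mu, var, centered = quant_error_stats(fp, q)
print(round(mu, 4), var >= 0.0)
```

In practice the latents are large tensors (e.g., 64×64×4 per sample), so even a small calibration set yields many data points for these estimates.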
> What does this say about the Gaussian assumption in the first place?
Please see "Author Rebuttal by Authors" above.
> What does it mean (in terms of making progress) to take one denoising step if the calculated stepback is >=1 step?
Here we provide an example to illustrate cases where stepback is >= 1. In Figure 3, the magnitude of stepback at the first step is $10$, indicating that $\tau_1 - t_1 = 10$. Given that $t_1=1$, this implies that the distribution of $\hat{x}_1$ is more closely aligned with the distribution of $x_{11}$ rather than the distribution of $x_1$. Therefore, we can use $\tau_1 = 11$ and $\alpha_{11}$ during the sampling process (line 4 in Algorithm 1) to reflect the increased noise level of the quantized latent variable and obtain improved results.
> Was there any analysis done on how many samples were sufficient to estimate it with high confidence?
As detailed in the Implementation Details section of our paper, our findings indicate that utilizing 128 samples for StepbaQ correction is sufficient, and increasing the sample size to 1024 does not result in a noticeable improvement in performance. Here, we provide additional results that utilize 16, 128, and 1024 samples for StepbaQ correction on the SDv1.5 model under the W8A8 setting, which demonstrate that these settings yield comparable results. In fact, they derive identical corrected sampling steps. The minor variances in performance can be attributed to the slight differences in the estimated error mean.
These observations suggest that a relatively small sample size is adequate to implement StepbaQ correction effectively. This efficiency is likely due to the large dimensionality of the latent variable, which is $64 \times 64 \times 4 = 16384$, providing a substantial number of data points for accurate quantization error estimation. Nevertheless, for our experiments, we have chosen to use 128 samples rather than 16 to mitigate any potential concerns that might arise from using an excessively small sample size.
| Sample Num | COCO-FID | COCO-CLIP | SDprompts-FID | SDprompts-CLIP |
|:---------:|:---------:|:---------:|:---------:|:---------:|
| 16 | 12.21 | 27.07 | 16.69 | 28.72 |
| 128 | 12.34 | 27.01 | 16.36 | 28.69 |
| 1024 | 12.26 | 26.95 | 16.32 | 28.78 |
> I also wonder if different time steps show varying difficulty of estimating the associated variance
There is no varying difficulty in estimating the associated variance since the estimation process is the same across all steps. Please refer to line 6-7 of Algorithm 1 in our paper.
> How does one explain the sudden jump in the stepback magnitude for t=1 (Figure 3)?
The drastic change in the stepback magnitude can be attributed to the sensitive nature of the SNR when t is close to 0, which can be observed from the steep SNR curve in Figure 2. By definition, $\alpha$ approaches 1 as t approaches 0. In that case, the denominator of the SNR formula, $1-\alpha_{t_i}$ (Equation 11 in Appendix A), becomes extremely small. This small denominator can therefore be significantly influenced by the quantization error term $\sigma_i$, leading to substantial changes in the magnitude of stepback.
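This sensitivity can be made concrete with a toy experiment. The sketch below uses an assumed linear-beta DDPM schedule and a simplified SNR-matching rule (clean SNR $\bar\alpha_t/(1-\bar\alpha_t)$ versus degraded SNR $\bar\alpha_t/(1-\bar\alpha_t+\sigma^2)$); it is not the paper's Algorithm 1, only an illustration of why the same error variance yields a much larger stepback near $t=0$:

```python
# Toy stepback illustration: extra noise with variance sigma2 lowers the
# effective SNR, so the quantized latent at step t behaves like a clean
# latent at some later step tau >= t. Schedule and rule are assumptions.

T = 1000
betas = [1e-4 + (0.02 - 1e-4) * i / (T - 1) for i in range(T)]
alpha_bar = []
prod = 1.0
for b in betas:
    prod *= 1.0 - b
    alpha_bar.append(prod)

def snr(t):
    return alpha_bar[t] / (1.0 - alpha_bar[t])

def corrected_step(t, sigma2):
    """Smallest tau >= t whose clean SNR is at or below the degraded SNR."""
    degraded = alpha_bar[t] / (1.0 - alpha_bar[t] + sigma2)
    tau = t
    while tau + 1 < T and snr(tau) > degraded:
        tau += 1
    return tau

# Identical noise variance causes a much larger stepback near t = 0,
# where 1 - alpha_bar_t is tiny and the SNR curve is steep.
print(corrected_step(1, 1e-3) - 1, corrected_step(500, 1e-3) - 500)
```

With this toy schedule the stepback at t=1 is several steps, while at t=500 it is only about one step, mirroring the qualitative shape of Figure 3.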
> Does this high stepback number for t=1 mean that the final drawn sample from the model is heavily miscalibrated? Are there ways to remedy this?
As discussed in the previous response, the large stepback value is a consequence of the sensitive nature of the SNR. Therefore, this observation is not considered to be a result of miscalibration. To further alleviate the impacts of quantization error, potential approaches involve exploring alternative strategies for sampling calibration data, implementing finer granularity in the quantization process, or integrating more advanced PTQ algorithms, which we plan to explore in our future work.
> Could you please elaborate on why it was okay to deactivate the timestep-specific quantization parameters in TFMQ-DM for comparison?
It is generally understood that employing finer granularity in quantization can enhance the performance of a quantized model, as it allows for a more precise representation of information within the constraints of a limited number of bits. For instance, some studies on LLM quantization employ techniques such as group-wise quantization or token-wise quantization [B], while certain approaches to diffusion model quantization utilize timestep-specific quantization parameters, as discussed in lines 120-123 in our paper.
However, finer granularity typically introduces additional computational overhead and may provide an unfair advantage compared to methods that do not employ such granularity. Therefore, to ensure a fair comparison that excludes the trivial benefits derived from using finer granularity, we have chosen to deactivate the timestep-specific quantization parameters in TFMQ-DM. This allows us to focus on measuring the effectiveness of the temporal information-aware reconstruction of TFMQ-DM, which is the primary contribution of their approach.
[B] ZeroQuant: Efficient and affordable post-training quantization for large-scale transformers. NeurIPS 2022
---
Rebuttal Comment 1.1:
Comment: Thanks for your thorough response; it is convincing, and offers more clarity! I have increased my score from 6 to 7.
---
Reply to Comment 1.1.1:
Comment: Thank you for raising your score. We are glad to know that our response addresses most of your concerns. | Summary: This paper proposes the StepbaQ, a general strategy designed to enhance the performance of quantized diffusion models which employs a sampling step correction technique to realign the sampling trajectory and eliminate the accumulation of quantization error. Experiments show that the proposed StepbaQ improves performances of diffusion models quantized by off-the-shelf tools.
Strengths: + This paper is well-written and the topic seems quite interesting and is of high practical value.
+ The numerical results are convincing (but limited).
+ Ablation studies are provided to show importance of different components.
Weaknesses: - It is not clear that how the definition of stepback will impact performances if the stepback consists of multiple steps.
- It would be good if the authors can provide computational cost and corresponding comparisons with baselines.
Technical Quality: 3
Clarity: 3
Questions for Authors: See comments in Weaknesses.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Not applicable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive review and constructive comments. We provide our responses as follows.
> It is not clear that how the definition of stepback will impact performances if the stepback consists of multiple steps.
Here we provide an example to illustrate cases where stepback consists of multiple steps. In Figure 3, the magnitude of stepback at the first step is $10$, indicating that $\tau_1 - t_1 = 10$. Given that $t_1=1$, this implies that the distribution of $\hat{x}_1$ is more closely aligned with the distribution of $x_{11}$ rather than the distribution of $x_1$. Therefore, we can use $\tau_1 = 11$ and $\alpha_{11}$ for sampling (line 4 in Algorithm 1) to reflect the increased noise level of the quantized latent variable and obtain improved results.
> It would be good if the authors can provide computational cost and corresponding comparisons with baselines.
Please see "Author Rebuttal by Authors" above.
---
Rebuttal Comment 1.1:
Title: Thank you.
Comment: After carefully reading the rebuttal, most of my concerns have been addressed, I am retaining my score.
---
Reply to Comment 1.1.1:
Comment: We're pleased to hear that your concerns have been addressed. | Summary: The paper introduces a novel method to address the issue of accumulated quantization errors in quantized diffusion models. It attributes the quantization error to a "stepback" in the denoising process, and it introduces a sampling step correction mechanism to mitigate the adverse effects of accumulated quantization error. Experimental results demonstrate significant performance improvements in terms of FID on the SDprompts dataset, particularly under challenging quantization settings.
Strengths: 1. The paper presents a novel perspective by interpreting quantization errors as a temporal shift or "stepback" in the denoising process. This insight provides a clear understanding of how quantization errors impact model performance.
2. By calibrating the sampling trajectory, the StepbaQ effectively addressed the accumulated quantization error.
Weaknesses: 1. The author should provide more justification for the argument in Equation (5) that the quantization error can be modeled as a Gaussian random variable.
2. Can the calibrated statistics obtained from one model be directly adapted to different models?
3. Does the method work in consistency models, which perform sampling with only a few steps?
Technical Quality: 3
Clarity: 3
Questions for Authors: See above.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Based on the standard metrics, the algorithm performs well. But further justification on Equation (5) is needed. Furthermore, it would be interesting to see how this method works in the setting that only a few sampling steps are taken, such as in the context of consistency models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive review and constructive comments. We provide our responses as follows.
> The author should provide more justification for the argument in Equation (5) that the quantization error can be modeled as a Gaussian random variable.
Please see "Author Rebuttal by Authors" above.
> Can the calibrated statistics obtained from one model be directly adapted to different models?
As the estimated quantization error varies between different models, the corrected steps optimized for one model are unlikely to yield optimal results when directly applied to another. We highly recommend implementing an independent correction process for each model, given that the correction procedure of StepbaQ does not impose a significant time burden.
> Does the method work in consistency models, which perform sampling with only a few steps?
Thanks for the suggestion. Although we have already shown the effectiveness of the proposed StepbaQ under a few-step sampling setting through experiments on SDXL-Turbo with a 4-step sampling process, as detailed in Section 5.1, the prospect of applying StepbaQ to consistency models is indeed intriguing. Consequently, during the rebuttal phase, we conducted additional experiments to assess the performance of StepbaQ on the LCM-SDXL model (https://huggingface.co/latent-consistency/lcm-sdxl). These experiments were conducted using a 4-step sampling process with the guidance scale set to 8, following the provided sample code. All other hyperparameters were maintained at their default settings, and the quantization parameters were consistent with those detailed in Section 5.1 with input prompt embedding also quantized to 16-bit as SDXL-Turbo.
The results indicate that StepbaQ consistently enhances the performance of quantized diffusion models even under a few-step sampling setting. This finding demonstrates the robustness and applicability of the proposed StepbaQ.
| Method | Precision | COCO-FID | COCO-CLIP | SDprompts-FID | SDprompts-CLIP |
|---------|:---------:|:---------:|:---------:|:---------:|:---------:|
| Naive PTQ | W8A8 | 18.47 | 25.46 | 18.61 | 29.08 |
| StepbaQ | W8A8 | **17.87** | **25.50** | **17.62** | **29.12** |
| Naive PTQ | W4A8 | 97.28 | 22.80 | 73.36 | 24.58 |
| StepbaQ | W4A8 | **80.41** | **23.33** | **61.33** | **26.18** |
---
Rebuttal 2:
Comment: Thanks for conducting the extra experiments. I will increase my score but maintain the confidence, since I am not an expert in the network quantization literature.
---
Rebuttal Comment 2.1:
Comment: Thank you for raising your score. We are glad to know that our response addresses most of your concerns. | Rebuttal 1:
Rebuttal: We would like to thank all the reviewers for their constructive comments. Common questions asked by multiple reviewers would be replied here in a unified manner.
> Justification of modeling quantization error as a Gaussian random variable
Our assumption that quantization error follows a Gaussian distribution is inspired by the empirical findings presented in PTQD, where the authors verify the Gaussian nature of quantization error through statistical tests detailed in Appendix B of their paper. In our own empirical investigations, we observed that the distribution of quantization error typically exhibits a symmetric, bell-shaped curve, as depicted in Figure 4 in Appendix B of our paper and the attached pdf file. This observation aligns with the result shown in Figure 3 of the PTQD paper.
To delve deeper into the characteristics of quantization error, we present an analysis of both the kurtosis (Fisher) and skewness of the error distribution, averaged across all steps for each model under the W8A8 setting. Our findings indicate that the skewness values are notably small, confirming our observation that the quantization error follows a symmetric, bell-shaped distribution. Regarding kurtosis, the values observed are greater than zero for each model, suggesting that the distribution of quantization error exhibits relatively fat tails. These fat tails likely originate from clipping errors, which occur less frequently but are more significant than rounding errors.
Although the quantization error exhibits higher kurtosis than a Gaussian distribution, our proposed StepbaQ method still significantly enhances the performance of diffusion models, as evidenced by the results in Tables 1 and 3 of our main paper. This underscores the robustness and applicability of our approach, demonstrating that StepbaQ can effectively improve model performance under realistic conditions.
| | SDv1.4 | SDv1.5 | SDXL-Turbo |
|----------|:----------:|:----------:|:----------:|
| Kurtosis| 2.627 | 3.429 | 0.502 |
| Skewness | 0.036 | -0.020 | -0.007 |
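The reported moments can in principle be reproduced with standard estimators. A minimal sketch using plain-moment estimators over a toy symmetric, fat-tailed sample (not the authors' data):

```python
# Sample skewness and Fisher (excess) kurtosis of a flattened
# quantization-error tensor; Gaussian data gives kurtosis ~ 0.

def skew_kurtosis(xs):
    n = len(xs)
    mu = sum(xs) / n
    m2 = sum((x - mu) ** 2 for x in xs) / n
    m3 = sum((x - mu) ** 3 for x in xs) / n
    m4 = sum((x - mu) ** 4 for x in xs) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2 - 3.0  # Fisher definition: excess over Gaussian
    return skew, kurt

# Mostly small errors plus rare large (clipping-like) errors: a symmetric,
# bell-shaped-but-fat-tailed distribution, as described above.
errors = [0.01, -0.01, 0.02, -0.02, 0.0, 0.5, -0.5]
s, k = skew_kurtosis(errors)
print(round(s, 6), k > 0)  # 0.0 True
```

The symmetric sample gives near-zero skewness, and the rare large values push the excess kurtosis above zero, matching the qualitative pattern in the table.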
> Computational efficiency and comparisons with other quantization approaches
In our paper, we provide the runtime of StepbaQ for the 20-step SD v1.5 model at line 265. Here we further provide the approximate runtime of each quantization method applied to the SDv1.4 model, using a single A6000 GPU. For reconstruction-based methods such as Q-Diffusion and TFMQ-DM, calibration data is uniformly sampled every 20 steps, totaling 3,200 samples (128 samples per step). This sample size is reduced from that reported in the Q-Diffusion paper to expedite the quantization process. In contrast, for statistical methods like PTQD and StepbaQ, the calibration set size is set to 128.
According to the table, it is evident that reconstruction-based approaches, including Q-Diffusion and TFMQ-DM, require significantly more computation time. This discrepancy can be attributed to the utilization of AdaRound or BRECQ, which are notably time-consuming, especially when applied to large-scale models. This has resulted in their diminished popularity in the current landscape, where model sizes are increasing, as discussed in the GPTQ[15] paper. On the other hand, statistical approaches such as PTQD and StepbaQ exhibit superior computational efficiency, as they only involve collecting the statistics of quantization error.
However, it may not be appropriate to directly compare the computation time between reconstruction-based and statistical approaches, given their different objectives. Reconstruction-based methods aim to optimize quantization parameters to minimize error at each step, whereas statistical methods adjust the sampling process to reduce the overall error. As such, these two types of approaches can be combined to enhance performance, which is demonstrated in Table 3 of our paper.
| | Q-Diffusion | TFMQ-DM | PTQD | StepbaQ |
|----------|:----------:|:----------:|:----------:|:----------:|
| Time | 54hr | 54hr | 20min | 20min |
Pdf: /pdf/ab5469237d5ab2e1ea716091b3cb08cdae9c7b04.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper is well organized and logically understandable. The authors reinterpret quantization error as a "stepback" problem and provide a concrete theoretical illustration. They examine the latent variables behind general quantization errors in sampling trajectories, which is impressive on a first reading. Qualitative experimental results prove the effectiveness of the proposed method. This is a creative, persuasive, and well-executed work.
Strengths: The content is well organized and easy to read. The author's mature writing style makes this work logically understandable.
The core insight that quantization error can be viewed as a "stepback" is interesting and creative. The authors present detailed proofs of their theory and conduct extensive experiments.
Weaknesses: Although experiments are conducted on two mainstream diffusion models, the comparison with other existing methods seems slightly thin. The experimental results on SD v1.5 and SDXL-Turbo only take naive PTQ and PTQD into consideration, which makes the results less competitive.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In Table 1, the FID on SD v1.5 with W4A8 degrades noticeably while that of SDXL-Turbo degrades only slightly. Why is there such a significant difference between these two models?
2. The ablation study shows less improvement in CLIP score; perhaps another measurement such as the IS score should be adopted to further prove the effectiveness.
3. What classifier-free guidance scale exactly is used with SD v1.5 in this work? Have you tried different scales?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Although the authors give an interesting insight into the quantization error that arises in the sampling stage of diffusion models, the experiments seem slightly coarse, as some details are not well presented. More comparisons with existing methods and more model setting details are strongly recommended.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive review and constructive comments. We provide our responses as follows.
> Experimental results on sd v1.5 and sdxl-turbo only takes native PTQ and PTQD into consideration, which makes the result less competitive.
The experiments presented in Section 5.1 are designed to evaluate the enhancements StepbaQ brings to off-the-shelf quantization toolkits. While these toolkits provide a streamlined process for model quantization, not all of them support reconstruction-based quantization algorithms like AdaRound. In scenarios where it is necessary to augment models quantized by these toolkits without altering their established quantization workflows, only statistical methods such as PTQD and StepbaQ can be effortlessly integrated.
Consequently, within Section 5.1, we limit our comparisons to StepbaQ, Naive PTQ, and PTQD. It is important to highlight that we have conducted a comparative analysis involving other quantization methods on the SDv1.4 model, as shown in Table 3. These comparisons adhere to the experimental settings used in prior studies to ensure a fair comparison.
> The FID calculated on SD v1.5 with W4A8 degrades noticeably while that of SDXL-Turbo only slightly. Why is there such a significant difference between these two models?
We observe that SDXL-Turbo, compared to SD v1.5, demonstrates higher resistance to noise. This can be attributed to the differences in the training process of these two models. Specifically, SDXL-Turbo undergoes an additional fine-tuning phase through knowledge distillation to facilitate a few-step sampling. This supplementary fine-tuning process seemingly contributes to the model's increased tolerance to quantization errors.
> The ablation study shows less improvement in CLIP score; maybe another measurement like the IS score should be adopted to further prove the effectiveness.
We do not use the IS score as it may provide misleading results when applied to datasets other than ImageNet, as indicated in [A]. The improvement in CLIP-score is by nature less significant even when FID is greatly improved. This is because CLIP-score focuses more on the alignment between images and textual descriptions, which may not be as responsive to certain types of image quality enhancements. Furthermore, when comparing our results to those reported in Table 5 of the TFMQ-DM paper, the improvements we have achieved are already notably substantial.
> What classifier-free guidance scale exactly is used with SD v1.5 in this work? Have you tried different scales?
We focus on evaluating the model under the default setting, and therefore we only use the default value of 7.5 as the classifier-free guidance scale for the SDv1.5 model.
> Experiments seems to be slightly coarse as some details are not well presented.
We apologize for any oversight in detailing specific aspects of our method. Unless otherwise mentioned, we adhere to the default hyperparameter settings of the diffusion models for our evaluations. Regarding the quantization settings, for the experiments presented in Section 5.1, we kindly direct readers to the "Quantization Settings" subsection within that section for comprehensive information. For the experiments conducted in Section 5.2, additional details can be found within the Q-Diffusion codebase.
The details of the proposed StepbaQ method are thoroughly elaborated in Algorithm 1. For information on how StepbaQ is adapted for use with SDXL-Turbo, please refer to Appendix B of our paper.
[A] A Note on the Inception Score. ICML 2018 Workshop | null | null | null | null | null | null |
Deep Implicit Optimization for Robust and Flexible Image Registration | Reject | Summary: This paper presents a novel image registration framework that aims to bridge the gap between classical and learning-based approaches. It incorporates fidelity optimization directly into the neural network as a layer. The framework employs end-to-end implicit differentiation through an iterative optimization solver, ensuring that the features learned are both registration and label-aware. Additionally, the warp functions derived are guaranteed to represent local minima of the registration objective within the feature space. The authors report that this framework performs exceptionally well on in-domain datasets and remains robust against domain shifts, such as anisotropy and variations in intensity profiles. Furthermore, the framework is designed to allow seamless switching between different transformation representations at test time without the need for retraining.
Strengths: 1. The paper's motivation is both clear and innovative, effectively merging classical optimization techniques with neural networks by embedding an optimization layer within the network. This enhances data consistency and steers the optimization toward local minima.
2. A significant technical achievement of this paper is the backpropagation of gradients through the optimization layer using the implicit function theorem. This demonstrates the paper's technical depth and is the key contribution of the paper.
3. The analysis of loss landscapes provided in the paper is insightful. The flattening of the feature space by neural networks introduces a wider range of possible gradient directions, which, when combined with fidelity loss, enhances overall performance.
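The implicit-function-theorem backpropagation praised above can be illustrated with a toy sketch. Everything here is an illustrative assumption: the quadratic inner objective, the solver, and all names are stand-ins, not the paper's actual registration energy.

```python
import numpy as np

# Toy argmin layer: f(theta, x) = 0.5 * a * theta**2 - x * theta,
# minimized at theta*(x) = x / a.  (Quadratic chosen for illustration only.)
a = 4.0

def solve(x, lr=0.2, steps=200):
    # black-box iterative solver: plain gradient descent on theta
    theta = 0.0
    for _ in range(steps):
        theta -= lr * (a * theta - x)  # df/dtheta
    return theta

# Implicit function theorem at the fixed point df/dtheta = 0:
#   d theta*/dx = -(d2f/dtheta2)^(-1) * (d2f/(dtheta dx)) = -(1/a) * (-1) = 1/a
# i.e. the gradient is obtained from the solution alone, with no need to
# unroll (or store) the solver iterations.
implicit_grad = 1.0 / a

# sanity check against finite differences through the unrolled solver
eps = 1e-4
fd_grad = (solve(3.0 + eps) - solve(3.0 - eps)) / (2 * eps)
```

The same identity is what lets a registration network treat an entire iterative optimizer as one differentiable layer: only the curvature at the converged warp is needed, not the optimization trajectory.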
Weaknesses: 1. A major limitation is the framework's registration accuracy, as measured by the Dice score, which does not demonstrate clear advantages over neural-network-only methods. For instance, in Table 1 on the OASIS dataset, the proposed LKU-Net variant achieves a Dice score similar to that of TransMorph-Large. This calls into question the practical benefit of integrating classical optimization into the network.
2. The absence of specific smoothness measurements in comparison with other methods, such as the percentage of negative Jacobian determinants ($|J| < 0$) or the standard deviation of the logarithm of the Jacobian determinant, is a significant oversight. Without these metrics, it's unclear whether the observed increase in Dice score represents a genuine improvement or merely a trade-off with the deformation field's smoothness.
3. The paper lacks clarity in the reproducibility of results compared to other methods. For example, the learn2reg OASIS leaderboard indicates that LKU-Net can achieve a Dice score of 88.5 on the OASIS dataset without explicitly optimizing for smoothness. Similarly, TransMorph scores 88.5. Additionally, in Table 4, the performance of LKU-Net with Dice supervision is paradoxically worse than without, which is counterintuitive and raises concerns about the implementation fidelity and methodological consistency.
4. While the introduction of an optimization layer with a fidelity function is a key contribution, the paper falls short in comparing its method to other registration methods that also combine learning with optimization. This lack of comparative analysis leaves unanswered questions regarding the true effectiveness and novelty of the proposed method compared to existing approaches.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1 Could the authors clarify the primary benefits of their framework compared to existing learning frameworks that incorporate optimization, such as ConvexAdam [1], SAMConvex [2], PDD-Net [3], and PRIVATE [4]? Specific examples or quantitative comparisons highlighting how the framework outperforms these methods would be helpful.
2 The paper claims that, "For the first time, our method allows switching between arbitrary transformation representations (from free-form to diffeomorphic) at test time with zero retraining." Could the authors elaborate on why this capability is a significant advantage over other methods? For example, registration networks that separate feature extraction from deformation field estimation using multi-level image pyramids might achieve similar flexibility. What makes the framework distinct or superior in this context?
3 The paper mentions that anatomical landmarks can be incorporated into the framework. Are these landmarks based on image segmentation or keypoints, or do they refer to a more general concept of landmarks? If possible, could the authors demonstrate or provide insights on how these keypoints or landmarks are integrated within the optimization loop?
[1] Siebert, Hanna, Lasse Hansen, and Mattias P. Heinrich. "Fast 3D registration with accurate optimisation and little learning for Learn2Reg 2021." International Conference on Medical Image Computing and Computer-Assisted Intervention. Cham: Springer International Publishing, 2021.
[2] Li, Zi, et al. "Samconvex: Fast discrete optimization for ct registration using self-supervised anatomical embedding and correlation pyramid." International Conference on Medical Image Computing and Computer-Assisted Intervention. Cham: Springer Nature Switzerland, 2023.
[3] Heinrich, Mattias P. "Closing the gap between deep and conventional image registration using probabilistic dense displacement networks." Medical Image Computing and Computer Assisted Intervention–MICCAI 2019: 22nd International Conference, Shenzhen, China, October 13–17, 2019, Proceedings, Part VI 22. Springer International Publishing, 2019.
[4] J. Hu, W. Gan, Z. Sun, H. An, and U. S. Kamilov. A Plug-and-Play Image Registration Network, Mar. 2024. arXiv:2310.04297
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The paper commendably integrates a fidelity function with an optimization layer into a neural network, but there are several limitations:
1. The optimization layer has not demonstrated significant performance improvements over traditional neural network methods on the OASIS leaderboard, and the implementation issues in the other datasets/methods raise concerns about the completeness of the results. Clarifying scenarios where this integration is advantageous could enhance the paper's impact.
2. The absence of quantitative smoothness metrics for registration performance is a critical gap. Introducing these metrics would strengthen the comparative analysis with existing methods.
3. The paper lacks a comprehensive comparison with major competitors that combine optimization with learning. Expanding this analysis would provide clearer insights into the framework's unique contributions.
4. Concerns about implementation and reproducibility persist.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed and insightful feedback. We are excited to hear that the reviewer found the paper both clear and innovative, and appreciated the paper's technical depth and the insight provided by Fig. 2. We believe we have addressed all concerns, and look forward to engaging in a discussion and convincing the reviewer of the importance of the work.
**A major limitation is the framework's registration accuracy, as measured by the Dice score**
The focus of our paper is not solely achieving the highest Dice score on in-distribution (ID) data. Instead, our primary objective is to develop a robust and flexible method that performs well both ID and under domain shifts such as variations in preprocessing and acquisition configurations. While LKU-Net performs exceptionally well on the OASIS dataset, it completely breaks down under domain shift, which is a significant disadvantage. In contrast, a baseline like SynthMorph (SM) gets a Dice score of ~0.77 on the OASIS validation set, but performs on par with DIO under domain shifts due to its modality-agnostic training methodology. Our approach shares SM's motivation of maximizing performance _and_ robustness, which we believe offers substantial practical benefits in real-world scenarios where data variations are common.
**The absence of specific smoothness measurements … trade-off with the deformation field's smoothness.**
These numbers are reported in Table 2. There is a slight (obvious) tradeoff between Dice score and det(Jacobian) values. Table 2 also shows the versatility of our learned feature maps, which do not _overfit_ to the optimization algorithm used during training.
**The paper lacks clarity in the reproducibility ...**
There is a slight difference between the results reported in the papers and the challenge leaderboard, notably due to implementation improvements post-publication. We used the recommended parameters in the official code and found the numbers to agree (within reasonable error margins) with those reported in their papers.
**Additionally, in Table 4, the performance of LKU-Net with Dice supervision is paradoxically worse than without**
This is not observed by us alone – [1] show in Sec 4.5.2 that most architectures like Convnet-affine and VTN-affine cannot generalize outside the training domain. Moreover, [1] observe that the semi-supervised models are _inferior_ to their unsupervised models on the LPBA dataset, indicating that anatomical knowledge injected into the model with supervision may not generalize well beyond the training data. We observe this exact phenomenon with more baselines in Tab.4.
Baselines like SM and LapIRN perform reasonably on OASIS (way worse than LKU), but demonstrate consistent robustness to domain shift. Does it mean they show paradoxical behavior? Very likely not. In general, for registration, the validation overlap on in-distribution data tells nothing about the domain shift performance.
We’re confident that LKUNet is trained properly. We will release all code and models used in the paper for fairness and reproducibility.
**the paper falls short in comparing its method to other registration methods that also combine learning with optimization**
To our knowledge (see L104-L118), most methods do not perform learning and optimization end-to-end. The closest work Reviewer UTrd pointed out is KeyMorph, which cannot be extended to general warp fields which do not have closed-form solutions.
**1 Could the authors clarify the primary benefits of their framework … methods would be helpful.**
ConvexAdam, SAMConvex, PDDNet use 6D correlation volumes (instead of 4D features), which do not scale well with large data. Moreover, they are not learned end-to-end. PnP uses DEQ to finetune an AWGN network, but the data-fidelity term comes from the intensity images, while our features also incorporate label-fidelity. These are clear differences showing the elegance and strength of our formulation.
**2 The paper claims that, "For the first time, our method allows switching." Why is this an advantage over other methods?**
The ability to switch between arbitrary transformation representations (e.g., from free-form to diffeomorphic) at test time without retraining is a significant advantage. Traditional DLIR methods fix the warp field representation during training, limiting flexibility and requiring retraining if the representation needs to change due to different downstream needs (e.g. model was trained with diffeomorphism for normative patients but requires non-diffeomorphism at inference for a subject with pathology). DIO allows for this flexibility, enabling the model to zero-shot adapt to different warp requirements and constraints. This capability is particularly beneficial in scenarios where the desired transformation properties may evolve or vary across applications.
**3 The paper mentions that anatomical landmarks can be incorporated … within the optimization loop?**
Landmark information is incorporated indirectly using a Dice loss (for labelmaps) or landmark distance (for keypoints). Similar to DLIR methods, using a Dice loss at training will learn features such that instance optimization of learned features minimizes dice alignment loss. Therefore, the learned features are label-aware (as seen by the performance gap between ours and ANTs/FireANTs).
At test time, if labels/keypoints are available, instance-optimization can simply add a Dice/landmark distance loss (sparse loss) in addition to feature image alignment loss (dense loss). Hyperparameter tuning can also be done at inference-time.
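The "dense loss plus sparse loss" instance optimization described above can be sketched on a toy 1-D problem. All signals, landmarks, and the single-parameter warp here are illustrative stand-ins; the actual method optimizes a full warp field over learned multi-scale feature maps.

```python
import numpy as np

# Synthetic 1-D "feature maps": the moving feature is the fixed feature
# translated by an unknown shift, plus one landmark pair.
t = np.arange(256.0)
fixed_feat = np.exp(-0.5 * ((t - 128) / 10) ** 2)
true_shift = 15.0
moving_feat = np.exp(-0.5 * ((t - 128 - true_shift) / 10) ** 2)
lm_fixed, lm_moving = 128.0, 128.0 + true_shift  # one landmark per image

def loss(s, lam=0.1):
    # dense feature-alignment loss: warp moving by s and compare to fixed
    dense = np.mean((np.interp(t + s, t, moving_feat) - fixed_feat) ** 2)
    # sparse landmark-distance loss: the warped landmark should hit lm_fixed
    sparse = ((lm_moving - s) - lm_fixed) ** 2 / len(t)
    return dense + lam * sparse

def instance_optimize(steps=500, lr=50.0, eps=1e-3):
    # gradient descent with a numeric gradient over the scalar warp parameter
    s = 0.0
    for _ in range(steps):
        g = (loss(s + eps) - loss(s - eps)) / (2 * eps)
        s -= lr * g
    return s
```

Dropping the `lam` term recovers purely dense instance optimization, which mirrors how, at test time, the sparse loss is simply added to the feature-alignment objective when labels or keypoints happen to be available.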
**Concerns about implementation and reproducibility persist.**
We have provided our entire codebase in the supplementary material. If there are additional concerns, we’re happy to clarify during the discussion period.
[1] Mok, Tony CW, and Albert Chung. "Affine medical image registration with coarse-to-fine vision transformer." CVPR 2022.
---
Rebuttal Comment 1.1:
Comment: The authors' rebuttal has addressed some concerns, showing the paper's potential for excellence.
However, in its current form, it is not yet suitable for publication at NeurIPS.
The reviewer encourages the authors to conduct more comprehensive experiments, particularly those involving fidelity loss in the post-optimization phases, rather than simply asserting differences from existing methods. | Summary: This paper introduces DIO, a differentiable implicit optimization layer for registration networks that aims to bridge the gap between classical and learning-based image registration, while incorporating weak supervision such as anatomical landmarks into the learned features. The authors decouple feature learning from optimization and train a deep network to predict multi-scale dense features that are registered through a black-box iterative optimization solver.
Strengths: **S1.** The proposed model significantly improves over SOTA models on domain shift experiments.
**S2.** The paper is well-written
**S3.** Multi-scale optimization seems an interesting approach that might be a possible module that can be integrated in deformable image registration.
Weaknesses: **W1. Possibility of artifacts from different voxel sizes.** How do different voxel sizes ensure that the velocity or transformation field remains differentiable or invertible? I believe this approach might introduce artifacts and lose fine details when propagating the source image to match the target image. And how does processing the image features independently preserve the diffeomorphism property when generating the transformation field? Can the image-matching term capture intensity differences as efficiently as treating the input images as a pair? Overall, processing the input images separately in the feature extractor raises several questions regarding the credibility of the transformation field $\phi^*$.
**W2. Motivation for using multi-scale optimization.** I found the motivation for using multi-scale optimization somewhat underdeveloped. What is the rationale behind using this kind of optimization given the different source/target image features?
**W3. Applicabilities of learned multi-scale dense features from sparse images.** In Sec. 4 the authors tried to show that DIO learned interpretable dense features and that, compared to the classical methods, DIO preserved the gradient in the loss function. On the other hand, the authors also discussed that deep networks recovered the affine transform with ~90% overlap. I wonder what the advantage of capturing multi-scale dense features is compared to existing DLIR methods such as VoxelMorph, TransMorph, etc.
**W4. Experimental supports.** Though the authors are getting comparable performance in image registration (Tab. 1), they are achieving improved results when testing on out-of-domain/distribution datasets. However, the authors might want to show their model's performance without the proposed multi-scale optimization. Basically, is it the optimization scheme or the multi-scale features that helps the complete registration model achieve those bits of improvement? And, importantly, why? Two interesting sets of experiments that would validate the domain shift hypothesis are the following -
*(i) train on some of the other datasets (excluding OASIS) and test on the rest,* and
*(ii) train on multiple datasets, including OASIS, and test on the rest.*
Overall, I appreciate the authors for working in the domain of image registration which is very relevant as well as important in the medical imaging domain. However, the current version of the manuscript lacks some important experimental justification and further experiments. With that being said, the current version of the manuscript is under the threshold of acceptance. However, I am open to reconsidering the initial rating if the above concerns are adequately justified.
Technical Quality: 2
Clarity: 2
Questions for Authors: How do decoupling feature learning and optimization blocks help the model in getting improved performance over existing DLIR methods?
I tried to summarize all the findings, concerns, and questions in the Weaknesses section.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Limitations have been discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback. We are delighted to learn that the reviewer finds the paper well written and appreciates multi-scale feature optimization to be an interesting approach. We believe we have addressed all the remaining concerns and hope the reviewer increases their score and advocates for the acceptance of our paper.
**W1. Possibilities of getting artifacts from different voxel sizes**
This is standard practice in classical image registration, where the fixed and moving images are typically not in the same space. For applications such as unimodal and multimodal registration [1,2,3], ex-vivo to in-vivo registration [4,5], images often differ in size and voxel resolution. Instead of resampling the moving image twice (once to make its size equal to the fixed image and second during warping) thereby introducing more resampling artifacts, classical methods perform a single resampling (during warping only). This is also the motivation for introducing decoupled feature extraction and optimization – to enable registration with modalities with grossly different spatial resolutions.
Many established toolkits like ANTs, NiftyReg, Greedy, Demons, and LDDMM handle varying image sizes effectively. In contrast, current DLIR methods are limited because they require images to have matching spatial dimensions due to channel-wise concatenation. This is a problem with applications like in-vivo to ex-vivo, where downsampling the ex-vivo image loses fine detail and upsampling the in-vivo image consumes memory. Our method overcomes this limitation of existing DLIR methods, allowing for more flexible registration.
**Overall, treating the input images separately from the feature extractor raises several questions regarding the credibility of the transformation field $\phi$**
“this approach might introduce artifacts and lose fine details when propagating the source image to match the target image” – This is indeed the case, but for DLIR methods, which typically resample the images into a uniform spatial resolution (1mm isotropic for OASIS dataset in Learn2Reg) followed by another resampling due to warping – introducing additional resampling artifacts. This loses fine details if the image was acquired at a higher resolution (say 0.7mm in-plane resolution) and introduces resampling artifacts due to anisotropy.
**W2. Motivation for using multi-scale optimization**
Using multi-scale optimization is also standard practice in the classical image registration community. The rationale is that optimization at the finest scale only will converge at some bad local minima, due to the ill-conditioned and ill-posed nature of deformable registration. Therefore, registration at coarser resolutions aligns large structures like ventricles, and finer scales align small structures like gray matter sulci. Motivated by classical methods, certain DLIR methods also use multiscale optimization. Multiscale optimization in DIO is motivated by learning discriminative feature maps at different scales compared to naive downsampled versions of intensity images.
For example, intensity-based registration at 4x downsampling achieves a Dice score of 0.6, whereas our method achieves around 0.75 on the OASIS val set, and enjoys a similar improvement at 2x downsampling. We've added this ablation to the Appendix due to space constraints.
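The coarse-to-fine rationale described above can be sketched with a toy 1-D translation example. Everything here is an illustrative assumption (average-pool pyramid, brute-force SSD search over a single integer shift); the actual method optimizes dense warp fields over learned multi-scale feature maps.

```python
import numpy as np

def downsample(x, factor):
    # average-pool a 1-D signal by an integer factor (a crude image pyramid)
    n = (len(x) // factor) * factor
    return x[:n].reshape(-1, factor).mean(axis=1)

def best_shift(fixed, moving, search):
    # brute-force the integer shift minimizing SSD within +/- `search` samples
    shifts = list(range(-search, search + 1))
    errs = [np.mean((np.roll(moving, s) - fixed) ** 2) for s in shifts]
    return shifts[int(np.argmin(errs))]

def multiscale_register(fixed, moving, scales=(8, 4, 2, 1), search=3):
    # coarse levels lock onto large structure cheaply (a small search window
    # at 8x covers +/- 24 full-resolution samples); finer levels refine it
    shift = 0  # cumulative shift, in full-resolution samples
    for f in scales:
        fx = downsample(fixed, f)
        mv = downsample(np.roll(moving, shift), f)
        shift += f * best_shift(fx, mv, search)
    return shift
```

Searching only at the finest scale with the same small window would miss large displacements entirely, which is the local-minima argument made above.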
**W3. Applicabilities of learned multi-scale dense features from sparse images.**
We’re not entirely sure what “DIO preserved the gradient in the loss function” and “discussed that deep networks recovered affine transform with 90% overlap” mean. We are happy to clarify in the discussion period.
**W4. Experimental supports.**
We’ve added an ablation on training with and without multiscale optimization. We see slightly better scores for multi-scale optimization, due to better local minima achieved during optimization at coarser scales.
Since our model learns feature maps for registration, it _has to be_ followed by an optimization step. The feature images are indeed helpful - as is evident from the difference between ANTs and our method, wherein the underlying optimizer is the same.
**Two interesting sets of experiments that validate domain shift**
This is exactly what we did, i.e. trained on OASIS and tested on the rest. We did not train on other datasets due to their limited size (364 training images in OASIS versus 40 in LPBA40). Training on multiple datasets is possible, but the size of the OASIS dataset would dominate over the others. Given space considerations, we leave more nuanced ablations for future work.
**How do decoupling feature learning and optimization help the model in getting improved performance over existing DLIR methods?**
Our goal is not to asymptotically outperform existing methods (see the Limitations section of the paper), but to trade off a little in-distribution performance (as in Table 1) for generalizable and robust performance under domain shift. Moreover, small improvements in Dice score can be attributed to slight misalignments of smaller anatomical regions.
[1] Murphy, Keelin, et al. "Evaluation of registration methods on thoracic CT: the EMPIRE10 challenge." IEEE transactions on medical imaging 30.11 (2011): 1901-1920.
[2] A. Klein, et al. Evaluation of nonlinear deformation algorithms applied to human brain MRI registration. NeuroImage, 46(3):786–802, July 2009.
[3] Wang, Quanxin, et al. "The Allen mouse brain common coordinate framework: a 3D reference atlas." Cell 181.4 (2020): 936-953.
[4] Khandelwal, Pulkit, et al. "Automated deep learning segmentation of high-resolution 7 tesla postmortem MRI for quantitative analysis of structure-pathology correlations in neurodegenerative diseases." Imaging Neuroscience 2 (2024): 1-30.
[5] Meyer, Charles R., et al. "A methodology for registration of a histological slide and in vivo MRI volume based on optimizing mutual information." Molecular imaging 5.1 (2006): 7290-2006. | Summary: The authors introduce the idea of implicit optimization, coupled with feature extraction for images, to achieve robust image registration. I liked the overall idea and was eager to gain insight into how implicit optimization performs in this setting.
Strengths: I really liked the overall ideas here.
An implicit optimization layer does indeed address a DL shortcoming that has been pointed out before (several DL methods show that some fine-tuning of the output deformation field will improve registration for many domains/models).
I like the idea of using this in combination with a feature extractor and using those learned features to drive the optimization, with potential benefits down the line.
The paper is also pretty clear and concise, which I like.
There are extensive experiments, with measures of variance, which is a great start.
Weaknesses: Unfortunately, I found the rest of the paper (beyond the core idea) lacking and having several weaknesses.
Importantly, the authors mischaracterize important relevant literature and conceptual ideas, that I think are incorrectly described in the motivation, and not compared to properly in the experiments. This makes it challenging to assess and gain insights from the contribution.
Incorrect characterization: the authors make a few statements that to me seem directly false (and are important to the paper). For example, on line 50 they say "Moreover, design decisions like sparse keypoint learning for affine registration [103, …] do not facilitate dense deformable registration" -- they repeat this in several parts of the paper (e.g. line 107). This is wrong -- their first citation, for example, 103, uses keypoints for *deformable* registration (along with affine). A few lines lower they say "Current DLIR methods are not robust to minor domain shift like varying anisotropy and voxel resolutions, different image acquisition and preprocessing protocols [62, 53, 70, 43]." -- which is incorrect and not supported by the citations. First, citations 62, 53, 70 are from before 2012 and do not discuss DLIR methods at all (but just general registration), whereas citation 43 *is explicitly tackling and achieves robustness to domain shift*. It may well be that the authors' method does better (I am not sure, see below), but the claim is incorrect. Many other papers tackle distribution shift in DL registration -- see Mok et al., 2023. Another crucial omission is related to the authors' claim that DL methods may not output local minima results -- which is true, but plenty of works propose to take the output of neural networks and perform a bit of instance-specific optimization of the resulting field to get to that local minimum -- essentially 'fine-tuning' the field for a couple of seconds (e.g. VoxelMorph TMI 2019, but plenty of other methods as well after that, for example from Matthias Heinrich's group).
This is crucial to the current paper, since the proposed method essentially does the same thing at inference -- runs a forward neural network (albeit just as a feature extractor), and then performs an optimization for the image pair -- and while it's done differently (and more elegantly in some sense) than in the existing literature, these approaches are very related. Overall, I was excited about the method but found the motivation/related works either misleading or lacking rigor -- perhaps the authors are simply not aware of the abilities of the existing literature mentioned above, but this does limit the novelty and insight substantially.
In the experiments, it seems to me that some obvious results are missing:
. I am not sure why methods used in Figure 3 (e.g. [43]) are missing from Table 1. It seems like a crucial comparison. Deformable KeyMorph [103] is missing from the whole experiments section, and is close to the proposed method in that it separates the feature extraction (there via keypoints, but using a parallel net) from the optimization. Training KeyMorph on OASIS and testing it seems like an important comparison if we are to extract insights into how the decisions in the proposed method (the feature extractor and the implicit optimization) advance the field.
. Overall the results in Table 1 do not seem impressive -- comparable at best with existing methods. This is totally okay in my book, if the authors are able to communicate other interesting insights. Unfortunately, I do not believe this is the case.
. In the domain-shift section the authors show that their method tends to outperform the DL methods. However, their method gets the benefit of doing instance-specific optimization (the proposed layer) after feature extraction, at the cost of some GPU work for each pair. This is what instance-specific optimization does at the end of DL methods (as discussed above), which was employed in several papers, but this is not included in the comparison! This is a peculiar omission to me -- it should be included for completeness, but importantly it is also crucial to understand whether the proposed method behaves differently -- perhaps there is some advantage, in several situations, to the proposed method, or perhaps it offers more guarantees, etc. -- we simply don't know. Minor: it would also be interesting to understand the limits of this model's domain shift -- does it generalize to more substantial variations in modality, or the 7mm slice spacing found in clinical sequences?
. I also find the claim that DL methods do not work without crops peculiar - most DL methods are convolutional and hence not size specific, and some (e.g. SynthMorph, which the authors refer to here) does not even require both images be of the same size.
. Since one of the contributions of the paper is the parallel feature extraction (to be used with the optimizer), it seems to me that it would be an important ablation to take the features of some robust method (does not have to be registration, even a domain-shift-robust segmentation network will do, or a 'robust foundation model', etc) and see if that can be combined with an optimizer. This would help provide insights if the formulation of the proposed model and the end-to-end training is useful.
. The claim in 4.4 is also missing some reasonable comparison -- would it not be possible to take the displacement field of any other DL method, initialize your favorite parametrization (freeform, diffeomorphic, etc.) with that, and run the (any) optimizer? It seems like this is easily doable and a reasonable comparison?
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see comments above. It would be important to understand why the omissions in the motivation and experiments happened -- did the authors not know some of the keypoint based methods are deformable? Were they not aware of instance-specific optimization after the output of a DL method (which is discussed in several papers)? etc.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes the paper has a limitations section. It would be nice if the authors could comment on how this method can be used on CPUs -- by far the standard hardware available to non-ML users (neuroscientists, clinicians, etc).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful feedback, and we’re glad to note that they found the paper overall very interesting. The review has been immensely helpful in improving the quality of our work. We believe we have addressed all concerns in the rebuttal, and we look forward to a discussion and convincing the reviewer about the importance and relevance of our work.
**Sparse keypoints (kps) don’t facilitate dense reg. is wrong - KeyMorph (KM) does deformable reg. with sparse kps**
This is an oversight on our part. We have revised this incorrect characterization in the paper, and mentioned that these formulations allow linear and deformable registration. However, our motivation for DIO still holds, i.e. KM can only output deformation fields that admit closed-form warps (e.g using TPS), from learned keypoints. However, TPS is a limited class of warps, cannot be guaranteed to be diffeomorphic, and a vast majority of widely used parameterizations (free-form, SVF, geodesic, LDDMM, SyN) do not admit closed form solutions making KM inapplicable. While in KM the keypoints are trained end-to-end using backprop, we use implicit differentiation through the optimization to learn dense feature maps. DIO is therefore an elegant and generalized extension of the KM idea: “keypoints” are replaced by “dense feature maps” and “closed-form warp field” parameterizations are replaced by “arbitrary warp fields” (i.e. solutions of optimization objectives). Moreover, KM runs out of memory quickly – TPS with 512 kps runs OOM on a A6000 GPU whereas DIO outputs multi-scale feature maps equivalent to 192 * 224 * 160 * (1 + 1/8 + 1/64) * 16 / 3 ~ 41M keypoints.
**acquisition and preprocessing protocols [62, 53, 70, 43]…not supported by the citations**
We agree! The first three citations were supposed to reference different image acquisition and preprocessing protocols themselves (neuro/lung imaging). However, the statement itself is correct as shown in Fig.3. and Tab.4; these are very strong results to support the statement. We have cited Fig.3 in the paper instead.
**DL methods dont output local minima, but instance optimization (IO) can – this is very related**
We agree that these approaches are related, however there is a subtle but crucial difference. In existing approaches, the deep network is “label-aware”, but outputs a warp field that may not be a local minima of *any* objective. IO is performed on the *intensity images* that are not label aware. This IO is a small change on top of the predicted warp field. In contrast, we train feature maps to be label aware *by design* (since registering feature maps minimize label overlap loss), as shown in Tab.1 and Fig.3. We sidestep warp field prediction altogether and perform a single IO step on *label aware features* instead of *label-unaware intensity images*.
**Synthmorph is missing in Tab1**
For DLIR, we only added a few recent methods that score > 0.82 Dice on OASIS val. SM scores ~0.77 Dice on the OASIS dataset with the provided pretrained model.
**Deformable KeyMorph is missing from the whole experiments section**
We were aware only of the affine variant. However, the performance of KM is unsatisfactory. We trained KM on the OASIS train split, and observed the following dice scores for OASIS val and zero-shot on IBSR:
| Loss | OASIS | IBSR |
|--|--|--|
| MSE | 0.6078| 0.4663 |
| Dice | **0.6437** | 0.49250 |
The in-distribution performance strongly agrees with the deformable registration dice scores in Fig.5,7,8,9 of the KM paper. We observe that KM is very robust in recovering from large rotational deviations due to its training scheme (good for canonicalizing data acquired in non-standard coordinates), but is an unsatisfactory baseline for deformable registration due to ~6M voxel displacements being parameterized by only 512 keypoints, making registration of small subcortical structures especially hard.
**In the domain-shift section the authors show that their method works because of IO, but others dont use IO**
This experimental design is justified by the ‘feature space’ in which IO is performed. We find the warp field as the minimizer to an objective function with feature maps. The minimizer is found using IO. However, the warp field produced is a local minimizer of IO in the “learned feature space” and not in the “intensity space”. The warp field produced by DIO may not be a minimizer in the intensity space. Moreover, implementing IO for every baseline is a highly tedious task, since different baselines have different implementations of the warp field, including scaling, different coordinate systems, etc.
A fair comparison for DIO would be to perform IO of learned features, followed by IO of intensity images. For simplicity, we do not perform IO in the intensity space for any method.
**some (e.g. SM, which the authors refer to here) does not even require both images be of the same size.**
This is not the case, since every DLIR method requires the fixed and moving images to be concatenated channelwise, implying matching spatial dimensions. IBSR18 volumes have different voxel sizes. Moreover, some of the methods have hardcoded image sizes – this is an implementation issue. This also highlights challenges in using DLIR methods from official implementations. Methods like TransMorph use ViT end-to-end and cannot be used with different image sizes. We will make all scripts public for fairness and reproducibility.
**it seems to me that it would be an important ablation to take the features of some robust method … end-to-end training is useful.**
This seems like a useful ablation, but we are not sure which method to use for feature extraction.
**The claim in 4.4 is also missing some reasonable comparison**
We’re not sure what that would show. Sec. 4.4 is written to show that the learned features in DIO do not “overfit” to the warp field representation during training, shown by inference-time switching to an unseen optimizer.
---
Rebuttal Comment 1.1:
Comment: I read the rebuttal carefully and thank the authors.
However, I believe the various omissions require a new submission, since they would require a new revision. They are simply too central to the work, and cannot be explained in a single rebuttal paragraph -- in a normal journal submission, this would be a major revision. For example, the KM omission requires a bit more than a simple run through and a 2x2 table -- this assumption that there is no related method is repeated throughout the paper and used to justify decisions. The KM method would need to be more thoroughly tested. Similarly, while IO is indeed not exactly the current method, the entire paper ignores IO as if it doesn't exist -- when in fact the difference is subtle at best. This not only requires reframing, but rigorous experimentation. This line, for example, i think substantially weakens the paper and I would argue is an important omission, still: "For simplicity, we do not perform IO in the intensity space for any method."
For a future submission, some more minor comments:
- the citations example was more minor, but please note that there are several statements back to back like this -- where the claim is false and the citations used (that make it look like you are using related works to back up a claim) do not support the claim. I don't understand how the authors stand by the truthfulness of the (in my opinion over-reaching) statement -- there are definitely DLIR methods that are robust to domain shift, or at least some types of domain shift. In my opinion, Figure 3 does *not* support the statement -- yes the current method might improve on some methods, but only marginally -- making the claim way too strong.
- but please note that while it's true that most DLIR methods concatenate the inputs in the architecture, some available implementations take in different-sized images, resample to the same (highest) voxel size to be isotropic, pad, and only then concatenate. I believe SM does this, but am not an expert. Either way, it's trivial to do. This enables the method to work with differently-sized inputs. I am not using this point in my decision, though -- just a note for your submission.
Unfortunately, overall I believe the core omissions, which still exist (e.g. see the IO comment), are central to the work and would require a new submission to be properly reviewed. I believe my original score stands. I do wish the authors good luck with this; I think there is something good here to be communicated, just not in its current form.
Rebuttal: We thank all reviewers for their insightful feedback and for taking the time to improve the quality of our work. We are glad that reviewers found the overall idea [UTrd] and multi-scale optimization idea [3syP] interesting, innovative and insightful [uXMo], with clear and concise writing [UTrd, 3syP, uXMo], extensive experiments with measures of variance [UTrd], improvement over SOTA models under domain shift [3syP], and technical depth [uXMo]. We have addressed all questions in the individual comments. We summarize and clarify some common concerns:
**Performance in Table 1 is not outperforming LKU.**
Our goal is not to outperform existing methods (see Limitations of the paper), but to trade a bit of asymptotic performance on the in-distribution dataset (as in Table 1) for accurate, generalizable and robust performance under minor domain shift, interpretability, and zero-shot plug-and-play of arbitrary displacement field constraints and optimizers. Our model still performs very competitively on the in-distribution dataset.
In real-world settings, registration algorithms must be robust to variations in data distribution. Effective registration algorithms should be robust and interpretable, allowing us to understand why registration might fail if it does. We aim to extend these capabilities in DLIR methods by learning feature maps that lead to robust, accurate registration and interpretable features. This practical benefit is significant compared to methods that perform well on a single test data distribution but fail under real-world variations.
Given the unanimous agreement on the technical novelty, depth, and innovative ideas in the paper, accompanied by clear and concise writing, we believe the paper’s merit should not be determined solely by asymptotic in-distribution validation performance but by its overall applicability to real-world registration scenarios determined by performance under domain shift, interpretability, and flexibility of the algorithm to varying or evolving registration scenarios.
**Comparison with related baselines**
To our knowledge (see L104-L118), most existing methods do not perform learning and optimization end-to-end. The closest work Reviewer UTrd pointed out is KeyMorph, which uses closed-form warp functions (i.e. thin plate splines) and cannot be extended to general displacement fields that do not have closed-form solutions.
We add comparisons with KeyMorph (KM) both in-distribution on the OASIS validation set, and on domain shift.
| Training Loss | OASIS | IBSR |
|--|--|--|
| MSE | 0.6078| 0.4663 |
| Dice | **0.6437** | 0.49250 |
The in-distribution performance strongly agrees with the deformable registration _dice scores_ in Fig.5,7,8,9 of the KM paper. We observe that KM is very robust in recovering from large rotational deviations due to its training scheme (good for canonicalizing data acquired in non-standard coordinates, as shown in their paper), but is an unsatisfactory baseline for deformable registration due to (192x160x224) ~6M voxel displacements being parameterized by only 512 keypoints, making registration of smaller subcortical structures especially hard. This is the maximum number of keypoints used in the KeyMorph paper as well; adding more keypoints runs out of memory on an A6000 GPU with 48GB VRAM, indicating steep memory requirements.
We use the official repository (https://github.com/alanqrwang/keymorph) for training KeyMorph.
For training _without_ label supervision, we use the following script:
```shell
python scripts/run.py --job_name oasis_unsup --save_dir ./oasis-unsup --num_keypoints 512 --loss_fn mse --transform_type tps_0 --data_path ./oasis_data.csv --train_dataset csv --run_mode train --backbone truncatedunet --use_amp
```
For training _with_ label supervision, we use the following script:
```shell
python scripts/run.py --job_name oasis_sup --save_dir ./oasis-sup --num_keypoints 512 --loss_fn dice --transform_type tps_0 --data_path ./oasis_data.csv --train_dataset csv --run_mode train --backbone truncatedunet --use_amp
```
Each training script runs for 2000 epochs (default), which takes about a day.
In regards to writing, we have addressed all changes in individual comments. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Truthfulness of Calibration Measures | Accept (poster) | Summary: The authors evaluate a variety of existing calibration measures according to their completeness, soundness, along with truthfulness which is their main focus, by theoretically showing how the measures satisfy these aforementioned criteria. They discover that existing measures, despite being complete and sound, have large truthfulness gaps. They introduce a new method, SSCE, that makes a simple modification of smooth calibration error by subsampling time steps, that is also approximately truthful and the authors provide proofs for this.
Strengths: - The paper is the first systematic, theoretical investigation of the truthfulness of a range of calibration measures.
- Extensive and rigorous proofs are provided for all claims.
- The paper introduces a novel calibration measure, the Subsampled Smooth Calibration Error (SSCE), which is both complete and sound while also being approximately truthful. The proposed SSCE has the potential to be widely used.
Weaknesses: - Given your definition of truthfulness that you base your analysis on, is it consistent with what previous works have defined, and do other works stress how crucial this aspect of calibration is? I’d like to see more discussion of related works tackling truthfulness in calibration and its importance, particularly to help establish why an approach like SSCE should be used in practice in the future over ECE or proper scoring rules that have trade-offs.
- Despite the introduction claims that Part III of the paper (about the adversarial setting and theorem 1.3) is one of the main contributions, it is not featured prominently in the rest of the main body of the paper, nor is much written about it, so it is difficult to ascertain its importance in the context of the work. It would be beneficial to discuss this more, as most of the paper is devoted to Part II.
- The paper lacks many practical examples which can make it hard to follow at times, particularly when the formulation (of notions such as soundness) differs from previous works. It would be helpful to have more examples such as in [FRST11].
Technical Quality: 4
Clarity: 3
Questions for Authors: - What are the main implications of adaptive adversary results? If a sublinear penalty for SSCE is achievable, how would it perform on average or in a typical use case?
- The subsampling time steps technique is applied to smooth calibration error to improve its truthfulness. Can it be more generalized and applied to different calibration measures to improve their truthfulness? Discussing this could help shed light on the generalizability of the approach.
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Related works tackling truthfulness.**
As we mention in the paragraph starting on Line 34, to the best of our knowledge, no prior work has systematically investigated or formally defined the notion of truthfulness. Instead, prior works observed the lack of truthfulness in certain calibration measures, and then used this observation in the design of algorithms / hard instances for those specific measures. The lack of a systematic study and definition of truthfulness also came as a surprise to us! This is the reason we believe our work makes a compelling contribution to the study of calibration.
For the camera-ready version, we will add more details of how and where truthfulness has played a role in prior work. At a high level,
for the ECE, [FH21, QV21] noted that the forecaster can lower their ECE by predicting according to the past. This observation was applied in the algorithm of [FH21] and motivated the "sidestepping" technique in the lower bound proof of [QV21]. For smooth calibration and distance from calibration, [QZ24] used their observation (that both measures can be made $\mathrm{polylog}(T)$ on independent coin flips) as a justification for why a strong lower bound cannot be easily proved on a sequence of random bits.
**Part III of the paper.**
We will use the extra page afforded by the camera-ready version to provide more details of Part III of our contributions.
We agree that (while the rest of the paper focuses on showing that the calibration measure we introduce, $\mathsf{SSCE}$, satisfies desirable properties like truthfulness) it is also important to know whether there even exist forecasting algorithms that perform well with respect to our proposed measure, $\mathsf{SSCE}$.
This is the purpose of Part III, where we show that---just as there exist algorithms guaranteeing $O(\sqrt T)$ $\mathsf{smCE}$---there is an algorithm guaranteeing $O(\sqrt T)$ $\mathsf{SSCE}$.
At a technical level, our proof here (specifically Theorem 1.3 as proved in Appendix F) uses a key step from prior work [ACRS24]. In particular, we combine their deterministic algorithm, which guarantees $O(\sqrt{T})$ $\mathsf{CalDist}$ (and thus $\mathsf{smCE}$), with our Lemma F.1 that bounds the difference between $\mathsf{smCE}$ and $\mathsf{SSCE}$ via standard chaining techniques.
**Implications of adaptive adversary results.**
One particular implication is: In terms of the optimal error, predicting against an adaptive adversary is as easy as predicting a sequence of independent coin flips---it is possible to get an $O(\sqrt{T})$ $\mathsf{SSCE}$ in the former case while, by our Theorem 6.1, the optimal error in the latter case is $\Omega(\sqrt{T})$. This is not the case for every calibration measure; for example, expected calibration error $\mathsf{ECE}$ does not have this property.
We also note that Theorem 6.1 provides a $O(\sqrt{T})$ bound on $\mathsf{SSCE}$ in the worst-case; in nicer cases, one should achieve lower $\mathsf{SSCE}$.
**Apply subsampling to other measures.**
We believe that understanding the extent to which subsampling more generally helps with truthfulness is an excellent direction for future work! While applying this technique more broadly is beyond the scope of this work, let us say a bit more about what we already suspect. Due to the discontinuities of the expected calibration error $\mathsf{ECE}$ and maximum swap regret $\mathsf{MSR}$, we believe it to be impossible for our technique of subsampling timesteps to imbue either of the two calibration measures with bounded truthfulness guarantees.
Our current analysis might still apply though for calibration measures similar to smoothed calibration error but where the Lipschitz function class (see L488) is replaced with other function classes. | Summary: The paper initiates the study of truthful calibration measures. It introduces a set of requirements --- truthfulness, completeness, soundness, and asymptotic calibration --- that together define a novel and fruitful set of requirements on calibration measures. It explores the classic sequential binary prediction setting, and proves in that setting both that (1) an entire array of existing calibration metrics fail to satisfy at least one of these requirements, and (2) that a simple novel metric called SSCE (which is a smooth and subsampled calibration measure) actually satisfies all of these, in particular being only a constant factor away from perfect truthfulness.
Strengths: This paper is in my opinion strong both on the modeling (principled definitions leading to nontrivial theory) and on the technical side.
The paper presents a new array of results that evaluate calibration measures under a new angle not explicitly considered before: truthfulness (along with several other dimensions that make this requirement nontrivial, as without further restrictions even constant measures can be truthful as the authors remark). It is very appealing that this work successfully tackles both sides of the issue — on the one hand, it carefully analyzes an entire array of existing (including recent) calibration measures and derives principled bounds showcasing their lack of (optimal) truthfulness, but it also proposes a novel — even if simple — calibration metric that “smooths out” lack of truthfulness in prior measures via subsampling. In this way, it presents a principled framework on which future work could build to both explore and exploit truthful calibrated elicitation.
The technique, with which the main result — O(1) truthfulness of SSCE — is derived, is quite impressive; despite a simple formulation, the proof requires a careful martingale-based argument which necessitates careful twists on standard concentration/anticoncentration/other measure-theoretic results. Based on familiarity with related work, I believe this complexity is fully justified even the “elementary” nature of the statement; and I concur with the authors that this technique may be useful for further work as well.
Weaknesses: Overall, this is a strong paper. It is not a flaw per se given that this is the first paper to model the “truthful” setting — and thus any and all reasonable modeling assumptions are great to have at this stage — but the main question I am not fully certain about is: how principled is the fashion in which “soundness” — i.e. the property that “bad” predictions should lead to linear (in T) calibration measure value — is defined? Meaning, the requirements imposed (that predicting the complement of the truth, or consistently predicting a bias different from the true underlying bias, are both bad) seem necessary but possibly not sufficient to correspond to what may be viewed as “consistently bad predictions”. It would be great if the authors could shed further light on this. (On the other hand, the other dimensions of inquiry (truthfulness/completeness/asymptotic calibration), I would argue, are indeed defined in a very principled way.)
Technical Quality: 3
Clarity: 3
Questions for Authors: Can you give some crude but reasonable constant bound on the truthfulness constant c of SSCE? If that is possible, along with an explanation of how this bounding propagates through the outlined proof, that might also serve as additional explanation of the proof pipeline for the reader.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Definition of soundness.**
At a high level, soundness mirrors completeness and requires that intuitively bad predictions should result in a high calibration error. In this paper, we establish minimal conditions for soundness by considering two particular types of bad predictions: (1) predicting the complement of the true outcomes, and (2) making consistently biased predictions relative to the true mean of the product of Bernoulli distributions. We agree with the reviewer that these conditions are necessary but not sufficient, and that it is an important open direction to explore more principled definitions of soundness. For example, our second requirement can be naturally extended to multiple Bernoulli distributions with different means, where predictions are deemed "bad" if they consistently deviate from the true means.
However, in our paper, we have chosen to focus on the simplest version, as none of the measures we examined seems to have a problem with soundness.
**Constant factors.**
While we prioritized simplifying the proof over optimizing the constant factors, these constants can be easily tracked. We emphasize that these constants are only for demonstrations and likely can be improved with more careful analysis:
- From Theorem C.1 (Line 897), the ratio between the truthful predictor's expected SSCE and $\mathbb{E}[\gamma(\text{Var}_T)]$ is upper bounded by $(48 + 8\ln 2)\cdot(2 + 2\sqrt{2}) \le 259$.
- From the proof of Lemma 6.2 (Lines 1062, 1063 and 1073), the ratio between the optimal forecaster's expected SSCE and $\mathbb{E}[\sqrt{N_T}]$ is lower bounded by $0.05\cdot 2^{-7}\cdot (1/2) = 1/5120$. We expect that this ratio can be improved using a more careful analysis of the case where $N_T$ is small.
- From Lemma D.3 (Line 1167), the ratio between $\mathbb{E}[\sqrt{N_T}]$ and $\mathbb{E}[\gamma(\text{Var}_T)]$ is lower bounded by $1/16$.
As a result, the ratio between the expected SSCE of the truthful forecaster and that of the optimal forecaster is upper bounded by $259\cdot 5120\cdot 16 \le 2.2\cdot 10^7$.
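The constant-tracking above is straightforward to verify mechanically; a small sketch (the theorem/lemma names follow the bullets above):

```python
import math

# Check the three constant factors quoted above and their product.
c1 = (48 + 8 * math.log(2)) * (2 + 2 * math.sqrt(2))  # Theorem C.1 bound
assert c1 <= 259

c2 = 0.05 * 2 ** -7 * 0.5                             # Lemma 6.2 constant
assert abs(c2 - 1 / 5120) < 1e-12

overall = 259 * 5120 * 16                             # 16 from Lemma D.3
assert overall <= 2.2e7
print(overall)  # the overall truthful-vs-optimal SSCE ratio bound
```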
---
Rebuttal Comment 1.1:
Title: Response to Authors
Comment: Thank you for your response. Having read it along with the other reviews and your rebuttals to those, I am keeping my positive score and assessment of the paper. A couple clarifying remarks: regarding the constant that I asked about, thank you for providing the quick derivation --- it makes sense to me --- but I would still like to emphasize that if you were to expand your explanation in the rebuttal somewhat, and include it in the paper, that would provide additional ropes for the reader to hold on to while parsing the long and technical proof; I would encourage you to do that. Also, I believe we are in agreement regarding the soundness component, but it might be worth to additionally emphasize the point that this condition is minimal, as in the context of the flow of the paper, the definition comes out almost out of nowhere and might confuse some readers by its unambitiousness relative to the definitions of the other properties. | Summary: The paper studies calibration measures in a sequential prediction setup. In addition to rewarding accurate predictions (completeness) and penalizing incorrect ones (soundness), the paper formalizes another desideratum of calibration measures — truthfulness. A calibration measure is truthful if the forecaster (approximately) minimizes the expected penalty by predicting the conditional expectation of the next outcome, given the prior distribution of outcomes.
The paper first shows that the existing calibration measures fail to simultaneously meet all these criteria (completeness, soundness, truthfulness). The paper then proposes a new calibration measure — Subsampled Smooth Calibration Error (SSCE) — that is shown to be approximately truthful via a non-trivial analysis.
Strengths: The proposed truthfulness desideratum for the calibration measure is well-motivated. The authors have provided several rigorous technical results to support the need of the proposed calibration measure (SSCE), including the failure of being truthful of existing calibration measures. The analysis and the results are non-trivial.
Weaknesses: The proposed Subsampled Smooth Calibration Error is essentially the expected of the smooth calibration error over the uniformly randomly selected events. This seems to be related to the definition of “Event-conditional unbiasedness” proposed in “High-Dimensional Prediction for Sequential Decision Making”. Could authors comment a bit on this connection?
It is a bit unclear how to understand truthfulness in an adversarial setting, especially given that Definition 2.5 (Truthfulness of calibration measures) requires a distribution over the sequence of events. It might be helpful to clarify this a bit.
Does the authors have intuitions on how the current results could be extended if one consider multi-outcome space, instead of being binary?
Technical Quality: 3
Clarity: 3
Questions for Authors: please see above.
Confidence: 1
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Connection to "event-conditional unbiasedness".**
In our view, the event-conditional unbiasedness in [Definition 2.3, NRRX23] is a strengthening of the notion of "calibration with checking rules" from [FRST11], where each checking rule specifies a subset of the time horizon $[T]$ on which the calibration error is evaluated, as discussed in Lines 109--115. Roughly speaking, event-conditional unbiasedness allows each checking rule to be violated up to a degree commensurate with the number of steps that the checking rule applies to. Therefore, event-conditional unbiasedness, like the notion from [FRST11], has a worst-case flavor, as it takes the maximum over the event family. Note that taking the worst case over all $2^T$ events leads to vacuous guarantees due to the large number of events. On the other hand, our proposed notion of SSCE is qualitatively different, as it is an average-case notion that takes the expectation over all subsets of the time horizon.
**Truthfulness in an adversarial setting.**
In our view, the truthfulness is a property of a calibration measure, and is independent of whether the measure is used in a stochastic or adversarial setup.
Intuitively, truthfulness requires that whenever the forecaster has additional side information about the next outcome, they should be encouraged to use it.
Our current formulation focuses on the "stochastic" scenario where the outcome sequence $y_{1:T}$ is drawn from some prior distribution, and this prior distribution is provided to the forecaster as side information. In this case, the forecaster is able to calculate the ground-truth probability of the next outcome $y_t$ from the history $y_{1:t-1}$, and they should truthfully forecast this ground-truth probability. Arguably, this serves as a minimal condition of truthfulness.
Following our methodology, truthfulness can be naturally defined in adversarial settings as well, which we discuss in Section 7. In the adversarial setting, the next outcome is selected by an adversary, which may depend on both the historical outcomes $y_{1:t-1}$ and the forecaster's previous predictions $p_{1:t-1}$. In this setting, truthfulness would require that whenever the adversary's strategy is provided to the forecaster as side information, they should use that information to truthfully predict the ground-truth probability of $y_t$ under this strategy.
**Extension to multi-outcome spaces.**
Central to our current proofs are the analyses of concentration (specifically, uniform convergence over the Lipschitz class) and anti-concentration. So, we expect that the results can be extended to the multi-outcome spaces via the high-dimensional analogues of such results.
We agree that the exact analysis and the bounds on the truthfulness for multi-outcome spaces would indeed be an interesting direction for future work.
---
Rebuttal Comment 1.1:
Comment: I thank the authors' response, I do not have further questions. | Summary: This paper proposes revisiting the calibration measure by emphasizing three main desirable properties for the metric's behavior: rewarding accurate predictions, penalizing incorrect ones, and ensuring truthfulness. The authors define truthfulness as the ability to accurately predict the conditional expectation of future outcomes. The paper's core findings are twofold: first, it demonstrates that most calibration metrics do not adequately capture truthfulness; second, it argues that a small modification to the classical Expected Calibration Error (ECE)—specifically, subsampling the window in which the ECE is calculated and then averaging the results—is sufficient to meet all the desired properties. The whole article is a series of theoretical arguments supporting these claims.
Strengths: The claims appear to be supported by rigorous mathematical arguments, which, I found, are also quite lengthy and challenging to follow.
Weaknesses: The theoretical argument for including the notion of truthfulness is intriguing, but I couldn't discern any practical implications for omitting it. At the very least, we would expect some numerical demonstrations, even on simple toy examples.
Technical Quality: 3
Clarity: 1
Questions for Authors: - Could you numerically highlight any meaningful difference between smECE and the proposed SSCE?
- Does the introduced subsampling breaks to possibility to easily draw a reliability diagram as in (Blasiok & Nakkiran, 2023)?
- The definition in lines 41-42 is crucial to the paper and deserves to be introduced more clearly; it is only formalized on page 5. This in general makes the paper quite convoluted (most of the points, even simple arguments or definitions, are understood only after reading a long detour. For a >50-page technical paper this is quite painful, and I personally just gave up reading it.)
Confidence: 1
Soundness: 3
Presentation: 1
Contribution: 2
Limitations: Ok
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Summary of paper.**
We believe that there is a possible typo or misunderstanding in the summary written by the reviewer---our new calibration measure, $\mathsf{SSCE}$, is obtained from the smooth calibration error ($\mathsf{smCE}$) plus subsampling, rather than from the ECE plus subsampling.
**Practical implications for lack of truthfulness.**
Lines 30--32 include an explanation of the practical importance of truthfulness: If a calibration measure is far from being truthful, the forecaster is incentivized to "overfit" to the measure rather than figuring out the exact distribution of the next outcome. In many cases, this leads to predictions that are far from the true probabilities, and would weaken the trustworthiness of the forecasts.
For the practical impact of the lack of truthfulness on a toy example, our informal discussion and calculations in Lines 195--206 as well as the formal proofs in Appendices A.2 and A.3 indeed provide one such example. In short, we give simple examples on which (1) the "right" predictions lead to a large error of $\Omega(\sqrt{T})$ or $\Omega(T)$, whereas (2) a different strategy that frequently makes "wrong" predictions has a much lower error of $0$. The analyses of these examples are simple and elementary, and we believe they serve as good examples for demonstrating how a lack of truthfulness can lead to highly biased and qualitatively poor predictions.
We can also numerically compute the example on Lines 195--206, say for $T = 1000$ timesteps.
The expected calibration error $\mathsf{smCE}$ of truthful prediction (average over 100 seeds) is $11.27$; the $\mathsf{smCE}$ of the dishonest prediction strategy is deterministically $0$.
In contrast, the subsampled smooth calibration error $\mathsf{SSCE}$ of truthful prediction is $8.83$, while the $\mathsf{SSCE}$ of the dishonest prediction strategy is $8.85$.
That is, the incentive to lie according to the dishonest strategy is near-zero under our proposed calibration error.
**Difference between $\mathsf{smECE}$ of [Blasiok-Nakkiran, ICLR'24] and the proposed $\mathsf{SSCE}$.**
We thank the reviewer for pointing us to this calibration measure. We believe that the smooth ECE is qualitatively different from the SSCE, both in terms of their definitions and their truthfulness guarantees. At a high level, the definition of $\mathsf{smECE}$ aims to mitigate the discontinuity of the ECE introduced by the binning; this is in a similar spirit to $\mathsf{smCE}$. However, even after this smoothing, the measure still allows the forecaster to "predict to the past" in an effort to minimize their penalty. On the other hand, our definition of $\mathsf{SSCE}$ introduces additional randomness by subsampling the horizon, so that the benefit from "predicting to the past" is minimized.
In more detail, in our proof of Proposition A.2, we give a distribution $\mathcal{D}$ on which: (1) truthful prediction typically leads to an $\Omega(\sqrt{T})$ bias at value $1/2$; (2) a different forecasting strategy achieves perfect calibration. Property (2) implies that the optimal error is $\mathsf{OPT}_{\mathsf{smECE}}(\mathcal{D}) = 0$. Property (1), with a simple calculation, shows that truthful prediction leads to an $\Omega(\sqrt{T})$ $\mathsf{smECE}$ (if we scale up the definition in [BN24] by a factor of $T$). In other words, $\mathsf{smECE}$ has a $0$-$\Omega(\sqrt{T})$ truthfulness gap.
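The $\Omega(\sqrt{T})$ bias of truthful prediction on fair-coin outcomes is easy to check numerically. The following minimal numpy sketch (an illustration of the $\sqrt{T}$ scaling only, not the exact construction in the paper nor the $\mathsf{smCE}$/$\mathsf{SSCE}$ computation) simulates the calibration bias at value $1/2$:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_bias_at_half(T, trials=2000):
    """Average |sum_t (y_t - 1/2)| when truthfully predicting p = 1/2
    on T fair-coin outcomes: the calibration bias at value 1/2."""
    y = rng.integers(0, 2, size=(trials, T))
    return np.abs((y - 0.5).sum(axis=1)).mean()

b1, b4 = mean_bias_at_half(1000), mean_bias_at_half(4000)
# The bias grows like sqrt(T): quadrupling T roughly doubles it.
print(f"T=1000: {b1:.1f}   T=4000: {b4:.1f}   ratio: {b4 / b1:.2f}")
```

Quadrupling $T$ roughly doubles the average bias, matching the $\Omega(\sqrt{T})$ rate, while a forecaster who instead "predicts to the past" can drive this bias to $0$.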
**Impact of subsampling on reliability diagrams.**
While the visualization of miscalibration is beyond the scope of this work, we believe that existing methods for producing reliability diagrams (e.g., the one introduced by [BN24]) can easily accommodate the subsampling. It suffices to sample a few independent subsets of the time horizon and generate the reliability diagram corresponding to each subsampled set. Then, we can either stack these diagrams together, or plot the confidence intervals computed from these trials.
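To illustrate the subsampling mechanics, here is a hypothetical numpy sketch: a basic binned reliability curve is used as a stand-in for whichever diagram method one prefers (it is not the smooth diagram of [BN24]), and one curve is computed per independently subsampled subset of the horizon.

```python
import numpy as np

rng = np.random.default_rng(0)

def binned_reliability(preds, outcomes, n_bins=10):
    """Mean prediction vs. mean outcome per bin (a basic reliability curve)."""
    bins = np.minimum((preds * n_bins).astype(int), n_bins - 1)
    return np.array([(preds[bins == b].mean(), outcomes[bins == b].mean())
                     for b in range(n_bins) if np.any(bins == b)])

T = 5000
preds = rng.uniform(size=T)
outcomes = (rng.uniform(size=T) < preds).astype(float)  # well-calibrated toy data

# Draw a few independent subsamples of the horizon and compute one curve each;
# the curves can then be overlaid, or summarized by confidence intervals.
curves = []
for _ in range(5):
    keep = rng.random(T) < 0.5          # subsample roughly half the timesteps
    curves.append(binned_reliability(preds[keep], outcomes[keep]))
```

On calibrated toy data, all subsampled curves stay close to the diagonal, so stacking them (or plotting per-bin confidence intervals) conveys the same information as a single diagram while reflecting the subsampling randomness.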
**Definition in Lines 41--42.**
We thank the reviewer for the comment, and would like to explain our decision on the organization. The formal definition of truthfulness requires the introduction of many concepts (those in Definitions 2.3, 2.4 and 2.5)
and thus is relegated to the Preliminaries section. However, we do have informal definitions and explanations in the introduction (with forward pointers to Section 2). We believe this allows the readers to choose their own pace, e.g., either reading the paper in order and gradually building on the formality of the definitions or jumping to formal definitions and back.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarifications.
- Despite the nice theoretical contributions, a minimal set of experiments illustrating the notion would be beneficial if the authors believe that the additional *truthfulness* metric is useful in practice. In any case, I think it can help readers' understanding.
> If a calibration measure is far from being truthful, the forecaster is incentivized to "overfit" to the measure rather than figuring out the exact distribution of the next outcome.
I am not sure I understand this point. Perhaps my confusion comes from the fact that I see calibration measures as essentially distances to the ground-truth conditional distribution. This is shown more formally in https://arxiv.org/abs/2402.10046
As such, since the continuity issue almost surely does not occur, I suspect the same might happen with the truthfulness criterion. The example in Lines 195--206 is quite artificial, so for any algorithm with a bit of randomness, these edge cases should not matter.
That was why I asked for numerical benchmarks. I would be curious to hear the authors' opinion on that.
- Paper Organization
Ok, I was not sensitive to the authors' choice, but this is subjective. Unfortunately, I find the paper quite lengthy and hard to follow.
> SSCE is obtained from the smooth calibration error (smCE) plus subsampling, rather than from the ECE plus subsampling.
Yes, of course. Thanks for the precision.
---
Reply to Comment 1.1.1:
Comment: Thank you for the follow up!
**Regarding “calibration measure as essentially distances to ground-truth conditional distribution”.**
We think this is a very important point to clarify. Calibration measures are usually defined as a generic function that takes as input a prediction sequence and an outcome sequence (Lines 151–152). Although they are commonly understood as "distances to ground-truth conditional distributions", our work proves this is not the case---calibration can significantly incentivize one to deviate from predicting the ground-truth conditional distribution.
We also wanted to note that, in adversarial settings for example, there may not be a well-defined notion of an underlying ground-truth conditional distribution. Even in settings where such a conditional distribution does exist, a calibration measure should still be a function of predictions and outcomes, rather than the conditional distribution. As a result, it might not be the case that “the continuity issue almost surely does not occur”.
**"For any algorithm with a bit of randomness, these edge cases should not matter".**
We do not see truthfulness as an edge case similar to the sensitivity of ECE to output perturbations (where even vanishingly small perturbations to predictions can significantly increase ECE), which is the edge case cited in your reference. Even if such perturbations were added to one's predictions (we believe this is what you are referring to as an algorithm "with a bit of randomness"), the lack of truthfulness we observe with smECE still holds; for example, the numerical experiment we provided in our rebuttal remains unchanged if one adds vanishingly small perturbations. We do not believe there is any dimension along which perturbing the setup of the example in Lines 195--206 significantly changes the observed truthfulness problem---as such, we do not believe it to be an artificial edge case.
Convolutions and More as Einsum: A Tensor Network Perspective with Advances for Second-Order Methods | Accept (poster) | Summary: This paper provides a tensor network (TN) view of studying convolutional networks, including forward passes, first-order and second order derivatives. Based on these notations, the authors show how to efficiently implement some algorithms such as KFAC and structured dropout. Experimental results show that these implementations have faster computation while being memory efficient.
Strengths: 1. The paper is well written with nice tensor network graphs. While many papers in the tensor decomposition field suffer from notorious indexes, the notation in this paper is consistent and clear.
2. Although the TN graphs for matrix multiplications and derivatives themselves are not new, this paper seems to be the first to provide a comprehensive study of these representations and computations in CNNs. Based on these representations, the authors show how to efficiently implement the KFAC and structured dropout algorithm, which is important for optimization.
3. Experiments show the computational and memory efficiency of the proposed TN implementations.
Weaknesses: The authors claimed some potential impacts of the proposed representation, including theoretical and practical benefits. However, they only present applications to KFAC and structured dropout. Moreover, the derivations seem to require expertise in TNs. Therefore, I feel the potential usefulness of the proposed representation is still unclear.
Technical Quality: 3
Clarity: 3
Questions for Authors: The authors show faster speed of KFAC computation. Does this implementation bring better convergence, training speed or results in practice? Is there any result for the trained network?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer vbTD,
thanks for your support! We appreciate the time and effort you put into reviewing our work.
> The authors show faster speed of KFAC computation. Does this implementation bring better convergence, training speed or results in practice?
You have a point that we did not evaluate the impact of our achieved speed-up on a real second-order method. We are happy to inform you that we will add such an experiment to the main text (please see our [global rebuttal](https://openreview.net/forum?id=cDS8WxnMVP&noteId=dKv8oyjgLS) for a detailed description). In this experiment, we were able to halve the computational overhead of a KFAC-based method compared to SGD, and dramatically reduce its peak memory (sometimes by a factor of 2) to a value very close to that of SGD.
A direct consequence of these findings is that they increase training speed, or allow using hardware with less memory.
One could also experiment with more frequent pre-conditioner updates or larger batch sizes to obtain convergence in fewer steps. We believe these are interesting directions to explore in the future that have so far been disregarded due to the high extra computational cost. To produce statistically significant statements and recommendations, though, we believe it will take a larger ablation study that is beyond the scope of this work.
Thanks again, and let us know if you have further questions.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response. I will maintain the score. | Summary: The paper presents a novel approach to simplifying and optimising TN operations for CNNs. The authors introduce an abstraction for tensor networks to efficiently handle convolutions and leverage TN simplifications to improve computational performance. The paper details the theoretical foundations, implementation strategies and results. The results suggest improvements in run-time and memory efficiency over standard methods.
Strengths: The key strengths are:
- The paper includes rigorous mathematical formulations and proofs to support the proposed methods.
- Provides thorough details on implementation, aiding reproducibility and practical adoption.
- Applicable to a wide range of convolutional operations, including those with various hyper-parameters, batching, and channel groups. The generality enhances utility and flexibility
- Good use of visualisations and diagrams to clarify and explain the networks
- Interest to the research community with strong potential for future extensions
Weaknesses: The key weakness is that the performance demonstrated is not very convincing in many cases. This is understandable: as the authors mention, standard routines have been aggressively optimised, whereas the TN counterparts have been explored significantly less. However, this by itself is not a reason to reject, given the potential further research that can be built on this work and the interest it generates.
Minor things:
- Line 57 does not appear to have the reference to 2.2 in the right place
- Two pages are spent on the preliminaries. I see the value of having this to set the context for the later sections but I wonder, if some of those were moved to the appendix, what new content could be added to the main method and results?
- The NeurIPS format states "Place one line space before the figure caption and one line space after the figure." Some of the Figure captions are placed next to the figure in a two-column setup which I am not sure if satisfies the style rules
- Standard errors, rather than standard deviation, might be more informative in Figure 8
Technical Quality: 3
Clarity: 4
Questions for Authors: Questions for authors:
- Why are the (non simplified) TN implementations slower in general? Do the standard implementations have further efficiency optimisations?
- Are there any practical ways to estimate a priori whether a selected TN has high contraction order variability? Note: to know the approximate variability range, not necessarily the exact values of each order
- Does figure 6 include variability wrt contraction order?
- How many of the operations support GPU/hardware acceleration? Would there be performance gains if they were implemented?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: There is a section discussing limitations, but the authors could mention more practical factors. Such as: scalability issues and how the methods perform with larger datasets (should be straightforward with synthetic data), potential biases introduced and their impact on different types of data and potential real-world scenarios where the methods might fail or underperform.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer cYbp,
thanks for your strong support and detailed review! We will apply your suggested improvements to the text.
We would like to inform you that we conducted a new experiment with a real second-order method based on KFAC-reduce (see our [global rebuttal](https://openreview.net/forum?id=cDS8WxnMVP&noteId=dKv8oyjgLS)). Our TN implementation was able to halve the second-order method's run time overhead compared to SGD, and dramatically reduce its peak memory (sometimes by a factor of 2). We hope this provides further evidence that our approach is indeed useful to advance second-order methods and close their computational gap.
## Questions
- > Why are the (non simplified) TN implementations slower in general? Do the standard implementations have further efficiency optimisations?
This observation is correct. Non-simplified TNs perform contractions using a dense version of $\Pi$, which is sparse. This causes wasteful multiplies by zero. Our simplifications reformulate the contraction with $\Pi$ into `reshape`s or `slice`s, which remove these zero multiplications from the TN (see e.g. the bottom right panel of Fig. 1) and thereby speed up the computation.
Whenever simplification is not possible, the main difficulty for using the index pattern's sparsity is that PyTorch's `einsum` only accepts dense tensors. We experimented with alternatives that support mixed-format tensors, and have some ideas how to make progress on this frontier in future work. If you are interested in knowing more, feel free to have a look at our [response to Reviewer iQrS](https://openreview.net/forum?id=cDS8WxnMVP&noteId=u5s1zM6z7X).
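To make the dense-$\Pi$ point concrete, here is a hypothetical minimal 1-D, single-channel numpy sketch of convolution (cross-correlation, as is standard in deep learning) written as an einsum against an explicit binary index pattern; the actual index pattern in the paper additionally covers channels, batching, and hyper-parameters.

```python
import numpy as np

def conv1d_dense_pi(x, w):
    """1-D valid cross-correlation as an einsum with an explicit binary index
    pattern Pi. Pi is materialized densely, so the contraction wastefully
    multiplies by zero wherever Pi[k, j, i] = 0."""
    (J,), (K,) = x.shape, w.shape
    I = J - K + 1                      # number of output positions
    Pi = np.zeros((K, J, I))           # Pi[k, j, i] = 1  iff  j == i + k
    k, i = np.meshgrid(np.arange(K), np.arange(I), indexing='ij')
    Pi[k, i + k, i] = 1.0
    return np.einsum('kji,j,k->i', Pi, x, w)

x = np.arange(6.0)
w = np.array([1.0, 0.0, -1.0])
print(conv1d_dense_pi(x, w))           # → [-2. -2. -2. -2.]
```

The simplifications in the paper remove exactly this waste by turning the contraction with $\Pi$ into `reshape`s or `slice`s instead of a dense einsum operand.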
- > Does figure 6 include variability wrt contraction order?
This depends on the heuristic used to find a contraction path. Generally speaking, the heuristic depends on the number of tensors in the TN. The tensor networks in Fig. 6 contain at most 6 tensors. After looking into `opt_einsum`'s internals, we believe that the TNs with at most 4 tensors do not contain variability w.r.t. contraction path schedule, while the others might:
In all our experiments, we used `opt_einsum`'s `contract_path` method with the default `'auto'` strategy to find contraction schedules. `opt_einsum` supports many strategies and according to the [documentation](https://optimized-einsum.readthedocs.io/en/stable/path_finding.html#introduction), using `'auto'` "will select the best of these it can while aiming to keep path finding times below around 1ms". The exact behavior depends on the number of tensors to be contracted and is described [here](https://optimized-einsum.readthedocs.io/en/stable/path_finding.html#performance-comparison): For TNs with up to 4 tensors, the optimal schedule is searched. For TNs with 5-6 tensors, a restricted search is carried out. We are unsure whether this restricted search is deterministic or stochastic, so it may be that this approach causes some extent of variability in contraction order.
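As a concrete illustration, numpy ships a variant of the same path optimizer (derived from `opt_einsum`), so the chosen contraction schedule and its cost report can be inspected directly; a small sketch with a hypothetical four-tensor chain (the regime where the optimal schedule is searched exhaustively):

```python
import numpy as np

# A small four-tensor chain, as in a TN with 4 tensors.
a, b = np.random.rand(8, 64), np.random.rand(64, 8)
c, d = np.random.rand(8, 64), np.random.rand(64, 8)

# einsum_path returns the chosen pairwise-contraction schedule plus a cost
# report (FLOP count, intermediate sizes, scaling of each step).
path, report = np.einsum_path('ab,bc,cd,de->ae', a, b, c, d,
                              optimize='optimal')
print(path)    # e.g. ['einsum_path', (0, 1), (0, 1), (0, 1)]
print(report)

# The precomputed path can be reused for the actual contraction.
result = np.einsum('ab,bc,cd,de->ae', a, b, c, d, optimize=path)
```

`opt_einsum.contract_path` returns the analogous information for general TNs and supports the additional search strategies discussed above.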
- > Are there any practical ways to estimate a priori whether a selected TN has high contraction order variability? Note: to know the approximate variability range, not necessarily the exact values of each order
This is indeed an interesting question. We did not look into this and decided to rely completely on `opt_einsum`, which uses a sophisticated heuristic based on [performance comparisons](https://optimized-einsum.readthedocs.io/en/stable/path_finding.html#performance-comparison) to strike a good balance between contraction path search time and contraction time. We believe this is a good representation of what a practitioner would do.
- > How many of the operations support GPU/hardware acceleration? Would there be performance gains if they were implemented?
All operations we propose are purely PyTorch and therefore support GPU execution out of the box.
Thanks again, and let us know if you have further questions. | Summary: ### Summary of "Convolutions and More as Einsum: A Tensor Network Perspective with Advances for Second-Order Methods"
The paper "Convolutions and More as Einsum: A Tensor Network Perspective with Advances for Second-Order Methods" presents a compelling and innovative recast of convolution operations into tensor networks, offering significant benefits for both theoretical understanding and practical applications in machine learning. Here’s a motivated summary with the key points:
1. **Recast of Convolution Operation into Tensor Networks**:
- The authors provide a nice review and recast of convolutions by viewing them as tensor networks (TNs). This perspective simplifies the analysis of convolutions, allowing for intuitive reasoning about underlying tensor multiplications through easily interpretable diagrams. This approach makes it possible to perform function transformations like differentiation more effectively and efficiently.
2. **Tensor Networks for Machine Learning**:
- This paper is very good material for studying tensor networks within the context of machine learning. By representing convolutions as TNs, the authors bridge the gap between complex convolutional operations and the more straightforward tensor manipulations, making advanced theoretical concepts accessible to practitioners and researchers in machine learning.
3. **Implementation with Einsum and Opt_Einsum**:
- The authors implement convolutions using the einsum and opt_einsum functions in PyTorch/Python, providing a detailed comparison of the efficiency of these implementations. The tensor network approach not only accelerates the computation but also reduces memory overhead significantly. The paper demonstrates that tensor network implementations can achieve substantial performance improvements, particularly for less standard routines like Kronecker-factored Approximate Curvature (KFAC).
4. **Insights into Second-Order Optimization**:
- The implementation of KFAC benefits significantly from the tensor network view of convolutions. The TN perspective provides new insights into second-order optimization methods, allowing for more efficient computation of curvature approximations. The paper shows that the tensor network approach can lead to more frequent pre-conditioner updates, larger batch sizes without memory issues, and extends KFAC to transpose convolutions, demonstrating its broad applicability to second-order optimization in neural networks.
In summary, this paper provides a well-rounded and innovative approach to understanding and optimizing convolutions through tensor networks. It not only offers theoretical insights but also demonstrates practical performance benefits, making it a valuable resource for researchers and practitioners in machine learning.
Strengths: ### Strengths of "Convolutions and More as Einsum: A Tensor Network Perspective with Advances for Second-Order Methods"
1. **Innovative Perspective**:
- The paper introduces an innovative perspective by recasting convolution operations into tensor networks (TNs). This novel approach simplifies the analysis and implementation of convolutions, making complex operations more understandable and accessible.
2. **Advances in Second-Order Optimization**:
- The paper shows that the tensor network view of convolutions can greatly benefit second-order optimization methods, particularly Kronecker-factored Approximate Curvature (KFAC). This provides new insights and practical improvements in the computation of curvature approximations.
3. **Comprehensive Diagrams and Transformations**:
- The inclusion of detailed diagrams and specific transformations based on the connectivity pattern of convolutions enhances the clarity and utility of the proposed methods. These visual aids make it easier to understand and implement the tensor network approach.
4. **Flexibility and Modifiability**:
- The tensor network framework allows for easy modifications and supports various convolutional configurations, including different hyper-parameters, batching, channel groups, and convolution dimensions. This flexibility makes the approach broadly applicable.
5. **Experimental Validation**:
- The paper provides robust experimental validation, demonstrating the practical benefits of the tensor network approach in various convolutional operations and configurations. The significant speed-ups and memory savings observed in the experiments highlight the effectiveness of the proposed methods.
Weaknesses: ### Weaknesses of "Convolutions and More as Einsum: A Tensor Network Perspective with Advances for Second-Order Methods"
While the paper "Convolutions and More as Einsum: A Tensor Network Perspective with Advances for Second-Order Methods" offers numerous strengths, it also has some notable weaknesses:
1. **Suboptimal Use of Einsum for Convolutions**:
- Using einsum for convolution operations is generally not an optimal approach. The paper itself acknowledges that an optimal method may not exist, as indicated by the mixed results in Tables F4-7, where the performance comparison between the default PyTorch (PT) implementation and the tensor network (TN) approach shows that TN is not consistently faster. This suggests that while TNs offer theoretical benefits, their practical efficiency can vary significantly.
2. **Performance Trade-offs for Specific Architectures**:
- The einsum-based approach favors specific architectures but makes some of the frequently used models slower. This raises questions about the general applicability of the proposed method. There is a lack of intuition or explanation provided for why certain architectures benefit from the TN approach while others do not. Understanding these trade-offs is crucial for evaluating the practical utility of the method across different use cases.
3. **Limited Comparison with Optimized Convolution Algorithms**:
- The comparison between TN and PyTorch does not specify which convolutional algorithm is used in PyTorch's implementation. Furthermore, the paper does not compare the TN approach with other highly optimized versions of convolution, such as Winograd or Fourier transform-based methods. These optimized algorithms are known to significantly improve convolution performance, and a comparison with them would provide a more comprehensive assessment of the TN approach's efficiency.
In summary, while the paper presents a novel and theoretically sound method for implementing convolutions using tensor networks, its practical applicability is limited by the suboptimal performance of einsum for convolutions, unexplained performance trade-offs across different architectures, and a lack of comparison with other optimized convolution algorithms. Addressing these weaknesses could enhance the robustness and utility of the proposed approach.
Technical Quality: 3
Clarity: 4
Questions for Authors: Please see weakness
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Please see weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer iQrS,
thanks a lot for your thorough review; we appreciate the work you put into it and are glad you find the paper innovative and support our idea of making the complex yet powerful tensor network toolbox accessible to the ML community.
We would like to inform you that we conducted a new experiment with a real second-order method based on KFAC-reduce (see our [global rebuttal](https://openreview.net/forum?id=cDS8WxnMVP&noteId=dKv8oyjgLS)). Our TN implementation was able to halve the second-order method's run time overhead compared to SGD, and dramatically reduce its peak memory (sometimes by a factor of 2). We hope this provides further evidence that our approach is indeed useful to advance second-order methods and close their computational gap.
## **Weaknesses 1 & 2**
> Using einsum for convolution operations is generally not an optimal approach
> The einsum-based approach favors specific architectures
You are right that there are additional degrees of freedom to further accelerate our TN implementation. To give some intuition, we would like to point out that one main problem is that we cannot always leverage the index pattern's sparsity: For many convolutions, we can exploit the sparsity thanks to our proposed symbolic simplifications. However, whenever they cannot be applied, we must consider $\Pi$ as dense because PyTorch's `einsum` does not allow contracting tensors with mixed formats. This means we must sometimes carry out multiplies by zero, which is wasteful.
We attempted to address the scenario where our simplifications cannot be applied and have ideas how to progress on this frontier. We will describe them in an updated version of the paper as follows:
- We experimented with TACO [2], a tensor algebra compiler that can optimize contractions of mixed-format tensors. Unfortunately, TACO's Python frontend cannot yet generate GPU code, and on CPU we ran into a memory leak.
- It would be interesting to benchmark our approach on specialized hardware, e.g. [cerebras' wafer-scale engine](https://www.cerebras.net/product-chip/) processor, which [supports unstructured sparsity](https://www.cerebras.net/blog/sparsity-made-easy-introducing-the-cerebras-pytorch-sparsity-library). This would allow to remove many multiplications by zero. While we do not have access to such a chip, we are excited our approach could benefit from such hardware in the future.
- To the best of our knowledge, we believe that contraction path optimization of mixed-format tensor networks has not been studied sufficiently. This may be due to the absence of hardware that can leverage this information to speed up contraction. We hypothesize further performance improvements might be possible by taking the operand sparsity into account when optimizing the contraction path.
Please let us know if you think there is an additional approach we could try to boost the performance of our framework.
## **Weakness 3: Limited comparison with optimized convolution algorithms**
> The comparison between TN and PyTorch does not specify which convolutional algorithm is used in PyTorch's implementation.
You are right that we do not mention which convolution implementation is used in the comparison. We use PyTorch's [`torch.nn.functional.conv2d`](https://pytorch.org/docs/stable/generated/torch.nn.functional.conv2d.html), which automatically selects a 'good' implementation according to heuristics. We think this is a good baseline implementation as most engineers and researchers rely on it in practice.
> Furthermore, the paper does not compare the TN approach with other highly optimized versions of convolution
Consequently you do have a point that we did not look into comparing our approach with different convolution implementations. We found that PyTorch indeed has [different convolution algorithms](https://github.com/pytorch/pytorch/blob/1848cad10802db9fa0aa066d9de195958120d863/aten/src/ATen/native/cudnn/Conv.cpp#L486-L494) implemented. However, it seems like one [cannot specify manually](https://discuss.pytorch.org/t/manually-set-cudnn-convolution-algorithm/101596/2) which algorithm should be used.
We will try to look into this and identify other convolution implementations in PyTorch in order to provide a more fine-grained comparison. If you have any pointers, please let us know.
## References
[2] Kjolstad, F., Chou, S., Lugato, D., Kamil, S., & Amarasinghe, S. (2017). TACO: A tool to generate tensor algebra kernels. IEEE/ACM International Conference on Automated Software Engineering (ASE). | null | null | Rebuttal 1:
Rebuttal: # Evaluation on a real second-order method
Dear Reviewers,
We are glad to inform you that we successfully applied our work to a real second-order method to complement the speed-ups of fundamental operations shown in the manuscript. Specifically, we took the KFAC-based SINGD optimizer [1] and benchmarked the impact of our TN implementation on memory and run time in comparison to SGD. We will add the experiment to the main text (details below).
**TL;DR: Our TN implementation can reduce SINGD's run time overhead by 50\% and almost completely eliminate the memory overhead. Sometimes, it even reduces the optimizer's memory footprint by a factor of 2. This enables using larger batches or more frequently updating the pre-conditioner.**
We hope that this provides further evidence for the utility of our approach and demonstrates its capability to significantly reduce the computational gap between approximate second-order methods like SINGD and first-order methods like SGD.
Please let us know if you have any follow-up questions. We would be happy to discuss.
---
**Details:** We applied SINGD with KFAC-reduce and diagonal pre-conditioners to ResNet18 and VGG19 on ImageNet using a batch size of 128. We measured per-iteration time and peak memory on an NVIDIA A40 with 48 GiB of RAM. For SINGD, we compare computing the Kronecker factors with the standard approach via input unfolding versus our TN implementation.
- **ResNet18 with inputs of shape `(128, 3, 256, 256)`:**
| Optimizer | Per iteration [s] | Peak memory [GiB] |
|-----------|-------------------|------------------------|
| SGD | 0.12 (1.0x) | 3.6 (1.0x) |
| SINGD | 0.19 (1.7x) | 4.5 (1.3x) |
| **SINGD+TN (ours)** | 0.16 (1.3x) | 3.6 (1.0x) |
The TN implementation reduces SINGD's run time overhead compared to SGD by 50\%, and reduces its peak memory to that of SGD.
- **VGG19 with inputs of shape `(128, 3, 256, 256)`:**
| Optimizer | Per iteration [s] | Peak memory [GiB] |
|-----------|-------------------|------------------------|
| SGD | 0.69 (1.0x) | 14 (1.0x) |
| SINGD | 1.0 (1.5x) | 32 (2.3x) |
| **SINGD+TN (ours)** | 0.80 (1.2x) | 16 (1.1x) |
Again, our TN implementation halves SINGD's run time overhead compared to SGD. On this network, it also dramatically lowers the memory overhead, cutting down the peak memory by a factor of 2 from 32 GiB to 16 GiB.
## References
[1] Lin, W., Dangel, F., Eschenhagen, R., Neklyudov, K., Kristiadi, A., Turner, R. E., & Makhzani, A.. Structured inverse-free natural gradient descent: Memory-efficient & numerically-stable KFAC. ICML 2024. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Investigating Variance Definitions for Stochastic Mirror Descent with Relative Smoothness | Reject | Summary: This paper investigates a new definition for the stochastic gradient variance in mirror descent.
Most existing analyses for stochastic mirror descent require a strongly convex distance generating function to bound the gradient variance.
This limits their applicability, especially when this assumption fails.
In particular, Le Priol et al. (2021) have shown that none of the existing convergence rates applies to Gaussian maximum likelihood.
This paper aims to fix this issue by proposing a new definition of gradient variance.
They show that the new definition is strictly more general (more likely to hold in practice) than existing definitions, and derive convergence rates in the convex setting.
The authors demonstrate an application of the new variance definition by bounding the estimation error of MAP estimation for one-dimensional Gaussian distributions.
Strengths: - Analyzing stochastic mirror descent is hard when the distance generating function is not strongly convex.
This paper is a step towards generalizing mirror descent analyses.
In particular, I like Section 2.2 where the authors show that the proposed definition is strictly better than existing ones.
- The mirror descent analysis in this paper yields a non-asymptotic bound for the estimation error of Gaussian MAP. Based on Le Priol et al. (2021), this seems to be a fundamental problem lacking theoretical guarantees.
However, I am not knowledgeable enough to confirm the significance or novelty of this result in statistics.
Weaknesses: While developing the new gradient variance definition is certainly interesting, I have the following concerns.
- The authors have shown that their gradient variance \\(\sigma_{\star, \eta}^2\\) is finite for every fixed step size \\(\eta\\).
The convergence rates in the convex setting are proved using constant step sizes, and thus the optimality gap does not vanish.
To make the optimality gap vanish, diminishing step sizes are often required, which is not covered in this paper.
Proving convergence with diminishing step sizes probably requires characterizing the average variance \\(\frac1T \sum_{t=1}^{T} \sigma_{\star, \eta_t}^2\\), which I think can only be done in a case-by-case manner depending on the specific application.
- The only case so far where this new definition shines while all other definitions fail is maximum likelihood estimation for one-dimensional Gaussian distributions.
This is very restrictive.
Is it possible to generalize this result to multivariate Gaussian distributions?
In addition, it would be great if the authors could provide other applications to further justify the necessity of this new definition.
Minor:
- Line 192: Add a period.
- Bad notation in Section 4.2: It might be confusing to use $\Sigma$ to denote the standard deviation.
Consider using a different letter like $s$ or $\tau$.
Technical Quality: 3
Clarity: 3
Questions for Authors: See the questions in the weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review; we discuss your two main points below.
**Diminishing step-sizes:** One would indeed have to characterize some kind of average variance. Yet, when $\sigma^2_\eta$ converges for small $\eta$ (Proposition 2.6), one can expect to obtain bounds of the form $\sigma^2_\eta \leq R_0^2$ for $\eta \leq \eta_0$, with $R_0^2 > 0$ depending only on $\eta_0$. This is for instance the case in our application. Therefore, while case-by-case bounds on the average variance are required, there are strong reasons to believe that these will generally hold.
As a complement, diminishing step-size results are direct adaptations of the results in Section 3. The result in Section 4 actually relies on using diminishing step-sizes. In particular, the equivalent of Theorem 3.1 would read:
\begin{equation}
\eta_t \left[ \mathbb{E} f_{\eta_t}(x^{(t)}) - f_{\eta_t}^*\right] +\mathbb{E} D_h(x_\star, x^{(t+1)}) \leq \prod_{k=0}^t(1 - \eta_k \mu) D_h(x_\star, x^{(0)}) + \sum_{k=0}^t \eta_k^2 \prod_{\ell=k}^t (1 - \eta_\ell \mu) \sigma_{\star, \eta_\ell}^2.
\end{equation}
The convex result (the adaptation of Theorem 3.3) would read:
\begin{equation}
\frac{1}{T+1}\sum_{k=0}^T \eta_k \mathbb{E}\left[f_{\eta_k}(x^{(k)}) - f_{\eta_k}^* + D_f(x_\star, x^{(k)})\right] \leq \frac{D_h(x_\star, x^{(0)})}{T+1} + \frac{1}{T+1}\sum_{k=0}^T \eta_k^2 \sigma^2_{\star, \eta_k}.
\end{equation}
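As a rough numerical illustration of the application setting with diminishing step sizes (a toy sketch we add for concreteness, not the authors' code): for exponential families, the stochastic mirror descent step with the log-partition function as mirror map reduces, in mean parameters, to $\mu^+ = (1-\eta)\mu + \eta T(x)$ with $T$ the sufficient statistic. For a 1D Gaussian, $T(x) = (x, x^2)$, and with $\eta_n = 1/(n_0+n+1)$ the iterate is a running average of sufficient statistics, so it recovers the sample mean and variance. All numerical values below are illustrative.

```python
import random

def smd_gaussian_mle(samples, n0=1, mu0=(0.0, 1.0)):
    """Stochastic mirror descent for 1D Gaussian MLE in an exponential-family
    parameterization, with diminishing steps eta_n = 1/(n0 + n + 1).

    With the log-partition function as mirror map, the SMD step in mean
    parameters mu = (E[x], E[x^2]) reduces to the convex combination
        mu <- (1 - eta) * mu + eta * T(x),  with  T(x) = (x, x^2),
    so the iterate is a running average of sufficient statistics.
    """
    m1, m2 = mu0
    for n, x in enumerate(samples):
        eta = 1.0 / (n0 + n + 1)
        m1 = (1 - eta) * m1 + eta * x
        m2 = (1 - eta) * m2 + eta * x * x
    return m1, m2 - m1 * m1  # estimated mean and variance

random.seed(0)
data = [random.gauss(1.5, 2.0) for _ in range(20_000)]  # true mean 1.5, variance 4
mean_est, var_est = smd_gaussian_mle(data)
```

With these diminishing steps, `(mean_est, var_est)` approaches the true parameters `(1.5, 4.0)` as the number of samples grows.
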
**Generalization to multivariate Gaussian distributions:** Thank you for bringing up this point. We did not initially tackle the multivariate case since the unidimensional one already contained all the complexity of handling stochasticity with mirror descent. Yet, the multivariate case can also be tackled with our approach by replacing quantities with their natural multidimensional equivalents (and in particular $\log$ by $\log {\rm det}$).
More precisely, the log-partition function reads, up to constants:
\begin{equation}
A(\theta) = - \frac{1}{2}\log{\rm det}(\Sigma^{-1}) + \frac{1}{2} m^\top \Sigma^{-1} m = - \frac{1}{2}\log\det(- 2\theta_2) - \frac{\theta_1^\top \theta_2^{-1} \theta_1}{4},
\end{equation}
where $\theta_1 = \Sigma^{-1} m$ and $\theta_2 = - \Sigma^{-1} /2$. From there, performing similar derivations, the function $f_\eta$ reads:
\begin{equation}
f_\eta(\theta) = \frac{d}{2\eta}(\eta + \log(1 - \eta)) + \frac{1}{2\eta} \mathbb{E} \log \det \left(I + \eta \Sigma^{-1} (m - x)(m - x)^\top\right) - \frac{1}{2}\log \det (\Sigma^{-1}),
\end{equation}
which is a direct transcription of what we had in the unidimensional case. From here, differentiating the above objective implies that the minimizer is $m_\eta = m_\star$ (as in the unidimensional case), and
\begin{equation}
\frac{1}{\eta} \mathbb{E}\left[\left(\Sigma_\eta + \eta (m_\star - x)(m_\star - x)^\top\right)^{-1} \Sigma_\eta - I \right] + I = 0.
\end{equation}
Writing $X = \Sigma_\eta^{-1/2} (x - m_\star) \sim \mathcal{N}(0, \Sigma_\eta^{-1} \Sigma_\star)$, the previous expression becomes:
\begin{equation}
\mathbb{E}\left[ (I + \eta XX^\top)^{-1} XX^\top \right] = I,
\end{equation}
so in particular since $XX^\top$ is a rank-1 matrix, we can write:
\begin{equation}
\mathbb{E}\left[ \frac{XX^\top}{1 + \eta \|X\|^2} \right] = I.
\end{equation}
This implies that the covariance of $X$ is isotropic, so $\Sigma_\eta^{-1} \Sigma_\star = \alpha I$ for some $\alpha > 0$.
Taking the trace of this expression, we obtain
\begin{equation}
d = \mathbb{E}\left[ \frac{\alpha Z}{1 + \eta \alpha Z}\right],
\end{equation}
where $Z$ follows a chi-squared distribution with $d$ degrees of freedom. Since $x \mapsto x / (1 + \eta x)$ is a concave function, we can apply Jensen's inequality and obtain:
\begin{equation}
d \leq \frac{\alpha d}{1 + \eta \alpha d}.
\end{equation}
In particular, this leads to
\begin{equation}
\Sigma_\eta \geq (1 - \eta d) \Sigma_\star.
\end{equation}
Note that this is actually tighter than our 1d bound, which we can also improve in a similar way by using Jensen's inequality on the same function in Equation (81) (instead of the other bound).
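The Jensen step above can be sanity-checked numerically. The sketch below (our illustration, not part of the rebuttal) draws chi-squared samples as sums of squared standard Gaussians and verifies that $\mathbb{E}[g(\alpha Z)] \le g(\alpha d)$ for the concave function $g(y) = y/(1+\eta y)$; the values of $d$, $\eta$, and $\alpha$ are arbitrary illustrative choices.

```python
import random

def g(y, eta):
    # The concave function y -> y / (1 + eta * y) from the derivation above.
    return y / (1.0 + eta * y)

random.seed(0)
d, eta, alpha = 5, 0.05, 2.0  # illustrative values, not from the paper

# Z ~ chi-squared with d degrees of freedom, sampled as a sum of squares.
n_samples = 200_000
vals = []
for _ in range(n_samples):
    z = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(d))
    vals.append(g(alpha * z, eta))

empirical = sum(vals) / n_samples  # Monte Carlo estimate of E[g(alpha Z)]
jensen_bound = g(alpha * d, eta)   # g(E[alpha Z]), since E[Z] = d
```

By strict concavity of $g$, the empirical average lands strictly below the Jensen bound with a comfortable margin.
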
Now that we have a bound on $\Sigma_\eta$, we can (similarly to the 1d case) substitute it into
\begin{equation}
f_\eta(\theta_\eta) - f(\theta_\star) = \frac{1}{2} \log \det \left( \Sigma_\star^{-1} \Sigma_\eta\right) + \frac{1}{2\eta} \mathbb{E} \log \det \left((1 - \eta)\Sigma_\eta^{-1} \left[\Sigma_\eta + \eta (m_\eta - X)(m_\eta - X)^\top \right]\right).
\end{equation}
At this point, we use that $\log \det (A) \geq {\rm Tr}(I - A^{-1})$, so $\log \det (I + B) \geq {\rm Tr}(I - (I + B)^{-1}) = {\rm Tr}((I + B)^{-1} B)$, which is the multidimensional version of the bound $\log(1 + x) \geq x / (1 + x)$ that we use to obtain (89). Following similar derivations, we also obtain that the expectation of the log term is greater than 0, and we are left with:
\begin{equation}
\sigma_{\star, \eta}^2 \leq - \frac{1}{2\eta} \log \det \left( \Sigma_\star^{-1} \Sigma_\eta\right) \leq - \frac{d}{\eta} \log (1 - \eta d).
\end{equation}
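For completeness, the matrix inequality $\log \det (A) \geq {\rm Tr}(I - A^{-1})$ used above follows from the scalar bound $\log \lambda \geq 1 - 1/\lambda$ for $\lambda > 0$ (the function $\varphi(\lambda) = \log \lambda - 1 + 1/\lambda$ satisfies $\varphi(1) = 0$ and $\varphi'(\lambda) = (\lambda - 1)/\lambda^2$, so it is minimized at $\lambda = 1$), applied to the eigenvalues $\lambda_1, \dots, \lambda_d$ of $A \succ 0$:
\begin{equation}
\log \det (A) = \sum_{i=1}^d \log \lambda_i \geq \sum_{i=1}^d \left(1 - \frac{1}{\lambda_i}\right) = {\rm Tr}\left(I - A^{-1}\right).
\end{equation}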
In the end, plugging everything in, we obtain a $\tilde{O}\left(\frac{d^2}{n}\right)$ convergence result for the MAP with $n_0 > d$ independent samples (or, in fact, anything that makes the prior full rank), under the condition $\eta \leq 1/d$. This is consistent with Appendix A.3 of [17]. We write the result with a $\tilde{O}$ notation (which includes log factors), but the bounds are non-asymptotic, as in the 1d case.
In short, while the results are presented in the 1d case, no additional conceptual steps are required to tackle the d-dimensional one. We presented the main differences with the 1d proof in this rebuttal, and will make sure to add the full d-dimensional results to a revision of the paper.
Another main application of our bounds is Poisson inverse problems, i.e., minimizing $D_{KL}(b, Ax)$, which are often solved using (stochastic) mirror descent and for which theoretical results are limited.
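As an illustration of that setting (a toy sketch we add for concreteness, not from the paper), mirror descent with the negative-entropy mirror map on $x \mapsto D_{KL}(b, Ax)$ yields multiplicative updates $x_j \leftarrow x_j \exp(-\eta\, [\nabla f(x)]_j)$; the small instance, step size, and iteration count below are arbitrary illustrative choices.

```python
import math

def kl_objective(b, A, x):
    """Generalized KL: D_KL(b, Ax) = sum_i b_i log(b_i/(Ax)_i) - b_i + (Ax)_i."""
    Ax = [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(b))]
    return sum(bi * math.log(bi / axi) - bi + axi for bi, axi in zip(b, Ax))

def mirror_step(b, A, x, eta):
    """One entropic mirror descent step: x_j <- x_j * exp(-eta * grad_j),
    where grad_j = sum_i A_ij (1 - b_i / (Ax)_i). Keeps x positive."""
    Ax = [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(b))]
    grad = [sum(A[i][j] * (1.0 - b[i] / Ax[i]) for i in range(len(b)))
            for j in range(len(x))]
    return [xj * math.exp(-eta * gj) for xj, gj in zip(x, grad)]

# Hypothetical small instance: b = A x_true, so the optimal value is 0.
A = [[1.0, 0.5], [0.3, 1.0]]
x_true = [2.0, 1.0]
b = [sum(A[i][j] * x_true[j] for j in range(2)) for i in range(2)]

x = [1.0, 1.0]
obj_start = kl_objective(b, A, x)
for _ in range(500):
    x = mirror_step(b, A, x, eta=0.1)
obj_end = kl_objective(b, A, x)
```

On this consistent instance the objective decreases toward its optimal value of 0; the entropic mirror map keeps the iterates positive without any projection.
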
We hope that we have answered your questions in a satisfying way, and we are open to any further discussion.
---
Rebuttal Comment 1.1:
Comment: I acknowledge that I have read the rebuttal. I am satisfied provided that the authors add the extension to multivariate Gaussians. | Summary: This work revisits Stochastic Mirror Descent (SMD) proofs in the (relatively-strongly-)convex and relatively-smooth setting, and introduces a new (less restrictive) definition of variance which can generally be bounded (globally) under mild regularity assumptions. The paper then investigates this notion in more detail, and shows that it naturally leads to strong convergence guarantees for stochastic mirror descent. Finally, the paper leverages this new analysis to obtain convergence guarantees for the Maximum Likelihood Estimator of a Gaussian with unknown mean and variance.
Problem:
In proof of Proposition 2, by the definition of $\sigma_{*,\eta}^2$, we can obtain that
$ \sigma_{*,\eta}^2 = \frac{\min_x f(x) - \min_x f_\eta(x)}{\eta} $.
However, $x_* = \argmin_x f(x)$ is not equal to $x_*' = \argmin_x f_{\eta}(x)$.
This will lead to $\sigma_{*,\eta}^2 \neq \frac{1}{\eta^2} D_h(x^*, x^+)$.
Strengths: This work revisits Stochastic Mirror Descent (SMD) proofs in the (relatively-strongly-)convex and relatively-smooth setting, and introduces a new (less restrictive) definition of variance which can generally be bounded (globally) under mild regularity assumptions. The paper then investigates this notion in more detail, and shows that it naturally leads to strong convergence guarantees for stochastic mirror descent. Finally, the paper leverages this new analysis to obtain convergence guarantees for the Maximum Likelihood Estimator of a Gaussian with unknown mean and variance.
Weaknesses: No.
Technical Quality: 2
Clarity: 3
Questions for Authors: In the proof of Proposition 2, by the definition of $\sigma_{*,\eta}^2$, we can obtain that
$ \sigma_{*,\eta}^2 = \frac{\min_x f(x) - \min_x f_\eta(x)}{\eta} $.
However, $x_* = \argmin_x f(x)$ is not equal to $x_*' = \argmin_x f_{\eta}(x)$.
This will lead to $\sigma_{*,\eta}^2 \neq \frac{1}{\eta^2} D_h(x^*, x^+)$.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: No
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review.
We agree with you that the minima are not the same, and this is actually a crucial point in the paper; otherwise we would directly obtain that $\sigma^2_\eta = \frac{1}{\eta^2} \mathbb{E}D_h(x_\star, x_\star^+)$, which is not true. Yet, this holds asymptotically, as we show in Proposition 2.6.
We are not sure which Proposition 2 you are referring to, but we never use the identity that you mention at the end of your summary in our proofs. In Proposition 2.2, we simply bound $f_\eta$ in terms of another function, but everything remains within the $\min$. We would be glad to detail any result you might be wondering about.
We hope that you will reconsider your score in light of this answer, and would otherwise be happy to answer any other concern you might have. | Summary: The paper proposes a new analysis of SMD using a newly introduced generalized variance notion. The benefit of the new analysis is demonstrated in the application to maximum a posteriori estimation of Gaussian parameters.
Strengths: After introducing a new variance notion, the paper delves into comparison with other existing notions and shows that the proposed one is the largest meaningful notion. After a careful comparison, analysis of SMD is presented using this mild assumption. This analysis substantially departs from the results known in the literature. The demonstration of the use case of this new theory in the context of statistical estimation is also clear and adds more significance to the new theory.
Weaknesses: Major:
As explained after Theorem 4.3, the guarantees are derived for a reverse KL and may not imply anything about the desired quantity $f(\theta) - f(\theta_*)$. This, of course, significantly limits the contribution in this application, as non-asymptotic rates were known before.
Minor problems that I hope the authors can fix in the next revision.
1. Is the set C compact? If not, why does the minimum exist in Proposition 2.2?
2. Cannot find where $x_*$ is defined. Why does it exist?
3. There is a small issue with indices in equation (12) and in the paragraph before it. It should be $\eta_{n} = \frac{1}{n_0+n+1}$, and the stochastic gradient should depend on the new sample $X_{n+1}$.
Update: meaningful results are obtained only for the relatively strongly convex case (which is a stronger assumption than even strong convexity). In the convex case, a different (much stronger) definition is used. This becomes clear only after reading Appendix D. This limitation should be clarified in Section 3.2, where convergence on some surrogate loss is shown. I will update my evaluation.
Technical Quality: 4
Clarity: 4
Questions for Authors: Is it possible to consider other, non-Gaussian/non-conjugate, models?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and careful comments; we will make sure to fix all the points you raised in the next revision.
Before we start the point-by-point answer, we kindly ask whether you could provide references for non-asymptotic rates on $D_f(\theta_\star, \theta)$ (they probably do not exist for $f_\eta(\theta^{(n)})$, as we introduce this quantity). As a matter of fact, we not only give non-asymptotic results on $D_f(\theta_\star, \theta^{(n)})$, but *anytime* results (for all $n$ such that it is finite). These results can then be converted into results for $D_f(\theta^{(n)}, \theta_\star)$ for $n$ large enough, using arguments similar to those in Le Priol et al. (2021) (essentially, that Bregman divergences are close to quadratics when their arguments are close enough). Although we do not bound $f(\theta) - f(\theta_\star)$, we are not aware of equivalent results in the literature. However, we are not experts in Gaussian MAP estimation and would gladly compare to any reference you may provide.
Please find the answers to your questions below:
1 - The set C is not necessarily compact. The minimum should thus be an infimum and can potentially be $-\infty$ at this point if the $f_\xi$ are not lower bounded (Corollary 2.3 ensures finiteness). Yet, the assumptions of Corollary 2.3 are not necessary to ensure finiteness.
2 - $x_\star$ is the solution to Problem (1), defined in Def 2 as the point such that $f(x_\star) = f^*$, where $f^*$ is the infimum of $f$, but we will define it more explicitly. We did not want to make its existence a blanket assumption since $\sigma_{\star, \eta}^2$ does not require $x_\star$ to exist, and neither do Theorems 3.1 and 3.3 (where the $D_h(x_\star, y)$ terms can be replaced by $\lim_{n\rightarrow \infty} D_h(u_n, y)$ for any sequence $u_n$ such that $f(u_n) \rightarrow f^\star$): we do not use any property of $x_\star$ other than the fact that it is involved in defining $\sigma_{\star, \eta}$, which actually only involves $f^\star$. We used $x_\star$ here to facilitate reading.
Yet, other works we compare to [7, 19] assume it exists, so we make this assumption to have comparable results in terms of interpretation. Similarly, when $x_\star$ exists, our variance definition has a nice interpretation in terms of ``gradients at optimum'' (Proposition 2.6). Similarly, assuming existence of $x_\star$ (and that it belongs to ${\rm int} C$) allows for a natural interpretation of $D_f(x_\star, x)$ in terms of 'gradient norm' (Proposition 3.4). To avoid any confusion, we will make existence of $x_\star$ a blanket assumption for convenience, and discuss case-by-case where it can be relaxed.
3 - Thank you for spotting this indexing issue.
4 - Regarding your update: We respectfully disagree with your statement on several aspects:
- Relative strong convexity is not stronger than strong convexity. For instance, if $h(x) = -\log x$ and $f(x) = h(x) + g(x)$, where $g(x) = a x$ for some $a > 0$ (but $g$ could also be any other convex function), then $f$ is strongly convex relative to $h$ but not strongly convex in the usual sense.
- The fact that the control is not on $f(x^{(k)}) - f^\star$ is explicitly and clearly mentioned right after Theorem 3.3.
- The results of Theorem 3.3 are meaningful; they are simply not function-value results (see Proposition 3.4 and Proposition 2.5). It is an open question whether $f(x^{(k)}) - f(x_\star)$ can actually be bounded when using stochastic mirror descent without strong(er) assumptions on the variance (such as the ones we compare to in Section 2 or in Appendix D). The fact that we obtain convergence in terms of $f(x)$ in the deterministic case can be considered an artifact of the deterministic setting, and we can recover it with our analysis: it corresponds to proving a result analogous to Corollary 3.2 but for the convex setting, and in this case $\mathbb{E}[f_\xi((x^{(k)})^+)] = f(x^{(k+1)})$. In the stochastic setting this simplification does not happen, and so we are 'stuck' with bounding either $f_\eta(x^{(k)})$ or $\mathbb{E}[f_\xi((x^{(k)})^+)]$. Finally, note that Dragomir et al. (2021) obtain even weaker results in the convex setting, since they do not bound $f_\eta(x^{(k)})$, and Hanzely and Richtárik (2021) use a definition that is comparable in strength to the one discussed in Appendix D.
In the end, we fully agree with you that convergence of function values directly would be appreciated. Unfortunately, it simply does not seem to be achievable in our setting.
5 - Is it possible to consider other non-Gaussian/non-conjugate models?
Non-Gaussian models can be considered: in this case, Proposition 4.1 still applies to exponential families (as highlighted by Reviewer 35cP). However, several properties of the Gaussian distribution are used when bounding the variance, so this work would have to be redone on a distribution-specific basis.
For non-conjugate models, it would first be necessary to express the updates as stochastic mirror descent steps, which might be done on a case-by-case basis but perhaps not in full generality.
We hope that we have answered your concerns with the desired clarifications and that you will go back to your more positive evaluation. We are available for further discussion if this is not the case.
---
Rebuttal Comment 1.1:
Title: Asymptotic results and relative strong convexity
Comment: 1. It was a typo, I meant "asymptotic" results were already known before, e.g., discussed in Section 5 of Le Priol et al. [17]. My concern here is that whether the bound on the reverse KL is meaningful in this problem, because it seems to say nothing about $f(\theta) - f(\theta_*)$. The convergence in function value as in Hanzely and Richtárik [10] would be more suitable.
2. You're right, relative strong convexity is weaker than strong convexity only if h is strongly convex, which you don't assume here.
3. I am not entirely convinced by the statement "it simply does not seem to be achievable in our setting." Just because the proof does not go through easily in function value does not mean that function-value convergence is not achievable. Why do the authors think it is not possible? If the authors can provide a convincing numerical experiment or a counter-example showing that the function value might not converge under these assumptions, I will be happy to increase my evaluation.
4. I believe the lack of convergence in function value in the convex case is really important to clarify because it limits the applications of the results a lot. Regarding the other application suggested by the authors in the general response, Poisson inverse problems, it is also unclear whether the problem is relatively strongly convex in general (even for a positive definite matrix).
---
Reply to Comment 1.1.1:
Comment: 1 - We agree that convergence in function values would be more suitable; unfortunately, the assumptions from Hanzely and Richtárik [10] are too strong to be applied in this setting, which was one of the reasons for this work in the first place.
4 - The Poisson inverse problem is to the best of our knowledge not relatively strongly convex in general (the Hessian of the mirror can blow up if one $x_j$ goes to 0, whereas the Hessian of the objective remains bounded as long as $A_i^\top x$ remains bounded away from 0).
Regarding our optimality criteria in the plain convex case, we would like to emphasize the following point (shared answer with that to Reviewer HwBA): our results are non-standard **with respect to standard Gradient Descent results**, not mirror descent ones. As a matter of fact:
- Dragomir et al. (2021) only control $D_h(x_\star, x)$ in the relatively strongly convex setting and $D_f(x_\star, x)$ in the convex one. We additionally control $f_\eta(x) - f_\eta^\star$ in both settings, under a weaker variance assumption.
- Loizou et al. (2021) do not consider a mirror descent setting (besides the "vanishing variance" issue, which we tackle below).
- Hanzely and Richtárik (2021) make a very strong variance assumption, under which we have that $f_\eta(x) + \eta \sigma^2_{\rm sym} \geq \mathbb{E}f(x^+)$ (proof at the end of this comment), and so the control on $f_\eta$ gives a non-asymptotic control on $f$ in this case. Note that a similar control can be obtained from the variance assumption of Appendix D.
In particular, all other existing stochastic mirror descent analyses provide **weaker** results in terms of optimality metrics. We can recover control on function values either asymptotically or through stronger variance assumptions. To us, this is similar to the fact that standard relatively strongly convex Mirror Descent results only control $D_h(x_\star, x)$, but not $D_h(x, x_\star)$.
Regarding the main application of the paper, we are in a similar position: as far as we know, existing results obtain bounds that hold either asymptotically or for $n$ large enough. This is because they bound other objectives, and then transfer their control to the right objective. Instead, we obtain anytime bounds, though on the reverse objective (and we are not aware of such bounds in other works).
We have tried to obtain proper lower bounds to strengthen these claims on optimality criteria, but these are unfortunately very hard to come by, as they would combine the difficulties of both the stochastic setting and the relative mirror descent setting (see, e.g., [1] for the impossibility of acceleration in the mirror setting, which remained open for a significant time).
We unfortunately do not have time to provide convincing numerical experiments in this rebuttal. One reason why we think that convergence in function values is not achievable without stronger assumptions is that our convex result is basically an equality (up to the removal of the $D_h(x_\star, x^{(k+1)})$ term after the telescoping sum), so there is little room for improvement. Another is the one we gave you in the previous comment: the deterministic result follows the same scheme, but a simplification happens that makes it possible to bound function values. This simplification does not happen in the stochastic case. | Summary: This paper introduces a new variance assumption for the analysis of stochastic mirror descent (SMD) to handle cases where the standard bounded variance assumption does not hold. The authors show that this new assumption holds under some regularity assumptions. The authors then use the new results to derive convergence guarantees for the MLE and MAP of a Gaussian with unknown mean and variance, using the connection between this problem and SMD convergence guarantees.
Strengths: The topic is definitely interesting and timely. Results for stochastic optimization without bounded variance assumptions are quite interesting. As shown in the prior literature, this task is especially subtle in the Bregman case. As the authors argue in detail, this difficulty is acknowledged in previous works such as [7] and [17]. It is neat that the authors show the importance of the new results by deriving convergence bounds for MAP/MLE of a Gaussian with unknown mean and variance by using the connection between these bounds and SMD in [17] (which itself is a nice connection). This adds a nice and clear motivation. The work makes some progress towards solving open questions from [17], while as the authors clearly explain, the open questions are still not completely solved.
Weaknesses: I find the motivation of the paper and its application to MAP bounds interesting, however I have some concerns about writing and the strength of the derived results in the context of the application in Section 4. It seems necessary for the latter point to be clarified.
- The authors write after Theorem 4.3 that the open problem from [17] is not completely resolved because convergence is not shown for the desired quantity. In particular, the authors describe that the guarantee is for $D_A(\theta_*, \theta^{(n)})$ instead of $D_A(\theta^{(n)}, \theta_*) = f(\theta) - f(\theta_*)$. The authors then write that the two quantities can be related asymptotically, but they state: "but we might also be able to exploit this control over the course of the iterations". Can you make this point more precise? It is not clear to me what this last part is trying to describe. Is it meant to be understood as an open question, or is it possible for the authors to derive the stronger result? Since the paper mentions in many places that showing convergence guarantees for MAP is an important contribution, it is important to justify the convergence metric used in the results to fully support this claim.
- It might be better to replace MLE with MAP in the abstract, since Section 4 is mostly about MAP.
- The abstract states "strong convergence" a couple of times; I suggest removing this, since "strong convergence" has a precise meaning in infinite-dimensional optimization, and its usage in the abstract is confusing because of this. Clearly this is not how the authors are using the term; rather, they seem to use it as a subjective adjective, which is not necessary. By subjective, I mean: how can one decide which convergence result is strong and which is not?
- Assumption 1 requires all $f_\xi$ are convex. This is rather strong since the standard assumption is $\mathbb{E} f_\xi$ to be convex. Can you discuss this more? According to Prop 4.1, this holds for the main application of the paper, but it might be worth discussing why componentwise convexity is needed.
Technical Quality: 3
Clarity: 2
Questions for Authors: Please see the questions in the "Weaknesses" section.
- Can you describe more clearly "additional assumptions on the mirror, such as strong convexity (in the usual sense), to ensure bounded variance"?
- Line 179, 180: can you provide the proof for getting $\sigma_{\star, \eta}^2 \leq \mathbb{E} \| \nabla f_{\xi}(x_\star)\|^2$? I could not find this in Appendix B, can you let me know if I missed this?
- Corollary 2.3: Why does it follow that $\sigma_{\star, \eta}^2$ is finite? There is no lower bound on $\eta$; why can't one make this infinite for arbitrarily small $\eta$? The explanation in lines 132-136 is also not clear. Can you provide clearer references and explanations for the claim you give in bold in this passage? To add to the confusion, line 206 mentions that the quantity appearing in this corollary can explode as $\eta \to 0$, citing [19]. Can you first provide the precise pointer to [19] for this variance bound, and then also explain the difference between it and Corollary 2.3?
- line 137: should change to "has already been investigated"
- Line 262-267: The authors say that the result is "non-standard" but they do not explain how meaningful it is, since the convergence is not shown for standard quantities. Can you please clarify?
- Bregman methods are often used with compact domains (especially the simplex). It might be good to mention this in the paper as well, to say that some interesting applications may satisfy the bounded variance assumption while other interesting applications do not. It might also help to provide more examples where we have unbounded domains and use SMD.
- Section 4.2: Where did you define $R_{-}^*$?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The limitations are discussed clearly. The authors provided explanations after Theorem 3.3 and Theorem 4.3 to describe the limitations of their result.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your careful review and suggestions. We will change MLE to MAP in the abstract and remove the 'strong' adjective, which referred to the fact that we obtain guarantees for SMD similar to those available for SGD in the 'usual' setting, and was not specifically aimed at the statistical setting.
**Control over the course of iterations:** The sentence 'we might be able to exploit this control over the course of iterations' refers to the fact that the convergence result in Section 4 completely discards the fact that we bound not only $D_A(\theta_\star, \theta^{(n)})$ but also $f_{\eta_n}(\theta^{(n)}) - f_{\eta_n}(\theta_{\eta_n})$. It might be possible to exploit the bound on $f_{\eta_n}(\theta^{(n)}) - f_{\eta_n}(\theta_{\eta_n})$ to obtain one on $D_A(\theta^{(n)}, \theta_\star)$, instead of obtaining it from $D_A(\theta_\star, \theta^{(n)})$ for $\theta^{(n)} \approx \theta_\star$. Yet, we have not been able to leverage this control in a satisfying way, so this sentence is to be understood more as an open question (and as noting that we control another metric related to the desired one). We will make this clearer to avoid confusion.
**Component-wise convexity:** This assumption is mostly used in Proposition 2.7, where we use Bregman cocoercivity of the $f_\xi$ to compare with the results of Dragomir et al., which require it (so we require it as well to compare to their definition). Our other results do not require convexity of the $f_\xi$ directly, but we do use the fact that each $D_{f_\xi}$ is relatively smooth. We will discuss this more clearly.
**Questions:**
1) This sentence refers to the variance expression in Lines 191-192, which can be infinite in general, but is finite if $h$ is strongly convex (in the usual, not relative, sense), since in this case the eigenvalues of $\nabla^2 h^*(u)$ are bounded uniformly in $u$.
2) The proof is not in Appendix B indeed, sorry for this. As sketched in the main text, we write that in this case, $\sigma^2_{\star, \eta} \leq \frac{f(x_\star) - f(x) + \frac{\eta}{2} \mathbb{E}\|\nabla f_\xi(x)\|^2}{\eta}$. Then, we use that $\frac{1}{2}\|\nabla f_\xi(x)\|^2 \leq \| \nabla f_\xi(x) - \nabla f_\xi(x_\star)\|^2 + \|\nabla f_\xi(x_\star)\|^2 \leq 2L D_{f_\xi}(x, x_\star) + \|\nabla f_\xi(x_\star)\|^2$, using cocoercivity of $f_\xi$ (which requires individual convexity of the $f_\xi$). Then, applying expectations, we have $\mathbb{E} \left[2\eta L D_{f_\xi}(x, x_\star)\right] = 2 \eta L D_f(x, x_\star) = 2\eta L \left[ f(x) - f(x_\star)\right]$, so that $\sigma^2_{\star, \eta} \leq \frac{(2\eta L - 1)}{\eta}\left[f(x) - f(x_\star)\right] + \mathbb{E}{\|\nabla f_\xi(x_\star)\|^2}$, leading to the desired result. This requires $\eta \leq 1/(2L)$, similarly to the Euclidean proof that uses this definition.
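The cocoercivity step in this derivation can be sanity-checked numerically. The sketch below (our illustration, not from the rebuttal) verifies $\|\nabla f(x) - \nabla f(y)\|^2 \le 2L\, D_f(x, y)$ for an $L$-smooth convex quadratic $f(x) = \frac{1}{2} x^\top M x$, with an arbitrary illustrative positive definite $M$.

```python
import random

# f(x) = 0.5 x^T M x, so grad f = M x, D_f(x, y) = 0.5 (x-y)^T M (x-y),
# and the smoothness constant is L = lambda_max(M).
M = [[2.0, 0.5], [0.5, 1.0]]  # symmetric positive definite (illustrative)

def matvec(mat, v):
    return [sum(mat[i][j] * v[j] for j in range(len(v))) for i in range(len(mat))]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# lambda_max of a symmetric 2x2 matrix, in closed form.
tr = M[0][0] + M[1][1]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
L = tr / 2 + ((tr / 2) ** 2 - det) ** 0.5

random.seed(1)
worst_ratio = 0.0
for _ in range(1000):
    z = [random.uniform(-5, 5), random.uniform(-5, 5)]  # z = x - y
    Mz = matvec(M, z)
    grad_gap = dot(Mz, Mz)       # ||grad f(x) - grad f(y)||^2 = ||M z||^2
    bregman = 0.5 * dot(z, Mz)   # D_f(x, y)
    if bregman > 1e-12:
        worst_ratio = max(worst_ratio, grad_gap / (2 * L * bregman))
```

The ratio never exceeds 1, matching the inequality $z^\top M^2 z \le \lambda_{\max}(M)\, z^\top M z$ that underlies cocoercivity for quadratics.
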
3) Finiteness is to be understood for a fixed $\eta$, which is already non-trivial due to the supremum on $x$. The precise reference in [19] is Assumption 2.1, and the difference is that this does not include the $1/\eta$ factor. As a result, the RHS of their Theorem 3.1 does not go to 0 as $\eta \rightarrow 0$. They study an adaptive step-size setting, but they would still get a non-vanishing RHS with a constant step-size (even taking it to 0) or with vanishing step-sizes. Instead, $\eta \sigma^2_{\star, \eta}$ vanishes as $\eta \rightarrow 0$, ensuring that $D_h(x_\star, x^{(t)})$ can be made arbitrarily close to 0 with enough steps and small enough step-sizes (which is not the case in [19]).
4) Clarifying `non-standard': How meaningful this is depends very much on the application. The guarantee on $f_\eta$ is meaningful in the sense that $f_\eta \rightarrow f$ as $\eta \rightarrow 0$ (Proposition 2.5). The result on $D_f(x_\star, x)$ can be seen as a control on the gradients of $f$, for instance through Proposition 3.4. Finally, $D_f(x_\star, x) \approx D_f(x, x_\star)$ if $x \approx x_\star$. Please note that existing results with relaxed variance assumptions (such as Dragomir et al. (2021)) also only obtain convergence on $D_f(x_\star, x)$ in the convex case. The 'non-standard' aspect is with respect to usual results on gradient descent.
5) Thank you for your suggestion, we will add a remark on this aspect in the paper.
6) $\mathbb{R}^*_- = \{ u \in \mathbb{R} : u < 0 \}$.
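As a numerical sanity check of the cocoercivity inequality $\|\nabla f_\xi(x) - \nabla f_\xi(x_\star)\|^2 \leq 2L\, D_{f_\xi}(x, x_\star)$ used in the answer to 2), the following toy sketch (our own illustration with a smooth convex quadratic; not code from the paper) verifies it at a random point:

```python
import numpy as np

# Toy check of cocoercivity for an L-smooth convex quadratic f:
#   ||grad f(x) - grad f(x_star)||^2 <= 2 * L * D_f(x, x_star).
rng = np.random.default_rng(1)
d = 4
M = rng.normal(size=(d, d))
H = M.T @ M + np.eye(d)            # positive-definite Hessian
L = np.linalg.eigvalsh(H).max()    # smoothness constant of f
x_star = rng.normal(size=d)        # minimizer of f

def f(x):
    return 0.5 * (x - x_star) @ H @ (x - x_star)

def grad(x):
    return H @ (x - x_star)

x = rng.normal(size=d)
D_f = f(x) - f(x_star) - grad(x_star) @ (x - x_star)  # Bregman divergence of f
lhs = np.sum((grad(x) - grad(x_star)) ** 2)           # lhs <= 2 * L * D_f holds
```

For a quadratic this reduces to $v^\top H^2 v \leq L\, v^\top H v$, which holds since $H^2 \preceq L H$.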
Thank you again for your many suggestions and comments; we hope that our results are clearer to you now and that you are willing to increase your score. We will gladly discuss any further questions in more detail.
---
Rebuttal Comment 1.1:
Comment: Thank you for the explanations. Even though I see the potential value of the paper, I still have some questions and concerns.
Question: You mention in the answer to me and Reviewer H5hK that the guarantee on $f_\eta$ is meaningful because $f_\eta \to f$. But I am not sure that this is a sufficient justification. This is especially because the paper wants to get non-asymptotic guarantees, whereas justifications such as $f_\eta \to f$ are only asymptotic. In particular, it seems you cannot convert a non-asymptotic guarantee on $f_\eta$ to a non-asymptotic guarantee on $f$; is this correct? If so, then I think one needs another way to justify that a non-asymptotic guarantee on $f_\eta$ is meaningful. I think a clearer connection is necessary here between the different optimality metrics. Please let me know if I am missing something here.
Concerns: The discussions where you compare variance definitions are quite confusing, at least to me. And this is perhaps one of the most important parts of the paper for the reader to understand the contributions. This was the case when I was writing the review, and it is still the case even with the rebuttal. This really needs to be improved, in my opinion. For example, the discussion on lines 201-210 is quite confusing. You say "The vanishing variance term can be obtained by rescaling by $1/\eta$", which I am sure is very clear to the authors but not to the readers. Can you clearly describe what you mean by "vanishing variance term"? It is confusing because earlier you talk about the step size reducing the variance. But when you scale $f(x_*) - E[f_\xi(x_*^\xi)]$ by $1/\eta$, of course a smaller $\eta$ makes this quantity larger. I guess what you call the "variance term" is different, but please be more explicit.
Lastly, I want to emphasize that presentation is also extremely important since at the end we, as reviewers, and the authors want to make sure that the paper will be received well by our community. Especially given the volume of papers that appear in our conferences, I believe making papers as clear as possible and as readable as possible is extremely important to make sure that the community can read and understand the developments.
If the authors can justify the non-standard guarantees (not on objective value) for the main application of the paper and promise to improve the presentation, I can push my recommendation to a "5" which is an accept rating, but my reasons are outlined above for why I am not comfortable raising the score more than that.
---
Reply to Comment 1.1.1:
Comment: **Regarding non-standard guarantees.** Your reasoning is correct: non-asymptotic guarantees on $f_\eta$ do not, a priori, yield guarantees on $f$ without further assumptions. Yet, we would like to emphasize that this non-standardness is **with respect to standard Gradient Descent results**, not mirror descent ones. As a matter of fact:
- Dragomir et al. (2021) only control $D_h(x_\star, x)$ in the relatively strongly convex setting and $D_f(x_\star, x)$ in the convex one. We additionally control $f_\eta(x) - f_\eta^\star$ in both settings, under a weaker variance assumption.
- Loizou et al. (2021) do not consider a mirror descent setting (besides the "vanishing variance" issue, which we tackle below).
- Hanzely and Richtarik (2021) make a very strong variance assumption, under which we have that $f_\eta(x) + \eta \sigma^2_{\rm sym} \geq \mathbb{E}f(x^+)$ (proof at the end of this comment), and so the control on $f_\eta$ gives a non-asymptotic control on $f$ in this case. Note that a similar control can be obtained from the variance Assumption from Appendix D.
In particular, all other existing stochastic mirror descent analyses provide **weaker** results in terms of optimality metrics. We can recover control on function values either asymptotically or through stronger variance assumptions. To us, this is similar to the fact that standard relatively strongly convex Mirror Descent results only control $D_h(x_\star, x)$, but not $D_h(x, x_\star)$.
Regarding the main application of the paper, we are in a similar position: as far as we know, existing results obtain bounds that hold either asymptotically or for $n$ large enough. This is because they bound other objectives, and then transfer their control to the right objective. Instead, we obtain anytime bounds, though on the reverse objective (which no other work obtains).
We have tried to obtain proper lower bounds to strengthen these claims on optimality criteria, but these are unfortunately very hard to come by, as they would combine the difficulties of both the stochastic setting and the relative mirror descent setting (see, e.g., [1] for the impossibility of acceleration in the mirror setting, which remained open for a significant time).
**Comparison to other variance definitions.** The "variance term" refers to the $\sigma^2$ term on the RHS of their Theorem 3.1, which is equal to $f(x_\star) - E\left[f_\xi(x_\star^\xi)\right]$ and does not go to 0 when $\eta \rightarrow 0$. This does not describe the actual behaviour of SGD, which is that the 'variance term' (as opposed to the term that depends on the initial conditions) vanishes when the step-size is small, allowing $f(x^{(t)}) - f(x_\star)$ (or other optimality criteria) to get arbitrarily close to 0 (with enough iterations and a small enough step-size). We would obtain the same behaviour in our analysis if we were to use their variance definition.
Instead, in our analysis, we replace $f(x_\star) - E\left[f_\xi(x_\star^\xi)\right]$ by $f(x_\star) - f_\eta^\star$, which goes to $0$ for $\eta \rightarrow 0$ (since the $1/\eta$ rescaled version has a finite limit as shown in Proposition 2.6). In this case, the optimality criterion can go arbitrarily close to 0 given enough iterations and small enough step-size.
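To make this behaviour concrete, here is a toy sketch (our own illustration, not from the paper) of SGD on $f_\xi(x) = (x-\xi)^2/2$ with $\xi$ uniform on $\{-1,+1\}$, where the stationary error level shrinks with the step-size $\eta$:

```python
import numpy as np

# SGD on f_xi(x) = (x - xi)^2 / 2 with xi uniform on {-1, +1}, so x_star = 0.
# The long-run mean-squared error E[x^2] settles near eta / (2 - eta):
# the 'variance term' vanishes as the step-size eta goes to 0.
def sgd_mse(eta, steps=200_000, seed=0):
    rng = np.random.default_rng(seed)
    xis = rng.choice([-1.0, 1.0], size=steps)
    x, acc, burn = 1.0, 0.0, steps // 2
    for t in range(steps):
        x = (1 - eta) * x + eta * xis[t]   # SGD step: x - eta * (x - xi)
        if t >= burn:                      # average x^2 after burn-in
            acc += x * x
    return acc / (steps - burn)

mse_large = sgd_mse(0.1)   # larger step-size: larger residual error
mse_small = sgd_mse(0.01)  # smaller step-size: error much closer to 0
```

The recursion $x^+ = (1-\eta)x + \eta\xi$ makes the stationary level $\eta/(2-\eta)$ exact, so reducing $\eta$ drives the residual error to 0.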
**Final comment.** We agree that presentation and clarity are extremely important, and thus thank you for helping us improve them for our paper. We hope these points are now clearer to you, and that you are now in a more comfortable position to recommend acceptance for our paper.
**Proof of the claim.** We have that $- \frac{1}{\eta} D_h(x, x^+) = \nabla f_\xi(x)^\top(x^+ - x) + \frac{1}{\eta} D_h(x^+, x) = \left(\nabla f_\xi(x) - \nabla f(x)\right)^\top (x^+ - x) + \frac{1}{\eta} D_h(x^+, x) + \nabla f(x)^\top (x^+ - x) $
and so
$- \frac{1}{\eta} D_h(x, x^+) = f(x^+) - f(x) + \left(\nabla f_\xi(x) - \nabla f(x)\right)^\top (x^+ - x) + \frac{1}{\eta} D_h(x^+, x) - D_f(x^+, x).$
Taking expectations on both sides, we obtain using the relative smoothness of $f$ that:
$f_\eta(x) \geq \mathbb{E} f(x^+) - \eta \sigma^2_{\rm sym}$, leading to the desired result.
[1] Dragomir, R. A., Taylor, A. B., d’Aspremont, A., & Bolte, J. (2022). Optimal complexity and certification of Bregman first-order methods. Mathematical Programming, 1-43. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their careful evaluation and detailed feedback, which will greatly help us improve the clarity of our paper, in particular regarding the precise (and minimal) set of assumptions under which our results hold. We answer all questions point-by-point in the specific rebuttals, but first give application examples beyond the Gaussian MAP estimation.
As detailed in the introduction, mirror descent is hidden in many algorithms, and we believe that our framework would be valuable for analyzing many more algorithms that do not yet benefit from non-asymptotic convergence guarantees. Building bridges between our analysis and existing results (hopefully improving on them) is an exciting though challenging task (as the Gaussian MAP example shows).
To us, the other natural key application of our results is Poisson inverse problems, which require solving problems of the form $\min_x D_{KL}(b, Ax)$. These problems can be analyzed in the same way as in Section 4. Yet, while our results give a precise framework for deriving convergence bounds for this other important problem, the end-to-end analysis is still challenging (as in the Gaussian MAP case) and out of scope for this paper.
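For concreteness, here is a minimal deterministic sketch of the Bregman (mirror) step for such a Poisson-type objective, using the Burg entropy $h(x) = -\sum_i \log x_i$ and the classical relative-smoothness constant $L = \|b\|_1$ (our own illustration; the paper's stochastic analysis is not reproduced here):

```python
import numpy as np

# Bregman gradient descent on f(x) = sum_j (Ax)_j - b_j * log((Ax)_j),
# which is L-relatively smooth w.r.t. the Burg entropy h(x) = -sum_i log(x_i)
# with L = sum_j b_j; step-size 1/L then guarantees monotone decrease.
rng = np.random.default_rng(0)
m, d = 20, 5
A = rng.uniform(0.5, 1.5, size=(m, d))
b = A @ rng.uniform(0.5, 2.0, size=d)    # noiseless observations

def f(x):
    Ax = A @ x
    return np.sum(Ax - b * np.log(Ax))

def grad_f(x):
    return A.T @ (1.0 - b / (A @ x))

eta = 1.0 / np.sum(b)                    # step-size 1/L
x = np.ones(d)
f0 = f(x)
for _ in range(500):
    # grad h(x) = -1/x, so grad h(x+) = grad h(x) - eta * g solves to:
    x = x / (1.0 + eta * x * grad_f(x))  # iterate stays positive
```

A stochastic version would sample one row of $A$ per step; the sketch keeps the full gradient for clarity.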
On a side note, our results can improve on analyses using other variance definitions even when these are bounded. For instance, our variance definition is tighter than that of Dragomir et al. (2021) even for strongly convex mirrors, since we only require bounding the smallest eigenvalue of the Hessian locally (around $x_\eta$) instead of globally (as for strong convexity), so the constant might be much better.
Also note that, as discussed in the answer to Reviewer cHke, our results can be extended to handle multi-dimensional Gaussian MAP estimation.
We thank the reviewers again for their time in carefully evaluating our paper, and are available for any further questions. | NeurIPS_2024_submissions_huggingface | 2024 | Summary: This submission studies stochastic mirror descent (SMD) under quite mild conditions on the mirror map and objective function. More specifically, there is a variety of SMD analyses in the literature, but virtually all of them require strong conditions on the mirror map (such as strong convexity) that do not hold in cases where we only have relative smoothness (and/or relative strong convexity) of the objective function with respect to the mirror map. The authors propose a definition of variance of SMD that is better behaved under minimal assumptions. They show how this new variance can be used to obtain general convergence results for SMD. Finally, they show how the new variance definition for SMD can give some kind of non-asymptotic convergence rates for MLE and MAP of Gaussian parameter estimation with unknown mean and covariance, making partial progress on a conjecture posed by Le Priol et al.
Strengths: This is an interesting paper that tackles a hard theoretical problem. I think it is of interest to researchers interested in mirror descent. The new definition of variance of SMD has interesting properties even under very mild assumptions, as the authors show when comparing the new definition with other definitions of SMD variance in the literature. Moreover, the results in Sec 4 already show how this is an interesting way to analyze SMD, and this is likely to lead to follow-up work in the area.
So the strengths summarized in bullet points:
- Thorough comparison of new variance definition with other definitions in the literature and proof of finiteness under assumption 1.
- General convergence theorems for SMD under mild assumptions that recover known results in the deterministic case, showing this may be a "natural" variance definition for SMD and useful for our understanding of SMD.
- Partial progress towards the conjecture of Le Priol et al.
Weaknesses: In its current form, I have one main concern with the paper:
- Despite what is written at the beginning of the paper, **Assumption 1** is NOT a blanket assumption used throughout the paper. In fact, it appears only section 2 uses assumption 1. The rest of the paper uses a weaker assumption that is never clearly stated, which makes it hard to understand when the results hold or not.
This is likely to be a problem with presentation, but in its current form it is often not clear what are the assumption required at each point. Since the main point of the paper is to use a minimal number of assumptions, it is very important for those to be clearly stated.
A minor weakness is the lack of an example besides MAP/MLE. I could not easily think of a concrete example where I could apply the convergence results in Sec 3 or 4. If the authors have an example besides MLE or MAP (even if a bit artificial), it would be great. For example, an example with a mirror map such as $- \log x$ would be interesting. This is a minor suggestion, but it would be nice to see a concrete example of the use of the results in Sec 2 (the results in Sec 4 require a specialized bound on the variance).
Summary of weaknesses:
- Unclear required assumptions for many of the results
- (Minor weakness) Lack of a concrete (even artificial) example of application of any of the theorems in Sec 3 beyond MAP/MLE (and the latter require specialized bounds on the variance).
Technical Quality: 2
Clarity: 1
Questions for Authors: Currently I'm leaning towards accepting this paper (although borderline), but I'd expect the authors to address some of my concerns regarding assumptions (which, ultimately, are concerns about the correctness of the paper).
If the authors successfully address my concerns, I'll most likely increase my score. At this point I believe there may be mistakes with the assumptions, but my confidence is not high due to the lack of clarity regarding assumptions. I will be forced to decrease my score if another reviewer or I find a hard inconsistency in the proofs and the assumptions needed.
#### Questions about assumptions
- *$h$ should be of Legendre type?*: At some points the authors say they do not need assumption 1, but instead only need eq. (3) to hold for all iterates. One issue is that this is an assumption on the iterates, not on $h$, and leaves the conditions on $h$ way too open. In fact, for many of the properties the authors seem to use in the appendix (Bregman divergence duality, for example), we usually require $h$ to **at least** be of Legendre type. Without this assumption it is unclear whether many of the derivations hold (I haven't checked the appendix carefully, but $h$ being of Legendre type is close to the minimal assumption for mirror descent to be well-defined). It is not clear that (3) implies a condition on $h$, and trying to prove something like this seems unnecessary. The authors should either explain why they do not assume $h$ is of Legendre type (and show that all the properties they use actually hold even without assumption 1), or clearly state this assumption.
- *Formal statement of the assumption used?*: Ideally the authors would have the two different assumptions clearly stated, and then clearly mention either in each section or in the statement of each result which results depend on which assumptions. Also, eq. (3) seems to not be well-defined in general for general $h$ of Legendre type (for example, if $h = - \ln x$ the right-hand side might be negative if $x$ is large, regardless of $\eta$, which is a problem). Moreover, it would be interesting if the authors showed that the assumption on the iterates holds for a broad class of problems, such as general MAP for exponential families. This is because if $f_z(x) = h(x) - \langle x,z \rangle$ with $z \in \mathrm{cl}(\mathrm{dom} \nabla h)$ (here $z$ would be the sufficient statistics of one of the data points), then $\nabla h(x) - \eta \nabla f_z(x) = (1 - \eta)\nabla h(x) + \eta z$, which is in the interior of the domain of $\nabla h$ for $\eta < 1$, showing that eq. (12) is indeed valid for exponential families. It is likely that the authors were aware of this already, but I believe it is worth clearly stating that this is true for general exponential families, since Prop. 4.1 only covers the Gaussian case.
- *Assumption 1 is required for a result in sec 3, and this should be clearly mentioned*: Not a question, but related to the previous point: the authors say that the theorems in section 3 do not use assumption 1. While this seems to be true if read literally (not carefully checked), this might be read as all the *results* in section 3 not requiring Assumption 1, which is false: Proposition 3.4 uses Bregman co-coercivity, which in turn requires Assumption 1 to be true in general if I am not mistaken. The problem is that now the reader is left unsure what requires assumption 1 or not, putting into question whether the results in Sec 4 (which uses Sec 3) are correct or not.
- *Projected Gradient Descent*: The authors mention the 2-norm case a couple of times, but one should be careful regarding the set $C$ in this case. For $C \neq \mathbb{R}^d$ convex (say, the $\ell_2$-ball), assumption 1 fails to hold for $h = \frac{1}{2}\lVert\cdot \rVert_2^2$ since, in this case, the minimum may be attained at the boundary of $C$. This is an issue since (3) ceases to be equivalent to (projected) gradient descent (either it becomes unprojected gradient descent or, if you add the indicator of $C$ to $h$, then $h$ may not be differentiable at the iterates $x$). In this case $h$ is not Legendre on $C$ (thus, having the assumption on $h$ would help prevent this case), and one might be able to handle this case by carefully handling the projections. I don't think it is interesting to try to do so, but this is a point that can bring even more doubts about the correctness of the paper.
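The exponential-family observation in the second bullet above can be checked on a one-dimensional toy example (our own illustration, with the mirror map $h(x) = x\log x - x$ on $(0,\infty)$, whose gradient $\log x$ has range $\mathbb{R}$; not code from the paper):

```python
import numpy as np

# For f_z(x) = h(x) - x*z with h(x) = x*log(x) - x, grad h(x) = log(x)
# and grad f_z(x) = log(x) - z, so the mirror step reads
#   grad h(x+) = (1 - eta) * log(x) + eta * z,
# i.e. x+ = x**(1 - eta) * exp(eta * z), which stays in the interior
# (0, inf) of the domain for any real z and eta < 1.
def mirror_step(x, z, eta):
    return np.exp((1.0 - eta) * np.log(x) + eta * z)

rng = np.random.default_rng(0)
ok = all(
    np.isfinite(xp) and xp > 0
    for x in rng.uniform(0.01, 10.0, size=50)
    for z in rng.normal(scale=5.0, size=5)
    for eta in (0.1, 0.5, 0.9)
    for xp in [mirror_step(x, z, eta)]
)
```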
#### Minor points
- 196-197: Not clear what "after the supremum has been taken" refers to exactly.
- 202-203: "Variance term" does not mean "variance", right? I think I understand what you meant in these lines, but this is a quite confusing passage and I had to re-read it a couple of times
- 232: The notation $\{0,T\}$ refers to a set with 2 elements, you did not mean that.
- 237: What are the assumptions of Lemma 4 of Dragomir? Maybe state it in the appendix, since it is very important for this result to not depend on Assumption 1, right? I'll check the lemma in detail later to verify.
- 278-283: This should definitely be mentioned earlier in the section, not at the end. Also, this should be formalized if possible.
- 318: Assumption 2 -> Definition 2 (Also, why are assumptions/defs numbered differently from theorems/propositions/lemmas?)
- 338: Not exactly clear what $\xi$ is here, use $X_n$ notation?
- This might be personal choice, but $\overline{x^{+}}$ is very big and stretches a lot of the lines you use this notation. $\overline{x}^+$ seems to be slightly better (although a bit cluttered). Don't feel like you need to change this if you disagree.
#### Post-discussion summary
Ultimately, this discussion on assumptions is more on the organization of the text and on the writing, not on the contributions of this submission. This is a very hard theoretical problem (which one can gauge by the amount of previous work with only partial results or restrictive assumptions). The partial solution to the conjecture on non-asymptotic convergence of MAP showcases the potential usefulness of these ideas for future work. Thus, I vouch for acceptance of this paper, since I believe it is of interest for the ML optimization community and can clearly be of inspiration for future work.
Confidence: 4
Soundness: 2
Presentation: 1
Contribution: 3
Limitations: Although the authors are not explicit about some of the limitations of the results in Sec 3, they do discuss how to interpret some of the results and limitations from their convergence rates.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank you for your detailed review, understand your concerns, and clarify these points below.
**Questions 1 and 2**: From what we understand from your review, most concerns come from the `minimal assumptions on $h$' remark from lines 278-283. We realize that this remark can indeed be confusing as to the assumptions on $h$ that are necessary.
Our intention was to emphasize the fact that Theorem 3.1 and Theorem 3.3 (and only these two results, as you point out) are mainly algebraic manipulations that make convergence explicit in terms of the relevant quantities. In particular, they only require assumptions on the iterates (in the form of Eq. (3)) because their goal is to express how quantities evolve from one iteration to the next. In fact, you can notice that the only inequality used in the proof of Theorem 3.1 is relative strong convexity. We also note that chaining (10) to obtain (6) requires $1 - \eta \mu \geq 0$, and so the condition $\eta \leq 1/\mu$, which we will make sure to add to Theorem 3.1 (we initially presented the theorems with the one-step versions (Eq. (10)) and then changed to (6)). Similarly, Theorem 3.3 requires $D_h(x_\star, y)$ to be positive for all $y$, and would otherwise include a $D_h(x_\star, x_{t})$ term.
However, the fact that such minimal assumptions suffice to write the derivations leading to Theorem 3.1 does not by itself imply non-trivial results on $D_h(x_\star, x^{(t)})$. In particular, *several quantities involved in Theorem 3.1 can potentially be infinite*: (6) still holds, but potentially with $f_\eta^\star = -\infty$ or $D_h(x_\star, x^{(0)}) = +\infty$. Finiteness can be obtained in different ways, such as using the general results given in Section 2, or using problem-specific bounds when these do not apply, as is done in Section 4.
We understand that this can introduce confusion and we will highlight it very clearly. You are right that Assumption 1 is needed in general for some results in Section 3 (not the theorems though, but we agree that it can easily be read as `all results', which was not intended), which is why we present it as a blanket assumption.
The `minimal assumption on $h$' was more intended to encourage readers to adapt inequalities such as (10) even for problems that do not satisfy the blanket assumption, if bounds on the various terms can be ensured in other ways. This is what we do for instance in Section 4 (in which just Eq. (10) is used, and $f_\eta^\star$ is bounded directly). We will rephrase it in that way and make sure to make this point extremely clear.
Note that most of the Appendix is actually devoted to deriving results in Section 2 and Section 4. For Theorem 3.1, for instance, we only prove Lemma C.1 with a very direct (and, we believe, verifiable) proof, which we encourage you to take a look at.
**Question 3:** Thank you for your suggestion of writing Proposition 4.1 for exponential families more generally; we will make sure to add this to the paper.
**Question 4**: Thank you for your remark. The L2 norm examples are mainly intended to give readers elements of comparison with more familiar expressions; we will add "unconstrained" explicitly in Line 175, since this is indeed the setting we had in mind. The (mirror) constrained case, in which a projection term is added to (3), is discussed in Appendix E, though not tackled in full detail.
We thank you again for your feedback and are open to suggestions on how to better convey the message of the remark: the inequality holds under very mild assumptions, but results such as the ones derived in Section 2 are necessary to make it meaningful in general.
We hope we have lifted your doubts about correctness of the paper, and otherwise encourage you to take a look at the proof of Lemma C.1, or ask us to clarify any other concern you might have.
---
Rebuttal Comment 1.1:
Title: Rephrasing questions 1 and 2
Comment: Thanks a lot to the authors for the detailed reply! I think the main point of my first two questions was not addressed, in part because I don't think I wrote them properly. I will try to rephrase the questions, trying to be brief and clear. I do not expect a long answer either, but question 1 is something I am confused about and question 2 is asking for a precise statement of the assumptions used.
**Question 1: Do you need $h$ to be of Legendre type?** Assuming the mirror map is of Legendre type is a classical assumption to ensure everything works out in mirror descent. So it is not clear to me whether the authors need to go beyond Legendre type (which I do not think is the case) or if it is actually necessary for it to be of Legendre type. If so, it should be stated clearly the domain and everything (this would, for example, clear up the L2 norm case). I will take a look at appendix E later just in case.
**Question 2: What are the formal statements of the assumptions used?** Assumption 1 is a formal statement of the assumption used in Sec 3. But in Sec 4 the authors quickly mention that the assumption only needs to hold for the iterates... but then it is not clear what assumptions are needed on $h$. So there should be some sort of "Assumption 2" that clearly states all the assumptions on $h$ and/or the iterates. Then, at the beginning of each section the authors should explicitly say which assumption is being used (or mention in the theorems which assumption is used). Which assumptions were necessary at each point was the part that confused me. Like I mentioned before, I think the authors should still explicitly assume that $h$ is Legendre (so that many of the properties of Bregman divergences work, for example) and formally state the assumptions mentioned in Sec 4.
If I have the time I will also look at Lemma C.1. I just wanted to write a reply early in the discussion period to allow for the authors to write a reply without any rush. Again, I don't expect long answers, but these are my main points of confusion, that is why I am trying to rephrase the questions.
---
Reply to Comment 1.1.1:
Title: Answers to rephrased Questions 1 and 2.
Comment: Thank you for your quick reply, for engaging in the discussion, and for giving us time to respond. We greatly appreciate it.
**Question 1.** We replace the Legendre-type assumption by Assumption 1, where twice continuous differentiability, strict convexity and well-definition of the the conjugacy problems (within the interior of $C$) ensure all the good properties we need for the derivations (since the gradient map is injective and so can be inverted on its image). We use this version of the Assumption following Dragomir et. al. (2021), to be able to write the 'Hessian formulation' of Bregman divergences, which we believe is often easier to interpret.
If we do not want to assume that $h$ is twice continuously differentiable, then we would need to assume that it is Legendre on $C$, as you suggest, in order to be able to write duality, for instance, as you point out. We could also just assume that $h$ is Legendre, plus second-order differentiability wherever we write out Hessians.
Note that the L2 norm case is already cleanly handled with Assumption 1, since it imposes that the solution of the 'duality problem' is unique and lies within the interior of $C$ (which is essentially usually guaranteed by the Legendre assumption), which would not be the case if $h$ included a projection term.
**Question 2.** We will remove the mention about relaxing assumptions right after Assumption 1, and replace it by the fact that Assumption 1 can be replaced by assuming that $h$ is Legendre if one does not have second-order continuous differentiability.
Then, we will explicitly state in Section 2 that we give sufficient (but not necessary) conditions for several good properties of the variance to hold.
Finally, we will give Assumption 2, a minimal assumption for Theorems 3.1 and 3.3 to hold, which essentially amounts to well-definedness of the iterates in the form of (3), and existence (in the sense of finiteness) of all the manipulated quantities. We will highlight that although these can be obtained using the results in Section 2 (and the corresponding assumptions), they can also be obtained case-by-case for specific problems (as in Section 4).
Thank you again for giving us the opportunity to clear up our answers, we are still available for further clarifications. We are fully aware that there is only so much time you can spend on a review, and we appreciate that you took the time to rephrase your questions so we can hopefully better address them. Regarding the appendix, we recommend first the proof of Lemma C.1, which is just a few lines without advanced arguments. Appendix E is longer and harder to read (introduces a few notations). | null | null | null | null | null | null |
ParallelEdits: Efficient Multi-Aspect Text-Driven Image Editing with Attention Grouping | Accept (poster) | Summary: This paper aims to edit multiple objects or attributes simultaneously while preserving quality, with a method named ParallelEdits. The paper also introduces a new dataset, PIE-Bench++. Experimental results on both PIE-Bench and PIE-Bench++ demonstrate that this method outperforms many existing editing techniques.
Strengths: Simultaneously editing multiple objects or attributes is interesting.
The finding that the order in which aspects are modified when applying single-aspect text-driven image editing methods can affect the quality is interesting.
Weaknesses: 1. Direction Inversion is solely a method for inverting real images, not for editing them. Figure 1 is confusing in this context. Direction Inversion can be combined with various editing methods like P2P, PnP, P2P-zero, and MasaCtrl. Which editing method is Direction Inversion combined with in the third row of results? Why is this particular editing method shown?
2. The same problem appears in the introduction (Line 22, latest methods [3, 1, 4]; Line 36, Unlike traditional editing methods (e.g., [1, 2])). [1] (Direction Inversion) is not an editing method.
3. The paper claims that ParallelEdits can edit multiple objects or attributes simultaneously. Figure 1 illustrates the swapping of multiple objects within the same image, but it only demonstrates the addition of a single object. Can ParallelEdits add multiple objects within a single image, such as both a necktie and sunglasses to a cat?
4. In aspect grouping, the paper does not explain how to get the attention map M for a real image.
5. In Figure 2, why does the attention map from the original image combine with the target prompt?
6. This paper performs aspect grouping based on cross-attention. What if the cross-attention is inaccurate? For example, in Figure 2, will each image clearly show a "ducks" cross-attention map when "ducks" are added to the image?
7. In Table 1, P2P is an editing method for generated images. When applying it to real images (the PIE-Bench++ dataset), it needs to be combined with an inversion technique (i.e., NTI, DI). This needs to be explained.
8. The finding that the order in which aspects are modified when applying single-aspect text-driven image editing methods can affect the quality is interesting. However, the paper does not conduct qualitative and quantitative experiments to support this claim.
Further comments:
+ Line 119, should $z$ be $z_0$?
+ Line 135, Xu et al. [3] seems not to use \cite{} to link the reference?
+ Lines 148-149, the definition of tokens is not clear to me. Do the tokens represent all nouns and adjectives in a text prompt?
Technical Quality: 3
Clarity: 2
Questions for Authors: Does StyleDiffusion (StyleD) refer to “Controllable disentangled style transfer via diffusion models” or “Prompt-Embedding Inversion for Text-Based Editing” ? If it is the former, please explain why this paper is compared with style transfer methods. If it is the latter, please correct it.
Simultaneously editing multiple objects or attributes is interesting. But some aspects of the method are not adequately explained (e.g., how to get the attention-map M), and some references within the paper are quite casual (e.g., DirectInversion and StyleDiffusion).
The weaknesses lower my initial rating.
---
**Update after author responses**
Thanks to the authors for their efforts. I'm happy to see the extended discussion and additional comparisons that have resolved my concerns.
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q: Clarification of the inversion and editing method.**
**A:** We agree with the reviewer. Each baseline in Table 1 and throughout the paper includes a submethod for inversion and another for editing. The table below details these submethods for each baseline. We will update line 36 of the introduction, Table 1/Figure 5 captions, and lines 269-272 to specify the submethods used for inversion and editing for each baseline.
|Method stated in paper|MasaCtrl|P2P|NTI| StyleD | PnP |DI|
|-|-|-|-|-|-|-|
| Inversion technique | DDIM | DDIM | NTI |StyleD|DDIM|DI|
|Editing technique |MasaCtrl|P2P|P2P|P2P|PnP|PnP|
**Q:** P2P is an editing method for generated images and needs an inversion method to edit real images.
**A:** For P2P, we adopt the default settings in the original P2P paper [1] (introduced in the Real Image Editing section) to conduct DDIM inversion to edit real images.
**Q:** Is ParallelEdits able to add multiple objects to an image?
**A:** ParallelEdits can add multiple objects to an image, as shown in Fig. 4 of the main paper (adding a harness and flowers) and Fig. 10 in the Appendix (d, e, f). PIE-Bench++ includes test samples for evaluating multiple object addition. We tested the example provided by the reviewer—a cat wearing a necktie and sunglasses—as detailed in the document attached in the general response (Figure 1), and we welcome any additional challenging examples the reviewer wishes to suggest.
**Q:** Need to explain how to get the attention map for a real image.
**A:** An attention map is generated by running a few diffusion steps (up to 5 steps), as in line 472 of the main paper, and it is created by averaging the cross-attention matrices of the UNet layers, as outlined in P2P (see section 3.2 of [1]). The main paper (line 175) mentions that the aspect grouping process uses a dual-branch diffusion model conditioned on the source and target prompts to generate and group attention maps. Attention maps from the two branches are associated with source-prompt tokens and target-prompt tokens, and the attention maps of the source branch correspond to the aspects in real/source images. We will include these details in the appendix of the revised manuscript.
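To make the averaging step concrete, here is a minimal numpy sketch of how a per-token attention map could be extracted, assuming each UNet layer's cross-attention has already been resized to a common spatial resolution. The function name and the `(heads, H*W, tokens)` layout are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def token_attention_map(cross_attn_layers, token_idx, threshold=None):
    """Average cross-attention maps over UNet layers (and heads) for one token.

    cross_attn_layers: list of arrays of shape (heads, H*W, num_tokens),
    assumed resized to a common spatial resolution.
    Returns the averaged (H*W,) map, optionally binarized by `threshold`.
    """
    # per layer: average over heads, then select the column of this token
    maps = [layer[:, :, token_idx].mean(axis=0) for layer in cross_attn_layers]
    avg = np.mean(maps, axis=0)
    avg = avg / (avg.max() + 1e-8)              # normalize to roughly [0, 1]
    if threshold is not None:
        return (avg > threshold).astype(float)  # binary mask M
    return avg
```

A higher `threshold` yields a tighter mask, which is also how the attention-map leakage discussed later in this thread could be suppressed.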
**Q:** In Figure 2, why does the attention map from the original image combine with the target prompt?
**A:** The source and target attention maps are associated with the source and target prompts, and both have been used to identify the editing type, as shown in Figure 3. We visualize the target attention map overlaid on the original image to illustrate which part of the original image needs to be edited based on the attention map.
**Q:** What if cross-attention is inaccurate? For example, is the "ducks" attention map clearly shown?
**A:** Yes, adding "ducks" to an image creates a cross-attention map, as shown in Figure 3 of the main paper. Current image editing works [1,2,3,4] have also used attention maps to indicate object layouts. The iterative diffusion process is able to generate accurate object layouts in the attention maps (as described in section 3.2 of the P2P paper [1]). Moreover, attention maps have also been utilized for even more intricate tasks such as image segmentation [5]. Although failure cases resulting from faulty attention maps do occur, they are rare.
**Q:** Need qualitative and quantitative experiments to support the claim that order in which aspects are modified when applying single-aspect text-driven image editing methods can affect the quality.
**A:** Thanks for pointing this out. We have conducted additional qualitative and quantitative experiments to support the claim; please check Table 2 and Figure 4 attached in the general response. We found that AspAcc-CLIP differs significantly when using varying orders for each single-aspect editing method, and each method achieves its best performance with a different order. Hence, it is hard to choose a proper editing order for sequential editing, not to mention that sequential editing is extremely slow.
**Other minor issues**
**Q:** z should be $z_0$ in Line 119.
**A:** Yes, we will revise the manuscript.
**Q:** The definition of tokens is not clear.
**A:** The tokens are words obtained from the tokenizer, which include nouns, adjectives, and other words split by spaces. For example, in Figure 3 the prompt has been broken down into tokens; all the words are tokens, but only the underlined ones are aspects.
**Q:** Misuse of references.
**A:** Thanks for pointing this out. StyleDiffusion refers to "Prompt-Embedding Inversion for Text-Based Editing", and Xu et al. refers to [3] Inversion-Free Image Editing with Natural Language. We will update the references in the revision.
[1] Prompt-to-Prompt Image Editing with Cross Attention Control, Hertz et al, ICLR23'
[2] DiffEdit: Diffusion-based semantic image editing with mask guidance, Couairon et al, ICLR 23'
[3] Inversion-Free Image Editing with Natural Language, Xu et al, CVPR24'
[4] MasaCtrl: Tuning-Free Mutual Self-Attention Control for Consistent Image Synthesis and Editing, Cao et al, ICCV23'
[5] Open-Vocabulary Attention Maps with Token Optimization for Semantic Segmentation in Diffusion Models, Marcos et al, CVPR24'
---
Rebuttal Comment 1.1:
Title: Response
Comment: I would like to thank the authors for the extensive and detailed rebuttal, it has been very helpful. There remain some questions that would be helpful of this paper.
**Q2. Applying DDIM Inversion for real image inversion.**
In the P2P paper, the authors found that the inversion is not sufficiently accurate in many cases, or that using a mask yields a more accurate inversion. The authors should specify which method was used and provide detailed explanations in the paper. Real image inversion and editing represent only a small portion of the P2P paper, and thus, are not the primary focus of P2P. Comparing the inversion method in P2P with methods specifically designed for real image inversion is not fair, as P2P typically combines other inversion methods when editing real images.
In ParallelEdits, which CFG is used in DDIM inversion? Using a CFG of 1 typically results in satisfactory inversion of a real image but may reduce editing ability. Conversely, a CFG of 7.5 often leads to reconstruction failure.
---
I would like to thank the authors for their response. With the additional results and clarifications provided, I am pleased to increase my rating.
---
Reply to Comment 1.1.1:
Title: Additional results and clarifications
Comment: We are glad that the reviewer finds our rebuttal helpful. The reviewer's suggestions have been valuable, and we welcome any further input.
***Q1: The authors should specify which method was used, DDIM inversion or naive DDPM inversion with mask.***
***A1:*** Our paper employs DDIM for inversion and P2P for editing in the P2P baseline, a combination that does NOT require masks, aligning with findings from previous research [1,2]. Following the reviewer's suggestions, we will include this discussion about mask-guided inversion in the paper. Note that mask guidance uses a naive DDPM inversion by simply adding noise and then denoising. However, mask-guided inversion (P2P+DDPM) cannot be used for non-rigid edits (pose/layout changes) or global edits (style changes), and it does not ensure identity preservation, often resulting in significant changes to the image content (please refer to the Introduction and Related Work sections of [4] and the Related Work section of [3]).
***Q2: Comparing the inversion method in P2P with methods specifically designed for real image inversion is not fair, as P2P typically combines other inversion methods when editing real images.***
***A2:*** Comparing the inversion method in P2P with others is only a part of our overall analysis. We have compared P2P with other real-image inversion methods like StyleD and NTI, as well as other editing techniques such as PnP, MasaCtrl, and InfEdit. Regardless of the inversion or editing method of choice, we observe that ParallelEdits consistently outperforms benchmarks in aspect accuracy, aspect preservation, and CLIP/D-CLIP scores. Although we could explore more detailed comparisons across various baselines, it is important to note that current prevailing techniques in the research community primarily focus on single-aspect editing. Our research highlights the necessity of transitioning towards multi-aspect editing.
***Q3: In ParallelEdits, which CFG is used in DDIM inversion? Using a CFG of 1 typically results in satisfactory inversion of a real image but may reduce editing ability. Conversely, a CFG of 7.5 often leads to reconstruction failure.***
***A3:*** Great question! Adjusting the CFG (Classifier-Free Guidance) can influence editing quality and image/identity preservation. In our experiments, we used a moderate CFG setting of 4.0 to balance aspect accuracy, preservation metrics, and editing quality. For more information on hyper-parameters, please refer to Section E, Implementation Details, in the Appendix.
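For readers unfamiliar with the interplay between CFG and inversion, the following is a minimal sketch of one deterministic DDIM inversion step with classifier-free guidance. `eps_model` is a hypothetical stand-in for the UNet noise predictor, and the update follows the standard DDIM formulation; it is not an implementation detail of ParallelEdits.

```python
import numpy as np

def ddim_inversion_step(z_t, t, t_next, alphas_cumprod, eps_model, cond, cfg=4.0):
    """One deterministic DDIM inversion step, z_t -> z_{t_next} with t_next noisier.

    eps_model(z, t, cond) stands in for the UNet noise predictor;
    cond=None denotes the unconditional (null-prompt) branch.
    """
    eps_uncond = eps_model(z_t, t, None)
    eps_cond = eps_model(z_t, t, cond)
    # classifier-free guidance: cfg=1 reduces to the conditional prediction,
    # larger cfg (e.g. 7.5) extrapolates further from the unconditional one
    eps = eps_uncond + cfg * (eps_cond - eps_uncond)
    a_t, a_next = alphas_cumprod[t], alphas_cumprod[t_next]
    # predicted clean latent, then re-noise it to the next (higher) noise level
    z0_pred = (z_t - np.sqrt(1 - a_t) * eps) / np.sqrt(a_t)
    return np.sqrt(a_next) * z0_pred + np.sqrt(1 - a_next) * eps
```

With a large `cfg`, `eps` deviates more from the noise that actually produced `z_t`, which is exactly why high-CFG inversion tends to reconstruct poorly, as the reviewer notes; a moderate value such as 4.0 trades this off against editing strength.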
---
[1] Inversion-Free Image Editing with Natural Language, Xu et al, CVPR24'
[2] Boosting Diffusion-based Editing with 3 Lines of Code, Ju et al, ICLR24'
[3] An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion, Gal et al, ICLR23'
[4] MasaCtrl: Tuning-Free Mutual Self-Attention Control for Consistent Image Synthesis and Editing, Cao et al, ICCV23' | Summary: The paper introduces ParallelEdits, a method that manages simultaneous edits efficiently without compromising quality. This is achieved through a novel attention distribution mechanism and multi-branch design. Additionally, the authors present the PIE-Bench++ dataset, an expanded benchmark for evaluating multi-object and multi-attribute image-editing tasks.
Strengths: 1. The task of multi-aspect editing seems interesting and useful in real life.
2. The proposed method is efficient for application.
3. The overall editing results seem promising.
Weaknesses: 1. The authors claim the proposed method is based on the DDCM process [1]. However, when I checked the source paper [1], I found something strange, especially in its Algorithm 1. The dozens of iterations come to the simple conclusion that the output $z$ actually equals $z_0$, which means there is no need for such a Virtual Inversion. Also, this process cannot be interpreted by Consistency Models. Hence, I wish the authors could revise their theoretical bases.
2. I think it is better to present the intermediate editing results for sequential editing, which can help readers identify what type of editing the previous methods are not good at.
3. I noticed that there was a benchmark [2] proposed for image editing. I advise the authors to use it for comprehensive evaluation.
[1] Inversion-Free Image Editing with Natural Language (https://arxiv.org/abs/2312.04965)
[2] Diffusion-Model Based Image Editing: A Survey (https://arxiv.org/abs/2402.17525)
Technical Quality: 3
Clarity: 2
Questions for Authors: I noticed that the man has changed after editing in Fig. 2. Could you please explain and address it?
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: See above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q:** The DDCM process in [1] comes to a simple conclusion that the output z equals $z_0$ after dozens of iterations.
**A:** Indeed, ParallelEdits' source branch uses DDCM, as described in [1], which uses the consistency sampling step popular in the consistency model literature [6,7]. The DDCM process for the source branch was chosen based on [1]'s observation that the method does not suffer from cumulative errors over sampling, making the overall method more efficient (fewer samples to generate the target image), which is desirable for image-editing applications. Although $z=z_0$ in every iteration of Algorithm 1, the difference between the reconstructed output $z_\tau$ and $z_0$, as captured by $\epsilon_\text{cons}$, continues to evolve (see Figure 3b of [1]). The target branch is calibrated using $\epsilon_\text{cons}$, thereby introducing the desired edit.
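The point that $z = z_0$ at every step while $\epsilon_\text{cons}$ still evolves can be illustrated with a short sketch of one reading of the virtual-inversion idea: the consistent noise is defined so that substituting it into the denoising formula returns $z_0$ exactly, yet its value depends on the sampled $z_t$. This is an illustrative reconstruction under that reading, not code from [1].

```python
import numpy as np

def consistent_noise(z_t, z0, a_t):
    """Noise that maps z0 exactly to z_t at cumulative noise level a_t (alpha-bar).

    Plugging eps_cons into the denoising formula
        (z_t - sqrt(1 - a_t) * eps_cons) / sqrt(a_t)
    recovers z0 exactly, so the source branch outputs z0 at every step,
    while eps_cons itself varies with the sampled z_t.
    """
    return (z_t - np.sqrt(a_t) * z0) / np.sqrt(1 - a_t)
```

The gap between `consistent_noise` and the model's own noise prediction is then what carries information to the target (editing) branch.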
**Q:** Results about intermediate editing results for sequential editing
**A:** We have included further results in Figures 3 and 4 of the attached document in the general response. As previously stated in line 48 of the main paper, sequential editing is an ineffective approach due to the potential for undoing previous edits and accumulating errors over time.
**Q:** Additional benchmark needs to be included for comprehensive evaluation.
**A:** We thank the reviewer for highlighting this interesting benchmark. Although it predominantly contains single-aspect editing samples, unlike the more complex multi-aspect samples in PIE-Bench++, we evaluated our method on it and reported the results in Table 1 of the global rebuttal. Our method achieves state-of-the-art performance in terms of the overall LMM score proposed in [2]. Inspired by this, we plan to release a benchmark focused on multi-aspect editing, using PIE-Bench++ and the Aspect-accuracy/preservation metrics from our paper. We are confident it will gain strong recognition from the community.
**Q:** Explanation of man has changed after editing in Fig. 2.
**A:** We appreciate the reviewer’s subtle observation. As shown in Figure 2, the “boat” object nearby has been transformed into a “white rubber boat” and leaks into the “man” attention map, so the man has been edited to fit the context of changing a traditional boat to a modern rubber boat. Similar to other attention-map-based approaches [1,3,4,5], the attention maps can leak, even though such minor changes do not affect the overall context of this figure. Finally, such leakage can be easily solved by applying a higher threshold to the attention map.
[1] Inversion-Free Image Editing with Natural Language, Xu et al, CVPR24'
[2] Diffusion-Model Based Image Editing: A Survey, Huang et al, ArXiv24'
[3] Prompt-to-Prompt Image Editing with Cross Attention Control, Hertz et al, ICLR23'
[4] DiffEdit: Diffusion-based semantic image editing with mask guidance, Couairon et al, ICLR 23'
[5] MasaCtrl: Tuning-Free Mutual Self-Attention Control for Consistent Image Synthesis and Editing, Cao et al, ICCV23'
[6] Latent consistency models: Synthesizing high-resolution images with few-step inference, Luo et al, ArXiv23'
[7] Consistency Models, Song et al, ICLR23'
---
Rebuttal Comment 1.1:
Title: Thanks for the feedback
Comment: We thank Reviewer J7sk for the positive and constructive feedback. As the rebuttal period is ending, we wanted to check if there are any remaining questions. We have provided additional intermediate results on the baseline method (sequential editing) as requested and evaluated our approach on a different benchmark, where our method still outperforms the state-of-the-art. We appreciate the suggestion and will release PIE-Bench++ as a benchmark for multi-aspect editing. We are glad to address any further questions or suggestions from the reviewer that could help enhance our manuscript. | Summary: In this paper, the authors present a novel multi-aspect image editing method ParallelEdits, by incorporating the attention distribution mechanism and multi-branch editing. Besides, this paper introduces a new dataset PIE-Bench++ for evaluating multi-aspect image editing. Extensive experiments demonstrate the effectiveness of this method.
Strengths: 1. The paper is well-written and well-organized.
2. The proposed method is novel and effectively addresses the multi-aspect image editing problem.
3. The results appear to have impressive visual effects. Edits of different aspects are combined seamlessly.
4. The authors conducted concrete experiments, providing rich quantitative results and qualitative results to demonstrate the effectiveness of the proposed ParallelEdits.
Weaknesses: 1. The pairing process of $E^{i \rightarrow j}$ imposes an additional burden on users, especially when multiple objects are present in the source image.
2. It is not clear how to choose the hyperparameter $\lambda$ for assigning a type for rigid and non-rigid edit actions, and how it affects the cross-branch interaction and editing results.
3. This paper lacks a discussion on controlling the editing strength of multiple editing actions, which is crucial for harmonic and flexible editing.
Technical Quality: 3
Clarity: 3
Questions for Authors: See the weakness part above.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The author has discussed the limitations and potential negative societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q:** The pairing process imposes an additional burden on users when performing multi-object editing.
**A:** The algorithm receives the pairing process (editing action) to determine which aspect is added, removed, swapped, or left unaltered. Meta-data of this form is not a burden but a necessity because it precisely reflects the user's intent when editing without errors. A user-friendly UI can conveniently capture such meta-data. Moreover, prior works also benefit from meta-data e.g. [1] needs rich text input and [2,3] need editing pairs.
**Q:** How the hyperparameter $\lambda$ to be chosen and how it affects the editing.
**A:** The hyperparameter $\lambda$ is the threshold for determining whether an edit is classified as a rigid edit or a non-rigid edit (see Equation 3 of the paper). It was selected through cross-validation by identifying the value that gave the highest aspect accuracy. Experiments showed that both quantitative and qualitative results vary little across different choices of $\lambda$. Please see the table below and Figure 2 of the PDF appended to this rebuttal. We will gladly share additional results if the reviewer desires.
| $\lambda$ | 0.8 | 0.85 | 0.9 (ours) | 0.95 |
|--------|-------|-------|-------|-------|
| AspAcc-CLIP | 50.09 | 50.68 | 51.05 | 50.53 |
**Q:** Lacking discussions on controlling the editing strength of multiple editing.
**A:** We concur with the reviewer that edit strength control is important. As shown in [4,5], single attribute editing tasks provide clear editing strength and flexibility. As for multi-aspect editing, the reviewer would agree that it is unclear if edit for multiple aspect is itself possible. Moreover, evaluation metrics and benchmark datasets for multi-aspect editing are not yet community-standardized. Thus, instead of handling editing flexibility, contemporary efforts [6,7] focus on feasibility aspects of simultaneous multiple edits.
[1] Expressive text-to-image generation with rich text. Ge et al, ICCV23.
[2] Prompt-to-prompt image editing with cross attention control. Hertz, et al, ICLR23.
[3] Null-text inversion for editing real images using guided diffusion models. Mokady, et al, CVPR23.
[4] An edit friendly ddpm noise space: Inversion and manipulations. Huberman-Spiegelglas et al, CVPR24.
[5] Imagen editor and editbench: Advancing and evaluating text-guided image inpainting. Wang, et al, CVPR23.
[6] Ground-A-Score: Scaling Up the Score Distillation for Multi-Attribute Editing. Chang, et al, Arxiv24.
[7] Iterative Multi-granular Image Editing using Diffusion Models. Joseph, et al, WACV24.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' rebuttal. There remain some questions regarding the paper and rebuttal.
- How did you collect the pairing information for PIE-Bench++?
- The hyperparameter $\lambda$ is the threshold for determining whether an edit is classified as a rigid edit or a non-rigid edit. Could you further explain which edits are rigid and which are non-rigid in Fig. 2 of the appended PDF, and how they change the outputs in Fig. 2?
- Can you control editing strength for ParallelEdits? If yes, how do you control it? If no, does it mean the editing result for a given pair prompt is fixed?
---
Reply to Comment 1.1.1:
Title: Additional clarifications
Comment: We thank the reviewer for additional questions and helpful feedback.
***Q1: How to collect the pairing information for PIE-Bench++?***
***A1:*** All editing pairs have been manually annotated to establish editing correspondence, as in PIE-Bench. To build a robust dataset for community use, annotators manually labeled 700 image prompts.
***Q2: Could you further explain which edits are rigid and which are non-rigid in Fig. 2 of the appended PDF, and how they change the outputs?***
***A2:*** In Fig. 2 of the rebuttal PDF, the "harness" is always non-rigid, whereas the "horse" and "field" are rigid edits. According to line 194 of the main paper, edits with an overlap greater than $\lambda$ should be put into the same branch if they have the same editing type. In Fig. 2, decreasing $\lambda$ to 0.8 groups "horse" and "field" into a single rigid edit branch, yielding slightly different results.
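The grouping rule described above can be sketched as a simple greedy procedure. The overlap measure (intersection over the smaller mask) and the function names are assumptions for illustration; the rebuttal does not pin down these implementation details.

```python
import numpy as np

def group_aspects(masks, types, lam=0.9):
    """Greedy grouping: two aspects share a branch if their attention-mask
    overlap exceeds lam and they have the same editing type.

    masks: dict name -> boolean spatial mask; types: dict name -> 'rigid'/'non-rigid'.
    Returns a list of branches (lists of aspect names).
    """
    def overlap(a, b):
        # intersection over the smaller mask (an assumed overlap measure)
        inter = np.logical_and(a, b).sum()
        return inter / max(1, min(a.sum(), b.sum()))

    branches = []
    for name in masks:
        placed = False
        for branch in branches:
            rep = branch[0]  # compare against the branch's first member
            if types[rep] == types[name] and overlap(masks[rep], masks[name]) > lam:
                branch.append(name)
                placed = True
                break
        if not placed:
            branches.append([name])
    return branches
```

With this sketch, lowering `lam` from 0.9 to 0.8 would merge aspects whose overlap falls in that band, mirroring the "horse"/"field" example above.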
***Q3: How to control the editing strength for ParallelEdits?***
***A3:*** ParallelEdits can control the editing strength by adjusting the CFG (Classifier-Free Guidance). As Reviewer 8ae7 also noted, there's a trade-off between achieving satisfactory inversion and robust editing ability. A higher CFG tends to produce stronger editing effects but may lower inversion results and identity preservation. Currently, we have set a fixed CFG of 4.0 for our evaluations, as detailed in Section E, Implementation Details, in the Appendix. | null | null | Rebuttal 1:
Rebuttal: We thank all the reviewers for their time, insightful suggestions, and valuable comments. We are grateful for the positive recognition of the reviewers that our idea and task are interesting (Reviewers 8ae7 and J7sk), the method is efficient for application (Reviewers J7sk), and our editing results are impressive (Reviewers 6th7 and J7sk).
We have responded to each reviewer's comments in detail below. A PDF document has been uploaded to include additional figures and tables. We hope our response will address the reviewers' concerns.
Pdf: /pdf/bf9faee26144a9669e50668d996eba004adb2908.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Optimal Design for Human Preference Elicitation | Accept (poster) | Summary: This paper studies the problem of data collection for learning preference models. The key idea is to generalize the optimal design, a method for computing information gathering policies, to ranked lists. The authors study both absolute and relative feedback on the lists.
Strengths: 1. The considered problem is useful because collecting human feedback is expensive in practice.
2. The synthetic and real-world experiments show clear advantage of the proposed algorithms while compared with the benchmark methods.
3. This paper is well written and is easy to follow.
Weaknesses: 1. One major concern is the use of optimal design in the relative feedback setting. In the absolute feedback case in (1), the optimal design used in (5) is correct, as it corresponds to maximizing a metric of the Fisher information matrix in the least-squares regression model. However, in the relative feedback case, the likelihood function is based on a multinomial-type distribution (or a logistic-type distribution when $K=2$). In this case, the Fisher information matrix is different from the one used in the least-squares case. The optimal design for generalized linear models is more challenging than that for linear models. See the reference below.
{\it Stufken, John, and Min Yang. "Optimal designs for generalized linear models." Design and Analysis of Experiments 3 (2012): 137-164.}
2. I found that some assumptions on the features are missing in the main paper. The main paper does not have any assumption on the feature vector $x_{I_t, k}$. However, in the proof of Lemma 8 on line 1072 of page 27, the authors said ``(b) follows from independence of $w_s\eta_s$", where $w_s$ and $\eta_s$ are functions of $x$. In addition, the authors claim that they used the G-optimal design result in [48]. However, [48] considered a ridge regression, which guarantees positive definiteness of the covariance matrix. In the proposed Algorithm 1 (line 11), how do you guarantee the positive definiteness of the sample covariance matrix $\Sigma_n$?
3. Assumption 1 assumes the true parameter belongs to the constrained parameter space $\Theta$ such that $\theta^T I_d = 0$ and $\|\theta\|_2 \le 1$. Do you require the estimated parameter to also belong to this constrained parameter space $\Theta$? I did not see how the estimator in (9) satisfies these two constraints. Does the estimator in line 12 of Algorithm 1 have any scaling issue compared to the true parameter $\theta^*$ ($\|\theta^*\|_2 \le 1$)?
4. In Real-world experiment 3 (Nectar dataset) and Real-world experiment 4 (Anthropic dataset), the authors mentioned that ``During simulation, the ranking feedback is generated by the PL model in (2)." This indicates that these experiments are not real-world experiments but still synthetic ones. I would suggest including some benchmark real-world experiments.
~~~After rebuttal~~~
I have increased the score after discussing with the authors during rebuttal stage.
Technical Quality: 3
Clarity: 3
Questions for Authors: see Weakness.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We wanted to thank the reviewer for carefully reading the manuscript, and especially for pointing out how to present the algorithms better. We answer all questions below. If you have any additional concerns, please reach out to us to discuss them.
**W1: Optimal design in Section 5**
The design is motivated by prior works [7, 106], which showed that the uncertainty can be represented and optimized using outer products of feature vectors. The advantage of these formulations is that they do not contain model-parameter terms, which appear in the Hessian of the log likelihood. Therefore, the optimal design can be solved similarly to linear models. The additional complexity of GLMs is captured through term $\kappa$ (Assumption 2), which is a lower bound on the derivative of the mean function. This term is common in GLM bandit analyses, although several recent works tried to reduce dependence on it
>> Improved Optimistic Algorithms for Logistic Bandits. ICML 2020.
>> An Experimental Design Approach for Regret Minimization in Logistic Bandits. AAAI 2022.
We leave tightening of the dependence on $\kappa$ in our bounds for future work.
**W2: Feature vector and covariance matrix assumptions**
We assume that the length of feature vectors is at most $1$. This is stated in Assumption 1.
We also want to explain what $w_s$ and $\eta_s$ are. In line 1071 (Lemma 8),
$$
\underbrace{\mathbf{x}^{\top} \left(\mathbf{X}^{\top} \mathbf{X}\right)^{-1} \mathbf{X}^{\top}}\_{\mathbf{w}^{\top}} \eta
= \mathbf{w}^\top\eta
= \sum_{s=1}^t w_s\eta_s$$
where $\eta$ is a vector of independent Gaussian noise up to round $t$. Here $w_s$ and $\eta_s$ are the components of $\mathbf{w}$ and $\eta$. The noise $\eta_s$ is independent Gaussian and does not depend on the feature vector $\mathbf{x}$.
When data are logged according to the optimal design (Chapter 21 of [48]), the sample covariance matrix in the least-squares estimator may not be full rank. This problem can be solved in multiple ways. In Algorithm 12 (Chapter 22 of [48]), each non-zero entry of the logging distribution is rounded up to the closest integer. This yields at most $d (d + 1) / 2$ extra observations. In our experiments, we add $\lambda I_d$, for a small $\lambda > 0$, to line 11 in Algorithm 1. This mostly impacts small sample sizes. Specifically, since the optimal design collects diverse feature vectors, the sample covariance matrix is likely to be full rank when the sample size is large.
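As an illustration of how such a design distribution can be computed in practice, here is a sketch of the classical multiplicative (Titterington-style) update for the D-optimal design, with the small ridge term $\lambda I_d$ mentioned above keeping the covariance invertible. This is a generic textbook method, not the paper's exact solver.

```python
import numpy as np

def d_optimal_design(X, n_iters=200, lam=1e-6):
    """Multiplicative-update approximation of the D-optimal design.

    X: (m, d) array of candidate feature vectors.
    Returns a probability vector pi over the rows of X.
    Update: pi_i <- pi_i * x_i^T Sigma(pi)^{-1} x_i / d,
    where Sigma(pi) = sum_i pi_i x_i x_i^T + lam * I.
    """
    m, d = X.shape
    pi = np.full(m, 1.0 / m)
    for _ in range(n_iters):
        Sigma = X.T @ (pi[:, None] * X) + lam * np.eye(d)
        # g_i = x_i^T Sigma^{-1} x_i for every candidate i
        g = np.einsum("ij,jk,ik->i", X, np.linalg.inv(Sigma), X)
        pi = pi * g / d
        pi = pi / pi.sum()
    return pi
```

Sampling rounds according to `pi` then concentrates observations on informative directions; dominated candidates receive vanishing weight.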
**W3: Assumptions on the true and estimated model parameters**
The reviewer is right. In the model parameter estimator in (9), $\arg\min_\theta$ should be replaced with $\arg\min_{\theta \in \Theta}$, where $\Theta$ is defined in Assumption 1. This change should also be done in line 16 of Algorithm 2 (Appendix F). In short, we need the same assumptions as in [106], from which we borrow the estimator and concentration bound.
Our analysis in Section 4 relies on the concentration bound for non-adaptive designs, in (20.3) of [48], which does not depend on the scale of $\theta_*$. Therefore, no assumption on $\theta_*$ is needed in our analysis.
**W4: Semi-synthetic experiments**
The reviewer is right and we will adjust the language. The models in the Anthropic and Nectar experiments are learned from real-world data, but the feedback is simulated. We need simulation to collect independent absolute and ranking feedback when the same list of items is explored multiple times.
---
Rebuttal Comment 1.1:
Title: The key issue has not been resolved
Comment: My primary concern is that in the case of relative feedback, the likelihood function is based on a multinomial-type distribution. This implies that the Fisher information matrix differs from the one used in least squares scenarios. Specifically, in the relative feedback case, the Fisher information matrix incorporates the derivative of $\mu(x \theta)$, which depends on both $x$ and the unknown parameter $\theta$. This distinction is crucial when comparing the optimal design for Generalized Linear Models to that for linear regression.
The authors have also recognized this issue in the response, commenting that "The design is motivated by prior works [7, 106], which showed that the uncertainty can be represented and optimized using outer products of feature vectors. The advantage of these formulations is that they do not contain model-parameter terms, which appear in the Hessian of the log likelihood. Therefore, the optimal design can be solved similarly to linear models. "
However, I find the statement "the optimal design can be solved similarly to linear models" to be insufficiently rigorous. Given that the paper focuses on "optimal design" both in its title and content, it is essential for the authors to apply a correct optimal design approach specific to the relative feedback scenario.
Indeed, Section 2 of the paper "An Experimental Design Approach for Regret Minimization in Logistic Bandits. AAAI 2022," cited in the authors' response, highlights that "In contrast, for the logistic setting, the G-optimal design objective may be large and we
only have a naive bound obtained by naively lower bounding $H(\lambda) \ge \kappa_0 \sum_{x \in {\cal X}} \lambda_x x x^{\top}$. In general these two criteria can produce extremely different designs. We provide an example where these designs are very different in our supplementary, see Figure 3."
Given this context, I believe the key concern has not been adequately addressed.
---
Rebuttal 2:
Title: Response to reviewer f8gc
Comment: We thank the reviewer for their response. Our brief response to W1 in the rebuttal was not meant to be "insufficiently rigorous". It goes without saying that we will expand the paper with a discussion of the original problem and our taken approach.
As the reviewer pointed out, our Hessian of the negative log-likelihood is more complex and should be stated. Specifically, let $x_{t, k}$ be the feature vector of the item at position $k$ in the list in round $t$ and $z_{t, k, k'} = x_{t, k} - x_{t, k'}$. Then the Hessian over $n$ rounds is
$$\nabla^2 \ell_n(\theta)
= \frac{1}{n} \sum_{t = 1}^n
\sum_{j = 1}^K \sum_{k = j}^K \sum_{k' = j}^K
\frac{\exp[(x_{t, k} + x_{t, k'})^\top \theta]}
{2 (\sum_{\ell = j}^K \exp[x_{t, \ell}^\top \theta])^2}
z_{t, k, k'} z_{t, k, k'}^\top$$
This is shown in lines 1087-1088 in the Appendix.
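As a sanity check, the Hessian above can be evaluated numerically. The sketch below is our own illustration (not code from the paper): it implements the quadruple sum with NumPy, using $\exp[(x_{t,k} + x_{t,k'})^\top \theta] = \exp(x_{t,k}^\top \theta)\exp(x_{t,k'}^\top \theta)$, and confirms that the result is symmetric and positive semidefinite, as a nonnegatively weighted sum of outer products must be.

```python
import numpy as np

def pl_hessian(X, theta):
    """Hessian of the negative Plackett-Luce log-likelihood, averaged over n rounds.

    X: array of shape (n, K, d) with feature vectors x_{t,k}; theta: shape (d,).
    """
    n, K, d = X.shape
    H = np.zeros((d, d))
    for t in range(n):
        s = np.exp(X[t] @ theta)  # s[k] = exp(x_{t,k}^T theta)
        for j in range(K):
            denom = 2.0 * s[j:].sum() ** 2  # 2 (sum_{l=j}^K exp(x_{t,l}^T theta))^2
            for k in range(j, K):
                for kp in range(j, K):
                    z = X[t, k] - X[t, kp]  # z_{t,k,k'}
                    H += (s[k] * s[kp] / denom) * np.outer(z, z)
    return H / n

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4, 3))  # n = 5 rounds, K = 4 items, d = 3 features
H = pl_hessian(X, rng.normal(size=3))
assert np.allclose(H, H.T)                    # symmetric
assert np.linalg.eigvalsh(H).min() >= -1e-10  # PSD: weighted sum of z z^T
```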
In this work, we maximize the log determinant of a relaxed $\nabla^2 \ell_n(\theta_*)$. First, we would like to stress that exact optimization is impossible unless $\theta_*$ is known. When $\theta_*$ is unknown, two approaches are popular:
1. A plug-in estimate $\hat{\theta}$ of $\theta_*$ is used. The estimate can be computed by a $\theta$-agnostic optimal design and we discuss it later.
2. The $\theta$-dependent term is bounded from below.
We adopt the second approach. Following lines 1089-1090 in the Appendix, and using our assumptions that $\|x_{t, k}\|_2 \leq 1$ and $\|\theta\|_2 \leq 1$, we have
$$\nabla^2 \ell_n(\theta)
\succeq \frac{e^{- 4}}{2 K (K - 1) n}
\sum_{t = 1}^n \sum_{j = 1}^K \sum_{k = j + 1}^K
z_{t, j, k} z_{t, j, k}^\top$$
Therefore, we can maximize the log determinant of relaxed
$$\sum_{t = 1}^n \sum_{j = 1}^K \sum_{k = j + 1}^K
z_{t, j, k} z_{t, j, k}^\top$$
which we do in algorithm Dope. This solution is sound and justified, because we maximize a lower bound on the original objective.
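The relaxed objective is a standard concave D-optimal design problem over the simplex and can be solved, for instance, by Frank-Wolfe. The sketch below is our own illustration under that assumption (it is not the paper's implementation): `list_info_matrix` builds the matrix $\sum_{j < k} z z^\top$ for one candidate list, and `d_optimal_design` maximizes $\log\det(\sum_i \pi_i A_i)$ using the gradient $\partial_{\pi_i} \log\det V = \operatorname{tr}(V^{-1} A_i)$.

```python
import numpy as np
from itertools import combinations

def list_info_matrix(X_list):
    """A_i = sum over positions j < k of (x_j - x_k)(x_j - x_k)^T for one list."""
    d = X_list.shape[1]
    A = np.zeros((d, d))
    for j, k in combinations(range(len(X_list)), 2):
        z = X_list[j] - X_list[k]
        A += np.outer(z, z)
    return A

def d_optimal_design(As, iters=500, ridge=1e-6):
    """Frank-Wolfe maximization of log det(sum_i pi_i A_i) over the simplex."""
    L, d = len(As), As[0].shape[0]
    pi = np.full(L, 1.0 / L)
    I = np.eye(d)
    for t in range(iters):
        V = sum(p * A for p, A in zip(pi, As)) + ridge * I
        V_inv = np.linalg.inv(V)
        # Gradient of the concave objective: d/dpi_i log det V = tr(V^{-1} A_i).
        grads = np.array([np.trace(V_inv @ A) for A in As])
        i_star = int(np.argmax(grads))
        gamma = 1.0 / (t + 2)  # diminishing step keeps some mass on every list
        pi *= 1.0 - gamma
        pi[i_star] += gamma
    return pi

rng = np.random.default_rng(1)
As = [list_info_matrix(rng.normal(size=(4, 6))) for _ in range(30)]
pi = d_optimal_design(As)

V_fw = sum(p * A for p, A in zip(pi, As))
V_unif = sum(As) / len(As)
_, logdet_fw = np.linalg.slogdet(V_fw)
_, logdet_u = np.linalg.slogdet(V_unif)
assert abs(pi.sum() - 1.0) < 1e-9 and pi.min() >= 0.0
assert logdet_fw >= logdet_u - 1e-6  # optimized design at least as good as uniform
```

The step size and iteration count are illustrative choices; any solver for concave maximization over the simplex would do.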
The last point to discuss is whether the alternative approach, maximizing the log determinant of $\nabla^2 \ell_n(\hat{\theta})$ with a plug-in estimate $\hat{\theta}$, could be used. There is no evidence that this approach is practical for the sample sizes of hundreds of human interactions that we consider in our experiments. We start with the paper
>> An Experimental Design Approach for Regret Minimization in Logistic Bandits. AAAI 2022.
which you looked at. Note that this paper is for logistic models only, which is a special case of our setting for $K = 2$.
To compute the plug-in estimate, they collect data using optimistic probing in Algorithm 2 based on a G-optimal design for linear models. From the proof of their Theorem 6,
$$O\left(d (\log \log d) \frac{C_0}{\Delta_w^{2}}
\log\left(\frac{C_0 L}{\Delta_w^2 \delta}\right)\right)$$
samples are needed to compute $\hat{\theta}$, where $L$ is the number of feature vectors and $C_0 \geq 38$. The most favorable setting for this bound is $\Delta_w = 1$ and $C_0 = 38$. Now we instantiate it in our experiments. We take synthetic Experiment 2, where $d = 36$ and $L = 400$. Suppose that we want the claim in Theorem 6 to hold with probability at least $1 - \delta$ for $\delta = 0.05$. Then the suggested sample size is $22043$. This is two orders of magnitude more than our sample sizes in Figure 1b.
While this paper does not contain any experiments with real data, the authors report the sample sizes of Algorithm 2 in Table 1. The algorithm collects data to compute the plug-in estimate $\hat{\theta}$ and then collects more data using a $\hat{\theta}$-dependent optimal design to initialize a bandit algorithm. The lowest reported sample size for a $3$-dimensional problem is $6536$. This is an order of magnitude more than our sample sizes in Figure 1b for a larger $36$-dimensional problem.
Another recent work that sheds light on the performance of plug-in and $\theta$-independent optimal designs is
>> Active Preference Optimization for Sample Efficient RLHF. ICML 2024.
The authors analyze an algorithm with plug-in estimates but implement a practical one using the outer product of feature vector differences, similarly to our work. This algorithm is only for the setting of $K = 2$.
---
Rebuttal Comment 2.1:
Comment: Dear Reviewer f8gc,
Can you please let us know if our response addressed your main concern? You argue that we do not apply a correct optimal design approach for the relative feedback scenario. We do, but this was not sufficiently explained in the main paper. All supporting evidence is in Appendix D.1. In summary:
1. The log likelihood of the original problem, its gradient, and its Hessian are all properly stated. See (9) for the log likelihood, line 1097 for the gradient (Appendix D.1), and line 1088 for the Hessian (Appendix D.1). We will bring the Hessian to the main paper and make the relation to the original objective clear.
2. The D-optimal design in the relative feedback setting cannot be solved exactly since the model parameter $\theta_*$ in the Hessian is unknown.
3. To get around this issue in GLMs, a common approach is to eliminate $\theta_*$-dependent terms in the uncertainty model. For instance, the most celebrated works on GLM bandits took this approach. See GLM-UCB and UCB-GLM in
>> Parametric Bandits: The Generalized Linear Case. NeurIPS 2010.
>> Provably Optimal Algorithms for Generalized Linear Contextual Bandits. ICML 2017.
4. We replace $\theta_*$-dependent terms with a lower bound (line 1089 in Appendix D.1). As a result, the log determinant of the new Hessian is a lower bound on the original one, and its maximization is a sound and justified way of optimizing the original objective.
5. We also discussed another solution, where the lower bounds on $\theta_*$-dependent terms are replaced with a plug-in estimate of $\theta_*$. To the best of our knowledge, there is no evidence that this would result in a better solution than in our paper, at comparable problem and sample sizes. This is discussed in detail in our previous response.
Based on the above, we believe that your main concern can be addressed by moving all supporting evidence from Appendix D.1 to the main paper and discussing in detail how the two objectives are related.
Sincerely,
Authors
---
Rebuttal 3:
Comment: We thank the reviewer for their response. The reviewer and we agree that the optimal design problem in Section 5 can be solved in two ways:
1. **Method A:** Solve an approximation where $\theta_*$-dependent terms are replaced with a lower bound (line 1089 in Appendix D.1). We take this approach.
2. **Method B:** Solve an approximation where $\theta_*$ is replaced with a plug-in estimate. The reviewer would like us to take this approach.
Let's examine the pros and cons of both approaches.
**Prior works:** Both the reviewer and we pointed to prior works that took our respective preferred approaches. Therefore, both approaches can be justified by prior works. Recent works on preference-based learning, which are the closest related works, seem to prefer Method A. For example, see
>> Principled Reinforcement Learning with Human Feedback from Pairwise or K-Wise Comparisons. ICML 2023.
>> Active Preference Optimization for Sample Efficient RLHF. ICML 2024.
>> Provable Reward-Agnostic Preference-Based Reinforcement Learning, ICLR 2024
Interestingly, in the second paper, the authors analyze an algorithm with a plug-in estimate akin to Method B. However, the practical algorithm in experiments uses an approximation akin to Method A. This indicates that Method B may not be practical or yield enough practical benefits.
**Ease of implementation:** Method A is clearly easier to implement. This is because the plug-in estimate in Method B must itself be computed, which requires solving an additional exploration problem. This also introduces hyper-parameters, such as the number of exploration rounds for the plug-in estimate.
**Theory:** Method A relies on a linear model theory. Method B requires an analysis of how the plug-in estimate concentrates. The logging policy for the plug-in estimate can be quite involved. For instance, the initial exploration in
>> An Experimental Design Approach for Regret Minimization in Logistic Bandits. AAAI 2022.
is over $\tilde{O}(d)$ individual arms, simply to get pessimistic per-arm estimates. The exploration budget is reported in Table 1. The lowest one, for a $3$-dimensional problem, is $6536$. This is an order of magnitude higher budget than in our Figure 1b for a larger $36$-dimensional problem. This indicates that a theoretically-sound design of Method B may be too conservative.
Based on the above discussion, we believe that Method A strikes a good balance between **practicality and theoretical support**. We failed to explain in the main paper how the optimal design in Section 5 is approximated. This was an unfortunate omission on our side and we will fix it. All supporting claims are in Appendix D.1, though. We will also stress that the optimal design in Section 5 is not solved optimally. This is not possible, and therefore we solve it approximately, as all prior works do.
Method B is intriguing because it may perform well with a decent plug-in estimate. The question is whether this would happen within exploration budgets in our paper. To investigate this, we repeat Experiment 2 with $K = 2$ (logistic regression):
* **Dope:** Explore by the policy in (6) for all rounds.
* **Plug-in ($m$):** Explore by the policy in (6) for $m$ rounds. After that, we compute the plug-in estimate of $\theta_*$ using (9) and solve the D-optimal design with it. This policy explores for the remaining $n - m$ rounds. Finally, $\theta_*$ is estimated from logged data from all rounds.
* **OPT:** We solve the D-optimal design with $\theta_*$. This validates our implementation and also shows the gap from the optimal solution.
We report both the prediction errors and ranking losses at $n = 500$ rounds. The gap between Dope and Plug-in was larger for $n < 500$. The results are averaged over $100$ runs.
| | Dope (ours)| Plug-in (m = 400) | Plug-in (m = 300) | Plug-in (m = 200) | Plug-in (m = 100) | OPT |
|-|-|-|-|-|-|-|
| Maximum prediction error | 15.79 ± 1.08 | 19.75 ± 1.48 | 30.52 ± 3.00 | 65.75 ± 13.71 | 100.39 ± 10.72 | 9.22 ± 0.82 |
| Ranking loss | 0.107 ± 0.002 | 0.104 ± 0.003 | 0.103 ± 0.002 | 0.114 ± 0.003 | 0.142 ± 0.003 | 0.092 ± 0.002 |
We observe that the prediction error of Dope is always smaller than that of Plug-in (by a factor of $6$ at $m = 100$). OPT outperforms Dope but cannot be implemented in practice. The major gap between the performances of OPT and Plug-in shows that an optimal design with a plug-in estimate of $\theta_*$ can perform much worse than one with $\theta_*$. Dope has a ranking loss comparable (within margins of error) to Plug-in at $m = 400$ and $m = 300$. Plug-in has a higher ranking loss otherwise. OPT performs the best again.
Based on our discussion and experiments, we do not see any strong evidence for why we should adopt Method B. It would be more complex than Method A, harder to analyze, and we do not see benefits in our experiments. This also follows the principle of Occam's razor, which tells us to design with minimal needed complexity.
---
Summary: The paper considers experiment design for collecting ranking/direct feedback, with an application to fine-tuning language models.
The authors formulate active exploration for fine-tuning as a ranking problem and propose an algorithm that satisfies the standard guarantees for its ranking loss. Experiments on synthetic and real data are provided as a proof of concept, showing that optimal design consistently outperforms random sampling and a number of other vanilla baselines.
Strengths: - The problem setting applies well to the proposed RLHF/AIHF application. Although this is not used in state-of-the-art models, I think the ranking loss gives a strong criterion for fine-tuning.
- The paper is well written. All relevant theoretical guarantees for understanding the algorithm are provided and the rates are compared to previous results.
- The authors provide proof-of-concept experiments on a large-scale AIHF dataset that almost exactly matches their problem setting and demonstrate the benefits of using optimal design.
I think one rarely comes across a paper that manages to deliver on all these criteria.
Weaknesses: - From a purely theoretical standpoint, I am not sure how novel the result is. I am not particularly familiar with the literature on optimal design. However, under a slightly different problem setting (e.g. BAI for linear bandits), the complexity bounds are pretty much common knowledge, even with dueling ($k$-wise or pairwise) feedback.
- Some (recent) related work on dueling bandits with function approximation (linear, kernelized, admissible setting) are missing. Particularly [1] proposes a very similar approach: active exploration by maximizing the $\log\mathrm{det} V_t$ and MLE for estimation. I put the ones I could think of below.
- I think comparing to other algorithms (not just toy baselines) would really strengthen the story and relevance of the paper. The majority (if not all) of the baselines consider dueling regret (or a sub-optimality gap based on it) but can still be used to learn the reward model for the Nectar or the HH experiment [see 1 & 4]. The data/collection model would be different (e.g. no lists) but the algorithms can still be evaluated based on the ranking loss to make them comparable to Dope. I think the current way of instantiating a dueling design within the ranking framework is not realistic.
[1] Nirjhar Das, Souradip Chakraborty, Aldo Pacchiano and Sayak Ray Chowdhury Active Preference Optimization for Sample Efficient RLHF. arXiv preprint, 2024.
[2] Barna Pásztor, Parnian Kassraie, and Andreas Krause. Bandits with Preference Feedback: A Stackelberg Game Perspective. arXiv preprint, 2024.
[3] Johannes Kirschner and Andreas Krause. Bias-robust bayesian optimization via dueling bandits. In International Conference on Machine Learning. PMLR, 2021.
[4] Viraj Mehta, Vikramjeet Das, Ojash Neopane, Yijia Dai, Ilija Bogunovic, Jeff Schneider, and Willie Neiswanger. Sample efficient reinforcement learning from human feedback via active exploration. arXiv preprint, 2023a.
[5] Viraj Mehta, Ojash Neopane, Vikramjeet Das, Sen Lin, Jeff Schneider, and Willie Neiswanger. Kernelized offline contextual dueling bandits. arXiv preprint, 2023b.
[6] Aadirupa Saha. Optimal algorithms for stochastic contextual preference bandits. Advances in Neural Information Processing Systems, 34:30050–30062, 2021
[7] Aadirupa Saha, and Akshay Krishnamurthy. "Efficient and optimal algorithms for contextual dueling bandits under realizability." International Conference on Algorithmic Learning Theory. PMLR, 2022.
[8] Shion Takeno, Masahiro Nomura, and Masayuki Karasuyama. Towards practical preferential bayesian optimization with skew gaussian processes. In International Conference on Machine Learning, pages 33516–33533. PMLR, 2023
[9] Yichong Xu, Aparna Joshi, Aarti Singh, and Artur Dubrawski. Zeroth order non-convex optimization with dueling-choice bandits. In Conference on Uncertainty in Artificial Intelligence. PMLR, 2020
[10] Wenjie Xu, Wenbin Wang, Yuning Jiang, Bratislav Svetozarevic, and Colin N Jones. Principled preferential bayesian optimization. arXiv preprint arXiv:2402.05367, 2024.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. How does the problem of ranking L lists compare to a finite arm (linear) dueling bandit problem with $k$-wise feedback? Is there a reduction from one to another?
2. How does the ranking problem compare to best-arm-identification for contextual bandits? I'm curious if one can formally show an equivalence between the ranking loss and the BAI sub-optimality gap [e.g. in 1, 2]?
3. Is dependence on $\kappa$ in Theorem 6 improvable? I can imagine that $\kappa$ can get really small exponentially fast?
[1] Das, Nirjhar, et al. "Provably sample efficient rlhf via active preference optimization." arXiv preprint arXiv:2402.10500 (2024).
[2] Azizi, Mohammad Javad, Branislav Kveton, and Mohammad Ghavamzadeh. "Fixed-budget best-arm identification in structured bandits." arXiv preprint arXiv:2106.04763 (2021).
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The limitations are discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: We thank the reviewer for detailed feedback and positive evaluation of the paper. We answer all questions below. If you have any additional concerns, please reach out to us to discuss them.
**W1: Novelty in optimal designs**
A good introduction to optimal designs is Chapter 21 in [48]. At a high level, optimal designs are a tool for computing optimal uncertainty reduction policies. The policies are non-adaptive and thus can be precomputed, which is one of their advantages. Adaptive bandit algorithms can be obtained by combining optimal designs and elimination. A good example is Chapter 22 in [48]. Therefore, solving an optimal design opens the door to other solutions. To the best of our knowledge, this is the first work to study optimal designs in ranking problems, with both absolute and ranking feedback.
**W2: Comparison to related works on dueling bandits**
Thank you for the numerous references [1-10] to dueling bandits. To simplify the comparison, we focus on three main differences:
**$K = 2$ versus $K > 2$:** All of [1-10] are dueling bandit papers ($K = 2$) while we study a more general setting of $K \geq 2$.
**Worst-case optimization over lists:** A classic objective in dueling bandits is to *minimize regret with respect to the best arm* from dueling feedback, sometimes in context. This problem can be studied in both cumulative and simple regret settings. The papers [2-3] and [5-10] are of this type. Our goal is to *sort $L$ lists* and the agent controls the chosen list. The works [1] and [4] are closest to our work in this aspect.
**Adaptive versus static design:** All of [1-10] are adaptive designs, where the acquisition function is updated in each round. Dope is a static design where the exploration policy is precomputed. Interestingly, the practical variant of APO in [1] can also be viewed as a static design, because the optimized covariance matrix depends only on the feature vectors of observed lists but not the observations.
**Q1: Reduction of ranking $L$ lists to dueling bandits**
There is no reduction because the objectives are different. A classic objective in dueling bandits is to *minimize regret with respect to the best arm* from dueling feedback. Our goal is to *sort $L$ lists*. One may think that our problem could be solved as a contextual dueling bandit, where each list is represented as context. This is not possible because the context is controlled by the environment. In our setting, the agent controls the chosen list, similarly to APO in [1].
**Q2: Equivalence of objectives with [1] and [2]**
Algorithm APO in [1] is indeed the closest related work and we want to thank you for bringing it up. APO greedily minimizes the maximum error in pairwise ranking of $L$ lists of length $K = 2$. Therefore, it can be used in our setting by applying it to all possible ${K \choose 2} L$ lists of length $2$ created from our lists of length $K$, as described in lines 296-300. We compare to APO next, both empirically and algorithmically.
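The expansion described above can be sketched as follows (our illustration with placeholder item names): each length-$K$ list is turned into its ${K \choose 2}$ pairwise sub-lists.

```python
from itertools import combinations

def expand_to_pairs(lists):
    """Turn L lists of length K into binom(K, 2) * L lists of length 2."""
    return [[lst[j], lst[k]]
            for lst in lists
            for j, k in combinations(range(len(lst)), 2)]

# L = 3 lists of K = 4 items each -> binom(4, 2) * 3 = 18 pairwise lists.
lists = [[f"item_{i}_{k}" for k in range(4)] for i in range(3)]
pairs = expand_to_pairs(lists)
assert len(pairs) == 18
assert all(len(p) == 2 for p in pairs)
```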
**Empirical comparison:** We first report the ranking loss in Experiment 2 (Figure 1b) where $K = 4$:
| | n = 10 | n = 20 | n = 50 | n = 100 |
|-|--------|--------|--------|---------|
| Dope (ours) | 1.1 ± 0.049 | 0.78 ± 0.029 | 0.48 ± 0.017 | 0.32 ± 0.010 |
| APO | 1.5 ± 0.057 | 0.99 ± 0.037 | 0.62 ± 0.022 | 0.48 ± 0.021 |
Next we report the ranking loss on the Nectar dataset (Figure 1c) where $K = 5$:
| | n = 50 | n = 100 | n = 200 | n = 500 |
|-|--------|---------|---------|---------|
| Dope (ours) | 0.51 ± 0.066 | 0.40 ± 0.053 | 0.29 ± 0.038 | 0.19 ± 0.027 |
| APO | 1.00 ± 0.120 | 0.98 ± 0.110 | 0.75 ± 0.095 | 0.73 ± 0.100 |
We observe that Dope has a significantly lower ranking loss than APO. This is for two reasons. First, APO solves the uncertainty reduction problem greedily. Dope solves it optimally, by sampling from an optimal design. Second, APO is designed for $K = 2$ items per list. While it can be applied to our problem, it is suboptimal because it does not leverage the $K$-way feedback that Dope uses.
**Algorithmic comparison:** Dope with ranking feedback (Section 5) can be viewed as a generalization of [1] to lists of length $K \geq 2$. [1] proposes two methods: one that is analyzed and one that is practical. We propose a single algorithm, which is both practical and analyzable, and provide both prediction error (Theorem 5) and ranking (Theorem 6) guarantees.
There is no equivalence of objectives with [2]. Our discussion in lines 207-214 focuses on similarities in high-probability bounds. The dependence on $n$ and $d$ is expected to be similar because the probability of making a mistake in [2], or a ranking error in our work, depends on how well the generalization model is estimated, which is the same in both works.
**Q3: Dependence on $\kappa$ in Theorem 6**
We briefly discuss this in lines 270-275. Theorem 6 is similar to existing sample complexity bounds for fixed-budget BAI in GLMs. In these bounds, the additional complexity of GLMs is captured through term $\kappa$ (Assumption 2), which is a lower bound on the derivative of the mean function. This term is common in GLM bandit analyses, although several recent works tried to reduce dependence on it
>> Improved Optimistic Algorithms for Logistic Bandits. ICML 2020.
>> An Experimental Design Approach for Regret Minimization in Logistic Bandits. AAAI 2022.
We leave tightening of the dependence on $\kappa$ in our bounds for future work.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response and particularly comparison with APO.
I just want to point out the following. I think the dueling bandit setting can be extended to $K$ arms in a less trivial way: by writing the categorical likelihood function and keeping $L$ lists, instead of increasing the number of lists and making them of length $2$. So I am not entirely sure if this is the fairest comparison of the algorithms, but it is indeed still insightful to demonstrate the benefits of $K$-wise vs pairwise feedback in modeling the problem.
Overall, I recommend the paper for acceptance: it delivers on theory, methodology, and real-world experiments. Further, the problem setting is relevant to the considered applications and active fine-tuning of LLMs, making up a truly strong work.
---
Reply to Comment 1.1.1:
Comment: Thank you for the response and for supporting our work!
---
Summary: This manuscript deals with preference models, which are at the crossroads of linear bandits, ranking models, and optimal designs. The authors consider a model where one has to rank $L$ lists of $K$ objects. The expected reward for object $k$ of list $i$ is $x_{i,k}^\top \theta^*$, where $\theta^* \in \mathbb{R}^d$ is some unknown quantity. They consider two feedback models: (i) the "absolute feedback model", where the learner observes the noisy reward, and (ii) the "ranking feedback model", where the learner observes an order of the $K$ items sampled from the celebrated Plackett-Luce model. In the first model, the authors introduce an optimal design with total budget $n$ for estimating the parameter $\theta^*$ and plug in the estimator to bound the error in estimating the $L$ rankings of the $K$ objects. Then, they extend their procedure to the ranking feedback model.
Strengths: 1) This manuscript introduces an extension of the Kiefer-Wolfowitz theorem for building an optimal design for the maximum prediction error in linear regression. Although the proof ideas are quite similar to the classical Kiefer-Wolfowitz theorem, this result seems to be new.
2) This allows them to control both the prediction error and the ranking error under both feedback models. Both errors seem to be order-wise optimal.
3) The virtue of their procedure compared to adaptive ones is that both the experiment design and the computation of the estimators are pretty simple.
4) The authors illustrate the benefits of their procedure compared to e.g. uniform design on small-scale synthetic numerical experiments.
Weaknesses: 1) The purpose of building G-optimal lemmas is to derive tight optimal bounds. However, in this manuscript (Theorems 3--6), the authors only provide order-wise optimal bounds.
1-a) In Theorem 3, the authors establish that the maximum prediction error is of order (up to log) d^2/n. However, for obtaining such a bound, there is no need to rely on optimal design. Simply using some variant of a uniform design would be sufficient.
1-b) In Theorem 4, the authors plug the analysis of the OLS for the ranking purpose. However, there is no reason why a G-optimal design should be optimal for ranking. Indeed, the ranking loss is not a linear function of the prediction loss. Optimizing the ranking error could therefore require quite different designs. For instance, if two items are extremely difficult to compare, then an optimal ranking design should put further emphasis on this comparison than an optimal prediction design.
1-c) In generalized linear models, solving the problem (6) does not necessarily lead to an optimal prediction design. Still, the authors use this approach in Section 5 for ranking feedback. Hence, it is not clear to what extent this design is the most relevant for the prediction purpose.
2) Minor remark: Lemma 2 and Theorem 5 require that n is larger than the number of lists L. This seems to be in contradiction with the regime L >n which is put forward in Section 2.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1) It is not clear to me to what extent the preference models considered in this manuscript have been previously studied. If that is the case, could the authors further discuss this literature? If it is not the case, could the authors explain why the linear assumption with observed $x_{i,k}$ is realistic?
2) It is not clear to me to what extent the rate d^2/n is optimal in Theorem 3. Could the authors discuss this rate?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: As explained in the weakness section, I feel that the limitations of using the G-optimal design for ranking purposes are not discussed enough in the manuscript.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: We thank the reviewer for detailed feedback. We answer all questions below. If you have any additional concerns, please reach out to us to discuss them.
**W1a: Uniform design would suffice to get a $\tilde{O}(d^2 / n)$ rate in Theorem 3**
We respectfully disagree. Consider the following example. Take $K = 2$. Let $x_{i, 1} = (1, 0, 0)$ for $i \in [L - 1]$ and $x_{L, 1} = (0, 1, 0)$, and $x_{i, 2} = (0, 0, 1)$ for all $i \in [L]$. In this case, the minimum eigenvalue of $\bar{\Sigma}_n$ is $n / L$ in expectation, because only one item in list $L$ provides information about the second feature $(0, 1, 0)$. Following the same steps as in Theorem 3, we would get a rate of $\tilde{O}(d L / n)$. A similar observation was also made in prior works on optimal designs, such as [25] and
>> Best-Arm Identification in Linear Bandits. NeurIPS 2014.
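The example can be checked numerically. The sketch below (our illustration, not the paper's code) constructs the expected covariance matrix of a uniform design over these lists and verifies that its minimum eigenvalue equals $n / L$.

```python
import numpy as np

L, n = 10, 100  # number of lists and rounds, d = 3
# Feature vectors of the example: item 1 of lists 1..L-1 is e1,
# item 1 of list L is e2, and item 2 of every list is e3.
lists = [np.array([[1, 0, 0], [0, 0, 1]], dtype=float) for _ in range(L - 1)]
lists.append(np.array([[0, 1, 0], [0, 0, 1]], dtype=float))

# Expected covariance of a uniform design: each list is chosen n/L times
# and contributes the outer products of its two item feature vectors.
Sigma = sum((n / L) * X.T @ X for X in lists)

min_eig = np.linalg.eigvalsh(Sigma).min()
assert np.isclose(min_eig, n / L)  # only list L informs the e2 direction
```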
**W1b: Is the optimal design in Section 4 optimal for ranking?**
We agree with the reviewer that our optimal design may not be optimal for ranking. This is because our ranking bound in Theorem 4 is derived using a prediction error bound. We have not focused solely on the optimal design for ranking because we see value in both prediction (Theorem 3) and ranking (Theorem 4) bounds. The fact that we provide both shows the versatility of our approach. We discuss the tightness of the bound in Theorem 4 after the claim. The dependence on $n$ and $d$ is similar to prior works on fixed-budget BAI in linear models and likely optimal. This is because the probability of making a mistake in those works, or a ranking error in our work, depends on how well the linear model is estimated, which is the same in both settings.
**W1c: Optimal design in Section 5**
The design is motivated by prior works [7, 106], which showed that the uncertainty can be represented and optimized using outer products of feature vectors. The advantage of these formulations is that they do not contain model-parameter terms, which appear in the Hessian of the log likelihood. Therefore, the optimal design can be solved similarly to linear models. The additional complexity of GLMs is captured through term $\kappa$ (Assumption 2), which is a lower bound on the derivative of the mean function. This term is common in GLM bandit analyses, although several recent works tried to reduce dependence on it
>> Improved Optimistic Algorithms for Logistic Bandits. ICML 2020.
>> An Experimental Design Approach for Regret Minimization in Logistic Bandits. AAAI 2022.
We leave tightening of the dependence on $\kappa$ in our bounds for future work.
**W2: Lemma 2 and Theorem 5 require that $n > L$**
We do not think so. Can the reviewer be more specific? Both claims contain maximization over lists, $\max_{i \in [L]}$, and not a summation, $\sum_{i = 1}^L$. While the optimized covariance matrix $V_\pi$ (line 130) involves $\sum_{i = 1}^L$, the $i$-th term is weighted by the probability that list $i$ is chosen. By the Kiefer-Wolfowitz theorem, the probability distribution that solves the problem is sparse: it has at most $d (d + 1) / 2$ non-zero entries.
**Q1: Have our preference models been studied before?**
Yes. The absolute feedback model is a variant of a click model (line 92). Click models have been studied in contextual bandits since [110] and
>> Contextual Combinatorial Cascading Bandits. ICML 2016.
The probability of a click in these papers is a linear function of context and an unknown model parameter. The ranking feedback model is the same as in [106].
**Q2: Optimality of $\tilde{O}(d^2 / n)$ rate in Theorem 3**
A high-probability upper bound on the prediction error in linear models under the optimal design is
$$|x^\top (\hat{\theta}_n - \theta_*)|
\leq \underbrace{\|\hat{\theta}_n - \theta_*\|_{\bar{\Sigma}_n}}_{\tilde{O}(\sqrt{d})} \cdot
\underbrace{\|x\|_{\bar{\Sigma}_n^{-1}}}_{\tilde{O}(\sqrt{d / n})}
= \tilde{O}(d / \sqrt{n})$$
This bound can be derived by combining (20.3) and Claim 3 in Theorem 21.1, both in [48]. The square of the prediction error is bounded as $(x^\top (\hat{\theta}_n - \theta_*))^2 = \tilde{O}(d^2 / n)$. The same rate is achieved in Theorem 3. This is optimal since we bound the squared prediction error for $K$ items ($K$ times more than in a classic linear model) from batches of $K$ observations ($K$ times faster learning). The classic setting can also be viewed as $K = 1$.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. Regarding the problems in the lemmas: you assume in Lemma 2 that $n \pi_*(i)$ is an integer. This is not possible unless $n > L$. This lemma is used in the proof of Theorem 5.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer Chgi
Comment: We thank the reviewer for the quick response. We answer the question below.
Your conclusion would be correct if all entries of $\pi_*$ could be non-zero. However, we prove in Theorem 1 (Matrix Kiefer-Wolfowitz) that $\pi_*$ has at most $d (d + 1) / 2$ non-zero entries (line 142). Note that this is independent of the number of lists $L$. In fact, the claim would also hold for infinitely many lists, similarly to Chapter 21.1 in [48].
A natural question to ask is whether the integer condition in Lemma 2 could be further relaxed. The answer is yes and we will comment on this in the next version of the paper. The key idea is to round each non-zero entry of $n \pi_*(i)$ up to the nearest integer. As an example, if $n \pi_*(i)$ were $3.7$, the number of observations of list $i$ would be $4$. This will clearly result in an integer allocation of size at most $n + d (d + 1) / 2$. All our current claims would hold for any $\pi_*$ and this allocation.
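The rounding step can be sketched as follows (our illustration, with hypothetical numbers): each non-zero entry of $n \pi_*$ is rounded up, so the resulting integer allocation dominates the fractional one and its size exceeds $n$ by at most the support size, itself at most $d(d+1)/2$.

```python
import numpy as np

def round_allocation(pi, n):
    """Round n * pi(i) up on the support of pi to get integer counts."""
    return np.where(pi > 0, np.ceil(n * pi), 0).astype(int)

# A sparse design over L = 8 lists with 4 non-zero entries (made-up values).
pi = np.array([0.37, 0, 0.25, 0, 0.18, 0, 0.20, 0])
n = 10
counts = round_allocation(pi, n)

support = int((pi > 0).sum())
assert (counts >= n * pi).all()     # dominates the fractional allocation
assert counts.sum() <= n + support  # overshoot bounded by the support size
```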
---
Rebuttal 2:
Comment: Thank you for getting back with additional questions.
**Tightness of the bounds**
We agree with the reviewer that our optimal designs may not be optimal for ranking. We have not focused solely on the optimal design for ranking because we see value in both prediction (Theorem 3) and ranking (Theorem 4) bounds. The fact that we provide both shows the versatility of our approach.
We also wanted to comment on our solution to the optimal design problem in Section 5. We minimize the prediction error by approximately solving the problem. Specifically:
1. The log likelihood of the original problem, its gradient, and its Hessian are all stated at the following places: (9) for the log likelihood, line 1097 for the gradient (Appendix D.1), and line 1088 for the Hessian (Appendix D.1).
2. The D-optimal design in the relative feedback setting cannot be solved exactly since the model parameter $\theta_*$ in the Hessian is unknown.
3. To get around this issue in GLMs, a common approach is to eliminate $\theta_*$-dependent terms in the uncertainty model. Many works using GLMs in decision making took this approach,
>> Parametric Bandits: The Generalized Linear Case. NeurIPS 2010.
>> Provably Optimal Algorithms for Generalized Linear Contextual Bandits. ICML 2017.
>> Principled Reinforcement Learning with Human Feedback from Pairwise or K-Wise Comparisons. ICML 2023.
>> Active Preference Optimization for Sample Efficient RLHF. ICML 2024.
>> Provable Reward-Agnostic Preference-Based Reinforcement Learning. ICLR 2024.
4. We replace $\theta_*$-dependent terms with a lower bound (line 1089 in Appendix D.1). As a result, the log determinant of the new Hessian is a lower bound on the original one, and its maximization is a sound and justified way of optimizing the original objective.
5. The penalty for solving the optimal design approximately appears in our claims as a constant of roughly $5$. This is because the norms of $\theta_*$ and feature vectors are all bounded by $1$. We will make this constant explicit in the next version of the paper.
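The approximate design computation in steps 1–5 can be sketched with a standard Frank–Wolfe iteration that maximizes the log determinant of a $\theta_*$-independent surrogate information matrix. The per-list feature matrices `A[i]` and the step-size schedule below are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def d_optimal_design(A, num_iters=200, reg=1e-6):
    """Frank-Wolfe sketch for a D-optimal design over L lists.

    A has shape (L, m, d): for each list i, A[i] stacks m difference
    feature vectors. We maximize log det of the surrogate information
    matrix Sigma(pi) = reg * I + sum_i pi_i * A_i^T A_i over the simplex.
    """
    L, _, d = A.shape
    pi = np.full(L, 1.0 / L)  # start from the uniform design
    for t in range(num_iters):
        Sigma = reg * np.eye(d) + sum(pi[i] * A[i].T @ A[i] for i in range(L))
        Sigma_inv = np.linalg.inv(Sigma)
        # d/d pi_i log det(Sigma) = tr(Sigma^{-1} A_i^T A_i)
        grads = np.array([np.trace(Sigma_inv @ A[i].T @ A[i]) for i in range(L)])
        j = int(np.argmax(grads))  # best vertex of the simplex
        gamma = 2.0 / (t + 2)      # standard Frank-Wolfe step size
        pi = (1 - gamma) * pi
        pi[j] += gamma
    return pi

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 4, 3))  # L = 20 lists, 4 difference rows, d = 3
pi = d_optimal_design(A)
print(round(pi.sum(), 6))  # 1.0 (a distribution over lists)
```

Because the surrogate log determinant lower-bounds the original objective (step 4), maximizing it is a sound proxy, at the cost of the roughly constant factor discussed in step 5.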
**Optimality of $\tilde{O}(d^2 / n)$ rate in Theorem 3**
An upper bound on the prediction error in the linear model, when proved through the Cauchy–Schwarz inequality, is
$$|x^\top (\hat{\theta}\_n - \theta\_{\star})|
\leq \underbrace{||\hat{\theta}\_n - \theta\_{\star}||\_{\bar{\Sigma}\_n}}\_{\tilde{O}(\sqrt{d})} \cdot
\underbrace{||x||\_{\bar{\Sigma}\_n^{-1}}}\_{\tilde{O}(\sqrt{d / n})}
= \tilde{O}(d / \sqrt{n})$$
with a high probability. This bound holds for infinitely many feature vectors. The bound can be tightened for a finite number of feature vectors, $m$, to $\tilde{O}(\sqrt{d / n})$, where the $\tilde{O}$ hides $\sqrt{\log m}$. This can be proved using a union bound over (20.3) in Chapter 20 of [48]. When bounding the square of the error, the above bounds become $\tilde{O}(d^2 / n)$ and $\tilde{O}(d / n)$, where the second $\tilde{O}$ hides $\log m$.
In Theorem 3, we show that the prediction error is $\tilde{O}(d^2 / n)$. This matches the rate in the linear model and holds for infinitely many lists. The extra factor of $d$ can be eliminated by assuming a finite number of lists.
Title: Response to Reviewer | Summary: This paper presents a novel approach for data collection to learn preference models from human feedback. The key innovation is generalizing optimal design, a method for computing information gathering policies, to ranked lists. The authors study both absolute and relative feedback settings, developing efficient algorithms for each and providing theoretical analyses. They prove that their preference model estimators improve with more data, as does the ranking error under these estimators. The work is evaluated on several synthetic and real-world datasets, demonstrating the statistical efficiency of the proposed algorithms. This research contributes to the field by providing a theoretically grounded and empirically effective method for learning preference models from human feedback, with potential applications in areas such as reinforcement learning from human feedback (RLHF) and information retrieval.
Strengths: The paper extends optimal design theory to ranked lists, introducing a novel approach to preference learning. This is evidenced by the generalization of the Kiefer-Wolfowitz theorem to matrices, providing a strong theoretical foundation for the Dope algorithm.
Weaknesses: I believe this is a solid piece of work with no obvious errors. From my perspective, considering the following points could potentially strengthen the paper:
1. In the synthetic experiments (Figures 1a and 1b), the authors only consider the case of L=400 and K=4. Authors should consider different combinations of L and K values to provide a more comprehensive evaluation of the algorithm's performance. In particular, testing cases where K>4 could reveal potential issues arising from the K^6 term in the theoretical analysis.
2. The paper compares the proposed Dope algorithm to several baselines, but it lacks comparison to more recent and sophisticated methods in preference learning and active learning.
Technical Quality: 4
Clarity: 3
Questions for Authors: No.
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: No.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback and for acknowledging that our work is solid. We answer all questions below. If you have any additional concerns, please reach out to us to discuss them.
**Q1: Synthetic experiments beyond $L = 400$ and $K = 4$**
We vary the number of lists and items, $L \in \\{50, 100, 200, 500\\}$ and $K \in \\{2, 3, 4, 5\\}$, and report the ranking loss of Dope in Experiment 2 (Figure 1b):
| | L = 50 | L = 100 | L = 200 | L = 500 |
|-|--------|---------|---------|---------|
| K = 2 | 0.12 ± 0.06 | 0.28 ± 0.12 | 0.37 ± 0.14 | 0.57 ± 0.21 |
| K = 3 | 0.14 ± 0.06 | 0.24 ± 0.10 | 0.37 ± 0.15 | 0.50 ± 0.19 |
| K = 4 | 0.13 ± 0.05 | 0.24 ± 0.08 | 0.35 ± 0.14 | 0.47 ± 0.18 |
| K = 5 | 0.12 ± 0.04 | 0.21 ± 0.08 | 0.34 ± 0.12 | 0.45 ± 0.15 |
We observe that the problems get harder as $L$ increases (more lists to rank) and easier as $K$ increases (longer lists but also more feedback).
**Q2: Empirical comparison to a recent baseline**
We compare Dope to a state-of-the-art algorithm APO from
>> Active Preference Optimization for Sample Efficient RLHF. ICML 2024.
This recent work was suggested by **Reviewer sgSH** and is the closest related work. APO greedily minimizes the maximum error in pairwise ranking of $L$ lists of length $K = 2$. We extend it to $K > 2$ by applying it to all possible ${K \choose 2} L$ lists of length $2$ created from our lists of length $K$, as described in lines 296-300. We first report the ranking loss in Experiment 2 (Figure 1b) where $K = 4$:
| | n = 10 | n = 20 | n = 50 | n = 100 |
|-|--------|--------|--------|---------|
| Dope (ours) | 1.1 ± 0.049 | 0.78 ± 0.029 | 0.48 ± 0.017 | 0.32 ± 0.010 |
| APO | 1.5 ± 0.057 | 0.99 ± 0.037 | 0.62 ± 0.022 | 0.48 ± 0.021 |
Next we report the ranking loss on the Nectar dataset (Figure 1c) where $K = 5$:
| | n = 50 | n = 100 | n = 200 | n = 500 |
|-|--------|---------|---------|---------|
| Dope (ours) | 0.51 ± 0.066 | 0.40 ± 0.053 | 0.29 ± 0.038 | 0.19 ± 0.027 |
| APO | 1.00 ± 0.120 | 0.98 ± 0.110 | 0.75 ± 0.095 | 0.73 ± 0.100 |
We observe that Dope has a significantly lower ranking loss than APO. This is for two reasons. First, APO solves the uncertainty reduction problem greedily. Dope solves it optimally, by sampling from an optimal design. Second, APO is designed for $K = 2$ items per list. While it can be applied to our problem, it is suboptimal because it does not leverage the $K$-way feedback that Dope uses.
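The pairwise reduction used above to extend APO to $K > 2$ (expanding each length-$K$ list into all ${K \choose 2}$ lists of length $2$) can be sketched as follows; the list contents are illustrative:

```python
from itertools import combinations

def expand_to_pairs(lists):
    """Expand each length-K list into all (K choose 2) item pairs,
    so a pairwise algorithm such as APO can consume K-way feedback."""
    pairs = []
    for items in lists:
        pairs.extend(combinations(items, 2))
    return pairs

lists = [["a", "b", "c", "d"]]  # L = 1 list of length K = 4
pairs = expand_to_pairs(lists)
print(len(pairs))  # (4 choose 2) * 1 = 6
```

This reduction preserves all item pairs but discards the joint $K$-way structure of the feedback, which is one reason the extended APO remains suboptimal relative to Dope.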
We will include all new experiments in the next version of the paper. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Nearly Tight Black-Box Auditing of Differentially Private Machine Learning | Accept (poster) | Summary: This paper studies the problem of auditing DP-SGD in the black box threat model, i.e. only black-box access to the last iterate $\theta_T$ of DP-SGD. The paper's main contribution is to show experimentally that pre-training helps to get tighter auditing. The argument behind using this idea is that pre-training makes the average gradient norm of “in-distribution” points small, thus making the target point used for auditing/membership attack more “distinguishable”. The paper explores this worst-case initialisation idea for two datasets, MNIST and CIFAR10, and reports tighter privacy lower bounds.
Strengths: - Well-motivated problem. Tight black-box auditing is still an open problem compared to the white-box setting.
- Extensive experimental exploration of the effect of different hyperparameters on the auditing procedure.
- The paper provides confidence intervals on the empirical privacy budget $\epsilon$.
Weaknesses: - The main contribution of this work is incremental. In addition, as noted in the paper, the idea that the initialisation of $\theta_0$ affects auditing/membership inference has already been reported in the literature [10,18,26].
- The justification for using pre-training is hand-wavy and not rigorous. See the question below.
- No code is available to reproduce the results. The experimental setting is well-detailed. However, some important factors for reproducing the results have not been reported. See questions below.
Technical Quality: 2
Clarity: 3
Questions for Authors: - Worst-case vs average-case initialisation: (a) For the worst-case, $\theta_0$ is selected by pretraining on half of the dataset, but what algorithm is used for that? Is it non-private SGD? And what is the random distribution used to generate $\theta_0$ for the average case?
- In Figure 5, the average-case initialisation seems to give tighter privacy and lower bounds, especially for small epsilon. This contradicts this work's main argument, i.e., using worst-case initialization is better. How do you explain this?
- What canary strategy is used for the black-box audit? Do you only use the blank example? Did you test any other black box canaries?
- The justification behind the reported success of pretraining is that the average gradient norm decreases with the number of pre-training epochs, reported in Figure 2. (a) Is the average of gradients computed on all samples (training + test) or only training samples? (b) Why does this correlate with the success of your membership attack? Specifically, the MIA attack thresholds over the loss of the canary, so this seems to be the statistic that should be analysed over the pre-training epochs. The average gradient norm is indicative for white-box attacks. (c) Pretraining decreases the average gradient norm of "in distribution" samples, so how would you adapt your auditing procedure for datasets where the empty sample, i.e. your canary, is "in distribution"?
- What are the loss thresholds used for the black-box membership attack? Are they fine-tuned to find the best privacy budget? Also, the function EstimateEps() is never explicitly detailed in the paper to reproduce the results.
- Is it not already possible to use [32]’s framework to run directly your black-box auditing in one run?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: As noted by the authors in the conclusion, the auditing procedure is computationally heavy and time expensive.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer JVDo for their helpful and constructive feedback. In the following, we address their comments and questions and clarify how we believe we can address their concerns regarding weaknesses.
> **Contributions of crafting worst-case initial model parameters**
Although the general idea of looking at initialization was indeed already reported in [10, 18, 26], these works only looked at tweaking the randomization of initialization. Instead, we focus on specifically crafting worst-case initial model parameters, which had not been considered before. We refer the Reviewer to Point 1. of the Global Author Rebuttal comment for a more detailed comparison with prior work.
> **Experiments’ details**
We thank the reviewer for highlighting the gaps in the details of our experimental analysis. In addition to addressing the reviewer’s questions below, please note that we will make the code publicly available with the camera ready; in the meantime, we would be happy to share the code anonymously upon request.
**Answers to Questions**
> **Worst-case vs Average-case initialization**
We refer the Reviewer to Point 3. of the Global Author Rebuttal comment for details regarding crafting worst-case initial model parameters (i.e., pre-training on half of the dataset). For the random distribution, we follow [18] and use Xavier initialization, which we will clearly explain in Section 5.
> **Discrepancies in Figure 5**
We note that the discrepancy is due to the small number of models and refer the Reviewer to the Point 2. of the Global Author Rebuttal comment, where we explain this in detail.
> **Canary strategy**
We use the blank sample when auditing the full model and the ClipBKD sample [18] when auditing the last-layer only fine-tuning. We experimented with both samples (along with a “one-hot” sample) and used the sample with the tightest audits for both settings. We note that ClipBKD performed better for last-layer only fine-tuning as it was particularly designed and tested for this setting. We will clarify this in the revised version of the paper in Section 5.1.
> **Success of pre-training strategy**
In Figure 2, (a) the average of the gradient norms is only computed on training samples. (b) The gradient norms of other samples are important to the success of our attack as they represent the expected change in the loss function due to the other samples during training. Specifically, when the gradient norms of other samples are close to 0, the loss function is mainly impacted by the presence of the target sample only, making the loss of the target much more distinguishable (when the target sample is present vs. when it is not). Put simply, the gradient norms indicate the level of “noise” in the dataset compared to the “signal.” Note that while prior work [18, 26] does aim to maximize the loss of the target sample by carefully crafting the target sample, this has not been enough to achieve tight audits. In our work, we find that we are able to explain (part of) this gap by using the gradient norms of the other sample taken as “noise.” We hope this provides additional intuition as to why our approach works and will clarify this in Section 5.2 as well. (c) Even if the empty sample is “in distribution,” we can mislabel the sample to make it “out of distribution” and use the mislabelled sample as the target sample, which is also a common method used in prior work.
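The signal-vs-noise intuition above can be illustrated with a toy version of the black-box distinguishing test: the attacker thresholds the canary's loss to decide membership, and low "noise" from the other samples makes the two loss distributions well separated. The Gaussian loss model below is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical canary losses over repeated training runs: when the other
# samples' gradients are near zero, including the canary noticeably lowers
# its final loss, separating the two distributions.
loss_in = rng.normal(0.5, 0.2, 1000)   # canary included in training
loss_out = rng.normal(1.5, 0.2, 1000)  # canary excluded

threshold = 1.0  # predict "member" if the observed loss is below threshold
tpr = float((loss_in < threshold).mean())
fpr = float((loss_out < threshold).mean())
print(tpr > 0.95 and fpr < 0.05)  # well-separated losses => a strong attack
```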
> **Loss thresholds**
As mentioned in Section 4, the loss thresholds are indeed fine-tuned to maximize privacy leakage. We will explicitly include the EstimateEps function (in the main body if there is space and in the appendix otherwise) and also release the code publicly for reproducibility.
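While the paper's EstimateEps function is not reproduced here, auditing works commonly convert the attack's false-positive and false-negative rates into an empirical $\varepsilon$ lower bound via the DP hypothesis-testing relation; the function below is a hedged illustration of that standard computation, not the paper's exact code:

```python
import math

def estimate_eps(fpr, fnr, delta=1e-5):
    """Empirical epsilon from a membership test's error rates, using the
    DP hypothesis-testing inequality FNR >= 1 - delta - e^eps * FPR
    (and its symmetric counterpart) rearranged for eps. Thresholds are
    typically tuned to maximize this quantity."""
    candidates = []
    if fnr > 0 and (1 - delta - fpr) > 0:
        candidates.append(math.log((1 - delta - fpr) / fnr))
    if fpr > 0 and (1 - delta - fnr) > 0:
        candidates.append(math.log((1 - delta - fnr) / fpr))
    return max(candidates, default=0.0)

# E.g. FPR = FNR = 0.1 with delta = 0 gives log(0.9 / 0.1), roughly 2.2.
print(round(estimate_eps(0.1, 0.1, delta=0.0), 2))  # 2.2
```

In practice, tighter audits also replace the raw error rates with Clopper-Pearson confidence bounds to obtain a statistically valid lower bound on $\varepsilon$.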
> **Using [32] for auditing in a single run**
We thank the reviewer for making the point that our method can potentially be combined with [32] to audit using a single training run, as also briefly mentioned in Section 6. However, this may not be trivial as a large number of target samples with potentially large gradient norms are used in [32]. This might interfere with our approach to reduce the contribution of other samples. Therefore, we leave this to a future paper to investigate this approach in more detail. In the revised version of the paper, we will additionally state these challenges of combining our work with [32].
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for addressing most of my concerns. I updated my score accordingly.
---
Reply to Comment 1.1.1:
Comment: We once again thank reviewer JVDo for their time and updating their score to reflect their concerns being addressed. | Summary: The paper considers the problem of auditing DP-SGD, i.e. deriving empirical lower bounds on the DP parameter $\epsilon$. Many previous auditing works operated in the white-box setting, i.e. are allowed to choose arbitrary gradients for the canary (sensitive example) during DP-SGD, or were in the black-box setting (i.e. could determine some aspects of training, but not specific per-round gradients, and also does not get to see intermediate states) but had a sizeable gap between the empirical $\epsilon$ and the theoretical upper bound. The paper gives a black-box auditing procedure that in some cases achieves empirical $\epsilon$ very close to the theoretical upper bound on standard benchmarks of MNIST and CIFAR10. The main algorithmic insight is to use a model pre-trained on in-distribution data, which causes the gradients of in-distribution examples to be quite small even in the first round of private training, hence increasing the impact of the canary on training. In contrast, previous work used 'average-case' initializations without optimizing for their impact on the empirical $\epsilon$. The authors perform a number of experiments using this improvement, on MNIST and CIFAR, varying whether they train a full model vs the last layer, and varying the dataset size and clip norm. The authors' audit generally improves upon the baseline of an average-case initialization, sometimes as much as a 5x increase in the empirical $\epsilon$.
Strengths: * The paper provides the first audits which come close to matching the theoretical upper bounds while operating in a black-box setting. In particular the empirical improvement over past black-box work is incredibly strong, and the qualitative improvement over white-box audits is important given the threat model the authors use is much more realistic.
* The empirical results are especially impressive given the number of training runs the authors use (which is much smaller than some other auditing works) and the fact that the change needed to achieve the increase in empirical $\epsilon$ is relatively lightweight (a single pre-training run).
* The authors do a good job explaining intuition for why changing different aspects of the training pipeline affects the empirical $\epsilon$ which makes it more likely others can build upon this intuition in practice or in future work, and the intuition is reflected in the empirical results.
Weaknesses: * In some cases we may want to do privacy auditing not to e.g. find bugs in an existing DP-SGD pipeline, but to support the claim that a model has much better privacy guarantees than the worst-case theoretical $\epsilon$ we are reporting. In this case, taking half of the dataset and using it as public pre-training data is a privacy violation, i.e. it is unclear how to extend the authors' insights to this application of auditing. This might be a limited weakness, as anyway in practice we are probably going to try to pre-train on a distribution as close to the private distribution as possible, which might retrieve the benefits the authors observe.
* The improvements in Figure 5 don't seem to be very significant, and for eps = 1.0 the average empirical epsilon of average-case initialization is in fact double that of worst-case initialization. This did not affect my score as I understand the authors may not have access to enough compute budget to narrow the confidence intervals, and also I don't think they should be punished for reporting insignificant results, but I think the improvement here is somewhat overstated in the paper.
Technical Quality: 4
Clarity: 3
Questions for Authors: * Do the authors believe the results in Figure 5 would become significant with more trials? i.e. do the authors have some intuition for why an initialization might matter more or less for last-layer-only fine-tuning than for full-model training? If so, it might be nice to include this in the paper.
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer uyKT for their helpful feedback. While their questions are hopefully addressed in Point 2. of the Global Author Rebuttal comment, we address their comments regarding weaknesses here.
> **Pre-training on another distribution**
We apologize as we are not entirely sure what the reviewer is referring to in the first point of the weaknesses in their review. We would be happy to address it if they could kindly clarify. Do they mean to say that it is unclear how our work would apply to the setting where a different dataset (instead of half of the target dataset) is used to craft the initial model parameters?
If so, we believe that it would depend on how close the distributions of the other dataset are to the target dataset. Naturally, datasets with closer distributions would yield tighter audits. However, as we focused on achieving the tightest possible audits within the black-box setting in this work, we have not looked into this aspect yet. Nevertheless, we think it would be an interesting aspect to look into for future work and we will explicitly add this in Section 6.
> **Insignificant improvements in Figure 5**
We believe that one of the reasons why there are not any significant improvements to the audits in this setting is that the audits are already nearly tight when using average-case initial model parameters. For instance, at theoretical $\varepsilon = 10.0$, the $\varepsilon_{emp} = 7.69$ for average-case initial model parameters already. Therefore, there is not much room for improvement in this setting, resulting in comparable audits between the average-case and worst-case initial model parameters.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response.
The first weakness is a limited one, so I do not think the authors need to stress about it. Yes, the high-level point is that having in-distribution pretraining data is not feasible in every problem setting, e.g. in the use-case of https://arxiv.org/pdf/2302.03098 who explore auditing the training pipeline in the same run used to generate the final model. I agree with the authors that for the problem setting considered in the paper, having in-distribution public data is a fine assumption.
---
Reply to Comment 1.1.1:
Comment: We once again thank reviewer uyKT for their time and constructive feedback. We agree with the reviewer that the first weakness is a limited one and thank the reviewer for acknowledging our rebuttal. | Summary: This paper proposes a new method for auditing DP-SGD in a black-box setting, where the auditor can only see the final parameters of the model (rather than intermediate steps). The main idea is to select worst-case initial model parameters; this seems to give a substantial advantage to the black-box analysis.
Strengths: The idea of selecting worst-case initial parameters for auditing is clever. Empirically, the proposed method strongly outperforms the chosen baseline.
This work shows that a (partially) black-box adversary isn't much weaker than a white-box one; this demonstrates that DP analyses (which assume the latter) may not be underestimating privacy too much.
The paper is well-presented.
Weaknesses: # Motivations
I think the authors should refine their motivations. They use two: 1) their method is good for detecting bugs, and 2) it provides insights into DP-SGD analyses.
1. Bugs.
The ultimate purpose of _black-box_ auditing is unclear. There are two sides to this discussion. On the one hand, if our goal is to audit a DP-SGD implementation, surely we should be making all the worst-case assumptions (i.e., give as much advantage to the auditor as possible) to look for bugs. On the other hand, if our goal is to evaluate real-world attacks against DP-SGD, then it does make sense to consider a black-box assumption. The assumptions this paper makes are a bit of a hybrid: it assumes black-box for auditing, yet the adversary is allowed to specify the initial model parameters. The only semi-practical scenario I can think of is an attacker who creates a model (for subsequent fine-tuning to be carried out by the victim) and wants to augment the leakage in the fine-tuned version of the model. Aside from this scenario, I would recommend leaving out from this paper any motivation arguing that this work is more practical than other auditing methods for finding "bugs"; please refer to "Questions" for a concrete example.
2. New insights.
The authors do provide one very convincing motivation: a big question is whether DP analyses of DP-SGD could be improved if they considered it as a black-box algorithm; currently, they all assume knowledge of intermediate gradients. This work offers some hope: from the analysis, it seems that the difference between black-box and the theoretically estimated "epsilon" may be closer than expected. If I may, I would strongly recommend the authors to center their motivations on this aspect.
# Comparison
I could not find a real comparison to previous auditing methods. Only two numbers appear (within the Datasets paragraph), reporting the performance of previous works; were these computed under the same experimental setting (and code base)? Ideally, they should be replicated under identical conditions to yours.
Secondly, it'd be very interesting to see what similar white-box methods achieve, in comparison. How tight are they, on those datasets/models?
Technical Quality: 4
Clarity: 3
Questions for Authors: - "Although techniques to tightly audit DP-SGD exist in literature [26, 27], they do so only in active white-box threat models, where the adversary can observe and insert arbitrary gradients into the intermediate DP-SGD steps. However, real-world adversaries cannot always access the model’s inner parameters, let alone insert arbitrary gradients."
As argued above, I don't think this is a valid premise. There's nothing in auditing that requires modelling a real-world adversary. Further, the assumptions that you make (e.g., worst-case initial parameters) do not correspond to a real-world adversary. I think the implications of what you claim here are incorrect, and I don't think it should be part of your motivations.
- Threat model. It does look weird to see a threat model section in an auditing paper, and I would humbly recommend removing it: DP doesn't specify any particular threat model, and auditing is just about ensuring how tightly the definition is being matched empirically. If you do want to keep this section, please note that it's missing the assumption that the auditor (adversary?) can specify the initial model parameters; also, I was unclear as to what the following meant in this context: "assume that the adversary can choose a worst-case target sample as is standard for auditing DP mechanisms".
- "we set B=N".
Can you please clarify whether this is a constraint of your method, and if so what are the potential limitations?
- Sentence "When auditing a DP mechanism [...]". Please define the symbol $R$. (It's clear what it is, but it's undefined.)
- "Crafting worst-case initial parameters". This should be one of the main contributions, but unfortunately the paragraph doesn't provide much information on how exactly these parameters are created. Can you please provide more information in the text (and in rebuttal)?
- Could you please double check: did your cluster only have 1 CPU core?
- Typo "is is".
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 2
Limitations: These were adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer ncD1 for their helpful feedback. In the following, we address their comments and questions.
> **Motivation**
Thank you very much for your comments re. our motivation. Indeed, our work is primarily focused on the “New Insights” type of motivation, i.e., determining if the analysis of DP-SGD can be improved if considered as a black-box algorithm. To that end, we will follow the Reviewer’s recommendation to center our motivation around this aspect. We will also follow the Reviewer’s recommendation to leave out any motivation that argues about the “practical” aspect of black-box auditing.
> **Comparison to prior work**
We note that previous work [10, 18, 26] considered the setting with subsampling (i.e., $B \neq N$) whereas we consider full batch gradient descent ($B = N$). Nevertheless, we do present a direct comparison with [10, 26], which is the "average-case $\theta_0$" setting – they, too, use the blank sample and loss to carry out the audit. Although we did initially experiment with [18] (i.e., using the ClipBKD sample), we did not find that, in our setting, this significantly improved upon the results presented in "average-case $\theta_0$" (probably because [18] was only designed for and tested on Logistic Regression and Fully Connected Neural Networks) and thus left it out of the paper. In Section 5, we will make it clearer that the "average-case $\theta_0$" setting provides a direct comparison to [10, 26].
**Answers to Questions**
> **Threat model paragraph**
We included this paragraph to delineate the differences between what was done in prior work (active white-box) and what we do (black-box). This also mirrors writing in prior papers [26, 28]. Nevertheless, we will include the worst-case initial parameters assumption in the threat model (and thank the reviewer for suggesting this).
> **Worst-case target sample**
We consider the setting where the rest of the dataset is chosen randomly (“natural dataset”), but the target sample is specifically chosen by the adversary and is not a “natural” sample. To make this clearer, we state that we “assume that the adversary can choose a worst-case target sample as is standard for auditing DP mechanisms.” We hope this clarifies the reviewer’s question.
> **Clarifying $B = N$**
We note that setting $B = N$ is not a constraint of our method; rather, it aims to make auditing easier and is done commonly [26]. Nonetheless, we note that recent work (Cebere et al., 2024) suggests that the privacy analysis when $B \neq N$ might be complicated, and even in stronger threat models, tight auditing may be difficult to achieve.
> **Clarifying $R$**
We will clarify in Section 3.3 that $R$ refers to the number of models/outputs.
> **Details on worst-case initial model parameters**
We refer the Reviewer to Point 3. of the Global Author Rebuttal comment and will clarify this further in the text.
> **Typos**
We apologize for the typos and will fix both of them in the text. We used 36 cores in the cluster (we will also clarify this).
**References**
Cebere, T., Bellet, A., & Papernot, N. (2024). Tighter Privacy Auditing of DP-SGD in the Hidden State Threat Model. arXiv:2405.14457.
---
Rebuttal Comment 1.1:
Comment: We once again thank reviewer ncD1 for their time and constructive feedback. As the discussion period draws to a close, we hope our rebuttal has sufficiently addressed the reviewer's concerns and would be grateful to hear the reviewer's thoughts on our comments and clarifications. | Summary: The paper shows that privacy auditing is tight in the threat model where the model initialization is adversarially selected. They find that the decrease in gradient norms over the course of training helps improve the "signal to noise" ratio of the auditing example.
Strengths: The paper's main finding is useful and helps the privacy auditing community get a bit closer to understanding privacy leakage.
The authors propose a reasonable hypothesis for why their approach improves on prior work.
Weaknesses: The general finding that "when other examples' contributions are smaller, auditing is tighter", has been a feature of other papers on auditing, even in the adversarial dataset experiments from Nasr et al. 2021 or the design of ClipBKD in Jagielski et al. 2020.
The paper's claims of "near tightness" seem somewhat exaggerated - there are several parameter settings where the proposed auditing does not even appear to be better than prior work, and all experiments show a rather large gap.
Given that the batch size is an important detail for auditing, the setting of B = n should be mentioned in the main body of the paper.
The experiments are unfortunately on rather small models, making it difficult to know how generalizable the strategy is to more recent advances in DP-ML training.
I would appreciate experiments on other datasets - several conclusions are drawn that seem dataset specific (e.g. dataset size), and it would be useful to know if there are more general trends in these.
Technical Quality: 3
Clarity: 3
Questions for Authors: What is the gradient norm of the auditing example in your experiments? I wonder if at C=10, the gradient norm is not large enough to "max out" the full clipping norm.
Did you retune hyperparameters when changing the values of clipping norms?
Given the hypothesis that lower gradient norm of other examples leads to tighter audits, have you tried explicitly constructing an initialization where the other gradients are exactly 0? Perhaps even just nonprivate SGD on the private training data will get there.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: I think the batch size should be better highlighted, but limitations related to model architectures, datasets, auditing trials are all well addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer qSii for their helpful feedback. In the following, we address their comments and questions.
> **When other samples’ contributions are smaller, auditing is tighter**
While the overarching intuitions are similar, our strategy is appreciably different from prior work [18, 27], as we also discuss in Point 1. of the Global Author Rebuttal comment vis-à-vis the novelty of crafting worst-case datasets. In the revised version of the paper, we will discuss this further to clarify how, despite the similarity of the intuitions, our work overcomes some significant challenges compared to prior work.
> **Near tightness claims**
Thank you for your comment. We agree that, in some settings, especially at low(er) $\varepsilon = 1.0$ and $2.0$, it is not entirely clear whether our auditing method improves upon prior work. However, we note that this is mainly due to computational limitations. In fact, there are significant improvements for larger $\varepsilon = 4.0$ and $10.0$ on both the MNIST and CIFAR-10 datasets (e.g., for MNIST, from $\varepsilon_{emp} = 3.41$ in prior work to $6.48$ in our work). This suggests that computational power is a limiting factor: lower epsilon values correspond to FPR/FNR rates that are very close to the random baseline, which would, in turn, require more models to estimate accurately. Nevertheless, in the final version of the paper, we will tone down our claims of “near tight” audits and specify in which settings the audits are appreciably tighter than prior work.
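For context, the empirical epsilon in this style of audit is commonly derived from the attack's false-positive/false-negative rates via the standard hypothesis-testing bound of Kairouz et al. (2015); below is a minimal illustrative sketch (our own helper, not the paper's code):

```python
import math

def eps_lower_bound(fpr, fnr, delta=0.0):
    """Empirical epsilon lower bound from an attack's FPR/FNR.

    Any (eps, delta)-DP mechanism forces FNR >= (1 - delta - FPR) / e^eps
    (and symmetrically with FPR/FNR swapped), so observed attack rates
    bound eps from below.
    """
    candidates = [0.0]
    if fnr > 0 and 1 - delta - fpr > 0:
        candidates.append(math.log((1 - delta - fpr) / fnr))
    if fpr > 0 and 1 - delta - fnr > 0:
        candidates.append(math.log((1 - delta - fnr) / fpr))
    return max(candidates)

# e.g., FPR = FNR = 1% gives eps >= ln(99) ~ 4.6
```

Near the random baseline (FPR ≈ FNR ≈ 0.5) the bound collapses to 0, which is consistent with the point above that small theoretical epsilons require many more models to audit tightly.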
> **Stating $B = N$ in the main body**
Currently, the setting of $B = N$ is stated in Section 3.2, but we will emphasize/clarify this again in Section 5.
> **Generalizability to deeper models**
Unfortunately, due to both the number of models required for the audit and the large batch sizes, tightly auditing modern deep models (e.g., WideResNet) in the black-box setting is computationally (very) expensive. However, our method is inherently general and does not depend on the model architecture. Furthermore, crafting our worst-case initial parameters is efficient, as the model only has to be pre-trained once. Therefore, in theory, we do not see any reason that would prevent the generalizability of our results other than very significant computational costs. In future work, we will look into auditing deep neural networks as soon as we have access to more compute or can improve the efficiency of the underlying auditing techniques.
> **Experiments on other datasets**
Thank you for this comment. We did indeed think about additional datasets but ultimately settled on MNIST and CIFAR-10, which are considered benchmark datasets for auditing DP-SGD [10, 18, 27]. Moreover, the trends observed (e.g., smaller datasets yielding tighter audits) are the same across both MNIST and CIFAR-10 – due to the different complexities of the datasets, only the scale of the “impact” differs. Nevertheless, on Reviewer qSii’s request, we could add experiments on the FMNIST dataset as well (we are confident this will confirm the same conclusions we draw from MNIST/CIFAR-10).
**Answers to Questions**
> **Gradient norm of auditing sample**
On average, across all iterations, the gradient norm of the auditing sample, before clipping, is $10.5$ for MNIST and $150.4$ for CIFAR-10. Therefore, we do not believe that this is a “maxing out” issue.
> **Re-tuning hyper-parameters for clipping norms**
The hyper-parameters were re-tuned for the clipping norms.
> **Explicitly constructing initialization with 0 gradient norm**
Unfortunately, even if the other gradients’ norms are exactly 0 at iteration 0, they will no longer be 0 in future iterations, as the model weights become “corrupted” by the noise addition step [27]. Therefore, to the best of our knowledge, it is not possible to have an initialization that keeps the other gradients’ norms at 0 throughout training. Nevertheless, one setting we did consider (but did not include in the paper) is the worst-case neighboring datasets setting, where $D = \emptyset, D’ = \\{(x, y)\\}$, which is numerically equivalent to what you suggest (the gradient norm of the empty dataset will be 0). While the audits were significantly tighter in this setting ($\varepsilon_{emp} \approx 9$ at theoretical $\varepsilon = 10.0$ for both MNIST and CIFAR-10), we did not include it in our final paper since this setting has limited real-world value.
---
Rebuttal Comment 1.1:
Comment: We once again thank reviewer qSii for their time and constructive feedback. As the discussion period draws to a close, we hope our rebuttal has sufficiently addressed the reviewer's concerns and would be grateful to hear the reviewer's thoughts on our comments and clarifications. | Rebuttal 1:
Rebuttal: We thank the reviewers for their insightful feedback and suggestions. In this message, we clarify points mentioned by multiple reviewers. We also address each reviewer’s concerns separately in individual comments.
**1. Novelty of worst-case initial model parameters (qSii, JVDo)**
As also noted by Reviewers ncD1 and uyKT, one of the main contributions of our paper is the novel approach of crafting worst-case initial model parameters, which results in a strong improvement over prior work. While our method is built on similar intuitions (e.g., reducing the contributions of other samples), it is significantly different from prior work [18, 27] and much more effective.
More precisely, while [18] does look at initialization, it focuses on tweaking the randomness of initialization rather than specifically crafting worst-case initial model parameters, thus resulting in looser audits. Furthermore, [27] crafts an adversarial worst-case dataset but sets the learning rate to 0, which destroys the model’s utility.
Designing an effective adversarial strategy that provides tight audits in the black-box setting without destroying utility has remained an open problem. Our work overcomes this challenge through a novel strategy involving crafting worst-case initial model parameters. As also detailed in our comments to each review, in the revised version of the paper, we will clarify our motivation and contribution more clearly while toning down certain claims.
**2. Discrepancies in Figure 5 (uyKT, JVDo)**
The last-layer-only fine-tuning setting is a common setting studied both when auditing DP-SGD and for training private models with DP-SGD. This motivates us to verify whether our method offers improvements in auditing for this setting as well.
As pointed out by Reviewers uyKT and JVDo, we acknowledge that the improvements in this setting are fairly limited.
We also agree that some discrepancies in Figure 5 were not well explained in the text. We provide a brief explanation here and will explain this clearly in the revised manuscript. Specifically, the mean of $\varepsilon_{emp}$ at theoretical $\varepsilon = 1.0$ appears tighter for the average-case initial model parameters than for the worst-case ones. However, the two values lie within each other's confidence intervals, so we do not consider this to mean that average-case initialization performs better in this setting. Rather, it is an artifact of auditing with a small number of models (200). In fact, in preliminary testing with the number of models increased to 1,000, we find $\varepsilon_{emp} = 0.35 \pm 0.38$ for average-case and $0.51 \pm 0.52$ for worst-case initial model parameters, showing that there is in fact no discrepancy in Figure 5. We will explain this clearly in the text and will increase the number of models used to audit this setting, provided we have access to sufficient compute within the timeframe.
**3. Details on worst-case initial model parameters (ncD1, JVDo)**
We thank the Reviewers for highlighting that some details (e.g., optimizer) are missing.
For MNIST, we pre-train the model on half of the dataset using non-private SGD for five epochs with batch size 32 and a learning rate of 0.01.
For CIFAR-10, we pre-train the model first on the CIFAR-100 dataset (there is a typo in the text “ImageNet-32” => “CIFAR-100”, which we will fix) for 300 epochs with batch size 128 and learning rate 0.1 using non-private SGD. We further fine-tune the model on half of the CIFAR-10 dataset using non-private SGD for 100 epochs with batch size 256 and learning rate 0.1.
We will make the details more explicit by moving the text from Appendix A to Section 4. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
A Unified Principle of Pessimism for Offline Reinforcement Learning under Model Mismatch | Accept (poster) | Summary: This paper proposes an algorithm that tackles the offline RL task (for tabular state and action space) under two key challenges (i) the mismatch between the environment dynamics used to generate the dataset and the environment used to run the learned policy (for evaluation), and (ii) the bias in the data-collection procedure where for certain (s, a) pairs there are only a few samples in the dataset. In contrast to prior approaches that tackle the two problems separately, this paper proposes a unified solution to both problems under the distributionally robust MDP framework.
Intuitively, distributionally robust MDP aims to learn a policy/value function with the best possible expected return given the worst possible environment dynamics in a set of distributions. Specifically, the set contains all distributions that are close (i.e., smaller than $R$) under some metrics (this paper considered total variation, chi-squared divergence, and KL divergence). This paper also takes into account the scarcity of data for certain $(s, a)$ pairs by changing the fixed bound $R$ to an adaptive bound $R + K_{s}^{a}$, where $K_{s}^{a}$ depends on the visit count of $(s, a)$ in the offline dataset.
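The adaptive bound described above can be sketched in a few lines; note that the $1/\sqrt{N(s,a)}$ shape and the constants below are illustrative assumptions (the paper's exact $K_{s}^{a}$ depends on the divergence and the concentration bound used), not the paper's formula:

```python
import numpy as np

def enlarged_radius(R, counts, num_states, delta=0.05):
    """Illustrative adaptive uncertainty radius R + kappa(s, a).

    kappa shrinks as O(1 / sqrt(N(s, a))): rarely visited state-action
    pairs get a larger uncertainty set, i.e., a more pessimistic
    evaluation. Constants here are assumptions for illustration only.
    """
    counts = np.maximum(np.asarray(counts), 1)           # avoid division by zero
    kappa = np.sqrt(np.log(num_states / delta) / counts)
    return R + kappa
```

With such a radius, pairs with few samples are treated pessimistically even when the model-mismatch radius $R$ alone is small, which is how the two sources of uncertainty are folded into a single set.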
With this new formulation, the authors derive high-probability lower bounds on the optimality of the learned value functions under three divergences: total variation, chi-squared divergence, and KL divergence. The bounds appear to be tighter than those in prior work.
Strengths: This paper proposes a theoretically useful offline RL algorithm that can handle model misspecification errors and data scarcity in a unified perspective. Specifically, it turns the task into a canonical algorithm for distributionally robust MDP. This can significantly simplify the analysis and inspire future work to find better connections between the two challenges in offline RL.
The paper contains a thorough introduction and comparison with prior work, which makes it much easier to see the contributions of this paper.
Weaknesses: One major weakness of the proposed algorithm is its practicality. The proposed algorithm relies on tabular state and action representations and involves solving robust Bellman equations, which could be prohibitively hard in practice. Also, computing certain statistics requires recording state-action visitation counts, which is often impractical.
It would be nice if the author can demonstrate potential ways to apply the algorithm in practice. For example, can the algorithm be used together with function approximations?
Technical Quality: 3
Clarity: 3
Questions for Authors: Does the proposed method achieve better empirical performance on tabular environments compared to baselines?
Are there potential ways to make the algorithm more practical?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors discussed certain limitations in Section 6.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the helpful and insightful feedback. In the following, we provide point-to-point responses to the weaknesses and questions.
**Question 1. Empirical performance in tabular environments** We first refer the reviewer to Section A of the appendix, where we compare our algorithms with two baselines. Additional results for large environments are provided in the rebuttal PDF, Fig. 2. Our method outperforms the non-robust method and demonstrates performance similar to the LCB approach [35], consistent with our theoretical results. Furthermore, as shown in Fig. 1 of the rebuttal PDF, in the tabular setting, our algorithms have a lower execution time and computational complexity than the LCB baseline. Hence we claim that our algorithms perform better than the baselines in tabular environments.
It is also important to note that there is no practical implementation of the baseline [5] due to the NP-hardness of their models. Consequently, we did not include a comparison with [5] (please also see our response to Reviewer 2 Weakness 2).
**Weakness \& Q2. Extend to practical settings.** We want to emphasize that our major contribution is to develop a methodological framework for offline RL with model mismatch, which incorporates both uncertainties from the limited dataset and model mismatch as a **single** distributional uncertainty set and tackles them through a unified DRO framework. To better illustrate our novelty and technical contributions, we present our results in the tabular setting, where we show the sample complexity of our algorithms improves upon or matches the SOTA.
However, our method can be easily extended to large-scale problems with function approximation. For example, in d-rectangular robust linear MDPs [26,5], the nominal transition kernel is the inner product of some d-dimensional feature functions $\phi$ and a d-dim vector $\theta$: $\mathsf{P}^a_{s,s'}=\phi(s,a)^\top \theta_{s'}$. The uncertainty set for the model mismatch is set as $\mathcal{P}=\\{(\phi(s,a)^\top \tau_{s'}): D(\theta_i,\tau_i)\leq R, i\leq d\\}$. Our method can be adapted to solve offline robust linear MDP problems. Specifically, after estimating the parameter $\hat{\theta}$ from the offline dataset as in [5], we can design a similar enlarged uncertainty set $\tilde{\mathcal{P}}=\\{(\phi(s,a)^\top \tau_{s'})_{s,a,s'}: D(\hat{\theta}_i,\tau_i)\leq R+\kappa_i \\}$, where $\kappa_i$ is some enlarged term that accounts for the uncertainty of the dataset. Hence our method will result in a standard robust linear MDP formulation, which can be solved as in [26].
Our method can be further extended to robust MDPs with latent structure or function approximation. Specifically, the nominal transition kernel can be represented or approximated by some parameter function $\mathsf{P}\approx f_\theta$, and the robust MDP for the model mismatch will be set as $\mathcal{P}=\\{ f_{\theta'}: D(\theta,\theta')\leq R\\}$. Similarly, after estimating the parameter $\hat{\theta}$ from the dataset and designing a dataset dependent term $\kappa$, solving the enlarged robust MDP $\mathcal{P}=\\{ f_{\theta'}: D(\hat{\theta},\theta')\leq R+\kappa\\}$ will result in a policy tackling both two sources of uncertainty. Also, it is worth mentioning that such a robust MDP can be solved empirically using adversarial training approaches, as in [a,b].
Another extension to make our method more practical is to design model-free algorithms for our DRO formulation. Unlike the model-based approach presented in our paper, model-free algorithms do not require the model estimation or solving the Bellman equation but can directly learn the optimal policy in an online bootstrapping fashion, making them more suitable for large-scale problems. Existing robust Q-learning algorithms, e.g., [c,d], can be directly adapted by setting the uncertainty set radius to $R+\kappa$ and solving our formulation, which further makes our method more practical.
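As one concrete instance of such a robust update, the worst-case expectation inside a robust Bellman backup under a total-variation uncertainty set of radius $R+\kappa$ admits a simple greedy solution. This is an illustrative sketch (the function name and interface are ours, not the paper's): a robust Q-learning update would use it in place of the usual expectation over next states.

```python
import numpy as np

def tv_worst_case_expectation(p, v, radius):
    """Worst-case E_q[v] over {q : TV(q, p) <= radius} on a finite simplex.

    Greedily moves probability mass from the highest-value states to the
    lowest-value state; each unit of moved mass costs one unit of TV
    distance, so this greedy transfer is optimal for the TV ball.
    """
    q = np.asarray(p, dtype=float).copy()
    v = np.asarray(v, dtype=float)
    j = int(np.argmin(v))            # destination: lowest-value state
    budget = radius
    for i in np.argsort(v)[::-1]:    # take from highest-value states first
        if i == j or budget <= 0:
            continue
        move = min(q[i], budget)
        q[i] -= move
        q[j] += move
        budget -= move
    return float(q @ v)

# robust backup sketch:
# Q[s, a] = r + gamma * tv_worst_case_expectation(p_hat[s, a], V, R + kappa[s, a])
```

The CS and KL models have analogous dual/closed-form inner problems, which is what keeps the per-update cost of such robust backups comparable to the non-robust case.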
To verify that our method can also be extended to practical settings, we further implement a model-free robust Q-learning as described in [c] to solve the American Option environment ([c], [64]) in the offline setting. For each dataset, we ran the robust Q-learning algorithm with the uncertainty set constructed as described in our paper and plotted the robust value function of the learned policy. The results, presented in Fig. 3 of the rebuttal PDF, demonstrate that our method can be combined with model-free algorithms to effectively solve offline problems under model mismatch, particularly for large-scale problems. We will add more discussion in our paper.
[a] Rigter, Marc, et al. "Rambo-rl: Robust adversarial model-based offline reinforcement learning." Advances in neural information processing systems 35 (2022): 16082-16097.
[b] Pinto, Lerrel, et al. "Robust adversarial reinforcement learning." International conference on machine learning. PMLR, 2017.
[c] Liang, Zhipeng, et al. "Single-trajectory distributionally robust reinforcement learning." arXiv preprint arXiv:2301.11721 (2023).
[d] Liu, Zijian, et al. "Distributionally Robust $ Q $-Learning." International Conference on Machine Learning. PMLR, 2022.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their rebuttal and for the additional analysis using transition kernels as well as for discussing potential use cases of the proposed algorithm. I believe this strengthens the paper and will increase my score to 6.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your positive feedback on our submission!
We’re glad to hear that you found our responses helpful, and we deeply appreciate your efforts and support. Thank you again! | Summary: The authors propose a unified principle of pessimism using distributionally robust Markov decision processes (MDPs) to handle both data sparsity and model mismatch. They construct a robust MDP with a single uncertainty set and demonstrate that the optimal robust policy achieves a near-optimal sub-optimality gap under the target environment across three uncertainty models: total variation, χ2 divergence, and KL divergence. The proposed approach improves or matches state-of-the-art performance under these models and provides the first result for the χ2 divergence model.
Strengths: - The paper is well-structured and clearly written, making it easy to follow the main arguments and methodologies.
- The paper provides detailed theoretical analysis and guarantees, showing that the proposed method achieves near-optimal performance.
- The proposed framework is versatile as well as easier to implement.
Weaknesses: - While the theoretical guarantees are strong, the empirical validation could be more comprehensive. The experiments are conducted on relatively simple tasks (Frozen-Lake and Gambler problems). It would be beneficial to see results on more complex and diverse benchmarks.
- Computational Cost: The paper claims that the proposed method has a lower computational cost compared to existing methods, but this claim is not empirically validated. A comparison of computational costs (e.g., runtime) with other methods would strengthen this claim.
Technical Quality: 3
Clarity: 3
Questions for Authors: To summarize the main points/questions raised in the weaknesses section:
- Could the authors provide empirical results on more complex tasks to validate the effectiveness of their method?
- Could the authors compare the computational costs of their method with other existing methods?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the helpful and insightful feedback. In the following, we provide point-to-point responses to the weaknesses and questions.
**Weakness 1. Experiment results on more complex and diverse benchmarks.**
We further provide two more experiments on large-scale environments in the rebuttal PDF. We implement our algorithm with the two baselines under the KL divergence uncertainty set.
In this rebuttal, we further add results for the Garnet problem G(64,16) [a] in Fig. 2(a), which has 64 states and 16 actions. The transition kernels and reward functions are generated from a Gaussian distribution $\mathcal{N}(\mu_{s,a},\sigma_{s,a})$. The comparison for the N-Chain problem is presented in Fig. 2(b), and the results for the Cartpole problem are in Fig. 2(c). We also implement a model-free algorithm for the American Option problem in Fig. 3, where no model estimation is required, indicating the potential of our method for solving large-scale problems.
As the results show, our method performs similarly to the LCB algorithm, aligning with our theoretical results. Moreover, our approach also shows a great improvement compared to the non-robust method, which verifies the robustness of our method.
We will include more experiments under different environments and uncertainty sets in the final version.
**Weakness 2. Computational cost.** Our algorithms are better in terms of computational complexity or practical implementations than both baselines.
The two-layer optimization problem in [5] is an extension of the model studied in [41] to the robust setting, both involving non-rectangular uncertainty sets that are NP-hard to solve [49]. This creates uncertainty regarding the solvability of their models. Specifically, due to the unsolvability of the non-robust model in [41], an adversarial training-based algorithm is designed in [b] with only an experimental convergence guarantee, highlighting the implementation challenges of [5]. In contrast, our algorithm can be implemented with polynomial complexity. Specifically, the total computational complexity of our algorithm under the TV, CS, and KL models is shown in [13] to be $\mathcal{O}(S^2A\log S)$,$\mathcal{O}(S^2A\log S)$, and $\tilde{\mathcal{O}}(S^2A)$.
Compared to [35], our algorithms also offer better computational complexity. Specifically, the penalty term in [35] requires a complicated computation involving a minimum operator, resulting in an additional max-min structure in their algorithm update. The comparison/max-min operator in the LCB algorithm is executed $SA$ times per step, significantly increasing computational complexity. In contrast, our algorithms have a simple structure and do not require additional operators, making them more computationally efficient.
To illustrate our computational efficiency, we include numerical experiments in the rebuttal PDF. We implemented the LCB algorithm from [35] and our DRO algorithm under the KL model, monitoring the execution time of both methods while learning the same policy from the same dataset. We plotted the execution time for each dataset versus the size of the dataset in three environments: Gambler's game, Frozen Lake, and N-chain. As shown in Fig. 1, our algorithm consistently requires less execution time across all three environments, demonstrating lower computational complexity.
Therefore, our methods exhibit superior computational complexity compared to both baselines.
[a] Archibald, T. W., et al. "On the generation of markov decision processes." Journal of the Operational Research Society 46.3 (1995): 354-361.
[b] Rigter, Marc, et al. "Rambo-rl: Robust adversarial model-based offline reinforcement learning." Advances in neural information processing systems 35 (2022): 16082-16097.
---
Rebuttal Comment 1.1:
Comment: Thank you for your reply. I still appreciate the contributions of the work and I am in favor of accepting the paper. I have no further questions and will maintain my positive score.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your positive feedback on our submission!
We’re glad to hear that you found our responses helpful, and we deeply appreciate your efforts and support. Thank you again! | Summary: This paper studies offline reinforcement learning (RL) under model mismatch. The authors propose a unified distributionally robust optimization (DRO) framework that effectively tackles both the uncertainty from limited dataset coverage and the model mismatch between the training and deployment environments.
Strengths: A Unified DRO Formulation: The authors provide unified results for robust Markov Decision Processes (MDPs) with several types of robust sets.
The authors have carefully designed the uncertainty set radius, incorporating both model mismatch and data sparsity factors. The tight theoretical guarantees provided for three widely studied uncertainty set models (total variation, χ2 divergence, and Kullback-Leibler divergence) demonstrate the robustness and versatility of their approach.
The authors present the first finite-sample analysis for the χ2 divergence uncertainty set in the offline reinforcement learning (RL) setting with model mismatch. This contribution advances our understanding of this particular model and its applications in robust RL.
The paper is well-structured, with a clear problem formulation, comprehensive algorithmic description, and rigorous theoretical analysis.
Weaknesses: It would be beneficial if you could elaborate on the advantages of your work compared to [35] and [5], especially regarding sample complexity under KL and total variation settings. A more detailed comparison would help readers fully appreciate the advancements your method brings to the field.
Given that [5] presents another unified framework for offline distributionally robust RL, it would be interesting to know if your framework can be extended to the χ2 case. If this extension is not possible, highlighting this limitation could actually strengthen your paper by emphasizing the unique contributions of your approach.
Addressing the computational efficiency of your proposed algorithm, particularly in comparison to [35], would be valuable. If your method is computationally efficient, including experimental results similar to those in [35] could provide empirical support for your theoretical findings.
Technical Quality: 3
Clarity: 2
Questions for Authors: na
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors have discussed the limitations in Section 6.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the helpful and insightful feedback. In the following, we provide point-to-point responses to the weaknesses and questions.
**Weakness 1. Advantages compared to [5] and [35].**
In summary, our method has three major advantages compared to both baselines. Firstly, our work is applicable under three different uncertainty set models, whereas [35] is limited to the KL model and [5] addresses only the KL and TV models (please see Weakness 2 for further discussion). Secondly, our algorithms achieve better sample complexity results compared to [5], and match the SOTA complexity under the KL model in [35] (please see discussion in Weakness 1-1). Finally, our algorithms demonstrate superior computational complexity compared to both baselines, and are the most efficient (please also see Weakness 3).
These improvements further strengthen our paper, highlighting our novelty and contributions.
**Weakness 1-1. Comparison of sample complexity with [5] and [35].**
We refer to Table 1 on page 8 for a comparison of the sample complexity results between ours and the two baselines. A more detailed comparison is provided below.
Compared with [5], our methods and results achieve better sample complexity in both KL and TV models. In the TV model, our sample complexity outperforms [5] in terms of dependence on $S$ and $(1-\gamma)$; for the KL model, our complexity is linearly dependent on $S$, while [5] has a quadratic dependence. Furthermore, as noted in [5], their result's exponential term can be replaced by utilizing both $P_{\min}$ and $\mu_{\min}$, while our (asymptotic) complexity result depends solely on $P_{\min}$.
Our results match exactly those of [35] in the KL model, indicating that we are achieving SOTA. Additionally, we provide results for two other models, which cannot be obtained directly using the algorithm in [35] (also see Weakness 2).
**Weakness 2. Extension of baselines to other models.**
Although extending the method in [5] may seem straightforward at first glance, the concrete extension requires additional effort. More importantly, the extended sample complexity result under the CS uncertainty set is also expected to be proportional to $S^2$, similar to the results under the other two models, whereas our method exhibits a linear dependence on $S$. This difference arises because the method in [5] is distribution-based, designing the first-level uncertainty set to cover the true nominal kernel, which results in worse complexity than ours (a more detailed discussion can be found in [46]).
In [35], the LCB-based algorithm is only designed for the KL model and requires additional efforts to extend to other models. Specifically, a careful study of the optimal solution to the corresponding DRO problem is needed to design the penalty term, ensuring a pessimistic estimation of the robust value functions. In contrast, our method can be directly applied from a uniform framework under all three uncertainty sets.
**Weakness 3. Computational complexity comparison.**
Our algorithms enjoy better computational complexity than both baselines.
The two-layer optimization problem in [5] is an extension of the model studied in [41] to the robust setting, both involving non-rectangular uncertainty sets that are NP-hard to solve [49]. This creates uncertainty regarding the solvability of their models. Specifically, due to the unsolvability of the non-robust model in [41], an adversarial training-based algorithm is designed in [a] with only an experimental convergence guarantee, highlighting the implementation challenges of [5]. In contrast, our algorithm can be implemented with polynomial complexity. Specifically, the total computational complexity of our algorithm in the TV, CS and KL models is shown in [13] to be $\mathcal{O}(S^2A\log S)$,$\mathcal{O}(S^2A\log S)$, and $\tilde{\mathcal{O}}(S^2A)$.
Compared to [35], our algorithms also offer better computational complexity. Specifically, the penalty term in [35] requires a complicated computation involving a minimum operator, resulting in an additional max-min structure in their algorithm update. The comparison/max-min operator in the LCB algorithm is executed $SA$ times per step, significantly increasing computational complexity. In contrast, our algorithms have a simple structure and do not require additional operators, making them more computationally efficient.
To illustrate our computational efficiency, we include numerical experiments in the rebuttal PDF. We implemented the LCB algorithm from [35] and our DRO algorithm under the KL model, monitoring the execution time of both methods while learning the same policy from the same dataset. We plotted the execution time for each dataset versus the size of the dataset in three environments: Gambler's game, Frozen Lake, and N-chain. As shown in Fig. 1, our algorithm consistently requires less execution time across all three environments, demonstrating lower computational complexity.
Therefore, our methods exhibit superior computational complexity compared to both baselines.
[a] Rigter, Marc, et al. "Rambo-rl: Robust adversarial model-based offline reinforcement learning." Advances in neural information processing systems 35 (2022): 16082-16097.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. My concerns have been addressed, and I would like to raise my score.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your positive feedback on our submission!
We’re glad to hear that you found our responses helpful, and we deeply appreciate your efforts and support. Thank you again!
---
Rebuttal 2:
Comment: Thank you again for reviewing our paper!
As the discussion period ends soon, we would like to ask whether you found our rebuttal and responses useful in addressing your questions and concerns. If not, we are more than willing to clarify them for you!
Thank you very much for your time and support! | null | null | Rebuttal 1:
Rebuttal: We thank the three reviewers for the helpful and insightful feedback. In addition to the point-by-point responses, we provide some additional numerical results in the PDF, along with a few responses addressing some common questions.
**Compare with baselines [5], [35].**
Our method has three major advantages compared to both baselines. **(1). Improved Sample Complexity.** Our algorithms achieve better sample complexity results compared to [5] (under TV and KL models), and match the SOTA complexity under the KL model in [35]. **(2). Higher Computational Efficiency.** Our method is more computationally efficient compared to both baselines. Specifically, the approach in [5] is NP-hard, whereas ours can be implemented in polynomial time. Additionally, due to the complexity of the penalty term design in [35], our algorithms demonstrate superior computational efficiency. We developed three numerical experiments to demonstrate this, as shown in Figure 1. **(3). Broader Applicability.** Our work is applicable under three different uncertainty set models, whereas [35] is limited to the KL model and [5] addresses only the KL and TV models. Extending their methods to other models requires additional effort.
**Experiments under more environments.**
We conducted three experiments with larger problem scales: the Garnet problem with 64 states and 16 actions, the N-Chain problem, and CartPole. The results, shown in Figure 2, indicate that our DRO-based approach achieves the best performance.
**Extension to practical settings.**
Since our main contribution is to develop a universal methodology to unify the two sources of uncertainty in offline RL with model mismatch, we mainly focus on the tabular setting to highlight our algorithm design and theoretical results in our paper. However, our method and framework can be easily extended to large scale problems with function approximation or model-free algorithm design (please also see our response to R3 for a detailed discussion). We also conducted an experiment on the American option problem under the offline setting with model mismatch. Our method can be directly combined with model-free robust Q-learning to develop a more efficient and practical algorithm. The results, shown in Figure 3, demonstrate that our robust Q-learning outperforms the non-robust baseline, highlighting the potential of our method in practical applications.
Pdf: /pdf/e4eabb56dd4c1a8f542f135f1008666fdb545e16.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
AnyFit: Controllable Virtual Try-on for Any Combination of Attire Across Any Scenario | Accept (poster) | Summary: To enhance the scalability (mix-and-match) and robustness (detailed clothing texture preservation) of virtual clothing try-on, the paper makes three contributions: i) it designs the Hydra Encoding Block for parallel self-attention feature transmission, ii) it presents Prior Model Evolution to leverage the capabilities of groups of models, and iii) it introduces the Adaptive Mask Boost to generate garment structures without being affected by the original clothes on the human.
Strengths: - The paper designs HydraNet to achieve multi-garment try-on with diffusion-based models. This is the first work to extend the parallel UNet architecture toward multi-garment try-on.
- The paper points to a critical challenge that few papers have addressed: generating accurate torso and sleeve lengths of target garments on humans. Also, the method shows decent performance in generating unknown textures when the target clothes are shorter than the original clothes in a human image.
- The paper works decently on a complex background.
Weaknesses: - The clarity of the presentation could be improved. For example, TryOnDiffusion [58] uses cross-attention, yet the paper claims that [58] uses self-attention (line #54). Also, it is unclear how clothing sizes are adjusted (line #112), etc.
- Fig. 6 (b) is not a new observation. The issue has been solved by the pre-processing method proposed by VITON-HD [12].
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Regarding line #54, there is inconsistency between the description in the paper and the original shared from TryOnDiffusion [58]. [58] uses cross-attention instead of self-attention, which is claimed to cause pixel-wise structural bias, to do implicit warping. Please clarify. On the other hand, OOTDiffusion [50] utilizes self-attention for preserving features. Can you share the pros and cons of leveraging cross-attention vs. self-attention for preserving features? And please state clearly the reason why this paper selected self-attention.
2. Regarding line #68, it is unclear what “parsing-free mask region” is? Considering line #197, does it mean to exclude the third image counted from the left in left-bottom of Fig. 2 when training? What is the difference between the proposed Adaptive Mask Boost without using parsing to eliminate the contour of original clothes and the method used in HR-VITON, mentioned at the beginning of page 5 (Sec. 3: pre-processing)?
3. I suggest including the notation mentioned in Sec. 3 in Fig. 2 to provide a clearer understanding.
4. Does the Pose Guided mentioned in line #116 mean the Pose Encoder in Fig. 2?
5. Why does the method train at the resolution of 1024x768 but evaluate at 512x384?
6. Regarding the ablation study of Adaptive Mask Boost (AMB) in Table 4, the only matrix that evaluates the structure is SSIM but the result is worse when applying the Adaptive Mask Boost module. Can the paper describe more to prove that the table is good evidence to prove the effectiveness of AMB?
7. Letting the model automatically determine a suitable clothing length could misrepresent the actual garment pattern, which is not acceptable for commercial usage. For example, we can only tell that the model does not follow the original clothing length on the input human when generating try-on results, but we cannot know whether it generates an accurate length, e.g., Fig. 9 and the sleeve length of the fourth case in the first row of Fig. 1. What does the paper think about designs that can ensure accuracy, e.g., COTTON [A]?
8. It is unclear how the method adjusts the size that is mentioned in line #112 after reading Sec. 3.3. Does it mean automatically learned clothing size determination?
9. Regarding line #133, how to prove that self-attention module instead of cross-attention module plays a vital role in the latent warping of the garments?
10. Regarding the N parameter in Hydra Fusion, is it set to 2 when trying on top and bottom clothes? Can it extend to try on scarves, shoes, etc.?
11. Fig. 12 lacks a direct visual comparison for proving the effectiveness of the proposed Prior Model Evolution. I suggest providing the figure with i) input human, ii) target clothes, and results from the four models, including SDXL-base-1.0, SDXL-inpainting-0.1, DreamshaperXL, and the merged one.
12. How is the inference time for top and bottom try-on?
[A] C. -Y. Chen, Y. -C. Chen, H. -H. Shuai and W. -H. Cheng, "Size Does Matter: Size-aware Virtual Try-on via Clothing-oriented Transformation Try-on Network," *2023 IEEE/CVF International Conference on Computer Vision (ICCV)*, Paris, France, 2023, pp. 7479-7488, doi: 10.1109/ICCV51070.2023.00691.
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors have shared the limitations and potential societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: I will respond directly to save space.
## Weaknesses
#### 1. Miscitation of TryOnDiffusion
**Response:** We apologize for the lack of precision in our wording here. In fact, the principles behind the injection of self-attention and cross-attention are the same, and we address this issue in detail in **Explanation 1 in the global Rebuttal**.
### 2. How to Adjust Clothing Sizes
**Response:** Please refer to **Explanation 2 in global Rebuttal**. I hope my response addresses your concerns. If you still have any confusion, feel free to reach out to me again.
### 3. Observation in Fig. 6(b)
**Response:** This issue is not resolved by VITON-HD. The results **in Fig. 6(b) of the main paper** were obtained using the DCI-VTON model, which is significantly later than VITON-HD and utilizes VITON-HD's pre-processing method. The reason for this phenomenon is that their cloth-agnostic mask leaks the shape of the original clothing; please refer to **Explanation 2 in the global Rebuttal** for more details. In fact, this is a common issue: as shown **in Fig. 9 of the main paper**, all other models fail to generate the lower hems of garments correctly; they are either too long or blurry.
## Questions
### 1. Different About Self-attention and Cross-attention.
**Response:** Please refer to **Explanation 1 in global Rebuttal**.
### 2 & 8. Parsing-free Mask Region
**Response:** No, it does not mean excluding the third image from the left in the bottom-left of Fig. 2; rather, it refers to not using the human parsing/segmentation map during training. Please refer to **Explanation 2 in the global Rebuttal**.
### 3 & 4. Suggestions on Notation and Typos
**Response:** Thank you for your valuable suggestions. "Pose Guided" refers to the Pose Encoder. If the paper is accepted, we will incorporate your suggestions in the camera-ready version.
### 5. Question About Different Resolutions
**Response:** This is because most of the metrics from previous methods were measured at a size of 512$\times$384. To ensure comparability with past methods, we generate try-on results at a size of 1024$\times$768, then resize them to 512$\times$384 for metric calculations. We also provide the metrics obtained from AnyFit generated directly at a smaller resolution (SD1.5, 512$\times$384) here.
| VITON-HD | LPIPS | SSIM | FID | KID |
|----------------|----------|----------|----------|----------|
| AnyFit (SD1.5) | 0.080 | 0.879 | 8.90 | 0.63 |
| AnyFit (SDXL) | 0.075 | 0.893 | 8.60 | 0.55 |
### 6. The Reason for Worse SSIM with AMB
**Response:** Worse SSIM with AMB is a reasonable phenomenon. The metrics that truly reflect the effectiveness of AMB are FID and KID in an unpaired evaluation. This is because AMB performs adaptive mask extension, primarily targeting cross-category try-ons (where the original outfit and the outfit to be tried on belong to different categories). SSIM is calculated in a paired setting, where the outfit to be tried on is the same as the one the model is wearing. In such a reconstruction scenario, a smaller mask area is preferable: if the masked area were zero, meaning no inpainting or try-on occurs, you would attain the best possible LPIPS and SSIM (the output would be exactly the same as the original), yet this would not reflect the model's true try-on capability. Our AMB strategy may expand the mask area in certain scenarios, which can lead to slightly worse SSIM, for instance when the imagined background differs from the original background. Nevertheless, AMB performs exceptionally well on the more realistic unpaired try-on task, resulting in lower FID and KID.
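To make the paired-setting argument concrete, here is a toy sketch (our own illustration, not the paper's evaluation code) of a simplified single-window SSIM in NumPy: copying the input unchanged, i.e. a zero-area mask, scores a perfect 1.0, while any genuine edit scores lower, which is why a smaller mask trivially flatters SSIM without reflecting try-on quality.

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Simplified single-window SSIM over the whole image (illustrative only;
    real evaluations use a sliding-window SSIM)."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
img = rng.random((64, 48))                    # stand-in for a person image
identity = img.copy()                         # "zero mask": nothing is repainted
tryon = img + rng.normal(0, 0.1, img.shape)   # a genuine edit perturbs pixels

print(global_ssim(img, identity))             # ~1.0 despite no try-on happening
print(global_ssim(img, tryon))                # strictly lower, even if the edit is good
```

The point is not the metric itself but the incentive: under the paired protocol, the best SSIM is achieved by changing as little as possible, so an enlarged AMB mask cannot look good by this number alone.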
### 7. Worry About Suitable Clothing Length
**Response:** AnyFit does not strictly follow the original clothing length on the input human, which means the model's generation will at least not be misled by incorrect clothing lengths. This clearly results in fewer erroneous outcomes than the "follow" version. Moreover, we are not entirely leaving it up to the model: during inference we provide hints by extending the mask region according to the length of the clothing to be tried on. Additionally, the model consistently generates reasonable sleeve lengths in practice even without AMB.
### 9. How to Prove That Self-attention Plays a Vital Role
**Response:** This is a very valuable question. Please refer to **Fig.1 note in global Rebuttal**.
### 10. Extensions to More Hydra Conditioning Branches
**Response:** Absolutely yes to both questions! The primary intention behind designing Hydra Blocks is to enable multi-condition injections of arbitrary quantities. We believe that if appropriate training data were available, we could support a wider variety of clothing combinations, such as shirts + outer suits + pants + shoes.
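As a schematic reading of how parallel branches might scale to N conditions (our own sketch with hypothetical names and identity projections, not the actual Hydra Block implementation), the main branch's queries can attend over its own tokens concatenated with key/value tokens from N garment branches:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def hydra_self_attention(main_tokens, garment_tokens_list, d=64):
    """Main-branch queries attend over [main; garment_1; ...; garment_N] keys/values.

    Hypothetical sketch: real implementations use learned per-head projections;
    identity projections keep the example minimal.
    """
    q = main_tokens                                                    # (L_main, d)
    kv = np.concatenate([main_tokens, *garment_tokens_list], axis=0)   # (L_total, d)
    attn = softmax(q @ kv.T / np.sqrt(d))                              # (L_main, L_total)
    return attn @ kv                                                   # (L_main, d)

rng = np.random.default_rng(0)
main = rng.standard_normal((16, 64))
tops, bottoms, shoes = (rng.standard_normal((8, 64)) for _ in range(3))
out = hydra_self_attention(main, [tops, bottoms, shoes])  # N = 3 branches
print(out.shape)  # (16, 64)
```

Because the extra branches only lengthen the key/value sequence, adding another conditioning item (e.g. a fourth branch for accessories) changes nothing about the main branch's output shape, which is what makes this style of injection lightweight to scale.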
### 11. Fig. 12 Presentation
**Response:** In **Fig. 6(a)&(c) of the main paper**, we demonstrate the effectiveness of Model Evolution and the impact of varying fusion weights on the output, which strongly showcases the benefits of Model Evolution. Regarding Fig. 12, we cannot provide the four model versions you mentioned. The try-on task is a specific type of inpainting, and text-to-image models such as SDXL-base-1.0 and DreamshaperXL cannot be directly utilized in this context, as shown in the far-right column of Fig. 6(c). Training SDXL-base-1.0 and DreamshaperXL as base models would be highly impractical and extremely costly (8 A100 GPUs for ten days each), since they would first need to master the fundamental skills of inpainting before moving on to the refined learning of garment details. As a result, we can only present SDXL-INP results.
### 12. Inference Time Cost.
**Response:** AnyFit takes approximately 5 seconds to generate 30 steps of sampling on a single NVIDIA RTX 3090 GPU, with a memory usage of around 15GB. When injecting both upper and lower garments simultaneously, the time overhead increases by 9%.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for providing further information and contributing great work to the try-on community! I enjoyed it, especially the experiment provided in Fig. 1 in the global rebuttal. I have read the other reviewers' comments and the rebuttal information. I would like to maintain my rating of 'Accept.'
---
Rebuttal 2:
Comment: Thank you for your valuable feedback and for taking the time to evaluate our work. I truly appreciate your acknowledgment of our contributions to the try-on community and your positive remarks regarding our research. It’s also great to hear that you enjoyed the experiment presented in Fig. 1 in the global rebuttal. We will ensure that this figure is included in our revised version to benefit wider readers. Once again, I am grateful for your insights, which have been instrumental in refining our paper.
We hope that our responses during the rebuttal phase have adequately addressed your concerns regarding the weaknesses identified. I also hope that all your questions have been satisfactorily answered through our detailed responses. If there are any remaining issues or points that require further clarification or discussion, please do not hesitate to let us know. We truly value your input and would be more than willing to address any lingering concerns, while improving the quality of our paper.
If we have successfully addressed all your doubts, we would greatly appreciate it if you could consider raising the rating for our paper. We believe that the sufficiency of our experiments (Reviewer v6TS), along with the thorough ablation study showcasing convincing results (Reviewer vsbp), underscores the rigor and depth of our work.
Thank you once again for your feedback. | Summary: The paper introduces AnyFit, a solution for multi-garment try-on. Specifically, they propose HydraNet to simultaneously extract features of both upper and lower garments. Then, they present Hydra Fusion to integrate these garment features into a denoising Unet. Additionally, they propose an Adaptive Mask, combined with text, to explore initial controllable capabilities.
Strengths: 1. They explore multi-garment virtual try-on, an area that has not been extensively covered in previous literature based on diffusion models.
2. The paper is clearly written, and the related experiments are sufficient.
3. They explored initially controllable try-on, which has been less investigated in prior literature.
Weaknesses: 1. There are no major theoretical and technical innovations. For example, HydraNet is a simple variant of the Unet copy which has been extensively adopted as the feature extractor in previous work such as AnimateAnyone and OOTDDiffusion.
2. Similar to other try-on methods based on diffusion models, AnyFit relies on a large amount of high-quality images. However, their proprietary e-commerce dataset is not open, which does not further prompt the development of the try-on task.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Why is there a qualitative comparison between AnyFit and OOTDiffusion in the paper, but no quantitative comparison? Additionally, the original paper of OOTDiffusion reported their quantitative results on VITON-HD and Dress Code at the resolution of 512×384.
2. OutfitAnyone (https://github.com/HumanAIGC/OutfitAnyone, https://huggingface.co/spaces/HumanAIGC/OutfitAnyone) may serve as a good qualitative baseline for the multi-garment try-on task.
3. Although it doesn't affect the conclusions, I'm curious why some baseline methods reported significantly different quantitative metrics between AnyFit and OOTDiffusion? For example, on Dresscode, AnyFit reported an FID of 6.94 for LADI-VTON, whereas OOTDiffusion reported an FID of 5.66.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer v6TS,
Thank you for your detailed review and constructive feedback on our paper. We appreciate the effort you have put into assessing our work.
## Weaknesses
### 1. Major Innovations
**Response:** Thank you for your valuable feedback. We believe that identifying a problem is more important than proposing a solution in research. Introducing a problem can stimulate a series of innovative efforts within the community. In this paper, we first identified the issue of artifacts at the junction of upper and lower garments in the simple VTON-concat approach to multi-garment try-on (used by Wear-Any-Way [1]) and analyzed it. We found that it is due to mishandling of the relative clothing sizes after concatenation, which leads to garment distortion and blurring. Secondly, we pointed out that the previous agnostic mask leaked the clothing contours, resulting in poor cross-category try-on performance, and we proposed a mask boost strategy as a solution. While our approach may seem intuitive, it is effective, and the identification of these research problems paves the way for future investigations.
Furthermore, we think that HydraNet is not simply a variant of U-Net. During the parallelization of the modules, we meticulously studied their functions (as shown in **Explanation 1 in global Rebuttal**) and achieved the most lightweight expansion scheme. Implementing multi-garment try-on is not particularly difficult, but designing an efficient and lightweight solution is a challenge and our area of innovation. These insights are absent in concurrent works, such as Mmtryon [2].
### 2. Proprietary E-commerce Dataset
**Response:** I apologize for the situation. Due to copyright and privacy issues associated with the proprietary dataset, we are unable to make it publicly available until these legal concerns are resolved. However, we believe that the methods proposed in our paper, including the multi-condition injection in HydraNet and the model evolution strategy that enhances baseline capabilities, can significantly advance the development of the try-on task and potentially benefit a broader range of conditional generation tasks.
## Questions
### 1. The Reason for the Absence of a Quantitative Comparison with OOTDiffusion
**Response:** We originally planned to perform a quantitative comparison with OOTDiffusion; however, the official OOTDiffusion code encountered compatibility issues on our machine, resulting in out-of-memory errors, so we were unable to complete the quantitative comparison by the time of submission. For the qualitative comparison, we used the online Gradio demo provided by OOTDiffusion. To ensure the rigor and fairness of our comparisons, the results reported in our experiments were primarily generated and evaluated by ourselves using the official models. We found some discrepancies between our results and those in the OOTDiffusion paper, which led us not to utilize the data directly from the original OOTDiffusion paper. For the sake of rigor, we have re-run the official OOTDiffusion model and conducted quantitative evaluations over the past few days; we report those metrics below and hope they meet your requirements.
| VITON-HD | LPIPS | SSIM | FID | KID |
|----------------|----------|----------|----------|----------|
| HR-VTON | 0.097 | 0.878 | 12.31 | 3.86 |
| DCI-VTON | 0.072 | 0.892 | 8.76 | 0.92 |
| StableVTON | 0.076 | 0.891 | 9.35 | 1.51 |
| GP-VTON* | 0.083 | 0.892 | 9.17 | 0.93 |
| LADI-VTON | 0.091 | 0.875 | 9.42 | 1.63 |
| IDM | 0.078 | 0.881 | 9.12 | 1.03 |
| OOTDiffusion | 0.093 | 0.856 | 9.16 | 0.68 |
| AnyFit | 0.075 | 0.893 | 8.60 | 0.55 |
Moreover, we observed in the qualitative comparison that the performance of IDM-VTON significantly outperformed OOTDiffusion, while our model AnyFit surpassed IDM-VTON in the quantitative comparison. Therefore, we believe that the conclusion that AnyFit is superior to OOTDiffusion is evident.
### 2. Comparison About OutfitAnyone
**Response:** In **Fig. 4 in the global Rebuttal PDF**, we provide generation results by AnyFit using person images from OutfitAnyone. Unfortunately, the open-source demo provided by OutfitAnyone is no longer available (it reports an error), so we are unable to supply the corresponding generation comparisons. However, our generated results demonstrate comparable performance to the images presented in the OutfitAnyone paper. Considering that the OutfitAnyone team collected a significant amount of private multi-garment data triplets for training, while AnyFit only utilized the cropped, unclean data from the open-source DressCode for training, the comparable results we achieved are sufficient to demonstrate the superiority of our method.
### 3. The Reason for Different Quantitative Metrics Between AnyFit and OOTDiffusion
**Response:** Your observation is very meticulous. Since OOTDiffusion did not provide the code for evaluation, we used the metric evaluation scripts from LADI-VTON. It can be observed that the metrics we reported are closer to the results in LADI-VTON. Given the differences between our metrics and those reported in the OOTDiffusion paper, we did not directly accept the metrics reported in OOTDiffusion. Additionally, the official DressCode dataset does not provide clothing-agnostic images (masked images), and the significant difference between OOTDiffusion and AnyFit on the DressCode dataset may stem from the differences in masking strategies.
We hope that our responses address your concerns adequately. Thank you again for your valuable feedback.
*[1] Wear-any-way: Manipulable virtual try-on via sparse correspondence alignment.*
*[2] Mmtryon: Multi-modal multi-reference control for high-quality fashion generation.*
---
Rebuttal Comment 1.1:
Comment: I have read the authors' rebuttal and appreciate their efforts during the rebuttal process. Therefore, I maintain my initial rating.
---
Rebuttal 2:
Comment: First and foremost, we would like to express our sincere gratitude for your valuable feedback and thorough review of our paper. Your insights have been instrumental in enhancing both the quality of our research and its presentation. We are pleased to inform you **these improvements (OOTDiffusion and OutfitAnyone comparisons) will be reflected in our revised version**.
We hope that our efforts and responses during the rebuttal phase have adequately addressed your concerns regarding the identified weaknesses, **specifically the concerns about the innovation of our work. We have also added more comparisons as you required, to further substantiate the contributions of our research**. If there are any remaining issues or points that require further clarification or discussion, please do not hesitate to let us know. We truly value your input and would be more than willing to address any lingering concerns, while improving the quality of our paper.
If we have successfully addressed all your doubts, we would greatly appreciate it if you could consider raising the rating for our paper. We believe that the thorough ablation study showcasing convincing results (Reviewer vsbp), underscores the rigor and depth of our work. Additionally, as noted by Reviewer Xer8, our research highlights a critical challenge in the field and demonstrates effectiveness in complex backgrounds.
Thank you once again for your feedback. | Summary: This paper proposes a novel approach of controllable virtual try-on for any combination of attire across any scenario. It employs a Hydra Block, a lightweight and scalable operator designed for attire combinations, facilitated by a parallel attention mechanism that enables injecting multiple garment features into the main network. The model's robustness and expressiveness are enhanced by synthesizing multiple models' residuals and a mask region boost strategy, which addresses instability issues. The proposed method outperforms benchmarks on high-resolution images and real-world data, showing promising generated results.
Strengths: 1. AnyFit successfully mitigates the limitations of current virtual try-on methods, i.e., difficulty handling diverse attire combinations and scenarios.
2. The novel Hydra Block and its parallel attention mechanism are well-explained, and the mask region boost strategy is theoretically sound.
3. The experiments are comprehensive, with detailed reporting of the training and testing procedures, improving the reproducibility of the results.
4. Overall the paper is well written, with clear explanations and logical structuring.
Weaknesses: The manuscript commendably provides a thorough ablation study with convincing results. However, the comparison based on SD1.5 is missing. Adding that leads to a more fair comparison with baselines that use SD1.5 backbones, such as LADI-VTON, StableVTON, and OOTDiffusion.
Technical Quality: 3
Clarity: 3
Questions for Authors: Although the paper claims to combine any clothing items, the visualized results only show outfits with tops and bottoms. Can this method currently support combinations of more clothing items, such as shirts, jackets, and pants?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: As discussed in the paper by the authors, AnyFit sometimes shows instability in generating hands with complex structures and the text control capabilities of the method are not perfect yet.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer vsbp,
Thank you for your detailed review and constructive feedback on our paper. We appreciate the effort you have put into assessing our work. Below, we have provided detailed responses to each of your comments and concerns.
## Weaknesses
### Comparison with Baselines using SD1.5 Backbones
**Response:** Your observations are insightful, and your suggestions are very valuable. We conducted a fairer comparison with the baselines using the SD1.5 backbone, as follows.
| VITON-HD | LPIPS | SSIM | FID | KID |
|--------------------|----------|----------|----------|----------|
| StableVTON-base | 0.081 | 0.867 | 9.71 | 1.68 |
| StableVTON-repaint | 0.076 | 0.891 | 9.35 | 1.51 |
| OOTDiffusion | 0.093 | 0.856 | 9.16 | 0.68 |
| LADI-VTON | 0.091 | 0.875 | 9.42 | 1.63 |
| AnyFit (SD1.5) | 0.080 | 0.879 | 8.90 | 0.63 |
Table 1: Quantitative comparisons on the VITON-HD with SD1.5 as the backbone.
In fact, during the training process, we found that the model's performance with the SD1.5 backbone did not improve further when increasing the input resolution (from 512$\times$384 to 1024$\times$768). We believe this is because the SD1.5 base model was pretrained at a resolution of 512. To pursue a higher-quality fitting effect, we shifted our focus to the SDXL model in subsequent research. The SD1.5 metrics reported in Table 1 above come from an early training checkpoint (the only one we could locate), which had not actually undergone sufficient training.
## Questions
### Support for More Varied Combinations
**Response:** We believe that if we have corresponding training data, we would be able to support a wider variety of clothing combinations, such as a shirt + jacket + pants + shoes. Currently, the trained model only supports tops + bottoms because we use samples from DressCode for training, which does not provide additional combinations for training pairs. Notably, we observed that MMTRYON[1] produces multi-item combinations with a simple pipeline. We are confident that our AnyFit model, which is specialized for multi-item dressing, would perform even better when exposed to similar data.
We hope that our responses address your concerns adequately. Thank you again for your valuable feedback.
*[1] Zhang, X., Lin, E., Li, X., Luo, Y., Kampffmeyer, M., Dong, X., Liang, X.: Mmtryon: Multi-modal multi-reference control for high-quality fashion generation. arXiv preprint arXiv:2405.00448 (2024)*
---
Rebuttal Comment 1.1:
Comment: Thanks for the author's response. My doubts have been clarified, and my rating remains unchanged.
---
Reply to Comment 1.1.1:
Comment: We are glad to hear that all your doubts have been clarified. If there are any remaining issues or points that require further clarification or discussion, please do not hesitate to let us know. We truly value your input and would be more than willing to address any lingering concerns, while improving the quality of our paper.
Thank you once again for your feedback. | Summary: Current image-based virtual try-on methods struggle with achieving high-fidelity and robust fitting across diverse scenarios due to issues like ill-fitting garments and quality degradation. To address this, AnyFit is introduced, leveraging a lightweight, scalable Hydra Block that facilitates feature injection of multiple garments through parallel attention. Additionally, AnyFit enhances robustness and expressiveness by synthesizing residuals from multiple models and implementing a mask region boost strategy. AnyFit outperforms baselines in high-resolution benchmarks and real-world data, enabling high-fidelity virtual try-ons in any scenario, paving the way for future fashion research.
Strengths: 1) AnyFit introduces the innovative Hydra Block, a lightweight and scalable operator that revolutionizes attire combinations through a parallel attention mechanism. This ensures seamless feature integration of multiple garments, overcoming issues of ill-fitting styles and quality degradation.
2) By synthesizing residuals from multiple models and implementing a mask region boost strategy, AnyFit significantly enhances its robustness and expressiveness in real-world scenarios. This ensures stable and high-fidelity virtual try-on experiences across diverse settings.
Weaknesses: 1) While AnyFit offers advanced features and high-fidelity results, its complex parallel attention mechanism and multi-model residual synthesis may lead to increased computational requirements.
2) The performance of AnyFit, like many image-based virtual try-on systems, relies heavily on accurate garment segmentation from the input image. In cases where the segmentation is imprecise or incomplete, the resulting virtual try-on may suffer from artifacts or incorrect fitting, affecting the overall realism and user experience.
3) Training on two limited general datasets may not achieve the desired level of high generalization.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1) How does AnyFit overcome issues with ill-fitted garment styles and quality degrading during the training process?
2) Why are the baseline methods in Table 3 missing from Table 1?
3) How does the proposed framework effectively prevent errors when the clothing-agnostic mask results in misestimations?
4) What about the computational complexity of the proposed model?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: As the authors mentioned,
1) The proposed model shares the shortcomings of large text-image models, sometimes showing instability in generating hands with complex structures.
2) The proposed model offers initial but not yet fully mature text control capabilities.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer KSKd,
Thank you for your detailed review and constructive feedback on our paper. We appreciate the effort you have put into assessing our work. Below, we have provided detailed responses to each of your comments and concerns.
## Weaknesses
### 1. Increased Computational Requirements
**Response:** We believe that users need not worry about computational burden. We acknowledge that the parallel attention mechanism can lead to a slight increase in computational requirements. However, this increase in computational overhead is minimal. This is because we carefully selected attention matrices for parallelization, while sharing as many parameters as possible across different conditioning branches. As a result, adding an additional conditioning branch only leads to an 8% increase in parameters (Line #92) and a 9% increase in inference time overhead (Tab. 2 in main paper). We contend that Hydra Blocks may appear complex on the surface, but they are, in fact, sufficiently lightweight.
### 2. Dependence on Accurate Garment Segmentation
**Response:** If I understand correctly, you believe that we need a sufficiently accurate model to segment the flat clothing from the input images. However, we do not actually employ a segmentation model in our approach. In our private dataset, many clothing items have backgrounds that are not pure white, but rather light gray, light green, and even contain some brand texts and logos. We found that AnyFit, when exposed to such training data, can easily distinguish between the clothing subject and the background, showing remarkable resilience to minimal background noise. Please refer to **Fig.5 in global Rebuttal PDF** for a comparison, where we inputted garments with irrelevant text in the background alongside garments with background text cropped through object detection. Both produced similar results, which suggests that AnyFit does not require precise clothing segmentation; it possesses the ability to recognize the main subject of the clothing. We believe that if AnyFit is trained on a sufficient quantity of complex background clothing images, or even images of models wearing the same style of clothing from different angles, the model will automatically learn to implicitly extract the clothing, thereby achieving model-to-model virtual try-on. This has already been demonstrated by TryonDiffusion and Wear-Any-Way.
### 3. Limited Training Data
**Response:** Our training results on two limited general datasets, VITON-HD and DressCode, significantly outperform the baselines (Fig. 5(a) and Tab. 3 in main paper), especially in complex scenarios. This should partially demonstrate our model's generalization. Additionally, training on a larger private dataset has further enhanced the model's generalization and stability to a very high level, as shown in Fig. 5 and Fig. 16 in main paper.
## Questions
### 1. Overcoming Issues with Ill-Fitted Garment Styles and Quality Degradation During Training
**Response:** Regarding the quality degradation issue of the SDXL-INPAINTING baseline model, we have enhanced its performance by merging excellent t2i model parameter variations within a model family. Additionally, to address the ill-fitted phenomenon that arises during cross-category fitting, we implemented an Adaptive Mask Boost strategy. Both methods are detailed in Section 3.3 of the main paper. If you still have any questions or concerns, please feel free to reach out to us; we would be happy to assist you.
### 2. Missing Baseline Methods in Table 1
**Response:** Two baselines from Tab. 1, HR-VTON and GP-VTON, are indeed missing from Tab. 3. The GP-VTON model relies on a specific preprocessing model that segments clothing into two sleeves and a torso. Unfortunately, the authors of GP-VTON have not made their segmentation preprocessing model publicly available, which prevents us from testing GP-VTON on the private dataset. HR-VTON was omitted because it is an earlier GAN-based model that has not been pretrained on a large-scale dataset. Its robustness is significantly lower, as it tends to overfit on the VITON-HD dataset and fails to generate meaningful results on the private dataset. To maintain a compact format, we have excluded the results of HR-VTON in the main paper. However, we are happy to provide visual results from HR-VTON in **Fig.2 in global Rebuttal PDF**, which demonstrate that it did not produce meaningful outcomes, making evaluation metrics unnecessary for it.
### 3. Handling Errors from Clothing-Agnostic Masks
**Response:** If I understand correctly, you are concerned that the proposed model may produce incorrect clothing-agnostic masks. In fact, the Adaptive Mask Boost strategy we adopted specifically addresses this issue. As shown in **Fig.3 in global Rebuttal PDF** , the masks generated by our model are noticeably more expansive than those of previous methods. This approach helps us to minimize the remnants of the original clothing, thereby not affecting the synthesis of new garments. Additionally, it forces the model to learn the correct clothing boundaries during training and to reconstruct large non-clothing areas that have been masked out. During inference, we provide the correct mask area length according to the aspect ratio of the clothing to be tried on (detailed in lines #182 to #204 in main paper). Through this more flexible masking strategy, we aim to avoid small leakage of the original clothing mask areas (**shown as red box regions in Fig.3 in global Rebuttal PDF**); at the same time, the larger mask areas do not lead to artifacts, which enhances the stability of the VTON results.
### 4. Computational Complexity
**Response:** Our proposed model takes approximately 5 seconds to perform 30-step sampling on a single NVIDIA RTX 3090 GPU, using around 15GB of memory.
We hope that our responses address your concerns adequately. Thank you again for your valuable feedback.
---
Rebuttal Comment 1.1:
Comment: After reading the rebuttal and other reviewers' comments, I maintain my initial rating.
---
Rebuttal 2:
Comment: First and foremost, we would like to express our sincere gratitude for your valuable feedback and thorough review of our paper. Your insights have been instrumental in enhancing both the quality of our research and its presentation. We are pleased to inform you that these improvements will be reflected in our revised version.
We hope that our efforts and responses during the rebuttal phase have adequately addressed your concerns regarding the identified weaknesses. If there are any remaining issues or points that require further clarification or discussion, please do not hesitate to let us know. We truly value your input and would be more than willing to address any lingering concerns, while improving the quality of our paper.
If we have successfully addressed all your doubts, we would greatly appreciate it if you could consider raising the rating for our paper. We believe that the sufficiency of our experiments (Reviewer v6TS), along with the thorough ablation study showcasing convincing results (Reviewer vsbp), underscores the rigor and depth of our work. Additionally, as noted by Reviewer Xer8, our research highlights a critical challenge in the field and demonstrates effectiveness in complex backgrounds.
Thank you once again for your feedback. | Rebuttal 1:
Rebuttal: ## Notes of the figures in the attached PDF.
**Fig.1 note: Ablation about connection from different blocks between HydraNet and MainNet.** In fact, we have questioned the connection between HydraNet and MainNet, and we aimed to understand which specific layers are contributing to this pipeline. Please refer to **Fig.1 in global Rebuttal PDF**. We attempted to sever the connection between HydraNet and MainNet by removing the injection of self-attention layers. The results showed that without self-attention layers between the up blocks, the model could not produce the correct try-on results at all. When we removed the injection of cross-attention layers (by using a zero image feature vector instead), the model only exhibited color biases with certain brightly colored garments. This demonstrates that self-attention layers, especially those between the up blocks, are the most important.
**Fig.2 note:** HR-VTON (GAN-based method) performs poorly on private data.
**Fig.3 note:** Comparison of our mask strategy and the previous mask strategy. The latter leaks the clothing outline, causing model cheating.
**Fig.4 note:** Fitting results of characters derived from OutfitAnyone.
**Fig.5 note:** The showcase of robustness against clothing background noise.
## Global explanations
### Explanation 1: Differences Between Self-Attention and Cross-Attention Injection
The self-attention layer uses features from the Unet itself to obtain Query tensors and performs attention calculation with the Key and Value tensors also from Unet features. In contrast, the cross-attention layer computes attention with Key and Value tensors derived from textual features. Their module structures are completely the same; the only difference lies in the sources of K and V features and the number of channels. When we use Stable Diffusion as the base model for training, the initialized cross-attention weights tend to emphasize the computation of global features from prompts, while the self-attention weights focus more on the details of visual features. This is influenced by the pre-trained weights, and their structures are isomorphic. Therefore, many recent try-on models based on SD primarily utilize self-attention for feature injection. TryonDiffusion is trained from scratch, so it can work with either self-attention or cross-attention. As an early work, it followed the precedent of using cross-attention for feature injection. Regarding your mention of pixel-wise structural bias, the original text should read, "However, channel-wise concatenation cannot handle complex transformations such as garment warping (see Sec. 4). This is because ..., these primitives have strong pixel-wise structural bias." This indicates that the bias arises from channel-wise concatenation injection and is not related to self-attention injection.
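The point that the two attention types are structurally identical, differing only in the source of the Key/Value tensors, can be illustrated with a minimal sketch (random matrices stand in for trained projection weights; nothing here is AnyFit's actual implementation):

```python
import numpy as np

def attention(q_src, kv_src, d=8, seed=0):
    """Scaled dot-product attention. Self-attention passes kv_src = q_src;
    cross-attention passes text features as kv_src. The module structure
    is identical -- only the K/V source and channel count differ."""
    rng = np.random.default_rng(seed)
    Wq = rng.normal(size=(q_src.shape[1], d))
    Wk = rng.normal(size=(kv_src.shape[1], d))
    Wv = rng.normal(size=(kv_src.shape[1], d))
    Q, K, V = q_src @ Wq, kv_src @ Wk, kv_src @ Wv
    scores = Q @ K.T / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=1, keepdims=True))  # row-wise softmax
    w /= w.sum(axis=1, keepdims=True)
    return w @ V

unet_feats = np.random.default_rng(1).normal(size=(16, 32))  # visual tokens
text_feats = np.random.default_rng(2).normal(size=(10, 64))  # prompt tokens

self_out = attention(unet_feats, unet_feats)   # self-attention: K, V from Unet
cross_out = attention(unet_feats, text_feats)  # cross-attention: K, V from text
```

Both calls share the same attention computation and return one output per visual token, which is why the two injection modes are interchangeable at the architectural level.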
### Explanation 2: Explanations about Adaptive Mask Boost and how it differs from HR-VITON
Firstly, the parsing-free mask refers to the fact that we do not use human parsing results (referred to as segmentation maps at the beginning of page 5 of HR-VITON) when generating the mask. Such models segment the human body into semantic parts such as limbs, head, upper garments, and lower garments. In the preprocessing of works like HR-VITON, they first generate masks using the torso and limbs from the OpenPose keypoints, for instance, covering the arms and the area between the shoulders and hips with rectangular areas, forming an OpenPose mask. Next, because some clothes are long or bulky, solely using the OpenPose mask may not fully cover the original clothing. They use the clothing map from human parsing as a mask, combining it with the OpenPose mask to form the final cloth-agnostic mask, ensuring that all clothing areas are covered.
However, this reliance on the parsing map for the mask can inadvertently leak the clothing outline, as shown in **Fig.3 in global Rebuttal PDF**. For instance, if you use a short-sleeve model for inference with long-sleeve clothing, due to the parsing map, the resulting cloth-agnostic mask may have protrusions at the shoulders (derived from the short sleeves). The model would perceive this information, as similar masks during training suggested that this was short-sleeve clothing; thus, the generated result would likely exhibit incorrect protrusions at the shoulders of the long-sleeve garment, as shown in **Fig. 6(b) in main paper**. Essentially, the cloth-agnostic mask during training leaked the correct outline of the original clothing, while the mask used during inference leaked an incorrect outline, leading to errors. The same applies to clothing length; if overly reliant on the human parsing map, when a short garment model tries on a long piece of clothing, it may produce a cloth-agnostic mask that is not long enough, resulting in generated clothing being too short.
Having observed this phenomenon in practice, we abandoned the human parsing models and instead opted to inflate the OpenPose masks in width and randomly extend the masks in length, avoiding the issue of the original OpenPose masks not covering the clothing hem or other areas. Furthermore, due to the inflation and random growth during training, our masks completely avoid leaking the original shape of the clothing. Therefore, the model can only determine how the clothing should appear by observing the clothing itself. Additionally, during inference, we utilize a subject detection model to detect the bounding box of the clothing and suitably extend the length of the OpenPose mask based on the aspect ratio $\sigma$ of the bounding box. This adaptive mask provides cues for the clothing length during inference, thus assisting in delivering the correct try-on effect. Specific parameters for our mask strategy are detailed in Sec. 3.3 Adaptive Mask Boost of our paper.
Pdf: /pdf/d0a30a41419c9bf7b308938bd85bfab6fbb2ff7e.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Multivariate Probabilistic Time Series Forecasting with Correlated Errors | Accept (poster) | Summary: Briefly summarise the paper and its contributions. This is not the place to critique the paper; the authors should generally agree with a well-written summary.
The paper proposes an extension to multivariate forecasting models that takes into account the error auto-correlation. The issue of remaining auto-correlation in the residuals is first illustrated clearly in an example with a multivariate forecasting method, and the authors then introduce their method. The method consists of modelling the cross-covariance through a learned weighted sum of kernel matrices that are projected to obtain the final estimated correlation. This approach leverages previous work on low-rank parametrisation of multivariate forecasting, and the authors manage to find an approach that remains tractable despite the complexity of the task. Experiments are conducted on various real-world datasets where their method is shown to improve several forecasting metrics such as CRPS-SUM, RMSE or energy scores.
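The kernel-combination idea summarised above can be sketched in NumPy — a lag-correlation matrix built as a softmax-weighted sum of squared-exponential kernels (function names, kernel choices, and parameters here are illustrative assumptions, not the paper's actual parametrisation):

```python
import numpy as np

def kernel_correlation(lags, length_scales, weights):
    """Correlation over `lags` time steps as a convex combination of
    squared-exponential kernel matrices with different length scales.
    Softmax-normalized weights keep the result a valid correlation
    matrix (unit diagonal, positive semidefinite)."""
    w = np.exp(weights) / np.exp(weights).sum()  # softmax -> convex weights
    t = np.arange(lags)[:, None] - np.arange(lags)[None, :]  # lag differences
    return sum(wi * np.exp(-t**2 / (2 * ls**2))
               for wi, ls in zip(w, length_scales))

# Two kernels: a short- and a long-range component, with learnable weights.
C = kernel_correlation(lags=6, length_scales=[1.0, 4.0],
                       weights=np.array([0.2, -0.5]))
```

Because each kernel has a unit diagonal and the weights are convex, `C` is symmetric with ones on its diagonal, so it can serve directly as a correlation matrix over the error process.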
Strengths: * the paper is well written, even though the approach is very technically involved, the authors did a great job at explaining it well
* the paper is well motivated: insightful data analyses are made to illustrate the presence of autocorrelated residuals and motivate their approach
* large coverage of experiments and empirical evaluations: the experiments are done on a large set of datasets covering many metrics and are also well assessed qualitatively
Weaknesses: * potential gaps in experiments:
* matching budget: the method is currently much more expensive than the baselines considered
* some details of Hyperparameter Optimization are currently missing (but will be added at the camera ready according to the rebuttal)
* the paper lacks clarity on whether the method will be competitive with better likelihoods. Currently the method may work well only because a non-competitive likelihood (Gaussian) is used, as opposed to Student-t or nonparanormal likelihoods which would yield better residuals
Technical Quality: 3
Clarity: 4
Questions for Authors: I am separating my questions between the most important ones that would help to raise my score and minor comments.
## Most important
**Lack of clarity whether the method will be competitive with better likelihoods.** Currently, the method only studies Gaussian noise. However, this noise is a poor fit for the datasets studied and as such is inflating the residuals. This leaves open the question of whether the method works just because the initial residuals are very poor (and adds a smoothing effect which helps their estimation) and not so much because of the reason given by the paper (that the approach models the autocorrelation aspect).
I can see two ways to remove this potential confounding factor: one is to include experiments with the nonparanormal distribution from [2], which would be more robust; the other is to consider the Student-t distribution, which should also be tractable.
**Details of Hyperparameter Optimization.** The authors mention:
```
The best hyperparameters for a base model—dataset combination are selected based on the best validation loss. Once the optimal learning rate and hidden size are determined, we apply the same hyperparameters for models both with and without applying our approach
```
How does it work exactly? Which model are you using to determine and fix the hyperparameters? (Selecting the best hyperparameters for your method may favour their transformation)
**Matching budget.** The method proposed by the authors is roughly 10x slower on average (and up to 30x slower for exchange-rate from Table 10). This point is important and currently a bit buried; for instance the authors mention: “However, there is no evidence that our method slows down convergence. On the contrary, it accelerates convergence for 5 out of 9 datasets for GPVar.” However, the wall-clock times are clearly much higher. I think this is fine for the paper overall but it should be stated clearly as a limitation (the method proposed by the authors would likely be worse than an ensemble of 10 baselines, for instance; I don’t expect this particular point to appear in text or experiment but I want to explain why this is important).
**Citation of multivariate evaluation metrics.** The authors use CRPS-sum as their main error metric (in addition to many other metrics, including energy scores, in their appendix). I would highly recommend citing this paper, which discusses the limitations of this metric when assessing multivariate methods (not mine):
*Regions of Reliability in the Evaluation of Multivariate Probabilistic Forecasts*
Étienne Marcotte, Valentina Zantedeschi, Alexandre Drouin, Nicolas Chapados
*Proceedings of the 40th International Conference on Machine Learning*, PMLR 202:23958-24004, 2023.
To be clear, I do not think this is a large problem given that the authors also conducted many high quality qualitative analysis of their results.
## Details
- It would be very useful to clearly spell out the dimension/numbers of projection parameters in a table, possibly in the appendix
- I highly encourage the authors to release their code (the intention is mentioned in the supp, the work will be hard to reproduce w/o given that it is quite technical, and the work will be more impactful as other will be able to directly compare to their setup)
- a small point regarding "real-world data frequently exhibit significant error autocorrelation and cross-lag correlation due to factors such as missing covariates": I agree that missing covariates are indeed a potential use case but it feels very niche to me (none of the datasets have covariates, for instance) compared to model misspecification
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer's encouraging comments and insightful suggestions.
# Questions
## Most important
> Lack of clarity whether the method will be competitive with better likelihoods.
***Response:*** Thank you for your valuable feedback. The reviewer is correct in noting that a better likelihood function could potentially regularize the training process, leading to less correlated residuals. For instance, assuming the errors follow a multivariate $t$-distribution can enhance the model's robustness to outliers. Additionally, a more powerful base model can also produce more independent residuals. Given these factors, we designed our approach to dynamically adapt to varying levels of error correlation. Our weighted correlation matrix can assign a higher weight to the identity matrix when errors are less correlated.
**We tested training the baseline models using the likelihood of the multivariate $t$-distribution, and the results are presented in Table 1 (see PDF for the global rebuttal).** It is true that using an alternative distribution can achieve better performance in certain datasets when our method is not implemented. However, we also observed that our method successfully remedies the performance gap for those datasets where the Gaussian assumption is outperformed by the $t$-distribution.
*Note: An important aspect of our method is the ability to use a subset of time series in a training batch for model optimization, which improves scalability. For the multivariate $t$-distribution, the distribution of the subsets of $\boldsymbol{z}\_t$ should have the same degrees of freedom as the distribution of $\boldsymbol{z}\_t$. Since the degrees of freedom become an additional output of the model in each training batch, they are not guaranteed to be the same. While this is not a problem for deep learning, it violates the marginalization rule of the $t$-distribution from a statistical perspective.*
Therefore, we chose Gaussian noise for its numerous beneficial properties, including its marginalization rule and well-defined conditional distribution, which are essential for statistically consistent model training and reliable inference. To address model misspecification, a more effective approach could involve first transforming the original observations $\mathbf{z}\_{t}$ into Gaussian-distributed data $\mathbf{x}\_{t}$ using a Gaussian Copula [1], and then applying our method.
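The Gaussian Copula preprocessing mentioned here can be sketched with the standard rank-based empirical transform (a generic construction in the spirit of [1]; the exact procedure used would be an implementation choice, and `scipy` is assumed available):

```python
import numpy as np
from scipy import stats

def to_gaussian(z):
    """Empirical Gaussian-copula transform: map observations z to
    approximately N(0, 1) margins via ranks. Correlation structure is
    preserved because the transform is strictly increasing."""
    ranks = stats.rankdata(z)          # 1..n (average ranks for ties)
    u = ranks / (len(z) + 1)           # push into (0, 1), avoiding endpoints
    return stats.norm.ppf(u)           # probit maps uniform ranks to Gaussian

# Heavy-tailed observations become approximately standard normal.
z = np.random.default_rng(0).lognormal(size=1000)
x = to_gaussian(z)
```

After this transform, a model with Gaussian likelihood (and the correlated-error structure proposed in the paper) can be fit to `x`, with predictions mapped back through the inverse empirical CDF.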
> Details of Hyperparameter Optimization.
***Response:*** Thank you for bringing this to our attention. When optimizing the hyperparameters of our model, we first tune each base model (e.g., GPVar) on each dataset individually and select the optimal hyperparameters based on the loss observed on the validation set. For example, there will be an optimal learning rate and hidden size for the GPVar model on the $\mathtt{traffic}$ dataset. We perform this tuning using the baseline models (i.e., without implementing our method). To ensure fair comparison, we use the same set of hyperparameters for the models implemented with our method.
> Matching budget.
***Response:*** The reviewer rightly points out the increased training cost introduced by our method. In Table 10, the clock time per epoch for training with our method is significantly higher compared to the baseline methods. This increase is attributed to the larger $DB \times DB$ covariance matrix used in the likelihood calculation. We will clearly state this limitation in the next version of the paper.
> Citation of multivariate evaluation metrics.
***Response:*** The reviewer is correct about the limitations of $\operatorname{CRPS}\_{\text{sum}}$ in assessing multivariate probabilistic forecasting, despite its wide usage in existing works [1-5]. $\operatorname{CRPS}\_{\text{sum}}$ has been reported to overlook the performance of the model on each dimension [6]. In [7], the authors suggest that neither $\operatorname{CRPS}$ nor the energy score is a good surrogate for the NLL score in small-sample regimes, which are common in practice. However, comparing NLL is infeasible in our case, as sampling must be conducted to calibrate predictions autoregressively over the prediction range. Therefore, we include various metrics for assessment in this paper. We will discuss the limitations of $\operatorname{CRPS}\_{\text{sum}}$ in the next version of the paper.
[1] Salinas, David, et al. High-dimensional multivariate forecasting with low-rank gaussian copula processes. In NeurIPS, 2019.
[2] Rasul, Kashif, et al. Multivariate probabilistic time series forecasting via conditioned normalizing flows. In ICLR, 2021.
[3] Rasul, Kashif, et al. Autoregressive denoising diffusion models for multivariate probabilistic time series forecasting. In ICML, 2021.
[4] Drouin, Alexandre, et al. Tactis. In ICML, 2022.
[5] Ashok, Arjun, et al. Tactis-2. In ICLR, 2024.
[6] Koochali, Alireza, et al. Random noise vs. state-of-the-art probabilistic forecasting methods: A case study on CRPS-Sum discrimination ability. Applied Sciences 12.10 (2022): 5104.
[7] Marcotte, Étienne, et al. Regions of reliability in the evaluation of multivariate probabilistic forecasts. In ICML, 2023.
## Details
> It would be very useful ..
***Response:*** Thank you for the suggestion. We have provided more details about the number of projection parameters (see Table 2 and Table 3 in the PDF for the global rebuttal.). It can be seen that we introduce only a negligible amount of new parameters to the model.
> I highly encourage ...
***Response:*** Yes, we will release the code as soon as the work is published.
> a small point regarding ...
***Response:*** We indeed have covariates for each dataset, which include generic features that encode time and time series identification, as detailed in Appendix A.6. We agree that a better likelihood, a stronger base forecasting model, and comprehensive covariates all contribute to less correlated residuals.
---
Rebuttal 2:
Title: Answer to rebuttal
Comment: Thank you for your very clear answer which addressed my major points. I have increased my score to reflect this.
Regarding your comment "An important aspect of our method is the ability to use a subset of time series in a training batch for model optimization, which improves scalability", this is true but I think this is not a contribution of this paper as this was also present in [1].
---
Rebuttal Comment 2.1:
Comment: Thank you for your positive feedback and for increasing your score based on our response. We appreciate your acknowledgment of our clarifications.
Regarding the use of a subset of time series for model optimization, we agree that this approach was also present in [1]. Our intent was to highlight that our method can also inherit this feature. We will revise the text to ensure this distinction is clear. Thank you for pointing this out. | Summary: The paper proposes an approach for multivariate time series forecasting, modelling the correlation between the multivariate errors in close time steps.
The authors show that indeed multivariate errors in close time instants are correlated.
They generalise the work of Zheng et al. (AISTATS 2024), where the idea of modelling the correlation between errors in different time steps has been used only for probabilistic univariate models.
Experimentally, they consider two different deep architectures suitable for multivariate time series forecasting.
In both cases they report better performance compared to the traditional approach which does not model the correlation between the multivariate errors in different time steps.
Strengths: Generalizing the model of the temporal correlation of error from the univariate to the multivariate case is a challenging problem.
The proposed approach, based on modelling the correlation of the error as the sum of kernels with different length scales, can be applied to any deep network model suitable for multivariate forecasting.
The approach is tested with two different neural architectures and positive results are reported in both cases.
The paper is well written.
Weaknesses: None in particular.
Technical Quality: 3
Clarity: 3
Questions for Authors: * You show a better CRPS for the models trained with correlation compared to those without correlation. Could you elaborate if the improvement of CRPS is due to better point forecasts, better variance of the predictive distribution, or both?
* line 169-171: could you clarify (also in the final paper) the sentence: Since the parameters of mapping functions are shared for all time series, we can view z_t as a Gaussian process assessed at points h_{i,t}?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: None in particular.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer's encouraging comments and insightful suggestions.
# Questions
> Could you elaborate if the improvement of CRPS is due to better point forecasts, better variance of the predictive distribution, or both?
***Response:*** In this paper, we use $\operatorname{CRPS}\_{\text{sum}}$ as the main metric, while energy score, quantile loss, and RRMSE are included as additional metrics in the Appendix. The $\operatorname{CRPS}\_{\text{sum}}$ and energy score primarily assess the **overall quality** of the predictive distribution, while quantile loss evaluates the accuracy of specific quantiles, denoted by $\rho$, of the predictive distribution. Specifically, we assess the $0.5$-risk and $0.9$-risk, with $0.5$-risk being equivalent to the MAE of the mean prediction, which reflects the quality of point forecasts. The $0.9$-risk offers a snapshot of the quality of a specific quantile, which is influenced by the variance/covariance of the predictive distribution. The comparison of $0.5$-risk and $0.9$-risk using GPVar as an example is shown in Table 1. We observe that the results vary across different datasets. In some cases, both $0.5$-risk and $0.9$-risk are improved. In other instances, our method improves either the point forecasts or the higher quantile risk. The outcomes can depend on the characteristics (e.g., noise level) of each model and dataset. For GPVar, our method improves $0.5$-risk on 5 out of 9 datasets and $0.9$-risk on 6 out of 9 datasets. **It is important to note that $0.5$-risk and $0.9$-risk provide only two snapshots of the quality of the predictive distribution and are thus not as comprehensive as $\operatorname{CRPS}\_{\text{sum}}$**.
**Table 1. Comparison of $0.5$-risk and $0.9$-risk using GPVar. "w/o" denotes methods without time-dependent errors, while "w/" indicates our method. Mean and standard deviation are obtained from 10 runs of each model.**
| | $0.5$-risk (w/o) | $0.5$-risk (w/) | rel. impr. | $0.9$-risk (w/o) | $0.9$-risk (w/) | rel. impr. |
|---|---|---|---|---|---|---|
| $\mathtt{exchange\\_rate}$ | 0.0109±0.0003 | 0.0095±0.0004 | 12.84% | 0.0042±0.0001 | 0.0057±0.0001 | -35.71% |
| $\mathtt{solar}$ | 0.4998±0.0025 | 0.5246±0.0016 | -4.96% | 0.1617±0.0004 | 0.1597±0.0003 | 1.24% |
| $\mathtt{electricity}$ | 0.0405±0.0003 | 0.0397±0.0002 | 1.98% | 0.0211±0.0004 | 0.0185±0.0002 | 12.32% |
| $\mathtt{traffic}$ | 0.0933±0.0001 | 0.0859±0.0001 | 7.93% | 0.0666±0.0001 | 0.0580±0.0001 | 12.91% |
| $\mathtt{wikipedia}$ | 0.2231±0.0005 | 0.2236±0.0006 | -0.22% | 0.2136±0.0002 | 0.2048±0.0001 | 4.12% |
| $\mathtt{m4\\_hourly}$ | 0.0807±0.0001 | 0.0849±0.0001 | -5.20% | 0.0452±0.0002 | 0.0463±0.0001 | -2.43% |
| $\mathtt{m1\\_quarterly}$ | 0.2196±0.0023 | 0.1948±0.0005 | 11.29% | 0.3049±0.0044 | 0.2787±0.0027 | 8.59% |
| $\mathtt{pems03}$ | 0.0568±0.0000 | 0.0574±0.0001 | -1.06% | 0.0317±0.0000 | 0.0317±0.0001 | 0.00% |
| $\mathtt{uber\\_hourly}$ | 0.1035±0.0002 | 0.1013±0.0002 | 2.13% | 0.0533±0.0002 | 0.0528±0.0001 | 0.94% |
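For concreteness, the $\rho$-risk reported in Table 1 above can be computed as a normalized pinball (quantile) loss; the following is a minimal illustrative sketch (the function name and toy values are ours, not taken from the paper):

```python
import numpy as np

def quantile_risk(z, q_pred, rho):
    """Normalized rho-risk (pinball loss) over a forecast horizon.

    z      : array of true values
    q_pred : array of predicted rho-quantiles (same shape)
    rho    : quantile level, e.g. 0.5 or 0.9
    For rho = 0.5 this reduces to the absolute error of the forecast,
    normalized by the total absolute magnitude of the targets.
    """
    z, q_pred = np.asarray(z, float), np.asarray(q_pred, float)
    diff = z - q_pred
    loss = 2.0 * np.where(diff > 0, rho * diff, (rho - 1.0) * diff)
    return loss.sum() / np.abs(z).sum()

z = np.array([1.0, 2.0, 3.0])
assert np.isclose(quantile_risk(z, z, 0.5), 0.0)        # perfect forecast
assert np.isclose(quantile_risk(z, z - 1.0, 0.5), 0.5)  # constant under-prediction
```

Under-predicting is penalized more heavily at $\rho=0.9$ than at $\rho=0.5$, which is why the two risks can move in opposite directions on the same dataset.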
> line 169-171: could you clarify (also in the final paper) the sentence: Since the parameters of mapping functions are shared for all time series, we can view z_t as a Gaussian process assessed at points hi_t?
***Response:*** We apologize for any confusion caused. In this paper, the time series variables $\mathbf{z}\_{t}$ jointly follow a multivariate Gaussian distribution. Since any subset of $\mathbf{z}\_{t}$ will also follow a multivariate Gaussian distribution, we can use a training batch comprising $B$ random time series from a total of $N$ time series, on which the likelihood is computed.
Following [1], we assume $\left.\mathbf{z}\_{t} \mid \mathbf{h}\_{t}\right.\sim\mathcal{N}\left( \boldsymbol{\mu}(\mathbf{h}\_{t}), \boldsymbol{\Sigma}(\mathbf{h}\_{t})\right)$, where $\boldsymbol{\Sigma}\_t=\boldsymbol{L}\_{t}\boldsymbol{L}\_{t}^\top+\text{diag}{(\mathbf{d}\_{t})}$, and $\boldsymbol{\mu}\_{t}$, $\boldsymbol{L}\_{t}$, and $\mathbf{d}\_{t}$ are conditioned on $\mathbf{h}\_{t}$:
$$
\boldsymbol{\Sigma}\left(\mathbf{h}\_t\right)=\left[\begin{array}{ccc}d_1\left(\mathbf{h}\_{1, t}\right) & & 0 \\\ & \ddots & \\\ 0 & & d_N\left(\mathbf{h}\_{N, t}\right)\end{array}\right]+\left[\begin{array}{c}\mathbf{l}_1\left(\mathbf{h}\_{1, t}\right) \\\ \vdots \\\ \mathbf{l}_N\left(\mathbf{h}\_{N, t}\right)\end{array}\right]\left[\begin{array}{c}\mathbf{l}_1\left(\mathbf{h}\_{1, t}\right) \\\ \vdots \\\ \mathbf{l}_N\left(\mathbf{h}\_{N, t}\right)\end{array}\right]^\top=\boldsymbol{D}_t+\boldsymbol{L}\_{t} \boldsymbol{L}\_{t}^\top.
$$
where $\mu_i$, $d_i$, and $\mathbf{l}_{i}$ are the mapping functions used to transform the hidden state $\mathbf{h}\_{i,t}$ of time series $i$ at time $t$ to the corresponding distribution parameters. By shared mapping functions, we mean that we use a set of global mapping functions $\tilde{\mu}(\cdot)$, $\tilde{d}(\cdot)$, and $\tilde{\mathbf{l}}(\cdot)$ for all time series, instead of having a separate function for each time series. The shared mapping functions enable us to take any subset of time series and calculate the mean and covariance.
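To make the shared-mapping-function argument concrete, here is a minimal numerical sketch (the linear mappings, sizes, and variable names are illustrative, not the ones used in the paper); it also shows that, because the mappings are shared, any subset of series yields exactly the corresponding sub-block of the full covariance:

```python
import numpy as np

rng = np.random.default_rng(0)
N, H, R = 5, 8, 2          # series in batch, hidden size, rank of L_t

# Illustrative shared mapping functions: one set of weights applied to
# every series' hidden state (not per-series parameters).
W_mu = rng.normal(size=H)
W_d  = rng.normal(size=H)
W_l  = rng.normal(size=(R, H))

h_t = rng.normal(size=(N, H))            # hidden states of the batch at time t

mu    = h_t @ W_mu                       # (N,)  means
d     = np.log1p(np.exp(h_t @ W_d))      # (N,)  positive diagonal via softplus
L     = h_t @ W_l.T                      # (N,R) low-rank factors
Sigma = np.diag(d) + L @ L.T             # Sigma_t = D_t + L_t L_t^T

# The same construction on a subset of series gives the matching sub-block,
# so the likelihood can be computed on any random batch of B series.
idx = [0, 2, 4]
Sigma_sub = np.diag(d[idx]) + L[idx] @ L[idx].T
assert np.allclose(Sigma_sub, Sigma[np.ix_(idx, idx)])
```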
[1] Salinas, David, et al. High-dimensional multivariate forecasting with low-rank gaussian copula processes. In NeurIPS, 2019.
---
Rebuttal Comment 1.1:
Comment: Thank you for your clarification, I keep unchanged my positive evaluation of the paper.
---
Reply to Comment 1.1.1:
Comment: Thank you for acknowledging our clarifications and your positive evaluation of our paper. | Summary: The paper proposes a method to improve multivariate probabilistic forecasting by accounting for potential temporal dependencies of the residuals. The paper introduces a dynamic covariance using a small number of latent temporal processes. The method is evaluated on standard multivariate time-series datasets with two forecasting models, using the CRPS as a metric.
Strengths: The method proposed in the paper is original and clearly motivated.
The paper shows improvements of the CRPS accuracy over a number of time-series for two models.
Weaknesses: The paper partly relies on the premise that incorporating time-dependency in the residuals is crucial for providing better uncertainty estimates. Demonstrating that the new residuals generated by the paper's method are less auto-correlated than those without the method is therefore crucial. However, the paper lacks a comprehensive metric for this evaluation. The theoretical introduction could be enhanced for mathematical clarity and efficiency.
Technical Quality: 3
Clarity: 3
Questions for Authors: How does the number of time-series influence your method? In particular, it could be interesting to measure the decorrelation of the residuals with your method (with an appropriate metric that averages over all the time-series), and to compare against methods that do not account for it.
Minor comments.
- l18 "vital", this is a strong word
- equation (1) is not accurate. Indeed, to simplify, let's take $P=1, Q=2$, and $x = 0$ (in order to remove it). To simplify further, let's assume $z$ takes discrete values. Writing $A=\{z_T=z^0_T\}, B=\{z_{T+1}=z^0_{T+1}\}, C=\{z_{T+2}=z^0_{T+2}\}$, equation (1) implies the equation $\mathbb{P}(B\cap C | A) = \mathbb{P}(B|A)\mathbb{P}(C|B)$ where $\mathbb{P}$ is the probability measure. This identity is wrong. Indeed, one just needs to take $A\cap B\cap C = \emptyset$ but $A\cap B \ne \emptyset$ and $B\cap C \ne \emptyset$.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: The limitations of the method should be explored.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer's encouraging comments and insightful suggestions.
# Weaknesses
> The paper partly relies on the premise that incorporating time-dependency in the residuals is crucial for providing better uncertainty estimates. Demonstrating that the new residuals generated by the paper's method are less auto-correlated than those without the method is therefore crucial. However, the paper lacks a comprehensive metric for this evaluation. The theoretical introduction could be enhanced for mathematical clarity and efficiency.
***Response:*** Thank you for the suggestions. To the best of our knowledge, there is no widely used metric that can comprehensively evaluate both autocorrelation and cross-lag correlation in our case. Therefore, we chose to use the comparison of ACF plots (Figures 6-12 in the Appendix) to demonstrate the effect of decreased autocorrelation and the comparison of cross-correlation plots (Figures 13-18 in the Appendix) to show the effect of decreased cross-lag correlation. We will enhance the theoretical introduction in the next version of the paper to improve the flow.
# Questions
> How does the number of time-series influence your method? In particular, it could be interesting to measure the decorrelation of the residuals with your method (with an appropriate metric that averages over all the time-series), and to compare against methods that do not account for it.
***Response:*** The number of time series does not influence the training since the model is trained based on a random subset of $B$ time series at a time, regardless of the total number of time series $N$. However, during inference, one can increase the number of time series in a batch to leverage the information from more time series, as long as memory allows. We have provided an additional experiment showing the effect of increasing the number of time series in a batch when performing inference (see Fig. 1 in the PDF for the global rebuttal).
To the best of our knowledge, there is no widely used metric that can comprehensively evaluate both autocorrelation and cross-lag correlation in our case. Therefore, we chose to use the comparison of ACF plots (Figures 6-12 in the Appendix) to demonstrate the effect of decreased autocorrelation and the comparison of cross-correlation plots (Figures 13-18 in the Appendix) to show the effect of decreased cross-lag correlation.
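For reference, the lag-$k$ sample ACF underlying those plots can be computed as follows (an illustrative sketch on synthetic residuals, not data from the paper):

```python
import numpy as np

def sample_acf(x, max_lag):
    """Sample autocorrelation of a 1-D residual series for lags 1..max_lag."""
    x = np.asarray(x, float) - np.mean(x)
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom
                     for k in range(1, max_lag + 1)])

rng = np.random.default_rng(1)
white = rng.normal(size=5000)        # uncorrelated residuals
ar1 = np.empty(5000)                 # AR(1) residuals with phi = 0.8
ar1[0] = white[0]
for t in range(1, 5000):
    ar1[t] = 0.8 * ar1[t - 1] + white[t]

# white-noise residuals show near-zero ACF; autocorrelated ones do not
assert abs(sample_acf(white, 1)[0]) < 0.1
assert sample_acf(ar1, 1)[0] > 0.5
```

Averaging such ACF values over all series would give one possible (if not standard) aggregate decorrelation measure.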
> l18 "vital", this is a strong word.
***Response:*** Thank you for pointing this out. We will revise the narrative in the next version of the paper.
> equation (1) is not accurate.
***Response:*** Thank you for your observation. The example given by the reviewer is indeed correct in a general context. However, in Eq.(1), the factorization is based on the joint distribution of Gaussian variables. Since the PDF of a Gaussian is always positive, the factorization in Eq.(1) holds true in this context.
# Limitations
The limitations of the method should be explored.
***Response:*** Methodology-wise, the limitations of our method are mainly two-fold. Firstly, there is a potential model misspecification issue by assuming Gaussian noise. A potential solution is to first transform the original observations $\mathbf{z}\_{t}$ into Gaussian-distributed data $\mathbf{x}\_{t}$ using a Gaussian Copula, and then apply our method. The second limitation comes from the parameterization of the covariance matrix. Specifically, the Kronecker structure $\boldsymbol{C}\_t \otimes \mathbf{I}\_{R}$ for the covariance matrix of the latent variable $\boldsymbol{r}\_t^{\text{bat}}$ may be too restrictive. This structure assumes that the rows (i.e., latent temporal processes) in the matrix $\left[\mathbf{r}\_{t-D+1},\ldots,\mathbf{r}\_{t}\right]$ are identically distributed following $\mathcal{N}\left(\boldsymbol{0},\boldsymbol{C}\_t\right)$. Additionally, the parameterization of $\boldsymbol{C}\_t$ could be expanded. Instead of using SE kernels, $\boldsymbol{C}\_t$ could be parameterized as fully learnable positive definite symmetric Toeplitz matrices, allowing for the modeling of negative correlations. Computation-wise, our method can lead to increased training costs due to the larger $DB \times DB$ covariance matrix used in the likelihood calculation.
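To illustrate the last limitation, a correlation matrix built from a convex combination of SE kernels is symmetric Toeplitz with unit diagonal but contains only positive entries (the lengthscales and weights below are illustrative, not fitted values):

```python
import numpy as np

def se_correlation(D, lengthscales, weights):
    """Correlation over D lags as a convex combination of SE kernels.

    C[i, j] = sum_m w_m * exp(-(i - j)^2 / (2 * l_m^2)); with weights on
    the simplex the diagonal is exactly 1, and C is symmetric Toeplitz
    because each entry depends only on |i - j|.
    """
    lags = np.arange(D)
    diff = lags[:, None] - lags[None, :]
    return sum(w * np.exp(-diff**2 / (2.0 * l**2))
               for w, l in zip(weights, lengthscales))

C = se_correlation(4, lengthscales=[1.0, 5.0], weights=[0.3, 0.7])
assert np.allclose(np.diag(C), 1.0)   # unit diagonal
assert np.allclose(C, C.T)            # symmetric
assert np.isclose(C[0, 1], C[1, 2])   # Toeplitz: constant along diagonals
assert np.all(C > 0)                  # SE mixtures cannot produce negative entries
```

A fully learnable positive definite symmetric Toeplitz parameterization would lift this last restriction and allow negative lag correlations.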
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications and the additional experiments.
I will maintain my position of accepting this paper.
---
Reply to Comment 1.1.1:
Comment: Thank you for acknowledging our clarifications and additional experiments. | Summary: This paper introduces a method for multi-variate time series forecasting where the residual errors are not assumed to be independent but are instead modeled as correlated using a Gaussian process model. Standard time-series approaches assume that the errors are temporally independent; however, this assumption does not hold in many datasets due to model misspecification. It is shown experimentally that the residuals have a non-zero correlation between different time steps.
The proposed method accounts for the correlation between the time steps using a Gaussian process approach. The correlation matrix of the GP is designed to be both temporal and spatial. Spatial correlations occur due to a low-rank assumption of the correlation matrix in the spatial dimension for a particular time step. Temporal correlations occur due to a non-identical correlation matrix for the latent variables across different time steps.
Experimental results on benchmark datasets show improved performance with temporal correlations than without. An extensive analysis and interpretation of the results are provided in the appendix.
Strengths: - The presented approach is well motivated as experimentally shown in the extensive analysis. Residual errors are indeed correlated and hence this is an interesting and significant problem in the time series community.
- The proposed method is technically sound. Gaussian processes are an ideal approach to learn correlations between dimensions and hence a natural choice to account for the temporal correlations.
- Experimental results show improved performance on standard benchmark datasets when modeling the correlations. A full set of experiments and extensive analysis shows potential of impact.
Weaknesses: - The proposed method may not be a completely novel approach to time series forecasting. Gaussian processes have been used for time series predictions extensively and a simple way to account for temporal correlations is the following:
- Learn a deterministic base model on the data, compute the residual and fit a Gaussian process on the residuals.
- The Gaussian process kernel is a product of two kernels, one along the time dimension and the other across time series.
- It is unclear how this simple baseline will perform compared to the proposed method which is complex.
- Some strong baselines are missing from the experiments. Just to give some examples:
- Rangapuram, Syama Sundar, et al. "Deep state space models for time series forecasting." Advances in neural information processing systems 31 (2018).
- Salinas, David, et al. "DeepAR: Probabilistic forecasting with autoregressive recurrent networks." International journal of forecasting 36.3 (2020): 1181-1191
Technical Quality: 4
Clarity: 4
Questions for Authors: - How is the model actually learned? A full algorithm listing all the steps would be very useful.
- Are the base models and residual GP model trained independently? Or are they trained end-to-end?
- How is inference actually done? Let's say we have historical data up to step t, and would like to predict step t+1. The correlation between the residual errors and information about the residual errors till step t helps predict a more accurate residual for step t. Decreasing correlation over longer time ranges means that the advantage of modeling correlated errors will be small for long horizon predictions. Why don't we observe this phenomenon in Figures 19 and 20?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Please comment on the usefulness of the method for various time ranges. It seems that the method may not be an advantage when predicting over longer time ranges.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer's encouraging comments and insightful suggestions.
# Weaknesses
> The proposed method may not be a completely novel approach to time series forecasting.
***Response:*** Thank you for your valuable suggestions. Our model is not a two-stage model that first fits the mean of the data and then fits the residuals. Instead, our method employs an end-to-end learning approach that dynamically outputs the hyperparameters of the Gaussian Process (GP) on the errors. This approach avoids the need for constant re-training of the GP to obtain hyperparameters over the temporal horizon in the context of a two-stage model.
> Some strong baselines are missing from the experiments.
***Response:*** Thank you for your feedback. We did not use Deep SSM and DeepAR as baselines because they are univariate models that do not account for inter-series relationships. Therefore, we included only multivariate baselines in this paper, following [1].
[1] Salinas, David, et al. High-dimensional multivariate forecasting with low-rank gaussian copula processes. In NeurIPS, 2019.
# Questions
> How is the model actually learned?
***Response:*** Thank you for your comments. Our model is trained end-to-end, similar to any deep learning model. Therefore, we did not provide a list of algorithm steps, as it is not a multi-stage training process. The base forecasting model provides a hidden state $\mathbf{h}\_{t}$ at each time step. In the baseline models, $\mathbf{h}\_{t}$ is projected onto the distribution parameters $(\boldsymbol{\mu}\_{t}, \boldsymbol{L}\_{t}, \mathbf{d}\_{t})$ using a simple neural network. With our method, $\mathbf{h}\_{t}$ is also projected onto the weight parameters $w\_{m,t}$ of the sum of kernels in the dynamic correlation matrix $\boldsymbol{C}\_t$ using an additional simple neural network. The training process remains the same, with the addition of trainable parameters in the neural networks for projecting $w\_{m,t}$ and a modified loss function (Eq. (10)) that accounts for temporal correlation.
> Are the base models and residual GP model trained independently? Or are they trained end-to-end?
***Response:*** As clarified in our response to the previous question, our model is trained in an end-to-end manner.
> How is inference actually done?
***Response:***
The inference process is performed by applying Eq.(12) **recursively** until the desired prediction range is reached. For example, when predicting $\mathbf{z}\_{t+1}=\boldsymbol{\mu}\_{t+1}+\boldsymbol{\eta}\_{t+1}$:
$$
\boldsymbol{\eta}\_{t+1} \mid \boldsymbol{\eta}\_{t}, \boldsymbol{\eta}\_{t-1},\ldots,\boldsymbol{\eta}\_{t-D+2} \sim \mathcal{N} ( \boldsymbol{\Sigma}\_{\star}\boldsymbol{\Sigma}\_{\text{obs}}^{-1}\boldsymbol{\eta}\_{\text{obs}}, \boldsymbol{\Sigma}\_{t+1}- \boldsymbol{\Sigma}\_{\star}\boldsymbol{\Sigma}\_{\text{obs}}^{-1} \boldsymbol{\Sigma}\_{\star}^{\top})
$$
where $\boldsymbol{\eta}\_{\text{obs}}=\operatorname{vec}\left(\left[\boldsymbol{\eta}\_{t-D+2},\ldots,\boldsymbol{\eta}\_{t-1},\boldsymbol{\eta}\_t\right]\right)$ represents the observed residuals, accessible at forecasting step $t+1$. $\boldsymbol{\Sigma}\_{\text{obs}}$ is the partition of $\boldsymbol{\Sigma}\_{t+1}^{\text{bat}}$ that gives the covariance of $\boldsymbol{\eta}\_{\text{obs}}$, and $\boldsymbol{\Sigma}\_{\star}$ is the partition of $\boldsymbol{\Sigma}\_{t+1}^{\text{bat}}$ that gives the covariance of $\boldsymbol{\eta}\_{t+1}$ and $\boldsymbol{\eta}\_{\text{obs}}$.
**By "recursively", we mean that the samples of $\boldsymbol{\eta}\_{t+1}$ can be used as observed residuals for the next prediction step.** Therefore, the decreasing correlation indicated by $\boldsymbol{C}\_t$ does not imply that as we move further from time $t+1$, $\boldsymbol{\eta}_{\text{obs}}$ becomes less useful. On the contrary, $\boldsymbol{\eta}\_{\text{obs}}$ will be continuously updated as we progress. Thus, it is not necessarily the case that the advantage of modeling correlated errors will diminish for long-horizon predictions.
However, the advantage of modeling error correlation can indeed vary over long-term forecasting. Generally, errors will accumulate and propagate to future time steps when performing autoregressive predictions. Using residuals from previous time steps can be advantageous, especially for non-stationary segments of the time series. However, this may not be the case for time series with strong periodic effects, where the model can leverage information from the seasonal lags of the data. As shown in Figures 19 and 20, the advantage of modeling error correlation decreases over time for some datasets, while for others it does not.
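The conditioning step in the expression above is standard Gaussian conditioning; here is a minimal numerical sketch (an arbitrary SPD joint covariance stands in for the model's $\boldsymbol{\Sigma}\_{t+1}^{\text{bat}}$, and all sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n_obs, n_new = 3, 1

# Illustrative joint covariance of (eta_obs, eta_{t+1}); any SPD matrix works.
A = rng.normal(size=(n_obs + n_new, n_obs + n_new))
Sigma = A @ A.T + np.eye(n_obs + n_new)

Sigma_obs  = Sigma[:n_obs, :n_obs]    # covariance of observed residuals
Sigma_star = Sigma[n_obs:, :n_obs]    # cross-covariance of eta_{t+1} and eta_obs
Sigma_new  = Sigma[n_obs:, n_obs:]    # marginal covariance of eta_{t+1}

eta_obs = rng.normal(size=n_obs)      # residuals observed up to step t

# Gaussian conditioning, matching the expression above
cond_mean = Sigma_star @ np.linalg.solve(Sigma_obs, eta_obs)
cond_cov  = Sigma_new - Sigma_star @ np.linalg.solve(Sigma_obs, Sigma_star.T)

# sample eta_{t+1}; this sample would join eta_obs for the next recursive step
eta_next = cond_mean + np.linalg.cholesky(cond_cov) @ rng.normal(size=n_new)
```

Conditioning on the observed residuals shrinks the predictive variance relative to the marginal, which is where the improvement over independent-error models comes from.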
# Limitations
> Please comment on the usefulness of the method for various time ranges.
***Response:*** As clarified in an earlier response, it is not necessarily the case that the advantage of modeling correlated errors will diminish for long-horizon predictions. The usefulness of our method in longer-range forecasting can vary based on the quality of predictions generated during the inference process. Generally, errors will accumulate and propagate to future time steps when performing autoregressive predictions. Using residuals from previous time steps can be advantageous, especially for non-stationary segments of the time series. However, this may not be the case for time series with strong periodic effects, where the model can leverage information from the seasonal lags of the data. In cases where the model provides good predictions for long-term forecasting, the advantage of our method can be less obvious.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarifications. I will maintain my position of accepting this paper. I urge the authors to address the above questions fully in the paper. This will only lead to a further improvement in the presentation of the results.
- A complete figure showing all the components (and perhaps the loss functions used) will be very helpful for future readers. It helps improve the presentation of the paper.
- Limitations: It will be very interesting to see how the error correlations behave over longer horizon forecasts. Both in the case of periodic effects and otherwise.
---
Reply to Comment 1.1.1:
Comment: Thank you for your constructive feedback. We will add a complete figure showing all components, including the loss functions, to improve the clarity of our paper. We also recognize the value of exploring error correlations over longer forecast horizons and will add a discussion to address this point in a future version of our paper. | Rebuttal 1:
Rebuttal: Dear AC and Reviewers,
We would like to express our gratitude for your thorough and insightful reviews. We have carefully considered each of your comments and suggestions. Below, we provide a summary of our responses to the main points raised by the reviewers.
1. Clarifications and Misunderstandings
- Learning and inference process: We have clarified that our model is trained end-to-end and performs inference in an autoregressive manner.
- Clarification of viewing $\mathbf{z}_{t}$ as a Gaussian process.
2. Comparison with Related Work
- Is the method competitive with better likelihood? We have added experiments showing that our method is competitive with a likelihood based on the $t$-distribution (Table 1 in pdf).
3. Experimental Results
- How does the number of time series influence our method? We have added experiments showing the effect of increasing the number of time series in a batch when performing inference (Figure 1 in pdf).
- How can the decorrelation of the residuals be measured? We have shown the decorrelation effect in our Appendix using ACF plots and cross-correlation plots.
- Does the improvement in CRPS come from the point forecast or better variance/covariance? We have provided quantile loss as two snapshots from the predictive distribution to assess the improvement (Table 1 in the response to Reviewer Vi7g).
- Experiment details: We have provided information about the number of parameters in each component of our models (Table 2&3 in pdf).
We believe that these revisions have significantly strengthened the paper and addressed the reviewers' concerns. We appreciate the reviewers’ time and effort in providing constructive feedback, which has been invaluable in improving our work.
Thank you for your consideration.
Sincerely,
The authors
Pdf: /pdf/10be0318c5417730ccda07919a60629482b5316a.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Instruction-Guided Visual Masking | Accept (poster) | Summary: In this paper, the authors introduce Instruction-guided Visual Masking (IVM), a generic and powerful visual grounding method that enhances broad multimodal instruction following tasks in a plug-and-play way. By masking out all instruction-irrelevant image regions, IVM effectively injects superior visual grounding ability into downstream LMMs non-intrusively, significantly boosting both commercial and open-sourced LMMs and achieving state-of-the-art results across numerous challenging multimodal benchmarks.
Strengths: Overall, the motivation of this paper is commendable, as it addresses an interesting problem and provides rich illustrations that facilitate understanding. Furthermore, the work is supported by a substantial number of experiments to validate its effectiveness.
Weaknesses: 1. The routine of retraining models by preparing new datasets, as described in this paper, can be considered somewhat old-fashioned.
2. The technical insight of this paper is relatively weak, and no strongly novel technical methods are proposed.
Technical Quality: 3
Clarity: 3
Questions for Authors: I have a few minor concerns that I would like the author to address.
1. The routine of retraining models by preparing new datasets, as described in this paper, can be considered somewhat old-fashioned. Recent literature has proposed similar grounding datasets, such as Ferret (GRIT) [1] and Kosmos-2 (GRIT) [2]. However, these two works are not cited. There should be some new discussion on motivation about the dataset in this paper.
2. Why does the instruction-related area between 90% and 100% in Figure 5 increase so abnormally? It is necessary to consider whether these data need to be cleaned.
3. Formulas 1 and 2 in the paper are somewhat deliberately mystifying. I understand that the purpose is to learn an optimal model from the hand-labeled data and the auto-generated data. This process is somewhat similar to the pseudo-label curriculum learning proposed in CLIP-VG [3]. It is suggested to add the relevant discussion and citations.
4. I have some confusion regarding Figure 6 of the paper, which requires modification. Specifically, (1) the discriminator and generator should be two separate processes, while in Figure 6, the image, text, and attention are depicted as being fed to the model simultaneously; (2) It is unclear whether the LLM in the figure represents one model or two models since there are two LoRAs shown but only one LMM is illustrated.
5. A framework diagram of the inference model used in the downstream experiment should be drawn according to Figure 6, so as to illustrate how the IVM assists the model in inference.
6. Since the paper claims that IVM is a plug-and-play method for assisting LLMs, I am curious about how this paper performs on the RefCOCO/+/g datasets after incorporating IVM. After all, the motivation behind this paper stems from the visual grounding task, and there has been a considerable amount of research on grounding multimodal large language models, such as Ferret [1], Kosmos-2 [2], and LION [4].
7. Other writing issues, such as some vocabulary in the paper should be consistent. For example,
(a) line 135 RefCoCo should be changed to RefCOCO to maintain uniformity;
(b) Lora is used in Figure 5, while LORA is used in the main text, and it is recommended to use LoRA uniformly in order to be consistent with the original article;
(c) both LMM and LLM are used in this paper; however, the full form of LLM is never given, and it may be better to use LLM and MLLM uniformly. These rudimentary errors should not appear in NeurIPS submission papers.
On the whole, this paper is a relatively solid work: the overall presentation is good and the literature review is relatively sufficient, so I currently give a positive score. I hope the authors can address my concerns, and I will decide whether to lower or raise my rating according to the authors' rebuttal reply.
--
[1] You, Haoxuan, et al. "Ferret: Refer and Ground Anything Anywhere at Any Granularity." The Twelfth International Conference on Learning Representations. 2024
[2] Peng, Zhiliang, et al. "Kosmos-2: Grounding multimodal large language models to the world." arXiv preprint arXiv:2306.14824 (2023).
[3] Xiao, Linhui, et al. "CLIP-VG: Self-paced Curriculum Adapting of CLIP for Visual Grounding." IEEE Transactions on Multimedia (2023).
[4] Chen, Gongwei, et al. "Lion: Empowering multimodal large language model with dual-level visual knowledge." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: see weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer for the positive feedback and the constructive comment on our work! Regarding the concerns from the reviewer zLHv, we provide the responses as follows:
>**W1: The routine of retraining models by preparing new datasets, as described in this paper, can be considered somewhat old-fashioned.**
- No, our work introduces a novel and meaningful task setting, instruction-guided visual masking, and finds that all existing methods and data are insufficient for training models effectively in this new context. Thus, constructing suitable data is the most straightforward, effective, and crucial approach to solving this problem, not outdated.
- Moreover, we also introduced a DWSL training objective, which is essential for the IVM model's success. Without DWSL, Figure 9 demonstrates that the naive supervised learning fails miserably when training on the IVM-Mix-1M data.
>**W2: The technical insight of this paper is relatively weak, and no strong novelty technical methods are proposed.**
- No, as noted above, the instruction-guided visual masking task offers a new perspective and approach to improving multimodal instruction following abilities.
- Also, the DWSL approach provides an effective solution for leveraging mixed-quality data to train a good model.
>**Q1: The routine of retraining models by preparing new datasets can be considered somewhat old-fashioned. Recent literature has proposed similar grounding datasets, such as Ferret (GRIT)[1] and Kosmos-2 (GRIT)[2]. However, these two works are not cited. There should be some new discussion on motivation about the dataset in this paper.**
- Thanks for bringing these insightful works to our attention! We will include all these works in future versions of our work. Here we provide a brief discussion of their core differences from our work:
- The GRIT dataset is fundamentally different from our IVM-Mix-1M dataset in several key aspects:
- `Label Granularity`: IVM aims to mask out instruction-irrelevant visual contents, necessitating pixel-level predictions instead of the simple instance-level detection in GRIT. This significantly raises the data demand.
- `Data Source and Construction Method`: Most data in the GRIT dataset originate from sources already equipped with grounding labels, which are then transformed into instruction-following data using LLMs akin to the first part of our proposed pipeline in Figure 4 (1). In contrast, much of the IVM-Mix-1M data derives from unlabeled data with a broader range of instructions, such as robot learning data.
>**Q2: Why does the instruction-related area between 90% and 100% in Figure 5 increase abnormally significantly? It is necessary to consider whether these data needs to be cleaned.**
- No, this data is essential and should not be cleaned, since many instructions require a comprehensive understanding of all the visual input without specific visual grounding. For example, all caption-style instructions like "Describe the image," "Where am I," or "Write a poem according to the image" should focus on the entire image rather than specific image areas (see Figure 7 for details). So, the model must learn to respond accurately to these complex instructions without losing contextual information.
>**Q3: Formulas 1 and 2 in the paper are somewhat deliberately mystifying... This process is somewhat similar to the pseudo-label curriculum learning proposed in CLIP-VG [3]. It is suggested to increase relevant discussion and citations.**
- Thanks for bringing this insightful work to our attention! We are very happy to discuss relevant works in future revisions! Here, we provide a brief discussion:
- Pseudo-label curriculum learning in CLIP-VG differs from our DWSL objective. DWSL can be regarded as a weighted regression objective, where all labels are pre-determined and fixed. In contrast, CLIP-VG is more like semi-supervised learning where the labels are gradually updated and thus may suffer from compounding errors in the iterative cycles.
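To illustrate the distinction, a weighted regression objective with fixed, pre-determined targets and weights can be sketched as follows (this is a generic illustration of the idea, not the exact DWSL objective from the paper):

```python
import numpy as np

def weighted_regression_loss(pred, target, weights):
    """Per-sample-weighted squared error with fixed, pre-determined weights.

    Unlike pseudo-label schemes, neither `target` nor `weights` is updated
    during training, so no compounding error accumulates across iterations.
    """
    pred, target, weights = map(np.asarray, (pred, target, weights))
    return float(np.sum(weights * (pred - target) ** 2) / np.sum(weights))

# e.g. high-quality human labels get weight 1.0, auto-generated ones less
loss = weighted_regression_loss([0.2, 0.8], [0.0, 1.0], [1.0, 0.5])
assert np.isclose(loss, 0.04)
```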
>**Q4: I have some confusion regarding Figure 6 of the paper, which requires modification...**
- We apologize for any confusion caused by the original figure and appreciate this constructive feedback! We have updated Figure 6 (modified) in the PDF attached with **General Response**.
>**Q5: A framework diagram of the inference model used in the downstream experiment should be drawn according to Figure 6, so as to illustrate how the IVM assists the model in inference.**
- Thanks for this helpful suggestion! We have added the inference pipeline in Figure a in the PDF attached with **General Response**. Very happy to hear any further comments!
>**Q6: I am curious about how this paper performs on the RefCOCO/+/g dataset after incorporating IVM...**
- Yes, indeed, the comparison with VG methods on the RefCOCO-series benchmarks can be found in Table 7 in Appendix E.1, where IVM demonstrates comparable results to SOTA specialist VG models and outperforms other generalist models.
- Here, we also compare IVM with the mentioned KOSMOS-2, LION, Ferret and Shikra[1]. Results show that, as a plug-and-play tool, IVM achieves strong results compared to other specially designed or trained baselines. Note that we report the result on the validation split of each dataset here and the '\*' denotes zero-shot performance.
|Methods|RefCOCO↑|RefCOCO+↑|RefCOCOg↑|
|---|---|---|---|
|KOSMOS-2*|52.32|45.48| 60.57|
|LION-4B|89.73|83.60|85.69|
|Shikra-7B[1]|87.01|81.60|82.27|
|Ferret-7B|87.49|80.78|83.93|
|IVM-7B(ours) |90.1|83.3|82.9|
- Thanks for mentioning these great works again! We will include these comparisons in future versions of our work.
>**Q7: Other writing issues...**
- Thanks for the efforts and these helpful comments! We apologize for the rudimentary errors found in the article. We will conduct a thorough review and correct them in future revisions!
[1]Shikra: Unleashing multimodal llm's referential dialogue magic, 2023
---
Rebuttal Comment 1.1:
Title: The author's response is polite, and I appreciate the author's efforts during this rebuttal process.
Comment: I noticed that the author did not fully respond to some of my questions, such as explaining Equations 1 and 2 in Q3. Additionally, the authors' results in Q6 appear insufficient compared to several baselines. Nevertheless, the authors' response is polite, and I appreciate their efforts during this rebuttal process. Therefore, I would like to raise my rating by 1 point to `weak accept`. I expect the authors to make corresponding revisions regarding the aforementioned issues in the final version.
---
Rebuttal 2:
Title: Thanks for increasing the score!
Comment: We really thank the reviewer for increasing the score, and sorry for the confusion in the rebuttal phase! Regarding the remaining concerns, we provide the following detailed responses.
>**Explanation about Eq. (1) and Eq. (2)**
- Due to the space limits in the rebuttal phase (6000 characters), we did not include very detailed discussions on Eq. (1) and Eq. (2) in the rebuttal. The introduction of Eq. (1) and Eq. (2) is inspired by recent offline IL work [2] that tries to learn a good model jointly from a high-quality near-expert dataset (human data) and a mixed-quality suboptimal dataset (machine-generated data), which typically consists of two stages:
- `Discriminator Training`. Eq. (1) tries to train a discriminator $d$ to distinguish between human data $\mathcal{D}_E$ and the machine-generated data $\mathcal{D}_D$. By doing so, the output of the discriminator $d$ becomes a confidence value for data quality ($d$ will output near 1 or near 0 if the data can be clearly identified as human or machine data; $d$ will output around 0.5 if the annotation quality is hard to judge), see Figure 9 (b) for discriminator output statistics.
- `Discriminator-weighted Supervised Learning`. After training the discriminator, the $d$ value can be an adaptive weight function to reprioritize the training annotations in Eq. (2), where high-quality annotations will be more fitted than low-quality ones. In this sense, the side-effect of bad annotations in the mixed-quality machine data can be largely filtered out, see Figure 9 (a) for detailed experimental evidence.
- Here, all annotations are pre-collected rather than gradually updated during training as in CLIP-VG. As a result, our method is less sensitive to compounding errors than CLIP-VG.
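For intuition, the two-stage procedure above can be sketched as a toy example. This is an illustrative sketch under simplifying assumptions, not the actual implementation: the 1-D features, the plain logistic discriminator, and the helper names `train_discriminator` and `dwsl_loss` are hypothetical.

```python
import numpy as np

def train_discriminator(x_human, x_machine, lr=0.5, steps=500):
    # Stage 1 (cf. Eq. (1)): train a logistic discriminator d to output
    # ~1 on human (near-expert) data and ~0 on machine-generated data.
    w, b = 0.0, 0.0
    x = np.concatenate([x_human, x_machine])
    y = np.concatenate([np.ones_like(x_human), np.zeros_like(x_machine)])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(w * x + b)))  # sigmoid
        w -= lr * np.mean((p - y) * x)          # cross-entropy gradient
        b -= lr * np.mean(p - y)
    return lambda v: 1.0 / (1.0 + np.exp(-(w * v + b)))

def dwsl_loss(d, x, per_sample_loss):
    # Stage 2 (cf. Eq. (2)): reweight each sample's supervised loss by the
    # discriminator confidence, so low-quality annotations contribute less.
    weights = d(x)
    return float(np.sum(weights * per_sample_loss) / np.sum(weights))
```

On separable toy data, the discriminator confidently identifies human-like samples, and the weighted objective downweights the noisier machine-like samples, mirroring the filtering effect described above.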
>**Additional discussions on Q6**
- We are really sorry for the confusion here. Due to time limits, we tested IVM only on the validation split rather than all RefCOCO splits. We are now working on IVM-13B by scaling IVM to more human and machine data, after which we will conduct more thorough experiments.
>**I expect the authors to make corresponding revisions regarding the aforementioned issues in the final version.**
- Sure! We will consider all these helpful suggestions when revising our paper!
Thanks to the reviewer again for the efforts and engagement in the discussion phase, and open to any further comments!
[2] Discriminator-weighted offline imitation learning from suboptimal demonstrations. ICML 2022
---
Rebuttal Comment 2.1:
Title: Thanks for the authors' responses. That's all.
Comment: Thanks for the authors' responses. That's all. | Summary: This paper introduces Instruction-guided Visual Masking (IVM), a versatile visual grounding model designed to improve alignment between textual instructions and specific image regions. It outlines the development of a visual masking data generation pipeline and a new learning technique, Discriminator Weighted Supervised Learning (DWSL), which prioritizes high-quality data samples to enhance performance on multimodal tasks.
Strengths: 1. This paper introduces instruction-guided visual masking (IVM). The IVM-enhanced multimodal models can focus on task-relevant image regions to better align with complex instructions. This implies that the model can become more sensitive to instructions.
2. Figures 2 and 3 are helpful to understand the method.
3. This paper has collected a richer and more complex visual grounding dataset.
Weaknesses: 1. The IVM model architecture shown in Figure 6 is not conducive to understanding the approach.
2. It is recommended to compare more methods on a multimodal benchmark.
Technical Quality: 4
Clarity: 2
Questions for Authors: 1. The paper mentions that the IVM-based model does not show significant gains on the GQA, SQA, and VQAv2 benchmarks. It suggests that these benchmarks may not depend heavily on grounding abilities, which raises some doubts for me. Intuitively, the results for simple visual inputs related to instructions should be better.
2. Additionally, I am concerned that the improved performance of the proposed method over comparative models might largely be attributed to the use of a larger Visual Grounding dataset. I am interested in seeing how other models would fare if they were trained or fine-tuned using the same dataset.
Confidence: 4
Soundness: 4
Presentation: 2
Contribution: 3
Limitations: Limitations and broader impact have been discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer for the positive feedback and the constructive comment on our work! Regarding the concerns from the reviewer EbuK, we provide the responses as follows:
>**W1: The IVM model architecture shown in Figure 6 is not conducive to understanding the approach.**
- Sorry for any confusion caused by the original figure! We have updated Figure 6 (modified) in the PDF attached with **General Response**. Very happy to hear any further comments!
>**W2: It is recommended to compare more methods on a multimodal benchmark**
- Thanks for this helpful suggestion! Given that we use the llava-1.5 model as our base model, we initially reported only the six most classic baselines from the same period in Table 2 to ensure a fair comparison. We will include additional baselines to more clearly demonstrate IVM's performance in future versions.
>**Q1: The paper mentions that the IVM-based model does not show significant gains on the GQA, SQA, and VQAv2 benchmarks. It suggests that these benchmarks may not depend heavily on grounding abilities, which raises some doubts for me. Intuitively, the results for simple visual inputs related to instructions should be better.**
- No. In scenarios with simple visual inputs, the bottleneck is no longer visual grounding but other abilities such as reasoning, since almost all visual content is instruction-relevant (see Figure 7 in our paper for details). Thus, the IVM-based model does not show significant gains on the simple GQA, SQA, and VQAv2 benchmarks, which do not require strong visual grounding abilities (we provide some examples in Figure b in the PDF attached with the General Response), but shows superior advancements on the challenging V* benchmark.
>**Q2: Additionally, I am concerned that the improved performance of the proposed method over comparative models might largely be attributed to the use of a larger Visual Grounding dataset. I am interested in seeing how other models would fare if they were trained or fine-tuned using the same dataset.**
- No, the DWSL training objective is also very crucial for the improvements. Figure 9 (a) demonstrates that DWSL is the only method that can effectively leverage both the high-quality but small human data and the large but mixed-quality machine data to achieve superior results. The simple supervised training baseline, however, fails miserably in training solely on machine or human data.
- Also, note that the supervised learning baseline degenerates to the previous method LISA [1] as we follow the architectural framework established by LISA. So simply finetuning baseline models using the same dataset is not enough to get good results. DWSL, however, can enjoy considerable improvements.
[1] LISA: Reasoning Segmentation via Large Language Model, CVPR 2024
---
Rebuttal 2:
Comment: I feel that my concerns were not fully addressed. The explanations provided for Q1 and Q2 seem intuitive and lack solid theoretical or experimental support. Given this, I decided to lower my rating to weak accept.
---
Rebuttal Comment 2.1:
Title: Thanks for the further comments
Comment: We thank the reviewer for the efforts and engagement in the discussion phase. Regarding the remaining concerns, we provide additional responses as follows:
>**Additional responses to Q1**
- For Q1, we analyze how the evaluation data is distributed across different `instruction-relevant visual ratios (IVR)` (the ratio of pixels preserved from the original image), as generated by IVM, on multiple multimodal benchmarks: the simple GQA, SQA, and VQAv2 (due to time limits, we evaluated on a 1/10 subset of the VQAv2 dataset; if the reviewer would like to see results on the full dataset, we will try our best to finish the experiments by the author rebuttal deadline) and the more complex V* and EgoThink benchmarks.
|Benchmark|<20% (IVR) |20%-40% (IVR)|40%-60% (IVR)|60%-80% (IVR)|>80% (IVR)|
|---|---|---|---|---|---|
|V*|100%|0%|0%|0%|0%|
|EgoThink| 67% | 14% | 4% | 3% | 12%|
|VQAv2 (40K samples)| 13% | 16% | 12% | 10% | 49%|
|GQA| 17% | 14% | 8% | 4% | 57% |
|SQA| 0% | 5%| 11% | 7% | 77% |
- These statistics demonstrate that most of the visual content in the GQA, SQA, and VQAv2 benchmarks is instruction-relevant (>80% IVR). Existing base MLLMs can easily focus on the correct image areas to follow the instructions, rather than being distracted by minor visual distractors (for example, see Figure b in the PDF in **General Response**).
- On the contrary, only a small fraction of the visual content in the challenging V* and EgoThink benchmarks is instruction-relevant (<20% IVR). In this case, MLLMs are more likely to be distracted by irrelevant visual content if their visual grounding ability is not strong. With IVM assistance, however, performance can be greatly enhanced via explicit surgical visual grounding.
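As a side note on how such statistics can be computed: the IVR of a sample is simply the fraction of pixels the predicted mask preserves, binned into the five ranges shown in the table above. A minimal sketch (the helper names here are hypothetical, not from the paper):

```python
import numpy as np

def instruction_relevant_visual_ratio(mask):
    # IVR: fraction of pixels a binary mask preserves (1 = keep, 0 = drop).
    return float(np.asarray(mask).mean())

def ivr_bucket(ratio):
    # Bin an IVR value into the five ranges used in the table above.
    for edge, label in [(0.2, "<20%"), (0.4, "20%-40%"),
                        (0.6, "40%-60%"), (0.8, "60%-80%")]:
        if ratio < edge:
            return label
    return ">80%"
```

Aggregating `ivr_bucket` over a benchmark's samples yields per-bin percentages like those reported in the table.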
>**Additional responses to Q2**
- For Q2, `LISA is the only comparable baseline in our setting`. Specifically, most MLLMs cannot be directly finetuned or trained in our setting as they are not primarily designed to predict dense image heatmaps given complex instructions that require strong reasoning ability. Among all these MLLMs, LISA is the most relevant work that can be fairly compared, as LISA is specifically tailored for reasoning segmentation, which shares some similarity with our setups.
- Indeed, we have empirically compared against the baseline method LISA (please see the Human+machine label (SL) baseline in Figure 9 (a) in our paper for details). Here, we summarize the results more directly (detailed comparisons across diverse data quantities and data components can be found in Figure 9 (a)).
|Training objective & data|IVM on IVM-Mix-1M (Ours)|LISA on IVM-Mix-1M|
|---|---|---|
|Improvements on V* benchmark|**+26.2**|+16.2|
`If the reviewer has any further detailed concerns or requires any other experimental supports, please do not hesitate to point them out. We will be more than happy to address them and improve the quality of our paper.`
---
Rebuttal 3:
Comment: Hi! The discussion is approaching its close, so we hope our additional responses have successfully addressed your remaining concerns. If not, please do not hesitate to point them out. We would really appreciate any further comments that can improve the quality of our paper! | Summary: This paper presents IVM (Instruction-guided Visual Masking). The key idea is to mask out the instruction-irrelevant regions in the given image. The trained model is tasked with masking out the irrelevant regions, encouraging the multimodal model to focus on the task-related visuals. Such a grounding-centric approach is effective in enhancing different multimodal models.
The paper also details their solution to create a large number of reliable pixel-level labels. The paper presents a MoE pipeline with various visual grounding models to collect reliable labels.
The paper further introduces DWSL for IVM training. DWSL helps IVM training to focus more on higher quality training data.
Strengths: - The proposed IVM is sound and effective. I found the problem formulation of generating heatmap interesting. The experiments clearly demonstrate the effectiveness of the proposed method.
- The MoE pipeline is well-developed and can be applied to both labeled and unlabeled data.
- The IVM architecture with discriminator training can effectively reduce the impact of low-quality data.
- The paper is well-written and solid.
Weaknesses: I don't find significant problems in the paper. One possible improvement for this paper is that it would be interesting to provide some typical failure cases of the proposed method.
Technical Quality: 3
Clarity: 3
Questions for Authors: see weakness
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer for the positive feedback and the constructive comment on our work! Regarding the concerns from the reviewer bwCu, we provide the responses as follows:
>**W1: I don't find significant problems in the paper. One possible improvement for this paper is that it would be interesting to provide some typical failure cases of the proposed method.**
- Thanks for this suggestion! Indeed, several failure cases along with detailed discussions can be found in Figure 14 in Appendix E.2. These examples are intended to deepen the understanding of the IVM model and inspire future improvements.
---
Rebuttal Comment 1.1:
Title: comment
Comment: Thank you. I have no further comment at this point.
---
Reply to Comment 1.1.1:
Title: Thanks for the responses!
Comment: Thanks to the reviewer again for the efforts and engagements, and open to any further comments!
---
Rebuttal 2:
Title: I am not convinced by reviewer bwCu's comments and rating, as it seems that the reviewer bwCu was aware of the author's identity and deliberately gave high marks.
Comment: After reading the comments of the other reviewers, I am not convinced by reviewer bwCu's comments and rating. To be specific,
- (a) The reviewer bwCu claims that his confidence in this paper is "absolutely certain about your assessment. You are very familiar with the related work." However, the reviewer bwCu did not offer any valuable or in-depth comments on this paper. If the reviewer bwCu were truly familiar with this field, he should be able to find the defects of this paper and make some valuable comments.
- (b) Reviewer bwCu gave nothing but praise for this paper;
- (c) Judging from the comments of the other three reviewers, there are still more or less problems in this paper. However, reviewer bwCu directly gave the extreme rating of "Strong Accept".
To sum up, it seems that the reviewer bwCu was aware of the author's identity and deliberately gave high marks.
---
Rebuttal Comment 2.1:
Comment: I agree with reviewer zLHv's perspective on the concerns regarding reviewer bwCu's comments and rating.
Compared to the comments from other reviewers, reviewer bwCu's comments seem overly positive and lack the professional suggestions expected.
This raises my curiosity about the reason behind the reviewer bwCu's strong accept rating. | Summary: For the purpose of precise instruction following performance in LLM, this paper proposes a versatile grounding model that is compatible with diverse multi-modal models. Leveraging the LLM, a visual masking data generation pipeline is built and 1 million image-instruction pairs are constructed. On top of it, an Instruction-Guided Visual Masking (IVM) model is trained to focus on task-relevant regions that align with complex instructions.
Strengths: 1. An IVM model is proposed to enhance multimodal instruction following via nuanced surgical visual grounding. Overall, this model is simple yet effective. Such a model can be seamlessly incorporated into a multimodal model to boost the performance of downstream tasks.
2. A dataset creation pipeline with a mixture of experts is carefully devised and along with it an 1-M dataset is built.
3. To effectively utilize the dataset, a discriminative weighted supervised learning training strategy is devised to select the high-quality dataset pairs.
4. Extensive experiments have been conducted to validate the effectiveness of the proposed approaches on various tasks, e.g. visual understanding, reasoning segmentation model and real robot control.
Weaknesses: 1. This work proposes a visual grounding model. However, it seems that the comparison with state-of-art visual grounding methods (VG) is missing. I understand this task involves slight differences with reasoning segmentation (RS) tasks. But will the VG task evaluation be more direct?
2. This approach relies heavily on manually labeled human data, without which the performance drops significantly. Therefore, it is more like a dataset creation work. By finetuning on this dataset, other compared works might also achieve similar or even better performance.
3. The derived framework is a little bit heavy, with an LLM and a heavy visual rendering head involved. There might be a more efficient network framework to achieve similar performance.
Technical Quality: 3
Clarity: 3
Questions for Authors: Refer to the questions in weakness section.
Other questions related to technical details.
1. How does this approach adapt to images with distinct image resolution? Is there any specifically designed relative position embedding?
2. Is there any plan to extend this work to more complicated robot actions? Pick and put are very simple actions. Real-world applications will involve more convoluted operations such as visual navigation or sequential manipulation.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer for the positive feedback and constructive comments on our work! Regarding the concerns from the reviewer 5SdR, we provide the responses as follows:
> **W1: It seems that the comparison with visual grounding methods (VG) is missing. Will the VG task evaluation be more direct?**
- Indeed, the comparison with VG methods on the RefCOCO-series benchmarks can be found in Table 7 in Appendix E.1, where IVM demonstrates competitive results to SOTA specialist VG models and outperforms other generalist models.
- Here, we provide more comparisons to other baselines that are specifically designed to ground multimodal large language models. Results show that, as a plug-and-play tool, IVM achieves strong results compared to other specially designed or trained baselines. Note that we report the result on the validation split of each dataset here and the '\*' denotes zero-shot performance.
| Methods | RefCOCO↑ | RefCOCO+↑ | RefCOCOg↑ |
| -------- | -------- | -------- | -------- |
| KOSMOS-2*[1]| 52.32 | 45.48| 60.57 |
| LION-4B[2]|89.73|83.60|85.69|
| Shikra-7B[3]| 87.01 | 81.60 | 82.27 |
| Ferret-7B[4]| 87.49 | 80.78 | 83.93 |
| IVM-7B(ours) | 90.1 |83.3 |82.9 |
> **W2: This approach relies heavily on the human manually labeled data... Therefore, it is more like a dataset creation work. Finetuning on this dataset, other compared works might also achieve similar or even better performance.**
- No, our work extends beyond mere dataset creation. Firstly, we introduce a novel setting, instruction-guided visual masking, under which all previous methods fail. To address this, we further developed a specialized IVM-Mix-1M dataset and introduced an innovative Discriminator Weighted Supervised Learning (DWSL) training objective, both of which are our primary contributions.
- Especially, the DWSL training objective is very crucial for the improvements. Figure 9(a) clearly demonstrates that DWSL is the only method that can effectively leverage both the high-quality but small human data and the large but mixed-quality machine data to achieve superior results. The simple supervised training baseline, however, fails miserably in training on machine, human, or joint data.
- Also, note that the supervised learning baseline degenerates to the previous method LISA[5] as we follow the model architectural framework established by LISA. So simply finetuning baseline models using the same dataset is not enough to get good results. DWSL, however, can enjoy considerable improvements.
> **W3: The derived framework is a little bit heavy, with LLM and heavy visual rendering head involved. There might be a more efficient network framework to achieve similar performance.**
- This limitation has been thoroughly discussed in Appendix A (Limitation and future work) of our paper. Exploring the training of smaller models and more direct methods to achieve comparable results remains a promising direction for future work.
- However, it’s crucial to highlight that the IVM-7B can significantly enhance the performance of larger models like GPT-4, offering a relatively effective solution.
>**Q1.1: How does this approach adapt to images with distinct image resolution?**
- We follow the official image transformation tool in previous work[5, 6] to standardize the input image resolution: 1024x1024 for SAM and 386x386 for MLLM (Multimodal Large Language Model).
>**Q1.2: Is there any specifically designed relative position embedding?**
- No, we follow the standard position embedding from [5, 6].
>**Q2: Is there any plan to extend this work to more complicated robot action...**
- Sure! This will be very promising and interesting!
- Also, note that although the tasks are short-horizon, our evaluation already poses a significant challenge for current robot models due to heavy adversarial human disturbances and visual distractors. Please see the supplementary materials for those interesting videos.
[1] Kosmos-2: Grounding Multimodal Large Language Models to the World, 2023
[2] Lion: Empowering multimodal large language model with dual-level visual knowledge, CVPR 2024
[3] Shikra: Unleashing Multimodal LLM’s Referential Dialogue Magic, 2023
[4] Ferret: Refer and Ground Anything Anywhere at Any Granularity, ICLR 2024
[5] LISA: Reasoning Segmentation via Large Language Model, CVPR 2024
[6] Segment Anything, ICCV 2023 | Rebuttal 1:
Rebuttal: ## **General Response**
We sincerely thank all the reviewers for the positive feedback and constructive comments on our work!
Here, we summarize the contents in the attached PDF.
1. For Reviewer EbuK (W1) & zLHv (Q4): We updated the Figure 6 (modified) according to the constructive comments.
2. For Reviewer zLHv (Q5): We added the IVM-enhanced MLLM inference pipeline in Figure a.
3. For Reviewer EbuK (Q1): We provided some examples from the simple GQA, SQA, and VQAv2 benchmarks.
Pdf: /pdf/7c9eece1947325a898879becbec230577cba9f22.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Sequential Decision Making with Expert Demonstrations under Unobserved Heterogeneity | Accept (poster) | Summary: The paper studies imitation learning in a setting where experts have contextual information not available to the imitator. The imitator sees state-action (no reward) trajectories from the expert data. The paper proposes to use the expert data as a prior and then do meta-RL to learn a policy.
Strengths: - The problem setting considered is important and there isn't too much prior work in it.
- The proposed method seems sensible.
- The paper provides theoretical and empirical justification for the proposed method.
Weaknesses: - The empirical evaluation is limited to simple domains and it is not discussed why that is the case. It would be good to include an explanation on why this was not scaled up to some more complex environments.
- Comparison to ExPLORe: if there is a way to use the same actor-critic algorithm for the proposed method and the comparison that would strengthen the paper. As the text says, now it is not clear whether the difference comes from the online RL part or from the difference in how they use the expert data.
- ADVISOR [1] targets the same problem setting. Differences to that algorithm should be discussed. And maybe a pointer should be given on why their method is demonstrated in high-dimensional domains and this isn't. Not saying that this has to be evaluated in high-dimensional domains, but that it is not immediately clear why this isn't.
- Figure 4
- Explain the pane meanings explicitly in the caption.
- Explain the axes explicitly in the caption. I.e., are these learning curves?
- How come the naive baseline does worse in the low entropy setting than in the high entropy one?
[1] Luca Weihs, Unnat Jain, Iou-Jen Liu, Jordi Salvador, Svetlana Lazebnik, Aniruddha Kembhavi, and Alex Schwing. Bridging the imitation gap by adaptive insubordination. Advances in Neural Information Processing Systems, 34:19134–19146, 2021.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Why is human-in-the-loop highlighted as a limitation?
- Can you give a more fleshed out example of how the problem of unobserved contextual information might appear in a realistic problem?
- Can you lay out an idea on how to scale the method?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Limitations are discussed but some of the limitations highlighted seem a little strange. See questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful and in-depth feedback. We are encouraged that you find the problem setting important, and the proposed method sensible with theoretical and empirical justification. We address your questions below.
> The empirical evaluation is limited to simple domains and it is not discussed why that is the case...
Please see the General Response for our detailed answer.
> Comparison to ExPLORe: if there is a way to use the same actor-critic algorithm for the proposed method ...
- Thank you for mentioning this interesting point. As we have discussed in the paper, we agree with you that some of the results might be due to the difference between the base actor-critic method in ExPLORe and the bootstrapped DQN method used in our approach. However, note that in Table 1, on the left side (Fixed # Hazard = 9), under $\beta = 0.1$, the ExPLORe method achieves near-optimal results while our approaches suffer large regrets. This particular example shows that, at least in the Frozen Lake environment, the baseline actor-critic method is superior to the baseline bootstrapped DQN since the expert data is relatively uninformative for $\beta = 0.1$, i.e., the actions taken are almost random. As we increase the value of $\beta$ and the expert data gets more informative, our proposed approaches outperform the naive bootstrapped DQN, showing that the main performance gain comes from the proposed method of learning the priors and not from the base bootstrapped DQN algorithm. We leave a variant of ExPerior that relies on actor-critic methods to future work.
> ADVISOR [1] targets the same problem setting. Differences to that algorithm should be discussed ...
The main difference to ADVISOR is that they assume having access to the expert policy $\pi^{\text{teach}}$, i.e., they have a human-in-the-loop setting during training and can query the expert for new states. On the other hand, our approach only assumes a finite offline data from the expert trajectories. In other words, there is no expert demonstration for new states that the learner can encounter. We will elaborate on the difference in the revised version. Please see our General Response for the high-dimensional domains.
> Figure 4: (1) Explain the pane meanings explicitly in the caption. (2) Explain the axes explicitly in the caption. I.e., are these learning curves?
Thank you for the suggestions. We will include the meaning of each panel (low-entropy to high-entropy setting) in the caption. The y-axis refers to the expected reward for some fixed number of evaluation episodes at each training episode $t \leq T$. We will include that in the caption as well.
> Figure 4: How come the naive baseline does worse in the low entropy setting than in the high entropy one?
This is an astute observation. The reason is that as the entropy of the tasks gets higher, i.e., the goal location gets uniformly distributed over all the columns, the task will intrinsically get easier. Since the agent always starts from the top-left corner, it is much harder for it to explore and reach the bottom-right corner compared to bottom-middle or bottom-left cells. This is why, on average, the naive baseline gets higher rewards in the high-entropy case compared to the low-entropy setting (the goal column always being in the right corner).
> Q. Why is human-in-the-loop highlighted as a limitation?
By human-in-the-loop, we mean the requirement of the algorithm to query experts during training or inference. Our approach, on the other hand, only requires an offline dataset of expert demonstrations, which is a strictly simpler requirement. It might be difficult to access an online expert (e.g., a clinical doctor) during training. This is what we mean by a limitation. Having said that, there are benefits in human-centric AI and looping humans in decision-making that we haven’t discussed in our paper. We will elaborate on this point in the paper’s Conclusion.
> Q. Can you give a more fleshed out example of how the problem of unobserved contextual information might appear in a realistic problem?
- In clinical applications, objective descriptions of a patient are often documented through electronic health records (EHR). However, the subjective opinion of a doctor, formed after conversations with the patient and observation of visual clues or other information such as lifestyle, might lead to potentially different diagnoses. Even with clinical notes added to the EHR data, it is fundamentally challenging to capture all the observations made by doctors that influence their decisions. Another example is large language models, which are trained on datasets that do not capture contextual information. For example, suppose we ask an expert about the location of a historical building. If the expert remembers it as part of her internal knowledge, she will answer immediately. Training LLMs on such datasets might result in hallucination, as the model does not capture the contextual information: when asked about the location of another historical building, the model might think like an expert and generate a response based on its internal knowledge, which can be wrong. What you might instead expect from such a model is to search the internet before responding. Please see [1] and [2] (Figure 7) for more detail.
> Q. Can you lay out an idea on how to scale the method?
Please see the General Response for our detailed answer.
We hope our response addressed your comments. If you have any additional questions that could help you improve your evaluation of the work, please feel free to let us know.
===================================
[1] Johnson, Daniel D., et al. "Experts Don't Cheat: Learning What You Don't Know By Predicting Pairs." arXiv preprint arXiv:2402.08733 (2024).
[2] Wang, Kuan, et al. "Adapting LLM Agents with Universal Feedback in Communication." ICML 2024 Workshop on Foundation Models in the Wild.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. Most of my concerns have been addressed. I'm raising my score. | Summary: The paper proposes a 2-stage learning method for sequential decision making, wherein step 1 involves leaning a prior from expert demonstrations and step 2 is online RL where that learnt prior is used. The main problem being addressed is that of heterogeneity of data and/or contexts provided by the expert and those encountered by the online learning agent.
Strengths: - Relevant problem: Finding useful, scalable and efficient ways to leverage offline expert demonstrations for online decision making is of key interest to the community.
- The proposed method combines imitation learning with online model-based RL, paying attention to the issue of heterogeneity between the available contexts that gave rise to the offline data, that might be unavailable during the online RL step.
- The problem is well-motivated and the approach is easy to understand conceptually; contributions are clearly outlined.
- Presentation (writing, figures etc) is mostly clear, with some minor comments included in the below section.
Weaknesses: Clarity
- Notation: even with the notation paragraph, I find it quite difficult to follow. E.g. I think some `t` superscripts are missing (the definition of the q-function); should $\mathcal{C}$ be part of the definition of the MDP?
- Terminology: I'm a little confused with the terminology "parametric" vs "non-parametric". Unless I'm missing something, the latter is still parametric as you are optimising over a family of distributions?
- Typo: in figure 1, annotation below "Step 3" is repeated, I guess one should be $\mu_{\theta^\star}$.
- Figure 1: clarity can be improved if you write down what is being learnt and what is being updated.
- I found the algorithm box quite useful, will be great to have it in the main paper (e.g. next to Figure 1).
Max-entropy prior:
- Clearly, if $\mu_0 \in \mathcal{P}(\epsilon)$, then we get a trivial solution, $\mu_{ME}=\mu_0$ (usually Uniform), so do you need an assumption on $\mathcal{P}(\epsilon)$ to get a non-trivial solution?
- If $\mathcal{C}$ is unbounded, then you can't really define a uniform distribution on it, how do you deal with that?
- Can you handle improper priors?
Technical Quality: 3
Clarity: 2
Questions for Authors: In addition to those raised in the Limitations section:
1. Do you have an intuition on how badly "misspecified" the learnt expert prior can get until learning for the online agent becomes very difficult? Is there a way to detect a poorly specified prior, and potentially reverting to an alternative (e.g. uninformative) after a few steps online?
2. For the "parametric" ExPrior - why score gradients? Have you thought about or tried continuous relaxation when the problem is discrete?
3. On line 112 "Note that since the task variable is unobservable, the learner's policy will not depend on it" - does it depend implicitly though, e.g. can the learner infer and integrate over that inferred task distribution?
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: There are no specific negative societal impacts that need to be discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful feedback. We are pleased that you consider the paper well-motivated, relevant, and with a conceptually easy-to-understand approach. We have addressed your questions below.
> Notation: even with the notation paragraph, I find it quite difficult to follow...?
We assume the states are partitioned by the horizon $[H]$, i.e., each state $s \in \mathcal{S}$ is only reachable in one specific horizon $h \in [H]$. This can be achieved by re-defining the state space as $\mathcal{S} \times H$. This way, none of the value functions or Q-functions need to depend on the horizon. The superscript $t$, however, corresponds to the episode number and not the horizon. The value/Q-functions are defined independently of the episode. The only variable that gets updated over episodes is the learned policy $\pi^t$. Also, we agree that $\mathcal{C}$ should be part of the definition of the MDP. We will include these comments in the notation page of the revised manuscript, as there will be more space (10 pages instead of 9).
> Terminology: I'm a little confused with the terminology "parametric" vs "non-parametric"...
Looking at the max-entropy expert prior in Proposition 1 (line 189), you can see that.
$$\mu\_{\text{ME}}(c) = \lim\_{k \to \infty} \frac{\exp \left( \mathbf{m}(c)^\top \boldsymbol{\alpha}_{k} \right)}{\mathbb{E}\_{c'\sim \mu\_{0}} \left[ \exp \left( \mathbf{m}(c')^\top \boldsymbol{\alpha}\_{k} \right) \right]}$$
where each $\boldsymbol{\alpha}\_k$ is an $N$-dimensional parameter with $N$ being the number of demonstrations. In other words, the number of parameters required to model the max-entropy prior grows with the number of samples. This is what we mean by a "non-parametric" approach, in contrast to "parametric" models with a fixed set of learnable parameters. Moreover, we do not assume any specific form for the family of distributions in $\mathcal{P}(\epsilon)$, except belonging to $L^1(\mathcal{C}, \mu\_0)$ as defined in lines 577-583.
> Typo: in figure 1, annotation below "Step 3" is repeated, I guess one should be \mu_{\theta^\star}.
This is a mistake, and we will fix it in the revised version. Thank you for pointing this out.
> Figure 1: clarity can be improved if you write down what is being learnt and what is being updated.
$\mu\_{\theta^\star}$, $\mu\_{\text{ME}}$ are learned from data. The posterior distributions $\mu\_{\theta^\star}(\cdot | \text{history})$, $\mu\_{\text{ME}}(\cdot | \text{history})$ are being updated, and the policy $\pi\_c(\cdot | s)$ is learned/calculated given the sampled Q-functions. We will differentiate between those with different colours in Figure 1.
> I found the algorithm box quite useful, will be great to have it in the main paper (e.g. next to Figure 1).
We moved algorithm boxes 1 and 2 to the appendix solely for the lack of space. If the paper gets accepted, we will include them in the main paper. (The camera-ready version allows for one more page.)
> Clearly, if $\mu\_0 \in \mathcal{P}(\epsilon)$, then we get a trivial solution, $\mu\_{\text{ME}} = \mu\_0$ (usually Uniform) ...?
Your assessment is correct. If the uniform distribution is already in the set of feasible prior functions $\mathcal{P}(\epsilon)$, then $\mu\_{\text{ME}} = \mu\_0$. However, this is not a trivial solution since the uniform distribution will maximize the marginal likelihood of the data and is a plausible distribution. This scenario can happen if the effect of unobserved confounding is extreme, i.e., the unobserved factors can change the optimal policies arbitrarily and provide no information on the task distribution. For example, consider a multi-armed bandit setting with $K$ actions, where there are $K$ task types, each with a different arm as the optimal action. The expert demonstration data will contain each of the arms with equal probability. In other words, the expert data provides no information, and the best thing we can do is to have a non-informative prior like the uniform prior distribution.
> If $\mathcal{C}$ is unbounded, then you can't really define a uniform distribution on it ...? Can you handle improper priors?
Note that our result does not force $\mu\_0$ to be a uniform distribution or $\mathcal{C}$ to be bounded. It only requires $\mathcal{C}$ to be a measurable set with an existing measure $\mu\_0$. In the case of unbounded $\mathcal{C}$, one can choose $\mu\_0$ to be a Gaussian distribution. However, our current results, like Lemma 5 on page 14, require the priors to be in $L^1\left(\mathcal{C}, \mu\_0\right)$, i.e., $\mathcal{C}$ should have a finite measure under the feasible priors. For that reason, the current theoretical results do not handle improper priors.
> Q. Do you have an intuition on how badly "misspecified" the learnt expert prior can get ...?
This is the exact reason we have provided the non-parametric max-entropy approach. Table 3 (Page 22) shows that misspecified priors result in large regrets. In cases where the practitioner is not quite sure about the parametric form of the prior, we suggest using the non-parametric max-entropy approach.
> Q. For the "parametric" ExPrior - why score gradients? Have you thought about or tried continuous relaxation when the problem is discrete?
In the parametric ExPerior, we do not assume the parameter set $\boldsymbol{\Theta}$ is necessarily continuous, as long as we can solve the optimization in line 175 (potentially without gradient-based methods).
> Q. On line 112 "Note that since the task variable is unobservable ...
We meant explicit dependence here. The learner policy only depends on the history of interactions and the current state. As you mentioned, one can use the history to infer/integrate the task distribution. We will clarify more on this.
We hope our comment addresses your questions. Please let us know if we can answer any further questions that might improve your assessment of the work. | Summary: This paper attempts to leverage offline demonstrations to speed up online learning under unobserved heterogeneity, and unlike zero-shot meta reinforcement learning, the proposed ExPerior does not require the task labels (reward labels). ExPerior utilizes expert data to establish an informative prior distribution for online exploration. The experimental results on multi-armed bandits and MDPs showcase the superiority against current baselines.
Strengths: The setting proposed in this paper is realistic and significant. To solve the problem that the learner faces uncertainty and variability in task parameters, the paper proposes two approaches, parametric and non-parametric, and provides sufficient theoretical basis. Experiments under multi-armed bandits and MDPs also prove its effectiveness.
Weaknesses: 1. Just as stated at the end of the paper, the experiments conducted in the paper are limited, and the experiments under MDPs even made the assumption that the transition functions are invariant to the task variables, which makes the experimental environment overly simple. In more difficult experimental benchmarks such as MuJoCo, it is questionable whether the proposed method can show the expected effect.
2. Due to the deficiency mentioned in the first point just now, the paper also cannot further compare with modern offline meta RL algorithms (such as PEARL, FOCAL, CORRO) under complex MDPs, making the baselines seem a bit scarce.
3. The introduction of related concepts in the paper is slightly insufficient. Besides, there is a clerical error in the upper right part of Fig. 1: the first \mu_{\rm{ME}} should be \mu_{\theta^*}.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What does "Heterogeneity" mentioned in the paper represent? Does it refer to the differences between the transition functions and reward functions in MDPs? The paper should explain this.
2. The paper mentions that the off-policy meta RL method PEARL requires task labels. Does the task label here refer to the reward label? In fact, the trajectories required by PEARL only contain (s, a, r, s') information.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful feedback. We are encouraged that you find the paper to have a realistic and significant setting and sufficient theoretical and experimental results. We address your questions below.
> Just as stated at the end of the paper, the experiments conducted in the paper are limited, and the experiments under MDPs even made the assumption that the transition functions are invariant to the task variables, which makes the experimental environment overly simple. In more difficult experimental benchmarks such as MuJoCo, it is questionable whether the proposed method can show the expected effect.
We agree that the current experiments do not consider complex high-dimensional settings. Please see the General Response we provided for our detailed plan to scale our approach and include such experiments in the revised version.
> Due to the deficiency mentioned in the first point just now, the paper also cannot further compare with modern offline meta RL algorithms (such as PEARL, FOCAL, CORRO) under complex MDPs, making the baselines seem a bit scarce.
Thank you for mentioning the works FOCAL [1] and CORRO [2]. We will cite these in the related work section of the revised paper. However, independent of the complexity of the MDPs, the mentioned baselines are not immediately applicable to our problem setup. PEARL, FOCAL, and CORRO all require reward signals to be available in the offline data. Moreover, FOCAL and CORRO assume the offline data is diverse enough to learn the optimal policies solely from the offline data and do not continue learning during the online phase. On the other hand, our setting only assumes offline expert demonstration data (with no reward) and requires using the reward signals during the online phase. It is for this reason, rather than the complexity of the MDPs, that we did not include those baselines in our experiments.
> The introduction of related concepts in the paper is slightly insufficient. Besides, there is a clerical error in the upper right part of Fig. 1: the first \mu_{\rm{ME}} should be \mu_{\theta^*}.
Could you please elaborate on which parts of the introduced concepts are slightly insufficient so we can explain them in more detail? Also, thank you for finding the typo in $\mu_{\rm{ME}}$. We will fix it in the updated version.
> Q: What does "Heterogeneity" mentioned in the paper represent? Does it refer to the differences between the transition functions and reward functions in MDPs? The paper should explain this.
Yes, your understanding is accurate. We will elaborate more on what we mean by "Heterogeneity" in the introduction.
> Q: The paper mentions that the off-policy meta RL method PEARL requires task labels. Does the task label here refer to the reward label? In fact, the trajectories required by PEARL only contain (s, a, r, s') information.
Thank you for pointing this out. This confusion is probably due to the paper's phrasing. Quoting the paper (lines 85-86): "Similarly, Zhou et al. [40] and Rakelly et al. [41] require the task label and reward labels.". This should have instead been, "Similarly, Zhou et al. [40] and Rakelly et al. [41] require the task label and reward labels, *respectively*." PEARL requires offline reward information, which is not accessible in our setting. We will fix this confusion in the revised version.
We hope our response addressed your comments. Should you have any additional questions that could assist in improving your evaluation of the work, please feel free to let us know.
===================================
[1] Li, Lanqing, Rui Yang, and Dijun Luo. "Focal: Efficient fully-offline meta-reinforcement learning via distance metric learning and behavior regularization." arXiv preprint arXiv:2010.01112 (2020).
[2] Yuan, Haoqi, and Zongqing Lu. "Robust task representations for offline meta-reinforcement learning via contrastive learning." International Conference on Machine Learning. PMLR, 2022.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response, I maintain my original score. | Summary: This paper studies online transfer learning in an unknown Markov decision process. The learner has access to demonstration trajectories generated by an expert, who has access to an independent context variable. The expert can observe the values of the latent context to make a decision, but such information is unobserved to the learner. The authors propose a learning strategy that allows the learner to accelerate its online learning process by leveraging the confounded expert's trajectories. More specifically, the expert selects values of the action following a specific parametric family of softmax policies. Using this parametric information, the learner can extrapolate an informative prior about the underlying system dynamics (i.e., transition function and reward function) from the confounded trajectories and utilize the extrapolated prior to improve future learning. The authors then derive a Bayesian regret bound for the proposed method. Simulation results support the proposed approach.
Strengths: - The paper is well-organized and clearly written. Simulations are comprehensive, supporting the proposed approach.
- Theoretical regret analysis is provided. I have not read through all the proofs for Theorem 2, but the result seems reasonable.
Weaknesses: - The proposed method requires the expert to follow a specific form of policy to generate the demonstration data. This parametric restriction is rather strong and may not necessarily hold in many applications.
- It is unclear to see from Theorem 2 under which condition the expert's demonstration could accelerate the online learning process. It would be appreciated if the authors could further elaborate on this point.
- Some related references are missing. Extrapolating knowledge from confounded data is one of the main problems in causal inference. There is a growing line of work studying learning from confounded demonstration data in canonical RL tasks, including:
1. Kallus, Nathan, and Angela Zhou. "Confounding-robust policy improvement." Advances in neural information processing systems 31 (2018).
2. Zhang, Junzhe, and Elias Bareinboim. "Near-optimal reinforcement learning in dynamic treatment regimes." Advances in Neural Information Processing Systems 32 (2019).
Technical Quality: 3
Clarity: 3
Questions for Authors: - Could the regret bound in Theorem 2 outperform standard bandit regret when the expert's demonstration is benign? Could the authors elaborate on this?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have adequately addressed the limitations of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you so much for your helpful comments. We are pleased that you find the paper clearly written, well-organized, and with comprehensive simulations. We address your questions in the following.
> The proposed method requires the expert to follow a specific form of policy to generate the demonstration data. This parametric restriction is rather strong and may not necessarily hold in many applications.
We agree with you that Assumption 1, about the expert policy being noisily rational, may not hold in practice. Appendix C.1, Table 2 on Page 22 considers misspecified expert models. In particular, we looked into “optimal” experts, “noisily rational” experts with misspecified values of $\beta$, and a new type of experts we called “random-optimal,” who take completely random actions with some fixed probability and optimal actions otherwise. In all instances, our methods, which rely on the noisily rational experts, achieve near-optimal results. This observation justifies the use of Assumption 1, even in instances where it does not necessarily hold. We would appreciate it if the reviewer has any suggestions for including other types of misspecification in Table 2.
> It is unclear to see from Theorem 2 under which condition the expert's demonstration could accelerate the online learning process. It would be appreciated if the authors could further elaborate on this point.
As discussed in lines 302-312, as well as Figure 3 (b), the extra term in the regret bound of Theorem 2, i.e.,
$$\sum\_{a, a’ \in \mathcal{A}, a \neq a’} \sqrt{\frac{p(a)}{p(a) + p(a’)} \left(1 - \frac{p(a)}{p(a) + p(a’)}\right)} \left[\sqrt{p(a)} + \sqrt{p(a’)}\right]$$
closely resembles the variance (or the entropy in the case of $K = 2$) of the optimal actions under the distribution of unobserved factors. In other words, **if the variance of the optimal actions is small, i.e., the effect of unobserved factors is negligible, the regret will be close to zero**. We will elaborate more on this point in the revised version of the paper.
> Some related references are missing. Extrapolating knowledge from confounded data is one of the main problems in causal inference. There is a growing line of work studying learning from confounded demonstration data in canonical RL tasks ...
Thank you for providing the missing related references. We will include those papers in the revised manuscript. We emphasize that the first work, Kallus et al. (2018), only considers the offline setting, while the latter assumes the availability of rewards in the offline observational data and does not assume the demonstrations are generated by experts. Our work, on the other hand, integrates **offline expert** demonstration data with **online** reinforcement learning.
> Q: Could the regret bound in Theorem 2 outperform standard bandit regret when the expert's demonstration is benign? Could the authors elaborate on this?
Yes, as discussed above and in lines 302-312 of the manuscript, the standard bandit regret is bounded by $\tilde{\mathcal{O}}\left(\sqrt{KT}\right)$ for $K$ actions and $T$ episodes. The bound in Theorem 2, however, is
$$\tilde{\mathcal{O}}\left(\sqrt{T} \cdot \underbrace{\sum\_{a, a’ \in \mathcal{A}, a \neq a’} \sqrt{\frac{p(a)}{p(a) + p(a’)} \left(1 - \frac{p(a)}{p(a) + p(a’)}\right)} \left[\sqrt{p(a)} + \sqrt{p(a’)}\right]}\_{\text{term} I}\right)$$
When the expert’s demonstration is benign, e.g., the variance of the optimal actions under unobserved confounding is small, the second term (term I) will be smaller than $\sqrt{K}$. As an example, consider a $K$-armed case, where a given fixed arm $a_1$ is optimal regardless of the unobserved heterogeneity. Then, for all $a \neq a_1$, we have $p(a) = 0$, resulting in term I to be zero. This is a case when the expert data is extremely informative and hence gives us zero regret.
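As a numerical sanity check (an illustrative sketch added here, not code from the paper; the helper name `term_I` is our own), the function below evaluates term I from the bound above for a given distribution $p$ over optimal arms and reproduces the two regimes discussed: an always-optimal fixed arm gives zero, while a uniform distribution over arms makes the term grow with $K$.

```python
import math

def term_I(p):
    """Evaluate term I for a dict p mapping arm -> probability that the arm
    is optimal under the unobserved heterogeneity. Pairs with
    p(a) + p(a') = 0 contribute nothing to the sum."""
    total = 0.0
    arms = list(p)
    for a in arms:
        for b in arms:
            if a == b:
                continue
            s = p[a] + p[b]
            if s == 0:
                continue
            q = p[a] / s
            total += math.sqrt(q * (1 - q)) * (math.sqrt(p[a]) + math.sqrt(p[b]))
    return total

# One arm always optimal: expert data is maximally informative.
print(term_I({0: 1.0, 1: 0.0, 2: 0.0}))  # 0.0

# Uniform over K = 4 arms: expert data is uninformative; 12 ordered pairs,
# each contributing exactly 0.5.
print(term_I({a: 1 / 4 for a in range(4)}))  # 6.0
```

This matches the worked $K$-armed example in the text: whenever $p(a) = 0$ for all $a \neq a_1$, every pair either vanishes or has $q \in \{0, 1\}$, so term I is exactly zero.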
We hope our comment addresses your questions. Please let us know if we can answer any further questions that might improve your assessment of the work.
*1. The big-O notation $\tilde{\mathcal{O}}$ in this comment ignores the log terms.*
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' response. They have answered my questions. Theorem 2 states that the proposed algorithm is able to achieve performance improvement when the expert always picks the optimal action in demonstrations. This is somewhat expected since, in this case, the latent is not effective, and the demonstration data could be directly transferred. On the other hand, this is quite an extreme condition that might be hard to reach in practical conditions. I will keep my current score. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their helpful and thorough feedback. We are happy that they found our work clearly written, well-organized, and with comprehensive simulations (Reviewer ${\color{red} \text{11BA}}$), significant with a realistic setting and sufficient theoretical basis (${\color{blue} \text{hfW9}}$), well-motivated, easy to understand, with clearly-outlined contributions (${\color{orange} \text{2ZF5}}$), and sensible with an important setting and theoretical and empirical justification (${\color{ForestGreen} \text{FGqG}}$). We discuss the common concern on the simplified experiments and ongoing plans to scale our method, ExPerior, to high-dimensional settings. We answer each reviewer's specific questions and concerns separately.
**Limited Empirical Evaluation (${\color{blue} \text{hfW9}}$, ${\color{ForestGreen} \text{FGqG}}$)**
1. First, we emphasize that we have run our proposed methods on bandits, MDPs, and POMDPs, where, in practically all instances, our proposed algorithm outperforms the baselines. To elaborate, within each setting we study:
a. Multi-armed bandits (with various prior functions, misspecified expert models, misspecified prior functions, and under both exact posterior sampling and stochastic gradient Langevin dynamics),
b. Markov Decision Processes (MDPs) in the Deep Sea and Frozen Lake environments, and
c. Partially Observable MDPs (POMDPs) in the Frozen Lake environment.
To our knowledge, most papers in the literature only consider a subset of those settings (bandits, MDPs, or POMDPs) for their experiments, and we view the breadth of applicability of our approach to be a considerable strength of ExPerior.
2. The environments used in our experiments, i.e., *Deep Sea*, *Frozen Lake*, or similar grid environments, are standard in the literature, particularly for imitation learning from experts under unobserved confounding [1,2,3,4].
We are currently expanding the framework to study its performance on higher-dimensional problems and more complex environments such as MuJoCo, and we plan to include these experiments in the revised version.
**Scaling the Method (${\color{ForestGreen} \text{FGqG}}$)**
ExPerior derives a prior distribution over Q-functions from expert demonstrations and initializes the Q-networks with parameters sampled from this prior. For high-dimensional settings, the Q-networks will have a large number of parameters; to handle this, we intend to:
1. Pre-train a representation network using self-supervised learning on the offline expert data, e.g., by pre-training a convolutional module to capture the meaningful state representation from images of the Frozen Lake environment (instead of using the grid representation) or the MuJoCo environment.
2. Construct the Q-network by freezing the pre-trained module and adding a learnable linear layer on top of it so it will capture different Q-functions for different values of unobserved factors. This enables us to learn and sample from a prior distribution defined over a simple linear layer instead of the entire Q-network.
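A minimal sketch of this two-step plan (all names, shapes, and the random-projection stand-in for the pre-trained module are illustrative assumptions added here, not code from the paper; an isotropic Gaussian prior over the head is assumed purely for the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, FEAT_DIM, N_ACTIONS = 4, 8, 3

# Stand-in for the frozen, pre-trained representation network; the actual
# plan would use e.g. a self-supervised convolutional module trained on the
# offline expert data.
W_frozen = rng.normal(size=(FEAT_DIM, STATE_DIM))

def phi(state):
    """Frozen features -- never updated during the online phase."""
    return np.tanh(W_frozen @ state)

# The learned prior lives only over the small linear head: each sampled head
# w defines one Q-function, Q(s, a) = w[a] . phi(s), so sampling from the
# prior over Q-functions reduces to sampling a (N_ACTIONS x FEAT_DIM) matrix.
prior_mean = np.zeros((N_ACTIONS, FEAT_DIM))
prior_std = 1.0

def sample_q_function():
    w = prior_mean + prior_std * rng.normal(size=prior_mean.shape)
    return lambda s: w @ phi(s)  # vector of Q-values, one per action

# Posterior-sampling-style acting: draw a Q-function, act greedily w.r.t. it.
state = rng.normal(size=STATE_DIM)
q = sample_q_function()
action = int(np.argmax(q(state)))
```

The design choice this illustrates is that the expensive part (the representation) is learned once offline and frozen, while the prior and posterior updates only ever touch the low-dimensional linear head.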
===================================
[1] Warrington, Andrew, et al. "Robust asymmetric learning in pomdps." International Conference on Machine Learning. PMLR, 2021.
[2] Shenfeld, Idan, et al. "Tgrl: An algorithm for teacher guided reinforcement learning." International Conference on Machine Learning. PMLR, 2023.
[3] Hao, Botao, et al. "Bridging imitation and online reinforcement learning: An optimistic tale." arXiv preprint arXiv:2303.11369 (2023).
[4] Johnson, Daniel D., et al. "Experts Don't Cheat: Learning What You Don't Know By Predicting Pairs." arXiv preprint arXiv:2402.08733 (2024). | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
UnSeg: One Universal Unlearnable Example Generator is Enough against All Image Segmentation | Accept (poster) | Summary: This paper aims to design a universal unlearnable example generator for image segmentation tasks. Specifically, this paper aims to address three important factors for unlearnable examples in image segmentation: 1) data efficiency, 2) generation efficiency, and 3) transferability.
To design such a model, the author makes use of the pretrained SAM model to formulate a min-min optimization problem involving two models: 1) a noise generation model based on the pretrained SAM model; 2) a surrogate model trained from scratch to minimize the training loss given the corrupted unlearnable image.
To this end, the author conducts comprehensive experiments for a diverse set of downstream tasks to verify the effectiveness of the proposed method.
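The min-min objective summarized above can be illustrated with a toy, self-contained analogue (a linear regression surrogate in place of SAM and segmentation losses; all names, dimensions, and hyperparameters below are illustrative, not from the paper): the inner step updates a bounded, per-example noise to *minimize* the surrogate's loss, so perturbed examples look "already learned", and the outer step trains the surrogate on the perturbed data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data standing in for (image, mask) pairs.
n, d, eps, lr = 64, 5, 0.5, 0.1
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d)

w = np.zeros(d)            # surrogate model, trained from scratch
delta = np.zeros((n, d))   # per-example error-minimizing ("unlearnable") noise

for _ in range(200):
    # Inner min: push the noise to minimize the surrogate's loss, removing
    # the training signal the example would otherwise provide.
    resid = (X + delta) @ w - y
    delta -= lr * resid[:, None] * w[None, :]
    delta = np.clip(delta, -eps, eps)   # keep the noise small / imperceptible
    # Outer min: train the surrogate on the perturbed data.
    resid = (X + delta) @ w - y
    w -= lr * (X + delta).T @ resid / n

loss_perturbed = np.mean(((X + delta) @ w - y) ** 2)  # driven toward zero
loss_clean = np.mean((X @ w - y) ** 2)                # typically remains larger
```

The gap between the two final losses is the point of the attack: the surrogate fits the perturbed set without ever having to learn the clean mapping.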
Strengths: The problem set is of practical meaning, especially in the current situation where private and copyright-protected data are widely used to train large-scale deep learning models. Previous methods to generate unlearnable examples are mainly designed for image classification tasks, which are not directly transferable for image segmentation tasks. To solve this problem, the author proposes to leverage the segmentation foundation model for generating unlearnable examples. The proposed method is easy to optimize and transferable for different downstream tasks.
Weaknesses: The key experiment setup is not clear (see questions).
What are the benefits of using the SAM as the backbone for the unlearnable example generation model and the surrogate model? I could not find an ablation on this. Besides, what is the benefit of using the SAM pretrained weight for the unlearnable example generation model? Is this crucial for the success of the proposed method?
Technical Quality: 3
Clarity: 2
Questions for Authors: It appears that the UnSeg model is trained on the HQSeg44k dataset. Consequently, the unlearned dataset is also HQSeg44k, but with added unlearnable noise. The authors subsequently used the HQSeg44k dataset to fine-tune different models for downstream tasks, including semantic segmentation, instance segmentation, and panoptic segmentation.
If my previous explanation is correct, I am concerned about the fairness of this approach. Specifically, there may be a significant domain gap between the fine-tuning dataset and the evaluation dataset. Furthermore, this setup does not demonstrate that the proposed method can generate unlearnable noise for examples beyond the training data used for the UnSeg model.
But from the caption of Figure 5, it seems that the UnSeg model is employed to generate unlearnable noise for new datasets used in training different downstream tasks, the authors should clarify this in lines 249 to 265.
Please clarify the following:
On which dataset is the inference of UnSeg conducted (i.e., what training dataset is used for different downstream tasks)?
Which dataset is used as the evaluation set for the trained models of different downstream tasks?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Limitations have been discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your time and valuable feedback.
**Q1: What are the benefits of using the SAM as the backbone for the unlearnable example generation model and the surrogate model?**
One straightforward benefit of using SAM as the backbone is that it renders our entire framework interactive. This means that 1) the noise can be generated interactively during the optimization process, and 2) the surrogate model can be optimized on a small-scale interactive segmentation dataset, *improving data efficiency*.
Furthermore, the trained noise generator inherits SAM's promptable characteristic and high transferability. This allows our method to generate the unlearnable noise for any downstream image through a single forward pass, *improving generation efficiency*. Additionally, since SAM is an interactive segmentation model whose predictions do not contain any class information, our approach avoids overfitting to any specific category, thus learning a universal unlearnable representation.
As suggested, we replaced the MAE-pretrained ViT surrogate model with an ImageNet-pretrained ResNet50 surrogate model, and we validated the effectiveness of the trained noise generator using Mask2Former on the ADE20K and Cityscapes datasets. We find that the performance of the noise generator significantly declined when using ResNet50 as the surrogate model. We believe that using the MAE-pretrained weights can prevent the noise generator from being biased towards specific categories, thereby enhancing its transferability.
|Surrogate Model|Backbone|ADE20K Panoptic| | |Cityscapes Panoptic| | |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|||PQ|AP-Pan|mIoU-Pan|PQ|AP-Pan|mIoU-Pan|
|Clean|ResNet50|39.7|26.5|46.1|62.1|37.3|77.5|
| |Swin-Tiny|41.6|27.7|49.3|63.9|39.1|80.5|
|MAE pretrained ViT|ResNet50|11.7|7.5|17.7|5.7|1.1|7.8|
| |Swin-Tiny|4.1|3.4|10.6|7.2|1.7|12.6|
|ImageNet Pretrained ResNet50|ResNet50|35.6|23.4|43.5|42.5|16.5|64.8|
| |Swin-Tiny|28.4|19.7|39.7|34.2|57.2|11.6|
**Q2: The influence of using the SAM pretrained weight for the unlearnable example generation model?**
Our primary consideration in using SAM is to ensure that the noise generator possesses promptable attributes after optimization. Therefore, we initialized the generator with the pretrained SAM weights and kept these weights frozen during training. This allows efficient fine-tuning of the newly added parameters. Here, we further test the use of randomly initialized weights for the noise generator. We run the experiments on the Pascal VOC dataset using DeepLabV1. We report two types of results in the table below: *making all classes unlearnable* and *making only some classes unlearnable*. The results indicate that our framework is not sensitive to the initial weights of the noise generator and the noise generator works well with random initialization. We will include this result in the revision.
|Initialization of Generator Weights|All class|Multi-class| | | | |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| | |Aeroplane |Bicycle |Bird |Boat |Bottle|
|Clean|70.6|81|36|84|66|72|
|Pretrained SAM initialization |6.2|30.5|16.4|58.6|19.1|40.4|
|Random initialization |4.8|28.1|6.2|42.1|6.5|25.9|
**Q3: On which dataset is the inference of UnSeg conducted (i.e., what training dataset is used for different downstream tasks)? Which dataset is used as the evaluation set for the trained models of different downstream tasks?**
The outcome of our method is a single noise generator, which generates unlearnable noise for any given image in any downstream dataset. For the downstream tasks, we apply our noise generator to convert the images in the training dataset into unlearnable images and train a SOTA model for the task on these unlearnable training images. We then test the model’s performance on the clean test set. The lower the performance, the better the data protection offered by our noise generator. Please note that this setup strictly follows existing works in the field [a,b,c,d,e,f].
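To make the protocol above concrete, here is a hedged toy sketch (our illustration, not the paper's code): convert the training set into an unlearnable version, train a model on it, then evaluate on the clean test set. The "noise generator" is a deliberately simple label-correlated perturbation standing in for a learned one, and the "model" is a nearest-centroid classifier; all names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    # Two well-separated Gaussian classes in 2D.
    y = rng.integers(0, 2, size=n)
    x = rng.normal(loc=(2 * y - 1)[:, None] * 2.0, scale=0.5, size=(n, 2))
    return x, y

def generator(x, y):
    # Toy "unlearnable noise": a bounded, label-correlated shift that
    # works against the true class signal.
    return x - (2 * y - 1)[:, None] * 4.0

def train_model(x, y):
    # "Training" = computing one centroid per class.
    return np.stack([x[y == c].mean(axis=0) for c in (0, 1)])

def evaluate(centroids, x, y):
    # Accuracy of nearest-centroid predictions on a (clean) test set.
    pred = np.argmin(((x[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
    return (pred == y).mean()

x_tr, y_tr = make_data(400)
x_te, y_te = make_data(400)

acc_clean = evaluate(train_model(x_tr, y_tr), x_te, y_te)
acc_unlearnable = evaluate(train_model(generator(x_tr, y_tr), y_tr), x_te, y_te)
# A lower clean-test accuracy after unlearnable training = better protection.
```

In this toy, the model trained on unlearnable data performs far below the clean-trained one on the clean test set, mirroring how protection strength is measured in the protocol.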
We will clarify this in lines 249 to 265, as suggested. The noise generator was trained on the HQSeg-44k dataset.
Here is a more detailed response to your question:
**For semantic, instance and panoptic segmentation**: we use the trained noise generator to convert the Pascal VOC, ADE20K, Cityscapes and COCO training sets into their corresponding unlearnable training sets. The task-specific models are then trained on the unlearnable training sets and evaluated on the original validation sets.
**For Interactive segmentation**: we use the trained noise generator to convert the HQSeg-44K into its unlearnable version. The SAM-HQ model is then trained on the unlearnable HQSeg-44K dataset and evaluated on the DIS, COIFT, HRSOD and ThinObject datasets.
**For remote sensing instance segmentation**: we use the trained noise generator to convert the WHU, NWPU, and SSDD training sets into unlearnable training sets. The RSPrompter model is then trained on the unlearnable training sets and evaluated on the WHU, NWPU, and SSDD validation sets, respectively.
**For medical image segmentation**: We randomly select 80% of the data from the Lung segmentation and Kvasir-seg datasets as the training sets, while using their remaining 20% as the validation sets. We then use the trained noise generator to convert the training sets into their corresponding unlearnable versions. The UNet++ model is then trained on the unlearnable training sets and evaluated on the clean validation sets.
[a] Unlearnable examples: Making personal data unexploitable, Huang et al., ICLR 2021
[b] Adversarial examples make strong poisons, Fowl et al., NeurIPS 2021
[c] Unlearnable clusters: Towards label-agnostic unlearnable examples, Zhang et al., CVPR 2023
[d] Availability attacks create shortcuts, Yu et al., SIGKDD 2022
[e] Autoregressive perturbations for data poisoning, Sandoval-Segura et al., NeurIPS 2022
[f] CUDA: Convolution-based unlearnable datasets, Sadasivan et al., CVPR 2023
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response; this resolves most of my concerns, and I will raise my rating to 5 (reflected in the revised rating). Though I am quite familiar with SAM, I am not an expert in "Unlearnable Example Generation". I'd like the AC to downweight my review in the final decision.
---
Reply to Comment 1.1.1:
Title: Thanks for your feedback.
Comment: Thank you for taking the time to review our paper. We greatly appreciate your feedback.
---
Rebuttal 2:
Title: Your additional feedback means a lot to us.
Comment: Dear Reviewer Mhnt,
Thank you for your initial comments. We appreciate your questions regarding the training of our UE generator, the datasets used for downstream tasks, and the benefits of using SAM as the backbone. We have addressed these points in our rebuttal. Please review our response to see if it adequately resolves your concerns. If any details remain unclear, we are happy to provide further explanations. We will also revise our paper to clarify these points. Thank you for your time, and we value your additional feedback. | Summary: This article uses the powerful SAM to fine-tune an unlearnable example generator, achieving good protection effects on downstream datasets and models.
Strengths: This article extends the concept of unlearnable examples from classification to segmentation, and the experiments achieve good results.
Weaknesses: I am mainly concerned about whether the unlearnable samples proposed in this paper for segmentation have much practical application. The authors used a relatively strong segmenter to train the unlearnable sample generator, but evaluated the unlearnable samples by training relatively weak segmenters. Is this reasonable in the real world?
Technical Quality: 2
Clarity: 3
Questions for Authors: Please show experiments on adversarial training with unlearnable examples.
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: This paper could not cause any societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and valuable feedback.
**Q1: Main concern that a relatively strong segmenter was used to train the unlearnable sample generator, while the unlearnable samples were evaluated by training relatively weak segmenters. Is this reasonable in the real world?**
While we understand the reviewer’s concern, we would like to argue that our setting is both reasonable and practical in real-world applications. As modern AI and large-scale pre-training increasingly consume our data and erode our privacy, Unlearnable Examples (UEs) have become an active data protection research area in the past two years, and most existing works [a,b,c,d,e,f,g,h] follow a similar setting to ours.
The use of SAM (i.e., a strong segmenter) follows the principle of **using large models against large models**, and this can produce a single powerful generator that provides universal protection across many downstream tasks. Please note that our setting is not only reasonable but also very challenging, as the training dataset of the noise generator is completely different from the downstream datasets, and the generated noise must be powerful enough to **prevent model training** (a stronger requirement than adversarial noise, which only disrupts inference). The trained unlearnable noise generator can readily be applied by data owners to protect their private data (which are not necessarily segmentation data). With our protection, the data won’t be easily exploited by a machine learning model **even if the data is accidentally leaked**.
Moreover, we employed **state-of-the-art (SOTA) models in each domain** to evaluate our method on downstream tasks. For example, the Mask2Former model we tested is a SOTA model for universal image segmentation and has become a recognized milestone in this area. Similarly, the SAM-HQ, RSPrompter, and DINO models are also SOTA in their respective domains, all of which are strong enough to assess the effectiveness of our method.
[a] Unlearnable examples: Making personal data unexploitable, Huang et al., ICLR 2021
[b] Adversarial examples make strong poisons, Fowl et al., NeurIPS 2021
[c] Unlearnable clusters: Towards label-agnostic unlearnable examples, Zhang et al., CVPR 2023
[d] Availability attacks create shortcuts, Yu et al., SIGKDD 2022
[e] Autoregressive perturbations for data poisoning, Sandoval-Segura et al., NeurIPS 2022
[f] CUDA: Convolution-based unlearnable datasets, Sadasivan et al., CVPR 2023
[g] Transferable unlearnable examples, Ren et al., ICLR 2023
[h] Robust unlearnable examples: Protecting data against adversarial learning, Fu et al., ICLR 2022
**Q2: Experiments on adversarial training.**
Thanks for your suggestion. In fact, we have already evaluated our method against adversarial training (**AT**) and an advanced AT method specifically designed for segmentation tasks (**DDC-AT**). Please kindly find the copied results in the table below; a detailed discussion can be found in Section 4.3 of our initial submission. As shown in the table, our method is resistant to both AT and DDC-AT, reducing the test mIoU to 23.1% and 28.5%, respectively.
|Clean|No Defense|AT|DDC-AT|
|:---:|:---:|:---:|:---:|
|75.1|5.8|23.1|28.5|
**Q3: This paper could not cause any societal impacts.**
In terms of societal impacts, please allow us to clarify the following:
**(Societal Impact)** Our work follows the concept of **Unlearnable Examples** (UEs), a powerful data protection technique, and we strongly disagree with the reviewer that it *could not cause any societal impacts*. The idea of UEs has been featured by **[MIT Technology Review](https://www.technologyreview.com/2021/05/05/1024613/stop-ai-recognizing-your-face-selfies-machine-learning-facial-recognition-clearview/)**, which we believe is a clear sign of the positive societal impact of this line of work.
**(Practical Application)** With the release of powerful segmentation models like SAM, we recognize the potential damage they could bring to private data, as personal images can now be easily segmented, interpreted, and manipulated by such models. Our work aims to stop the training of SAM or at least make the training more costly. This urged us to build a universal UE generation method that allows everyone to protect their images against (the training of) segmentation models. We anticipate our released noise generator will be a useful tool for many potential users. As demonstrated in our paper, UnSeg can be applied to protect natural scene images, remote sensing images, and even medical images. UnSeg is very lightweight, occupying less than 400 MB of memory, and can generate an unlearnable version of an image in just 113 ms. These characteristics further highlight the practical value of UnSeg.
**(Novelty)** To the best of our knowledge, our work is the first UE generation method for segmentation models, which we believe is a novel and important generalization of UEs to more complex and foundational fields in computer vision. Unlike all previous methods, our UnSeg generates unlearnable examples using visual prompts, which we believe is technically novel. Our extensive experiments on 7 mainstream image segmentation tasks prove the empirical novelty of our method. For the first time in the field, a single universal UE generator can be used to fight against a wide range of downstream tasks at this scale.
**(Efficiency)** The high practical value of UnSeg is evident in its significant improvements in data efficiency, generation efficiency, and transferability compared to previous methods. UnSeg can be fine-tuned on a 44k dataset in just 10 hours, demonstrating high data efficiency. It can convert an image into its corresponding unlearnable version in just 113 ms, showcasing high generation efficiency. The trained noise generator can be directly applied to different tasks and target models without retraining, highlighting high transferability.
---
Rebuttal Comment 1.1:
Title: Your further feedback is greatly appreciated.
Comment: Dear Reviewer zWqb,
Thank you for your initial comments. We understand your concern about the practical application of our work and have prepared a detailed response to address it. Please have a look at our response and kindly let us know if it addresses your concerns. Your further feedback is valuable to us, and we hope to resolve any remaining issues before the rebuttal period ends. Thanks again for your time.
---
Rebuttal Comment 1.2:
Title: Concerns are addressed.
Comment: The authors have addressed my concerns, and I decide to raise my score.
---
Reply to Comment 1.2.1:
Title: Thank you very much!
Comment: We’re glad that our responses have addressed your concerns. We sincerely appreciate your prompt and positive feedback. Thank you very much! | Summary: Aiming to provide a solution for protecting sensitive/private images, this paper introduces UnSeg, a framework designed to generate unlearnable examples (UEs) to protect images from unauthorized usage in image segmentation models. Utilizing the Segment Anything Model (SAM) and bilevel optimization, UnSeg creates noise that renders images unusable for training segmentation models. The framework is validated through extensive experiments across multiple segmentation tasks, datasets, and architectures (listed in Table 1).
Strengths: - **(Writing)** The reviewer enjoyed reading the paper and the motivation of the idea. Great job, authors!
- **(Method)** The reviewer finds the paper to be quite interesting. The idea of the paper is to leverage a foundational model like SAM to add perturbations to a local area that cannot be segmented. Further, the reviewer is impressed with the approach to train a universal perturbation generator that can be readily applied to craft unlearnable noise for any given image in one single forward pass. This is a distinct property of image-agnostic adversarial attacks and its use in this paper’s setting is intuitive.
- **(Experiment)** The experiment section is comprehensive. Comparison on three benchmarks on Table 1 shows strong performance of the proposed method. Further, the proposed method also tests the strategy on non-standard benchmarks like Remote Sensing Segmentation and Medical Image Segmentation in Figure 5. The paper also investigates the impact of perturbations on object detection Table 3 and against deployed defenses in Table 4.
Weaknesses: The reviewer did not find major flaws with the paper.
- **(Method)** The reviewer feels there will be restrictions due to prevalent memory bandwidth for adoption of this method. The authors can expand on the real-world use cases of the proposed method (in terms of deployment).
- **(Method)** The reviewer doesn’t clearly understand the intuition behind the Section “Training Stability and Epsilon Generalization”. The point of segmentation methods being more fine-grained doesn’t justify balancing the perturbations by scaling factor $v$.
- **(Experiments)** The reviewer fails to understand why the methods in [48, 61] do not serve as proper baselines. Since the authors compare their method in cross domain/task settings (like in Table 3), these works also serve as good baselines.
- **(Experiments)** This paper would definitely benefit from analysis of statistical significance. Localized perturbations may get distributed in different patterns and hence may affect the segmentation performance.
- **(Typo)** L223, 220 v -> $v$
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. Is the method applicable when the model is other than the SAM?
2. Will the method work if the segmentation model leverages contextual knowledge in the input images to create better object masks?
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The authors do not describe any limitations clearly. The reviewer feels that if the surrogate model is not strong/robust enough, the perturbations might not be potent enough to break the target segmentation models. The reviewer did not find any particular negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your insightful, constructive and encouraging reviews.
**Q1: More details on the real-world use cases and deployment of the proposed method.**
Our trained universal unlearnable noise generator is very lightweight and can be deployed similarly to SAM. Specifically, the noise generator occupies only 385.37 MB of memory. During inference on a single RTX 4090 GPU, the maximum VRAM usage is 2893 MB, and it only takes 113.52 ms to generate UEs for a single image, utilizing 2.76 GB of physical memory.
**Q2: The intuition behind “Training Stability and Epsilon Generalization”.**
While image classification models only focus on specific features of an image (as shown by CAM [a]), segmentation models must make accurate predictions for every pixel. This difference makes segmentation models more sensitive to pixel-level perturbations. Moreover, the bi-level optimization process optimizes a surrogate model and a noise generator, both of which aim to reduce the prediction loss. The $\epsilon$ hyperparameter works as a knob to balance the two components. During our exploration, we found that a larger $\epsilon$ tends to hinder the training of the noise generator, as the added noise causes an extremely low loss. To solve this problem, we chose to use a smaller epsilon when training the noise generator and then magnify the noise at inference time to guarantee the unlearnable effect. For this strategy to work, training under a smaller epsilon should generalize to inference with a larger epsilon (we call this **epsilon generalization**). The effectiveness of our method proves that this novel technique indeed works.
[a] Learning deep features for discriminative localization, Zhou et al., CVPR 2016
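The train-small/magnify-large strategy can be sketched as follows. This is a hedged illustration under our own assumptions (not the paper's implementation): noise is kept within a small L-infinity budget `eps_train` during training, and the same noise pattern is scaled up to a larger budget `eps_test` at generation time; all variable names are ours.

```python
import numpy as np

eps_train, eps_test = 2.0 / 255, 8.0 / 255  # small training budget, larger test budget

rng = np.random.default_rng(0)
raw = rng.normal(size=(3, 32, 32))          # stand-in for a generator's raw output
noise_train = eps_train * np.tanh(raw)      # bounded noise used during training

# At inference/generation time, magnify the trained noise to the larger budget.
scale = eps_test / eps_train
noise_test = np.clip(scale * noise_train, -eps_test, eps_test)
```

The magnified noise keeps the trained noise pattern (same sign and direction per pixel) while exploiting the larger inference-time budget, which is the "epsilon generalization" being tested.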
**Q3: Why the methods in [48, 61] do not serve as proper baselines.**
Yes, these methods are indeed excellent. However, they were all designed for image classification, and adapting them to segmentation models is very challenging. Therefore, in Table 2, we only considered the training-free SynPer for comparison. To address your concern, we test two more SOTA unlearnable methods: Autoregressive Perturbations (AR) and Convolution-based Perturbations (CUDA). Due to space limitations, please refer to our response to Q4 of Reviewer 2xcM and the uploaded PDF file for further details. In summary, UnSeg consistently achieves better performance than other methods across different datasets and tasks. We genuinely believe that the high efficiency and transferability of UnSeg will establish it as a solid baseline for segmentation UEs.
**Q4: Analysis of statistical significance.**
Thanks for the thoughtful suggestion. As the unlearnable noise is small, the unlearnable images are statistically indistinguishable from the original images. To show this, we plot the pixel value distribution of a clean image and its unlearnable counterpart, where the two distributions are almost the same (the plots can be viewed in the PDF files we uploaded). We will add more plots to the revision.
**Q5: Typo error on L223, 220.**
Thanks. We will fix the error in the revision.
**Q6: Is the method applicable when the model is other than the SAM?**
Yes. As a simple and flexible framework, UnSeg can potentially work with segmentation models of a similar architecture to SAM, such as SEEM [a], Semantic-SAM, and Grounded SAM. UnSeg can also work with the recently released SAM 2 [b] model, which we plan to verify in our future work.
[a] Segment everything everywhere all at once, Zou et al., NeurIPS 2023
[b] SAM 2: Segment Anything in Images and Videos, Ravi et al., arXiv preprint arXiv:2408.00714 (2024)
**Q7: Will the method work if the segmentation model leverages contextual knowledge to create better object masks?**
Yes. For example, both DeepLabV1 and DeepLabV3 use dilated convolution to leverage region-level contextual knowledge. However, our method can reduce their mIoU to below 10% (please see Figure 4). Mask2Former employs a significant amount of masked transformers to learn long-range contextual knowledge, and our UnSeg can work effectively against it (Table 2). UnSeg also works on SAM-HQ and RSPrompter which both utilize SAM’s pre-trained knowledge (Figure 5 (a)(b)).
Unlike classification UE methods which add noise to the entire image, UnSeg proves that adding unlearnable noise locally/partially is sufficient to prevent the learning of segmentation models.
**Q8: The limitation of UnSeg and the influence of the surrogate model.**
One limitation of UnSeg is its moderate effectiveness in protecting objects with very simple textures and shapes, such as birds or bottles. We believe this is because the simplicity of these categories allows the segmentation models to make accurate predictions without relying on shortcuts. We hope to address this limitation in our future work.
We agree with the reviewer that the surrogate model plays a crucial role in optimizing the noise generator. As noted in Unlearnable Clusters, using CLIP as the surrogate model can enhance unlearnable effectiveness. Stable UEs also shows that more robust surrogate models improve the protection effectiveness.
Here, we replace the surrogate model from the MAE-pretrained ViT with an ImageNet-pretrained ResNet50.
We find that the performance of the noise generator significantly declined when using ResNet50.
We believe that using the MAE-pretrained weights can prevent the noise generator from being biased towards specific categories, thereby enhancing its transferability.
|Surrogate Model|Backbone|ADE20K Panoptic| | |Cityscapes Panoptic| | |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|||PQ|AP-Pan|mIoU-Pan|PQ|AP-Pan|mIoU-Pan|
|Clean|ResNet50|39.7|26.5|46.1|62.1|37.3|77.5|
| |Swin-Tiny|41.6|27.7|49.3|63.9|39.1|80.5|
|MAE pretrained ViT|ResNet50|11.7|7.5|17.7|5.7|1.1|7.8|
| |Swin-Tiny|4.1|3.4|10.6|7.2|1.7|12.6|
|ImageNet Pretrained ResNet50|ResNet50|35.6|23.4|43.5|42.5|16.5|64.8|
| |Swin-Tiny|28.4|19.7|39.7|34.2|57.2|11.6|
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you authors for your rebuttal and hard work. My concerns are cleared and I will maintain my rating. Good luck!
---
Reply to Comment 1.1.1:
Title: Thank you for your valuable feedback.
Comment: We greatly appreciate your recognition of our hard work and your quality review. Your valuable comments have significantly improved our work. Your encouragement means everything to us. We will carefully revise our paper following your suggestions. Thank you very much! | Summary: The paper addresses the issue of privacy concerns in training large-scale image segmentation models using unauthorized private data. The authors propose a novel framework called Unlearnable Segmentation (UnSeg) to generate unlearnable noise that, when added to images, makes them unusable for model training. This framework involves training a universal unlearnable noise generator by fine-tuning the Segment Anything Model (SAM) through bilevel optimization on an interactive segmentation dataset. The effectiveness of UnSeg is demonstrated across six mainstream image segmentation tasks, ten widely used datasets, and seven different network architectures, significantly reducing segmentation performance when unlearnable images are used.
Strengths: 1. The motivation and the proposed method are impressive. The work addresses three key challenges in generating unlearnable examples: data efficiency, generation efficiency, and transferability. The proposed method effectively employs the Segment Anything Model (SAM) to tackle these challenges.
2. Extensive experiments and ablation studies are conducted to evaluate the proposed method.
Weaknesses: 1. The generation of unlearnable examples (UEs) requires object masks, which can be challenging to obtain in common practice. The authors should discuss this issue and consider using bounding boxes or clicks as alternative options. Reporting the results of these alternatives would be beneficial.
2. Why is the surrogate model initialized with the pretrained MAE model weights instead of SAM weights?
3. As shown in Table 5, the model trained with a mixed dataset of clean and unlearnable data achieves similar or even better performance compared to the model trained with only clean data. This suggests that unlearnable data might not reduce performance and could even enhance it. Consequently, network trainers might still collect images without concern, despite the proposed method, unless all images available on the internet are unlearnable samples.
4. Can the authors provide more results from different networks and datasets using the clean-unlearnable mixed training dataset in Table 5?
5. Table 2 only compares one state-of-the-art (SOTA) method. The authors should include more SOTA methods, including attack methods for segmentation and classification, to effectively validate the superiority of the proposed method.
Technical Quality: 3
Clarity: 2
Questions for Authors: See Q1 and Q3 in weaknesses, pls.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors have introduced the limitations and broader impact of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your insightful and valuable reviews.
**Q1: Discussion about object masks, and more experiments regarding different prompts.**
The reason why we consider only object masks as prompts, rather than boxes or points, is to **eliminate ambiguity** in the prompts. For example, in an image containing a person, if a defender clicks on a point on the person's face to make it unlearnable, the model would be uncertain about what the point refers to: the entire person or just the face? It is also unclear to the defender which object/region has been protected, leading to ambiguity. Using object masks as prompts enables precise specification of the objects to be protected, effectively avoiding the aforementioned ambiguity. In fact, with powerful tools like SAM, it is quite easy to obtain the object masks.
Here, we provide more experiments with different prompts on the Pascal VOC dataset using DeepLabV1. We report two types of results: making all classes unlearnable and making only some classes unlearnable. It shows that our method is robust across different types of prompts, achieving excellent protection even with point prompts.
|Prompt Type |All class|Multi-class| | | | |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| | |Aeroplane |Bicycle |Bird |Boat |Bottle|
|Clean|70.6|81|36|84|66|72|
|Point|6|27.2|17.8|55|18.1|29.2|
|Box|6.4|41.2|24.4|60.1|23.7|38.1|
|Mask|6.2|30.5|16.4|58.6|19.1|40.4|
**Q2: Why is the surrogate model initialized with the pretrained MAE model weights instead of SAM weights?**
Our bi-level (min-min) optimization alternately optimizes the surrogate model and the noise generator to minimize the loss between the surrogate model's predictions and the true labels. This means that if the surrogate model is initialized with the pre-trained SAM weights, it will have a very low initial loss, as SAM already makes high-quality predictions. In this case, the noise generator would no longer need training. In other words, using SAM weights would hinder the optimization of the noise generator. The pre-trained MAE weights, on the other hand, alleviate this problem and are also a common initialization for segmentation tasks.
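The min-min alternation described above can be sketched with a toy example. This is a hedged illustration (our own toy, not the paper's code): a linear surrogate model and a bounded per-sample noise both take minimization steps on the same hinge loss, so the noise gradually becomes an easy-to-learn shortcut; all names and hyperparameters are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8))
w_true = rng.normal(size=8)
y = np.sign(X @ w_true)                  # toy labels in {-1, +1}

w = np.zeros(8)                          # surrogate (linear) model
delta = np.zeros_like(X)                 # per-sample unlearnable noise
eps, lr = 0.5, 0.1                       # L_inf noise budget and step size

def hinge_loss_and_grads(w, delta):
    margin = 1.0 - y * ((X + delta) @ w)
    active = (margin > 0).astype(float)  # samples with nonzero hinge loss
    loss = np.maximum(margin, 0.0).mean()
    grad_w = -((active * y)[:, None] * (X + delta)).mean(axis=0)
    grad_delta = -(active * y)[:, None] * w   # per-sample gradient (1/N dropped)
    return loss, grad_w, grad_delta

losses = []
for step in range(200):
    loss, grad_w, grad_delta = hinge_loss_and_grads(w, delta)
    losses.append(loss)
    if step % 2 == 0:                    # min step over the surrogate model
        w -= lr * grad_w
    else:                                # min step over the noise, projected
        delta = np.clip(delta - lr * grad_delta, -eps, eps)
```

Both update directions drive the same loss down, which is why a surrogate that already starts near zero loss (e.g., SAM-initialized) would leave the noise with nothing to minimize.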
**Q3: More experiments from different networks and datasets using the clean-unlearnable mixed training dataset.**
Thanks for your suggestion. Here, we add the results obtained using the DeepLabV1 on the Pascal VOC dataset.
|Method|0% Clean|20% Clean|40% Clean|60% Clean|80% Clean|100% Clean|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|Clean Only|-|69|69.6|69.8|70.5|70.5|
|Mixed Data|7|66.6|68.2|69.9|70.4|-|
Then, we conduct new experiments using UNet++ with 5 different backbones on the Kvasir-seg dataset.
|Method|backbone|0% Clean|20% Clean|40% Clean|60% Clean|80% Clean|100% Clean|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|Clean Only|RN50|-|67.3|70.1|71|71.6|72.3|
| |D169|-|69.3|70.7|72.1|72.2|73.6|
| |EfficientB6|-|69.7|71.2|73.5|72.7|74|
| |Res2Net|-|67.6|70.8|71.2|71.7|73.6|
| |RegNetX|-|68.1|69.2|71.1|71.3|72.6|
|Mixed Data|RN50|2.5|67.2|68.7|70.3|71.8|-|
| |D169|6|69|69.5|71.2|72.4|-|
| |EfficientB6|7.4|70.6|71.9|73.3|73.2|-|
| |Res2Net|6.7|68.7|70.5|71.7|72.6|-|
| |RegNetX|2.1|69.8|69.7|71.1|71.4|-|
We find that: 1) All the results on the mixed datasets are consistently lower than those on 100% clean data. Moreover, the model’s performance when trained on the mixed training set is the same as when trained only on the clean portion of the training data. This implies that the UEs generated by UnSeg contribute almost nothing to the model's training. This result aligns with existing works (UEs, UCs, SynPer, AdvPoison, etc.). 2) The model trained on a mixed dataset sometimes performs worse than the model trained on only clean data. This indicates that our method may also hinder the model's learning on clean data to some extent.
**Q4: More comparisons with SOTA methods.**
We believe that previous optimization-based methods (as opposed to training-free or generative ones), e.g., UEs/RUEs/TUEs/AdvPoison, face data efficiency, generation efficiency, and transferability challenges, as they need to optimize the unlearnable noise for each new sample. These challenges render them impractical for image segmentation, which involves diverse scenarios (datasets), higher resolutions, and complex models. Therefore, we only considered training-free methods (no generative methods are available) for comparison in Table 2.
Here we test two more SOTA training-free unlearnable methods: Autoregressive Perturbations (AR) [a] and Convolution-based Perturbations (CUDA) [b]. **Please refer to the uploaded PDF for the complete results**. For the AR method, we use its AR process to generate sample-wise noise of size $\epsilon=1$ (L2 constraint) for each category. For the CUDA method, we use filters of size 3 and set the blur parameter to 0.3 to generate the noise for each category.
|Dataset|Method|Panoptic| | |Instance| | | |Semantic|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| | |PQ|AP-Pan|mIoU-Pan|AP|AP-S|AP-M|AP-L|mIoU|
|ADE20k|Clean|39.7|26.5|46.1|26.4|10.4|28.9|43.1|47.2|
| |AR|37.8|24.9|43.1|25.4|9.4|27.7|43.3|43.9|
| |Cuda|10.7|8.4|19.6|12|3.9|14.6|22.5|19.6|
| |UnSeg(Ours)|11.7|7.5|17.7|6.2|5|8.6|7.3|16.7|
|COCO|Clean|51.9|41.7|61.7|43.7|23.4|47.2|64.8|-|
| |Cuda|6.7|4.7|11.2|9.7|3.7|10.9|18.8|-|
| |UnSeg(Ours)|4.2|3.2|5.2|4|5.8|3.7|1.7|-|
|Cityscapes|Clean|62.1|37.3|77.5|37.4|-|-|-|79.4|
| |AR|51.6|36|68.3|35.5|-|-|-|68.9|
| |Cuda|51.6|31.4|69.1|29.9|-|-|-|65.8|
| |UnSeg(Ours)|5.7|1.1|7.8|2.3|-|-|-|10.9|
As can be observed, 1) AR loses effectiveness in preventing the training of segmentation models; 2) CUDA works well on the ADE20K dataset yet is inferior to UnSeg on the COCO and Cityscapes datasets; and 3) UnSeg achieves the best performance. We genuinely believe our UnSeg can serve as a solid baseline for segmentation UEs.
[a] Autoregressive perturbations for data poisoning, Sandoval-Segura et al., NeurIPS 2022
[b] CUDA: Convolution-based unlearnable datasets, Sadasivan et al., CVPR 2023
---
Rebuttal Comment 1.1:
Comment: Thanks a lot to the authors for their responses, which have addressed most of my concerns. While the effectiveness of the proposed method on clean-unlearnable mixed data in Q3 is not particularly impressive, I still believe this is a solid piece of work and will maintain my original score.
---
Reply to Comment 1.1.1:
Title: Thanks for your valuable and timely feedback.
Comment: We greatly appreciate your recognition of our work and your positive feedback. Your acknowledgment means a lot to us. We will incorporate the new results into our revision. Thank you very much! | Rebuttal 1:
Rebuttal: We thank all the reviewers for their insightful and productive feedback.
We are grateful to read that the reviewers agreed that the motivation and the proposed method in this work are impressive and quite interesting (**2xcM, PRaH**), that the method has been extensively evaluated and achieves excellent protection results (**2xcM, PRaH, zWqb, Mhnt**), and that the work is of high practical significance thanks to its high efficiency and transferability (**Mhnt, 2xcM, PRaH**).
We have uploaded a PDF containing the comparison of UnSeg with two more SOTA methods to address the concerns of Reviewers **2xcM** and **PRaH** regarding baselines. The PDF also includes pixel value distributions of the clean and the unlearnable images to address Reviewer **PRaH**'s concerns regarding the analysis of statistical significance.
We have individually responded to the questions and concerns raised by each reviewer point-by-point in the discussion boxes, and we have outlined our improvement plans and experiments in detail. If any questions remain unanswered or our responses are unclear, we would appreciate the chance for further engagement with reviewers.
Once again, we sincerely thank all reviewers for the valuable feedback, which would further improve the quality of this paper.
Pdf: /pdf/b3173d034424e0b26b45cf00e5b2468b47d115d9.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Exploring Structured Semantic Priors Underlying Diffusion Score for Test-time Adaptation | Accept (poster) | Summary: The paper proposes DUSA, which uses diffusion models for test-time adaptation. Class codes are chosen for an image based on the results of a pretrained classifier and random sampling; these codes are used to generate noise candidates, which are combined based on the predicted probabilities for the associated classes. Error and weight updates are computed for both the classifier and the diffusion model based on the error between the aggregated and ground truth noises. This takes advantage of the semantics of the diffusion model to handle corrupted, challenging images not only for classification, but also for segmentation. The results include a variety of tasks and classifiers.
Strengths: [S1] Figure 1 is an outstanding, very well-designed portrayal of the method.
[S2] The method is novel, with its use of single time steps and CSM.
[S3] The results are outstanding, with large improvements across multiple tasks, reproducible on multiple models and with hyperparameter settings that do not seem too brittle.
Weaknesses: [W1] Existing art, e.g. [1], show there may be some benefit to using a few time steps. It also seems that in the image classification regime, the increase in computational complexity is unacceptably large. However, given existing work in the TTA area already uses many steps, it would have been nice to see some exploration of the impact of using more than 1 step, insofar as it would likely make the approximation in Equation 9 more accurate.
[W2] Considering the related work is positioned at the end, the introduction should do a little more to introduce and motivate the task. TTA is substantially more niche than other tasks, and the paper's narrative flow suffers from not making the motivation more explicit.
[1] Mukhopadhyay et al., Do text-free diffusion models learn discriminative visual representations?
Technical Quality: 3
Clarity: 3
Questions for Authors: What is the latency of the method? Limitations mentions that it would not work for real time (e.g. autonomous driving) applications, but how far off is it?
Was any exploration of ensembling time steps attempted?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Existing art, e.g. [1], show there may be some benefit to using a few time steps. It would have been nice to see some exploration of the impact of using more than 1 step, insofar as it would likely make the approximation in Equation 9 more accurate. Was any exploration of ensembling time steps attempted?**
**A1:** Thanks for the valuable advice. The inspiring work [1] unveils the discriminativeness of U-Net features in diffusion models and achieves superior unsupervised unified representation learning performance across diverse tasks, where ensembling of timesteps is found beneficial. Our DUSA shares a similar spirit of exploring the discriminativeness in generative diffusion models, thus the ensembling of timesteps is definitely worth trying out.
To explore the impact of ensembling timesteps as suggested, we experiment by performing DUSA on more than one timestep simultaneously and averaging their losses, and the results are as follows:
|Timestep(s)|Gauss.|Defoc.|Snow|Contr.|
|-|:-:|:-:|:-:|:-:|
|{50}|64.0|50.8|69.5|69.3|
|{100}|64.2|54.7|70.1|68.9|
|{200}|63.4|55.1|69.8|66.6|
|{50,100}|**64.3**|54.0|**70.2**|69.2|
|{50,100,200}|**64.3**|**55.4**|**70.2**|69.1|
It can be seen that the ensembling of different timesteps **does bring a further improvement in performance**, thanks to the knowledge from multiple timesteps. Furthermore, our DUSA has the special advantage of extracting knowledge from every single timestep, and can still achieve good results with reduced computational overhead. We appreciate the insightful advice and will add these discussions in the revision.
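Concretely, this ensembling amounts to evaluating the single-timestep objective at each selected timestep and averaging the results; a minimal sketch (the function name and signature are our own illustration, not the paper's code):

```python
import numpy as np

def ensembled_loss(loss_at_t, timesteps=(50, 100, 200)):
    """Evaluate an (assumed) single-timestep loss at each selected
    timestep and average the results, as in the experiment above."""
    return float(np.mean([loss_at_t(t) for t in timesteps]))
```

For example, `ensembled_loss(single_step_loss, (50, 100, 200))` would reproduce the {50, 100, 200} row, at roughly three times the per-step cost of a single timestep.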
[1] Mukhopadhyay et al., Do text-free diffusion models learn discriminative visual representations?
**Q2: Considering the related work is positioned at the end, the introduction should do a little more to introduce and motivate the task.**
**A2:** Thanks for the good suggestion. In test-time adaptation, a pre-trained model is updated on the fly to make accurate predictions on the incoming target samples without label access, which is challenging as the target data distribution might drift from that in pre-training. Being a competitive family of generative models, diffusion models exhibit great capability in modeling data distributions and even show discriminative potential in learned features [1,2,3,4], making them a strong candidate to provide guidance for discriminative tasks. In this work, we aim to extract discriminative priors from diffusion models to facilitate the challenging task of test-time adaptation.
We thank the reviewer's efforts in improving the readability of our manuscript and will further polish up the expression and integrate the above discussions into the introduction in the revision.
[1] Mukhopadhyay et al., Do text-free diffusion models learn discriminative visual representations?
[2] Zhang et al., Diffusionengine: Diffusion model is scalable data engine for object detection.
[3] Zhao et al., Unleashing text-to-image diffusion models for visual perception.
[4] Xu et al., Open-vocabulary panoptic segmentation with text-to-image diffusion models.
**Q3: What is the latency of the method? Limitations mentions that it would not work for real time (e.g. autonomous driving) applications, but how far off is it?**
**A3:** Thanks for the question. As shown in the table below, our DUSA predicts at a rate of 0.11s/image for classification with the Diffusion Transformer (DiT), a **99.6% reduction in time** compared to Diffusion-TTA. For segmentation with ControlNet, the time is 4.5s/image.
|ConvNeXt-L|Time/Image (A6000)|Gauss.|Defoc.|Snow|Contr.|
|-|:-:|:-:|:-:|:-:|:-:|
|Diffusion-TTA|27.71s|59.3|50.3|64.7|50.5|
|DUSA|**0.11s (-99.6%)**|**64.2 (+4.9)**|**54.7 (+4.4)**|**70.1 (+5.4)**|**68.9 (+18.4)**|
The latency is largely related to the selection of the diffusion model as well as the task complexity. That said, we can freeze the diffusion model (DUSA-Frozen) and only update the task model in DUSA, which still provides decent performance on a handful of tasks with **further reduced latency**, approaching real time; the results are as follows:
|ConvNeXt-L|Time/Image (A6000)|Gauss.|Defoc.|Snow|Contr.|
|-|:-:|:-:|:-:|:-:|:-:|
|DUSA|0.11s|64.2|54.7|70.1|68.9|
|DUSA-Frozen|**0.05s**|58.4|33.7|64.2|62.0|
We will continue researching how to reduce the latency for discriminative tasks further while utilizing the diffusion model's strong power of generative modeling.
---
Rebuttal Comment 1.1:
Title: Score unchanged
Comment: Thanks for the clarifications. I think they improve the paper, and I think they make my initial rating more accurate. The paper is indeed technically sound, with high impact on TTA.
---
Reply to Comment 1.1.1:
Title: Thank you for the suggestions and positive feedback
Comment: Dear Reviewer Your,
Thank you for your thoughtful feedback. We greatly appreciate your acknowledgment and find your suggestions invaluable for enhancing our paper. We hope our work will shed light on the evolving field of test-time adaptation.
Best regards,
Submission335 Authors. | Summary: This work extends the Diffusion-test time adaption framework of [26] where the diffusion loss is averaged over timesteps to a simpler and theoretically justified framework, where a single timestep of the diffusion model can extract the semantic priors from the generative model. This reduces the instability of the Monte Carlo-based likelihood estimation over the timesteps. The proposed approach is applied to ImageNet classification on ConvNext and segmentation.
Strengths: 1. The work is well-written and well-motivated with self-contained justification for using a single timestep for extracting the semantic prior from the diffusion model.
2. Extensive experiments on the full test-time adaptation and continual test-time adaptation of ImageNet show that the approach outperforms prior work by a large margin.
3. Adequate ablations are performed to show the effect of selection of timestep and different components of the proposed framework.
Weaknesses: 1. During optimization, updates are performed on the diffusion weights as well. How the joint update for the two networks is performed is unclear. The timestep selection should impact the diffusion model's update schedule and the output logits.
2. The computational overhead needed for the approach and the gain over the prior work are not discussed.
3. The work claims that the approach can be applied to any discriminative task. Class embedding as conditional is generally required for the diffusion model. This should limit the discriminative settings that can be considered for TTA with diffusion models.
4. The paper does not discuss how the conditionals are computed for the segmentation task. The increase in computational overhead should also be discussed with the increased complexity of the task.
5. Single timestep for adapting diffusion models on vision tasks has been discussed in [a] and [b]. These works can be discussed in the related work.
[a] Unsupervised semantic correspondence using stable diffusion. NeurIPS 2023.
[b] Slime: Segment like me. ICLR 2024.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How is the update performed for the diffusion model as in [26] updating the diffusion model is optional. Does it incur any additional overhead?
2. Do different tasks, for example, segmentation and classification, have a single timestep where they perform well, or depending on the granularity of the task the choice of the timestep is different?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations of the work are discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: How the joint update for the two networks is performed is unclear.**
**A1:** Thanks. Before adaptation, **a single fixed timestep is selected** and other timesteps are dismissed. Hereafter, the diffusion model is **only updated on this single timestep**, i.e., we added a **same** level of noise to all images and denoise them at a **same** timestep. Note that the task model predicts on clean images, thus **not affected by timestep**. The diffusion noise estimations and the task model predictions are then integrated into Eq. 10 for a joint update.
**Q2: The timestep selection should impact the diffusion model's update schedule and the output logits.**
**A2:** Thanks. In our DUSA, the diffusion model is only updated on a **single fixed timestep**. We agree that timestep selection should impact the amount of noise in $x_t$ and the timestep condition $t$ for noise estimation, but we argue that the timestep in DUSA is **fixed throughout adaptation** and how we select it makes no difference to DUSA's pipeline. Besides, the task model predicts logits from clean images and is **not affected by timestep selection**.
**Q3: The computational overhead needed for the approach and the gain over the prior work are not discussed.**
**A3:** Thanks. Firstly, with $K$ classes and $T$ diffusion timesteps, the prior work Diffusion-TTA has a computational complexity of $\mathcal{O}(T)$, in fact 180 timesteps for each image. By contrast, our DUSA has a complexity of $\mathcal{O}(K)$, further reduced to $\mathcal{O}(b)$ $(b\ll K)$ by practical designs (lines 136-139, 193). We list the computation time with gain as follows:
|ConvNeXt-L|Time/Image (A6000)|Gauss.|Defoc.|Snow|Contr.|
|-|:-:|:-:|:-:|:-:|:-:|
|Diffusion-TTA|27.71s|59.3|50.3|64.7|50.5|
|DUSA|**0.11s (-99.6%)**|**64.2 (+4.9)**|**54.7 (+4.4)**|**70.1 (+5.4)**|**68.9 (+18.4)**|
It can be seen that our DUSA shows significant gain over Diffusion-TTA with a tremendous reduction (-99.6%) in computational overhead.
**Q4: Class embedding as conditional is generally required for the diffusion model. This should limit the discriminative settings that can be considered for TTA with diffusion models.**
**A4:** Thanks. We agree that diffusion models generally require class embeddings. However, the specific form of class embeddings can vary. For classes with semantic meanings, we can construct a prompt containing class names and use a text model as embedder. For classes without explicit meanings, we can discretize them to IDs and map them to embedding vectors. In this way, we believe any discriminative task can be taken into consideration.
**Q5: How are the conditionals computed for the segmentation task? The increase in computational overhead should also be discussed with the increased complexity of the task.**
**A5:** Thanks. As we stick to text-to-image diffusion models, the conditionals for segmentation are computed **at class level**. Specifically, we construct a prompt "a photo of a {class}" and use the diffusion model's text encoder to obtain class embedding. Please note that **a single conditional noise estimation** in diffusion could also be viewed as **a noise estimation for every single pixel**. Therefore, the computation of conditionals for the segmentation task is **as simple as** in classification: given a class, predict the noise. The only difference is the way of ensembling noise estimations with predictions as in Eq. 12, which introduces little overhead.
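As an illustration of this class-level conditioning and the per-pixel ensembling (a sketch with our own names, not the paper's code; `class_eps` stands in for the diffusion model's conditional noise estimates):

```python
import numpy as np

def class_prompts(class_names):
    """One prompt per class, fed to the diffusion model's text encoder."""
    return [f"a photo of a {name}" for name in class_names]

def pixelwise_noise_ensemble(seg_probs, class_eps):
    """Combine one conditional noise estimate per class at every pixel,
    weighted by the segmentation model's per-pixel class probabilities
    (the spirit of Eq. 12).
    seg_probs: (B, K, H, W); class_eps: (B, K, C, H, W)."""
    w = seg_probs[:, :, None, :, :]     # broadcast over the noise channels
    return (w * class_eps).sum(axis=1)  # (B, C, H, W)
```

The only change from classification is this spatial weighting of the noise estimates, which is why it introduces little overhead.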
The increase in computational overhead first comes from the larger network being adapted in segmentation. Besides, the overhead is partially due to using a larger diffusion model (ControlNet, 1.7B params) than that in classification (DiT, 675M params) and the sliding-window strategy for non-square images. Lastly, the overhead also comes from the unpolished use of our CSM module for this task. We find that our method is versatile and competitive across tasks, and it is mainly the task complexity that adds to the overhead. We will add these discussions in the revision.
**Q6: Single timestep for adapting diffusion models on vision tasks has been discussed in [a] and [b]. These works can be discussed in the related work.**
**A6:** Good suggestion. [a] gets strong results in finding semantic correspondences by obtaining transferable prompt embeddings at a single timestep. [b] achieves significant gains in the one annotation-based image segmentation task at any granularity by learning reusable text embeddings at a single timestep. While also using a single timestep, our DUSA focuses on utilizing the diffusion model to adapt a discriminative model. We find the works relevant and will discuss them in the revision.
[a] Unsupervised semantic correspondence using stable diffusion. NeurIPS 2023.
[b] Slime: Segment like me. ICLR 2024.
**Q7: How is the update performed for the diffusion model as in [26] updating the diffusion model is optional. Does it incur any additional overhead?**
**A7:** Thanks. As described in A1, we update the diffusion model on a single fixed timestep. Like [26], updating the diffusion model is optional in DUSA, and we show the performance of freezing the diffusion model in **Table R6** in uploaded PDF. It can be seen that when not updating diffusion model, our method still outperforms [26]. Besides, with updating, our DUSA takes the lead by a large margin, with a vastly reduced overhead of **-99.6%** adaptation time compared to [26], as shown in A3.
**Q8: Do different tasks, for example, segmentation and classification, have a single timestep where they perform well, or depending on the granularity of the task the choice of the timestep is different?**
**A8:** Thanks. We vary the selected timestep for segmentation and report the results in **Figure R2** in uploaded PDF. As the timestep being too large or too small is not preferred (lines 169-172), we focus on a narrower range here. It can be seen that there is no global best for all scenarios, but our $t=100$ proves effective across tasks.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications. The rebuttal discusses the listed weaknesses and there are no further questions.
---
Reply to Comment 1.1.1:
Title: Thank you for your positive feedback
Comment: Dear Reviewer 6Z3D,
It's good to know that we have addressed your concerns. We really appreciate your thoughtful comments and response, and are sincerely grateful for your efforts in improving the quality of our work.
Best regards,
Submission335 Authors. | Summary: This paper proposes to perform Test-Time Adaptation (TTA) with the help of diffusion models, based on the theoretical observations that effective discriminative priors are hidden within conditional diffusion losses.
The method involves a joint adaption of the task discriminative model and the generative diffusion model, with its optimization objective given from a theoretical perspective. To make it more efficient, the paper shows that the objective can be decoupled to fit modern classifier-free guidance (CFG) based models, and can still work well when only using one single timestep and a few selected class candidates. Moreover, the method is shown to naturally fit dense prediction tasks.
Experimental results on fully TTA and continual TTA tasks show the efficacy on both image classification and semantic segmentation, surpassing previous diffusion-based TTA methods.
Strengths: **Novelty in research problem and method.** Utilizing the discriminative natures within generative diffusion models to enhance discriminative task models is an interesting and underexplored topic, and could be valuable for the research community. Extracting the underlying semantic knowledge from diffusion losses from a theoretical perspective is novel (Eq. 10). Empirical contributions to improve the efficiency and versatility are also valuable.
**Significant results.** The evaluations on two discriminative tasks under two different TTA settings show the effectiveness of the proposed method, which outperforms previous baselines by large margins, including diffusion-based ones. These empirical results are significant.
Weaknesses: **Clarity on the training objective in Eq. 11.** Though the objective proposed in Eq. 10 is well-supported from the theoretical perspective, the alternative and more efficient one in Eq. 11 is somewhat counter-intuitive and needs further elucidation. Please refer to the "Questions" section.
**The role of diffusion models.** The proposed method (especially for image classification) relies on a conditional diffusion model that is pre-trained on the same image dataset and with the same set of class labels. Though the experiments on semantic segmentation are done with a large text-to-image model pre-trained on web images, the paper does not show whether it is better than a model pre-trained on the same distribution. Besides, how does the quality (e.g., sampling FID, model capacity, training duration, training dataset diversity) of diffusion models affect TTA performance?
The method also relies on the fine-tuning of diffusion models, rather than only evaluating off-the-shelf models. This makes the TTA process more costly, since diffusion models are usually much larger than discriminative task models.
Technical Quality: 3
Clarity: 3
Questions for Authors: The reviewer has some questions about the proposed objective.
- Can you give some insights or intuitions on why the modified objective (Eq. 11) works as well as the original objective (Eq. 10)?
- Related to the above, why the unconditional part in Eq. 11 is still needed?
- Firstly, the unconditional version of CFG-based models is much weaker than the conditional version, and they usually do not agree well with each other. Do the unconditional training really help the conditional noise estimation $\epsilon_\phi(x_t,t,c_y)$ ?
- Moreover, the conditional part in Eq. 11 does not involve tuning the diffusion weights $\phi$, then why does the diffusion model (unconditional version) still need to be constrained by a regular CFG objective, given that it has already been well pre-trained?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have adequately discussed most of the limitations, and there have no concerns about negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Though Eq. 10 is well-supported from theoretical perspective, the alternative Eq. 11 needs further elucidation. Insights or intuitions on why the modified objective (Eq. 11) works as well as the original objective (Eq. 10)?**
**A1:** Thanks for the valuable question. Please note that we can view Eq. 10 from two perspectives: (a) the task model learns from the diffusion model, (b) weighted optimization is performed so that $\epsilon_\phi(x_t,t,c_y)$ with a larger weight $p_\theta(y|x_0)$ is more responsible for estimating true noise. Therefore, Eq. 10 **encourages the conditional diffusion model to adapt to test-time data**, under **instruction of task model predictions**.
Similarly, two perspectives of Eq. 11 can be shown. The conditional part adapts the task model as in Eq. 10. For the unconditional part, we start from Eq. 9:
$$\epsilon\approx\sum_yp(y|x_t)\epsilon_\phi(x_t,t,c_y).\quad(\text{Eq. 9})$$
Without loss of generality, we ignore the estimation error below. In a CFG-based diffusion model, the true noise in Eq. 9 can be unconditionally estimated:
$$\epsilon_\phi(x_t,t,\varnothing)=\sum_yp(y|x_t)\epsilon_\phi(x_t,t,c_y).\quad(1)$$
Please note that all noise estimations are from **a same diffusion model**, and the weights $p(y|x_t)$ are the **implicit priors in the diffusion model**. Therefore, we can view (1) as an **internal constraint** of the diffusion model. Intuitively, the unconditional part **implicitly enforces conditional adaptation of the diffusion model to test-time data**, now under **instruction of the implicit priors from the diffusion model itself**, explaining why Eq. 11 works as well as Eq. 10. We will add the discussions in the revision.
**Q2: The unconditional version of CFG-based models is much weaker than the conditional version, and they usually do not agree well with each other. Do the unconditional training really help the conditional noise estimation?**
**A2:** Thanks. We agree that the unconditional version is weaker and there can be disagreement. However, this version is still decent for noise estimation, as it also undergoes extensive diffusion training. Besides, in A1 we show that **the unconditional training helps adapt conditional noise estimations to test-time data** in CFG-based models. Therefore, we prefer to view the two versions as **cooperative**, and their agreement is not forced.
We compare unconditional training with freezing the diffusion model in **Table R1** in uploaded PDF. We can see that the unconditional results are comparable to conditional training. This strengthens our belief that **the unconditional training does help the conditional noise estimations**, with reduced overhead.
**Q3: The conditional part in Eq. 11 does not involve tuning the diffusion weights, why does the diffusion model (unconditional version) still need to be constrained by a regular CFG objective?**
**A3:** Thanks. The regular CFG objective **explicitly enhances both conditional and unconditional noise estimations** for better generation. In contrast, Eq. 11 focuses on utilizing the diffusion model to **facilitate task model adaptation** (conditional part) and leveraging the implicit priors to **adapt the conditional noise estimations** (unconditional part), as shown in A1.
We constrain the diffusion model with unconditional training to handle a **potential distribution shift** between diffusion model's training set and test-time data. Results are shown in **Table R1** in uploaded PDF. If the diffusion model is generalizable to unseen data, unconditional training can be removed with limited sacrifice in performance (DUSA-Frozen for Gaussian). Otherwise, should the test-time data be out-of-distribution for the diffusion model, the absence of unconditional training might cause degradation (DUSA-Frozen for Defocus). Therefore, the constraints are preferred for a more consistent performance gain.
**Q4: Whether a large text-to-image model pre-trained on web images is better than a model pre-trained on the same distribution for segmentation.**
**A4:** Thanks. Actually, the diffusion model we adopt for semantic segmentation is a ControlNet (based on Stable Diffusion v1.5) fine-tuned on ADE20K, same distribution as the task model. Although not equal to training from scratch, we assume the fine-tuning makes the diffusion model aware of the task model's training data distribution.
We compare the results of adapting with ControlNet against Stable Diffusion models in **Table R3** in uploaded PDF. It can be seen that the diffusion model's training on the same distribution as the task model **generally merits more performance gain**, as it narrows the gap between the task model and the diffusion model for better cooperation.
**Q5: How does the quality of diffusion models affect TTA performance?**
**A5:** Thanks. We compare the quality of popular diffusion models in **Table R4** in uploaded PDF. We apply our DUSA to them and show classification in **Table R2** and segmentation in **Table R3** in uploaded PDF. It can be seen that while a diffusion model of better quality fosters TTA performance (SD series), one training on the same distribution as the task model is generally more favorable (blue lines).
**Q6: The method also relies on the fine-tuning of diffusion models, rather than only evaluating off-the-shelf models. This makes the TTA process more costly, since diffusion models are usually much larger than discriminative task models.**
**A6:** Thanks. We agree that the fine-tuning of diffusion models makes TTA more costly, but we highlight the substantial performance gain obtained by updating the diffusion model (DUSA) against freezing it (DUSA-Frozen), as shown in **Table R1** in uploaded PDF.
Besides, our DUSA is **more than 100x faster** in comparison with Diffusion-TTA, and approaches the strong traditional TTA method EATA in speed. The results are in **Table R5** in uploaded PDF, where our DUSA shows leading performance with little extra cost.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed explanations, and these have addressed most of my concerns. Therefore, I lean towards acceptance, and I encourage the authors to add the discussions about the unconditional / conditional parts in Eq. 11 to the main paper.
The points that "unconditional part to handle a potential distribution shift between training and test-time data" and "conditional part to guide the task model to learn from the diffusion model" are interesting.
---
Reply to Comment 1.1.1:
Title: Thanks for the response
Comment: Dear Reviewer oxUu,
Thank you for the positive feedback. We find your suggestions and questions really helpful in improving our work. We will follow your advice and add the interesting discussion on unconditional / conditional parts in Eq. 11 to the main paper in the revision.
Best regards,
Submission335 Authors. | Summary: Tackling the limitations of prior research on test-time adaptation using pre-trained diffusion models, this study expands the diffusion prior for more practical scenarios. The authors extend the use of the diffusion prior to dense prediction tasks, enhancing inference speed with a refined diffusion loss function that eliminates the need for time averaging. This time-agnostic characteristic of the proposed diffusion loss undergoes rigorous validation through mathematical derivations and straightforward experimental validations.
While the pre-trained diffusion prior proves effective in various tasks, this study also demonstrates its utility in test-time adaptation for both classification and semantic segmentation with remarkable performance gain. These advancements are particularly crucial for practitioners in the field, offering more efficient and robust tools for real-time applications.
Strengths: ## Test-Time Adaptation for Dense Prediction Tasks
Recent advancements have seen the extensive use of large-scale pre-trained diffusion models as priors for various tasks. The application of diffusion priors in test-time adaptation for dense prediction tasks is a particularly intriguing discovery and is deemed to be a valuable approach moving forward.
## Time-Step Efficiency of the Proposed Method
Traditional diffusion models often require generating multiple samples due to the expectation across temporal steps in their loss functions. This research successfully removes the expectation over temporal steps, thereby enhancing efficiency. However, the formulation of the proposed method isn't fully understood, prompting additional questions.
Weaknesses: ## Misleading Title of the Paper
The algorithm described in the Appendix should ideally be integrated into the main body of the paper.
## Ambiguity in the Use of the Adjective 'Fresh'
The term "fresh" is used ambiguously in lines 66-68:
> A fresh proposition is provided from a theoretical perspective to extract discriminative priors from score-based diffusion models, which are capable of handling both classification and dense prediction tasks at test time in a single timestep.
Technical Quality: 3
Clarity: 2
Questions for Authors: ## Accuracy of Diffusion-TTA with a Single Time Step as in Figure 3
The reduction of necessary time steps from an average of 180 to a dramatically lower number is a significant advantage. However, the verification of this reduction and a clear comparison with standard Diffusion-TTA remain unclear. Specifically, it is questioned whether averaging out the loss function $\mathcal{L}_{DUSA}(\theta, \phi; t)$ over T=180 under the same conditions as Diffusion-TTA would affect performance. Conversely, the performance of Diffusion-TTA with only a single timestep, as shown in Figure 3, is also in question.
## Formal Proof of Biased Approximation
Diffusion-TTA relies heavily on the Monte Carlo method across up to 180 timesteps, resulting in a biased approximation of the likelihood and high computational complexity. This methodological reliance is confusing since Monte Carlo estimations generally converge towards more accurate values with more samples. The paper mentions a biased approximation but lacks concrete proof. It is questioned whether the proposed DUSA method can demonstrate that it does not rely on biased approximations; if not, such claims might be considered exaggerated.
## Advantages Over Diffusion-TTA in Dense Prediction Tasks
There are doubts about whether Diffusion-TTA is also applicable and effective for dense prediction tasks, which is missing in the manuscript.
## Effect of Training Set on Diffusion Prior
It would be interesting to see the effects of using a diffusion model trained on ImageNet instead of one trained on a large dataset like Stable Diffusion for an apple-to-apple comparison. Given that Stable Diffusion benefits from a larger dataset, not comparing it to other models might be seen as a limitation. Despite the inevitable nature of research progression, the lack of analysis on this aspect in both the current paper and TTA-Diffusion is questioned. The potential for performance improvement remains, considering the model's training on noise through diffusion.
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The limitations of the paper have been highlighted within the weaknesses section, with detailed questions addressed above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: The algorithm in Appendix should be integrated into the main paper.**
**A1:** Thanks. Following your advice, we will integrate the algorithm into the main body of the paper in the revision.
**Q2: The term "fresh" is used ambiguously in lines 66-68.**
**A2:** Good suggestion. We agree that "a fresh proposition" might be ambiguous, and we clarify by restating it as "a novel proposition" and will update it in the revision.
**Q3: The verification of this reduction and a clear comparison with standard Diffusion-TTA remain unclear. Whether averaging out the loss function $\mathcal{L}_{DUSA}(\theta, \phi; t)$ over T=180 under the same conditions as Diffusion-TTA would affect performance.**
**A3:** Thanks. Diffusion-TTA is built on the Monte Carlo estimation of likelihood, and demands **randomly sampling up to 180 timesteps** to boost performance. By contrast, our DUSA utilizes semantic priors from a single timestep, and needs only a **single fixed timestep** to achieve superior performance.
For a clear comparison with the standard Diffusion-TTA, **we put our DUSA under the same conditions of Diffusion-TTA**: (a) we change the timestep selection **from fixed to random sampling**, (b) we increase the timesteps number **from 1 to 180**. The comparison results are as follows:
|ConvNeXt-L|Gauss.|Defoc.|Snow|Contr.|
|-|:-:|:-:|:-:|:-:|
|Diffusion-TTA|59.3|50.3|64.7|50.5|
|DUSA|**64.0 (+4.7)**|**54.7 (+4.4)**|**69.9 (+5.2)**|**68.0 (+17.5)**|
It can be seen that our DUSA significantly outperforms Diffusion-TTA even under its same conditions.
**Q4: Comparison of Diffusion-TTA and DUSA with only a single timestep, as shown in Figure 3.**
**A4:** Thanks. To compare under only a **single fixed** timestep, **modifications are made to Diffusion-TTA**: (a) we reduce the utilized number of timesteps **from 180 to 1**, (b) we change the timestep selection **from random sampling to a fixed timestep**. The comparison results for $t=100$ are as follows:
|ConvNeXt-L|Gauss.|Defoc.|Snow|Contr.|
|-|:-:|:-:|:-:|:-:|
|Diffusion-TTA|59.8|48.1|64.3|61.5|
|DUSA|**64.2 (+4.4)**|**54.7 (+6.6)**|**70.1 (+5.8)**|**68.9 (+7.4)**|
Our DUSA surpasses Diffusion-TTA by a large margin, justifying the superiority of DUSA with only a single timestep. More results on varied timesteps can be found in **Figure R1** in the uploaded PDF.
**Q5: The paper mentions a biased approximation but lacks concrete proof. Whether the proposed DUSA method can demonstrate that it does not rely on biased approximations.**
**A5:** Thanks. Indeed, our original claim "... for a biased approximation of likelihood" may cause confusion, and a better expression is "... to estimate a biased approximation of likelihood". Actually, the bias **comes from Diffusion-TTA's theoretical approximation of the likelihood** instead of the Monte Carlo method, and we provide a proof below.
Diffusion-TTA is built on the evidence lower bound (ELBO) of log-likelihood in diffusion:
$$\log p_\phi(x_0|c)\geq \mathbb{E}\_q\Big[\log\frac{p\_\phi(x\_{0:T}|c)}{q(x\_{1:T}|x_0)}\Big],\quad(1)$$
where $q$ is the forward process and $p_\phi$ is the backward process. We can further derive the ELBO in (1) with denoising network $\epsilon_\phi$:
$$\mathbb{E}_q\Big[\log\frac{p\_\phi(x\_{0:T}|c)}{q(x\_{1:T}|x_0)}\Big]=-\mathbb{E}\_\epsilon\Big[\sum\_{t=2}^Tw_t\\|\epsilon-\epsilon\_\phi(x_t,t,c)\\|_2^2-\log p\_\phi(x_0|x_1,c)\Big]+C\approx-T\mathbb{E}\_{\epsilon,t}[\\|\epsilon-\epsilon\_\phi(x_t,t,c)\\|_2^2]+C,\quad(2)$$
where the **theoretical approximation** is made by simplifying the weights $w_t$ to 1 and ignoring the $\log p\_\phi(x_0|x_1,c)$ term. Note that Diffusion-TTA actually estimates the simplified objective in (2); the result is thus **theoretically biased** from the real likelihood because the approximation in (2) prevents the bound in (1) from being tight.
In contrast, our DUSA objective in Eq. 10 is built on the theory in Eq. 4 with **no theoretical approximation**. The task model thus directly utilizes semantic priors from the diffusion model. We will add the discussions in the revision.
**Q6: Advantages over Diffusion-TTA in dense prediction tasks. Whether Diffusion-TTA is also applicable and effective for dense prediction tasks.**
**A6:** Thanks. Diffusion-TTA is applicable to dense prediction tasks, as depicted in [26]. However, for Diffusion-TTA to be effective, the diffusion model is assumed to have pixel-level conditioning capability, which requires re-training of diffusion models and reduces versatility. We re-train the diffusion model on ADE20K according to [26] to fit the segmentation task, and the comparison results are as follows:
|SegFormer-B5|Gauss.|Defoc.|Snow|Contr.|
|-|:-:|:-:|:-:|:-:|
|Diffusion-TTA|15.3|23.5|23.7|22.9|
|DUSA|**23.6 (+8.3)**|**24.7 (+1.2)**|**27.3 (+3.6)**|**27.1 (+4.2)**|
It can be seen that our DUSA consistently outperforms Diffusion-TTA on the dense prediction task.
**Q7: The effects of using a diffusion model trained on ImageNet instead of one trained on a large dataset like Stable Diffusion. Given that Stable Diffusion benefits from a larger dataset, not comparing it to other models might be seen as a limitation.**
**A7:** We appreciate the reviewer's insights in exploring our DUSA on diffusion models with varying training sets. We would like to clarify that, for image classification, an ImageNet pre-trained Diffusion Transformer (DiT) is already used throughout our paper. We only use Stable Diffusion-based ControlNet for the segmentation task. As suggested, we further compare to other diffusion models and provide results for classification in **Table R2** and for segmentation in **Table R3** in the uploaded PDF.
We can find that a diffusion model trained on the same distribution as the task model is generally more beneficial for adaptation. Intuitively, a diffusion model that is aware of the task model's pre-training data eliminates the disagreement between the two, fostering their cooperation under a narrowed distribution gap.
---
Rebuttal Comment 1.1:
Comment: **Regarding all questions except Question 5**
Thank you for addressing my concerns related to the experiments. Your comprehensive work, especially given the short timeframe, has completely resolved my questions.
**Regarding Question 5**
I am still unclear on a particular point. While I understand that Diffusion-TTA relies on a biased approximation due to the optimization of the ELBO, I’m confused about the biased approximation mentioned in Eq (7), even though the current formulation begins from Eq (4). It seems to me that this could be considered a form of circular reasoning since the denoising estimator in Eq (7) is trained using the ELBO. Is my interpretation correct?
---
Reply to Comment 1.1.1:
Title: Further response to concerns on approximation bias
Comment: Dear Reviewer V2Wk,
We are glad to hear that your concerns related to the experiments have been completely addressed. We would like to make a further demonstration of the **unbiased estimation** in our DUSA. As a short answer, the denoising estimator in Eq (7) is actually **unbiased**, there is **no theoretical approximation** for Eq (7), and we argue that **no circular reasoning** exists in our DUSA. Please find the details below.
In Eq (7), we show that the true noise $\epsilon$ equals the weighted ensemble of conditional score functions $\nabla_{x_t}\log p(x_t|y)$ **with no approximation**. The reviewer's confusion might come from our **Eq (8)**, where the conditional score functions $\nabla_{x_t}\log p(x_t|y)$ are further **estimated** by conditional noise estimations $\epsilon_\phi(x_t,t,c_y)$ from the diffusion model.
We would like to clarify that the approximation symbol $\approx$ in Eq (8) **does not indicate a bias in estimation**, but rather indicates that the conditional noise estimations **might not be totally accurate** as they are predictions from the denoising network in diffusion. Indeed, **the denoising estimators are unbiased**, as they are trained by the simplified objective in Eq (3). To train a diffusion model with Eq (3), we first randomly sample noise $\epsilon\sim\mathcal{N}(0,I)$ and uniformly sample a timestep $t$, and obtain the noised sample $x_t$ following Eq (2). Then, the training objective in Eq (3) is optimized by minimizing:
$$\mathcal{L}(\phi)=\mathbb{E}\_{x_t|\epsilon,t}[\\|\epsilon-\epsilon_\phi(x_t,t,c_y)\\|_2^2].$$
At the optimal point of the objective, we have:
$$\frac{\partial\mathcal{L}(\phi)}{\partial\epsilon_\phi}=-\mathbb{E}\_{x_t|\epsilon,t}[2(\epsilon-\epsilon_\phi(x_t,t,c_y))]=0,$$
which implies:
$$\mathbb{E}\_{x_t|\epsilon,t}[\epsilon_\phi(x_t,t,c_y)]=\mathbb{E}\_{x_t|\epsilon,t}[\epsilon]=\epsilon,\quad\text{Eq}~(*)$$
meaning that the conditional noise estimations $\epsilon_\phi(x_t,t,c_y)$ are **unbiased** with regard to the true noise $\epsilon$.
For a more comprehensive understanding of the **unbiased estimation** in our DUSA, we provide a step-by-step explanation below, justifying that **there is no circular reasoning**.
1. We start from the **equation** Eq (4), where the unconditional score function $\nabla_x\log p(x)$ can be expressed by an ensemble of conditional score functions $\nabla_x\log p(x|y)$.
2. With Eq (6) in Corollary 1, we show that there is an **equality** between the score function and the true noise $\epsilon$ at every single timestep.
3. We now replace the unconditional score function in Eq (4) with Eq (6) and get the **equation** in Eq (7).
We remind that **no approximation is made** during all steps above. Also, Corollary 1 indicates that:
$$\nabla_{x_t}\log p(x_t|y)=-\frac{\epsilon}{\sqrt{1-\bar{\alpha}_t}},\quad\text{Eq}~(**)$$
as the true noise $\epsilon$ is sampled in the forward process and is thus independent of the conditioning. The proof for conditional score functions resembles that for Corollary 1 in Appendix C.
4. We then use conditional noise estimations $\epsilon_\phi(x_t,t,c_y)$ to **estimate** the true noise $\epsilon$ in Eq $(**)$, where the estimations are all **unbiased**, as proved in Eq $(*)$. This step results in Eq (8).
5. Combining Eq (7) and Eq (8), we instantly get Eq (9) and thus our DUSA objective in Eq (10). This step involves **no** approximation or estimation.
Through this **non-circular** chain of reasoning, we can find that in DUSA there is **no theoretical approximation** and **the estimations are all unbiased**.
We hope the proofs and chain of reasoning above have addressed the theoretical concerns. Please let us know if you have further questions or suggestions, and we are more than happy to continue the discussion.
Best regards,
Submission335 Authors. | Rebuttal 1:
Rebuttal: We sincerely thank all reviewers for their efforts in reviewing our paper and for the constructive feedback, which is quite helpful in improving the quality of the paper. We are more than encouraged that reviewers find:
+ our research topic of utilizing discriminative semantic priors within diffusion models to enhance discriminative task models to be **novel**, **interesting** and **valuable** (*Reviewer oxUu*).
+ our method to have a **significant advantage in efficiency** and offer an **intriguing discovery** (*Reviewer V2Wk*), and to be **novel** (*Reviewer Your*, *Reviewer oxUu*), **valuable** (*Reviewer oxUu*), and **well-motivated** (*Reviewer 6Z3D*).
+ our experimental results to show **remarkable gain** (*Reviewer V2Wk*), to **outperform by large margins** (*Reviewer 6Z3D*, *Reviewer oxUu*) and be **significant** (*Reviewer oxUu*), to include **adequate ablations** (*Reviewer 6Z3D*), and to be **outstanding** and **reproducible** (*Reviewer Your*).
+ our advancements to be **particularly crucial** (*Reviewer V2Wk*).
+ our paper to be **well-written** (*Reviewer 6Z3D*), with **outstanding** and **well-designed** illustration (*Reviewer Your*).
We highly value the concerns and suggestions from each reviewer and have done our utmost to address them with detailed responses. A one-page PDF is also uploaded to provide better demonstrations of our responses, where the figures and tables are referred to as **Figure R** and **Table R** for the sake of clarity.
We hope that our rebuttal has sufficiently addressed the concerns raised by the reviewers. Please reply if you have any further questions, and we will be more than happy to continue the discussion.
Pdf: /pdf/077b52d3c3aff66eb82fa011fd1f45842c27b355.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
How Does Message Passing Improve Collaborative Filtering? | Accept (poster) | Summary: This paper rethinks the application of message passing mechanisms in collaborative filtering methods and makes two key findings: 1) Message passing (MP) improves collaborative filtering primarily through the forward pass rather than the backward propagation process, and 2) MP is more effective for cold-start users compared to warm users. Based on these findings, this paper proposes a novel test-time aggregation method tailored for collaborative filtering, supported by theoretical derivations to demonstrate its validity. Finally, extensive experiments validate the effectiveness of the proposed method.
Strengths: 1. This paper rethinks the message passing mechanisms in collaborative filtering and makes two novel key findings.
2. The method has theoretical guarantees to ensure its effectiveness.
3. The structure of this work is logical, and the writing is well-organized.
Weaknesses: 1. Equation 4 seems unable to separate the forward pass neighbor representation from the backward pass gradient update in $s_{ij}$. Therefore, it is theoretically challenging to validate this finding, even though Table 1 empirically demonstrates that the forward pass primarily drives the performance improvement.
2. Typically, evaluation metrics for recommendation models use NDCG@10 and Recall@10, as users tend to focus on items ranked higher in the list while ignoring those ranked lower. Therefore, NDCG@10 and Recall@10 better reflect the true effectiveness of recommendation models.
3. It is necessary to explain some abbreviations, such as 'OOM' and '-'.
4. This paper has many grammatical problems, so it is suggested to improve it. For example, "TAG-CF is extremely versatile can be used as a plug-and-play module to enhance...".
Technical Quality: 3
Clarity: 3
Questions for Authors: See weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback. We really appreciate your acknowledgment of our proposal's significance, theoretical soundness, and presentation. Please see below for our detailed response:
## W1: Clarification on Equation 4.
* The purpose of Equation 4 is to compare a vanilla matrix factorization model and the LightGCN model in terms of how user-item similarities are computed. Once we identify the differences between these two models, we can individually ablate/remove the additional inductive biases given by LightGCN and observe the performance downgrade to determine their significance to the recommendation performance. Specifically, for matrix factorization models, the similarity computation is extremely intuitive: it is simply the inner product between the corresponding user and item embeddings (i.e., $s_{ij}=\mathbf{u}_i^\intercal \cdot \mathbf{i}_j$). For LightGCN, by contrast, the similarity computation is a combination of the same inner product and inner products between adjacent node embeddings. In this paper, we start our analysis from the perspectives of forward and backward passes.
* From the perspective of the backward pass, the idea is that while incorporating similarities between adjacent nodes, the positive supervision signal from the target user-item pair is also back-propagated to adjacent node embeddings. This could improve the recommendation performance because of the idea behind collaborative filtering (i.e., in the case of matrix factorization, neighbors' similarity scores are highly correlated with the similarity of the target user-item pair). To determine whether this is the case and how much such back-propagation improves the recommendation performance, we remove the gradients flowing back to neighbor embeddings (i.e., in implementation, we stop these gradients with PyTorch's detach operation) and observe the resulting performance downgrade. This variant, denoted LightGCN w/o grad., surprisingly shows little performance downgrade.
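* A minimal sketch of this gradient-stopping ablation (with illustrative variable names and a toy graph, not our actual codebase) is the following: neighbors still contribute their representations during the forward pass, but `.detach()` prevents any gradient from flowing back into them.

```python
# Sketch of the "LightGCN w/o grad." ablation: one message-passing layer
# where gradients are optionally stopped from flowing back into neighbor
# embeddings via .detach(). Names and the toy graph are illustrative.
import torch

def propagate(emb: torch.Tensor, adj: torch.Tensor, stop_neighbor_grad: bool) -> torch.Tensor:
    """One LightGCN-style propagation step: emb' = emb + A_norm @ emb."""
    neigh = emb.detach() if stop_neighbor_grad else emb  # ablation switch
    return emb + adj @ neigh

# Toy setup: 4 nodes, 2-dim embeddings, symmetric normalized adjacency.
torch.manual_seed(0)
emb = torch.randn(4, 2, requires_grad=True)
adj = torch.tensor([[0., .5, .5, 0.],
                    [.5, 0., 0., .5],
                    [.5, 0., 0., .5],
                    [0., .5, .5, 0.]])

# Full message passing: a loss on node 0 also updates its neighbors 1 and 2.
out = propagate(emb, adj, stop_neighbor_grad=False)
out[0].sum().backward()
grad_full = emb.grad.clone()
emb.grad = None

# Ablated variant: neighbors feed the forward pass but get zero gradient.
out = propagate(emb, adj, stop_neighbor_grad=True)
out[0].sum().backward()
grad_ablate = emb.grad.clone()

print(grad_full)    # rows 1 and 2 are nonzero: neighbors receive updates
print(grad_ablate)  # only row 0 is nonzero: neighbor updates are removed
```

Comparing `grad_full` with `grad_ablate` isolates exactly the back-propagation effect of message passing, while the forward-pass aggregation is identical in both runs.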
* From the perspective of the forward pass, the idea is that to calculate the similarity between a pair of user and item, LightGCN also incorporates a weighted summation of similarities between adjacent nodes. The underlying assumption is that neighbors' similarity scores are highly correlated with the similarity of the target user-item pair. To determine whether this operation is helpful, we need an ablation that only removes the numerical values of neighbors' similarities and keeps everything else intact. The most straightforward way is removing the neighbor similarities from the calculation while keeping the gradients brought by this calculation, so that we do not ablate both backward and forward passes at the same time. This is our variant coined LightGCN w/o neigh. info, which shows that neighbor information drives the good performance of message passing in LightGCN.
* This phenomenon indicates that (1) both the additional representations passed from neighbors during the forward pass and the accompanying gradient updates to neighbors during back-propagation help the recommendation performance, and (2) within the total performance gains brought by MP, gains from the forward pass dominate those brought by back-propagation. Comparing LightGCN with LightGCN w/o neigh. info, we notice that the contribution of the gradient updates brought by MP is relatively incremental (i.e., ~2\%). However, to facilitate these additional gradient updates for slightly better performance, LightGCN is required to conduct MP at each batch, which brings tremendous additional overhead. This rationale motivates our further investigation and the proposal of TAG-CF.
## W2: Additional results for ranking metrics at 10.
* Thanks for pointing this out. We explored this setting following existing works such as LightGCN and UltraGCN. We agree with you that Recall and NDCG at 10 can better reflect the true performance. Following your suggestion, we report Recall and NDCG at 10 in the one-page supplemental material, and we observe that TAG-CF consistently brings performance improvement to baseline models under these stricter metrics, which further validates our proposal.
## W3, W4: Presentation improvement to the manuscript.
* Thanks for your suggestions. In Table 2, “OOM” refers to out-of-memory, and we use “-” to represent out-of-memory as well to avoid repetitive notation. We will clarify this point in the table caption. Furthermore, we will double-check for spelling and grammatical errors remaining in the manuscript to ensure readability.
**In light of our answers to your concerns, we hope you consider raising your score. If you have any more concerns, please do not hesitate to ask and we'll be happy to respond.**
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response; it has resolved my concerns. I will maintain my positive rating.
---
Reply to Comment 1.1.1:
Title: Thanks for your response.
Comment: We are delighted that our rebuttal satisfactorily addresses your concerns. We appreciate your kind response and acknowledgment of our work. | Summary: This paper investigates the role of message passing in collaborative filtering, providing empirical analysis of its impact. Based on their findings that message passing primarily benefits through additional neighbor representations during forward passes and helps low-degree nodes more than high-degree nodes, the authors propose TAG-CF, a test-time aggregation framework for collaborative filtering. TAG-CF demonstrates competitive performance with state-of-the-art graph-based collaborative filtering methods while significantly reducing computational overhead.
Strengths: 1. The paper experimentally analyzes the impact of message passing on collaborative filtering.
2. Based on the analysis results, an efficient TAG-CF method is proposed.
3. Extensive experiments were conducted on various datasets.
Weaknesses: W1. Crucially, this paper overlooks related work that does not require training.
W2. The paper claims that TAG-CF can achieve performance similar to training-time message passing with just one aggregation at test time. More detailed analysis is needed to determine if this holds true in all situations and under what conditions these results occur.
W3. While it's claimed that performance improvement is greater for low-degree nodes, the theoretical explanation for this is somewhat lacking. A deeper analysis of why this phenomenon occurs is needed. In particular, it seems highly dependent on the degree cutoff.
Technical Quality: 2
Clarity: 2
Questions for Authors: Q1. The authors have overlooked related work, such as GF-CF[1], BSPM[2], and SVD-AE[3], which are models without trainable parameters and training stages. The authors need to include these in their experimental results for comparison with test-time aggregation research or describe their commonalities and differences in the related work section. These should be mentioned as essential research before discussing test-time aggregation.
Q2. When the authors refer to low-degree nodes, is this based on the entire dataset or just the test or training set? If it's based on the entire dataset, couldn't the performance improvement be biased depending on how the test set is split?
Q3. The method for selecting the degree cutoff in TAG-CF+ seems somewhat heuristic. Is there a more systematic method for choosing this parameter?
Q4. What problems might arise when applying this to large-scale recommendation systems in practice?
Q5. Have you analyzed how TAG-CF's performance varies with the structural characteristics of the graph (e.g., node centrality, average path length, etc.)?
Q6. How does the dimension of user/item representations affect the performance of TAG-CF?
> [1] Shen, Yifei, et al. "How powerful is graph convolution for recommendation?." Proceedings of the 30th ACM international conference on information & knowledge management. 2021.
>
> [2] Choi, Jeongwhan, et al. "Blurring-sharpening process models for collaborative filtering." Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval. 2023.
>
> [3] Hong, Seoyoung, et al. "SVD-AE: Simple Autoencoders for Collaborative Filtering." arXiv preprint arXiv:2405.04746 (2024).
Confidence: 5
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback. We really appreciate your acknowledgment of our proposal's efficiency and our experiment's extensiveness. Please see below for our detailed response:
## W1, Q1: Missing related works
* Thanks for pointing this out. Please see G1 (general response) for details. In summary, we (i) add experiments on 4 efficient baselines, and (ii) add a dedicated section on related efficient training for matrix factorization (MF). TAG-CF consistently improves efficient MF methods (i.e., GF-CF, SVD-GCN, SVD-AE, and Turbo-CF). On average, TAG-CF improves these methods by 5.6%, 8.9%, 3.9%, 7.8%, and 7.6% on our five datasets resp.
## W2: Analysis w.r.t. behaviors of TAG-CF
* In our paper, we claim that test-time message passing (MP) improves the performance of MF methods similarly to training-time MP. We discover that MP (both at training and testing time) helps low-degree users more than high-degree ones. In Table 2, the majority of cases (15/18 cases, or 83.3%) support this finding. In rare cases where high-degree users benefit more, the gap between groups is marginal and our proposed TAGCF still improves the performance (i.e., 6.7%, 2.3%, and 1.8% overall NDCG improvement). TAGCF consistently improves the performance of MF methods over 6 datasets (Table 2). Besides, we also apply TAGCF to training-free baselines (i.e., GFCF, SVD-GCN, SVD-AE, and Turbo-CF), as you suggest, and we observe that TAGCF consistently improves upon them. The added experiments can be found in the 1pg supplement. On average, TAGCF improves 4 training-free methods by 5.6%, 8.9%, 3.9%, 7.8%, and 7.6% on five datasets resp.
## W3: Analysis and discussion about degree cut-off
* We agree with you that the behaviors of TAG-CF depend on the degree cutoff, as we show that MP (both at training and testing time) improves performance for low-degree users more than for high-degree ones. Empirically, the majority of cases (15/18 in Table 2) support our finding. Moreover, we prove that both BPR and DirectAU optimize the objective of MP with some additional regularization (i.e., dissimilarity between non-existing user/item pairs for BPR, and representation uniformity for DirectAU).
* Combining this theory with the prior empirical observations, we show that these two supervision signals inadvertently conduct MP in the backward step, even without explicitly treating interaction data as graphs (see lines 219-237). To substantiate, we slowly upsample low-degree training users and observe the improvement that TAGCF introduces at each upsampling rate (see Appx. E).
## Q2: Definition of low-degree user
* We agree with your point: when the full set is used to calculate degree, experiments might be biased towards testing splits. We use the _training set alone_ to calculate user degrees, as our findings are closely correlated with the amount of exposure a user embedding gets _during training_.
## Q3: Degree cut-off selection
* There are 2 strategies to select degree cut-offs in TAG-CF+. One optimizes ranking performance, and the other focuses on budget (i.e. performance/cost ratio).
* For the former: We treat degree cut-offs as hyper-parameter tuning and select the one that with best validation metrics. We explore this in the original draft (circled optimal points in Figure 2/3).
* For the latter: GNN inference in industrial settings is prohibitively expensive especially for high-degree users, as their receptive field increases exponentially. As shown in Figure 2, performance gain from TAG-CF+ plateaus with a relatively small user degree. In these cases, the cost for further conducting TAG-CF+ on high-degree users grows exponentially, but doing so brings limited gains. Hence, we can also select the cut-off s.t. the performance/cost ratio starts to plateau. We will add this discussion to our manuscript.
## Q4: Challenges in large-scale RecSys
* The main challenge in applying TAG-CF to large-scale RecSys is utilizing graphs within large-scale ML pipelines. There are often O(100 million) users/items and O(trillion) edges connecting them, so simply loading the entire sparse interaction matrix into CPU RAM is infeasible. TAG-CF utilizes graph knowledge while avoiding most of the computational overhead of MP. It is extremely flexible, simple to implement, and enjoys the benefits of graph-based CF at minimal cost.
## Q5: Other graph characteristics
* Thanks for pointing this out. We conduct additional experiments on Katz and Betweenness centrality. We split users equally into 3 buckets according to these characteristics and show the improvement brought by TAGCF to each bucket. Since these characteristics are expensive to compute, we only conduct this experiment on MovieLens-1M. We observe that improvement gradually increases as measurements go down, which echoes our observations on node degree.
||NDCG@20 Impr.|Recall@20 Impr.
|:-|:-:|:-:|
||Katz
High|29.2%|7.2%
Med|33.4%|7.9%
Low|35.2%|8.9%
||Betweenness
High|27.8%|6.9%
Med|34.7%|8.1%
Low|35.6%|9.0%
## Q6: Embedding dimension
* We conduct experiments on the improvement from TAGCF to MF with different dimensions. Due to time limitations and costs for training larger models, we only conduct experiments on Anime and MovieLens-1M. We observe that the performance of MF increases as the dimension increases. TAG-CF can consistently improve the performance of MF models with similar trends as observed in the original draft.
|Dimension|64|128|256
|:-|:-:|:-:|:-:|
|| Anime
NDCG@20 MF|24.01|30.14|34.73
+TAG-CF|27.25(9.8%)|32.82(9.0%)|37.96(9.3%)
Recall@20 MF|29.15|35.78|38.43
+TAG-CF|31.95(6.9%)|38.10(6.5%)|41.01(6.7%)
||MovieLens-1M
NDCG@20 MF|22.51|24.34|25.08
+TAG-CF|29.65(31.7%)|32.18(32.2%)|32.73(30.5%)
Recall@20 MF|26.30|28.82|29.57
+TAG-CF|28.40(8.0%)|31.09(7.9%)|31.87(7.7%)
**In light of our answers to your concerns, we hope you consider raising your score. If you have any more concerns, please do not hesitate to ask and we'll be happy to respond.**
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my concerns. I raise my score to 5.
---
Reply to Comment 1.1.1:
Title: Thanks for your reply.
Comment: We greatly appreciate your valuable insights and constructive feedback on our paper. It's truly encouraging to see that our response effectively addresses the concerns you raised. Thank you for recognizing the efforts we've put into our work. | Summary: The paper investigates on an interesting topic: how message passing is playing a role in graph-based recommender systems. Upon experiments on LightGCN model, authors posit the advantages of message passing lie in the forward pass to aggregate neighborhood information while at the same time, backward propagation has little effect on the results. Based on this assumption, authors propose test-time aggregation for collaborative filtering where the representation of users/items is aggregated at the testing time for more efficient training.
Strengths: 1. The paper identifies roles for message passing in graph-based recommendation.
2. It provides more insights for researchers to understand the mechanism behind graph-based recommendation.
3. The proposed method is efficient.
Weaknesses: 1. Lack of pre-training description
2. Lack of hyper-parameter analysis
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What is the detailed pre-training method for the method? I am assuming the pre-training strategy is impactful for the results.
2. An experiment on different pre-training methods is needed.
3. Authors design m,n hyper-parameters to adjust the neighborhood impact. Are there any findings we can obtain from the two hyper-parameters? I would like to see hyper-parameter testing on them.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback. We really appreciate your acknowledgment of our proposal's significance, insights, and efficiency. Please see below for our detailed response:
## W1, Q1, Q2: Pre-training strategies for MF models
* We agree with you that the pre-training strategies are impactful for the results, as TAG-CF is a test-time augmentation framework whose performance significantly depends on the starting performance of the base model that it improves on. In Table 2 of our original manuscript, we applied TAG-CF to MF methods pre-trained by ENMF, DirectAU, and UltraGCN. In Table 5, we also applied TAG-CF to MF trained by BPR. All these applications of TAG-CF to MF methods pre-trained by different strategies show consistent performance improvement. Nevertheless, as suggested by other reviewers, we also apply TAG-CF to four other training-free matrix factorization methods (i.e., GFCF, SVD-GCN, SVD-AE, and Turbo-CF). As shown in the one-page additional material, TAG-CF can still consistently improve the performance of these efficient matrix factorization methods. On average, TAG-CF improves these four methods by 5.6%, 8.9%, 3.9%, 7.8%, and 7.6% on Amazon-book, Anime, Gowalla, Yelp-2018, and MovieLens-1M respectively.
## W2, Q3: Sensitivity experiments for m and n.
* Thanks for pointing this out. Please refer to G2 in our general response for details.
**In light of our answers to your concerns, we hope you consider raising your score. If you have any more concerns, please do not hesitate to ask and we'll be happy to respond.**
---
Rebuttal Comment 1.1:
Title: keep score unchanged
Comment: I will keep my positive score unchanged. Thank you for author's rebuttal
---
Reply to Comment 1.1.1:
Title: Thanks for your reply.
Comment: Your valuable insights and constructive feedback on our paper are deeply appreciated. We are grateful for your acknowledgment of the dedication we've invested in our research. | Summary: This paper investigates the role of message passing (MP) in collaborative filtering (CF). Unlike most GNN-based CF research, which assumes that performance gains arise from improved representation learning through GNNs, this work questions that assumption. Through empirical experiments and theoretical analyses, the paper finds that the key contributions of MP in CF are: 1) forward message passing rather than back-propagation, and 2) benefits for low-degree users instead of high-degree users.
Based on these observations, this work proposes a simple and efficient plug-in method called TAG-CF. TAG-CF can be easily integrated into any well-trained CF model by performing a single layer of message passing during the inference stage. To further reduce computational workload, TAG-CF+ conducts MP only on low-degree users. Experiments on four datasets verify the effectiveness of TAG-CF in improving CF models with almost no additional computational overhead.
Strengths: 1. This paper focuses on an important open problem: what role MP plays in CF. This problem is worth investigating, and this work attempts to answer it from both empirical and theoretical perspectives.
2. This article is easy to follow, with a natural transition from the analysis of MP to the proposal of TAG-CF.
3. The proposed TAG-CF is efficient and effective. The plug-in nature of the method ensures the industrial application of the proposed method.
Weaknesses: 1. The discussion following Theorem 1, "both BPR and DirectAU optimize the objective of message passing", is not rigorous enough. Here are some aspects regarding the discussion:
* The assumption that $\|\mathbf{u}_i\|^2=\|\mathbf{i}_j\|^2=1$ does not hold in LightGCN, thus the discussion on the theorem is only empirical but not strictly hold for LightGCN. In other words, the BPR loss for real-world LightGCN may not be upper-bounded by the objective of message passing.
* Even if the claim "both BPR and DirectAU optimize the objective of message passing" holds, there should be an additional experiment showing that directly training a CF model on the objective of message passing achieves performance comparable to LightGCN-BPR.
2. Important baselines such as GFCF [1] and SVD-GCN [2] are missing. These works are also dedicated to speeding up GNN-based CF. On the Gowalla dataset, both GFCF and SVD-GCN cost only around 30s for training, which is more efficient than the proposed TAG-CF. Moreover, the reported Recall@20 values on Gowalla are 0.1849 and 0.1905 respectively, showing comparable or better performance. The authors need to add these baselines to the experiments and discuss the advantage of TAG-CF compared to these baselines.
3. According to Figure 1, it is not sufficient to draw the conclusion that "Message Passing in CF Helps Low-degree Users More Compared with High-degrees". In the Yelp dataset, the improvement for degree-16 users is only about 15%, which is less than the value for most higher-degree users (degree from 17-34). And what is the improvement for users whose degree is less than 16? Still, this empirical experiment on only two datasets is not enough to support this conclusion.
[1] Yifei Shen, et al., “How Powerful is Graph Convolution for Recommendation?” CIKM 2021
[2] Shaowen Peng, et al., “SVD-GCN: A Simplified Graph Convolution Paradigm for Recommendation”. CIKM 2022
Technical Quality: 2
Clarity: 3
Questions for Authors: Besides the weakness part, I also have some questions:
1. What is the setting of w/o both in Table 1? If it is an MF model using BPR, how can it achieve such good performance (Recall@20=18.42)? In Table 2, the MF model also shows surprisingly good performance on most datasets, almost comparable to LightGCN; how is the MF model set up and tuned in the experiments?
2. In Table 2, low-degree users' performance is even better than the overall performance, which seems to be inconsistent with Figure 1. Are there any explanations?
3. How are m and n tuned in the experiments? Are there any sensitivity experiments on these important hyper-parameters?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback. We really appreciate your acknowledgment of our paper’s significance, written quality, and practicality. Our detailed response to your concerns is as follows:
## W1.1: $|u_i|^2 = |i_j|^2 = 1$ does not hold in LightGCN
* We agree that $|u_i|^2 = |i_j|^2 = 1$ does not hold in LightGCN due to the message passing (MP) and the mean averaging across embeddings from different MP layers; all of these operations void this assumption. However, the main purpose of this theorem is to support our claim that matrix factorization (MF) methods with trending objective functions (e.g., BPR and DirectAU) partially optimize the objective of MP with some additional regularization (i.e., dissimilarity between non-existing user/item pairs for BPR, and representation uniformity for DirectAU). Hence, directly optimizing these two objectives for MF partially fulfills the effects brought by MP during the back-propagation. We will refine our theorem and restrict it to MF methods, where the assumption of $|u_i|^2 = |i_j|^2 = 1$ makes sense. We sincerely appreciate your constructive feedback; the refined theorem will be: “During the training of MF methods, objectives of BPR and DirectAU are strictly upper-bounded by the objective of message passing.”
## W1.2: Directly training MF using the message passing objective
* The objective of the MP layer is $\mathcal{O}=\min_\mathbf{Z}\{tr(\mathbf{Z}^\intercal\mathbf{L}\mathbf{Z})\}$, which enforces embeddings of adjacent nodes to be similar. Training an MF model to directly optimize this objective is equivalent to BPR without negative sampling or DirectAU without the uniformity term, which will lead to overfitting and collapsing issues.
* The purpose of our theorem is to connect the gain brought by MP to node degree. Our theorem shows that BPR and DirectAU partially conduct inadvertent MP during the back-propagation. In the case of CF, the amount of training signals for a user is directly proportional to the node degree. High-degree active users naturally benefit more from the inadvertent MP from objective functions, because they acquire more training signals. Hence, when explicit MP is applied to CF methods, the performance gain for high-degree users is less significant than that for low-degree users, because the contribution of MP over high-degree nodes has been mostly fulfilled by the inadvertent MP during training.
* Reflecting on your suggestion, we believe that studying the direct correlation between the objective of MP and matrix factorization models can provide understanding of the role of MP for CF. Hence, we provide the value of the MP objective quantified by $\sum_{(i,j)\in\mathcal{D}} ||\mathbf{u}_i - \mathbf{i}_j||^2$ at different training steps. The results for MF trained with DirectAU on MovieLens-1M are shown below:
|Training steps (epoch)|0|2|4|6|8|10|
|:-|:-:|:-:|:-:|:-:|:-:|:-:|
|$\|\|u_i-i_j\|\|^2$|0.931|0.493|0.478|0.464|0.461|0.461|
* We notice that the objective of MP is gradually optimized as training progresses and plateaus when the model converges, which echoes our finding that MF training optimizes the objective of MP with some additional regularizations.
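As a quick numerical sanity check of the identity behind this quantity (our own illustration, not part of the rebuttal): for the unnormalized Laplacian $L = D - A$, the message-passing objective $tr(\mathbf{Z}^\intercal\mathbf{L}\mathbf{Z})$ equals the edge-wise sum of squared embedding differences tracked above.

```python
import numpy as np

# For an unnormalized Laplacian L = D - A, tr(Z^T L Z) equals
# sum over edges (i, j) of ||z_i - z_j||^2.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)     # adjacency of a 4-node graph
L = np.diag(A.sum(axis=1)) - A                # unnormalized graph Laplacian
Z = rng.normal(size=(4, 8))                   # node embeddings

quad = np.trace(Z.T @ L @ Z)
edges = [(i, j) for i in range(4) for j in range(i + 1, 4) if A[i, j]]
edge_sum = sum(np.sum((Z[i] - Z[j]) ** 2) for i, j in edges)
assert np.isclose(quad, edge_sum)
```

This makes explicit why minimizing the quadratic form pulls embeddings of adjacent nodes together.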
## W2: Missing important baselines
* Thanks for pointing this out. Please see G1 (general response) for details. In summary, we (i) add experiments over 4 efficient baselines, and (ii) add a dedicated section about related efficient training for matrix factorization (MF). TAG-CF consistently improves efficient MF methods (i.e., GFCF, SVD-GCN, SVD-AE, and Turbo-CF). On average, TAG-CF improves these methods by 5.6%, 8.9%, 3.9%, 7.8%, and 7.6% on our five datasets, respectively.
## W3, Q2: Insufficient conclusion in Figure 1
* In Figure 1, we indeed observe a local increasing trend in performance improvement from degree 16 to 20. However, when we compare the performance improvement of low-degree users as a whole to that of the high-degree users, we can still observe that the improvement for low-degree users is larger than that for high-degree users (e.g., 4.8% on low-degree vs. 2.6% on overall for ENMF and 11.2% vs. 10.4% for UltraGCN). We do admit that there are certain cases where the low-degree improvement is slightly worse than the overall (i.e., 1.7% on low-degree vs. 1.8% on overall for MF). In fact, in these cases, the gaps in-between are very marginal (i.e., 0.1%) compared with gaps we observe for other cases (i.e., 2.2% and 0.8%). In Table 2, only 3 out of 18 cases (i.e., 16.7%) are scenarios where the low-degree improvement is smaller, and all remaining 15 cases (83.3%) support our finding that the low-degree improvement is larger. Besides, even in these three rare cases, our proposed TAG-CF still improves the performance (i.e., 6.7%, 2.3%, and 1.8% overall improvement in NDCG). So we believe that the conclusion we draw in this section is sufficient given abundant supporting evidence from experiments.
## Q1: Setting in Table 1
We train both MF and LightGCN using the DirectAU loss and explore the same setting we have for Table 2. We individually tune hyper-parameters for MF and LightGCN (i.e., learning rate, batch size, and weight decay) to make sure that all models perform at their best, so that our findings are not based on weaker baselines. A vanilla MF model trained by DirectAU can already outperform LightGCN trained with the BPR loss, which is the reason why we explore the DirectAU loss for both MF and LightGCN in the first place. When we train both models using DirectAU, their performance gap is not as large as the gap between models trained with the BPR loss. We will include a dedicated section describing the hyper-parameters we used for each dataset.
## Q3: Sensitivity experiments for m and n
* Thanks for pointing this out. Please refer to G2 in our general response for details.
**In light of our answers to your concerns, we hope you consider raising your score. If you have any more concerns, please do not hesitate to ask and we'll be happy to respond.** | Rebuttal 1:
Rebuttal: We thank the reviewers for their feedback and constructive suggestions. We are pleased that most reviewers appreciated **the promising efficiency and effectiveness of our proposal**, e.g.: "the proposed TAG-CF is efficient and effective and ensures the industrial application of the proposed method (jYGx)", "the proposed method is efficient (yNfp)", and "efficient TAG-CF method is proposed (3fxq)". Besides, most reviewers also recognized **the promising impact of our research**, noting: "focuses on an important open problem (jYGx)", "provides more insights for researchers (yNfp)", and "rethinks the message passing and makes two novel key findings (vJWR)".
**[G1: Additional experiments and discussions w.r.t. related works]** At the same time, multiple reviewers are concerned about the coverage of our related works and experiments, especially for the efficient baselines. We appreciate the insightful suggestions and agree that our work is relevant to other efficiency-focused efforts in recommender systems. To address this comment, we not only conduct additional experiments over four efficient baselines (i.e., GFCF [1], SVD-GCN [2], SVD-AE [3], and Turbo-CF [4]) but also add a dedicated section discussing related works about efficient training for matrix factorization. Before we dive into the experimental results, we would like to further note that TAG-CF is a test-time augmentation framework, so TAG-CF needs to be coupled with existing matrix factorization baselines (e.g., MF-BPR, MF-DirectAU, UltraGCN, GFCF, SVD-GCN, etc.). It does not make sense to compare the performance of TAG-CF across baselines, because better performance might come from TAG-CF itself or from a stronger backbone matrix factorization model, making the comparison between baselines non-trivial. In our original manuscript, we apply TAG-CF to a series of matrix factorization methods (i.e., MF-BPR, ENMF, and UltraGCN), and TAG-CF consistently enhances their performance. Following this setting, we also apply TAG-CF to other matrix factorization methods as suggested by reviewers. Their performance and the improvement brought by TAG-CF are systematically presented in the one-page pdf. From Table 10, we can observe that TAG-CF can still consistently improve the performance of these efficient matrix factorization methods (i.e., GFCF, SVD-GCN, SVD-AE, and Turbo-CF). On average, TAG-CF improves these four methods by 5.6%, 8.9%, 3.9%, 7.8%, and 7.6% on Amazon-book, Anime, Gowalla, Yelp-2018, and MovieLens-1M respectively.
Besides additional experimental results, we also drafted a section discussing related works as follows:
Efficient efforts in matrix factorization. A branch of research specifically focuses on improving the efficiency of matrix factorization [1,2,3,4,5]. For instance, GFCF [1] and Turbo-CF [4] explore graph signal processing to linearly convolve the interaction matrix and use the resulting matrix directly for recommendation without training. Furthermore, SVD-GCN [2] and SVD-AE [3] utilize a low-rank version of the interaction matrix to further accelerate the convolution while retaining promising performance. Besides, BSPM [5] studies using a diffusion process to gradually reconstruct the interaction matrix and achieves promising performance with fast processing. In parallel with these existing efforts, we propose to enhance any existing matrix factorization method through test-time augmentation that harnesses graph-based heuristics.
[1] Shen, et al., How Powerful is Graph Convolution for Recommendation? CIKM21
[2] Peng, et al., SVD-GCN: A Simplified Graph Convolution Paradigm for Recommendation. CIKM22
[3] Hong, et al. SVD-AE: Simple Autoencoders for Collaborative Filtering. IJCAI24
[4] Park, et al. Turbo-CF: Matrix Decomposition-Free Graph Filtering for Fast Recommendation. SIGIR24
[5] Choi, et al. Blurring-sharpening process models for collaborative filtering. SIGIR23
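To make the graph-filtering idea in the paragraph above concrete, here is a heavily simplified sketch of our own (not the GFCF/Turbo-CF algorithms): score items by linearly filtering the interaction matrix with a normalized item-item co-occurrence filter, with no gradient training involved.

```python
import numpy as np

def linear_filter_scores(R):
    """Training-free recommendation scores via one linear graph filter.

    R: (num_users, num_items) binary interaction matrix.
    The symmetric degree normalization and one-step filter are our
    illustrative choices, not a specific published method.
    """
    du = np.clip(R.sum(axis=1, keepdims=True), 1, None) ** -0.5
    di = np.clip(R.sum(axis=0, keepdims=True), 1, None) ** -0.5
    R_norm = du * R * di               # symmetric degree normalization
    item_filter = R_norm.T @ R_norm    # normalized item-item co-occurrence
    return R @ item_filter             # recommendation scores, no training

R = np.array([[1., 1., 0.], [0., 1., 1.]])
scores = linear_filter_scores(R)       # shape (2, 3)
```

The absence of any learned parameters is what makes this family of methods fast enough to "train" in seconds.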
**[G2: Additional hyper-parameter experiments on m and n]** Furthermore, multiple reviewers also asked about TAG-CF's sensitivity to hyper-parameters (i.e., m and n in the test-time message passing). To mitigate this concern, we plotted heat maps of the performance improvement brought by TAG-CF to MF with different m and n configurations and show the results in the one-page pdf. From Figure 4, we can observe that m and n are important for the success of TAG-CF. Fortunately, across datasets, the optimal selection of m and n is quite similar (e.g., m=n=-0.5 or m=n=0). Another solution to automatically tune m and n could be initializing them to -0.5 (i.e., the value that generally works well across datasets) and conducting gradient descent on them using the training loss. But in this work we observe that manually tuning them on a small set of candidates can already deliver promising results.
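A hypothetical sketch of the manual tuning just described: scan a small candidate grid for (m, n) and keep the best-scoring pair. The `evaluate` function here is a dummy placeholder standing in for measuring validation NDCG of TAG-CF under those exponents.

```python
import itertools

def evaluate(m, n):
    # placeholder: in practice, run test-time aggregation with (m, n)
    # and return validation NDCG@20; this dummy score peaks at (-0.5, -0.5)
    return -(m + 0.5) ** 2 - (n + 0.5) ** 2

candidates = [-1.0, -0.5, 0.0, 0.5]
best_m, best_n = max(itertools.product(candidates, repeat=2),
                     key=lambda mn: evaluate(*mn))
# with this dummy score the grid search recovers m = n = -0.5
```

Since the aggregation itself is training-free, each grid point costs only one forward pass over the interaction matrix, which keeps this search cheap.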
**Besides this general response, we have also addressed individual feedback in the designated sections. We hope we have satisfactorily answered your questions. If so, could you please consider increasing your rating? If you have remaining doubts or concerns, please let us know, and we will happily respond. Thank you!**
Best regards, \
TAG-CF authors
Pdf: /pdf/0e60987037f7c32a37a5ea7c3e4914a8e87feedf.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Efficient $\Phi$-Regret Minimization with Low-Degree Swap Deviations in Extensive-Form Games | Accept (poster) | Summary: This paper explores and advances the research direction of linear-swap regret in extensive-form games, extending it to low-degree swaps and bridging the gap to results on general swap regret. The technical contributions include the concept of k-mediator deviations, which relate to low-degree polynomials and depth-k decision trees on n>>k variables, and a crucial observation that if a player is allowed to output a probability distribution over strategies, computing it is sufficient to compute an approximate fixed point of the deviations in expectation.
The concept of depth-k deviations allows a deviator to select an action after observing a strategy on a single terminal for k adaptive rounds. Choosing $k=1$ or $k=N$ leads to linear-swap deviations or all swap deviations, respectively. The main results are as follows:
1. For depth-k deviations and the strategy space of a hypercube, there is an online learning algorithm that achieves at most $\epsilon$ average $\Phi$-regret in $N^{O(k)} / \epsilon^2$ rounds with $N^{O(k)} / \epsilon$ running time per round.
2. For deviations that are polynomials of degree k and the strategy space of a hypercube, there is an online learning algorithm that achieves at most $\epsilon$ average $\Phi$-regret in $N^{O(k)} / \epsilon^2$ rounds with $N^{O(k^3)} / \epsilon$ running time per round.
3. It is PPAD-hard to guarantee a regret of at most $\epsilon / \sqrt{N}$ if a regret minimizer outputs a single strategy in $[0,1]^N$.
4. For depth-k deviations and extensive-form games, there is an online learning algorithm that achieves at most $\epsilon$ average $\Phi$-regret in $N^{O(k)} / \epsilon^2$ rounds with $N^{O(k)} / \epsilon$ running time per round.
5. For deviations that are polynomials of degree k and extensive-form games of depth $d$, there is an online learning algorithm that achieves at most $\epsilon$ average $\Phi$-regret in $N^{O(kd)^3} / \epsilon^2$ rounds with $N^{O(kd)^3} / \epsilon$ running time per round.
Strengths: - The results bridge the gap between linear swap and arbitrary swap regret.
- The technical contributions show the advantage of probabilistic strategies with respect to hardness, i.e., that a fixed point in expectation is enough for optimization.
Weaknesses: - It seems the gap is bridged "from below", i.e., for large $k$ the results may become worse than known bounds.
- ~~Allowing the learner to output probability distributions of strategies seems to weaken the model and/or the guarantees of results in this model, e.g., more than allowing mixed strategies over pure strategies does.~~ (edit: see rebuttal)
Technical Quality: 3
Clarity: 2
Questions for Authors: Could you elaborate on the difference between mixed strategies and probability distributions over strategies? Is it true that the worst-case regret for the former is bounded, but (at least in your results) the worst-case regret is unbounded?
Note: As a reader who is completely unfamiliar with the topic, I think the preliminaries could have covered *deviations*.
Confidence: 1
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: /
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: *It seems the gap is bridged "from below", i.e., for large $k$ the results may become worse than known bounds.*
This is correct, in particular, for $k > \tilde O(1/\epsilon)$ our bounds become worse than those of Peng and Rubinstein [2024] and Dagan et al [2024].
*Allowing the learner to output probability distributions of strategies seems to weaken the model and/or the guarantees of results in this model, e.g., more than allowing mixed strategies over pure strategies does.*
We do not believe that working with mixed rather than behavioral strategies (see also next response regarding the difference between mixed and behavioral) is "weakening" in any meaningful sense. Indeed:
* In Theorem 3.3, we show that if a learner is restricted to playing only behavioral strategies, it is PPAD-hard to achieve $\Phi$-regret $\epsilon/\sqrt{N}$ even when $\Phi$ is the set of degree-$k$ polynomials and $k$ and $\epsilon$ are absolute constants. Thus, in some sense, going beyond behavioral strategies to use mixed strategies is *necessary* to achieve efficient learning algorithms.
* Peng and Rubinstein [2024] and Dagan et al [2024] also use mixed rather than behavioral strategies to achieve their result (this should be unsurprising, given the previous bullet!)
*Could you elaborate on the difference between mixed strategies and probability distributions over strategies?*
A mixed strategy *is* a probability distribution over pure strategies. The reviewer might be confusing a mixed strategy with a *behavioral* strategy. A behavioral strategy is a mixed strategy that selects actions independently at every decision point. Each point $\boldsymbol x \in \text{conv}(\mathcal X)$ essentially uniquely\* specifies a behavioral strategy, but does not uniquely specify a mixed strategy.
\*except at decision points reached with probability zero
*Is it true that the worst-case regret for the former is bounded, but (at least in your results) the worst-case regret is unbounded?*
As stated above, Theorem 3.3 shows that no efficient regret minimizer can exist that only uses behavioral strategies, unless P = PPAD. Theorems 3.1, 3.2, and 3.4 show that, when mixed strategies are permitted, efficient regret minimizers *do* exist.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response and the clarifications! Based on your explanations and pointing out that I was at least partially confused about different types of strategies, I withdraw the second weakness. Given that my rating is an educated guess because of my lack of expertise, it affects my rating slightly positively, but in a way that's not mirrored by the rating scale. | Summary: This work aims to bridge the gap between the $N^{O(1/\varepsilon)}$ result for attaining the $\varepsilon$ swap regret and the $\operatorname{poly}(N)/\varepsilon^2$ result for attaining the $\varepsilon$ linear-swap regret for extensive-form games. To this end, the authors generalize the untimed communication deviations proposed by [1] to k-mediator deviations. The authors prove that obtaining the $\varepsilon$ regret w.r.t. the set of k-mediator deviations in $N^{O(k)}/\varepsilon^2$ rounds is achievable. As several byproducts, the authors show a sample complexity of $N^{O(kd)^3}/\varepsilon^2$ for minimizing the regret against degree-k polynomial deviations, and an algorithm with the lowest computation complexity to compute the $\varepsilon$-correlated equilibria in normal-form games in the medium-precision regime.
[1] Zhang et al. Mediator Interpretation and Faster Learning Algorithms for Linear Correlated Equilibria in General Extensive-Form Games. ICLR, 24.
Strengths: 1. The motivation to bridge the gap between the $N^{O(1/\varepsilon)}$ result for attaining the $\varepsilon$ swap regret and the $\operatorname{poly}(N)/\varepsilon^2$ result for attaining the $\varepsilon$ linear-swap regret is of interest.
Weaknesses: My main concern regarding this work is the presentation. Several key parts of the current writing are not very clear and somewhat hard to follow, detailed as follows.
1. The main motivation of this work is to bridge the gap between the $N^{O(1/\varepsilon)}$ result for attaining the $\varepsilon$ swap regret and the $\operatorname{poly}(N)/\varepsilon^2$ result for attaining $\varepsilon$ linear-swap regret in previous works. Does the current result of the $N^{O(k)}/\varepsilon^2$ sample complexity for obtaining the $\varepsilon$ regret w.r.t. the set of k-mediator deviations well interpolate between the above two results? It is a bit strange that there seem to be no discussions about the relationship between the results in this work and the two results in previous works. I think the result in this work replicates the $\operatorname{poly}(N)/\varepsilon^2$ result when $k=1$, but it is unclear to me in which sense the result in this work will replicate the previous $N^{O(1/\varepsilon)}$ result.
2. Why do we need to relate the k-mediator deviations to low-degree polynomials? After relating the k-mediator deviations to low-degree polynomials, the convergence rate is worse by a $N^{O(k^2d^3)}$ factor. There might be some benefits in doing so, but I did not seem to find the relevant descriptions in the paper.
3. There is a rich line of closely related works in the literature, and it is currently somewhat difficult to fully understand the advantages of the results in this work over those in previous works merely from the statements in Section 5. It would be better if a table reflecting the convergence rate, the computation complexity, and the required regime of $\varepsilon$ for the results in this work and in previous works were included in the paper.
4. There is an expectation taken over the randomness of the player in the regret defined in Eq. (1), while the previous works study high probability regret (say [2,3]). As such, it seems that the algorithm in this work can only obtain approximate correlated equilibria in expectation, while previous works can obtain the approximate correlated equilibria with high probability. I would suggest the authors incorporate necessary discussions about this point.
5. For Section 4.1, it seems that the proposed operation of approximating fixed points in expectation can only work with finding approximate correlated equilibria in expectation. Can this trick be extended to the case of finding approximate correlated equilibria with high probability? This is also somewhat related to my question above.
6. In Line 135, it is not clear to me what is the purpose of constructing the depth-k decision tree deviations. It would be better to first introduce the motivation for doing so before diving into the details of its construction.
7. At the end of the statement of Theorem 3.4, it seems that it should be something like $N^{O(k)}/\varepsilon$ (or $N^{O(k)}/\varepsilon^2$) instead of $N^{O(k)}$?
8. In Line 228, what does it mean by “succinct representation”?
9. For Section 4.2, my understanding is that the authors consider a more convenient and suitable formulation than the tree-form formulation to better leverage the connection between low-depth decision trees and low-degree polynomials. Are there any technical difficulties in achieving this?
10. In this work, most of the time $\Phi$ is termed as the set of the deviations while sometimes it is termed as the set of the transformation functions. I would suggest the authors unify the descriptions for clarity.
Overall, the current writing prevents me from well evaluating the significance of the results and the technical novelty of this work and hence I am not able to recommend an acceptance of this work.
[2] Bai et al. Efficient Phi-Regret Minimization in Extensive-Form Games via Online Mirror Descent. NeurIPS, 22.
[3] Farina et al. Polynomial-Time Linear-Swap Regret Minimization in Imperfect-Information Sequential Games. NeurIPS, 23.
Technical Quality: 3
Clarity: 2
Questions for Authors: Please see the weakness part above.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Not applicable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1. *On "Bridging the gap"*: You are correct: while we match the $\text{poly}(N)/\epsilon^2$ result for linear-swap regret when $k=1$, we do not match the $N^{\tilde O(1/\epsilon)}$ bound for swap regret shown by Dagan et al. and Peng and Rubinstein.
2. *$k$-mediator deviations and low-degree polynomials*: Note that the $k$ in the $N^{O(k)}/\epsilon^2$ bound refers to something different than the $k$ in the $N^{O(k^3)}/\epsilon^2$ bound: the former is the number of mediators; the latter is the degree of a polynomial. As all $k$-mediator deviations are degree-$k$ polynomials (but not vice-versa), it should not be surprising that our bound in the latter case is worse.
3. Thanks for the suggestion. We have created a table comparing the results in that section to past results, and attached it in our message to all reviewers. We will include this table in the next version.
4. A clarification here: Our algorithms are *deterministic*--they produce as their output on each round $t$ a *distribution* $\pi^t$. The fact that Eq. (1) contains an expectation is simply how regret is defined, and does not mean that the algorithm ever actually samples from $\pi^t$. We will make this clearer in the revised version, as we see that it can cause confusion.
5. Like above, a "fixed point in expectation" is actually a *distribution* $\pi^t$, which is computed by a deterministic algorithm.
6. There are (at least) two reasons to consider the set of depth-$k$ decision trees: first, they are by themselves a fairly natural class of functions, and second, our result about low-degree deviations fundamentally uses a connection between degree-$k$ polynomials and depth-$O(k^3)$ decision trees (Theorem 4.2). We will add a note about this to the next version.
7. That sentence should read "For $\Phi^k_\text{med}$, the same bounds hold except that both instances of $N^{O(kd)^3}$ should be replaced by $N^{O(k)}$." We will fix this in the next version.
8. "Succinct" here simply means the game is given by an efficient oracle for computing the utility vectors (Eq. (2)), rather than as an explicit payoff tensor. This is fairly standard language, also used by (among many other papers) the cited paper [Papadimitriou and Roughgarden 2008]
9. In the first part of Section 4.2, to provide better intuition about the general case, we indeed focus on a very specific type of tree-form decision problems (corresponding to the hypercube). As the reviewer points out, this is convenient because of the connection between low-depth decision trees and low-degree polynomials (Theorem 4.2). The main technical challenge is to extend this to general tree-form decision problems, and that is the subject of the rest of Section 4.2.
10. Thank you. As you correctly point out, "deviation" and "[strategy] transformation function" are the same thing. We'll unify the terminology in the next version.
---
Rebuttal Comment 1.1:
Comment: Thanks for your responses. However, I am still not fully convinced by your responses to Q4 and Q5. Why is your algorithm deterministic? Since in each round, your algorithm will actually sample an action $x^t\sim \pi^t$, it can hardly be said that your algorithm is deterministic unless all the $\pi^t$'s are Dirac measures over the action space. Further, even if your algorithm is deterministic, why could you guarantee a sublinear regret result against the adversary? It is known that a deterministic algorithm can have linear regret against adversarial multi-armed bandits [4].
[4] Lattimore et al. Bandit Algorithms. 2020.
---
Reply to Comment 1.1.1:
Comment: Thank you for the response. Our algorithm is indeed deterministic. We never sample an action from $\pi^t$. The output of the learner is that distribution instead of a sample from it. This is similar to Hedge where the output is a distribution over actions, as opposed to EXP3 where the output is a sample from the distribution. Using deterministic algorithms is in line with much of the prior work on the subject; see, for example, [1,2,3] and references therein. Also, note that we operate in the full feedback model, and not under bandit feedback (see Lines 105-107 in our paper). In that setting, there are many well-known deterministic algorithms with sublinear regret, such as online gradient descent and Hedge, so the lower bound cited by the reviewer is not applicable.
We hope this clarifies the reviewer’s question, and we will highlight further in the revision that no randomization is used in our algorithm.
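To illustrate the comparison with Hedge mentioned above, here is a minimal textbook sketch (not the paper's algorithm): each round the learner deterministically outputs a distribution and then observes the full utility vector, with no sampling step anywhere.

```python
import math

def hedge(utilities, eta=0.1):
    """Hedge (multiplicative weights) under full feedback.

    Each round the learner deterministically outputs a distribution
    pi_t over actions; no action is ever sampled.  After the round,
    the entire utility vector is revealed and the weights are updated
    multiplicatively.  Returns the list of distributions produced.
    """
    n = len(utilities[0])
    weights = [1.0] * n
    pis = []
    for u in utilities:                 # u: utility of every action this round
        total = sum(weights)
        pis.append([w / total for w in weights])   # deterministic output
        weights = [w * math.exp(eta * ui) for w, ui in zip(weights, u)]
    return pis
```

Contrast this with EXP3, where the learner would additionally sample an action from $\pi^t$ and observe only that action's utility.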
[1] Piliouras et al. Beyond Time-Average Convergence: Near-Optimal Uncoupled Online Learning via Clairvoyant Multiplicative Weights Update, NeurIPS 2022
[2] Daskalakis et al. Near-Optimal No-Regret Learning in General Games, NeurIPS 2021
[3] Syrgkanis et al. Faster Convergence of Regularized Learning in Games, NIPS 2015 | Summary: This work seems to design some fast regret minimization algorithms. However, to be honest, I could not find the learning protocol, nor could I understand the learning problem set-up. It is hard for me to tell what information is revealed in each round after taking an action.
So, I suggest adding some simple applications to help readers understand the learning protocol; for example, tree-form decision problems.
Major comments:
Is there any lower bound? How can I measure the tightness of the presented results?
Is it related to P vs NP? Can it be reduced to an NP-hard problem?
For theoretical analysis, is it finite-time analysis or asymptotic analysis?
I know it is a theory work, but are there any experiments?
Minor comments:
In Line 110, a comma is missing in $\{ \ \}$.
In Line 143, it is better to say $x[j]'s$.
In Line 271, it should be \citep{} for the citation.
Strengths: see the first box
Weaknesses: see the first box
Technical Quality: 3
Clarity: 2
Questions for Authors: see the first box
Confidence: 1
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: I hope to see some experiments, though.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Q: *However, to be honest, I could not find the learning protocols nor understand the learning problem set-up.*
A: We will clarify further how the learning protocol proceeds in the revised version. Note that we operate in the standard model when it comes to learning in extensive-form games.
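For concreteness, a minimal sketch of a full-feedback online learning round (the toy follow-the-leader learner is purely illustrative and is not the paper's algorithm): the learner outputs a distribution, then the *entire* utility vector is revealed, not just the utility of a sampled action.

```python
def run_full_feedback(T, utility_rounds):
    """Sketch of the standard full-feedback protocol.

    Each round the learner outputs a distribution pi_t over n actions,
    then observes every coordinate of the utility vector u_t.  The
    learner here is a toy follow-the-leader, purely for illustration.
    """
    n = len(utility_rounds[0])
    cum = [0.0] * n                      # cumulative utility per action
    outputs = []
    for t in range(T):
        # Learner's move: a full distribution over actions
        # (here, all mass on the action with highest cumulative utility).
        best = max(range(n), key=lambda i: cum[i])
        pi_t = [1.0 if i == best else 0.0 for i in range(n)]
        outputs.append(pi_t)
        # Full feedback: every coordinate of u_t is observed.
        u_t = utility_rounds[t]
        cum = [c + ui for c, ui in zip(cum, u_t)]
    return outputs
```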
Q: *Is there any lower bound? How can I measure the tightness of the presented results? Is it related to P vs NP? Can it be reduced to an NP-hard problem?*
A: The paper [Anonymous, 2024] included in the supplemental materials shows an exponential lower bound in the case of *swap regret*, that is, when $k=N$. One can interpret our results as "interpolating" between the known extreme cases of $k=1$ (where efficient algorithms exist due to Farina & Pipis [2023] and Zhang et al. [2024]) and $k=N$, where the above lower bound holds. See also our response to aMpb's Q1 below.
Q: *For theoretical analysis, is it finite-time analysis or asymptotic analysis?*
A: The theoretical analysis is all finite-time. Each theorem specifies an upper bound on the number of rounds required to achieve a certain equilibrium gap $\epsilon$. The big-O notation hides only absolute constants.
Q: *I know it is a theory work, but are there any experiments?*
A: We agree that the main merit of the paper is theoretical. Our notion of rationality is the strongest that can be efficiently (i.e., at a polynomially sublinear regret rate) guaranteed in imperfect-information sequential games (i.e., extensive-form games), a fundamental class of strategic interactions. We have not run experiments using our algorithm, but we agree that this may be something interesting to do in the future.
Thank you for pointing out the minor errors. We will fix them in the next version. | null | null | Rebuttal 1:
Rebuttal: Thanks for the reviews! We have attached to this message a pdf containing two tables which compare the results of this paper to results of past papers. We will include both of these tables in the next version.
Pdf: /pdf/be852b0e12ad6f0fe4dad928196167a0184f04e1.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |